Thanks for the great post once again. I was looking forward to your debugging virtual training, but unfortunately it was cancelled.
The company I work for is pushing back against building release mode binaries with debug information generated, which is one of the reasons I signed up for the class :). They are afraid performance will be affected.
My question is: what are the best command line arguments for generating symbols in release mode? Also, is there somewhere I can reference to show that there should be no performance hit?
I’m sorry about the canceled class, but the good news is that Mastering .NET Debugging was rescheduled to July 14-15.
The executive summary answer: no, generating PDB files will have no impact on performance whatsoever. As for references that I can point you to, I haven’t found any on the web that answer the exact question, so let me take both .NET and native development in turn.
Recently, the always-readable Eric Lippert wrote a great post, What Does the Optimize Switch Do?, where he discusses the optimizations done by the compiler and Just in Time (JIT) compiler. (Basically, you can sum it up as the JITter does all the real optimization work.) There’s a bit of confusion around the C# and VB.NET compiler switches, as there are four different /debug switches: /debug, /debug+, /debug:full, and /debug:pdb-only. I contributed to that confusion because I thought /debug:pdb-only did something different that was better for release builds than the other three /debug switches.
All four switches do the same thing in that they cause a PDB file to be generated, so why are there four switches for one job? Do Microsoft developers really love parsing slightly different command line options? The real reason: history. Back in .NET 1.0 there were differences, but in .NET 2.0 there aren’t, and it looks like .NET 4.0 will follow the same pattern. After double-checking with the CLR Debugging Team, I can confirm there is no difference at all.
What controls whether the JITter does a debug build is the /optimize switch. Building with /optimize- adds an attribute, DebuggableAttribute, to the assembly and sets its DebuggingMode parameter to DisableOptimizations. It doesn’t take a Rhodes Scholar to figure out that DisableOptimizations does exactly what it says.
The bottom line is that you want to build your release builds with /optimize+ and any one of the /debug switches so you can debug with source code. Read the Visual Studio documentation to see where to set those switches in the different types of projects.
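As a concrete sketch (the file and assembly names here are made up for illustration), an optimized release build that still produces a PDB file looks like this from the command line:

```shell
:: Hypothetical example: an optimized release build of MyApp.exe.
:: /optimize+ turns on the optimizations; /debug:pdbonly (or any of the
:: other /debug switches) makes the compiler write MyApp.pdb next to the
:: binary without disabling those optimizations.
csc /optimize+ /debug:pdbonly /out:MyApp.exe Program.cs
```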
It’s easy to prove these are the optimal switches. Taking my Paraffin program, I compiled one build with /optimize+ /debug+ (which is the same as /debug and /debug:full), and the other with /optimize+ /debug:pdbonly, the switch we mistakenly thought was special for release builds. After compiling, I used ILDASM with the following command line to get the raw information from the binaries:
ILDASM /out=Paraffin.IL Paraffin.exe
Using a diff tool, you’ll see that the IL itself is identical between the two builds. The main difference is in the DebuggableAttribute declaration for the assembly. When built with /optimize+ and any of the /debug switches, DebuggingMode.IgnoreSequencePoints is passed to the DebuggableAttribute to tell the JIT compiler that it doesn’t need to load the PDB file in order to correctly JIT the IL. A value of DebuggingMode.Default is also OR’d in, but that value is ignored.
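The comparison above can be sketched as a couple of commands from a Visual Studio command prompt (the binary names are hypothetical copies of the two builds just described; any diff tool works, and FC.EXE ships with Windows):

```shell
:: Dump the raw IL from each build.
ildasm /out=Paraffin-DebugFull.IL Paraffin-DebugFull.exe
ildasm /out=Paraffin-PdbOnly.IL Paraffin-PdbOnly.exe

:: Compare the two dumps; the only difference you should see is in
:: the assembly's DebuggableAttribute declaration.
fc Paraffin-DebugFull.IL Paraffin-PdbOnly.IL
```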
Like .NET, building PDB files for native C++ has nothing to do with optimizations, so it has zero impact on the performance of an application. If you have a manager who, in Justin’s words, is “afraid performance will be affected,” here’s what I tell them. (Sadly, I’ve run into more managers who say that than I care to count.)
That might be true on other operating systems, but not on Windows. If you think PDB files hurt performance, then why does Microsoft build every single product they ship with PDB files turned on for both debug and release builds? They wrote the compiler, they wrote the linker, and they wrote the operating system, so they know exactly what the effects are. Microsoft has more people focused on performance than any other software company in the world. If there were any performance impact at all, they wouldn’t do it. Period. Performance isn’t just one thing at Microsoft; it’s everything.
Whereas .NET is pretty simple, as there are really only two switches, the appropriate native optimization switches depend on many individual application factors. What I can tell you is which switches you need to set to generate PDB files correctly in release builds.
For CL.EXE, the compiler, you need to add /Zi to have it put debugging symbols into the .OBJ file. For LINK.EXE, the linker, you need to specify three options. The first is /DEBUG, which tells the linker to generate a PDB file. However, that switch also tells the linker that this is a debug build, and that’s not so good because it will affect your binary. Basically, when you use /DEBUG the linker links faster because it no longer looks for individual references: if you use one function from an OBJ, the linker throws the whole OBJ into the output binary, so you end up with a bunch of dead functions.
To tell the linker you want only the referenced functions, you need to add /OPT:REF as the second switch. The third switch is /OPT:ICF, which enables COMDAT folding. There’s a term you don’t hear every day. Basically, what this means is that when generating the binary, the linker will look for functions that have identical code, generate only one copy of that function, and make the multiple symbols point to it.
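Putting those switches together, a release build with symbols might look like the following sketch (file names and the /O2 optimization level are illustrative assumptions, run from a Visual Studio command prompt):

```shell
:: /O2 optimizes for speed; /Zi puts symbol information into main.obj;
:: /c compiles without linking.
cl /O2 /Zi /c main.cpp

:: /DEBUG makes the linker emit MyApp.pdb; /OPT:REF and /OPT:ICF restore
:: the release-mode unreferenced-function elimination and COMDAT folding
:: that /DEBUG would otherwise turn off.
link /DEBUG /OPT:REF /OPT:ICF /OUT:MyApp.exe main.obj
```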
If you want to test the difference yourself on a native binary to see what effect generating PDB files has, it’s nearly as easy as with a .NET binary. Visual Studio comes with a nice little program, DUMPBIN, which can tell you more than you ever wanted to know about a Portable Executable file. Run it with the /DISASM switch to get the disassembly of a binary.
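For example, you might disassemble two builds of the same program — one linked with /DEBUG /OPT:REF /OPT:ICF, one without /DEBUG — and diff the output (the binary names here are hypothetical; expect only incidental differences such as embedded timestamps, not differences in the code itself):

```shell
:: Dump the disassembly of each build to a text file.
dumpbin /DISASM MyApp-WithPdb.exe > WithPdb.txt
dumpbin /DISASM MyApp-NoPdb.exe > NoPdb.txt

:: Compare the two disassemblies.
fc WithPdb.txt NoPdb.txt
```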
Please keep those PDB related questions coming. Of course, if you have any other questions, I’ll be happy to take a crack at those also. Gee, I better draw the line: no investment or relationship questions. <grin>