Strategies for Memory Management
All of this means that managing memory in the .NET Framework requires a deep understanding not only of your application, but also of how the .NET Framework itself behaves. And even then there is no single best decision for all circumstances. Instead, application development becomes a matter of continuously weighing strategies for implementing features, balancing factors such as efficiency, ease of implementation and maintainability.
What kinds of strategies are available to help developers manage memory for more efficient applications? At the simplest level, you can force a garbage collection in your application by calling the static Collect method of the System.GC class. This gives you direct control over when your application takes this particular performance hit. It's a simple strategy, but far from the best you can employ. According to MSDN, "you should avoid calling any of the collect methods" because doing so might produce unexpected side effects. Your GC.Collect call, for example, might actually execute during a critical time in the application, making already slow code even slower. Without a significant amount of experimentation and load testing, it’s difficult to tell the appropriate time to invoke a garbage collection.
That doesn’t mean you should never invoke the garbage collector. But use care: ensure that only a single thread is executing when you invoke garbage collection, and that the code isn’t actively processing managed instructions while the collection occurs. Unless you understand what all of your application threads are doing when you call GC.Collect, it is usually best to let the system perform this task. The cost of getting it wrong is more significant than the benefit of getting it right.
The best strategy is twofold: understand how the .NET Framework manages memory, and obtain a precise picture of how your application uses it. You can then apply both types of information to design, implement and modify your application to optimize memory use.
Applying Memory Analysis
The problem is that you need information on how memory is being used, and on how memory usage changes as you change your code. The .NET Framework provides some information that can help you write more efficient code. Much of it is contained within Perfmon counters, which you can examine while your application is running to get an overall picture of how memory is managed and what effect it has. The Perfmon counters useful for evaluating memory management include the percent of time spent in the garbage collector, the generational heap sizes and bytes promoted between memory generations, and the large object heap size. These can help you spot trends in your code, such as too many large objects or too many bytes promoted to higher generations.
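Perfmon and the .NET generation counters are Windows/.NET-specific, but the shape of the data they report (per-generation activity in a generational collector) can be sketched with Python's own generational statistics. An analogy under that assumption, not the Perfmon API:

```python
import gc

# Python, like the .NET Framework, uses a generational collector:
# new objects start in generation 0, and survivors are promoted upward.
# These statistics are the rough analog of the generational Perfmon counters.

gen_counts = gc.get_count()  # objects tracked per generation since the last collection
stats = gc.get_stats()       # cumulative per-generation collection statistics

for gen, s in enumerate(stats):
    print(f"gen {gen}: {s['collections']} collections, "
          f"{s['collected']} objects collected, "
          f"{s['uncollectable']} uncollectable")
```

As with Perfmon, these numbers are summary data: they show trends (e.g., heavy promotion into older generations) but not which objects are responsible.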
But the Perfmon counters by themselves are inadequate. One problem is that they aren’t necessarily application-specific: while you can select the instances on which to view counters, those instances might not correspond clearly to the applications, processes or threads you want to see. Nor can you start and stop Perfmon memory counters to isolate specific activities or to compare between them.
Perfmon counters also lack the level of detail you need to analyze memory use and make decisions. They don’t give you any indication as to why the application is spending so much time using the garbage collector, for example, and they don’t tell you which objects are temporary and which are long-lived. The information you get is primarily summary data, and that doesn’t enable you to identify individual objects and the memory associated with them.
Instead, what is needed to examine .NET Framework memory accurately is an interactive, real-time memory analysis capability that can track individual objects in memory over time. One such product, Compuware DevPartner Studio, incorporates memory analysis on the .NET Framework that enables developers to investigate potential and actual memory problems, obtain detailed information on object behaviors and their effects on memory, and determine strategies for using memory efficiently in managed applications.
DevPartner Studio provides three fundamental views of .NET memory: RAM (memory) footprint, temporary objects and memory leaks. You can take snapshots of all these views to examine the state of memory at an instant of your choosing. It also lets you force a garbage collection, so you can observe the effects of memory reclamation as well as determine if an application has an object leak.
Taking a RAM footprint snapshot shows you who allocated the memory, what objects it comprises and which components are holding references to it, thus preventing it from being freed. In the case of the sample application shown in Figure 1, the snapshot shows String objects are using by far the most memory. This information might prompt you to revisit your design and implementation decisions to reduce the use of String objects.
The RAM footprint can provide still more information. You might, for example, observe your application uses more and more memory over time while running. But from simply watching the amount of memory in use by the system, you can’t tell what objects are being leaked. To find this out, you must be able to start and stop memory analysis to take snapshots of the memory state at any particular time.
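DevPartner Studio's snapshot workflow is proprietary, but the underlying technique (take a snapshot, exercise the suspect code, take another, and diff the two) can be sketched with Python's tracemalloc module. This is a stand-in illustration of the technique, not the product's API:

```python
import tracemalloc

leaked = []  # a reference that is never released -- our deliberate "leak"

def do_work():
    # The allocation below is retained after the work completes.
    leaked.append(bytearray(100_000))

tracemalloc.start()                   # the "start memory analysis" step
before = tracemalloc.take_snapshot()  # snapshot of the memory state

do_work()                             # the activity under suspicion

after = tracemalloc.take_snapshot()   # second snapshot
growth = after.compare_to(before, "lineno")  # diff: which lines grew?
print(growth[0])  # the top entry points at the leaking allocation site
tracemalloc.stop()
```

The diff between snapshots is what turns "memory keeps growing" into "this line allocated the objects that are still alive," which is exactly the question the system-wide memory figure can't answer.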
You can use analysis of temporary objects—the second fundamental view of .NET memory—to look for unusual or inefficient behavior that creates large numbers of temporary objects or large-sized temporary objects. These problems tend to be easy to fix. They typically require changing the construct that is creating the temporary objects, or changing the times those objects are being created.
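A classic construct of this kind in .NET is building a string through repeated concatenation, which produces a fresh temporary String on every pass (the usual fix being StringBuilder). The same pattern, and the same style of fix, sketched in Python terms:

```python
# Strings are immutable, so each += can allocate a new intermediate
# object -- a loop of concatenations churns through many temporaries.
parts = [str(i) for i in range(1000)]

result_slow = ""
for p in parts:
    result_slow += p  # may create a temporary string each iteration

# The fix: change the construct so the result is built once from the
# collected pieces (akin to using StringBuilder in .NET).
result_fast = "".join(parts)
```

Both produce the same result; the difference is in how many short-lived objects the collector must sweep up along the way.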
DevPartner Studio lets you see the objects that allocate the most memory, along with the methods that use the most memory (see Figure 2). Further, you can drill down to examine the methods, how many times they are called, whom they are calling and who is calling them; see Figure 3(a). DevPartner Studio’s memory analysis also provides memory allocation/consumption details down to the line of source code; see Figure 3(b). The call graph provides a visual display of this information, letting you see at a glance how these calls occur and giving you a precise picture of when and why these methods are called.
The last important view of .NET memory is that of memory leaks. Objects can leak memory because references aren’t released promptly—or aren’t released at all—and their effects can lead to poor performance and even application failures.
When the memory leaks snapshot loads, you can examine where the leaking objects were allocated and find out which objects still hold references to them, thus preventing them from being garbage collected. DevPartner Studio tracks memory allocations among objects to determine which are not releasing their instances over time (see Figure 4). You can use this information to determine which objects are leaking memory from your application and when.
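The pattern the leaks view exposes can be shown in miniature: an object that stays alive solely because a forgotten collection (for example, an event-subscriber list) still references it. A Python sketch of that pattern, using a weak reference as the observer; the names here are illustrative, not from the product:

```python
import gc
import weakref

subscribers = []  # e.g. an event-handler list that is never cleaned up

class Handler:
    def handle(self, event):
        pass

h = Handler()
subscribers.append(h.handle)  # the bound method keeps the Handler alive
probe = weakref.ref(h)        # observe the object without keeping it alive
del h

gc.collect()
still_alive = probe() is not None  # True: the subscriber list still holds it

subscribers.clear()  # release the lingering reference...
gc.collect()
reclaimed = probe() is None        # ...and now the object can be collected
print(still_alive, reclaimed)
```

A leaks view answers the two questions this toy example hard-codes: which objects are not being reclaimed, and which references are keeping them alive.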