Memory Management and Garbage Collection

As your code runs through its normal execution paths, it causes type definitions to be loaded into memory, code to be compiled and executed, and objects to be allocated and discarded. The .NET Compact Framework was designed to provide the best balance between startup performance, steady-state performance, and the need to continue executing code even in highly memory-constrained environments. To do this, the class loader, JIT engine, memory allocator, and garbage collector work together.

As your code runs, it will likely be creating new objects and discarding them periodically. To handle object allocation and cleanup efficiently, the .NET Compact Framework uses garbage collection, as do most modern managed-code environments.

A garbage collector is fundamentally designed to do two things: (1) reclaim memory that is no longer being used; and (2) compact the pieces of memory still in use so that the largest possible blocks of contiguous memory are available for new allocations. The second job solves a common problem called memory fragmentation. Memory fragmentation is similar in concept to disk fragmentation. If storage is not reorganized periodically, it becomes disorganized over time, with small free and occupied spaces scattered throughout; these are fragments. Think of this as entropy applied to disk and memory storage; active work is required to reverse these entropic effects. The result of disk fragmentation is slow access times, because reading a file requires lots of hopping around the disk. Fragmentation in memory is even worse, because allocating an object requires a contiguous block of memory large enough for the object's data. After a large number of allocations and releases of different-sized chunks of memory, the free pieces become too small and too scattered to satisfy the larger allocations that objects require. Compacting the memory that is in use but scattered creates large blocks of free memory that can be used efficiently for new object allocations.

Reclaiming memory is a relatively straightforward process. Execution is temporarily suspended, and the set of live objects (objects that can be reached directly or indirectly by your code) is traced and recursively marked. The remaining objects in memory, because they can no longer be reached by live code, stay unmarked; they are identified as garbage and reclaimed. This kind of garbage collection is known as "mark and sweep." Typically this operation is fairly quick.

The ability to compact the objects in memory is an advanced benefit of managed-code execution. Unlike with native code, every reference to your objects is known to the execution engine. This allows objects to be moved around in memory when it becomes useful to do so. The manager of the memory (the execution engine) keeps track of which objects are where and can relocate them as needed. In this way, a set of managed objects scattered around in memory can be compacted.
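To make the collector's two jobs concrete, the following minimal C# sketch simulates mark and sweep with compaction on a toy heap. It is purely illustrative: the ToyObject and ToyHeap types are invented for this example, and the real collector works on raw memory blocks rather than managed lists.

```csharp
using System;
using System.Collections.Generic;

// A toy simulation of mark and sweep with compaction. ToyObject and ToyHeap
// are invented for this illustration; they do not reflect the Compact
// Framework's internal implementation.
class ToyObject
{
    public string Name;
    public List<ToyObject> References = new List<ToyObject>();
    public bool Marked; // set during the mark phase

    public ToyObject(string name) { Name = name; }
}

class ToyHeap
{
    List<ToyObject> heap = new List<ToyObject>();          // every allocated object
    public List<ToyObject> Roots = new List<ToyObject>();  // objects reachable directly from code

    public ToyObject Allocate(string name)
    {
        ToyObject obj = new ToyObject(name);
        heap.Add(obj);
        return obj;
    }

    // Mark phase: recursively flag everything reachable from a root.
    void Mark(ToyObject obj)
    {
        if (obj.Marked) return; // already visited (also guards against cycles)
        obj.Marked = true;
        foreach (ToyObject child in obj.References) Mark(child);
    }

    // Sweep and compact: keep only marked objects, sliding them together so
    // the free space becomes one contiguous block.
    public void Collect()
    {
        foreach (ToyObject root in Roots) Mark(root);

        List<ToyObject> compacted = new List<ToyObject>();
        foreach (ToyObject obj in heap)
        {
            if (obj.Marked)
            {
                obj.Marked = false; // reset for the next collection
                compacted.Add(obj);
            }
            // Unmarked objects are unreachable garbage and are simply dropped.
        }
        heap = compacted;
    }

    public int Count { get { return heap.Count; } }
}

class Demo
{
    static void Main()
    {
        ToyHeap heap = new ToyHeap();

        ToyObject a = heap.Allocate("a");
        heap.Allocate("b");                   // never referenced again: garbage
        a.References.Add(heap.Allocate("c")); // reachable through a
        heap.Allocate("d");                   // garbage

        heap.Roots.Add(a); // only a, and whatever it references, is live

        heap.Collect();
        Console.WriteLine(heap.Count); // 2: a and c survive, b and d are reclaimed
    }
}
```

The important point is the order of operations: everything reachable from the roots is marked first, and only then are the unmarked objects discarded and the survivors slid together into a contiguous region.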
The .NET Compact Framework, with its JIT compiler, memory manager, and garbage collector, also has one additional trick up its sleeve: the capability to garbage collect JITed code. Normally this is not a desirable thing to do, but under special circumstances it can prove very valuable.

If the execution engine has gone to the effort of JIT compiling managed code to native instructions in order to achieve maximum execution performance, throwing out the JITed code seems wasteful, because the code will need to be re-JITed the next time it executes. It is, however, beneficial to be able to do this in two circumstances:

1. When the application has JITed and run a lot of code that will not need to be executed again any time soon. A common case is an application that runs in distinct stages. Code that executes at the beginning of the application to set things up may never need to run again. If memory is needed, it makes sense to reclaim the memory that holds this code (a sketch of this pattern follows the list).

2. When the set of live objects in memory is so large that application execution will fail unless more memory can be found for the additional object allocations required by the application's algorithms. In this case, the execution engine must be willing to throw out, and later recompile, code the application needs just so the application can keep running. If the alternative is the application halting because no memory is left, throwing out JITed code is the only solution, even if it means spending time recompiling that code later.
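Managed code cannot pitch JITed code itself; the execution engine makes that decision when memory runs low. What application code can do for the first circumstance is drop its references to setup-stage objects once that stage completes, so the objects become reclaimable and the setup code is no longer on any hot path. The following is a minimal sketch; StagedApplication and ConfigurationLoader are hypothetical types invented for illustration.

```csharp
using System;

// Hypothetical example of the first circumstance: an application that runs
// in stages and drops its setup-stage objects once initialization finishes.
class StagedApplication
{
    ConfigurationLoader setupData; // large object graph used only during startup

    public void Run()
    {
        // Setup stage: this code and its objects may never be needed again.
        setupData = new ConfigurationLoader();
        setupData.LoadEverything();

        // Drop the only reference so the setup objects become garbage. Whether
        // the JITed setup code is also pitched later is the execution engine's
        // decision, not the application's.
        setupData = null;

        // Optional: request a collection before the steady-state stage.
        // Normally the engine collects on its own when an allocation fails.
        GC.Collect();

        RunMainLoop(); // steady-state stage
    }

    void RunMainLoop() { /* application's ongoing work */ }
}

// Hypothetical placeholder so the sketch compiles.
class ConfigurationLoader
{
    public void LoadEverything() { /* allocate setup-only data */ }
}
```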
Memory Management and Garbage Collection Walkthrough

It is useful to understand how the execution engine interacts with your application's code and objects during execution. The following set of schematic diagrams walks through the memory management that occurs during different application execution stages.

Figure 3.2. A simple schematic representing the state of an application's memory while running.

As your application goes about creating and discarding objects and other heap-allocated types, it will eventually hit a point where no additional objects can be created without cleaning up the dead objects. At this point, the execution engine forces a garbage collection. Figure 3.3 shows the application's memory state right before garbage collection.

Figure 3.3. Application memory state right before garbage collection.

When a garbage collection occurs, the live objects can also be compacted. Removing the dead objects from memory and compacting the live objects often frees up a good deal of contiguous memory for the creation of new objects. Figure 3.4 shows the application's memory state right after garbage collection and compaction.

Figure 3.4. Dead objects garbage collected and memory compacted.

Figure 3.5. Typical application memory state for steady-state execution.

Figure 3.6. Typical application memory state for steady-state execution right after garbage collection.

Figure 3.7. Live objects crowding all of the available memory.

Figure 3.8. Previously JITed code is thrown out, and the memory the methods' JITed code previously took up has been reclaimed.

Figure 3.9. Methods get re-JITed as they are called, new objects are allocated, and discarded objects become garbage.

Figure 3.10. Severe memory pressure. Live objects take up all available memory, even after all possible JITed code has been discarded.
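You can observe the reclamation step depicted in Figures 3.3 and 3.4 in your own code with WeakReference, which tracks an object without keeping it alive. The following is a minimal sketch; the exact output can vary, because a debug-build JIT may keep the local reference alive until the end of the method.

```csharp
using System;

class CollectionDemo
{
    static void Main()
    {
        // A strongly referenced object: one of the live objects in Figure 3.3.
        object live = new object();

        // An object we stop referencing: it becomes one of the dead objects.
        object dead = new object();
        WeakReference tracker = new WeakReference(dead); // tracks without keeping it alive
        dead = null;                                     // no strong references remain

        // Force the collection that Figure 3.4 depicts. Normally the engine
        // collects on its own when an allocation cannot otherwise be satisfied.
        GC.Collect();
        GC.WaitForPendingFinalizers();

        Console.WriteLine(tracker.IsAlive); // typically False: the dead object was reclaimed
        GC.KeepAlive(live);                 // 'live' is still referenced here, so it survived
    }
}
```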