Garbage collection (computer science)

In computer science, garbage collection (GC) is a form of automatic memory management. The garbage collector, or just collector, attempts to reclaim garbage, or memory occupied by objects that are no longer in use by the program. Garbage collection was invented by John McCarthy around 1959 to solve problems in Lisp.[1][2]

Garbage collection is often portrayed as the opposite of manual memory management, which requires the programmer to specify which objects to deallocate and return to the memory system. However, many systems use a combination of approaches, including other techniques such as stack allocation and region inference.

Garbage collection does not traditionally manage limited resources other than memory that typical programs use, such as network sockets, database handles, user interaction windows, and file and device descriptors. Methods used to manage such resources, particularly destructors, may suffice as well to manage memory, leaving no need for GC. Some GC systems allow such other resources to be associated with a region of memory that, when collected, causes the other resource to be reclaimed; this is called finalization. Finalization may introduce complications limiting its usability, such as intolerable latency between disuse and reclaim of especially limited resources, or a lack of control over which thread performs the work of reclaiming.
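
As a concrete illustration of finalization, the following sketch uses Java's standard java.lang.ref.Cleaner class (available since Java 9) to associate a cleanup action with an object so that the action runs some time after the object becomes unreachable. The Connection class and its "native handle" are purely illustrative, and the runtime decides when, and on which thread, the action runs, which is exactly the latency and thread-control caveat mentioned above.

import java.lang.ref.Cleaner;

class FinalizationExample {
    private static final Cleaner CLEANER = Cleaner.create();

    static final class Connection implements AutoCloseable {
        private final Cleaner.Cleanable cleanable;

        Connection() {
            // The cleanup action must not capture 'this', or the Connection would
            // remain reachable from its own cleanup action and never be collected.
            Runnable release = () -> System.out.println("releasing native handle");
            this.cleanable = CLEANER.register(this, release);
        }

        // Deterministic release path; the Cleaner acts only as a safety net if
        // close() is never called and the object becomes unreachable.
        @Override public void close() { cleanable.clean(); }
    }
}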

Principles

The basic principles of garbage collection are:

  1. Find data objects in a program that cannot be accessed in the future
  2. Reclaim the resources used by those objects

Many computer languages require garbage collection, either as part of the language specification (e.g., Java, C#, and most scripting languages) or effectively for practical implementation (e.g., formal languages like lambda calculus); these are said to be garbage collected languages. Other languages were designed for use with manual memory management, but have garbage collected implementations available (e.g., C, C++). Some languages, like Ada, Modula-3, and C++/CLI allow both garbage collection and manual memory management to co-exist in the same application by using separate heaps for collected and manually managed objects; others, like D, are garbage collected but allow the user to manually delete objects and also entirely disable garbage collection when speed is required. While integrating garbage collection into the language's compiler and runtime system enables a much wider choice of methods,[citation needed] post hoc GC systems exist, including some that do not require recompilation. (Post-hoc GC is sometimes distinguished as litter collection.) The garbage collector will almost always be closely integrated with the memory allocator.

Benefits

Garbage collection frees the programmer from manually dealing with memory deallocation. As a result, certain categories of bugs are eliminated or substantially reduced:

  • Dangling pointer bugs, which occur when a piece of memory is freed while there are still pointers to it, and one of those pointers is dereferenced. By then the memory may have been re-assigned to another use, with unpredictable results.
  • Double free bugs, which occur when the program tries to free a region of memory that has already been freed, and perhaps already been allocated again.
  • Certain kinds of memory leaks, in which a program fails to free memory occupied by objects that have actually become unreachable, which leads to memory exhaustion if such a behavior is repeated indefinitely. (Garbage collection typically does not deal with the unbounded accumulation of data which is reachable, but which will actually not be used by the program.)

Some of the bugs addressed by garbage collection can have security implications.

Disadvantages

Typically, garbage collection has certain disadvantages:

  • Garbage collection consumes computing resources in deciding which memory to free, reconstructing facts that may have been known to the programmer. The penalty for the convenience of not annotating object lifetime manually in the source code is overhead, often[citation needed] leading to decreased or uneven performance. Interaction with memory hierarchy effects can make this overhead intolerable in circumstances that are hard to predict or to detect in routine testing.
  • The moment when the garbage is actually collected can be unpredictable, resulting in stalls scattered throughout a session. Unpredictable stalls can be unacceptable in real-time environments, in transaction processing, or in interactive programs.
  • In most garbage-collected environments, objects may be referenced only through managed references; raw pointers and pointer arithmetic are generally not available.
  • Non-deterministic GC is incompatible with RAII-based management of non-GCed resources. As a result, the need for explicit manual resource management (release/close) for non-GCed resources becomes transitive to composition: in a non-deterministic GC system, if a resource, or a resource-like object, requires manual resource management (release/close) and this object is used as part of another object, then the composed object also becomes a resource-like object that itself requires manual resource management (release/close).

Tracing garbage collectors

Tracing garbage collectors are the most common type of garbage collector. They first determine which objects are reachable (or potentially reachable), and then discard all remaining objects.

Reachability of an object

Informally, an object is reachable if it is referenced by at least one variable in the program, either directly or through references from other reachable objects. More precisely, objects can be reachable in only two ways:

  1. A distinguished set of objects are assumed to be reachable: these are known as the roots. Typically, these include all the objects referenced from anywhere in the call stack (that is, all local variables and parameters in the functions currently being invoked), and any global variables.
  2. Anything referenced from a reachable object is itself reachable; more formally, reachability is a transitive closure.

The reachability definition of "garbage" is not optimal, insofar as the last time a program uses an object could be long before that object falls out of the environment scope. A distinction is sometimes drawn between syntactic garbage, those objects the program cannot possibly reach, and semantic garbage, those objects the program will in fact never again use. For example:

Object x = new Foo();
Object y = new Bar();
x = new Quux();
/* at this point, we know that the Foo object 
 * originally assigned to x will never be
 * accessed: it is syntactic garbage
 */
 
if (x.check_something()) {
    x.do_something(y);
}
System.exit(0);
/* in the above block, y *could* be semantic garbage,
 * but we won't know until x.check_something() returns
 * some value: if it returns at all
 */

The problem of precisely identifying semantic garbage can easily be shown to be undecidable: a program that allocates an object X, runs an arbitrary input program P, and uses X if and only if P finishes would require a semantic garbage collector to solve the halting problem. Although conservative heuristic methods for semantic garbage detection remain an active research area, essentially all practical garbage collectors focus on syntactic garbage.[citation needed]

Another complication with this approach is that, in languages with both reference types and unboxed value types, the garbage collector needs to somehow be able to distinguish which variables on the stack or fields in an object are regular values and which are references: in memory, an integer and a reference might look alike. The garbage collector then needs to know whether to treat the element as a reference and follow it, or whether it is a primitive value. One common solution is the use of tagged pointers.
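
The following sketch illustrates the tagged-pointer idea in simplified form, treating every slot as a 64-bit word whose low bit distinguishes a small integer (tag 1) from an index into a simulated heap (tag 0); the class and method names are illustrative only and do not correspond to any particular runtime.

class TaggedWordSketch {
    static long tagInt(long value)     { return (value << 1) | 1; }  // integer: low bit set
    static long tagRef(long heapIndex) { return heapIndex << 1; }    // reference: low bit clear

    static boolean isReference(long word) { return (word & 1) == 0; }
    static long untagInt(long word)       { return word >> 1; }
    static long untagRef(long word)       { return word >>> 1; }

    // During marking, the collector only follows words whose tag says "reference":
    static void scanSlot(long word) {
        if (isReference(word)) {
            long index = untagRef(word);
            // ... mark the heap object stored at 'index' in the simulated heap ...
        } // otherwise the word is a plain integer and is skipped
    }
}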

Strong and Weak references

The garbage collector can reclaim only objects that have no references. However, there can exist additional references that, in a sense, do not matter, which are called weak references. In discussions about weak references, ordinary references are sometimes called strong references. An object is eligible for garbage collection if there are no strong (i.e. ordinary) references to it, even though there still might be some weak references to it.

A weak reference is not merely any pointer to the object that the garbage collector does not care about. The term is usually reserved for a properly managed category of special reference objects which are safe to use even when the object disappears, because they lapse to a safe value (usually null). An unsafe reference that is not known to the garbage collector will simply remain dangling, continuing to refer to the address where the object previously resided; this is not a weak reference.

In some implementations, notably in Microsoft .NET,[3] weak references are divided into two further subcategories: long weak references (which track resurrection) and short weak references.
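
In Java, for example, weak references are exposed through the standard java.lang.ref.WeakReference class, as the following short example illustrates (System.gc() is only a hint, so the exact moment of collection is not guaranteed).

import java.lang.ref.WeakReference;

class WeakRefExample {
    public static void main(String[] args) {
        Object strong = new Object();
        WeakReference<Object> weak = new WeakReference<>(strong);

        System.out.println(weak.get() != null);  // true: the strong reference keeps it alive
        strong = null;                           // drop the last strong reference
        System.gc();                             // only a hint; collection is not guaranteed
        System.out.println(weak.get());          // typically null after the object is collected
    }
}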

Weak Collections

Objects which maintain collections of other objects can also be devised with weak tracking features. For instance, weak hash tables are useful. Like a regular hash table, a weak hash table maintains an association between pairs of objects, where each pair is understood to be a key and value. However, the hash table does not actually maintain a strong reference on these objects. A special behavior takes place when either the key or value or both become garbage: the hash table entry is spontaneously deleted. There exist further refinements such as hash tables which have only weak keys (value references are ordinary, strong references) or only weak values (key references are ordinary, strong references).

Weak hash tables are important for maintaining associations between objects, such that the objects engaged in the association can still become garbage if nothing in the program refers to them any longer (other than the associating hash table).

The use of a regular hash table for such a purpose could lead to a "logical memory leak": the accumulation of reachable data which the program does not need and will not use.
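
Java's standard WeakHashMap is one example of such a weak-keyed table: keys are held through weak references while values are held strongly, so an entry can disappear after a collection once nothing else refers to its key. A brief illustration (again, System.gc() is only a hint):

import java.util.Map;
import java.util.WeakHashMap;

class WeakMapExample {
    public static void main(String[] args) {
        Map<Object, String> table = new WeakHashMap<>();
        Object key = new Object();
        table.put(key, "associated data");

        key = null;       // the table's weak reference is now the only reference to the key
        System.gc();      // only a hint; the entry is removed once the key is collected
        System.out.println(table.size());  // typically 0 after collection
    }
}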

Basic algorithm

Tracing collectors are so called because they trace through the working set of memory. These garbage collectors perform collection in cycles. A cycle is started when the collector decides (or is notified) that it needs to reclaim memory, which happens most often when the system is low on memory[citation needed]. The original method involves a naïve mark-and-sweep in which the entire memory set is touched several times.

Naïve mark-and-sweep

In the naïve mark-and-sweep method, each object in memory has a flag (typically a single bit) reserved for garbage collection use only. This flag is always cleared, except during the collection cycle. The first stage of collection does a tree traversal of the entire 'root set', marking each object that is pointed to as being 'in-use'. All objects that those objects point to, and so on, are marked as well, so that every object that is ultimately pointed to from the root set is marked. Finally, all memory is scanned from start to finish, examining all free or used blocks; those with the in-use flag still cleared are not reachable by any program or data, and their memory is freed. (For objects which are marked in-use, the in-use flag is cleared again, preparing for the next cycle.)
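
The following sketch shows the mark and sweep phases over a toy object graph, with the heap represented as a plain list of objects; the class names are illustrative, and real collectors operate on raw memory rather than Java collections.

import java.util.*;

class MarkSweepSketch {
    static final class Obj {
        boolean marked;                          // the per-object GC flag described above
        List<Obj> refs = new ArrayList<>();      // outgoing references
    }

    final List<Obj> heap = new ArrayList<>();    // every allocated object

    void collect(Collection<Obj> roots) {
        // Mark phase: depth-first traversal from the root set.
        Deque<Obj> stack = new ArrayDeque<>(roots);
        while (!stack.isEmpty()) {
            Obj o = stack.pop();
            if (o.marked) continue;
            o.marked = true;
            stack.addAll(o.refs);
        }
        // Sweep phase: scan the whole heap, free unmarked objects, clear marks.
        heap.removeIf(o -> !o.marked);           // "freeing" = dropping from the heap list
        for (Obj o : heap) o.marked = false;     // prepare flags for the next cycle
    }
}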

This method has several disadvantages, the most notable being that the entire system must be suspended during collection; no mutation of the working set can be allowed. This will cause programs to 'freeze' periodically (and generally unpredictably), making real-time and time-critical applications impossible. In addition, the entire working memory must be examined, much of it twice, potentially causing problems in paged memory systems.

Tri-color marking

Because of these pitfalls, most modern tracing garbage collectors implement some variant of the tri-colour marking abstraction, but simple collectors (such as the mark-and-sweep collector) often do not make this abstraction explicit. Tri-colour marking works as follows:

  1. Create initial white, grey, and black sets; these sets will be used to maintain progress during the cycle.
    • Initially the white set or condemned set is the set of objects that are candidates for having their memory recycled.
    • The black set is the set of objects that cheaply can be proven to have no references to objects in the white set; in many implementations the black set starts off empty.
    • The grey set consists of all objects that are reachable from root references but whose outgoing references have not yet been scanned. Grey objects are known to be reachable from the root, so cannot be garbage collected; grey objects will eventually end up in the black set. The grey state means the objects that a grey object references still need to be checked.
    • The grey set is initialised to objects which are referenced directly at root level; typically all other objects are initially placed in the white set.
    • Objects can move from white to grey to black, never in the other direction.
  2. Pick an object from the grey set. Blacken this object (move it to the black set), by greying all the white objects it references directly. This confirms that this object cannot be garbage collected, and also that any objects it references cannot be garbage collected.
  3. Repeat the previous step until the grey set is empty.
  4. When there are no more objects in the grey set, then all the objects remaining in the white set have been demonstrated not to be reachable, and the storage occupied by them can be reclaimed.
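
The steps above can be sketched as follows, with the three sets represented as explicit collections; real collectors typically encode the colours in per-object mark bits and a work queue rather than in separate set objects, so this is only an illustration.

import java.util.*;

class TriColourSketch {
    static final class Obj { List<Obj> refs = new ArrayList<>(); }

    /** Returns the set of objects that may be reclaimed (the final white set). */
    static Set<Obj> collect(Set<Obj> allObjects, Set<Obj> roots) {
        Set<Obj> white = new HashSet<>(allObjects);   // condemned candidates
        Deque<Obj> grey = new ArrayDeque<>();         // reachable, not yet scanned
        Set<Obj> black = new HashSet<>();             // reachable, fully scanned

        for (Obj r : roots) if (white.remove(r)) grey.add(r);    // step 1: grey the roots
        while (!grey.isEmpty()) {                                // steps 2-3
            Obj o = grey.remove();
            for (Obj child : o.refs)
                if (white.remove(child)) grey.add(child);        // grey the white children
            black.add(o);                                        // blacken the object
        }
        return white;                                            // step 4: unreachable objects
    }
}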

The three sets partition memory; every object in the system, including the root set, is in precisely one set.

The tri-colour marking algorithm preserves an important invariant:

No black object points directly to a white object.

This ensures that the white objects can be safely destroyed once the grey set is empty. (Some variations on the algorithm do not preserve the tricolour invariant but they use a modified form for which all the important properties hold.)

The tri-colour method has an important advantage: it can be performed 'on-the-fly', without halting the system for significant time periods. This is accomplished by marking objects as they are allocated and during mutation, maintaining the various sets. By monitoring the size of the sets, the system can perform garbage collection periodically, rather than as-needed. Also, the need to touch the entire working set each cycle is avoided.

Implementation strategies

In order to implement the basic tri-colour algorithm, several important design decisions must be made, which can significantly affect the performance characteristics of the garbage collector.

Moving vs. non-moving

Once the unreachable set has been determined, the garbage collector may simply release the unreachable objects and leave everything else as it is, or it may copy some or all of the reachable objects into a new area of memory, updating all references to those objects as needed. These are called "non-moving" and "moving" (or, alternatively, "non-compacting" and "compacting") garbage collectors, respectively.

At first, a moving GC strategy may seem inefficient and costly compared to the non-moving approach, since much more work would appear to be required on each cycle. In fact, however, the moving GC strategy leads to several performance advantages, both during the garbage collection cycle itself and during actual program execution:

  • No additional work is required to reclaim the space freed by dead objects; the entire region of memory from which reachable objects were moved can be considered free space. In contrast, a non-moving GC must visit each unreachable object and somehow record that the memory it alone occupied is available.
  • Similarly, new objects can be allocated very quickly. Since large contiguous regions of memory are usually made available by the moving GC strategy, new objects can be allocated by simply incrementing a 'free memory' pointer. A non-moving strategy may, after some time, lead to a heavily fragmented heap, requiring expensive consultation of "free lists" of small available blocks of memory in order to allocate new objects.
  • If an appropriate traversal order is used (such as cdr-first for list conses), objects that refer to each other frequently can be moved very close to each other in memory, increasing the likelihood that they will be located in the same cache line or virtual memory page. This can significantly speed up access to these objects through these references.

One disadvantage of a moving garbage collector is that it only allows access through references that are managed by the garbage collected environment, and does not allow pointer arithmetic. This is because any native pointers to objects will be invalidated when the garbage collector moves the object (they become dangling pointers). For interoperability with native code, the garbage collector must copy the object contents to a location outside of the garbage collected region of memory. An alternative approach is to pin the object in memory, preventing the garbage collector from moving it and allowing the memory to be directly shared with native pointers (and possibly allowing pointer arithmetic).[4]

Copying vs. mark-and-sweep vs. mark-and-don't-sweep

To further refine the distinction, tracing collectors can also be divided by considering how the three sets of objects (white, grey, and black) are maintained during a collection cycle.

The most straightforward approach is the semi-space collector, which dates to 1969. In this moving GC scheme, memory is partitioned into a "from space" and "to space". Initially, objects are allocated into "to space" until it becomes full and a collection is triggered. At the start of a collection, the "to space" becomes the "from space", and vice versa. The objects reachable from the root set are copied from the "from space" to the "to space". These objects are scanned in turn, and all objects that they point to are copied into "to space", until all reachable objects have been copied into "to space". Once the program continues execution, new objects are once again allocated in the "to space" until it is once again full and the process is repeated. This approach has the advantage of conceptual simplicity (the three object color sets are implicitly constructed during the copying process), but the disadvantage that a (possibly) very large contiguous region of free memory is necessarily required on every collection cycle. This technique is also known as stop-and-copy. Cheney's algorithm is an improvement on the semi-space collector.
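
The following sketch outlines a Cheney-style copy for a toy heap in which every object is a fixed-size node with two reference fields; allocation into to-space and the handling of raw memory are omitted, and all names are illustrative.

import java.util.*;

class SemiSpaceSketch {
    static final class Node {
        Node left, right;     // the only reference fields of this toy object
        Node forwarded;       // forwarding pointer, set once the node has been copied
    }

    List<Node> fromSpace = new ArrayList<>();
    List<Node> toSpace   = new ArrayList<>();

    /** Copy a node into to-space (if not already copied) and return the copy. */
    Node copy(Node n) {
        if (n == null) return null;
        if (n.forwarded != null) return n.forwarded;   // already evacuated
        Node c = new Node();
        c.left = n.left;                               // fields are fixed up during the scan
        c.right = n.right;
        n.forwarded = c;
        toSpace.add(c);
        return c;
    }

    /** Collect: evacuate the roots, then scan to-space in a single pass (Cheney scan). */
    void collect(List<Node> roots) {
        toSpace.clear();
        for (int i = 0; i < roots.size(); i++) roots.set(i, copy(roots.get(i)));
        for (int scan = 0; scan < toSpace.size(); scan++) {      // grey objects = [scan, size)
            Node c = toSpace.get(scan);
            c.left = copy(c.left);     // copying the children implicitly greys them
            c.right = copy(c.right);
        }
        List<Node> tmp = fromSpace; fromSpace = toSpace; toSpace = tmp;  // flip the spaces
    }
}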

A mark and sweep garbage collector maintains a bit (or two) with each object to record whether it is white or black; the grey set is either maintained as a separate list (such as the process stack) or using another bit. As the reference tree is traversed during a collection cycle (the "mark" phase), these bits are manipulated by the collector to reflect the current state. A final "sweep" of the memory areas then frees white objects. The mark and sweep strategy has the advantage that, once the unreachable set is determined, either a moving or non-moving collection strategy can be pursued; this choice of strategy can even be made at runtime, as available memory permits. It has the disadvantage of "bloating" objects by a small amount.

A mark and don't sweep garbage collector, like the mark-and-sweep, maintains a bit with each object to record whether it is white or black; the gray set is either maintained as a separate list (such as the process stack) or using another bit. There are two key differences here. First, black and white mean different things than they do in the mark and sweep collector. In a "mark and don't sweep" system, all reachable objects are always black. An object is marked black at the time it is allocated, and it will stay black even if it becomes unreachable. A white object is unused memory and may be allocated. Second, the interpretation of the black/white bit can change. Initially, the black/white bit may have the sense of (0=white, 1=black). If an allocation operation ever fails to find any available (white) memory, that means all objects are marked used (black). The sense of the black/white bit is then inverted (for example, 0=black, 1=white). Everything becomes white. This momentarily breaks the invariant that reachable objects are black, but a full marking phase follows immediately, to mark them black again. Once this is done, all unreachable memory is white. No "sweep" phase is necessary.

Generational GC (ephemeral GC)

It has been empirically observed that in many programs, the most recently created objects are also those most likely to become unreachable quickly (known as infant mortality or the generational hypothesis). A generational GC (also known as ephemeral GC) divides objects into generations and, on most cycles, will place only the objects of a subset of generations into the initial white (condemned) set. Furthermore, the runtime system maintains knowledge of when references cross generations by observing the creation and overwriting of references. When the garbage collector runs, it may be able to use this knowledge to prove that some objects in the initial white set are unreachable without having to traverse the entire reference tree. If the generational hypothesis holds, this results in much faster collection cycles while still reclaiming most unreachable objects.

In order to implement this concept, many generational garbage collectors use separate memory regions for different ages of objects. When a region becomes full, those few objects that are referenced from older memory regions are promoted (copied) up to the next highest region, and the entire region can then be overwritten with fresh objects. This technique permits very fast incremental garbage collection, since the garbage collection of only one region at a time is all that is typically required.
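
The cross-generation bookkeeping described above is commonly implemented with a write barrier that records old-to-young references in a "remembered set", which a minor collection then treats as additional roots. A toy sketch, with purely illustrative names:

import java.util.*;

class GenerationalSketch {
    static final class Obj {
        int generation;              // 0 = young, 1 = old
        Obj field;                   // a single reference field, for simplicity
    }

    // Old objects that currently hold a reference into the young generation.
    final Set<Obj> rememberedSet = new HashSet<>();

    /** Every reference store goes through this barrier instead of a plain assignment. */
    void writeField(Obj holder, Obj value) {
        holder.field = value;
        if (value != null && holder.generation > value.generation)
            rememberedSet.add(holder);   // remember: an older object now references a younger one
    }

    // A minor collection can then use the remembered set as extra roots for the young
    // generation, without tracing the entire old generation.
}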

Generational garbage collection is a heuristic approach, and some unreachable objects may not be reclaimed on each cycle. It may therefore occasionally be necessary to perform a full mark and sweep or copying garbage collection to reclaim all available space. In fact, runtime systems for modern programming languages (such as Java and the .NET Framework) usually use some hybrid of the various strategies that have been described thus far; for example, most collection cycles might look only at a few generations, while occasionally a mark-and-sweep is performed, and even more rarely a full copying is performed to combat fragmentation. The terms "minor cycle" and "major cycle" are sometimes used to describe these different levels of collector aggression.

Stop-the-world vs. incremental vs. concurrent

Simple stop-the-world garbage collectors completely halt execution of the program to run a collection cycle, thus guaranteeing that new objects are not allocated and objects do not suddenly become unreachable while the collector is running.

This has the obvious disadvantage that the program can perform no useful work while a collection cycle is running (sometimes called the "embarrassing pause"). Stop-the-world garbage collection is therefore mainly suitable for non-interactive programs. Its advantage is that it is both simpler to implement and faster than incremental garbage collection.

Incremental and concurrent garbage collectors are designed to reduce this disruption by interleaving their work with activity from the main program. Incremental garbage collectors perform the garbage collection cycle in discrete phases, with program execution permitted between each phase (and sometimes during some phases). Concurrent garbage collectors do not stop program execution at all, except perhaps briefly when the program's execution stack is scanned. However, the sum of the incremental phases takes longer to complete than one batch garbage collection pass, so these garbage collectors may yield lower total throughput.

Careful design is necessary with these techniques to ensure that the main program does not interfere with the garbage collector and vice versa; for example, when the program needs to allocate a new object, the runtime system may either need to suspend it until the collection cycle is complete, or somehow notify the garbage collector that there exists a new, reachable object.

Precise vs. conservative and internal pointers

Some collectors can correctly identify all pointers (references) in an object; these are called precise (also exact or accurate) collectors, the opposite being a conservative or partly conservative collector. Conservative collectors assume that any bit pattern in memory could be a pointer if, interpreted as a pointer, it would point into an allocated object. Conservative collectors may produce false positives, where unused memory is not released because of improper pointer identification. This is not always a problem in practice unless the program handles a lot of data that could easily be misidentified as a pointer. False positives are generally less problematic on 64-bit systems than on 32-bit systems because the range of valid memory addresses tends to be a tiny fraction of the range of 64-bit values. Thus, an arbitrary 64-bit pattern is unlikely to mimic a valid pointer. Whether a precise collector is practical usually depends on the type safety properties of the programming language in question. An example for which a conservative garbage collector would be needed is the C language, which allows typed (non-void) pointers to be type cast into untyped (void) pointers, and vice versa.

A related issue concerns internal pointers, or pointers to fields within an object. If the semantics of a language allow internal pointers, then there may be many different addresses that can refer to parts of the same object, which complicates determining whether an object is garbage or not. An example of this is the C++ language, in which multiple inheritance can cause pointers to base objects to have different addresses. Even in languages like Java, internal pointers can exist during the computation of, say, an array element address. In a tightly optimized program, the corresponding pointer to the object itself may have been overwritten in its register, so such internal pointers need to be scanned.

Performance implications

Tracing garbage collectors require some implicit runtime overhead that may be beyond the control of the programmer, and can sometimes lead to performance problems. For example, commonly used stop-the-world garbage collectors, which pause program execution at arbitrary times, may make garbage collection inappropriate for some embedded systems, high-performance server software, and applications with real-time needs.

The principal sources of overhead in the two approaches can be summarized as follows:

Manual heap allocation
  • search for a best/first-fit block of sufficient size
  • free list maintenance

Garbage collection
  • locate reachable objects
  • copy reachable objects (for moving collectors)
  • read/write barriers (for incremental collectors)
  • search for a best/first-fit block and free list maintenance (for non-moving collectors)

It is difficult to compare the two cases directly, as their behavior depends on the situation. For example, in the best case for a garbage collecting system, allocation just increments a pointer, but in the best case for manual heap allocation, the allocator maintains freelists of specific sizes and allocation only requires following a pointer. However, this size segregation usually causes a large degree of external fragmentation, which can have an adverse impact on cache behaviour. Memory allocation in a garbage collected language may be implemented using heap allocation behind the scenes (rather than simply incrementing a pointer), so the performance advantages listed above do not necessarily apply in this case. In some situations, most notably embedded systems, it is possible to avoid both garbage collection and heap management overhead by preallocating pools of memory and using a custom, lightweight scheme for allocation/deallocation.[5]

The overhead of write barriers is more likely to be noticeable in an imperative-style program which frequently writes pointers into existing data structures than in a functional-style program which constructs data only once and never changes them.

Some advances in garbage collection can be understood as reactions to performance issues. Early collectors were stop-the-world collectors, but the performance of this approach was distracting in interactive applications. Incremental collection avoided this disruption, but at the cost of decreased efficiency due to the need for barriers. Generational collection techniques are used with both stop-the-world and incremental collectors to increase performance; the trade-off is that some garbage is not detected as such for longer than normal.

Determinism

  • Tracing garbage collection is not deterministic in the timing of object finalization. An object which becomes eligible for garbage collection will usually be cleaned up eventually, but there is no guarantee when (or even if) that will happen. This is an issue for program correctness when objects are tied to non-memory resources, whose release is an externally-visible program behavior, such as closing a network connection, releasing a device or closing a file. One garbage collection technique which provides determinism in this regard is reference counting.
  • Garbage collection can have a nondeterministic impact on execution time, by potentially introducing pauses into the execution of a program which are not correlated with the algorithm being processed. Under tracing garbage collection, the request to allocate a new object can sometimes return quickly and at other times trigger a lengthy garbage collection cycle. Under reference counting, although allocation of objects is usually fast, decrementing a reference is nondeterministic, since a reference may reach zero, triggering recursion to decrement the reference counts of other objects which that object holds.

Real-time garbage collection

While garbage collection is generally nondeterministic, it is possible to use it in hard real-time systems. A real-time garbage collector should guarantee that, even in the worst case, it will dedicate a certain amount of computational resources to mutator threads. Constraints imposed on a real-time garbage collector are usually either work based or time based. A time-based constraint would look like: within each time window of duration T, mutator threads should be allowed to run at least for Tm time. For work-based analysis, MMU (minimal mutator utilization)[6] is usually used as a real-time constraint for the garbage collection algorithm.

One of the first implementations of real-time garbage collection for the JVM was based on the Metronome algorithm,[7] and there are other commercial implementations.[8]

Reference counting

Reference counting is a form of garbage collection in which each object has a count of the number of references to it. Garbage is identified by having a reference count of zero. An object's reference count is incremented when a reference to it is created, and decremented when a reference is destroyed. The object's memory is reclaimed when the count reaches zero.
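
The following sketch shows the basic bookkeeping in simplified form, with explicit retain and release operations on a toy object; real systems hide these operations behind reference assignment or smart pointers, and the names used here are illustrative only.

import java.util.ArrayList;
import java.util.List;

class RefCountedSketch {
    static final class Obj {
        int refCount;
        final List<Obj> children = new ArrayList<>();  // references this object holds

        void retain() { refCount++; }                  // a new reference was created

        void release() {                               // a reference was destroyed
            if (--refCount == 0) {
                // The object is garbage: release everything it references, recursively,
                // after which its own memory can be reclaimed.
                for (Obj child : children) child.release();
                children.clear();
            }
        }
    }
}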

There are two major disadvantages to reference counting:

  • If two or more objects refer to each other, they can create a cycle whereby neither will be collected as their mutual references never let their reference counts become zero. Some garbage collection systems using reference counting (like the one in CPython) use specific cycle-detecting algorithms to deal with this issue.[9] Another strategy is to use weak references for the "backpointers" which create cycles. Under reference counting, a weak reference is similar to a weak reference under a tracing garbage collector. It is a special reference object whose existence does not increment the reference count of the referent object. Furthermore, a weak reference is safe in that when the referent object becomes garbage, any weak reference to it lapses, rather than being permitted to remain dangling, meaning that it turns into a predictable value, such as a null reference.
  • In naive implementations, each assignment of a reference and each reference falling out of scope often require modifications of one or more reference counters. However, in the common case when a reference is copied from an outer scope variable into an inner scope variable, such that the lifetime of the inner variable is bounded by the lifetime of the outer one, the reference incrementing can be eliminated: the outer variable "owns" the reference. In the programming language C++, this technique is readily implemented and demonstrated with the use of const references. Reference counting in C++ is usually implemented using "smart pointers" whose constructors, destructors and assignment operators manage the references. A smart pointer can be passed by reference to a function, which avoids the need to copy-construct a new reference (which would increase the reference count on entry into the function and decrease it on exit); instead the function receives a reference to the smart pointer, which is produced inexpensively.
  • [clarification needed] When used in a multithreaded environment, these modifications (increment and decrement) may need to be atomic operations such as compare-and-swap, at least for any objects which are shared, or potentially shared, among multiple threads. Atomic operations are expensive on a multiprocessor, and even more expensive if they have to be emulated with software algorithms. There are ways to optimize this situation, such as having two-level references with multiple counts. A reference R to an object O can instead be represented as a reference R to a reference-counted object RO which itself refers to O. R and RO are not shared among threads; different threads have their own nonshared R and RO pointing to the shared O. Each RO owns a single reference count within O. The reference R can be passed around within a thread, such that only the count within RO increments and decrements, without the use of atomic operations. Only when the reference count of RO reaches zero (the thread has lost its last private reference to the object) is the reference count of O decremented using an atomic operation. A special operation for inter-thread object passing can instantiate a new R pointing to a new RO. All of this is encapsulated so that it is transparent: R simply looks like a reference to O in most situations in the program.
  • It is a common misconception that reference counting has deterministic performance compared to tracing garbage collection. Reference counting provides quasi-deterministic reclamation performance in situations where objects are obviously short-lived (the same situations in which generational GC techniques also provide quasi-deterministic behavior). In the general case, whenever an object's reference count is decremented, there is no telling how much CPU time will be consumed: each object whose reference count reaches zero (i.e. each object which is garbage) has to recursively trigger a reference count decrement operation on all other objects which it holds, some of which may also reach zero reference counts. Reference counting in the general case will also introduce nondeterministic pauses.

Escape analysis

Escape analysis can be used to convert heap allocations to stack allocations, thus reducing the amount of work needed to be done by the garbage collector.
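
For example, in the following Java method the Point object never escapes (it is neither stored in a field nor returned), so a JIT compiler performing escape analysis may place it on the stack or eliminate the allocation entirely; whether this actually happens depends on the particular virtual machine and optimization level.

class EscapeAnalysisExample {
    static final class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    static int distanceSquared(int x, int y) {
        Point p = new Point(x, y);   // never stored in a field or returned: does not escape
        return p.x * p.x + p.y * p.y;
    }
}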

Availability

Generally speaking, higher-level programming languages are more likely to have garbage collection as a standard feature. In languages that do not have built in garbage collection, it can often be added through a library, as with the Boehm garbage collector for C and C++. This approach is not without drawbacks, such as changing object creation and destruction mechanisms.

Most functional programming languages, such as ML, Haskell, and APL, have garbage collection built in. Lisp, which introduced functional programming, is especially notable for introducing this mechanism.

Other dynamic languages, such as Ruby (but not Perl 5, or PHP, which use reference counting), also tend to use GC. Object-oriented programming languages such as Smalltalk, Java and ECMAScript usually provide integrated garbage collection. Notable exceptions are C++ and Delphi which have destructors. Objective-C has not traditionally had it, but ObjC 2.0 as implemented by Apple for Mac OS X uses a runtime collector developed in-house, while the GNUstep project uses a Boehm collector.

Historically, languages intended for beginners, such as BASIC and Logo, have often used garbage collection for heap-allocated variable-length data types, such as strings and lists, so as not to burden programmers with manual memory management. On early microcomputers, with their limited memory and slow processors, BASIC garbage collection could often cause apparently random, inexplicable pauses in the midst of program operation. Some BASIC interpreters, such as Applesoft BASIC on the Apple II family, had terribly inefficient garbage collectors for strings, which repeatedly scanned the string descriptors for the string having the highest address in order to compact it toward high memory. This one-string-at-a-time processing loop resulted in O(n²) time performance in the number of strings, which could introduce a pause more than one minute long into the execution of string-intensive programs. A replacement garbage collector for Applesoft BASIC, published in Call-A.P.P.L.E. (January 1981, pages 40–45, Randy Wigginton), identified a group of strings in every pass over the heap, reducing a pause of two minutes to less than a second, depending on the size of the group. Other approaches were published, but none ever made it into a new revision of the BASIC interpreter.

Limited environments

Garbage collection is rarely used on embedded or real-time systems because of the perceived need for very tight control over the use of limited resources. However, garbage collectors compatible with such limited environments have been developed.[10] The Microsoft .NET Micro Framework and Java Platform, Micro Edition are embedded software platforms that, like their larger cousins, include garbage collection.

See also

  • International Symposium on Memory Management
  • Garbage-first collector

References

  1. ^ "Recursive functions of symbolic expressions and their computation by machine". Portal.acm.org. http://portal.acm.org/citation.cfm?id=367177.367199. Retrieved 2009-03-29. 
  2. ^ "Recursive functions of symbolic expressions and their computation by machine, Part I". http://www-formal.stanford.edu/jmc/recursive.html. Retrieved 2009-05-29. 
  3. ^ http://msdn.microsoft.com/en-us/library/ms404247.aspx
  4. ^ "Copying and Pinning". Msdn2.microsoft.com. http://msdn2.microsoft.com/en-us/library/23acw07k.aspx. Retrieved 2010-07-09. 
  5. ^ "Memory allocation in embedded systems". Eros-os.org. http://www.eros-os.org/pipermail/cap-talk/2007-January/006795.html. Retrieved 2009-03-29. 
  6. ^ "A parallel, real-time garbage collector". ACM SIGPLAN Notices 36.5: 125–136. 22. doi:378795.378823. http://dl.acm.org/citation.cfm?doid=378795.378823. 
  7. ^ "The Metronome: A Simpler Approach to Garbage Collection in Real-Time Systems". http://www.research.ibm.com/people/d/dfb/papers/Bacon03Metronome.pdf. 
  8. ^ "Real-time Java, Part 4: Real-time garbage collection". http://www.ibm.com/developerworks/java/library/j-rtj4/index.html. 
  9. ^ "Reference Counts". Extending and Embedding the Python Interpreter. 2008-02-21. http://www.python.org/doc/2.5.2/ext/refcounts.html. Retrieved 2008-11-13. "While Python uses the traditional reference counting implementation, it also offers a cycle detector that works to detect reference cycles." 
  10. ^ "Wei Fu and Carl Hauser, "A Real-Time Garbage Collection Framework for Embedded Systems". ACM SCOPES '05, 2005". Portal.acm.org. http://portal.acm.org/ft_gateway.cfm?id=1140392&type=pdf&coll=GUIDE&dl=GUIDE&CFID=15151515&CFTOKEN=6184618. Retrieved 2010-07-09. 

Further reading

  • Jones, Richard; Hosking, Antony; Moss, Eliot (19 August 2011). The Garbage Collection Handbook: The Art of Automatic Memory Management. CRC Applied Algorithms and Data Structures Series. Chapman and Hall/CRC. ISBN 1420082795. 
  • Jones, Richard; Lins, Rafael D. (1996). Garbage Collection: Algorithms for Automatic Dynamic Memory Management. Wiley. ISBN 0471941484. 
  • Wilson, Paul R. (1992). "Uniprocessor Garbage Collection Techniques". IWMM '92 Proceedings of the International Workshop on Memory Management (Springer-Verlag). http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.47.2438. 
