Thinking in Java - 4th Edition
the object that it points to and then follow all the references in that object, tracing into 
the objects they point to, etc., until you've moved through the entire web that originated with 
the reference on the stack or in static storage. Each object that you move through must still 
be alive. Note that there is no problem with detached self-referential groups—these are 
simply not found, and are therefore automatically garbage. 
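A detached self-referential group can be sketched in a few lines (the class and variable names here are illustrative, not from the text):

```java
// Two objects that point at each other but are unreachable from any
// stack or static reference form a detached self-referential group.
class Node {
    Node partner;
}

public class IslandDemo {
    public static void main(String[] args) {
        Node a = new Node();
        Node b = new Node();
        a.partner = b;   // a -> b
        b.partner = a;   // b -> a: a self-referential group
        a = null;
        b = null;
        // The cycle still references itself, but nothing on the stack
        // or in static storage reaches it, so a tracing collector
        // never visits it and both objects are automatically garbage.
        System.out.println("cycle detached");
    }
}
```

This is exactly why tracing collectors, unlike plain reference counting, have no trouble with cycles: the group is simply never found during the walk.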
In the approach described here, the JVM uses an adaptive garbage-collection scheme, and 
what it does with the live objects that it locates depends on the variant currently being used. 
One of these variants is stop-and-copy. This means that—for reasons that will become 
apparent—the program is first stopped (this is not a background collection scheme). Then, 
each live object is copied from one heap to another, leaving behind all the garbage. In 
addition, as the objects are copied into the new heap, they are packed end-to-end, thus 
compacting the new heap (and allowing new storage to simply be reeled off the end as 
previously described). 
Of course, when an object is moved from one place to another, all references that point at the 
object must be changed. The reference that goes from the heap or the static storage area to 
the object can be changed right away, but there can be other references pointing to this object 
that will be encountered later during the "walk." These are fixed up as they are found (you 
could imagine a table that maps old addresses to new ones). 
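Such a fix-up table can be sketched as a map from old addresses to new ones (an illustrative simulation; real collectors work on raw memory, not integer keys):

```java
import java.util.*;

// Simulates a copy collector: live objects are packed end-to-end in a
// new heap, and a forwarding table rewrites every reference.
public class CopyFixup {
    public static void main(String[] args) {
        // Live objects at "addresses" in the old heap, each holding
        // references to other addresses.
        Map<Integer, List<Integer>> oldHeap = new LinkedHashMap<>();
        oldHeap.put(100, List.of(300));
        oldHeap.put(300, List.of(100));

        // Copy pass: assign each live object a packed new address,
        // recording where it moved.
        Map<Integer, Integer> forwarding = new HashMap<>();
        int next = 0;
        for (int oldAddr : oldHeap.keySet())
            forwarding.put(oldAddr, next++);

        // Fix-up pass: rewrite every reference through the table.
        Map<Integer, List<Integer>> newHeap = new LinkedHashMap<>();
        for (var e : oldHeap.entrySet()) {
            List<Integer> fixed = new ArrayList<>();
            for (int ref : e.getValue())
                fixed.add(forwarding.get(ref));
            newHeap.put(forwarding.get(e.getKey()), fixed);
        }
        System.out.println(newHeap);
    }
}
```

Note that the new heap is compacted as a side effect of the copy: new addresses are handed out consecutively, so fresh allocation can again just reel storage off the end.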
There are two issues that make these so-called "copy collectors" inefficient. The first is the 
idea that you have two heaps and you slosh all the memory back and forth between these two 
separate heaps, maintaining twice as much memory as you actually need. Some JVMs deal 
with this by allocating the heap in chunks as needed and simply copying from one chunk to 
another. 
The second issue is the copying process itself. Once your program becomes stable, it might be 
generating little or no garbage. Despite that, a copy collector will still copy all the memory 
from one place to another, which is wasteful. To prevent this, some JVMs detect that no new 
garbage is being generated and switch to a different scheme (this is the "adaptive" part). This 
other scheme is called mark-and-sweep, and it's what earlier versions of Sun's JVM used all 
the time. For general use, mark-and-sweep is fairly slow, but when you know you're 
generating little or no garbage, it\u2019s fast. 
Mark-and-sweep follows the same logic of starting from the stack and static storage, and 
tracing through all the references to find live objects. However, each time it finds a live 
object, that object is marked by setting a flag in it, but the object isn't collected yet. Only 
when the marking process is finished does the sweep occur. During the sweep, the dead 
objects are released. However, no copying happens, so if the collector chooses to compact a 
fragmented heap, it does so by shuffling objects around. 
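The mark and sweep phases described above can be sketched as a simple graph traversal (an illustrative simulation of the idea, not the JVM's actual implementation):

```java
import java.util.*;

// A toy object graph: each "object" is a name with outgoing references.
public class MarkSweepSim {
    final Map<String, List<String>> refs = new HashMap<>();
    final Set<String> marked = new HashSet<>();

    // Mark: trace from the roots (stack/static references),
    // flagging every reachable object without collecting anything.
    void mark(Collection<String> roots) {
        Deque<String> stack = new ArrayDeque<>(roots);
        while (!stack.isEmpty()) {
            String obj = stack.pop();
            if (marked.add(obj))  // set the flag; skip if already marked
                stack.addAll(refs.getOrDefault(obj, List.of()));
        }
    }

    // Sweep: only after marking finishes are the unmarked (dead)
    // objects released.
    Set<String> sweep() {
        Set<String> dead = new HashSet<>(refs.keySet());
        dead.removeAll(marked);
        dead.forEach(refs::remove);
        return dead;
    }

    public static void main(String[] args) {
        MarkSweepSim heap = new MarkSweepSim();
        heap.refs.put("A", List.of("B"));
        heap.refs.put("B", List.of());
        heap.refs.put("C", List.of("D"));  // detached cycle...
        heap.refs.put("D", List.of("C"));  // ...never reached, so dead
        heap.mark(List.of("A"));           // only A is a root
        System.out.println("swept: " + new TreeSet<>(heap.sweep()));
    }
}
```

Because nothing moves, surviving objects stay where they are, which is precisely why a fragmented heap requires the extra shuffling step mentioned above.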
"Stop-and-copy" refers to the idea that this type of garbage collection is not done in the 
background; instead, the program is stopped while the garbage collection occurs. In the Sun 
literature you'll find many references to garbage collection as a low-priority background 
process, but it turns out that the garbage collection was not implemented that way in earlier 
versions of the Sun JVM. Instead, the Sun garbage collector stopped the program when 
memory got low. Mark-and-sweep also requires that the program be stopped. 
As previously mentioned, in the JVM described here memory is allocated in big blocks. If you 
allocate a large object, it gets its own block. Strict stop-and-copy requires copying every live 
object from the source heap to a new heap before you can free the old one, which translates 
to lots of memory. With blocks, the garbage collection can typically copy objects to dead 
blocks as it collects. Each block has a generation count to keep track of whether it's alive. In 
the normal case, only the blocks created since the last garbage collection are compacted; all 
other blocks get their generation count bumped if they have been referenced from 
somewhere. This handles the normal case of lots of short-lived temporary objects. 
Periodically, a full sweep is made—large objects are still not copied (they just get their 
generation count bumped), and blocks containing small objects are copied and compacted. 
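The generation-count bookkeeping can be sketched as follows (a toy model with invented names; real blocks are regions of raw memory, not map entries):

```java
import java.util.*;

// Toy model of per-block generation counts: blocks referenced from
// somewhere survive a collection and get their count bumped; blocks
// created since the last collection (generation 0) are the only ones
// reclaimed in the normal case.
public class GenerationDemo {
    static Map<String, Integer> generation = new LinkedHashMap<>();

    static void collect(Set<String> referenced) {
        Iterator<Map.Entry<String, Integer>> it =
            generation.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<String, Integer> e = it.next();
            if (referenced.contains(e.getKey()))
                e.setValue(e.getValue() + 1);  // survivor: bump count
            else if (e.getValue() == 0)
                it.remove();  // young and unreferenced: reclaimed now
            // Older unreferenced blocks wait for the periodic full sweep.
        }
    }

    public static void main(String[] args) {
        generation.put("longLived", 0);
        generation.put("temp1", 0);
        generation.put("temp2", 0);
        collect(Set.of("longLived"));  // the temporaries die young
        System.out.println(generation);
    }
}
```

This matches the common case the text describes: most objects are short-lived temporaries, so most collections only need to look at the youngest blocks.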
The JVM monitors the efficiency of garbage collection and if it becomes a waste of time 
because all objects are long-lived, then it switches to mark-and-sweep. Similarly, the JVM 
keeps track of how successful mark-and-sweep is, and if the heap starts to become 
fragmented, it switches back to stop-and-copy. This is where the "adaptive" part comes in, so 
you end up with a mouthful: "Adaptive generational stop-and-copy mark-and-sweep." 
There are a number of additional speedups possible in a JVM. An especially important one 
involves the operation of the loader and what is called a just-in-time (JIT) compiler. A JIT 
compiler partially or fully converts a program into native machine code so that it doesn't 
need to be interpreted by the JVM and thus runs much faster. When a class must be loaded 
(typically, the first time you want to create an object of that class), the .class file is located, 
and the bytecodes for that class are brought into memory. At this point, one approach is to 
simply JIT compile all the code, but this has two drawbacks: It takes a little more time, 
which, compounded throughout the life of the program, can add up; and it increases the size 
of the executable (bytecodes are significantly more compact than expanded JIT code), and 
this might cause paging, which definitely slows down a program. An alternative approach is 
lazy evaluation, which means that the code is not JIT compiled until necessary. Thus, code 
that never gets executed might never be JIT compiled. The Java HotSpot technologies in 
recent JDKs take a similar approach by increasingly optimizing a piece of code each time it is 
executed, so the more the code is executed, the faster it gets. 
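This warm-up effect can be observed with a rough timing sketch (the class name is invented, and absolute numbers vary widely by JVM and machine; this is illustrative, not a rigorous benchmark):

```java
public class WarmupDemo {
    // A small method HotSpot will eventually compile to native code
    // once it has been called often enough.
    static long sumOfSquares(int n) {
        long total = 0;
        for (int i = 0; i < n; i++)
            total += (long) i * i;
        return total;
    }

    public static void main(String[] args) {
        long first = 0, later = 0;
        for (int round = 0; round < 10_000; round++) {
            long start = System.nanoTime();
            long result = sumOfSquares(1_000);
            long elapsed = System.nanoTime() - start;
            if (round == 0) first = elapsed;      // interpreted call
            if (round == 9_999) later = elapsed;  // likely JIT-compiled
            if (result != 332_833_500L)
                throw new AssertionError("unexpected sum");
        }
        System.out.println("first call (ns): " + first);
        System.out.println("last call (ns): " + later);
        // Typically the last call is much faster: the more the code
        // runs, the more aggressively HotSpot optimizes it.
    }
}
```

Running with `java -XX:+PrintCompilation WarmupDemo` also lets you watch HotSpot report each method as it gets compiled during warm-up.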
Member initialization 
Java goes out of its way to guarantee that variables are properly initialized before they are 
used. In the case of a method's local variables, this guarantee comes in the form of a 
compile-time error. So if you say: 
void f() { 
 int i; 
 i++; // Error -- i not initialized 
} 
you'll get an error message that says that i might not have been initialized. Of course, the 
compiler could have given i a default value, but an uninitialized local variable is probably a 
programmer error, and a default value would have covered that up. Forcing the programmer 
to provide an initialization value is more likely to catch a bug. 
If a primitive is a field in a class, however, things are a bit different. As you saw in the 
Everything Is an Object chapter, each primitive field of a class is guaranteed to get an initial 
value. Here\u2019s a program that verifies this, and shows the values: 
//: initialization/ 
// Shows default initial values. 
import static net.mindview.util.Print.*; 
public class InitialValues { 
 boolean t; 
 char c; 
 byte b; 
 short s; 
 int i; 
 long l; 
 float f; 
 double d; 
 InitialValues reference; 
 void printInitialValues() { 
 print("Data type Initial value"); 
 print("boolean " + t); 
 print("char [" + c + "]"); 
 print("byte " + b); 
 print("short " + s); 
 print("int " + i); 
 print("long " + l); 
 print("float " + f); 
 print("double " + d); 
 print("reference " + reference); 
 } 
 public static void main(String[] args) { 
 InitialValues iv = new InitialValues(); 
 /* You could also say: 
 new InitialValues().printInitialValues();