Finding bugs in Java programs

A while back I came across findbugs, a static analysis tool that can be used to locate bugs in Java programs. Findbugs is able to identify a number of bug patterns, which range from bad practices to performance and multithreaded programming bugs. Findbugs can be invoked through a graphical utility, or by running the findbugs command line utility. The command line option has the advantage that it can be easily incorporated into existing build processes (options are readily available to integrate findbugs with maven and ant), allowing code to be checked each time a new build is created.

To use the command line interface, you can run the findbugs executable with the “-textui” option and one or more options to control how findbugs goes about locating bugs. The following example uses the “-effort” option to tell findbugs to put the maximum amount of effort into finding bugs, requests that all bugs considered medium to high priority be displayed, allocates 1GB of memory to findbugs, and sets the output format to HTML:

$ findbugs -textui -effort:max -maxHeap 1024 -html -medium test.jar

Once findbugs completes its analysis, an HTML report similar to the ones on the findbugs website will be written to standard output. Since findbugs is free and integrates easily with several build tools, there is little reason Java developers shouldn’t use it to analyze their code for bugs. If you’re interested in learning more about findbugs or the bug patterns it detects, you should check out the findbugs website and the talk Bill Pugh gave at Google!
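To give a feel for what findbugs catches, here is a small, hypothetical example of one of the bad-practice patterns it flags (ES_COMPARING_STRINGS_WITH_EQ): comparing String contents with “==” instead of equals():

```java
// Hypothetical example: the kind of bug findbugs flags as
// ES_COMPARING_STRINGS_WITH_EQ (comparing Strings with ==).
public class StringCompareBug {
    public static boolean sameTitle(String a, String b) {
        return a == b; // BUG: compares references, not contents; should be a.equals(b)
    }

    public static void main(String[] args) {
        String s1 = new String("report");
        String s2 = new String("report");
        // The reference comparison fails even though the contents match.
        System.out.println(sameTitle(s1, s2)); // prints "false"
        System.out.println(s1.equals(s2));     // prints "true"
    }
}
```

Running findbugs against a jar containing a class like this produces a report entry pointing at the suspect comparison.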

Generating byte code from a Java class file

I have been reading through the Java virtual machine specification, which covers all the details needed to implement a JVM. One thing the specification talks about in detail is Java bytecode, which is the machine-independent code that is executed by the virtual machine implementation. At various places in the specification I wondered what the bytecode would look like for a given chunk of Java code. Fortunately for me, the Java SDK comes with the javap utility, which can be run with the “-c” option to disassemble a class file into bytecode:

$ javap -c -s -verbose MyEnv

MyEnv();
  Signature: ()V
   Stack=1, Locals=1, Args_size=1
   0:   aload_0
   1:   invokespecial   #1; //Method java/lang/Object."<init>":()V
   4:   return
   line 3: 0

public static void main(java.lang.String[]);
  Signature: ([Ljava/lang/String;)V
   Stack=2, Locals=2, Args_size=1
   0:   new     #2; //class MyEnv
   3:   dup
   4:   invokespecial   #3; //Method "<init>":()V
   7:   astore_1
   8:   aload_1
   9:   invokespecial   #4; //Method allocateObjects:()V
   12:  return
   line 7: 0
   line 8: 8
   line 9: 12

In the output above, you can see the methods and constants the MyEnv class utilizes, along with the bytecode that makes up the main method in the MyEnv class. Javap is amazingly cool, and I am starting to realize that understanding bytecode can be a super useful tool for debugging problems.
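For reference, the disassembly above is consistent with a source file along these lines (a hedged reconstruction — the body of allocateObjects() is not visible in the bytecode shown, so it is a placeholder):

```java
// Hypothetical reconstruction of MyEnv.java, based on the javap output above.
public class MyEnv {

    // invokespecial #4 in main() is consistent with a private instance method.
    private void allocateObjects() {
        // placeholder: the real method body is not shown in the disassembly
    }

    public static void main(String[] args) {
        MyEnv env = new MyEnv();  // new #2, dup, invokespecial "<init>", astore_1
        env.allocateObjects();    // aload_1, invokespecial allocateObjects
    }
}
```

Compiling this and running javap -c against the resulting class file produces essentially the same instruction sequence.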

Graphing JVM utilization with orca

The Sun JDK has included an SNMP agent since version 5, and the agent has a number of objects in its MIB that provide runtime utilization data. The current set of objects includes thread utilization counters, garbage collection events, time spent performing garbage collection, heap usage, class loader activity, compiler metrics, etc. Graphing these values can help with performance analysis, since you can view activity over time. This data is also useful for capacity planning, since the graphs can be used to size hardware for an application and to forecast future hardware upgrades.

Numerous tools are available to graph utilization data, but my personal favorite is Orca. Orca can graph arbitrary data, and is a perfect fit for generating JVM performance graphs. Configuring Orca to generate JVM performance graphs is a snap. To begin, you will first need to enable the Java SNMP agent. To enable the agent, you can add the “com.sun.management.snmp.interface” (interface to bind to), “com.sun.management.snmp.port” (use an SNMP port other than the default, port 161), “com.sun.management.snmp.acl” (use ACLs to limit who can read and write to the server) and “com.sun.management.snmp.acl.file” (ACL file to use) properties to the java command line:

$ java -Dcom.sun.management.snmp.interface=192.168.1.10 \
       -Dcom.sun.management.snmp.port=8161 \
       -Dcom.sun.management.snmp.acl=true \
       -Dcom.sun.management.snmp.acl.file=/etc/snmp.acl …

The $JAVA_HOME/jre/lib/management/ directory contains a sample SNMP ACL file (aptly named snmp.acl.template) that can be tailored to fit your site’s security policy (if you use the default locations, you do not need to specify the last two parameters). After the SNMP agent is enabled, you can run javallator from cron to collect statistics at specific intervals. Each time javallator runs, it will write the current values of the Java agent’s performance counters (e.g., number of active threads, time spent performing garbage collection, etc.) to the file passed to the “-f” option, and will add a header to the file if it doesn’t exist.

The example below shows how javallator can be used to connect to a server on UDP port 8161, collect the current set of JVM statistics, and write these statistics to a file (the file name should match up with the regex used in the Orca find_files configuration stanza):

$ javallator -s server1 -p 8161 -c public \
        -f /home/matty/java/jvmstats/server1_myappjvm/javallator-2007-11-05

$ cat /home/matty/java/jvmstats/server1_myappjvm/javallator-2007-11-05

1194308951 218176652 735 308464 10 1348 0 380934 0 5309065 0 0 \
266797056 165481912 19136512 7156344

After a few samples are taken, the orca utility can be run with the javallator.cfg configuration file to graph the data collected above (if you haven’t used Orca before, you can peruse my monitoring LDAP article to see how to configure it). This will result in a set of graphs that look similar to the samples posted to my website. I really dig Java, and having historical performance graphs handy sure makes performance analysis a whole lot easier!
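As an aside, the counters exported through the SNMP MIB (thread counts, garbage collection events and times, heap usage) are the same data the java.lang.management API exposes in-process; a minimal sketch of reading them directly, assuming nothing beyond the standard JDK:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class JvmCounters {
    public static void main(String[] args) {
        // Number of live threads in this JVM.
        int live = ManagementFactory.getThreadMXBean().getThreadCount();

        // Cumulative GC events and GC time, summed across all collectors.
        long gcCount = 0, gcTimeMs = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            gcCount += gc.getCollectionCount();
            gcTimeMs += gc.getCollectionTime();
        }

        // Current heap usage.
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();

        System.out.println(live + " threads, " + gcCount + " collections ("
                + gcTimeMs + " ms), heap " + heap.getUsed() + " of "
                + heap.getMax() + " bytes used");
    }
}
```

This is handy for quick checks, though it only gives you a point-in-time view; the SNMP agent plus Orca is what gets you the historical graphs.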

Using the DTrace hotspot provider to observe java object allocations

In my previous post, I discussed how jmap can be used to view the contents of the Java heap. Jmap is a wonderful utility for viewing heap utilization at a specific point in time, but it’s not the best utility for answering questions like “which call stack is allocating objects of type foo?”, “how many objects are being allocated per second?”, or “what is the average size of allocated objects?”. To answer these types of questions, we need to look to a more thorough instrumentation framework.

One such framework is the DTrace hotspot provider, which was introduced in Java 6 and provides a number of probes to dynamically instrument Java applications. These probes allow you to observe thread behavior, garbage collection, method calls, class loader activity, monitor contention and object allocation. The probe to observe object allocation is rightfully named “object-alloc,” and carries several arguments (since the probe arguments are documented extremely well in the Java community, I won’t describe them here) that can be used to understand how the heap is being used.

To use the object-alloc probe, you will need to start the JVM with the “-XX:+DTraceAllocProbes” option, or run the jinfo utility to enable it at runtime (enabling it at runtime is preferred):

$ jinfo -XX:+DTraceAllocProbes `pgrep java`

Once the object-alloc probe is enabled, you can begin prodding your application to see what it’s doing. If you are seeing lots of garbage collection events in jconsole, jstat or your garbage collection logs, the object allocation probe can be used to display object allocations by object type, the number of objects being allocated, and the size of these allocations. The newobjects.d DTrace script provides the necessary probe definitions to display this information:

$ newobjects.d

Class                           Objects created  Bytes Allocated
[I                              112              16064     
Wasteful                        17943            287088    
java/lang/StringBuilder         17943            287088    
java/lang/String                17943            430632    
[C                              107658           52393560  

Class                           Objects created  Bytes Allocated
[I                              121              18520     
Wasteful                        19541            312656    
java/lang/StringBuilder         19541            312656    
java/lang/String                19542            469008    
[C                              117252           57062640  

If you see tons of object allocations, you can use the DTrace quantize() action to create a histogram with object counts or object sizes. The objectsize.d DTrace script shows how to create a distribution that is keyed on object size:

$ objectsize.d

         value  ------------- Distribution ------------- count    
             8 |                                         0        
            16 |@@@@@@@@@@@@@                            95897    
            32 |@@@@                                     31965    
            64 |                                         1        
           128 |@@@@                                     32157    
           256 |@@@@                                     31965    
           512 |@@@@@@@@@                                63932    
          1024 |@@@@                                     31967    
          2048 |                                         0        

Once we know how many objects are being allocated, it would be useful to see which Java call stacks are responsible for allocating them. To generate a list of Java call stacks with the number of objects allocated per call stack, we can run the whoallocatecount.d DTrace script:

$ whoallocatecount.d

     < ..... >`__1cNSharedRuntimeTdtrace_object_alloc6FpnHoopDesc__i_+0x4f
     StubRoutines (1)`_pnGThread__v_+0x1a3`jni_CallStaticVoidMethod+0x15d

In the output above (which was cleaned up to make it easy to view), we can see that 5678 objects were allocated by the main() method in the Wasteful class. If you would prefer to see call stacks by bytes allocated, you can run the whoallocatebytes.d script:

$ whoallocatebytes.d

     < ..... >`__1cNSharedRuntimeTdtrace_object_alloc6FpnHoopDesc__i_+0x4f
     StubRoutines (1)`_pnGThread__v_+0x1a3`jni_CallStaticVoidMethod+0x15d

The hotspot provider is an incredible debugging tool, and it’s pretty amazing what you can do with just one probe!! If you decide to use these probes in production, be somewhat cautious. Selectively enable probes, and do so only for the duration of your troubleshooting session. Be safe, be careful, and most of all, have fun! Niiiiiiice!

Summarizing Java heap utilization with jmap

Java has become one of the most successful languages to hit the IT industry. One of the reasons behind its high rate of adoption is the fact that Java manages memory resources for the programmer. This makes programming significantly easier, but introduces additional complexity, since engineers need to size the Java heap and pick a garbage collection algorithm (or, in the case of generational collectors, more than one) that best matches an application’s workload. Fortunately, two extremely useful tools allow you to observe the Java heap: the DTrace hotspot provider and the Java jmap utility. In this blog post, I will discuss the jmap utility.

The Java jmap utility provides a number of useful options to summarize heap usage and get a breakdown of the objects in the new and old generations. To summarize the new and old generations, the jmap utility can be run with the “-heap” option and the PID of the JVM to summarize (the PID can be acquired by running the jps utility, or by reviewing the ps output):

$ jmap -heap `pgrep java`

Attaching to process ID 5365, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 1.6.0_01-b06

using parallel threads in the new generation.
using thread-local object allocation.
Concurrent Mark-Sweep GC

Heap Configuration:
   MinHeapFreeRatio = 40
   MaxHeapFreeRatio = 70
   MaxHeapSize      = 1073741824 (1024.0MB)
   NewSize          = 268435456 (256.0MB)
   MaxNewSize       = 268435456 (256.0MB)
   OldSize          = 805306368 (768.0MB)
   NewRatio         = 7
   SurvivorRatio    = 6
   PermSize         = 21757952 (20.75MB)
   MaxPermSize      = 88080384 (84.0MB)

Heap Usage:
New Generation (Eden + 1 Survivor Space):
   capacity = 234881024 (224.0MB)
   used     = 102062424 (97.33431243896484MB)
   free     = 132818600 (126.66568756103516MB)
   43.452818053109304% used
Eden Space:
   capacity = 201326592 (192.0MB)
   used     = 94318424 (89.94905853271484MB)
   free     = 107008168 (102.05094146728516MB)
   46.84846798578898% used
From Space:
   capacity = 33554432 (32.0MB)
   used     = 7744000 (7.38525390625MB)
   free     = 25810432 (24.61474609375MB)
   23.07891845703125% used
To Space:
   capacity = 33554432 (32.0MB)
   used     = 0 (0.0MB)
   free     = 33554432 (32.0MB)
   0.0% used
concurrent mark-sweep generation:
   capacity = 805306368 (768.0MB)
   used     = 15032688 (14.336288452148438MB)
   free     = 790273680 (753.6637115478516MB)
   1.8667042255401611% used
Perm Generation:
   capacity = 50577408 (48.234375MB)
   used     = 30285240 (28.88225555419922MB)
   free     = 20292168 (19.35211944580078MB)
   59.87898786746842% used

The jmap output contains the size of eden (the place where new objects are created), each survivor space, the old generation and the permanent generation. If you aren’t familiar with the generations that are used in modern JVMs, you might want to take a look at Tony Printezis’ article on this topic.
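If you want the same per-generation numbers programmatically, the java.lang.management API exposes one MemoryPoolMXBean per space (eden, the survivor spaces, the old and permanent generations); a rough sketch — pool names vary between collectors, so they are printed rather than assumed:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class HeapPools {
    public static void main(String[] args) {
        // One MemoryPoolMXBean per heap (and non-heap) space.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            MemoryUsage u = pool.getUsage();
            System.out.printf("%-30s used=%d committed=%d%n",
                    pool.getName(), u.getUsed(), u.getCommitted());
        }
    }
}
```

The numbers reported line up with what jmap -heap prints, which makes this a convenient way to embed heap monitoring in an application.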

In addition to printing summary data, jmap can also be used to display the objects in the heap.

$ jmap -histo `pgrep java` |more

num   #instances    #bytes  class name
  1:    224437    27673848  [C
  2:     38611    23115312  [B
  3:     47801    12187536  [I
  4:    208624     8344960  java.lang.String
  5:     45332     6192904  <constMethodKlass>
  6:     45332     5450864  <methodKlass>
  7:      3889     4615536  <constantPoolKlass>
  8:     45671     4193136  [Ljava.lang.Object;
  9:     66203     3222312  <symbolKlass>
 10:      3889     3192264  <instanceKlassKlass>
 11:      3455     2999296  <constantPoolCacheKlass>
 12:     19754     1106224  java.nio.HeapCharBuffer

< ..... >

This summary contains the type of each object (e.g., “[C” refers to arrays of characters), the number of objects of each type, as well as the total number of bytes these objects take up in the heap. Jmap additionally allows you to filter the output to display only the live objects:

$ jmap -histo:live 698

num   #instances    #bytes  class name
  1:     45256     6185464  <constMethodKlass>
  2:     45256     5441744  <methodKlass>
  3:      6899     5201176  [B
  4:      3851     4577776  <constantPoolKlass>
  5:     29133     4259592  [C
  6:     66069     3213448  <symbolKlass>
  7:      3851     3169464  <instanceKlassKlass>
  8:      3417     2970112  <constantPoolCacheKlass>
  9:     29714     1188560  java.lang.String
 10:      1553      745224  <methodDataKlass>
 11:      4161      732336  java.lang.Class

< ..... >

There are also jmap options to dump the heap to a file, display the contents of the permanent generation and view objects that are awaiting finalization. Jmap is an incredibly useful tool, and in my next blog post I will show how the DTrace hotspot provider can be used to dig into object allocation in much more detail.

Viewing Java stack traces with jstack

I wrote yesterday about the jmap utility, which is a great utility for better understanding the arrangement of the JVM’s heap. Each thread that lives inside the JVM also contains an execution stack, which is used to store local variables and state information that allow function calls to work. The Java SDK/JRE comes with the jstack utility, which can be used to print the stack of each thread in human-readable form:

$ /usr/java/bin/jstack `pgrep java`

Attaching to process ID 16498, please wait...
Debugger attached successfully.
Client compiler detected.
JVM version is 1.5.0_06-b05
Thread t@25: (state = BLOCKED)
 - java.lang.Thread.sleep(long) @bci=721371649 (Interpreted frame)
 - java.lang.Thread.sleep(long) @bci=0 (Interpreted frame)
 - com.sun.patchpro.model.PatchProStateMachine$17.synchronize(com.sun.patchpro.util.StateMachine) @bci=30, line=828 (Interpreted frame)
 - @bci=45, line=261 (Interpreted frame)
 - @bci=11, line=595 (Interpreted frame)

Thread t@23: (state = IN_NATIVE)
 - java.lang.UNIXProcess.waitForProcessExit(int) @bci=0 (Interpreted frame)
 - java.lang.UNIXProcess.waitForProcessExit(int) @bci=0 (Interpreted frame)
 - java.lang.UNIXProcess.access$900(java.lang.UNIXProcess, int) @bci=2, line=17 (Interpreted frame)
 - java.lang.UNIXProcess$2$ @bci=17, line=86 (Interpreted frame)

Thread t@21: (state = BLOCKED)
 - java.lang.Thread.sleep(long) @bci=144113 (Interpreted frame)
 - java.lang.Thread.sleep(long) @bci=0 (Interpreted frame)
 - com.sun.patchpro.plugins.sunos.pkg.SunOSBaseDataExtension.buildDatabase( @bci=54, line=258 (Interpreted frame)
 - @bci=110, line=112 (Interpreted frame)
 - @bci=11, line=595 (Interpreted frame)

Thread t@17: (state = BLOCKED)
 - java.lang.Object.wait(long) @bci=706035048 (Interpreted frame)
 - java.lang.Object.wait(long) @bci=0 (Interpreted frame)
 - java.lang.Thread.join(long) @bci=70, line=1103 (Interpreted frame)
 - com.sun.patchpro.analysis.SunOSBaseData$DetectorThread.join(int) @bci=6, line=310 (Interpreted frame)
 - @bci=310, line=132 (Interpreted frame)
 - @bci=11, line=595 (Interpreted frame)

[ ..... ]

Each thread entry contains the state (e.g., runnable, blocked, etc.) of the thread and a stack backtrace with the current frame displayed first. Good stuff!
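If jstack isn’t handy, the same per-thread stacks are available from inside the JVM via Thread.getAllStackTraces(); a minimal sketch:

```java
import java.util.Map;

public class StackDumper {
    public static void main(String[] args) {
        // Map of every live thread to a snapshot of its current stack trace.
        for (Map.Entry<Thread, StackTraceElement[]> e : Thread.getAllStackTraces().entrySet()) {
            Thread t = e.getKey();
            System.out.println("Thread " + t.getName() + ": (state = " + t.getState() + ")");
            for (StackTraceElement frame : e.getValue()) {
                System.out.println(" - " + frame);
            }
            System.out.println();
        }
    }
}
```

This is useful for dumping stacks from a signal handler or a debug servlet, though jstack remains the better choice when the JVM is wedged, since it attaches from the outside.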