
Defanging the Java OutOfMemoryError - Dan Kimmel

From the classic computer error haikus:

Out of memory.
We wish to hold the whole sky,
But we never will.

Specifically, your application wishes to hold the whole sky, and it's failing intermittently because it's running out of memory. You can't find the issue and your customers are growing impatient. How do you defuse the situation?

I was inspired to write this post by a recent bug we fixed, where our core infrastructure team worked on a problem involving the Java heap, the garbage collector, Java Native Access (JNA), native memory allocations, and even segfaults! It is an attempt to detail all of the debugging tools and techniques we use to root cause OutOfMemoryErrors, and some approaches for fixing them. My hope is that publishing our playbook on OutOfMemoryErrors will help others solve tough Java memory allocation issues with less effort and discovery than it's taken us to get where we are today.

Although it's specifically written about the Oracle JVM for Java 7 running on a Unix operating system, most of the tools I describe are available or have a 1:1 replacement in other JVMs and OSes. The high-level concepts and techniques even apply to garbage-collected languages that don't run on the JVM, but you'll have to do a bit more mental translation.

Demystifying Java memory management

First and foremost, it's important to have a rough understanding of how the JVM manages memory. In a normal, single-threaded C program your memory map will look something like this:

0x00000000
program code and global variables
heap (grows towards higher addresses as you malloc() more memory)
<large gap>
stack
0xffffffff

However, in Java it's a little different. Everyone knows that Java is garbage collected, and probably a lot of people assume that the Java memory allocator and garbage collector use malloc() and free() to manage the memory they use to store your Java objects. It turns out that this isn't the case, so to figure out what's actually going on, let's take a look at the output from one of the JDK's standard memory debugging tools (I removed some lines of output and bolded others to keep focused on the interesting stuff):

$ jmap -heap $(pgrep java)
Heap Usage:
PS Young Generation
Eden Space:
  capacity = 705167360 (672.5MB)
  52.7369411993204% used
From Space:
  capacity = 6029312 (5.75MB)
  48.43298870584239% used
To Space:
  capacity = 5767168 (5.5MB)
  0.0% used
PS Old Generation
  capacity = 1216348160 (1160.0MB)
  88.26010605384563% used
PS Perm Generation
  capacity = 85983232 (82.0MB)
  98.37924678151201% used

As you can see, there are several different heap spaces that Java uses. Java objects are stored in the Young Generation (a combination of the Eden / From / To spaces) and the Old Generation; the Perm[anent] Generation is where .class files are loaded. Typically, each space has one (or sometimes multiple) native memory regions associated with it, and none of the spaces reside in the native heap (where malloc() takes its memory from).

The reason that the JVM uses multiple spaces has to do with its garbage collector implementation. Because the lifetime for objects tends to either be very short (stack variables, objects passed as user input or read temporarily into memory from a database, etc. usually only last for a few seconds) or very long (singleton objects, caches, etc. last the entire lifetime of the application), the garbage collector inspects the short-lived objects frequently and only occasionally inspects the long-lived objects. When you say Object o = new Object(), that object initially resides in the Eden space of the young generation, and if it is short-lived it will be garbage collected without ever leaving this space. However, if it survives a few collection cycles in the Eden space, it will travel through the To and From spaces and eventually be moved to the old generation, signifying that the garbage collector doesn't think it will become garbage soon. Notably, there is no guarantee on how often or how long the garbage collector will run, and it's common that full garbage collections (the ones that traverse all objects) won't happen at all unless the young generation fills up most of the way.

The native heap isn't even listed in the jmap output above, but it is used by the JVM to store internal state such as just-in-time compiled code, information about the state of the program or bytecode interpreter, etc. On most UNIX OSes you can use pmap to see all the memory regions in the JVM's address space (including the heap):

$ pmap $(pgrep java)
8813: /opt/jdk/bin/java -server -enableassertions -XX:MaxPermSize=512m -Xfut
08043000 20K rw--- [ stack ]
08050000 4K r-x-- /opt/jdk1.7.0_65/bin/java
08060000 4K rwx-- /opt/jdk1.7.0_65/bin/java
08061000 832036K rwx-- [ heap ]
401B0000 1024K rwx-- [ anon ]
4146A000 32K rw--R [ stack tid=3004855 ]
41744000 28K rw--R [ stack tid=3147981 ]
41D95000 40K rw--R [ stack tid=3164365 ]
...

Unlike Java heap space, which can only be used to store Java objects allocated using new, native memory is allocated by many different users, some trusted and others less so. The most trustworthy one is the JVM itself — whenever it needs memory in order to run some Java code, the memory will come from the native heap. The other normal user of native memory is more surprising: some of Java's standard libraries use native implementations for performance reasons. Specifically, this applies to some encryption and compression functions as well as to java.nio ByteBuffer objects. These two native memory users are very well behaved: they are well tested, rarely leak memory, and throw Java exceptions when they can't allocate enough memory.

There are also less-trusted allocators of native memory: third-party native code. If you're interfacing to native libraries through JNA or JNI (for instance, we use JNA to make calls to some DelphixOS-specific libraries for filesystem and OS management) then both JNA / JNI and the native libraries themselves will use native memory. Java agents attached to your JVM also run natively and use native memory. Notably, all these native third-party additions to the JVM are also able to bypass the native heap and directly manipulate your JVM's memory map by allocating new memory regions, mmap()ing files, etc. This can really screw stuff up, so you should be careful to only use native libraries and Java agents that meet a high quality bar.

Fixing the obvious OutOfMemoryError cases

If you're like me, at this point you're itching to get to OutOfMemoryErrors and how to debug them. When you run out of memory, there are a lot of different reasons that could be responsible. Let's look at the easy ones first, along with the messages you'll see them generate in your logs or on stdout/stderr.

java.lang.OutOfMemoryError: Requested array size exceeds VM limit

If you get this, it means you've just tried to allocate an array that is so large it would consume all Java heap space, so in 99.9% of cases it's a bug in your code where you've incorrectly calculated the array size to allocate. In the very unlikely case that it's not a bug, you'll need to increase your maximum Java heap size by passing a higher value to the JVM's -Xmx argument (and probably buy a lot more RAM).
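As a sketch of the size-miscalculation failure mode, consider how a naive int multiplication overflows. The class, method, and record sizes here are hypothetical illustrations, and the exact error message for the oversized allocation is HotSpot-specific:

```java
// Sketch: two ways a bad size calculation surfaces. The names and sizes are
// hypothetical, and the exact error message is HotSpot-specific.
public class ArraySizeBug {
    // Naive int multiplication silently overflows for large record counts.
    static int bufferSize(int recordCount, int recordBytes) {
        return recordCount * recordBytes;
    }

    public static void main(String[] args) {
        // 600,000 * 4,096 = 2,457,600,000 > Integer.MAX_VALUE, so this is negative:
        System.out.println(bufferSize(600_000, 4_096));

        // A positive size near Integer.MAX_VALUE fails before any memory is
        // actually touched; HotSpot reports it as exceeding the VM limit:
        try {
            byte[] huge = new byte[Integer.MAX_VALUE];
        } catch (OutOfMemoryError e) {
            System.out.println("OOM: " + e.getMessage());
        }
    }
}
```

The fix in the common case is to do the size arithmetic in long (or with Math.multiplyHigh-style overflow checks) and validate the result before allocating.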

java.lang.OutOfMemoryError: PermGen space

As mentioned above, the Permanent Generation is where .class files with your Java bytecode are stored in memory. About 98% of the time, this error is really simple: if every time you start your program it quickly runs out of PermGen space, it means the code you're using for your project has become too large and you need to increase the maximum PermGen size to make room for it using the -XX:MaxPermSize=<size> JVM argument. To give you an idea of a reasonable limit, our PermGen size is about 82MB for a Java codebase with maybe 500 thousand lines of code and several dozen .jar libraries, and we cap the PermGen at a safe distance of 512MB to avoid being a nuisance but to still catch egregious PermGen space hogs.

However, if you're hitting PermGen issues a long time after Java starts, it typically means that either your code or some third-party library you're using is dynamically generating and running Java code. For instance, before we switched to PostgreSQL a couple years ago, we used an in-Java SQL database called Derby to store our application metadata, and it would dynamically compile code to execute each new query we asked it. (I learned about this feature when it ran us out of PermGen space at a customer site. Ugh.) Fixing problems like this is a lot more application-specific, but in general the best way to handle it is to cap the amount of generated code. In the worst case scenario, you can also just cope with it by increasing the limit a lot.

If you're not sure which case your error falls into or you don't know what library is generating code and causing the PermGen to grow, the graphical VisualVM tool for analyzing Java heap problems will be your best friend. It allows you to do many things, including breaking down the PermGen contents by what type of object they are, or breaking down the .class files in the PermGen by the classloader that loaded them.

(If you get an OutOfMemoryError when you try to open a large Java dump in VisualVM, you can adjust the VisualVM options (on Mac OS X this file is in /Applications/VisualVM.app/Contents/Resources/visualvm/etc/visualvm.conf) and increase the maximum memory limit with the line:

"visualvm_default_options" "-J-Xmx2048m"

Obligatory Inception movie poster here.)

Another possible tool is the simpler jmap -permstat, which can be used to get some lower-resolution PermGen data:

$ /opt/jdk/bin/jmap -permstat $(pgrep java)
Attaching to process ID 13238, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 24.65-b04
finding class loader instances ..done.
computing per loader stat ..done.
please wait.. computing liveness.liveness analysis may be inaccurate ...
class_loader classes bytes parent_loader alive? type
<bootstrap> 4119 14824160 null live <internal>
0x7f298e28 1 1800 0x7a6c3f78 dead sun/reflect/DelegatingClassLoader@0x59c3a520
0x80c5c3a0 1 1816 0x7a6c3f78 dead sun/reflect/DelegatingClassLoader@0x59c3a520
0x7a1d5cc0 3 13224 0x79deb0c0 dead org/apache/catalina/loader/WebappClassLoader@0x5a3a7c48
...

Note that this tool works a bit better during postmortem analysis on a core file (live analysis as shown above is frequently not super accurate).

java.lang.OutOfMemoryError: Unable to create new native thread

From the message, you probably already guessed that this has to do with native memory — specifically, threads require native memory for their stacks to reside in that is separate from the heap. This memory shows up in pmap output as [ stack tid=… ]. Depending on what OS you're using and what the default stack size is, threads take up different amounts of native memory, so the limit on thread count varies a lot, but it's typically in the thousands. However, one thing is platform-independent: it's almost always a bug in your Java code to use this many threads. In our stack we typically have around 300-500 threads, but that number hasn't grown in the past few years so I think it's become pretty stable. Ways to work around this limit include using thread pools, Futures, SelectableChannels, WatchService, etc. to distribute the same amount of work across a smaller number of threads.
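To make the thread-pool consolidation concrete, here is a minimal sketch (the pool size, class name, and the squaring "work" are all made up for illustration): thousands of tasks share a handful of stacks instead of each getting its own thread.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: thread-per-task replaced by a fixed pool, so native stack usage
// stays constant no matter how many tasks arrive. Sizes are illustrative.
public class PooledWork {
    public static long sumOfSquares(int tasks) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(8); // 8 stacks total
        try {
            List<Future<Long>> results = new ArrayList<Future<Long>>();
            for (int i = 0; i < tasks; i++) {
                final long n = i;
                results.add(pool.submit(new Callable<Long>() {
                    public Long call() { return n * n; } // stand-in for real work
                }));
            }
            long total = 0;
            for (Future<Long> f : results) {
                total += f.get(); // blocks until that task finishes
            }
            return total;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sumOfSquares(10_000)); // thousands of tasks, 8 threads
    }
}
```

Even with 10,000 submitted tasks, pmap would show only the pool's eight worker stacks (plus JVM housekeeping threads).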

If you're not sure what all your threads are doing and need to figure it out to decide how to consolidate your thread use, try the jstack tool, which gives you a backtrace of every running thread in your JVM:

$ jstack $(pgrep java)
2015-03-06 16:36:12
Full thread dump Java HotSpot(TM) Server VM (24.65-b04 mixed mode):

"pool-44-thread-50798" prio=3 tid=0x13a68400 nid=0x30fcb1 runnable [0x443eb000]
   java.lang.Thread.State: RUNNABLE
        at FileDispatcherImpl.readv0(Native Method)
        at FileDispatcherImpl.readv(FileDispatcherImpl.java:56)
        ...
        at Thread.run(Thread.java:745)

"Attach Listener" daemon prio=3 tid=0x139c1c00 nid=0x30fcad waiting on condition [0x00000000]
   java.lang.Thread.State: RUNNABLE

"pool-2725422-thread-1" prio=3 tid=0x0a15b800 nid=0x30fcac runnable [0x50d9d000]
   java.lang.Thread.State: RUNNABLE
        at SocketInputStream.socketRead0(Native Method)
        at SocketInputStream.read(SocketInputStream.java:152)
...

Characterizing heap exhaustion

Now that we've gone through the special cases, it's time to tackle real heap exhaustion. This is the broadest category of OutOfMemoryError issues, and as a result it's also one of the hardest to debug. There's a variety of error messages that might show up in this case, including:

  • java.lang.OutOfMemoryError: Java heap space
  • java.lang.OutOfMemoryError: GC overhead limit exceeded
  • java.lang.OutOfMemoryError: <reason> <stack trace> (Native method)
  • java.lang.OutOfMemoryError: request <size> bytes for <reason>. Out of swap space?
  • The Java process was killed by a SIGSEGV signal (segfault).

The first two happen when you've overflowed the Java heap (or are about to, because the garbage collector only freed 2% of the heap after stealing the processor for 98% of the runtime of your process) and the rest are signs of native heap exhaustion. The main difference between these two is the tools you use to identify and debug the issue, so I'll talk about the broad categories of heap exhaustion before diving into concrete methodology.

Let's simplify by characterizing the three most common cases: when you have a heap that grows at a steady rate and eventually overflows, when you have a heap that's holding steady at some baseline and then overflows during a sudden flood of allocations, and when you have a heap whose baseline is higher than the maximum size.

Steady growth

In the case where you have a heap growing at a roughly linear rate over time, it usually means that there is some kind of memory leak. To be clear, if you're looking at native heap exhaustion it could be a real leak (in which case you should fix it by freeing the leaked memory), but in general I don't mean memory which nobody is referencing anymore. What I really mean are logical leaks: data structures that sometimes grow and never shrink. The classic case is an object cache, where every time a user gives you more data you add it to the cache, but then you never remove anything from the cache even if it's not being used. Generally speaking, this is either a result of laziness (using an in-memory data structure instead of persistent storage because it requires less effort) or premature optimization (imagining that the cached objects will need to be fetched from disk much more frequently than they actually are). In the former case, storing the objects to disk to free the memory associated with them is the way to go, and in the latter, just remove the cache and ensure that you don't see a noticeable performance regression in the code someone thought would exercise the cache.

In the rare case that a cache is actually necessary for performance reasons, the best approach is to bound the size of the cache. The easiest way to do this is to set a hard limit on the number of objects in the cache and remove the least-recently or least-frequently used ones when you reach that number. If you're interested in a minimal-effort caching solution, the excellent Guava library includes caching mechanisms with lots of nice options. An improvement which requires very little effort is allowing manual tuning of the limit on individual systems where performance is degrading (note: this only works well if you have a small number of deployments to tune manually!). The hardest but certainly the best approach is creating a cache which grows to hold as many objects as necessary to achieve some performance goal. However, you need to ensure your goal is feasible in all scenarios, otherwise you'll end up growing the cache indefinitely — usually this means using a cache hit percentage as your performance goal instead of, say, the mean fetch time through the cache. Alternately, you can add a sanity-check upper limit (which can itself be tunable).
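A minimal sketch of the hard-limit-plus-LRU idea in plain Java uses LinkedHashMap's access-order mode (the capacity here is a made-up tunable, not a recommendation):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: a hard-capped LRU cache built on LinkedHashMap's access-order
// mode, implementing the "set a hard limit and evict least-recently-used"
// bound described above. The capacity is a tunable, illustrative guess.
public class BoundedCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public BoundedCache(int capacity) {
        super(16, 0.75f, true); // accessOrder=true: get() moves entries to MRU
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict the LRU entry once we exceed the cap
    }
}
```

For anything beyond a sketch, Guava's cache (with maximumSize, weights, and time-based expiry) is the lower-effort route mentioned above. Note this class is not thread-safe as written; wrap it with Collections.synchronizedMap or use a concurrent cache in real code.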

Perhaps surprisingly, because the cache usually doesn't grow very quickly, the thread that receives the first out-of-memory notification is frequently not the thread leaking objects, so you will have to look at the contents of the heap to determine where all your memory has gone.

Explosive growth

If your heap has reached a roughly stable size but then suddenly fills due to a very large group of allocations, this is usually the result of receiving unexpectedly large input. Like all growing companies, as our customers have put more data into our systems we've hit a couple of bugs of this nature where we do a database query that unexpectedly returns, say, 100-1000x the number of rows we expected. This is especially common with time-based data that's being collected every second / minute. Not only does this tie up the database for a good long time fetching all the data, but once it hands the data to the JVM there's no possible way Java can hold it all in memory. The same basic problem has been seen when reading large files into memory as well.

If you're only using a small subset of the data you're reading, the obvious workaround is to only read in what's necessary. For us, a big subset of these problems involves reading too many rows from a database before processing them in memory (maybe calculating an average value of one of the columns, etc.). Usually the best fix for this involves pushing the calculation down a level to the database itself, which can easily do moderately complex calculations on a large amount of data in a more memory-efficient way.

It gets a bit trickier if you really need all the data, or if your calculations are impossible to express in SQL. You could resort to a streaming approach, where you read a little of the data at a time and process it, but that idea doesn't work for all algorithms. To accommodate these use cases, you either have to increase the heap size (easy, basically ignores the problem and places the burden on your OS's paging system, can result in very poor performance) or use persistent storage to hold intermediate results (hard, usually results in hard-to-maintain code, might be faster than relying on paging if you design it well). Note that MapReduce is essentially a design paradigm for solving this problem, so if you find yourself writing this kind of code a lot, you might consider a similar design.
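The streaming approach can be sketched for the column-average case mentioned earlier. The Reader here is a stand-in for a large file or result set; only one row is resident at a time instead of the whole input:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;

// Sketch: the streaming approach described above. Computes an average over
// arbitrarily many rows while holding only one row in memory at a time.
public class StreamingAverage {
    public static double average(Reader source) throws IOException {
        BufferedReader in = new BufferedReader(source);
        long count = 0;
        double sum = 0;
        String line;
        while ((line = in.readLine()) != null) { // one row resident at a time
            sum += Double.parseDouble(line.trim());
            count++;
        }
        return count == 0 ? 0 : sum / count;
    }

    public static void main(String[] args) throws IOException {
        // StringReader stands in for a huge file or streamed query result:
        System.out.println(average(new StringReader("10\n20\n30\n")));
    }
}
```

Averages, sums, counts, and min/max all stream this way; algorithms that need random access to the whole dataset are the ones that force the heap-size or persistent-storage trade-off above.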

Identifying the problem after there's explosive growth is usually not very difficult because your heap will be chock-full of one object type allocated from one code path, and the thread that runs out of memory is usually also the one which is allocating it like crazy.

Baseline is higher than maximum size

This case is relatively difficult to identify but very easy to deal with. Once you've ruled out that you're dealing with either a leak or a flood, and especially if you start running out of memory during many different but equally pedestrian workloads, you're probably dealing with this case. This is by far the best reason to expand the heap size, because it means that your program's working set has simply grown beyond the bounds you set for it.

Heap exhaustion diagnosis

The first question when I start any heap debugging investigation is, "Am I out of Java heap or native heap?" It's usually obvious from the error message. Let's look at the tools you can use in each case to help you determine what went wrong.

Java heap debugging

My favorite tool for debugging the Java heap is VisualVM (mentioned above). It allows you to break down the heap by Java object type, view the object reference tree to see what objects are pinning the most memory, and browse other pre-built metrics. The pre-built metrics are enough for most cases, but VisualVM's truly killer feature is the object query language, which allows you to run SQL-like queries over the entire heap to collect specific metrics about each object that might help you debug your problem. An alternative tool is the Eclipse Memory Analyzer; much of its functionality is the same as VisualVM's, but I find it somewhat harder to navigate.

To use either of these tools, you'll need to collect a Java heap dump from a live process:

$ jmap -dump:live,format=b,file=dump.hprof $(pgrep java)

Or from a core file (make sure the jmap and java executables you use in this command line are from the same JDK):

$ jmap -dump:format=b,file=dump.hprof $(which java) /path/to/core

A slightly lower-overhead tool that gives a subset of the same information is jmap -histo:

$ jmap -histo:live $(pgrep java)
 num #instances #bytes class name
----------------------------------------------
   1: 322199 36240648 [C
   2: 171690 23666992 <constMethodKlass>
   3: 373220 23225128 [B
   4: 171690 15116512 <methodKlass>
   5: 15320 15013256 [I
   6: 21855 11372328 <constantPoolKlass>
   7: 21850 8540424 <instanceKlassKlass>
   8: 317534 7620816 java.lang.String
   9: 81346 6507680 java.lang.reflect.Method
  10: 17663 6083392 <constantPoolCacheKlass>
  11: 44042 4373912 [Ljava.util.HashMap$Entry;
  12: 162101 3890424 java.util.HashMap$Entry
  13: 71463 3306112 [Ljava.lang.Object;
  14: 10839 3290392 <methodDataKlass>
  15: 23095 2778072 java.lang.Class
...

This is less useful because it only tells you what the types are, and usually the most-used object types are primitive types (the [B and [I above), String, HashMap$Entry, and so on. Since every library under the sun uses Strings and HashMaps, this doesn't give much insight into who is holding the memory, so I find the object reference tree that VisualVM provides to be more helpful.

Another useful bit of information the JVM provides is when the garbage collector runs. The simplest way to get this debug information is by adding the JVM arguments -XX:+PrintGC, -XX:+PrintGCDetails, and -XX:+PrintGCTimeStamps. This produces output on stdout, or in a log file you specify via -Xloggc, that contains a line for every garbage collection run that is performed and the time at which it happened. The output lines themselves are a bit difficult to parse, but with a tiny awk script and a spreadsheet you can graph the heap size over time. If they go back far enough in time, these graphs provide the best insight about which of the heap exhaustion cases above you are debugging, so they can direct where you spend your debugging effort next.
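As a sketch of that parsing step, the fields you want to graph can be pulled out with a small regex. The GC log format varies between JVM versions, so the pattern below (matching the typical Java 7 single-line form) is an assumption, not a specification:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: extract (timestamp, heap-before, heap-after, heap-total) from a
// -XX:+PrintGC / -XX:+PrintGCTimeStamps line so heap size can be graphed
// over time. The log format is JVM-version-dependent; this pattern is an
// assumption based on the typical Java 7 single-line form.
public class GcLogLine {
    private static final Pattern GC = Pattern.compile(
        "([0-9.]+): \\[.*?([0-9]+)K->([0-9]+)K\\(([0-9]+)K\\)");

    // Returns {wholeSeconds, usedBeforeK, usedAfterK, totalK}, or null if the
    // line is not a GC record.
    public static long[] parse(String line) {
        Matcher m = GC.matcher(line);
        if (!m.find()) return null;
        return new long[] {
            (long) Double.parseDouble(m.group(1)),
            Long.parseLong(m.group(2)),
            Long.parseLong(m.group(3)),
            Long.parseLong(m.group(4)),
        };
    }

    public static void main(String[] args) {
        long[] f = parse("12.345: [GC 262208K->10432K(1005568K), 0.0078360 secs]");
        System.out.println(f[1] + "K -> " + f[2] + "K of " + f[3] + "K");
    }
}
```

Feeding the extracted tuples into a spreadsheet (or gnuplot) gives the heap-over-time graph described above.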

Finally, since DelphixOS is based on illumos (and therefore originally based on OpenSolaris), DTrace is available to do arbitrarily complex runtime logging of various JVM events. The place to start is the documentation on the DTrace hotspot provider, which can tell you when methods are JIT compiled, when garbage collection runs happen, and even when methods are invoked (although that requires an additional JVM flag -XX:+ExtendedDTraceProbes because enabling it creates a big performance hit). DTrace and these probes are also available in FreeBSD and Mac OS X.

Native heap debugging

Debugging the native heap is much more platform-dependent. First, let's talk about where to start if your JVM is segfaulting. If this wasn't caused by a bug in your native libraries, it probably indicates that JNA or JNI called malloc(), it returned NULL, and then that address was dereferenced. You can use gdb's backtrace command (or mdb's ::status / ::stack commands if you're on a Solaris-based system) on the core dump to figure out what the failed allocation was (sometimes this stack will be from the thread that caused the error, sometimes not, as mentioned above).

DelphixOS also has the excellent tool libumem for tracking native memory allocations through malloc(), and mdb is available to analyze the libumem debugging information. To use libumem, we start the JVM with the environment variables LD_PRELOAD=/path/to/libumem.so and UMEM_DEBUG=audit set. This makes libumem intercept all calls to malloc() from the Java process and log some information about them, including the stack trace where malloc() was called — an invaluable piece of information, since native memory has no type information to help you understand what it's being used for. Notably, using UMEM_DEBUG=audit will also cause your program to crash when it detects that you've overwritten freed memory, done multiple frees of the same buffer, etc., so you should either thoroughly test your programs with libumem enabled (like we do) or add the noabort argument to the UMEM_DEBUG options to keep it from killing your process when your code makes mistakes (you can make it log memory errors to stdout by adding the verbose option too). Because of the additional debugging information, libumem can also increase the memory utilization of your processes when it's in audit mode, so try to use it with programs that don't do too many native allocations (the JVM is usually a fine candidate), or limit the amount of logging information it collects.

You can use the Solaris-only debugger mdb to read the libumem debug information in many ways. For example, you can check for leaks by performing a mark-and-sweep algorithm on your native heap. You can also use libumem's internal tracking to understand what kind of memory you've been allocating (by allocation size) and see if any allocation failures have occurred:

$ mdb -p $(pgrep java)
Loading modules: [ ld.so.1 libumem.so.1 libc.so.1 libuutil.so.1 libnvpair.so.1 libtopo.so.1 libavl.so.1 libproc.so.1 ]
> ::umastat
cache buf buf buf buf memory alloc alloc
name size in use in ptc total in use succeed fail
------------------------- ------ ------- ------- ------- ------- --------- -----
umem_magazine_1 8 849 - 1016 8K 14666 0
umem_magazine_3 16 880 - 1524 24K 22849 0
umem_magazine_7 32 10552 - 10668 336K 198515 0
...
------------------------- ------ ------- ------- ------- ------- --------- -----
Total [umem_internal] 107M 2413287 0
Total [umem_default] 169M 2651405275 0
...

On the lines that say Total [umem_internal] and Total [umem_default], you can see that for 169MB of native heap space, libumem's debugging information occupies 107MB of additional native heap space. A lot of that space is being used to store stack traces so that we can see where native allocations happened, which can give us a very clear idea of who the top native memory users are:

$ mdb -p $(pgrep java)
Loading modules: [ ld.so.1 libumem.so.1 libc.so.1 libuutil.so.1 libnvpair.so.1 libtopo.so.1 libavl.so.1 libproc.so.1 ]
> ::umausers
8188960 bytes for 51181 allocations with data size 160:
 libumem.so.1`umem_cache_alloc_debug+0x1fe
 ...
 zfs.74.65541.4795`callback_invoke+0x148
 zfs.74.65541.4795`callback_dispatch+0x13d
5570560 bytes for 136 allocations with data size 40960:
 libumem.so.1`umem_cache_alloc_debug+0x1fe
 ...
 libzip.so`zcalloc+0x26
 libzip.so`updatewindow+0x3a
 libzip.so`inflate+0x1499
 libzip.so`Java_java_util_zip_Inflater_inflateBytes+0xef
 0xf9ec3813
...

In the output above, you can obviously see that at least 8MB of memory is being used by a ZFS routine, and 5.5MB is being used by a Java unzipping utility.

Before we started collecting this information, we were in the dark about why any JVM process ran out of native heap space. We would look at the Java heap and hope that there would be many instances of one of the objects normally associated with native allocations, but usually there wouldn't be and we would be stumped, or we would spend several stressful hours or days trying to reproduce the issue in-house while running DTrace on all calls to malloc(). With the information that libumem collects, we can root cause native heap exhaustion much more easily.

Unfortunately, I don't know of an equivalent that's available on non-Solaris platforms. A port of libumem is available, but on other platforms there's no equivalent to mdb, so there's no way to read the debugging information. On Linux, the next-best thing is probably valgrind, but that couldn't be used in production because of the high overhead that it places on every allocation. If you know of anything better, share it in the comments!

Inter-heap exhaustion

This final type of heap exhaustion is orthogonal to the others, and worth pulling out into its own case because it's a bit more difficult to understand. (In fact, this was the problem that inspired this blog post.) Sometimes, objects that you allocate in Java have native memory associated with them — for instance, we already mentioned ByteBuffers from the java.nio framework. ByteBuffers occupy very little Java heap space (basically they're just a pointer to native memory), so the Java heap might be relatively empty and the garbage collector won't run even if there are a lot of unreferenced ByteBuffers lying around waiting to be collected. Unfortunately, ByteBuffers correspond to a comparatively large amount of native memory, so you can actually end up running out of native memory because the garbage collector only triggers based on Java heap usage.

There's actually a bug filed against the JVM describing this issue. Although the bug references Java NIO in particular, other native libraries can cause the issue as well and are not as easily resolved — our bug was caused by callbacks from native code back to Java methods (through JNA). The workarounds listed in the JVM bug (run System.gc() more often, reduce the size of the young generation to force more frequent garbage collections) are pretty weak because they're just making suggestions to the garbage collector that might be ignored. What we ended up doing was a brutal hack, but it amounted to wrapping the Java object so that it had a .close() method which would free the native memory associated with it deterministically.
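A stripped-down version of that wrapper idea might look like the following. NativeHandle and freeNative are hypothetical stand-ins, not our actual code; the Runnable stands in for whatever JNA/JNI call really releases the memory:

```java
// Sketch of the workaround described above: wrap an object that owns native
// memory in an AutoCloseable so the native side is freed deterministically
// rather than whenever the garbage collector gets around to it. The
// `freeNative` callback is a hypothetical stand-in for the real JNA/JNI
// release call (e.g. something like Native.free(pointer)).
public class NativeHandle implements AutoCloseable {
    private Runnable freeNative;

    public NativeHandle(Runnable freeNative) {
        this.freeNative = freeNative;
    }

    @Override
    public synchronized void close() {
        if (freeNative != null) { // idempotent: free exactly once
            freeNative.run();
            freeNative = null;
        }
    }
}
```

With Java 7's try-with-resources, callers then get deterministic release for free: `try (NativeHandle h = acquire()) { ... }` frees the native memory at the end of the block, no matter what the Java-heap garbage collector is doing.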

Postmortem Debugging

What debugging blog post would be complete without information on how to collect the information you'll need to debug issues after they cause a crash? This information is vital for fixing customer problems quickly; otherwise you're stuck waiting for the issue to be reproduced, which could take days or weeks and erodes customer faith in your ability to solve the issue.

The essentials you'll want to diagnose an OutOfMemoryError are:

  1. Collect Java heap dumps when OutOfMemoryErrors occur. To turn these on you should use -XX:+HeapDumpOnOutOfMemoryError and related options.
  2. To tell what happened when the JVM dies because of a fatal error, dump an error file using the -XX:ErrorFile=/path/to/errfile.log JVM option.
  3. Dump a core file when your JVM segfaults. This is highly OS-dependent, so look up the relevant documentation for your OS.
  4. If it's available, you should limit the native address space and/or native heap size of your JVM. This is highly OS-dependent, but many Unix systems provide rlimit (RLIMIT_AS or RLIMIT_DATA, in particular) and on Solaris-based systems you can use resource_controls(5). This is important on 64-bit JVMs because they are not artificially bounded to 4GB of address space like 32-bit JVMs are. Ideally whatever limiting mechanism you use should send a fatal signal to the JVM so that you get a core dump which can be debugged after the fact.
  5. If it's available, you should use libumem with UMEM_DEBUG=audit. This will make your native core dumps significantly more useful on systems that can read the information libumem generates.
  6. A minimal set of debug symbols should be loaded into your JVM process to make postmortem native debugging easier. On Solaris and other systems that use CTF symbols this barely requires more memory to be used and is very valuable; on Linux you may need to download a special JDK.
  7. Logs of when your JVM starts, stops, restarts, and crashes. Usually if you're running the JVM from upstart, startd, service, svcadm, or some other OS-level service management program, these events are probably logged for you, but otherwise you should print recognizable strings into your Java log saying the JVM is being booted. (Hopefully, you're doing something to ensure that if Java crashes it will be rebooted.)
  8. Garbage collection logging with the -XX:+PrintGC family of JVM arguments.
  9. Moderately detailed logs of what general activity your application was doing.

Pitfalls

Here are a few lessons we've learned about issues that can hide OutOfMemoryErrors until they're in production, instead of putting them front and center during development.

  1. The most important lesson: never use different garbage collection or memory settings in development than you use in production. More specifically:
    1. Don't use the -client JVM option in development and the -server option in production.
    2. Don't use a 32-bit JVM in development and a 64-bit JVM in production.
    3. Don't switch garbage collectors, garbage collection tuning, or heap maximums between development and production.
    4. Don't switch OSes or JVMs between development and production.
  2. When your development machine or a production system creates core files, Java heap dumps, or fatal error files, surface them so that you don't accidentally ignore them. When your user is a consumer, the best way to do this is to offer them the ability to submit a bug report; otherwise, make a utility that scans for these files automatically on developer / tester systems and makes it easy to turn them into bug reports when they are found on test systems.

Conclusion

As a member of the core infrastructure / escalation team at Delphix, I've worked on many OutOfMemoryError issues firsthand, and I'm excited to help raise the quality bar for Java applications elsewhere by sharing my experiences. I hope this post will help others using the JVM and other garbage-collected environments to get the information they need to identify, fix, and postmortem-debug any heap exhaustion issue they might encounter. It's always good to invest in infrastructure: Remember, if anything exciting happens, we've done it wrong.
