
The Top Java Memory Problems – Part 2

Some time back I planned to publish a series about Java memory problems. It took me longer than originally planned, but here is the second installment. In the first part I talked about the different causes of memory leaks, but memory leaks are far from the only issue in Java memory management.

Edit: A portion of this article was taken from our Java enterprise performance book, which in turn was based on an article that Alois Reitbauer and Mirko Novakovic wrote for the German “Java Magazin”. I forgot to mention this here and want to correct that. Credit where credit is due!

High Memory Usage

It may seem odd, but excessive memory usage is an increasingly frequent and critical problem in today’s enterprise applications. Although the average server often has 10, 20 or more GB of memory, a high degree of parallelism and a lack of awareness on the part of the developer often lead to memory shortages. Another issue is that while today’s JVMs can manage multiple gigabytes of memory, the side effect is very long GC pauses. Sometimes increasing the memory is seen as a workaround for memory leaks or badly written software; more often than not this makes things worse in the long run, not better. These are the most common causes of high memory usage.

HTTP Session as Cache

The session caching anti-pattern refers to the misuse of the HTTP session as a data cache. The HTTP session is used to store user data or state that needs to survive a single HTTP request. This is referred to as “conversational state” and is found in most web applications that deal with non-trivial user interactions. The HTTP session has several problems. First, as we can have many users, a single web server can have quite a lot of active sessions, so it is important to keep them small. The second problem is that sessions are not explicitly released by the application at a given point. Instead, web servers have a session timeout, which is often set quite high to increase user comfort. This alone can easily lead to quite large memory demands if we consider the number of parallel users. In reality, however, we often see HTTP sessions that are multiple megabytes in size.

These so-called session caches happen because it is easy and convenient for the developer to simply add objects to the session instead of thinking about other solutions, such as a cache. To make matters worse, this is often done in a fire-and-forget mode, meaning data is never removed. After all, why should it be? The session will be removed after the user has left the page anyway, or so we may think. What is often ignored is that session timeouts of 30 minutes to several hours are not unheard of.

A practical example is the storage of data that is displayed in HTML selection fields (such as country lists). This semi-static data is often multiple kilobytes in size and, if kept in the session, is held per user in the heap. It is better to store such data – which moreover is not user-specific – in one central cache, as sketched below. Another example is the misuse of the Hibernate session to manage conversational state. The Hibernate session is stored in the HTTP session in order to facilitate quick access to data. This means storing far more state than necessary, and with only a couple of users, memory usage immediately increases greatly. In modern Ajax applications, it may also be possible to shift the conversational state to the client. In the ideal case, this leads to a stateless or state-poor server application that scales much better.
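
To illustrate, here is a minimal sketch of such a shared cache for semi-static data (CountryCache and loadCountryList are made-up names; a real application would load the list from a database or resource bundle). The point is that all users share one copy instead of each session holding its own:

import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public final class CountryCache {
    // One application-wide copy instead of one copy per HTTP session
    private static volatile List<String> countries;

    public static List<String> getCountries() {
        List<String> result = countries;
        if (result == null) {
            synchronized (CountryCache.class) {
                if (countries == null) {
                    countries = Collections.unmodifiableList(loadCountryList());
                }
                result = countries;
            }
        }
        return result;
    }

    // Hypothetical loader; stands in for a database query or resource bundle
    private static List<String> loadCountryList() {
        return Arrays.asList("Austria", "Germany", "United States");
    }
}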

Another side effect of big HTTP sessions is that session replication becomes a real problem.

Wrong Cache Usage

Caches are used to increase performance and scalability by loading data only once. However, excessive use of caches can quickly lead to Java performance problems. In addition to the typical problems of a cache, such as misses and high turnaround, a cache can also lead to high memory usage and, even worse, to excessive GC activity. Mostly these problems are simply due to an excessively large cache. Sometimes, however, the problem lies deeper. The key word here is the so-called soft reference, a special form of object reference. Soft references can be released at any time at the discretion of the garbage collector; in practice, however, they are released only to avoid an out-of-memory error. In this respect they differ greatly from weak references, which never prevent the garbage collection of an object. Soft references are very popular in cache implementations for precisely this reason: the cache developer assumes, correctly, that the cache data should be released in the event of a memory shortage.

If the cache is incorrectly configured, however, it will grow quickly and indefinitely until memory is full. When a GC is initiated, all the soft references in the cache are cleared and their objects garbage collected. The memory usage drops back to the base level, only to start growing again. This phenomenon can easily be mistaken for an incorrectly configured young generation: it looks as if objects get tenured too early, only to be collected by the next major GC. This kind of misdiagnosis often leads to a GC tuning exercise that cannot succeed.
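
A minimal sketch of such a soft-reference cache illustrates the effect (SoftCache is a made-up name): the GC can clear the values under memory pressure, but nothing ever removes entries from the map itself, so an unbounded cache keeps growing back after every collection, exactly the sawtooth pattern described above:

import java.lang.ref.SoftReference;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SoftCache<K, V> {
    // The SoftReference wrappers stay in the map even after the GC
    // has cleared the values they point to
    private final Map<K, SoftReference<V>> map = new ConcurrentHashMap<>();

    public void put(K key, V value) {
        // The GC may clear this reference at any time, but in practice
        // it does so only when the heap is nearly exhausted
        map.put(key, new SoftReference<>(value));
    }

    public V get(K key) {
        SoftReference<V> ref = map.get(key);
        return (ref == null) ? null : ref.get(); // null if already collected
    }
}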

Only proper monitoring of the cache metrics or a heap dump can help identify the root cause of the problem.

Churn Rate and High Transactional Memory Usage

Java allows us to allocate a large number of objects very quickly. The generational GC is designed for a large number of very short-lived objects, but there is a limit to everything. If transactional memory usage is too high, it can quickly lead to performance or even stability problems. The difficulty here is that this type of problem comes to light only during a load test and can be overlooked very easily during development.

If too many objects are created in too short a time, this naturally leads to an increased number of GCs in the young generation. Young-generation GCs are only cheap if most objects die! If a lot of objects survive the GC, it is actually more expensive than an old-generation GC would be under similar circumstances. Thus, the high memory needs of single transactions might not be a problem in a functional test but can quickly lead to GC thrashing under load. If the load becomes even higher, these transactional objects will be promoted to the old generation as the young generation becomes too small. One could approach this from that angle and increase the size of the young generation; in many cases this will simply push the problem a little further out and ultimately lead to even longer GC pauses (due to more objects being alive at the time of the GC).
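
A simplified illustration of such churn (a sketch, not taken from a real case; lineFor is a made-up helper): repeated string concatenation allocates a new String, including its backing array, on every iteration, while a StringBuilder reuses one geometrically growing buffer:

public class ChurnDemo {
    public static void main(String[] args) {
        // Churn-heavy: each += allocates a new String plus its backing array,
        // producing thousands of short-lived objects in one "transaction"
        String report = "";
        for (int i = 0; i < 10000; i++) {
            report += lineFor(i);
        }

        // Far less garbage: one buffer that grows geometrically
        StringBuilder sb = new StringBuilder(64 * 1024);
        for (int i = 0; i < 10000; i++) {
            sb.append(lineFor(i));
        }
        String report2 = sb.toString();

        System.out.println(report.length() + " / " + report2.length());
    }

    // Hypothetical helper standing in for real per-transaction data
    private static String lineFor(int i) {
        return "line " + i + "\n";
    }
}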

The worst of all possible scenarios, which we nevertheless see often, is an out-of-memory error due to high transactional memory demand. If memory is already tight, a higher transaction load might simply max out the available heap. The tricky part is that once the OutOfMemoryError hits, the transactions that wanted to allocate objects but couldn’t are aborted. Subsequently, a lot of memory is released and garbage collected. In other words, the very cause of the out-of-memory condition is hidden by the OutOfMemoryError itself! As most memory tools look at the Java memory only every couple of seconds, they might never show 100% memory usage at any point in time.

Since Java 6 it is possible to trigger a heap dump in the event of an OutOfMemoryError, which will show the root cause quite nicely in such a case. If there is no OutOfMemoryError, one can use trending or histogram memory dumps (check out jmap or Dynatrace) to identify those classes whose object counts fluctuate the most. Those are usually the classes that are allocated and garbage collected a lot. The last resort is to do a full-scale allocation analysis.
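
For the HotSpot JVM this looks roughly as follows (the dump path and application name are placeholders):

# Write a heap dump automatically when an OutOfMemoryError occurs
java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/dumps MyApp

# Class histogram of a running JVM; repeat and diff the output
# to spot classes whose instance counts fluctuate the most
jmap -histo <pid>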

Large Temporary Objects

In extreme cases, temporary objects can also lead to an out-of-memory error or to increased GC activity. This happens, for example, when very large documents (XML, PDF…) have to be read and processed. In one specific case, an application was temporarily unavailable for a few minutes due to such a problem. The cause was quickly found to be memory bottlenecks and a garbage collector operating at its limit. A detailed analysis made it possible to pin down the cause to the creation of a PDF document:

byte[] tmpData = new byte[1024];
int offs = 0;
do {
    int readLen = bis.read(tmpData, offs, tmpData.length - offs);
    if (readLen == -1)
        break;
    offs += readLen;
    if (offs == tmpData.length) {
        // Buffer full: grow by only 1 KB and copy the entire content so far
        byte[] newres = new byte[tmpData.length + 1024];
        System.arraycopy(tmpData, 0, newres, 0, tmpData.length);
        tmpData = newres;
    }
} while (true);

To the seasoned developer it will be quite obvious that processing multiple megabytes with such code leads to bad performance due to a lot of unnecessary allocations and ever-growing copy operations: the buffer grows by only one kilobyte at a time, and each growth step copies everything read so far. However, such a problem is often not noticed during testing, but only once a certain level of concurrency is reached, where the number of GCs and/or the amount of temporary memory needed becomes a problem.

When working with large documents, it is very important to optimize the processing logic and prevent the whole document from being held in memory.
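
Where a document really must be buffered in memory, even a plain ByteArrayOutputStream avoids the quadratic copying shown above, because its internal buffer doubles when full. A minimal sketch (StreamUtil and readFully are made-up names):

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class StreamUtil {
    // Reads a whole stream into memory with amortized linear cost:
    // ByteArrayOutputStream doubles its internal buffer when it is full
    static byte[] readFully(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream(8 * 1024);
        byte[] buf = new byte[8 * 1024];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
        return out.toByteArray();
    }
}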

Class Loader Problems

Sometimes I think that the class loader is to Java what DLL hell was to Windows. When there are memory problems, one thinks primarily of normal objects that occupy heap space. In addition to normal objects, however, classes and constant values are also kept in memory. In modern enterprise applications, the memory requirements for loaded classes can quickly amount to several hundred MB and thus often contribute to memory problems. In the HotSpot JVM, classes are located in the so-called permanent generation, or PermGen. It is a separate memory area, and its size must be configured separately. If this area is full, no more classes can be loaded and an out-of-memory error occurs in the PermGen. The other JVMs do not have a permanent generation, but that does not solve the problem; it is merely detected later. Class loader problems are some of the most difficult problems to detect: most developers never have to deal with this topic, and tool support is poorest in this area. I want to show some of the most common memory-related class loader problems:

Large Classes

It is important not to increase the size of classes unnecessarily. This is especially the case when classes contain a great many string constants, such as in GUI applications where all display strings are held as constants. This is basically a good design approach; however, it should not be forgotten that these constants also require space in memory. On top of that, in the HotSpot JVM string constants are part of the PermGen, which can then quickly become too small. In one concrete case, the application had a separate class for every language it supported, and each class contained every single text constant. Each of these classes was already too large on its own. Due to a coding error that slipped into a minor release, all languages – meaning all of these classes – were loaded into memory, and the JVM crashed during startup no matter how much memory it was given.

Same Class in Memory Multiple Times

Application servers and OSGi containers in particular tend to have a problem with too many loaded classes and the resulting memory usage. Application servers make it possible to load different applications, or parts of applications, in isolation from one another. One “feature” is that multiple versions of the same class can be loaded in order to run different applications inside the same JVM. Due to incorrect configuration, this can quickly double or triple the amount of memory needed for classes. One of our customers had to run his JVMs with a PermGen of 700 MB – a real problem, since he ran them on 32-bit Windows, where the maximum overall JVM size is 1500 MB. In this case, an SOA application was loaded in a JBoss application server. Each service was loaded into a separate class loader without sharing the common classes. All common classes – about 90% of them – were loaded up to 20 times, and thus regularly led to out-of-memory errors in the PermGen area. The solution here was strikingly simple: proper configuration of the class-loading behavior in JBoss.

The interesting point here is that this was not just a memory problem, but a major performance problem as well. The different applications used the same classes, but as they came from different class loaders, the server had to treat them as different. The consequence was that a call from one component to the next, inside the same JVM, had to serialize and deserialize all argument objects.

This problem is best diagnosed with a heap dump or trending dump (jmap -histo). If a class is loaded multiple times, its instances are also counted multiple times. Thus, if the “same” class appears multiple times with a different number of instances, we have identified such a problem. The responsible class loader can be determined in a heap dump through simple reference tracking. We can also take a look at the variables of the class loader and, in most cases, will find a reference to the application module and the JAR file. This makes it possible to determine whether the same JAR file is being loaded multiple times by different application modules.
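
For a quick runtime check, it can also help to print which loader and which JAR a suspect class actually came from. A small diagnostic sketch (someObject stands for any instance of the class in question):

// Two "identical" classes from different loaders are distinct types to the JVM
Class<?> clazz = someObject.getClass();
System.out.println(clazz.getName() + " loaded by " + clazz.getClassLoader());

// Which JAR did the class come from? (getCodeSource can return null for JDK classes)
System.out.println(clazz.getProtectionDomain().getCodeSource().getLocation());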

Same Class Loaded Again and Again

A rare phenomenon, but a very big problem when it occurs, is the repeated loading of the same class, which does not appear to be present twice in memory. What many forget is that classes are garbage collected too, in all three big JVMs. The HotSpot JVM does this only during a major GC, whereas both the IBM JVM and JRockit can do so during every GC. Therefore, if a class is used for only a short time, it can be removed from memory again immediately. Loading a class is not exactly cheap and usually not optimized for concurrency: if the same class is loaded by multiple threads, Java synchronizes those threads. In one real-world case, the classes of a scripting framework (BeanShell) were loaded and garbage collected repeatedly because they were used for only a short time and the system was under load. Since this took place in multiple threads, the class loader was quickly identified as the bottleneck once analyzed under load. However, development had taken place exclusively on the HotSpot JVM, so the problem was not discovered until deployment in production.

In the case of the HotSpot JVM, this specific problem will only occur under load and memory pressure, as it requires a major GC, whereas in the IBM JVM or JRockit it can already happen under moderate load. The class might not even survive the first garbage collection!

Incorrect Implementation of equals and hashCode

The relationship between the hashCode method and memory problems is not obvious at first glance. However, it becomes clearer when we consider where the hashCode method is of high importance.

The hashCode and equals methods are used within hash maps to insert and find objects based on their key. However, if the implementation of these methods is faulty, existing entries are not found and new ones keep being added.
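
A minimal sketch of how this plays out (UserKey and LeakDemo are made-up names): the key class overrides equals but not hashCode, so equal keys usually land in different hash buckets and the map only ever grows:

import java.util.HashMap;
import java.util.Map;

final class UserKey {
    private final String id;
    UserKey(String id) { this.id = id; }

    @Override
    public boolean equals(Object o) {
        return o instanceof UserKey && ((UserKey) o).id.equals(id);
    }
    // BUG: hashCode() is not overridden, so two equal keys almost always
    // get different identity hash codes and are never matched by the map
}

public class LeakDemo {
    public static void main(String[] args) {
        Map<UserKey, String> cache = new HashMap<>();
        for (int i = 0; i < 3; i++) {
            cache.put(new UserKey("user-42"), "session data");
        }
        // Usually prints 3, not 1 - in a long-running server the map grows forever
        System.out.println(cache.size());
    }
}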

While the collection responsible for the memory problem can be identified very quickly, it may be difficult to determine why the problem occurs. We had this case with several customers. One of them had to restart his server every couple of hours, even though it was configured with a 40 GB heap! After fixing the problem, the application ran quite happily with 800 MB.

A heap dump – even if complete information on the objects is available – rarely helps in this case; one would simply have to analyze too many objects to identify the problem. The best option is therefore to test equals and hashCode implementations proactively in order to avoid such problems. There are a few free frameworks (such as https://github.com/jqno/equalsverifier) that ensure that equals and hashCode conform to their contract.
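
With EqualsVerifier, such a check is a one-line unit test. A sketch using JUnit and the hypothetical UserKey class from above; this test would fail until hashCode is implemented consistently with equals:

import nl.jqno.equalsverifier.EqualsVerifier;
import org.junit.Test;

public class UserKeyTest {
    @Test
    public void equalsAndHashCodeHonorTheContract() {
        // Fails if equals/hashCode are inconsistent,
        // e.g. when equals is overridden but hashCode is not
        EqualsVerifier.forClass(UserKey.class).verify();
    }
}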

Conclusion

High memory usage is still one of the most frequent problems we see, and it often has performance implications. However, most such problems can be identified rather quickly with today’s tools. In the next installment of this series, I will talk about how to tune your GC for optimal performance, provided you do not suffer from memory leaks or the problems mentioned in this post.

You might also want to read my other memory blogs: