Excessive Memory Use

Chapter: Memory Management

Even though an average server might have 16 GB or more of memory, excessive memory usage in enterprise applications has become an increasingly frequent and critical problem. For one thing, a high degree of parallelism and a lack of developer awareness can quickly lead to memory shortages. At other times, there may be sufficient memory available, but with JVMs using gigabytes of memory, GC pauses can be unacceptably long.

Increasing memory is the obvious workaround for memory leaks or badly written software, but done thoughtlessly this can actually make things worse in the long run. After all, more memory means longer garbage-collection suspensions.

The following are the two most common causes of high memory usage in Java applications.

Incorrect Cache Usage

It may seem counterintuitive, but excessive cache usage can easily lead to performance problems. In addition to the typical problems, such as misses and high turnover, an overused cache can quickly exhaust available memory. Proper cache sizing can fix the problem, assuming you can identify the cache as the root cause.

The key problem with such caches is their use of soft references, which have the advantage that the garbage collector may release them at any time. It is this property that makes them popular in cache implementations. The cache developer assumes, correctly, that the cached data is released in the event of a potential memory shortage, which in essence avoids an out-of-memory error. (Weak references, in contrast, never prevent the garbage collection of an object and so would not be useful in a cache.)
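A soft-reference cache of the kind described above can be sketched as follows. The class and method names are illustrative, not taken from any particular library:

```java
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of a soft-reference cache: values are held only
// softly, so the garbage collector may clear them whenever memory
// runs low, instead of throwing an OutOfMemoryError.
class SoftCache<K, V> {
    private final Map<K, SoftReference<V>> map = new HashMap<>();

    void put(K key, V value) {
        map.put(key, new SoftReference<>(value));
    }

    V get(K key) {
        SoftReference<V> ref = map.get(key);
        V value = (ref == null) ? null : ref.get();
        if (value == null) {
            map.remove(key); // entry was cleared by the GC, or never existed
        }
        return value;
    }
}
```

Note that nothing here bounds the size of the map itself, which is exactly why such a cache grows until the collector intervenes.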

If improperly configured, the cache will grow until the available memory is exhausted, which causes the JVM to trigger a GC, clearing all soft references and removing their objects. Memory usage drops back to its base level, only to start growing again. The symptoms are easily mistaken for an incorrectly configured young generation and often trigger a GC tuning exercise.
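One way to size a cache explicitly is to bound it to a fixed number of entries with LRU eviction, so memory use stays flat instead of sawtoothing between growth and GC-triggered collapse. A minimal sketch, using the JDK's `LinkedHashMap` (the capacity is illustrative):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A size-bounded LRU cache: once maxEntries is exceeded, the least
// recently used entry is evicted on each insert.
class BoundedCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    BoundedCache(int maxEntries) {
        super(16, 0.75f, true); // accessOrder = true gives LRU ordering
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries; // evict the least recently used entry
    }
}
```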

Session Caching Antipattern

When an HTTP session is misused as a data cache, we refer to it as the session caching antipattern. Correctly implemented, the HTTP session is used to store user data or a state that needs to survive beyond a single HTTP request. This conversational state is found in most web applications dealing with nontrivial user interactions, but there are potential problems.

First, when an application has many users, a single web server may end up with many active sessions. Most obviously, each session must be kept small to avoid exhausting available memory. Second, these sessions are not explicitly released by the application! Instead, web servers rely on a session timeout, which is often set quite high to improve the users' perceived convenience. This can easily lead to large memory demands and HTTP sessions that are multiple megabytes in size.

Session caches are convenient because it is easy for developers to add objects to the session without considering other solutions that might be more efficient. This is often done in fire-and-forget mode, meaning data is never removed. The session will be removed after the user has left the page anyway, or so we may think, so why bother? What we ignore is that session timeouts of 30 minutes to several hours are not unheard of.
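The fire-and-forget pattern and its disciplined alternative can be sketched as follows. A plain `Map` stands in for the HTTP session, and the attribute name is hypothetical; with a real `javax.servlet` `HttpSession` the calls would be `setAttribute` and `removeAttribute`:

```java
import java.util.HashMap;
import java.util.Map;

// Simulation of the session caching antipattern; a Map stands in
// for an HTTP session.
class SessionCleanupDemo {
    static Map<String, Object> session = new HashMap<>();

    // Fire-and-forget: the data stays in the session until the
    // session times out, possibly hours later.
    static void leakyHandler() {
        session.put("searchResult", new int[100_000]);
    }

    // Disciplined alternative: remove the data as soon as the
    // workflow that needed it is finished.
    static void cleanHandler() {
        session.put("searchResult", new int[100_000]);
        // ... render the result page ...
        session.remove("searchResult");
    }
}
```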

One currently popular version of this antipattern is the misuse of Hibernate sessions to manage the conversational state. The Hibernate session is stored in the HTTP session to facilitate quick access to data. This means storing far more state than necessary, and even with only a handful of users, memory usage increases dramatically.

If we couple a big HTTP session with session replication, we get large object trees that are expensive to serialize and a lot of data to be transferred to the other web servers. On top of quickly running out of memory, we have added a severe performance problem.
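The serialization cost can be made visible with plain Java serialization, which many session-replication mechanisms build on. The `SessionState` class below is a hypothetical stand-in for a bloated session attribute:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

// Measures how many bytes a session attribute would occupy on the
// wire when replicated via standard Java serialization.
class SessionSizeDemo {
    static class SessionState implements Serializable {
        List<long[]> cachedRows = new ArrayList<>();
    }

    static int serializedSize(Serializable obj) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(obj);
            }
            return bytes.size();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```

A session that caches, say, 100 query-result rows of 1,000 longs each already serializes to roughly 800 KB, and every replication cycle pays that cost again.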
