Excessive Memory Use

Chapter: Memory Management

Even though an average server might have 16 GB or more of memory, excessive memory usage in enterprise applications has become an increasingly frequent and critical problem. For one thing, a high degree of parallelism and a lack of developer awareness can quickly lead to memory shortages. At other times, there may be sufficient memory available, but with JVMs using gigabytes of memory, GC pauses can be unacceptably long.

Increasing memory is the obvious workaround for memory leaks or badly written software, but done thoughtlessly this can actually make things worse in the long run. After all, more memory means longer garbage-collection suspensions.

The following are the two most common causes of high memory usage.

Incorrect Cache Usage

It may seem counterintuitive, but excessive cache usage can easily lead to performance problems. Beyond the typical problems, such as misses and high turnover, an overused cache can quickly exhaust available memory. Proper cache sizing can fix the problem, assuming you can identify the cache as the root cause. A key ingredient of most cache implementations is the soft reference, which the garbage collector may release at any time at its discretion. It's this property that makes soft references popular in caches: the cache developer assumes, correctly, that the cached data will be released in the event of a potential memory shortage, thereby avoiding an out-of-memory error. (Weak references, in contrast, never prevent the garbage collection of an object and so would not be useful in a cache.)

If improperly configured, the cache will grow until the available memory is exhausted, which causes the JVM to trigger a GC, clearing all soft references and removing their objects. Memory usage drops back to its base level, only to start growing again. The symptoms are easily mistaken for an incorrectly configured young generation and often trigger a GC tuning exercise.
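The soft-reference behavior described above can be sketched in a few lines. This is a minimal illustration, not any particular framework's implementation; the class and method names are made up for the example.

```java
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

/**
 * A minimal soft-reference cache sketch. Values are held via
 * SoftReference, so the garbage collector is free to clear them under
 * memory pressure instead of throwing an OutOfMemoryError. Note that
 * the map entries themselves are never removed here, which is exactly
 * the kind of unbounded growth an improperly configured cache exhibits.
 */
public class SoftCache<K, V> {
    private final Map<K, SoftReference<V>> map = new HashMap<>();

    public void put(K key, V value) {
        map.put(key, new SoftReference<>(value));
    }

    public V get(K key) {
        SoftReference<V> ref = map.get(key);
        // ref.get() returns null once the collector has cleared the referent
        return (ref == null) ? null : ref.get();
    }
}
```

Because the collector clears all soft references in one sweep just before memory runs out, a cache like this produces exactly the sawtooth memory pattern described above: steady growth, a full GC, a drop to the base level, and growth again.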

Session Caching Antipattern

When an HTTP session is misused as a data cache, we refer to it as the session caching antipattern. Correctly implemented, the HTTP session is used to store user data or a state that needs to survive beyond a single HTTP request. This conversational state is found in most web applications dealing with nontrivial user interactions, but there are potential problems.

First, when an application has many users, a single web server may end up with many active sessions. It is therefore important that each session be kept small, to avoid exhausting available memory. Second, these sessions are not explicitly released by the application. Instead, web servers rely on a session timeout, which is often set quite high to improve perceived user convenience. The combination can easily lead to large memory demands and HTTP sessions several megabytes in size.

Session caches are convenient because it is easy for developers to add objects to the session without considering other solutions that might be more efficient. This is often done in fire-and-forget mode, meaning data is never removed. The session will be removed after the user has left the page anyway, or so we may think, so why bother? What we ignore is that session timeouts from 30 minutes to several hours are not unheard of.
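One way to avoid the fire-and-forget pattern is to store session data in a bounded structure rather than adding attributes without limit. The sketch below uses `LinkedHashMap` in access order as a small LRU map; the class name and the capacity are illustrative assumptions, not a standard servlet facility.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * A bounded LRU map that could be stored as a single session attribute
 * in place of an ever-growing set of fire-and-forget attributes. Per-
 * session memory stays capped no matter how long the session timeout is.
 */
public class BoundedSessionData<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public BoundedSessionData(int maxEntries) {
        super(16, 0.75f, true); // accessOrder = true gives LRU eviction order
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the least recently used entry once the cap is exceeded
        return size() > maxEntries;
    }
}
```

The point is not this particular data structure but the discipline: data placed in the session should have an explicit bound or an explicit removal point, rather than relying on the session timeout to clean up.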

One currently popular variant of this antipattern is the misuse of Hibernate sessions to manage conversational state. The Hibernate session is stored in the HTTP session to facilitate quick access to data. This means storing far more state than necessary, and even with only a handful of users, memory usage increases dramatically.

If we couple a big HTTP session with session replication, we get large object trees that are expensive to serialize and a lot of data to transfer to the other web servers. On top of running out of memory quickly, we have just added a severe performance problem.
