As a performance architect, I get called into various production performance issues. One of our recent issues involved a Tomcat application server running on an AWS EC2 instance inside a VPC that is connected to an on-premise DNS server. The service in question calls another microservice. When the service went live, we noticed high response times attributed to a downstream microservice, yet the downstream service's logs showed no performance issue at all.

In this blog, I’ll walk through the steps our tech arch Neeraj Verma took to analyze this issue in our production environment, which tools were used, some background information on DNS lookups, and how the problem was resolved. I hope you find it useful!

API High Response Time Analysis in Production Environment

Performance engineering is the science of discovering problem areas in applications under varying but realistic load conditions. It is not always easy to simulate real traffic and find every problem before going live. Therefore, it is advisable to figure out how to analyze performance problems not only in test, but also in a real production environment. Having the right tools installed in production allows us to analyze issues and find root causes that are hard to simulate in testing.

The following diagram visualizes our service layout. Our eCommerce APIs call the ShoppingHistory API using a customerid. The ShoppingHistory API in turn calls DynamoDB and the Customer API to serve requests from eCommerce.

Architectural Overview: Transactional Flow when the eCommerce Frontend calls the OrderAPI, and how it makes its way through different service layers deployed on AWS.

In order to monitor individual service health, we log entry and exit calls of each service invocation in a custom telemetry system, and each team uses Kibana/Grafana dashboards to measure health. Through the ShoppingHistory API dashboard, the team could see that the time was being spent in the Customer Service, even though the Customer Service dashboard did not show any issue at all. This is when the classical blame game would normally start. In our case we tasked the ShoppingHistory API team with finding the actual root cause, and here is what we did.
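To make the entry/exit logging idea concrete, here is a minimal, hypothetical sketch of it as a servlet filter. Our real telemetry system is custom; the class name, log format, and service name below are illustrative only.

```java
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

// Hypothetical sketch of entry/exit telemetry around each service invocation.
// In production, the timing record would be shipped to the store backing the
// Kibana/Grafana dashboards instead of being printed to stdout.
public class TelemetryFilter implements Filter {

    @Override
    public void init(FilterConfig filterConfig) { }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        long start = System.nanoTime(); // entry timestamp
        try {
            chain.doFilter(request, response); // downstream processing
        } finally {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000; // exit timestamp
            System.out.println("service=ShoppingHistoryAPI elapsedMs=" + elapsedMs);
        }
    }

    @Override
    public void destroy() { }
}
```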

Application Monitoring with Dynatrace AppMon

Our tool of choice was Dynatrace AppMon, which we already use for live performance monitoring of all our services in production. Let me walk you through how we identified the high response time and its root cause in Dynatrace AppMon. In case you want to try it on your own, I suggest the following:

  1. Get your own Dynatrace Personal License
  2. Watch the Dynatrace YouTube Tutorials
  3. Read up on Java Memory Management

Step #1: Basic Transaction

Once Dynatrace AppMon collects data, you can decide whether to analyze it in the Dynatrace AppMon Diagnostics Client or go directly to the Dynatrace AppMon Web interface. With the recent improvements in Dynatrace AppMon 2017 May (v7), the Web interface is even more convenient for analyzing PurePaths, which is why we went there. In the Web interface we often start by looking at the Transaction Flow of our system. The Transaction Flow is generated dynamically by Dynatrace thanks to its capability to trace every single transaction end-to-end, enabled through its PurePath technology.

Looking at the Transaction Flow, we could immediately see that most of the time (91%) was actually spent in the ShoppingHistory JVM instead of the Customer Service, which until that point we had assumed to be the problem, as indicated by our logging. Fortunately, Dynatrace AppMon told us otherwise!

The Dynatrace AppMon PurePath highlighted our Shopping History JVM as the response time hotspot.

Step #2: Drill Down into the PurePath (show all nodes)

The detailed PurePath shows where most of the time is spent, down to the individual method. In our case we could spot that resolving the address of the backend microservice took about 2s. In the screenshot below you can see that when the frontend service tries to call the backend service, it must first open a connection (the HttpClient.connect method), which itself has to resolve the passed endpoint address. This method then calls the internal Java classes that do the actual DNS name resolution.
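If you want to confirm outside of Dynatrace that name resolution itself is the slow step, a small standalone timing check like the sketch below can be run on the affected instance. The hostname is a placeholder, not our real endpoint; pass the actual backend hostname as an argument.

```java
import java.net.InetAddress;

// Standalone check: time the raw DNS lookup that HttpClient.connect triggers internally.
// "customer-service.internal.example.com" is a placeholder hostname for illustration.
public class DnsLookupTimer {
    public static void main(String[] args) throws Exception {
        String host = args.length > 0 ? args[0] : "customer-service.internal.example.com";
        long start = System.nanoTime();
        InetAddress[] addresses = InetAddress.getAllByName(host); // performs the DNS lookup
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("Resolved " + host + " in " + elapsedMs + " ms");
        for (InetAddress address : addresses) {
            System.out.println("  " + address.getHostAddress());
        }
    }
}
```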

The PurePath tree shows the complete transaction flow, the executed methods, and how long they took to execute. It is easy to spot the 2s execution time of the internal Java DNS name resolution method.
The high-level performance overview that you get for each PurePath also gives a good indication of which component is the current hotspot, clearly pointing to the same problem: the time spent making web request calls.

Solution

Based on the information collected from our production environment, we searched for a solution online and found an explanation for a similar issue on an IBM blog. This is where we found the answer to our issue:

The problem could be lookup issues between IPv6 and IPv4. If the Domain Name System (DNS) server is not configured to handle IPv6 queries, the application may have to wait for the IPv6 query to time out. By default, if the operating system supports both IPv4 and IPv6, java.net.InetAddress resolutions requiring the network are carried out over IPv6 first. However, if the name service does not support IPv6, a performance issue may be observed, because the initial IPv6 query must run until its timeout before a successful IPv4 query can be made.

To reverse the default behavior and prefer IPv4 over IPv6, add the following generic JVM argument:

  1. **-Djava.net.preferIPv4Stack=true**

We added -Djava.net.preferIPv4Stack=true to the JVM parameters and restarted the JVM.
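To verify that the flag actually took effect after the restart (for Tomcat, JVM arguments are typically passed via CATALINA_OPTS in setenv.sh; your setup may differ), a small sanity check along these lines can help. The class name and hostname below are illustrative assumptions.

```java
import java.net.Inet4Address;
import java.net.InetAddress;

// Sanity check, e.g. run as: java -Djava.net.preferIPv4Stack=true IpStackCheck <hostname>
// Confirms the flag is visible to the JVM and shows which address families a lookup returns.
public class IpStackCheck {
    public static void main(String[] args) throws Exception {
        System.out.println("java.net.preferIPv4Stack = "
                + System.getProperty("java.net.preferIPv4Stack"));
        String host = args.length > 0 ? args[0] : "example.com"; // placeholder hostname
        for (InetAddress address : InetAddress.getAllByName(host)) {
            String family = (address instanceof Inet4Address) ? "IPv4" : "IPv6";
            System.out.println(family + ": " + address.getHostAddress());
        }
    }
}
```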

Now the transaction executed much faster, as shown in Dynatrace:

PurePath overview shows us a total execution time of 128ms. Most of the time now is spent in other areas – such as database calls – but no longer in resolving DNS addresses.
The PurePath Top Contributors tab makes this even clearer. HTTP calls now finish in milliseconds.
The API Breakdown also clearly shows that we solved this one problem. Now we can focus on the other hotspots if we want to improve performance further.

This story showed us how important it is to monitor all your applications and services in all the different environments they run in. We will see a larger push towards hybrid cloud, which means we have to find a way to detect these problems there as well. Dynatrace natively supports all these technologies and, thanks to its analytics capabilities, makes it easy to find and fix them.