Application performance problems can be quite challenging to resolve and even more difficult to predict. In my role as Dynatrace Guardian Consultant — leading the implementation of APM best practices with our customers — I’ve experienced quite a few “head-scratcher” situations. In this blog, I’ll relate a recent problem that had most of the IT department scrambling early one Monday morning.

In the Guardian’s shoes

My customer is a global financial services company, providing hundreds of online services to consumers worldwide. One of them helps businesses research auto history records and represents 5% of the company’s activities. Let’s call this application CarFacts. (While the names have been changed, the story you are about to read is a reasonably accurate retelling.) Dealers in particular leverage information from CarFacts to uphold their professional reputation and to finalize transactions, relying on their subscription to the company’s website for clear and accurate statistics on vehicles.

When browsing the web application, end-users submit diverse parameters in order to generate a report on a particular set of vehicles. Each month, the customers are billed for the total amount of reports generated. It was no wonder, then, that when the support team received a dozen calls from customers reporting a painfully slow or even unavailable system, the fire alarm was triggered.

Infrastructure Performance Management vs. Application Performance Management

While most IT teams have domain-specific infrastructure performance management (IPM) tools, the global monitoring team to which I belong is responsible for comprehensive end-user-driven application performance analysis using two complementary Dynatrace Application Performance Management (APM) solutions: Data Center Real User Monitoring (DC RUM) and Application Monitoring (AppMon). DC RUM is a passive network probe-based APM solution, with its main use cases being end-user experience monitoring and fault domain isolation, while AppMon, an agent-based APM solution, provides deep-dive root cause analysis for application performance and availability problems.

Figure 1: DC RUM monitors user experience and provides fault domain isolation, while AppMon traces transactions for root cause analysis.

My customer’s IT department was in triage mode that day. Can you imagine being back at the office on Monday morning, after a peaceful and well-deserved weekend, and walking straight into a tornado the moment you step through the door?

No morning coffee break

Without wasting a second, the support team logs the complaints and raises a priority-1 incident for CarFacts. The ticket is picked up by the incident management team, which hurries to assemble a response team. Each IT team is represented: application support, global monitoring, development, database, system and network. With an incident management representative and a manager added in case escalations are needed, we end up with a task force of at least eight people. A conference bridge is set up and we all receive the symptoms reported by the end-users.

Here are the facts: at around 8:50 a.m. that Monday morning, the average response time for logins jumped from 5 seconds to 35 seconds. Help desk phones began ringing shortly after DC RUM’s response time alert; this was no fire drill. The CarFacts application’s health index had dropped to 50%, and – the business manager pointed out the metric that she cared about most – 86% of the almost 900 users were impacted by the problem. That meant almost no reports were being generated, and it meant the company was losing money…

Here are a few DC RUM dashboard snips, with private information masked.

Figure 2: DC RUM Application Health Status reporting poor performance for CarFacts on Monday morning.

DC RUM isolates the data center – specifically, the web tier – as the primary reason for the slowness, while showing that there are no network issues and no contributing problems at the application server or database tier. The end-user network is also performing well. The network team and the database administrator can relax, leave the call and have that coffee break…

Drilling down a level, DC RUM identifies the impacted pages: the landing page and the sign-in page, with page load times climbing to 30 seconds or more. Other pages are relatively unaffected, although DC RUM shows less load than usual.

We also noted some HTTP 500 errors for these same pages.

Figure 3: A breakdown of HTTP 500 server errors by operation (URL).

So that’s a brief view into DC RUM’s “outside-in” perspective of the problem. As a network probe-based monitoring solution, DC RUM gives us an enterprise-wide view of our applications as well as key parts of the supporting infrastructure, such as load balancing. It provides an understanding of internal and external user experience and of the business impact of degraded performance, and it automates fault domain isolation.

Fault Domain Isolation vs. Root Cause Analysis

The excitement within the team reaches the next level: DC RUM alerted us and identified the fault domain, the landing and sign-in pages on the web servers. We are getting closer to the root cause, and in just a few minutes! Selecting the impacted landing page operation (URL), I click on the Application Monitoring PurePath link on the DC RUM dashboard. Drilling down from one Dynatrace solution to another is as simple as a click and should take us directly to the answer to our problem.

Figure 4: The faulty operation is identified and a drill-down link to Application Monitoring allows RCA.

A few seconds later, my AppMon client pops up and displays an application-centric view of our slow transactions. I close the DC RUM interface, bearing in mind I may come back to it later to further document the incident and to analyze the business impact.

A new screen is now displayed on my monitor, showing a transaction flow and a response time hotspots analysis view based on AppMon PurePaths, or end-to-end transaction traces. Those PurePaths are generated by the agents injected into the web and application servers of CarFacts.

Figure 5: DC RUM / Application Monitoring integration.

Gathering the clues, keeping the relevant facts

My work to understand the issue has just started when the system engineer mentions on the call that the four web servers are under stress with high CPU usage. His analysis of the IPM tool data shows that this behavior also started at around 9:00. The manager asks if there is anything we can do, as an immediate action, to solve it. The business and the end-users are still complaining, and the pressure on the team increases. The manager calls for a progressive server reboot (as the four servers are load balanced, the service, even performing poorly, would not be interrupted). The system engineers restart the servers between 10:00 and 10:20.

Figure 6: Application Monitoring transaction flow.

In the meantime, I started my root cause investigation with the Transaction Flow, a diagram which highlighted that all of the web servers were impaired by low resources (the lower arc on each node icon, below), leading to CPU utilization alerts. The errors spotted by DC RUM were also reported, as a red top-right arc on each impacted tier.

I then took a look at the Response Time hotspots for those landing page transactions, where the average transaction time was 21.3 seconds. This highlighted the CarFacts code, which was spending 63% of its time on Synchronization. In other words, the company’s code was waiting for other activities to complete before continuing its execution. I drilled into the Method Hotspots dashlet, which showed the Enter(Object) method, called from the UserTokenCacheManager and AuthenticationService classes, as the root cause.

Figure 7: Method Hotspots dashlet showing method breakdown by synchronization time.
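
To make the contention pattern concrete, here is a minimal sketch of the kind of coarse-grained locking that produces this signature. The customer’s application appears to be .NET (AppMon’s Enter(Object) corresponds to Monitor.Enter), so the Java analogue below uses synchronized to show the same idea; the class and method names are hypothetical, not the customer’s actual code.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration: a token cache guarded by one global lock.
public class UserTokenCache {
    private final Object lock = new Object();
    private final Map<String, String> tokens = new HashMap<>();

    public String getOrCreateToken(String userId) {
        // Every login request, for every user, queues on this single monitor;
        // under load, the time spent waiting here shows up as synchronization time.
        synchronized (lock) {
            String token = tokens.get(userId);
            if (token == null) {
                token = issueToken(userId); // expensive security work done while holding the lock
                tokens.put(userId, token);
            }
            return token;
        }
    }

    private String issueToken(String userId) {
        // Placeholder for certificate and token processing.
        return userId + ":" + System.nanoTime();
    }
}
```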

I also drilled down from the Tier hotspots to see if the high CPU utilization was caused by the same component, and found the same method – Enter(Object) – as the primary bottleneck at the web tier, followed by the Security API processing certificate information, also consuming significant CPU time.

Figure 8: Method Hotspots dashlet showing method breakdown by CPU time.
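
The same entry point topping both the synchronization and the CPU hotspot views is consistent with CPU-heavy certificate work being executed while that lock is held: the thread owning the monitor burns CPU, and every other request thread accumulates synchronization time waiting for it. Here is a hedged Java sketch of that kind of work (again, illustrative names and code, not the customer’s):

```java
import java.io.ByteArrayInputStream;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

// Hypothetical illustration of CPU-intensive security processing.
public class AuthenticationHelper {

    // Parsing and validating an X.509 certificate on every request costs real CPU;
    // calling it from inside the cache's critical section serializes all logins behind it.
    static X509Certificate parseAndCheck(byte[] derEncodedCert) throws Exception {
        CertificateFactory factory = CertificateFactory.getInstance("X.509");
        X509Certificate cert = (X509Certificate) factory
                .generateCertificate(new ByteArrayInputStream(derEncodedCert));
        cert.checkValidity(); // throws if the certificate is expired or not yet valid
        return cert;
    }
}
```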

Finally, we examined the HTTP 500 errors and quickly learned that they were not related to the Monday morning problem. In fact, those errors had been present for some time and, of course, will be addressed to improve service levels.

The root cause: Taking actions and prioritizing the next steps

My conclusion was that the main issue was high synchronization time in the Authentication API, caused by the Enter(Object) method. The developers investigated and found that the synchronization time was caused by the way the application handles certificates and other security-related information, with the high CPU utilization being a symptom. After a code change, they were able to partially solve the issue. Because the problem can only be completely solved by an architecture change, the team decided to add four additional web servers to stabilize response times while studying possible long-term solutions. Finally, as part of the immediate actions, I implemented additional alerts in DC RUM and AppMon to flag this particular performance problem; next time, we’ll know exactly what to do.
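
For readers curious what such a code change might look like in principle, here is a minimal sketch of one common remedy, assuming the root cause really is a single lock around the token cache: replace the global monitor with a concurrent map so that logins for different users no longer queue behind one another. This is a generic Java illustration, not the fix the customer’s developers actually shipped.

```java
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical revision: per-key caching instead of one global lock.
public class UserTokenCacheRevised {
    private final ConcurrentHashMap<String, String> tokens = new ConcurrentHashMap<>();

    public String getOrCreateToken(String userId) {
        // computeIfAbsent runs the expensive issueToken call at most once per user
        // and locks only the affected map bin, so unrelated logins proceed in parallel.
        return tokens.computeIfAbsent(userId, this::issueToken);
    }

    private String issueToken(String userId) {
        // Certificate and security processing would go here, off the hot path
        // for users whose token is already cached.
        return userId + ":" + System.nanoTime();
    }
}
```

Reducing lock contention and adding web servers address different parts of the problem, which is why the code change and the scale-out made sense as complementary steps.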

Summary: An example to implement as a process

As the environment at my customer becomes more complex, comprehensive visibility into performance becomes more critical. We see clear and steady improvement in our ability to identify and correct problems, to collaborate between teams, and to identify areas for improvement. DC RUM’s end-user, application, and network infrastructure insights complement AppMon’s deep-dive visibility into code execution; together, they provide gapless data for our key business applications. We’ve also learned that one of the more beneficial results of a centralized monitoring team is the convergence of these two performance perspectives, eliminating blind spots and removing the doubt that sometimes arises when each team sees only its own slice of the data.

The Dynatrace Digital Performance platform deployed at this customer offers enterprise-wide visibility into end-user experience, automated fault domain isolation, and deep-dive root cause analysis. Our goal is to use this example to help define a repeatable, cross-discipline investigative process for future triage events, resulting in reduced Mean Time To Repair and happier customers.