Triggered by current load projections for our community portal, our Apps Team was tasked with running a stress test on our production system to verify whether we can handle 10 times our current load on the existing infrastructure. To minimize the impact in case the site crumbled under the load, we decided to run the first test on a Sunday afternoon. Before we ran the test we gave our Operations Team a heads-up: they could expect significant load during a two-hour window, with the potential to affect other applications running in the same environment.
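For readers who want to try something similar on a much smaller scale, here is a minimal sketch of concurrent load generation in Python. Everything in it is illustrative: our real test used a dedicated load-testing product, and the URL in the usage comment is made up.

```python
# Minimal load-generation sketch (illustrative only; the URL in the usage
# comment below is hypothetical).
import threading
import time
import urllib.request

def _worker(url, n, latencies):
    # Each worker issues n sequential requests and records each latency.
    for _ in range(n):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            resp.read()
        latencies.append(time.perf_counter() - start)

def run_load(url, concurrency=10, requests_per_worker=20):
    """Fire `concurrency` workers at `url` and return all observed latencies."""
    latencies = []  # list.append is atomic under CPython's GIL
    threads = [
        threading.Thread(target=_worker, args=(url, requests_per_worker, latencies))
        for _ in range(concurrency)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return latencies

# Usage (against a hypothetical local deployment):
#   latencies = sorted(run_load("http://localhost:8080/community"))
#   print(f"p95: {latencies[int(len(latencies) * 0.95)]:.3f}s")
```

A real test tool would also ramp load in stages and report percentiles per transaction, but the core idea — many concurrent clients, latencies recorded per request — is the same.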
During the test, with both the Ops and Application Teams watching the live performance data, we all saw end user response time go through the roof and the underlying infrastructure run out of resources once we hit a certain load level. What was very interesting about this exercise is that the Application and Ops teams looked at the same data but examined the results from different angles. Both, however, relied on the recently announced Compuware PureStack Technology, the first solution that – in combination with dynaTrace PurePath – exposes how IT infrastructure impacts the performance of critical business applications in heavy production environments.
The root cause of the poor performance in our scenario was CPU exhaustion on the main server machine hosting both the Web and App Server, which caused us to miss our load goal. This turned out to be both an IT provisioning and an application problem. Let me explain the steps these teams took and how they came up with their list of action items to improve system performance before the second scheduled test.
Step 1: Monitor and Identify Infrastructure Health Issues
Operations Teams like having the ability to look at their list of servers and quickly see that all critical indicators (CPU, Memory, Network, Disk, etc.) are green. But when they looked at the server landscape as our load test reached its peak, their dashboard showed them that two of their machines were having problems:
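Conceptually, the traffic-light evaluation behind such a dashboard boils down to per-metric thresholds, with the worst metric deciding the host's color. A minimal sketch — the threshold values here are my own illustrative assumptions, not PureStack's actual ones:

```python
# Illustrative traffic-light health evaluation; thresholds are invented.
THRESHOLDS = {  # metric -> (warn, critical), as percentages
    "cpu": (75, 90),
    "memory": (80, 95),
    "disk": (85, 95),
}

def health_color(metric, value):
    """Classify a single metric reading as green, yellow, or red."""
    warn, critical = THRESHOLDS[metric]
    if value >= critical:
        return "red"
    if value >= warn:
        return "yellow"
    return "green"

def host_status(samples):
    """Worst color across all metrics wins, so one red metric flags the host."""
    order = {"green": 0, "yellow": 1, "red": 2}
    return max((health_color(m, v) for m, v in samples.items()), key=order.get)

# Example: an overloaded web/app server at load-test peak (made-up numbers)
print(host_status({"cpu": 97, "memory": 82, "disk": 40}))  # -> red
```

The "worst color wins" rule is what makes a single exhausted resource stand out even when every other indicator on the machine looks fine.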
Step 2: What is the actual impact on the hosted applications?
Clicking on the Impacted Applications Tab shows us the applications that run on the affected machine and which ones are currently impacted:
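Behind that tab is essentially a lookup: which applications are hosted on the affected machine, and which of those are currently violating their performance baseline. A small sketch with an entirely invented topology and application names:

```python
# Invented topology: which applications run on which machine, and which
# currently violate their response-time baseline.
HOSTED = {
    "srv-web-01": ["Community Portal", "Support Portal"],
    "srv-db-01":  ["Community Portal"],
}
VIOLATING = {"Community Portal"}  # apps currently over baseline (hypothetical)

def impacted_apps(machine):
    """Applications hosted on `machine` that are currently impacted."""
    return [app for app in HOSTED.get(machine, []) if app in VIOLATING]

print(impacted_apps("srv-web-01"))  # -> ['Community Portal']
```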
Already the load test has taught us something: as we expect higher load on the community portal in the future, we might need to move the support portal to a different machine to avoid any impact.
Examined independently, operations-oriented monitoring data is not that telling. But when it is placed in a context that relates it to the data the Applications team cares about (end user response time, user experience, …), both teams gain more insight. This is a good start, but there is still more to learn.
Step 3: What is the actual impact on the critical transactions?
Clicking on the Community Portal application link shows us the transactions and pages that are actually impacted by the infrastructure issue, but two critical questions remain unanswered:
- Are these the transactions that are critical to our successful operation?
- How badly are these transactions and individual users impacted by the performance issues?
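One way to quantify the second question is to compute, per transaction, the share of requests that violate the response-time SLA and rank transactions by that share. A sketch — the transaction names, samples, and the 2-second SLA are all made up for illustration:

```python
# Hypothetical response-time samples per transaction (seconds); all numbers
# and names are invented for illustration.
SLA_SECONDS = 2.0

def impact_report(samples_by_transaction, sla=SLA_SECONDS):
    """Return (transaction, %-of-requests-violating-SLA), worst first."""
    report = []
    for name, samples in samples_by_transaction.items():
        violations = sum(1 for s in samples if s > sla)
        report.append((name, 100.0 * violations / len(samples)))
    return sorted(report, key=lambda item: item[1], reverse=True)

samples = {
    "/community/search": [1.2, 5.8, 6.1, 0.9],
    "/community/login":  [0.4, 0.5, 0.6, 0.7],
}
for name, pct in impact_report(samples):
    print(f"{name}: {pct:.0f}% of requests over {SLA_SECONDS}s SLA")
```

Combined with business weighting (which transactions matter most), a ranking like this is what lets both teams agree on what to fix first.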
Step 4: Visualizing the impact of the infrastructure issue on the transaction
The transaction-flow diagram is a great way to get both the Ops and App Teams on the same page and view data in its full context, showing the application tiers involved, the physical and virtual machines they are running on, and where the hotspots are.
Step 5: Pinpointing host health issue on the problematic machine
Drilling to the Host Health Dashboard shows what is wrong on that particular server:
Step 6: Process Health dashboards show slow app server response
We see that the two main processes on that machine are IIS (Web Server) and Tomcat (Application Server). A closer look shows how they are doing over time:
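A simple way to turn such per-process time series into an alert is to flag a process whose CPU stays above a threshold for several consecutive samples, which filters out short spikes. The sample values below are invented:

```python
# Flag a process as CPU-saturated when it stays above a threshold for
# N consecutive samples; all CPU% values below are invented.
def is_saturated(samples, threshold=85.0, run_length=3):
    """True if CPU% stayed >= `threshold` for `run_length` samples in a row."""
    run = 0
    for value in samples:
        run = run + 1 if value >= threshold else 0
        if run >= run_length:
            return True
    return False

timeline = {
    "w3wp.exe (IIS)":    [30, 35, 40, 38, 33, 31],
    "java.exe (Tomcat)": [55, 70, 88, 93, 95, 91],
}
for proc, series in timeline.items():
    flag = "saturated" if is_saturated(series) else "healthy"
    print(f"{proc}: {flag}")
```

Requiring a sustained run rather than a single sample is the difference between alerting on genuine exhaustion and alerting on every garbage-collection or startup spike.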
Step 7: Pinpointing heavy CPU usage
Our Apps Team is now interested in figuring out what consumes all this CPU and whether this is something we can fix in the application code or whether we need more CPU power:
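On our Tomcat tier this is a job for a JVM profiler or dynaTrace's own CPU sampling. Purely to illustrate the principle — attributing CPU time to individual methods so you can decide between code fixes and more hardware — here is a sketch using Python's stdlib profiler, with a stand-in function playing the role of the page-building code:

```python
# Illustrative only: attribute CPU time to methods with Python's stdlib
# profiler. The real application runs on Tomcat, where a JVM profiler
# would play this role.
import cProfile
import io
import pstats

def render_page():
    """Stand-in for CPU-heavy page building (hypothetical workload)."""
    return "".join(str(i) for i in range(50_000))

profiler = cProfile.Profile()
profiler.enable()
render_page()
profiler.disable()

report = io.StringIO()
pstats.Stats(profiler, stream=report).sort_stats("tottime").print_stats(5)
print(report.getvalue())
```

Sorting by `tottime` surfaces the methods that burn CPU themselves (candidates for code fixes); if the time is spread thinly across the whole call tree instead, that usually points toward provisioning more CPU rather than optimizing a hotspot.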
Ops and Apps Teams can now easily prioritize both infrastructure and application fixes
So, as mentioned, "context is everything". But it's not enough simply to have data – context relies on the ability to intelligently correlate all of the data into a coherent story. When the "horizontal" transactional data from end-user response-time analysis is connected to the "vertical" infrastructure stack information, it becomes easy to get both teams reading from the same page and to prioritize fixes for the issues that have the greatest negative impact on the business.
This exercise allowed us to identify several action items:
- Deploy our critical applications on different machines when the applications impact each other negatively
- Optimize the way our pages are built to reduce CPU usage
- Increase CPU power on these virtualized machines to handle more load