As much as we try to avoid performance problems, they do happen; it is inevitable. But it is possible to learn to react fast, and on some occasions fast enough that the impact on the end users is negligible. Despite operators’ best efforts, 73% of performance issues are reported by users, according to the Aberdeen Group report “APM: Getting IT on the C-Level’s Agenda”. This number is quite large considering that typically less than 5% of all users bother to complain at all. User experience has a significant impact on business success: according to the same Aberdeen report, poor application performance can reduce revenue by 9% and productivity by 64%.
The goal of application performance monitoring is to ensure and improve the quality of applications as perceived by the end users. Getting to the root of the problem quickly is only part of the solution. When we ask Operations teams how they learn about performance problems, they sometimes reply: “Our users tell us.” As we have already pointed out, we should not wait for the disaster to happen, but rather take appropriate action as soon as we see the storm coming.
In this article we recount two incidents that happened to our client, ZinMines, a steel and mining company from Zinariya (names changed for commercial reasons). In both cases the Operations team at ZinMines was notified about the problem well in advance of any user reaction. By the time users eventually reported the problem, the team was already analyzing and improving the situation. Had they waited for users to notify them, the problem would have been solved much later and users would have been far more frustrated.
Case #1: The Maintenance Page
When the Operations team first set up its application performance monitoring solution, the members made sure that the alerts on potential performance problems were configured correctly.
One morning, just before 8am, an alert that monitored total transaction time went off. The Operations team used the APM solution to chart the total time as seen by the end user, broken down into time spent on the server and time spent on the network. They saw that significant time was spent on the server (see Figure 1), which could slow down the whole application.
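The kind of check behind such an alert can be sketched as follows. This is a minimal illustration, not DCRUM’s actual API: the field names and the 2-second threshold are assumptions chosen for the example.

```python
# Hypothetical sketch: split total transaction time into its server and
# network components and alert when the server-side share exceeds a
# threshold. Field names and the 2.0 s limit are illustrative only.

def check_transaction(sample, server_time_limit=2.0):
    """Return an alert message if server time dominates, else None."""
    server = sample["server_time"]
    network = sample["network_time"]
    total = server + network
    if server > server_time_limit:
        return (f"ALERT: server time {server:.1f}s of {total:.1f}s total "
                f"({server / total:.0%}) exceeds {server_time_limit:.1f}s")
    return None

# A sample like the one from the incident: server time clearly dominates.
print(check_transaction({"server_time": 4.2, "network_time": 0.6}))
```

Charting both components separately, as the team did, makes it immediately obvious whether the server or the network is to blame.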
The team started to analyze the problem together with the engineering team. They learned that there was a serious bug and the engineering team would need a few hours to fix it.
Meanwhile, shortly after 9am, they got a call from a user reporting that the services run by ZinMines were particularly slow. The helpdesk informed the user that the problem was already being investigated and a team had been appointed to look into it.
Since delayed user requests kept coming in, and in order to avoid further frustration among the end users, the decision was made to enter maintenance mode. Around 9:30 the Operations team started redirecting part of the traffic to a maintenance page. This took some load off the application, decreased total time, and gave the engineering team time to handle the issue. The chart in Figure 1 shows the change in server time and redirect time after the redirect to the maintenance page was enabled, which is indicated on the time line with the blue arrow.
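Redirecting only part of the traffic can be done with a simple probabilistic rule at the routing layer. The sketch below is an assumption about how such a rule might look; the fraction, URL, and function names are illustrative, not ZinMines’ actual configuration.

```python
import random

# Hypothetical sketch of partial maintenance-mode redirection: send a
# configurable fraction of incoming requests to a static maintenance
# page to shed load while engineers work on a fix. The 0.5 fraction
# and the /maintenance URL are invented for this example.

MAINTENANCE_URL = "/maintenance"

def route_request(path, redirect_fraction=0.5, rng=random.random):
    """Return the maintenance URL for a fraction of requests, else the path."""
    if rng() < redirect_fraction:
        return MAINTENANCE_URL
    return path

# With redirect_fraction=1.0 every request is shed; with 0.0 none are.
assert route_request("/orders", redirect_fraction=1.0) == MAINTENANCE_URL
assert route_request("/orders", redirect_fraction=0.0) == "/orders"
```

The `rng` parameter is injected so the behaviour can be tested deterministically; in production the default random source suffices.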
Figure 1 also shows that the traffic had already been quite high for at least an hour before one of the end users notified the Operations team about the problem. The red arrow on the time line in Figure 1 indicates when the problem was first reported.
Had the Operations team waited for an end user to report the problem, they would not have contacted the engineering team early enough to gain extra time to start resolving it. Thanks to properly configured alerts, they were able to act in time and shorten the period during which users were impacted by poor application performance.
Case #2: 4xx Errors
Sometime later, the Operations team got another alert. They consulted the APM solution and discovered that one of the application servers was generating a lot of 404 errors.
Figure 2 shows the 4xx errors charted around the time of the incident report. The green arrow indicates when the alert was raised.
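An alert like this boils down to counting client-error responses per server and flagging outliers. The following sketch assumes a simple `(server, status)` event stream and an invented threshold; it is meant only to illustrate the idea behind the alert, not the APM product’s internals.

```python
from collections import Counter

# Hypothetical sketch: tally HTTP 4xx responses per application server
# and flag any server whose count crosses a threshold, mimicking the
# alert that pointed the team at a single misbehaving server.

def servers_over_threshold(responses, threshold=100):
    """responses: iterable of (server, status) pairs."""
    counts = Counter(server for server, status in responses
                     if 400 <= status < 500)
    return {s: n for s, n in counts.items() if n >= threshold}

responses = ([("app-1", 404)] * 150
             + [("app-2", 200)] * 150
             + [("app-2", 404)] * 10)
print(servers_over_threshold(responses))  # only app-1 crosses the threshold
```

Because the counts are kept per server, the alert immediately narrows the fault domain to one machine, which is exactly what happened in this incident.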
The Operations team performed root cause analysis and discovered that the problem was caused by caching issues, so they decided to restart the server. The blue arrow on the time line in Figure 2 shows when the server was restarted.
Shortly before they initiated the restart procedures they got an incident report submitted by one of the end users from the finance department (see Figure 3).
They could close the issue almost immediately: when they checked the report of HTTP 4xx errors (see Figure 2), the situation was already back to normal. The customer again saw this as a very positive outcome. The team was not only aware of the issues troubling the users before anyone complained, but could also verify that the action taken to resolve the issue actually improved the situation.
The red arrow in Figure 2 shows when the report was submitted by the end user. The problem had started more than an hour before the user reported the incident, similarly to the previous case. Had the Operations team waited for their users, they would have lost at least an hour in resolving the issue.
Application performance monitoring is more than just following the fault domain isolation workflow to determine the root cause of the problem that is reported by an end user. In many cases waiting for the end users to report problems simply takes too long, while the frustration due to poor performance grows.
APM solutions, such as Compuware dynaTrace Data Center Real User Monitoring (DCRUM), enable us to set up alerts, e.g. against known-good performance baselines. In many cases these alerts are triggered long before the end users report an incident, giving the Operations team more time to react before issues get serious. As the Aberdeen report mentioned above shows, 95% of users will not even bother to report the problem at all.
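One common way to alert against a baseline is to learn the normal range from historical “good” measurements and flag values that deviate by more than a few standard deviations. The sketch below illustrates that general technique under assumed sample data; it is not a description of how DCRUM computes its baselines.

```python
import statistics

# Hypothetical sketch of alerting against a performance baseline:
# learn mean and standard deviation from historical "good" samples,
# then flag new measurements that deviate by more than k sigma.

def make_baseline_alert(history, k=3.0):
    """Build a predicate that flags values above mean + k * stdev."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    def is_anomalous(value):
        return value > mean + k * stdev
    return is_anomalous

# Response times (seconds) observed during normal operation.
alert = make_baseline_alert([1.0, 1.1, 0.9, 1.05, 0.95])
print(alert(1.02), alert(5.0))  # a normal sample vs. an obvious outlier
```

The advantage of a learned baseline over a fixed threshold is that it adapts to each application’s normal behaviour, so the same alert definition works across very different services.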
Secondly, they also enable the team to see whether the action taken to rectify the problem actually improves the end users’ experience.
(This article is based on materials contributed by Pieter Jan Switten and Pieter Van Heck, drawing on original customer data. Some screens presented have been customized, but they deliver the same value as the out-of-the-box reports.)