I love movies. There is just something about them that can teach us a lot about life. One of my favorites is Groundhog Day.
The premise of this movie is that the main character, Phil, played by Bill Murray, is trapped living the same day over and over again. This is what it can be like for IT when it comes to managing application performance. When a problem is detected in production, a “groundhog day” process is put into action to try to address that issue. Once the problem is addressed, everyone resets and waits for the next problem to occur. In the end, the same process takes place over and over, and the team never really makes an impact on the performance of an application.
In order to escape “Groundhog Day” there are some key practices that you can implement to build a team and transition the management of your applications’ performance from firefighting to truly being proactive. Those key things are:
- getting some separation with the performance team
- picking the right skill set
- having the right tools in place
Each one of these practices changes the culture of application performance management (APM) from being reactive to becoming proactive. The concepts address some of the pitfalls that organizations face when trying to deal with the issue of performance problems.
To illustrate these key concepts, let’s look at how one of CompuwareAPM’s customers, Raiffeisen Bank in Hungary, was able to change the culture of performance and escape its own “Groundhog Day.” By implementing these practices the company was able to transition from variable performance in production with sporadic unplanned downtime to a more agile operation, deploying multiple releases a week with zero downtime.
Different Day – Same Results
Once Phil discovered that he was trapped in Punxsutawney, he attempted to escape his fate by repeating his actions over and over. Similarly, most companies do the same thing when it comes to APM. I have discussed this scenario before. With Raiffeisen this was a common occurrence surrounding its portal application.
The teams at Raiffeisen were constantly fighting the same battles. It did not matter whether it was during a peak traffic load or some random set of circumstances – the issue was always the same. The portal application’s performance would degrade and the operations team had to spring into action. In some cases there was no indication that there was a performance problem other than the end users calling in stating they were having problems.
When a problem was detected, the operations team’s process of performance management was reactive. First, the team would capture all the data relating to the performance of the portal. This included all the logs, dumps and hardware statistics associated with the portal. The data then had to be manually correlated and communicated to all the team members. This process was highly resource intensive both in time and manpower.
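To make the manual correlation step concrete, here is a minimal sketch of what “correlating the data” amounts to: merging entries from several log sources into one timeline so someone can scan for the window where performance degraded. The log format, source names, and sample messages are illustrative assumptions, not Raiffeisen’s actual data.

```python
from datetime import datetime

def parse_entry(line):
    """Parse a log line of the assumed form 'ISO-TIMESTAMP source message'."""
    ts, source, message = line.split(" ", 2)
    return datetime.fromisoformat(ts), source, message

def correlate(*logs):
    """Merge multiple log sources into a single timeline sorted by timestamp."""
    entries = [parse_entry(line) for log in logs for line in log]
    return sorted(entries, key=lambda e: e[0])

# Hypothetical sample data: an application log and a JVM GC log
app_log = ["2012-01-05T10:00:02 app login request took 9800ms"]
gc_log = ["2012-01-05T10:00:01 jvm full GC pause of 8200ms"]

timeline = correlate(app_log, gc_log)
for ts, source, message in timeline:
    print(ts.isoformat(), source, message)
```

Even in this toy form, the merged timeline makes the likely cause (a long GC pause just before the slow login) jump out; doing the same by hand across many real logs is exactly the time- and manpower-intensive work described above.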
All of these measures still did not guarantee that the performance issues would be identified in time to rectify them. Without understanding the root cause of the performance bottlenecks, the team was left with few options. In a lot of cases the only way to address the issues in a timely fashion was with a drastic measure: restarting the application cluster. This was not an acceptable solution to these problems.
This is the “Groundhog Day” event that I am talking about. No matter what the team did, the result was always the same. This is the constant fire drill that IT shops have been fighting for years. The cycle of waiting for the fire, finding the fire, and putting out the fire as best as possible is an everyday occurrence for a lot of IT shops. Many APM tools that you might have implemented in the past (legacy tools that can’t cope any more and ‘point’ solutions that only address one aspect of APM) are usually designed only to cope with this cycle, not break it.
Continuous Improvement: The ONLY Way Out
Just as Phil was only able to escape his “day” after he began to continuously improve himself and those around him, the team at Raiffeisen realized the same thing about its own “Groundhog Day.” They came to the understanding that what had to change was the way they were managing the performance of the portal application. The company made a conscious decision to fundamentally change the way it did APM as a team.
Get Some Separation
The first thing the company did was spin off a dedicated team whose sole job was to manage application performance. This is a key difference between performance initiatives that succeed and those that fail. The problem is that most companies assign the task of performance to one phase of the lifecycle. This limits the impact that the team can have over the entire lifecycle. For example, if the team is attached to testing, then it has very little influence on the architecture of the application. The team also has no visibility other than what is provided to it by the tools currently in place.
Being separate gives the team a level of visibility so that its members can look at the whole process. This allows them to look for bottlenecks wherever they occur in the lifecycle. This single act had a lasting impact at Raiffeisen. The team had the ability to scrutinize the application at every level. When problems were discovered, its members had the visibility to make actionable recommendations.
The makeup of this team is as important as where it is positioned in the application lifecycle. When building a team to oversee the performance of the portal application, Raiffeisen was looking for the right people. Each member was hand selected. This was not a task that was simply assigned to an existing team or group of teams.
Selecting members who are good detectives and subject matter experts is better than selecting the leads of the different application support groups. Picking people who have never worked on the application means they come with no potentially unhelpful prejudices and preconceptions: they are a blank slate. There is no familiarity with the application that can lead to overlooking a problem.
Here, Raiffeisen was very specific about the criteria around how this team was to be built. The members of the team were selected based on their skills not their knowledge of the application. Each member of the team had no previous knowledge of the portal environment. Nor were they involved in any of the development or design phases. This was very important when it came to scrutinizing the application. The whole application was suspect until proven otherwise.
Right Tools For the Job
As discussed earlier, most performance tools that are in place are only good for fighting fires, not preventing them. To complete the transformation from reactive to proactive, you need a solution that allows you to manage across the entire application lifecycle. Having the integrations and the ability to plug into the release process is crucial when selecting the right APM solution. Any solution that cannot do this is only upgrading the fire extinguisher.
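What “plugging into the release process” can look like in practice is a performance gate: a check in the deployment pipeline that compares a key transaction’s response time in the candidate build against a baseline and blocks the release on regression. The function name, threshold, and sample numbers below are assumptions for illustration, not any particular product’s API.

```python
def passes_gate(baseline_ms, candidate_ms, max_regression=0.10):
    """Allow the release only if the candidate build's response time is no
    more than max_regression (default 10%) slower than the baseline."""
    return candidate_ms <= baseline_ms * (1 + max_regression)

# Hypothetical example: a login transaction with a 3000 ms baseline
print(passes_gate(3000, 3100))  # within 10% of baseline -> True
print(passes_gate(3000, 4000))  # 33% slower -> False, block the release
```

A gate like this is what turns performance data into a proactive control: a regression is caught before it ever reaches production, instead of triggering another fire drill after the fact.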
This is why Raiffeisen selected Dynatrace. It was the turning point for dealing with the performance of the portal application. With this solution in place, the newly formed team broke the firefighting cycle and started transforming the way Raiffeisen managed application performance. Teams easily shared data with each other, and they could clearly see the implications of the actions needed to improve the performance of the application.
Fast Turnaround
The last “day” for Phil came after he was able to see how his life impacted those around him. With that he was able to escape Punxsutawney and move on. For Raiffeisen, that “day” ended two months after the project started. The team was able to make a huge impact and change how the application was performing.
At the start of 2012 the performance team was created and Dynatrace was implemented. One month later the new application performance troubleshooting team had completed a full analysis of the portal application and created custom dashboards and views for all teams involved with the portal application: developers, testers, and operations managers. On top of that, developers started to implement changes to the application based on the findings from the performance troubleshooting team. In March of 2012 those changes were being tested and released into the production environment. Since the end of March there have been no unplanned outages. The team has also been able to increase the frequency of updates to an almost daily occurrence. Additionally, it was able to find problems over 30x faster than before and improved the login transaction from an average of 10 seconds to 3 seconds.
When the problem of performance is addressed across the lifecycle, true change can be made. This is real, lasting change in the way performance is managed. That is why managing performance is not just about having the newest tool in production; a tool alone is simply not enough. It just repeats the same cycle, even if each day seems “easier” than the previous one. A company must realize that it is a combination of software, cultural change, and process that creates a lasting effect.