Thoughts on the Past and Future of APM

I have spent the past two years developing what I’ll call a tool- and process-agnostic Application Performance Management (APM) framework. This framework couples the valuable APM practices of active monitoring, proactive trending, and near-real-time response with related value-adding practices throughout the rest of the application development and delivery lifecycle, such as code profiling and production simulations (a.k.a. performance, load, and stress testing). Over the past year, I’ve been beta-testing and tuning the framework with peers and clients. The response has been overwhelmingly positive, to the point that I am now prepared to share the framework with the world.

To explain why I chose to dedicate so much time to this, allow me to share a brief history of my experience with APM. As someone who has been around software development since well before the Y2K bug failed to result in the world’s demise (as some predicted), I’ve seen many crises, tools, trends, fads, processes, vendors, service providers, acronyms, methods and technologies come and go. One acronym that has stood the test of time is APM; but that fact alone is somewhat misleading. APM survived for reasons I don’t entirely understand, has changed significantly (but not enough, in my opinion), and has never been the “comprehensive application performance solution” it has often claimed to be.

Early Doubts

I remember rolling my eyes when I first encountered APM (then short for Application Performance Monitoring). My opinion at the time was that it was nothing more than a buzzword created by marketing departments to encourage non-technical executives to spend money on solutions for their Operations or IT departments that duplicated the functionality of native operating system utilities. Based on that opinion, I systematically ignored APM for several years. I wasn’t wrong – as far as I am concerned, for as long as the “M” stood for Monitoring and the focus was a narrow set of system metrics in production, APM was nothing more than a marketing gimmick.

From ‘Monitoring’ to ‘Management’

I was less dismissive years later when I was re-introduced to APM. This time the “M” stood for Management, the solutions seemed to add some value, and folks had realized that simply monitoring something was insufficient – you needed to make use of the monitoring data in some way to improve the application (why it took years for those promoting APM to include that as part of their message, I’ll never know). Even so, I was underwhelmed. The focus remained entirely on IT/Ops departments that owned production support.

Don’t get me wrong, I think active monitoring and tuning in production is a mission-critical component of managing application performance. I just don’t think it is the only component necessary to manage application performance effectively as the APM champions of the day seemed to imply.

When the DevOps movement took hold, I was hopeful it would bring the value of APM to a larger segment of the application lifecycle, but my hope was short-lived. APM remained too focused on production software and seemed to have forgotten that managing application performance begins when software development begins, and that the people doing the development, testing, and APM needed help integrating performance management into the processes they follow in their daily work.

As I recognized that DevOps wasn’t going to lead to significant changes in APM, I also realized that APM wasn’t going away, was too popular to be dismissed, and on its own was never going to become anything more than a proactive approach to being reactionary. This is also when I admitted to myself that it was past time for me to at least try to improve the situation by seeing if I could find a way to bring the value of APM principles to people, tasks and processes throughout the lifecycle.

The T4 Framework

My professional career has been focused on testing, delivering and improving application performance and teaching organizations, teams and individuals to do the same.

Along the way, I’ve learned quite a bit about which things add value, which changes last, and which make a big splash and then fade away. The things that add value and last are conceptually simple. They are also incrementally implementable, valid in both the large and the small, and easy to remember.

Keeping those lessons in mind, I developed a framework, which I call T4APM™, in response to the history of APM as I lived it. The T4 part stands for an ongoing cycle of:

  • Target – identify a feature, metric, code segment, configuration, etc. of interest
  • Test – essentially a data-gathering exercise against the Target
  • Trend – plot the test data over time, build-to-build for example
  • Tune – based on test results and trend data, proactively optimize the system
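To make the cycle concrete, here is a minimal sketch of what one pass through Test and Trend might look like in code. This is purely illustrative and not part of T4APM™ itself; the names (`measure`, `trend`, `target_operation`) and the 20% regression threshold are assumptions I’ve made for the example:

```python
import statistics
import time

def measure(target, runs=5):
    """Test: time the Target operation over several runs; return the median in seconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        target()
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

def trend(history, window=3):
    """Trend: fractional change of the latest build vs. the mean of the prior `window` builds.

    Returns None until enough build-to-build data points exist.
    """
    if len(history) <= window:
        return None
    baseline = statistics.mean(history[-window - 1:-1])
    return (history[-1] - baseline) / baseline

def target_operation():
    """Target: a stand-in for whatever feature or code segment is of interest."""
    sorted(range(10000), key=lambda x: -x)

# One pass of the cycle: gather a data point for this build and check the trend.
history = [0.010, 0.011, 0.010]           # median timings from earlier builds (illustrative)
history.append(measure(target_operation))  # Test the current build
change = trend(history)
if change is not None and change > 0.20:   # assumed threshold: flag >20% regressions
    print("Tune: investigate regression of {:.0%}".format(change))
```

The point is less the code than the shape of the loop: each build produces a data point against a named Target, the trend across builds is what turns monitoring data into a decision, and Tune is triggered by the trend rather than by a production incident.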

This cycle has been demonstrated by my peers and clients to be applicable, valuable, and frequently simple to implement. In fact, some teams have reported that every member of the team has successfully applied this cycle, individually and collectively, siloed and cross-functionally, to the majority of their tasks throughout the lifecycle. These same teams report that this kind of pervasive change took three months or less without adding people or working extra hours. More simply put, T4APM™ is an approach to delivering performant applications by leveraging just a little work, every day, from everyone.

Intrigued? I’ll be writing and speaking about T4APM™ regularly this year, but its “Official Release Party” kicks off with a free informational webcast co-hosted by Compuware APM. Hope to see you there!

Scott Barber is System Performance Strategist and CTO of PerfTestPlus, Inc. and author of Web Load Testing for Dummies. He is a thought-leader in delivering performant systems and software testing, best known as “one of the most energetic and entertaining” keynote speakers in the industry and as a prolific author (including his blog, over 100 articles, and 4 books).