Dynatrace is proud to be sponsoring and offering a complimentary version of the O’Reilly eBook, “Beyond the Twelve-Factor App”. The original Twelve-Factor App framework was created in 2012 by developers at early cloud pioneer Heroku as a set of rules and guidelines for organizations building modern web applications that run “as a service.” The framework was inspired by Martin Fowler’s work on application architecture and what he saw as suboptimal application development processes.
In the book, Solutions Architect Kevin Hoffman walks through each of the original 12 factors, recommends updates to some, and suggests three more. According to Kevin, “…technology has advanced since their original creation, and in some situations, it is necessary to elaborate on the initial guidelines as well as add new guidelines designed to meet modern standards for application development.” The three additional factors he suggests are Telemetry, Security, and the idea of “API first”.
What piqued my interest in the book is the recommended addition of Telemetry, which includes Application Performance Management, Domain-Specific Telemetry, and Health and System Logs. Based on our interactions and experiences with our customers, it is no surprise to us that telemetry (monitoring) is a critical factor that every modern application needs. We believe strongly that it should be built into the fabric of your platform and is no longer a “nice to have.”
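To make those three telemetry components concrete, here is a minimal Python sketch, one APM-style timing, one domain-specific counter, and one structured health log. The service, metric, and field names are invented for illustration; a real deployment would use an actual telemetry library rather than hand-rolled counters.

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("payments")  # hypothetical service name

metrics = {}  # stand-in for a real metrics backend

def record_metric(name, value=1):
    """Domain-specific telemetry: count business events, not just requests."""
    metrics[name] = metrics.get(name, 0) + value

def process_payment(order_id):
    start = time.monotonic()                 # APM: time the unit of work
    # ... business logic would go here ...
    record_metric("payments.processed")      # domain-specific telemetry
    duration_ms = (time.monotonic() - start) * 1000
    log.info(json.dumps({                    # health/system log, structured
        "event": "payment_processed",
        "order_id": order_id,
        "duration_ms": round(duration_ms, 2),
    }))

process_payment("ord-123")
```

Emitting logs as structured JSON, rather than free-form text, is what lets a monitoring platform index and query them alongside metrics and traces.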
Three reasons telemetry should be core
Kevin states in the book, “Getting telemetry done right can mean the difference between success and failure in the cloud.” We agree, and here are three reasons monitoring should be a core component of your development methodology:
1. Modern applications are more complex
To be clear, when I talk about modern applications, I am talking about cloud applications built using microservices and containers. These applications and supporting environments are composed of hundreds, if not thousands, of microservices. They are highly dynamic, distributed, and scale automatically to meet users’ needs. Gartner, in the blog “Microservices: Building Services with the Guts on the Outside,” notes that while individual microservices are simpler, the complexity they push into the surrounding environment makes monitoring core to an implementation’s success.
2. Applications are more important than ever
In 2011 (light years ago in the tech world), Forbes contributor David Kirkpatrick wrote the article “Now Every Company Is A Software Company.” Kirkpatrick asserts, “Ford sells computers-on-wheels. McKinsey hawks consulting-in-a-box. FedEx boasts a developer skunkworks. The era of separating traditional industries and technology industries is over—and those who fail to adapt right now will soon find themselves obsolete.”
Fast forward almost six years, and every company has applications that are mission critical. If these applications are not monitored and performance degrades, or the application crashes, revenue is at stake, along with bad press and call-outs on social media. Earlier this year our VP of Marketing, Dave Anderson, wrote a blog about the UK train system failure, “Trainmageddon: When the machines stop working, people get upset.” The ticketing machines went down and chaos ensued. Even train companies are software companies now.
3. Customers expect more features more rapidly
Modern software gives companies the ability to “have a feature idea in the morning and ship it by evening” (paraphrasing our friends at Pivotal). In this highly competitive world, that capability is great to have, but how do you know whether your customers are happy with a new feature, or what impact it has on performance? Only monitoring can give you the real-time feedback you need to quickly understand how customers are reacting to that new feature and whether its performance is optimal.
What capabilities should a monitoring solution have to monitor modern applications?
As application architectures have evolved, so must your monitoring solution. Modern applications require a modern approach to monitoring. Here are three capabilities we see as essential:
Full stack providing causation, not just correlation
A modern monitoring solution should include all three components of telemetry that Kevin lists in the book. It should allow you to “connect the dots” from the web click to the code level. All components of the stack should be monitored as if they are a single system, not as separate pieces that each need to be correlated against the others. Only when you monitor modern applications via a single view can you get true causation, not just correlation.
Automation
Modern apps are designed to scale as needed. Containers can live for seconds and be gone. The only effective way to monitor an environment like this is through automation. The monitoring solution must be easy to install and use: it should automatically install, discover, instrument, and dynamically baseline all components. And as the application scales, new components should automatically be discovered, instrumented, and baselined. Manual configuration is not practical for highly scalable, highly complex applications. The monitoring system needs to respond in real time to the changing application environment.
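To make “dynamically baseline” concrete, here is an illustrative sketch, not any vendor’s actual algorithm, of learning normal response times from a sliding window and flagging values that deviate far from the learned norm:

```python
from collections import deque
import statistics

class Baseline:
    """Toy dynamic baseline: flag values far outside a sliding window's norm."""

    def __init__(self, window=100, threshold=3.0):
        self.samples = deque(maxlen=window)  # recent observations only
        self.threshold = threshold           # allowed deviations from the mean

    def observe(self, value):
        """Record a measurement; return True if it looks anomalous."""
        anomalous = False
        if len(self.samples) >= 30:          # wait for enough history
            mean = statistics.mean(self.samples)
            stdev = statistics.pstdev(self.samples) or 1e-9
            anomalous = abs(value - mean) > self.threshold * stdev
        self.samples.append(value)
        return anomalous

b = Baseline()
for _ in range(50):
    b.observe(100.0)        # steady ~100 ms responses establish the baseline
spike = b.observe(500.0)    # a 500 ms spike stands out against that baseline
```

Because the window slides, the baseline adapts as the application changes, which is the property that matters when components appear and disappear in seconds.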
Artificial intelligence (AI) powered analytics
We talked about the highly dynamic and distributed nature of modern applications and how these apps can have hundreds, if not thousands, of microservices. It is not humanly possible to manually manage and troubleshoot these application environments. AI allows you to automate the real-time inspection of all components and understand all the dependencies. The AI should learn the normal performance of your application and understand seasonal patterns. It should be able to recognize anomalies faster, perform root-cause analysis, and send you a single alert identifying the source of the problem. It should also be able to help your application self-heal: if new code is committed that results in a performance problem, the AI should be able to identify the issue and roll back to the previous stable version.
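A self-healing rollback of the kind described above can be sketched very simply. This is a hypothetical illustration, the version names, error rates, and the `error_rate` query are all invented stand-ins for a real telemetry backend and deployment system:

```python
def error_rate(version):
    """Stand-in for a real telemetry query; these numbers are invented."""
    return {"v1.0": 0.01, "v1.1": 0.20}[version]

def deploy_with_auto_rollback(new_version, stable_version, baseline=0.02):
    """Keep the new version only if its error rate stays within baseline."""
    active = new_version
    if error_rate(active) > baseline:   # anomaly detected after the deploy
        active = stable_version         # self-heal: revert to last stable
    return active

active = deploy_with_auto_rollback("v1.1", "v1.0")
```

In practice the anomaly check would compare against a learned baseline rather than a fixed number, but the shape of the loop, observe, compare, revert, is the same.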