Many cloud admins unwittingly sabotage their OpenStack monitoring processes. I collected a few common mistakes and what to do instead to make your OpenStack troubleshooting easier.
Mistake #1: Investing a lot of time and effort into configuration
Let’s get this straight: most commercial and open source monitoring tools avoid talking about how much effort you must invest in their configuration, let alone in their maintenance. So first find out what setup really involves: Will you have to modify configuration files, set permissions, and run command-line tools for every OpenStack component? Then consider the size of your environment. If you run a large-scale, hyper-dynamic environment and your monitoring tools require a lot of effort to set up, you’ll soon find yourself hiring more and more staff just to administer your toolset.
Because Dynatrace works with a single agent, out of the box, and requires zero configuration, setting it up for monitoring OpenStack is easy. Features like automatic integration with all common deployment automation mechanisms and auto-discovery of OpenStack cloud components enable you to see performance metrics within minutes.
Mistake #2: Using too many different tools for different monitoring use cases
As an OpenStack user, you might be interested in resource utilization metrics, how your OpenStack services perform, their availability, and of course you want to see the log files.
But because there is a shortage of true all-in-one OpenStack monitoring tools, most companies implement a separate tool for each of these use cases. While running different monitoring tools for different silos, they quickly realize that they are unable to identify the root cause of a performance issue, or to find the team responsible for fixing it.
In contrast to conventional monitoring tools, which typically cover only a single monitoring domain, Dynatrace provides a unified monitoring solution. It gives insights into resource utilization, OpenStack services, service availability and log files on a single dashboard.
Mistake #3: Overloading yourself with too many problem alerts
Alert overload is one of the biggest time wasters for modern businesses — this is what we see at companies that implement countless monitoring tools to look at data centers, hosts, processes and services. When any of these components fail or slow down, it can trigger a chain reaction of hundreds of other failures, leaving IT teams drowning in a sea of alerts. APM solutions with a traditional alerting approach provide you with countless metrics and charts, but then it’s up to you to correlate those metrics to determine what is really happening.
Go beyond correlation and get causation. Dynatrace gives you the answer to an end user-impacting issue, not just a bunch of alerts.
How do we do it?
First, we automatically discover, map and monitor all the dependencies from the user click to the back-end services, code, database and infrastructure. Second, we apply artificial intelligence to analyze the data. We examine billions of dependencies to identify the actual cause of the problem. This is key because application environments are quickly reaching a tipping point of complexity, where it is impossible for a human being to effectively analyze the data.
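To make the causation idea concrete, here is a deliberately tiny sketch (not Dynatrace’s actual algorithm, and the component names are hypothetical): given a map of components to their downstream dependencies and the set of components currently alerting, only alerting components with no alerting dependency of their own are reported as root causes — everything above them is just a symptom.

```python
# Illustrative root-cause filtering over a hypothetical dependency graph.
# In an alert storm, a slow database cascades into alerts on every
# service that depends on it; only the database is actionable.

DEPENDENCIES = {  # component -> downstream dependencies (hypothetical topology)
    "web-frontend": ["checkout-service"],
    "checkout-service": ["payment-service", "inventory-db"],
    "payment-service": ["inventory-db"],
    "inventory-db": [],
}

def root_causes(alerting):
    """Keep only alerting components whose dependencies are all healthy;
    an alert is merely a symptom if one of its dependencies also alerts."""
    return sorted(
        component
        for component in alerting
        if not any(dep in alerting for dep in DEPENDENCIES.get(component, []))
    )

# Four components alert at once, but the analysis yields a single answer.
storm = {"web-frontend", "checkout-service", "payment-service", "inventory-db"}
print(root_causes(storm))  # → ['inventory-db']
```

Real topologies have billions of such dependencies and change constantly, which is why the discovery and the analysis both have to be automatic — but the principle of collapsing a storm of symptom alerts into one causal answer is the same.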
Mistake #4: Relying on averages and transaction samples to determine normal performance
Correctly setting up alert thresholds is crucial to effective application performance monitoring. But that can involve a lot of time-consuming and potentially error-prone manual effort with traditional APM tools—especially because most of them rely on averages and transaction samples to determine normal performance. Averages are ineffective because they are too simplistic and one-dimensional. They mask underlying issues by “flattening” performance spikes and dips. Sampling lets performance issues slip through the cracks—creating false negatives. This is especially problematic in modern hyper-dynamic cloud- and microservice-based environments.
The far more accurate and more useful approach is to use percentiles based on 100% gap-free data, like Dynatrace does. Looking at percentiles (median and slowest 10%) tells you what’s really going on: how most users are actually experiencing your application.
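A quick sketch with made-up response times shows why: when most requests are fast and a few are very slow, the average lands on a number that describes nobody’s actual experience, while the median and the start of the slowest 10% each tell a clear story.

```python
# Illustrative only: 90 fast requests plus 10 slow outliers (hypothetical data).
import statistics

response_ms = [100] * 90 + [5000] * 10

mean = statistics.mean(response_ms)                     # 590 — describes no real user
median = statistics.median(response_ms)                 # 100.0 — the typical experience
p90 = sorted(response_ms)[int(len(response_ms) * 0.9)]  # 5000 — where the slowest 10% begins

print(f"mean={mean}  median={median}  slowest-10%-starts-at={p90}")
```

The average (590 ms) suggests a mildly slow application; the percentiles reveal the truth — most users see 100 ms, but one in ten waits five seconds.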
Use artificial intelligence to pin down all the baseline metrics related to the performance of your applications, services, and infrastructure — from back-end through user experience at the browser level. With AI, outliers don’t skew baseline thresholds — so you don’t get false positives. 100% gap-free full-stack data means you catch every single degradation, even those that materialize rapidly in ultra-dynamic environments — no false negatives. Such intelligent and automatic baselining allows Dynatrace to detect anomalies at a highly granular level and to notify you of problems in real time.
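The outlier-resistance claim can be sketched in a few lines. The numbers, the 90th-percentile choice, and the 1.5× margin below are illustrative assumptions, not Dynatrace’s actual parameters; the point is only that a percentile baseline shrugs off a single freak spike in the history, while a mean-plus-standard-deviation baseline gets inflated by it and misses a real degradation.

```python
# Hedged sketch: percentile baseline vs. mean-based baseline on
# hypothetical data — steady ~100 ms traffic plus one 8000 ms freak spike.
from statistics import mean, quantiles, stdev

history = [100, 102, 98, 101, 99] * 20 + [8000]

# Mean-based baseline: the single outlier inflates both mean and stdev,
# so a genuine 4x slowdown (400 ms) slips under the threshold.
mean_threshold = mean(history) + 3 * stdev(history)

# Percentile baseline: the 90th percentile ignores the outlier,
# so the same 400 ms slowdown is flagged immediately.
p90 = quantiles(history, n=10)[8]
pct_threshold = p90 * 1.5

print(400 > mean_threshold)  # False — degradation missed (false negative)
print(400 > pct_threshold)   # True  — degradation caught
```

This is the mechanism behind “outliers don’t skew baseline thresholds”: the baseline tracks what most requests actually do, not what the arithmetic of one extreme value dictates.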
Mistake #5: Picking monitoring tools that are unable to scale with your business
You can keep deploying more and more monitoring tools for each silo to ensure the system limits are not reached, but this quickly becomes very hard to maintain and can add a lot of extra cost in terms of both licensing and hardware. Soon questions like these will come up:
- How far will this scale?
- How long until I’ll need a newer, faster, or bigger toolset?
Modern application environments based on OpenStack run thousands of nodes with multiple hypervisor technologies, distributed across data centers around the globe. Managing a patchwork of monitoring solutions is nearly impossible at this scale. That’s why one of the key challenges for modern app-based businesses is the scalability of their IT monitoring.
Dynatrace was built with the world’s largest application environments in mind and scales to any size. We defined an approach to ensure performance and scalability over the application lifecycle — from development to production. We work with our customers to make performance management part of their software processes, going beyond performance testing and firefighting when there are problems in production.
Mistake #6: Focusing only on firefighting at the infrastructure level and forgetting about your apps
A solid IT infrastructure is the backbone of any agile, scalable and successful business, so it’s natural to look for infrastructure monitoring first. But to reach the next stage of maturity as an IT organization, you might want to think beyond just infrastructure. IT organizations that are able to proactively improve and optimize performance gain credibility with the business and are looked on as strategic enablers of business value.
Dynatrace tracks every build moving through your delivery pipeline, every operations deployment, all user behavior, and the impact on your supporting infrastructure. It integrates with whichever technology stack you build on and whichever container-based technology you use to orchestrate and manage your dynamic application environments on top of OpenStack. It provides a holistic view of your application, the technology stack, and OpenStack. Through analytics and artificial intelligence, you can start building what users want, remove what’s not needed, and optimize the remaining system to be lean, agile, and innovative.
At the end of the day, you want to focus on providing great user experience, not spending time fixing your infrastructure. To do that, you need a monitoring platform that delivers the capabilities today’s complex business applications require:
- Full stack power, to see the big picture
- AI-power, to understand data in context
- Automation power, to do this without any manual intervention
See how we can help to connect the dots from your different OpenStack infrastructure components all the way up to the application front end level – and provide great performance and user experience in your business-critical applications.
Have you made or seen any OpenStack monitoring mistakes that top these? Share your thoughts in the comments section below — I learn just as much from you as you do from me.