In recent months, more and more organizations have told me that their monitoring tool sprawl has gotten out of control, forcing them to rethink their entire monitoring strategy, with tool rationalization leading the way.
Case in point: two days ago the chief architect of a Fortune 100 company said to me, “We’ve had too many teams purchasing their own toolsets for their own needs. Now we have every tool under the sun. Tool rationalization is a high priority now.” Similarly, another company recently mentioned to me that they own 85 different monitoring tools (and counting)!
At Dynatrace, we advocate that traditional monitoring is dead for many reasons and the drowning effect of tool sprawl is a prime example. This is why we decided to redefine how monitoring is done.
How did we get here?
Every organization has a different story, but most can relate to the timeline of sprawl depicted here. You can almost hear the sounds of the multiplying Mogwais popping up everywhere.
Best of breed – but that was yesterday.
For obvious reasons, a popular longtime monitoring methodology has been to use “best of breed” solutions, and at the time this made perfect sense. However, a long list of vendors has not kept up with the constant, rapid pace of technology, causing these “best of breed” solutions to quickly become antiquated dust collectors (that you are still paying for).
Single pane of glass – add yet another tool?
Time and time again, teams have attempted to manage the overload of tool sprawl by buying or building a monitor of monitors. By consolidating all the monitoring data and events into a single platform, they hope to create a simple single pane of glass.
I don’t know about you, but when I hear “single pane of glass”, I cringe. This is because I have yet to see anyone pull this off effectively (including myself – I’ve tried).
If you are considering going down this path with a vendor solution like Moogsoft, or by building your own platform on something like ElasticSearch, proceed with caution: this is harder than you think. These projects often end up costing more, taking longer, and providing less value than originally planned, because it’s up to you to define the dependencies and map the data model to the incoming metrics and events from all the different tools. Many companies have found that after months of trial and error they are still overloaded. A typical outcome is that alert storms still happen, only slightly smaller than before thanks to simple de-duplication of identical alerts. Unless you have a talented team devoted to this effort, the outcome is not worth the investment.
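To see why simple de-duplication barely dents an alert storm: it only collapses alerts whose fields match exactly, so the same symptom reported by two different tools still produces two alerts. A minimal sketch in Python (the tool names, hosts, and alert fields here are illustrative, not from any specific product):

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen makes instances hashable, so they can be dict keys
class Alert:
    source_tool: str
    host: str
    message: str

def dedupe(alerts):
    """Collapse byte-identical alerts, counting how many copies of each arrived."""
    counts = {}
    for alert in alerts:
        counts[alert] = counts.get(alert, 0) + 1
    return counts

storm = [
    Alert("nagios", "web-01", "CPU high"),
    Alert("nagios", "web-01", "CPU high"),   # exact duplicate: collapsed
    Alert("zabbix", "web-01", "CPU high"),   # same symptom, different tool: NOT collapsed
]
result = dedupe(storm)
```

The storm of three alerts shrinks only to two, because the two tools word things differently. Correlating the Nagios and Zabbix alerts into one problem requires the dependency and data-model mapping described above, which is exactly the hard part.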
What is the cost?
The obvious costs of tool sprawl can quickly add up:
- Software licensing costs
- Support and maintenance costs
- Training costs
- Professional services costs
- Hardware costs
- Extra FTEs required to manage all the tools (upgrades, support tickets, etc.)
There is also a hidden cost of tool sprawl that is perhaps the most expensive of all: the war room blame game. More tools mean more voices saying, “not my problem.” The result is slower problem resolution, excessive war room costs, and frustration for everyone.
Dynatrace customers are discovering a better way
When our customers deploy Dynatrace, they quickly see that it is more than just an APM solution; it’s an all-in-one platform that monitors the full stack with automation and simplicity. It doesn’t take long before they recognize how much more they can do with Dynatrace, and that’s when they realize they’ve finally found a real-world solution to their tool sprawl problem.
We can help
If you are using Dynatrace and want to get your tool sprawl under control, give us a call. We have some nifty assets (yes assets, not more tools) to help you explore the possibilities and the potential business impact of rationalizing the tools owned by your organization.
Looking for answers?
Start a new discussion or ask for help in our Q&A forum.