Hybrid, multi-cloud is the norm
Enterprises are rapidly adopting cloud infrastructure as a service (IaaS), platform as a service (PaaS), and function as a service (FaaS) to increase agility and accelerate innovation. Cloud adoption is so widespread that hybrid, multi-cloud is now the norm. According to RightScale, 81% of enterprises are executing a multi-cloud strategy.
As enterprises migrate applications to the cloud or build new cloud native applications, they are also maintaining traditional applications and infrastructure. Over time, the balance will shift from the traditional tech stack to the new stack, but both new and old will continue to coexist and interact.
Different cloud platforms have different features and benefits, technologies, levels of abstraction, price, and geographic footprints that make them suitable for specific services. Enterprises started with a single cloud provider but quickly embraced multiple clouds, resulting in highly distributed application and infrastructure architectures.
The result of hybrid multi-cloud is bimodal IT—the practice of building and running two distinctly different application and infrastructure environments. Enterprises need to continue to enhance and maintain existing, relatively static environments. They also need to build and run new applications on scalable, dynamic, software-defined infrastructure in the cloud.
Putting traditional IT to one side for a moment to focus solely on multiple cloud platforms, a frequent outcome is monitoring tool proliferation. This happens because teams operate in silos, despite critical interdependencies between services running across clouds.
The challenge of multiple monitoring tools across clouds is further compounded when we bring traditional IT back into focus—and with it, the need to monitor and manage a range of existing technologies that also have service interdependencies with cloud environments.
Simplicity and cost savings were the drivers for early cloud adoption. But today, cloud use has evolved into complex and dynamic landscapes that incorporate multiple clouds as well as traditional on-premise technologies. Being able to seamlessly monitor the full technology stack across multiple clouds, as well as traditional on-premise technology stacks, is critical to automating operations—no matter how highly distributed the applications and infrastructure.
Microservices and containers introduce speed
Microservices and containers are revolutionizing the way applications are built and deployed. They provide tremendous benefits in terms of speed, agility, and scale. In fact, 98% of enterprise development teams expect microservices to become their default architecture. IDC predicts that by 2022, 90% of all apps will feature microservices architectures.
Close to three in four (72%) CIOs say that monitoring containerized microservices in real-time is almost impossible. Moving to microservices running in containers makes it harder to get visibility into environments. Each container acts like a tiny server, multiplying the number of points you need to monitor. They live, scale, and die based on health and demand. As you scale your Pivotal Platform environment from on-premise to cloud to multi-cloud, the number of dependencies and data generated increases exponentially. This makes it seem impossible to understand the system as a whole.
The traditional approach to instrumenting applications involves manual deployment of multiple agents. When environments consist of thousands of containers with orchestrated scaling, manual instrumentation becomes infeasible and severely restricts your ability to innovate.
A manual approach to instrumenting, discovering, and monitoring microservices and containers will not work. For dynamic, scalable platforms like Pivotal Platform, a fully automated approach becomes a requirement: automated agent deployment, continuous discovery of containers, and monitoring of the applications and services running within them.
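The continuous-discovery requirement described above boils down to a reconciliation loop: on each scan, compare the containers you are already monitoring against what the platform reports as running, instrument the new ones, and retire the dead ones. The sketch below is an illustrative simulation of that one step—the function and container names are hypothetical, not any vendor's implementation or API.

```python
def reconcile(monitored: set[str], discovered: set[str]) -> tuple[set[str], set[str]]:
    """Compare currently monitored containers against the latest discovery scan.

    Returns (to_instrument, to_retire): containers that appeared since the
    last scan and need an agent, and containers that have died and whose
    monitoring should be torn down.
    """
    to_instrument = discovered - monitored  # new containers needing an agent
    to_retire = monitored - discovered      # containers that no longer exist
    return to_instrument, to_retire

# One discovery cycle: two new containers appeared, one died.
monitored = {"web-1", "web-2", "cache-1"}
discovered = {"web-1", "web-2", "web-3", "queue-1"}
new, dead = reconcile(monitored, discovered)
print(sorted(new))   # ['queue-1', 'web-3']
print(sorted(dead))  # ['cache-1']
```

In a real environment this loop runs continuously against the orchestrator's API rather than a static set, which is why it must be automated: at thousands of containers that live, scale, and die on demand, no manual process can keep the monitored set in sync.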
72% of CIOs say monitoring containerized microservices in real-time is almost impossible.
- Dynatrace CIO Complexity Report 2018
Not all AI is created equal. Attempting to enhance existing monitoring tools with AI, such as machine learning and anomaly-based AI, will provide limited value. AI needs to be inherent in all aspects of the monitoring platform and see everything in real-time—from the topology of the architecture to dependencies and service flow. AI should also be able to ingest additional data sources for inclusion in its algorithms, rather than requiring people to correlate data via charts and graphs.
30% of IT organizations that fail to adopt AI will no longer be operationally viable by 2022.
Visualizing and prioritizing impact
Can you see how specific issues or overall performance impact every single user session or device? Are you then able to prioritize by magnitude?
Visibility from the edge to the core
Do you have a single view across your entire multi-cloud ecosystem—from the performance of users and edge devices to your applications and cloud platforms—and all in context?
A single source of truth for all
Are you able to ensure stakeholders—from IT to marketing—have access to the same data so you can avoid silos, finger pointing, and war rooms?
76% of CIOs say multi-cloud deployments make monitoring user experience difficult.
- Dynatrace CIO Complexity Report 2018
Check out other e-books
We offer several premium e-books on aspects of modern observability. Learn more