
Boost DevOps maturity with observability and a data lakehouse

In a world of macroeconomic uncertainty, businesses increasingly turn to data-driven decision-making to stay agile.

That’s especially true of the DevOps teams who must drive digitally fueled, sustainable growth. They’re unleashing the power of cloud-based analytics on large data sets to unlock the insights they and the business need to make smarter decisions. From a technical perspective, however, cloud-based analytics can be challenging. Data volumes keep growing, making the data harder to orchestrate, process, and analyze, and harder to turn into insight. The cost and capacity constraints of managing this data are becoming a significant burden.

All of these factors challenge DevOps maturity. Teams need a technology boost to manage cloud-native data volumes, such as a data lakehouse for centralizing, managing, and analyzing data.

Data scale and silos present challenges to DevOps maturity

DevOps teams often run into problems when trying to drive better data-driven decisions with observability and security data. That’s because of the heterogeneity of the data their environments generate and the limitations of the systems they rely on to analyze it. This data complexity is thwarting many organizations’ ability to mature their DevOps practices.

What is DevOps maturity?

DevOps maturity is a model that measures the completeness and effectiveness of an organization’s processes for software development, delivery, operations, and monitoring. Many organizations, including the global advisory and technology services provider ICF, describe DevOps maturity using a maturity model framework. Organizations generally measure DevOps maturity on a scale ranging from no DevOps practices at all to continuous delivery with full automation. When extended to include application security practices, DevOps becomes DevSecOps and adds security-focused criteria, such as security by design and making security a critical release criterion.

Increasing an organization’s DevOps maturity is a key goal as teams adopt more cloud-native technologies, which make their environments more scalable and feature-rich but also more complex.

Cloud complexity leads to data silos


Most organizations are battling cloud complexity. Research has found that 99% of organizations have embraced a multicloud architecture. On top of these cloud platforms, they’re using an array of observability and security tools to deliver insight and control—seven on average. This results in siloed data that is stored in different formats, adding further complexity. What’s more, 55% of organizations admit they’re forced to make tradeoffs among quality, security, and user experience to meet the need for rapid transformation.

This challenge is exacerbated by the high cardinality of data generated by cloud-native, Kubernetes-based apps. The sheer number of permutations can break traditional databases.
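To get a feel for the scale involved, consider how quickly label combinations multiply. The sketch below is purely illustrative: the label names and counts are hypothetical, and real labels are not fully independent, so it shows an upper bound rather than a measurement.

```python
# Rough illustration: how Kubernetes labels multiply into time-series cardinality.
# Every label name and count below is hypothetical, chosen only to show the arithmetic.
from math import prod

label_cardinalities = {
    "cluster": 10,
    "namespace": 40,
    "pod": 3_000,       # pod churn keeps minting new values
    "endpoint": 50,
    "http_status": 8,
}

# Upper bound on distinct series for a single metric; labels aren't fully
# independent in practice, but churn pushes real systems toward this bound.
series_per_metric = prod(label_cardinalities.values())
print(f"{series_per_metric:,} potential series for one metric")  # 480,000,000
```

With hundreds of metrics instrumented per service, counts like this are what overwhelm index-heavy, per-series storage engines.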

Many teams look to huge cloud-based data lakes, repositories that store data in its natural or raw format, to centralize disparate data. A data lake enables teams to keep as much raw data as they want at relatively low cost until analysts find a use for it.

When it comes to extracting insight, however, teams need to transfer the data to a warehouse technology so it can be aggregated and prepared for analysis. Various teams then usually end up transferring the data again to another warehouse platform so they can run queries related to their specific business requirements. All these steps and stages slow down processes, are error-prone, and introduce additional security risks.

When data storage strategies become a problem for DevOps maturity

Data warehouse-based approaches add cost and time to analytics projects.

With a data warehouse, teams may need to manually define tens of thousands of tables to prepare data for querying. The data warehouse approach also requires a multitude of indexes and schemas to retrieve and structure the data and define the queries that teams will ask of it. That’s a lot of effort.

Any user who wants to ask a new question will need to start from scratch, redefining those tables and building new indexes and schemas, which creates a lot of manual effort. This can add hours or days to the process of querying data, meaning insights risk being stale or of limited value by the time teams surface them.
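The sketch below illustrates this schema-on-write burden in miniature. The questions and DDL are hypothetical; the point is simply that a question nobody prepared for has no tables, pipelines, or indexes waiting for it.

```python
# Rough sketch of the schema-on-write burden: in a warehouse, each question needs
# tables and indexes defined before it can be asked. All DDL here is hypothetical.
PREPARED_QUESTIONS = {
    "p95 latency per service": """
        CREATE TABLE request_latency (
            service    VARCHAR,
            region     VARCHAR,
            ts         TIMESTAMP,
            latency_ms DOUBLE
        );
        CREATE INDEX idx_latency_service_ts ON request_latency (service, ts);
    """,
}

new_question = "which deployments correlate with error spikes?"
if new_question not in PREPARED_QUESTIONS:
    # No predefined table, ETL job, or index covers this question, so someone must
    # model new tables, backfill them, and build indexes before it can be answered.
    # That is the hours-to-days delay described above.
    print("schema work required before this question can be asked")
```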

The more cloud platforms, data warehouses, and data lakes an organization maintains to support cloud operations and analytics, the more money it needs to spend. In fact, the storage space required for the indexes that support data retrieval and analysis may end up costing more than the data storage itself.

Teams will incur further costs if they need technologies to track where their data is and to monitor data handling for compliance purposes. Frequently moving data from place to place may also create inconsistencies and formatting issues, which could affect the value and accuracy of any resulting analysis.


How a data lakehouse boosts DevOps maturity

A data lakehouse approach combines the capabilities of a data warehouse and a data lake to solve the challenges associated with each architecture, thanks to its enormous scalability and massively parallel processing capabilities. With a data lakehouse approach to data retention, organizations can cope with high-cardinality data in a time- and cost-effective manner, maintaining full granularity and extra-long data retention to support instant, precise, and contextual predictive analytics.

But to realize this vision, a data lakehouse must be schemaless, indexless, and lossless.

  • Schemaless means users don’t need to predetermine the questions they want to ask of data, so new queries can be raised instantly as the business need arises.
  • Indexless means teams have rapid access to data without the storage cost and resources needed to maintain massive indexes.
  • Lossless means technical and business teams can query the data with its full context in place, such as interdependencies between cloud-based entities, to surface more precise answers to questions.
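As a rough, product-agnostic illustration of what schemaless querying means in practice, the sketch below filters and aggregates raw, semi-structured records at query time, with no tables or indexes defined up front. All field names and values are hypothetical.

```python
# Minimal schema-on-read sketch: raw records keep their full context, and the
# "schema" is whatever fields a query happens to touch. All fields are hypothetical.
from collections import Counter

raw_records = [
    {"timestamp": "2024-05-01T12:00:01Z", "kind": "log", "k8s.pod": "checkout-7d9f",
     "k8s.namespace": "shop", "level": "ERROR", "message": "payment gateway timeout"},
    {"timestamp": "2024-05-01T12:00:02Z", "kind": "log", "k8s.pod": "checkout-7d9f",
     "k8s.namespace": "shop", "level": "INFO", "message": "retrying payment"},
    {"timestamp": "2024-05-01T12:00:03Z", "kind": "log", "k8s.pod": "cart-55b2",
     "k8s.namespace": "shop", "level": "ERROR", "message": "redis connection refused"},
]

# A question nobody anticipated when the data was written: error counts per pod.
errors_per_pod = Counter(
    r["k8s.pod"] for r in raw_records if r.get("level") == "ERROR"
)
print(errors_per_pod)  # Counter({'checkout-7d9f': 1, 'cart-55b2': 1})
```

Because nothing about the question was baked in at write time, the next question, however different, can be asked against the same raw records.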

Unifying observability data to promote DevOps maturity

Let’s consider the key types of observability data that any lakehouse must be capable of ingesting to support the analytics needs of a modern digital business.


Logs

Logs are the highest-volume and often most detailed data that organizations capture for analytics projects or querying. Logs provide vital insights that help teams verify new code deployments for quality and security, identify the root causes of performance issues in infrastructure and applications, investigate malicious activity such as a cyberattack, and optimize digital services in various ways.

However, without context, logs often become just more noise that teams have to sift through to find what matters.
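As a hypothetical illustration, compare a bare log line with the same line enriched with context at ingest. The field names and values below are invented, but they show why contextualized logs become answerable rather than merely searchable.

```python
# Illustrative sketch: the same log line with and without context.
# All field names and values are hypothetical.
raw_line = "2024-05-01T12:00:01Z ERROR payment gateway timeout"

# On its own, the line above is one more thing to grep through. With context
# attached at ingest, it can be tied to a workload, a release, and a distributed
# trace, so a question like "which release started timing out?" becomes answerable.
enriched = {
    "timestamp": "2024-05-01T12:00:01Z",
    "level": "ERROR",
    "message": "payment gateway timeout",
    "k8s.workload": "checkout",
    "k8s.pod": "checkout-7d9f",
    "deployment.version": "1.42.0",
    "trace_id": "b9c7a1e4d2f34c55",
}
```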

Metrics

Metrics are the quantitative measurements of application performance or user experience that teams can calculate or aggregate over time to feed into observability-driven analytics.

The challenge is that aggregating metrics in traditional data warehouse environments can create a loss of fidelity and make it more difficult for analysts to understand the relevance of data.
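A small, made-up example shows what that loss of fidelity looks like: once raw measurements are collapsed into an average, the outlier a real user experienced can no longer be recovered.

```python
# Illustrative sketch: pre-aggregating metrics discards fidelity. The numbers are made up.
latencies_ms = [12, 14, 13, 15, 11, 13, 12, 900, 14, 13]  # one request hit a 900 ms stall

average = sum(latencies_ms) / len(latencies_ms)  # ~102 ms
worst = max(latencies_ms)                        # 900 ms

print(f"avg={average:.1f} ms, worst={worst} ms")
# If only the per-minute average is stored, the 900 ms experience a real user had
# is gone; keeping the raw measurements preserves the fidelity needed to find it.
```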

There’s also a potential scalability challenge with metrics in the context of microservices architectures. As digital services environments become increasingly distributed and are broken into smaller pieces, the sheer scale and volume of the relationships among data from different sources is too much for traditional metrics databases to capture. Only a data lakehouse can handle such high-cardinality data without losing fidelity.

Traces

Traces are the data source that reveals the end-to-end path a transaction takes across applications, services, and infrastructure. With access to the traces across all services in their hybrid and multicloud technology stack, developers can better understand the dependencies they contain and more effectively debug applications in production.

Cloud-native architectures built on Kubernetes, however, greatly increase the length of traces and the number of spans they contain, as there are more hops and additional tiers, such as service meshes, to consider. Organizations can architect a data lakehouse such that teams can better track these lengthy, distributed traces without losing data fidelity or context.
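To make this concrete, the sketch below models a trace as a set of parent/child spans and walks the slowest branch back to the entry point. The services, span IDs, and durations are hypothetical; note how the service-mesh sidecar adds spans of its own.

```python
# Illustrative sketch: a distributed trace as parent/child spans.
# Service names, span ids, and durations are hypothetical.
spans = [
    {"span_id": "a1", "parent": None, "service": "frontend",     "operation": "GET /checkout", "ms": 240},
    {"span_id": "b2", "parent": "a1", "service": "mesh-sidecar", "operation": "egress proxy",  "ms": 4},
    {"span_id": "c3", "parent": "b2", "service": "cart",         "operation": "GET /cart",     "ms": 35},
    {"span_id": "d4", "parent": "a1", "service": "mesh-sidecar", "operation": "egress proxy",  "ms": 3},
    {"span_id": "e5", "parent": "d4", "service": "payment",      "operation": "POST /charge",  "ms": 180},
    {"span_id": "f6", "parent": "e5", "service": "postgres",     "operation": "INSERT charge", "ms": 92},
]

# Walk parent links from the slowest non-root span back to the entry point.
# Every extra tier (here, the service mesh) adds spans, which is why
# cloud-native traces keep getting longer.
by_id = {s["span_id"]: s for s in spans}
slowest = max(spans, key=lambda s: s["ms"] if s["parent"] else 0)
path, current = [], slowest
while current:
    path.append(current["service"])
    current = by_id.get(current["parent"])
print(" -> ".join(reversed(path)))  # frontend -> mesh-sidecar -> payment
```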

Accelerate DevOps maturity by going beyond metrics, logs, and traces

While metrics, logs, and traces can tell you what happened, they can’t always tell you why. For that, you need additional insight and context to make analytics more precise.

If DevOps teams can build a real-time topology map of their digital services environment and feed it into a data lakehouse alongside metrics, logs, and traces, that map provides critical context about the dynamic relationships between application components across all tiers. With context from the Dynatrace observability and security analytics platform on data in the Grail data lakehouse, DevOps teams gain centralized situational awareness that enables them to raise queries about what’s happening in their multicloud environments. With this access, teams can understand how to optimize systems more effectively, which in turn helps them automate DevOps processes. It also helps DevSecOps teams build security into the software delivery lifecycle and quickly detect, investigate, and remediate the impact of security incidents.
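The sketch below is a generic, greatly simplified illustration of that idea, not the Dynatrace Grail API: telemetry joined with a topology map so a single query can follow relationships between components instead of grepping siloed data sources. All entity names are hypothetical.

```python
# Generic sketch (not a real product API): joining an alert with a topology map
# so a query can walk the call graph upstream. All names are hypothetical.
topology = {
    "checkout-service": {"runs_on": "aks-cluster-eu", "calls": ["payment-service"]},
    "payment-service":  {"runs_on": "eks-cluster-us", "calls": ["payments-db"]},
    "payments-db":      {"runs_on": "eks-cluster-us", "calls": []},
}

alerts = [{"entity": "payments-db", "signal": "connection saturation"}]

def all_upstream(entity: str) -> set[str]:
    """Find every component that directly or indirectly depends on `entity`."""
    frontier, seen = [entity], set()
    while frontier:
        current = frontier.pop()
        for name, props in topology.items():
            if current in props["calls"] and name not in seen:
                seen.add(name)
                frontier.append(name)
    return seen

for alert in alerts:
    impacted = sorted(all_upstream(alert["entity"]))
    print(f'{alert["signal"]} on {alert["entity"]} may impact: {impacted}')
# -> connection saturation on payments-db may impact: ['checkout-service', 'payment-service']
```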

Observability data can also provide insights into user session data, which teams can use to gain a better understanding of how customers interact with application interfaces. This insight helps teams identify how an issue affects users and pinpoint which optimizations the system needs and where.

As digital services environments become more complex and data volumes explode, observability is certainly becoming more challenging. However, it’s also never been more critical. With a data lakehouse-based approach, DevOps teams can finally turn petabytes of high-fidelity data into actionable intelligence without breaking the bank or becoming burnt out in the effort.

To learn more about the Dynatrace Grail data lakehouse and how it can help accelerate DevOps, join us for the on-demand webinar, Get to know Dynatrace: Grail edition.