With the Dynatrace Grail data lakehouse, IT teams can conduct log analysis without losing data context.
Modern organizations ingest petabytes of data daily, but legacy approaches to log analysis and management cannot accommodate this volume of data.
Traditional log analysis enables organizations to mitigate myriad risks and meet compliance regulations. With more automated approaches to log monitoring and log analysis, however, organizations can gain visibility into their applications and infrastructure efficiently and with greater precision, even as cloud environments grow.
Causal AI—which brings AI-enabled actionable insights to IT operations—and a data lakehouse, such as Dynatrace Grail, can help break down silos among ITOps, DevSecOps, site reliability engineering, and business analytics teams.
At Dynatrace Perform 2023, Maciej Pawlowski, senior director of product management for infrastructure monitoring at Dynatrace, and a senior software engineer at a U.K.-based financial services group, discussed how the bank uses log monitoring on the Dynatrace platform with an emphasis on observability and security data.
Logs highlight observability challenges
Ingesting, storing, and processing the unprecedented explosion of data from sources such as software as a service, multicloud environments, containers, and serverless architectures can be overwhelming for today’s organizations.
Indeed, according to Dynatrace data, 71% of CIOs say the explosion of data from cloud-native architectures is beyond human ability to manage. Further, business leaders must often determine whether the data is relevant for the business and if they can afford it.
Logs are automatically produced and time-stamped documentation of events relevant to cloud architectures. They enable IT teams to identify and address the precise cause of application and infrastructure issues.
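To illustrate, a structured log record typically carries a timestamp plus contextual fields that tie the event back to a service and request. The fields and values below are hypothetical, shown only to make the idea concrete:

```json
{
  "timestamp": "2023-02-14T09:21:33.512Z",
  "loglevel": "ERROR",
  "service": "payment-gateway",
  "message": "Connection timeout calling downstream service",
  "trace_id": "4bf92f3577b34da6a3ce929d0e0e4736"
}
```

It is this combination of a precise timestamp and contextual fields that lets IT teams trace an application or infrastructure issue back to its cause.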
“Logs magnify these issues by far due to their volatile structure, the massive storage needed to process them, and due to potential gold hidden in their content,” Pawlowski said, highlighting the importance of log analysis.
To grasp the challenges of cross-team collaboration on observability data, consider the content of the logs generated.
“Multiple teams in an organization are looking for various data to support different usages, but issues can rarely be pinpointed by a single team or silo,” Pawlowski explained. “That’s exactly why logs illustrate the desperate need for converged observability in context.”
The Grail causational data lakehouse combines data lake and data warehouse capabilities
A data lakehouse combines the attributes of a data lake and a data warehouse, such as efficiently storing both structured and unstructured data.
“The weakness of a data lake is they fail when you need to access them fast,” Pawlowski said. “To access the data, you need to preprocess the data, create indexes, etc. And it gets worse every time you need to refine a query or extend it.”
A data warehouse, on the other hand, is an efficient and fast option for querying data. But it struggles to store unstructured data.
With the extent of observability data going beyond human capacity to manage, Grail is the first purpose-built causational data lakehouse that allows for immediate answers with cost-efficient, scalable storage.
“[Grail] also provides Dynatrace partners answers with no indexes, no schema definition, and no rehydration,” Pawlowski added.
Using Grail to heal observability pains
Grail not only stores log data at scale, but also maps out dependencies to enable fast analytics and data reasoning. Further, Grail uses Dynatrace Query Language (DQL) to analyze logs with full data context, getting to the heart of modern log monitoring, management, and analytics at scale, Pawlowski explained.
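As a sketch of what a DQL query looks like, the following hypothetical example filters error logs and aggregates them by host. The command pipeline (`fetch`, `filter`, `summarize`, `sort`) follows Dynatrace's documented DQL syntax, but the field names are assumptions and may differ in a given environment:

```
fetch logs
| filter loglevel == "ERROR"
| summarize errorCount = count(), by: { host.name }
| sort errorCount desc
```

Because the query runs against raw stored logs rather than prebuilt indexes, teams can change the filter or grouping at any time without reprocessing the data.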
The U.K.-based financial services company has 15 unique brands; more than 58,000 employees; numerous retail businesses, banking and consumer services; and other financial products, such as credit cards, lending deals, and stockbroking. Of its 26 million customers, some 18 million access digital services through its mobile apps and the web. This equates to 26,000 customer logins per minute at peak demand times.
“It’s quite a big scale,” said an engineer at the financial services group. “The Dynatrace software intelligence platform with Grail helps us understand everything that is going on in the environment, how customers interact with the environment, and problems that need to be addressed.”
Weighing the value and cost of indexed databases vs. Grail
With standard indexed databases, teams must choose relevant indexes before data ingestion. With Grail, teams can choose what’s relevant at any time to ensure data is always on hand. Standard structured query language (SQL) also has limited query operators and only simple text-search capabilities. Grail, powered by DQL, on the other hand, goes beyond standard operators and text search with a powerful data exploration and transformation engine.
“With a standard solution, you will need to wait for the new logs to arrive,” Pawlowski said. “While with Grail, you can have access to all your historical analysis at once.”
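Because Grail requires no rehydration, reaching back into history is, in principle, just a matter of widening the query timeframe. The hypothetical DQL sketch below queries the last 90 days of logs directly; the timeframe and function syntax follow Dynatrace's DQL documentation, while the search phrase and field names are illustrative assumptions:

```
fetch logs, from: now() - 90d
| filter matchesPhrase(content, "timeout")
| summarize events = count(), by: { bin(timestamp, 1d) }
```

The same query shape works whether the data arrived a minute ago or months ago, which is the contrast Pawlowski draws with index-first solutions.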
In many cases, indexed databases only provide access to a sample of statistical data summaries. Grail enables 100% precision insights into all stored data. But what about value versus cost?
Grail is at the center of the Dynatrace open AI-powered platform. Yet the ability to make decisions regarding value versus cost is prioritized at each stage of log management and analytics processes. For example, during the log ingestion, preprocessing, and processing, teams can decide what’s best and most cost-efficient for a specific use case or business requirement.
“Dynatrace allows you to make conscious decisions on values versus cost on each stage of this journey working with logs,” Pawlowski explained.
The benefits of Grail, Dynatrace, and DQL to scale operations
Cloud-native or on-premises/off-premises distributed cloud architecture scalability enables an organization to meet customer demand and fulfill service-level objectives (SLOs) while reducing infrastructure and application management costs.
“To be able to query those along with logs, traces, and metrics of events that come in later on really gives us the benefit of being able to understand from a business perspective how we are meeting SLOs and if we are doing what we set out to do,” the engineer explained, emphasizing the value of Grail, Dynatrace, and DQL for the financial services company.
Additional benefits of implementing Grail with the Dynatrace software intelligence platform and DQL include the following:
- Simple log ingestion. Business leaders can decide which logs they want to use and tune storage to their data needs. Because Grail is cloud-native off the shelf, anyone can enable it and immediately start getting logs from all sources.
- Seamless integration. Metrics and traces give context to a user’s objectives, such as checking a bank balance, making a payment, or applying for a mortgage.
- Fast, precise answers. Dynatrace Davis AI provides answers instantly, while the common DQL makes it easy to share information among stakeholders. That way, teams all work from the same data set.
- Ingesting, processing, retaining, and querying logs. Grail licensing covers ingesting, processing, retaining, and querying information, and Grail can store information for years. This allows business leaders to prioritize data based on complexity and usage.
- Dissolving data silos. DQL quickly analyzes and visualizes data from logs in a shared data query and processing language. This makes it accessible to all teams regardless of background.
Solve complex business challenges with Grail
With an observability platform like Dynatrace, IT teams can automate log analysis and security observability across the full stack. This provides organizations with precise answers about performance and security issues as their environments grow increasingly complex. Dynatrace Grail unifies data from logs, metrics, traces, and events within a real-time model. This includes topology and dependencies for instant cost-efficient, AI-powered analytics at scale.