What is log aggregation?

Log aggregation is a software function that collects, stores, and analyzes log data produced by applications and infrastructure in a central repository. By consolidating logs into a unified data store, log aggregation can make it easier to detect bottlenecks, measure resource utilization, and predict trends over time. Centralized log management and analytics also help DevSecOps teams promptly identify and resolve network anomalies related to security incidents.

Log aggregation helps provide visibility into complex, distributed infrastructure by making log data more structured and searchable. The basic steps involved in log aggregation, illustrated in the sketch after this list, include:

  • Identifying log sources. Components of the IT environment that generate log data are identified. Depending on the specific use case, this may encompass all log data or just a limited set of data from certain events, such as failed logins, queries exceeding a certain time threshold, or critical error messages from a Kubernetes cluster.
  • Collecting logs. Log collectors, or agents, are deployed on system components to gather log entries locally. These entries are then forwarded to a centralized location. Dynatrace OneAgent automatically discovers log data from various sources within an environment.
  • Storing in a centralized location. The collected data is stored in a central repository, such as a data lakehouse, dedicated server, cloud-based SaaS platform, or another reliable and easily accessible location.
  • Normalizing log data. Log entries from various sources are parsed, transformed, and enriched to maintain a consistent format with pertinent metadata. Dynatrace OneAgent collects raw data from monitored components. Davis AI then processes, enriches, and transforms the data to provide insights, detect anomalies, and correlate events.
  • Analyzing data. Users are provided with visual representations of log data and insights through tools such as dashboards and charts. This helps improve performance and detect security threats.
  • Setting up alerts. Alerts are configured for specific events or patterns detected in the data, enabling administrators to proactively address issues.
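
As a generic illustration of these steps (not Dynatrace's implementation), the following Python sketch collects two hypothetical raw entries, normalizes them into a consistent schema, stores them in an in-memory stand-in for a central repository, and applies a trivial alert rule. All names and sample data are illustrative.

```python
import json
import re
from datetime import datetime, timezone

# Hypothetical raw entries as they might arrive from two different sources;
# a real collector (e.g., an agent tailing files) would stream these continuously.
RAW_ENTRIES = [
    ("web-frontend", "2024-05-01 12:00:03 ERROR request to /checkout failed after 5012 ms"),
    ("auth-service", '{"ts": "2024-05-01T12:00:04Z", "level": "WARN", "msg": "failed login for user 42"}'),
]

def normalize(source: str, raw: str) -> dict:
    """Parse a raw entry into a consistent schema: timestamp, severity, message, source."""
    try:
        data = json.loads(raw)  # structured (JSON) source
        return {"timestamp": data["ts"], "severity": data["level"],
                "message": data["msg"], "source": source}
    except json.JSONDecodeError:
        # Plain-text source: split "<date> <time> <LEVEL> <message>"
        ts, level, msg = re.match(r"(\S+ \S+) (\w+) (.*)", raw).groups()
        return {"timestamp": ts, "severity": level, "message": msg, "source": source}

def alert_rule(record: dict) -> bool:
    """A trivial alert rule: flag anything at ERROR severity."""
    return record["severity"].upper() == "ERROR"

central_store = []  # stand-in for a real central repository
for source, raw in RAW_ENTRIES:
    record = normalize(source, raw)
    record["ingested_at"] = datetime.now(timezone.utc).isoformat()  # enrichment metadata
    central_store.append(record)
    if alert_rule(record):
        print(f"ALERT from {record['source']}: {record['message']}")

print(f"{len(central_store)} normalized records in the central store")
```

In Dynatrace, OneAgent and Davis AI handle the collection, enrichment, and anomaly-detection roles sketched here automatically and at scale.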

Dynatrace OpenPipeline further enhances log aggregation for API-ingested logs, enriching and contextualizing them just as it does logs ingested through Dynatrace OneAgent. As a result, OpenPipeline elevates Dynatrace log aggregation by enriching and contextualizing all logs, whether ingested through API, OpenTelemetry, or OneAgent. With Dynatrace log management and analytics, SRE and DevOps teams can automatically discover, ingest, and manage logs in context, conduct real-time analysis, create dashboards, and automate anomaly detection.
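
As a minimal sketch of API ingestion, the following Python snippet pushes a single log event to a Dynatrace environment. The endpoint path (/api/v2/logs/ingest), the token placeholder, and the attribute names are assumptions to verify against the Dynatrace Log Monitoring API documentation for your environment.

```python
import json
import urllib.request

# Placeholders: your environment URL and an API token with the logs ingest scope.
DT_ENV_URL = "https://abc12345.live.dynatrace.com"
DT_API_TOKEN = "dt0c01.XXXX"  # hypothetical token placeholder

events = [
    {
        "timestamp": "2024-05-01T12:00:04Z",
        "severity": "warn",
        "content": "failed login for user 42",
        "log.source": "auth-service",  # extra attributes give OpenPipeline context to work with
    }
]

request = urllib.request.Request(
    url=f"{DT_ENV_URL}/api/v2/logs/ingest",
    data=json.dumps(events).encode("utf-8"),
    headers={
        "Authorization": f"Api-Token {DT_API_TOKEN}",
        "Content-Type": "application/json; charset=utf-8",
    },
    method="POST",
)

with urllib.request.urlopen(request) as response:
    print(response.status)  # a 2xx status indicates the events were accepted
```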