Connecting log data to traces

Log Monitoring Classic

Dynatrace can enrich your ingested log data with additional information that helps recognize, correlate, and evaluate the data, resulting in a more refined analysis of your logs.

OneAgent version 1.239+

Automatic connection of log data to traces works for all log data, regardless of how the data was ingested by Dynatrace. You can also manually enrich ingested log data by defining a log pattern that includes the dt.span_id, dt.trace_id, dt.trace_sampled, and dt.entity.process_group_instance fields.

Log enrichment enables you to:

  • Seamlessly switch context and analyze individual spans, transactions, or entire workloads
  • Empower development teams by making it easier and faster for them to detect and pinpoint problems

I use Log Monitoring v1

To get the most out of log enrichment, you need to use the latest version of Dynatrace Log Monitoring.

Dynatrace deployments using Log Monitoring v1 can enrich your log data (log enrichment is performed by OneAgent), but Log Monitoring v1 doesn't recognize the additional information of your enriched log data, so you will not benefit from the enhanced analysis and reporting that is available with the latest version of Dynatrace Log Monitoring.

Supported frameworks

Supported frameworks for trace/span log context enrichment:

.NET

Logging frameworks and supported versions:

  • Microsoft Logging Extensions: 3.0.0+
  • Serilog: 2.9+

Apache HTTP Server

Automatic log enrichment is supported for error logs and access logs.

Go

Logging frameworks and supported versions:

  • Logrus: 1.7.1 - 1.9 (1)
  • Zap: 1.10 - 1.24

(1) Versions 1.7.0 and lower are not supported due to a race condition problem in the Logrus framework.

Java

Logging frameworks and supported versions:

  • Log4J2 (Apache): 2.7.x - 2.12.x, 2.13.0, 2.13.1, 2.13.3, 2.14.x - 2.17.1, 2.17.2 - 2.20.x
  • Logback (QOS): 1.x
  • java.util.logging: all versions supported

NGINX

Automatic log enrichment is supported for error logs; manual log enrichment is required for access logs.

Node.js

Logging frameworks and supported versions:

  • pino: >=5.14.0, 6.x, 7.x, 8.x
  • winston: 3.x

PHP

Logging frameworks and supported versions:

  • Monolog: 2.3 - 2.4, 3.0

Supported frameworks for trace/span unstructured log context enrichment:

  • .NET
  • Java

There are two ways to enrich the log data that you send to Dynatrace:

  • Automatic log enrichment
    This method is recommended for common technologies and applications generating structured log data.
  • Manual log enrichment
    This method is recommended for custom technologies and applications generating unstructured log data.

Enrich logs automatically

You can enable log enrichment for a particular technology used to create log data and let Dynatrace automatically inject additional attributes into every log record received. This method is recommended for structured log data of known technologies.

Limiting log enrichment

Use Process group override to limit log enrichment to a specific process group or a process within a process group.

Enable/disable log enrichment for a specific technology

To enable log enrichment for a specific technology:

  1. In the Dynatrace menu, go to Settings and select Preferences > OneAgent features.
  2. Filter for enrichment.
  3. Enable or disable log enrichment for each technology that you use to generate ingested log data.
  4. Select Save changes to save your configuration.

To limit log enrichment to a specific process group (Process group override):

  1. Open the process group you are looking for.
  2. Select More (…) > OneAgent features.
  3. Filter for enrichment.
  4. Enable or disable log enrichment for each technology that you use to generate ingested log data.
  5. Select Save changes to save your configuration.

What does automatic log enrichment do?

Log enrichment modifies your ingested log data and adds the following information to each detected log record:

  • dt.trace_id
  • dt.span_id
  • dt.entity.process_group_instance

Structured log data

For structured log data such as JSON, XML, and well-defined text log formats, Dynatrace adds an attribute field to the log record entry.

Example of enriched log data in JSON format

Log data in JSON format is enriched with additional dt.trace_id, dt.span_id, and dt.entity.process_group_instance properties.

json
{ "severity": "error", "time": 1638957438023, "pid": 1, "hostname": "paymentservice-788946fdcd-42lgq", "name": "paymentservice-charge", "dt.trace_id": "d04b42bc9f4b6ecdbf6bc9f4b6ecdbc", "dt.span_id": "9adc716eb808d428", "dt.entity.process_group_instance": "PROCESS_GROUP_INSTANCE-27204EFED3D8466E", "message": "Unsupported card type for cardNumber=************0454" }

Example of enriched log data in XML format

Log data in XML format is enriched with additional <dt.trace_id>, <dt.span_id>, and <dt.entity.process_group_instance> nodes.

xml
<?xml version="1.0" encoding="windows-1252" standalone="no"?>
<record>
  <date>2021-08-24T14:41:36.565218700Z</date>
  <millis>1629816096565</millis>
  <nanos>218700</nanos>
  <sequence>0</sequence>
  <logger>com.apm.testapp.logging.jul.XMLLoggingSample</logger>
  <level>INFO</level>
  <class>com.apm.testapp.logging.jul.BaseLoggingSample</class>
  <method>info</method>
  <thread>1</thread>
  <message>Update completed successfully.</message>
  <dt.trace_id>513fcd4e9b08792fcd4e9b08792</dt.trace_id>
  <dt.span_id>125840e3125840e3</dt.span_id>
  <dt.entity.process_group_instance>PROCESS_GROUP_INSTANCE-27204EFED3D8466E</dt.entity.process_group_instance>
</record>

Unstructured log data

Important

Check if Dynatrace log enrichment has an impact on your existing log data pipeline before using automatic log enrichment on unstructured log data.

Unstructured log data typically consists of raw plain text that is sequentially ordered and designed to be read by people. Dynatrace does not enrich unstructured log data automatically: although it is able to do so, appending additional information to log data may have an impact on third-party tools that consume the same log data.

Example of enriched log data in raw text format

Log data in raw text is enriched with an additional [!dt dt.trace_id=$trace_id, dt.span_id=$span_id, dt.entity.process_group_instance=$dt.entity.process_group_instance] string (attributes and their values).

plaintext
127.0.0.1 - [21/Oct/2021:10:33:28 +0200] GET /index.htm HTTP/1.1 404 597 [!dt dt.trace_id=aa764ee37ebaa764ee37eaa764ee37e,dt.span_id=b93ede8b93ede8, dt.entity.process_group_instance=PROCESS_GROUP_INSTANCE-27204EFED3D8466E]

Enrich logs manually

OneAgent version 1.239+

You can manually enrich your Dynatrace ingested log data by defining a log pattern to include the dt.span_id, dt.trace_id, dt.trace_sampled, and dt.entity.process_group_instance fields. You can enable manual log enrichment for a specific technology by following the Log enrichment steps.

Be sure to follow these rules for the format of the enriched fields in an unstructured log:

  • Fields must be encapsulated in square brackets ([]) with a !dt prefix.
    For example, [!dt dt.trace_id=$dt_trace_id,dt.span_id=$dt_span_id, dt.entity.process_group_instance=$dt.entity.process_group_instance]
  • Fields must be formatted without double quotes.
  • Any invalid characters for the field and field value must be escaped.
  • Any control characters like \n must be excluded from the enrichment definition.
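
As a minimal, hypothetical sketch of these rules, the Java helper below assembles an enrichment suffix from IDs obtained elsewhere (for example, from the span context as shown later on this page); the class and method names are illustrative only.

java
public final class DtEnrichment {

    private DtEnrichment() {
    }

    // Builds the enrichment suffix: fields are wrapped in square brackets with the
    // !dt prefix, values are left unquoted, and no control characters (such as \n)
    // are added.
    public static String enrichmentSuffix(String traceId, String spanId, String processGroupInstance) {
        return String.format(
            "[!dt dt.trace_id=%s,dt.span_id=%s,dt.entity.process_group_instance=%s]",
            traceId, spanId, processGroupInstance);
    }
}

Appending the returned string to the end of an unstructured log line produces a record that Dynatrace can connect to the corresponding trace.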

Example of manually enriching NGINX log data

Suppose you want to manually enrich your NGINX log data with dt.trace_id, dt.span_id, and dt.trace_sampled. The NGINX configuration file contains numerous standard NGINX variables; your log format definition must be placed in the log_format directive. For example:

plaintext
log_format custom '$remote_addr - [$time_local] $request $status $body_bytes_sent [!dt dt.trace_id=$dt_trace_id,dt.span_id=$dt_span_id,dt.trace_sampled=$dt_trace_sampled]';
access_log logs/access.log custom;

The result will be an access.log file containing the enriched log records:

plaintext
127.0.0.1 - [22/Mar/2022:08:50:45 +0100] GET /index.htm HTTP/1.1 200 30 [!dt dt.trace_id=b9e5c9ec08be5fab5071d76f427be7da,dt.span_id=43c5bb9432593963,dt.trace_sampled=true]
127.0.0.1 - [22/Mar/2022:08:50:45 +0100] GET /index.htm HTTP/1.1 200 30 [!dt dt.trace_id=01e52950b145d97bf22345e68c5e6c58,dt.span_id=de819d856eecb236,dt.trace_sampled=true]

For OneAgent version 1.237 and earlier, the NGINX variables used are different. For example:

plaintext
log_format custom '$remote_addr - [$time_local] $request $status $body_bytes_sent [!dt dt.trace_id=$trace_id,dt.span_id=$span_id]';
access_log logs/access.log custom;

The result will be an access.log file containing the enriched log records:

plaintext
127.0.0.1 - [21/Oct/2021:10:33:28 +0200] GET /index.htm HTTP/1.1 404 597 [!dt dt.trace_id=e1c0afeb0b8a91d7748139aa764ee37e,dt.span_id=e5e6748fab93ede8]
127.0.0.1 - [21/Oct/2021:10:33:31 +0200] GET /index.html HTTP/1.1 200 1056 [!dt dt.trace_id=81fe7816ba6c38f7aa09aef3684cd941,dt.span_id=3bdacc466ae073cd]

If you use a logging framework and log formatter that allows custom log patterns, you can adapt the pattern in the log formatter and directly access the Dynatrace enrichment attributes.

Example of manually enriching Log4j log data

In the Log4j PatternLayout, you can specify a pattern like this to include Dynatrace enrichment information:

xml
<PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} dt.trace_id=%X{dt.trace_id} dt.span_id=%X{dt.span_id} dt.entity.process_group_instance=%X{dt.entity.process_group_instance} - %msg%n"/>
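
For illustration, the minimal usage sketch below assumes that the pattern above is configured in log4j2.xml and that the dt.* attributes are available in the Log4j thread context for the currently monitored transaction; the class and log message are hypothetical.

java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class PaymentService {

    private static final Logger LOGGER = LogManager.getLogger(PaymentService.class);

    public void charge(String cardNumber) {
        // With the PatternLayout above, %X{dt.trace_id}, %X{dt.span_id}, and
        // %X{dt.entity.process_group_instance} resolve to the values present in the
        // thread context, so this ordinary logging call produces an enriched log line
        // without any additional code.
        LOGGER.error("Unsupported card type for cardNumber={}", cardNumber);
    }
}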

Example of manually enriching Logstash Logback encoder

Logback is a successor to the Log4j project. The Logstash Logback encoder is an extension that provides Logback encoders, layouts, and appenders to log in JSON and other formats supported by Jackson.

The following is an example of manual enrichment using the Logstash encoder. Note the additional mdc property in the configuration file, where you can include MDC variables.

xml
<appender name="COMPOSITEJSONENCODER" class="ch.qos.logback.core.FileAppender">
  <file>compositejsonencoder.log</file>
  <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
    <providers>
      <timestamp>
        <fieldName>timestamp</fieldName>
        <timeZone>UTC</timeZone>
      </timestamp>
      <loggerName>
        <fieldName>logger</fieldName>
      </loggerName>
      <logLevel>
        <fieldName>level</fieldName>
      </logLevel>
      <threadName>
        <fieldName>thread</fieldName>
      </threadName>
      <mdc>
        <includeMdcKeyName>dt.span_id</includeMdcKeyName>
        <includeMdcKeyName>dt.trace_id</includeMdcKeyName>
        <includeMdcKeyName>dt.entity.host</includeMdcKeyName>
      </mdc>
      <stackTrace>
        <fieldName>stackTrace</fieldName>
        <!-- maxLength - limit the length of the stack trace -->
        <throwableConverter class="net.logstash.logback.stacktrace.ShortenedThrowableConverter">
          <maxDepthPerThrowable>200</maxDepthPerThrowable>
          <maxLength>14000</maxLength>
          <rootCauseFirst>true</rootCauseFirst>
        </throwableConverter>
      </stackTrace>
      <message />
      <throwableClassName>
        <fieldName>exceptionClass</fieldName>
      </throwableClassName>
    </providers>
  </encoder>
</appender>

NGINX ingress with Kubernetes

You can enrich your logs using NGINX ingress with Kubernetes in two steps:

  1. Follow the ingress-nginx on Kubernetes instrumentation instructions.
  2. Add the lines below to the configmap.yaml file for NGINX ingress. Note: adding the main-snippet line enables OneAgent ingestion and is optional if you have already followed the manual instrumentation instructions.
plaintext
main-snippet: load_module /opt/dynatrace/oneagent/agent/bin/current/linux-musl-x86-64/liboneagentnginx.so;
log-format-upstream: '$remote_addr - $remote_user [$time_local] "$request" [!dt dt.trace_id=$dt_trace_id,dt.span_id=$dt_span_id,dt.trace_sampled=$dt_trace_sampled] $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_length'
Example of configmap.yaml file
plaintext
apiVersion: v1
kind: Namespace
metadata:
  name: prod-ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
---
# Source: ingress-nginx/templates/controller-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.6
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.4
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: prod-ingress-nginx
automountServiceAccountToken: true
---
# Source: ingress-nginx/templates/controller-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.6
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.4
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: prod-ingress-nginx
data:
  allow-snippet-annotations: 'true'
  main-snippet: load_module /opt/dynatrace/oneagent/agent/bin/current/linux-musl-x86-64/liboneagentnginx.so;
  log-format-upstream: '$remote_addr - $remote_user [$time_local] "$request" [!dt dt.trace_id=$dt_trace_id,dt.span_id=$dt_span_id,dt.trace_sampled=$dt_trace_sampled] $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_length'
...

Retrieve span and trace IDs

To match log messages with the corresponding traces, you can include dt.span_id and dt.trace_id information in your logs using OpenTelemetry Python, OpenTelemetry JavaScript (Node.js), or OpenTelemetry Java, as shown in the examples below:

OpenTelemetry Python

In the example below, a dt_log function has been created to enrich a given log message with trace_id and span_id information. Printing this enriched message to stdout associates the log message with the currently active span in the Dynatrace web UI.

python
from opentelemetry import trace

def dt_log(msg):
    ctx = trace.get_current_span().get_span_context()
    trace_id = format(ctx.trace_id, "032x")
    span_id = format(ctx.span_id, "016x")
    print("[!dt dt.trace_id={},dt.span_id={}] - {}".format(trace_id, span_id, msg))

def lambda_handler(event, context):
    msg = "Hello World"
    dt_log(msg)
    return {
        "statusCode": 200,
        "body": msg
    }

OpenTelemetry JavaScript (Node.js)

In the example below, a dtLog function has been created to enrich a given log message with trace_id and span_id information. Printing this enriched message to stdout associates the log message with the currently active span in the Dynatrace web UI.

javascript
const opentelemetry = require('@opentelemetry/api');

function dtLog(msg) {
  let current_span = opentelemetry.trace.getSpan(opentelemetry.context.active());
  let trace_id = current_span.spanContext().traceId;
  let span_id = current_span.spanContext().spanId;
  console.log(`[!dt dt.trace_id=${trace_id},dt.span_id=${span_id}] - ${msg}`);
}

exports.handler = function(event, context) {
  let msg = "Hello World";
  dtLog(msg);
  context.succeed({ statusCode: 200, body: msg });
};

OpenTelemetry Java

In the example below, a dtLog method has been created to enrich a given log message with TraceId and SpanId information. Printing this enriched message via System.out associates the log message with the currently active span in the Dynatrace web UI.

java
package com.amazonaws.lambda.demo;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.SpanContext;

public class HelloJava implements RequestHandler<Object, String> {

    private static void dtLog(final String msg) {
        SpanContext spanContext = Span.current().getSpanContext();
        System.out.printf(
            "[!dt dt.trace_id=%s,dt.span_id=%s] - %s%n",
            spanContext.getTraceId(), spanContext.getSpanId(), msg
        );
    }

    @Override
    public String handleRequest(Object input, Context context) {
        String msg = "Hello World";
        dtLog(msg);
        return msg;
    }
}

For details on configuration, see AWS Lambda logs in context of traces.

Retrieving the span_id and trace_id fields is also possible via API using the OneAgent SDK for Go and OneAgent SDK for .NET.

For instructions on how to source these attributes via OneAgent SDK:

  • Go: see the Go documentation on GitHub
  • .NET: see the .NET documentation on GitHub

Retrieve group instance ID

You can get the dt.entity.process_group_instance field by reading the Dynatrace enrichment metadata files and merging their attributes into your OpenTelemetry resource. In the OpenTelemetry Python example below, the process group instance ID is retrieved as one of the attributes delivered in the merged dictionary:

With OneAgent, you can simply point to a local endpoint without an authentication token to enable trace ingestion.

python
import json
from opentelemetry import trace as OpenTelemetry
from opentelemetry.exporter.otlp.proto.http.trace_exporter import (
    OTLPSpanExporter,
)
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider, sampling
from opentelemetry.sdk.trace.export import (
    BatchSpanProcessor,
)

merged = dict()
for name in ["dt_metadata_e617c525669e072eebe3d0f08212e8f2.json", "/var/lib/dynatrace/enrichment/dt_metadata.json"]:
    try:
        data = ''
        with open(name) as f:
            data = json.load(f if name.startswith("/var") else open(f.read()))
        merged.update(data)
    except:
        pass

merged.update({
    "service.name": "python-quickstart",  # TODO Replace with the name of your application
    "service.version": "1.0.1",  # TODO Replace with the version of your application
})

resource = Resource.create(merged)
tracer_provider = TracerProvider(sampler=sampling.ALWAYS_ON, resource=resource)
OpenTelemetry.set_tracer_provider(tracer_provider)
tracer_provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(
        endpoint="http://localhost:14499/otlp/v1/traces"
    )))

When using OneAgent, make sure to enable the public Extension Execution Controller in your Dynatrace Settings, otherwise no data will be sent.

In the Dynatrace menu, go to Settings > Preferences > Extension Execution Controller. The toggles Enable Extension Execution Controller and Enable local PIPE/HTTP metric and Log Ingest API should be active.

For details on configuration, see Instrument Python applications with OpenTelemetry.

Limitations

If you use a custom winston formatter/transport (applicable to Node.js only), you need to manually add the injected dt.trace_id and dt.span_id attributes, as in the example below:

javascript
const winston = require("winston");
const Transport = require("winston-transport");

class CustomTransport extends Transport {
  log(info, next) {
    // This line only picks up timestamp, level, and message, but nothing else from the metadata
    let myLogLine = `MyLogLine: ${info.timestamp} level=${info.level}: ${info.message}`;
    if (info["dt.trace_id"]) {
      myLogLine += ` [!dt dt.trace_id=${info["dt.trace_id"]},dt.span_id=${info["dt.span_id"]},dt.trace_sampled=${info["dt.trace_sampled"]}]`;
    }
    console.log(myLogLine);
    next();
  }
}

const logger = winston.createLogger({
  level: "info",
  format: winston.format.timestamp(),
  transports: [
    new CustomTransport(), // this transport includes all metadata (including the Dynatrace-added trace ID)
    new winston.transports.Console({ format: winston.format.simple() })
  ]
});

Related topics
  • Log analysis with PurePath® technology

    Enhance your distributed trace analysis with logs.