Collector Configuration
To configure your Collector instance, you define each component (receiver, optional processor, and exporter) individually in a YAML file and enable them via pipelines.
Configuration example
Here is an example YAML file for a very basic Collector configuration that can be used to export OpenTelemetry traces, metrics, and logs to Dynatrace.
receivers:
  otlp:
    protocols:
      grpc:
      http:
processors:
  cumulativetodelta:
exporters:
  otlphttp:
    endpoint: "https://{your-environment-id}.live.dynatrace.com/api/v2/otlp"
    headers:
      Authorization: "Api-Token ${API_TOKEN}"
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: []
      exporters: [otlphttp]
    metrics:
      receivers: [otlp]
      processors: [cumulativetodelta]
      exporters: [otlphttp]
    logs:
      receivers: [otlp]
      processors: []
      exporters: [otlphttp]
In this YAML file, we configure the following components:
- An OTLP receiver (otlp) that can receive data via gRPC and HTTP
- A processor to convert any metrics with cumulative temporality to delta temporality (see Delta metrics for more details)
- An OTLP HTTP exporter (otlphttp) configured with the Dynatrace endpoint and API token

In the example configuration above, the Dynatrace token needs to have the Ingest OpenTelemetry traces (openTelemetryTrace.ingest), the Ingest metrics (metrics.ingest), and the Ingest logs (logs.ingest) permissions.
The section on API tokens provides more information on how to obtain and configure your API token.
Within the service section, you enable each configured component separately.
- Extensions are enabled in their own section, while receivers, processors, and exporters are grouped under a pipeline section.
- Pipelines can be of type traces, metrics, or logs.
- Each receiver/processor/exporter can be used in more than one pipeline. For processors referenced in multiple pipelines, each pipeline gets a separate instance of the processor. This contrasts with receivers/exporters referenced in multiple pipelines, where only one instance of a receiver/exporter is used for all pipelines. Also, note that the order of processors dictates the order in which data is processed.
- You can also define the same component type more than once, for example two receivers of the same type distinguished by name, or even two or more distinct pipelines (see the sketch after this list).
- Even if a component is properly configured in its section, it will not be enabled unless it's also referenced in the service section.
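For illustration, here is a minimal sketch (not part of the Dynatrace example above) showing how a second receiver of the same type can be defined under a custom name and how the same exporter can be reused across two pipelines; the name suffix "internal" and port 4319 are arbitrary example values.
receivers:
  otlp:
    protocols:
      grpc:
  otlp/internal:            # second receiver of the same type, distinguished by its name
    protocols:
      http:
        endpoint: 0.0.0.0:4319
exporters:
  otlphttp:
    endpoint: "https://{your-environment-id}.live.dynatrace.com/api/v2/otlp"
    headers:
      Authorization: "Api-Token ${API_TOKEN}"
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlphttp]
    traces/internal:         # a second, distinct traces pipeline reusing the same exporter
      receivers: [otlp/internal]
      exporters: [otlphttp]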
Where to place the config
When running a local build of the Collector, the configuration can be provided as a plain YAML file.
git clone https://github.com/open-telemetry/opentelemetry-collector.git
cd opentelemetry-collector
make install-tools
make otelcorecol
./bin/otelcorecol_* --config ./examples/local/otel-config.yaml
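You can also check that the configuration file parses cleanly before starting the binary; recent Collector builds provide a validate subcommand for this (availability depends on your Collector version).
./bin/otelcorecol_* validate --config ./examples/local/otel-config.yaml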
To run the Collector using a pre-built image, use the following commands. The Collector config has to be in a YAML file on your disk and mounted into the container as shown.
docker pull otel/opentelemetry-collector
docker run -v $(pwd)/config.yaml:/etc/otelcol/config.yaml otel/opentelemetry-collector
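The command above only mounts the configuration. To send OTLP data to the containerized Collector from outside the container, you also need to publish the receiver ports (4317 for OTLP/gRPC, 4318 for OTLP/HTTP), for example:
docker run -p 4317:4317 -p 4318:4318 \
  -v $(pwd)/config.yaml:/etc/otelcol/config.yaml \
  otel/opentelemetry-collector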
It's also possible to add the Collector to an existing docker-compose.yaml file:
  # Collector
  otel-collector:
    image: otel/opentelemetry-collector
    command: ["--config=/etc/otel-collector-config.yaml"]
    volumes:
      - ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
    ports:
      - "1888:1888"   # pprof extension
      - "8888:8888"   # Prometheus metrics exposed by the Collector
      - "8889:8889"   # Prometheus exporter metrics
      - "13133:13133" # health_check extension
      - "4317:4317"   # OTLP gRPC receiver
      - "4318:4318"   # OTLP HTTP receiver
      - "55679:55679" # zpages extension
Deploying the Collector as a custom resource requires a working OpenTelemetry Kubernetes operator in the target cluster. The Collector config is then part of the CR manifest.
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel-col
  namespace: otel-col
spec:
  mode: daemonset
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
          http:
    exporters:
      otlphttp:
        endpoint: "https://{your-environment-id}.live.dynatrace.com/api/v2/otlp"
        headers:
          Authorization: "Api-Token <API_TOKEN>"
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: []
          exporters: [otlphttp]
        metrics:
          receivers: [otlp]
          processors: []
          exporters: [otlphttp]
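Assuming the manifest above is saved as, for example, collector.yaml (the file name is arbitrary), it can be applied like any other Kubernetes resource:
kubectl apply -f collector.yaml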
When using Helm to deploy the Collector, the Collector configuration goes into a values file that you pass to Helm. Here is an example of such a values file.
mode: daemonset
presets:
  logsCollection:
    enabled: true
    includeCollectorLogs: true
config:
  receivers:
    otlp:
      protocols:
        grpc:
        http:
  exporters:
    otlphttp:
      endpoint: "https://{your-environment-id}.live.dynatrace.com/api/v2/otlp"
      headers:
        Authorization: "Api-Token <API_TOKEN>"
  service:
    pipelines:
      traces:
        receivers: [otlp]
        processors: []
        exporters: [otlphttp]
      metrics:
        receivers: [otlp]
        processors: []
        exporters: [otlphttp]
Now you can use the Helm charts provided by the OpenTelemetry community to run the Collector in your Kubernetes cluster.
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm install my-opentelemetry-collector open-telemetry/opentelemetry-collector -f values.yaml
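Once the chart is installed, you can check that the Collector pods are running; the label selector below assumes the default labels set by the opentelemetry-collector chart.
kubectl get pods -l app.kubernetes.io/name=opentelemetry-collector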
Delta metrics
Dynatrace requires metrics data to be sent with delta temporality and not cumulative temporality.
If your application doesn't allow you to configure delta temporality, you can use the cumulativetodelta processor to have your Collector instance convert cumulative values to delta values. The configuration example above shows how to configure and reference the processor in your Collector configuration.
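By default, the processor converts all metrics. If you only want to convert selected metrics, the processor also accepts an include (or exclude) block; the metric names below are placeholders.
processors:
  cumulativetodelta:
    include:
      metrics:
        - system.network.io        # placeholder metric names
        - my.app.request.count
      match_type: strict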
Chained and load-balanced Collectors
When you use more than one Collector instance, it's important to maintain stable value propagation across all instances.
This is particularly important when you send OTLP requests across different Collector instances (for example, load balancing), as each Collector instance keeps track of its own delta offset, which may break the data reported to the Dynatrace backend.
In such scenarios, we recommend routing your OTLP requests through a single outbound Collector instance that forwards the data to the Dynatrace backend and takes care of the delta conversion. The other Collector instances should keep a cumulative aggregation to ensure stable and consistent value propagation.
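As an illustration, here is a minimal, hypothetical sketch of such a setup: the inner Collectors forward unmodified (cumulative) OTLP data to a single gateway Collector, and only the gateway runs the cumulativetodelta processor before exporting to Dynatrace. The gateway hostname is a placeholder.
# Inner Collector instances: receive OTLP and forward cumulative data unchanged
receivers:
  otlp:
    protocols:
      grpc:
exporters:
  otlp:
    endpoint: otel-gateway.example.internal:4317   # placeholder gateway address
    tls:
      insecure: true                               # assuming plain gRPC inside the network
service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [otlp]

# Gateway Collector: the single instance that performs the delta conversion
receivers:
  otlp:
    protocols:
      grpc:
processors:
  cumulativetodelta:
exporters:
  otlphttp:
    endpoint: "https://{your-environment-id}.live.dynatrace.com/api/v2/otlp"
    headers:
      Authorization: "Api-Token ${API_TOKEN}"
service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [cumulativetodelta]
      exporters: [otlphttp]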
API tokens
OTLP requests to ActiveGate require authentication information provided by Dynatrace API tokens.
The previous configuration sample shows how to configure the Authorization header for the exporter.
exporters:
  otlphttp:
    headers:
      Authorization: "Api-Token ${API_TOKEN}"
While you could hardcode the API token, we recommend using an external data source, such as environment variables, for better security.
In this example, we specify the API token using the environment variable API_TOKEN and reference the variable with the ${} notation.
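How you set the environment variable depends on how you run the Collector; for the Docker example above, it could, for instance, be passed with -e (the token value is a placeholder).
export API_TOKEN=dt0c01.your-token-value   # placeholder, do not commit real tokens
docker run -e API_TOKEN \
  -v $(pwd)/config.yaml:/etc/otelcol/config.yaml \
  otel/opentelemetry-collector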