Batch OTLP requests
The following configuration example shows how to configure a Collector instance with its native batch processor to queue and batch OTLP requests and improve throughput.
```yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch:
    send_batch_max_size: 1000
    timeout: 30s
    send_batch_size: 800

exporters:
  otlphttp:
    endpoint: $DT_ENDPOINT/api/v2/otlp
    headers:
      Authorization: "Api-Token $DT_API_TOKEN"

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
```
This configuration requires:

- At least the Core distribution of the Collector with the batch processor
- The API URL of your Dynatrace environment
- An API token with the relevant access scope
In this configuration, we set up the following components:
Under `receivers`, we specify the standard `otlp` receiver as the active receiver component for our Collector instance. This is for demonstration purposes; you can specify any other valid receiver here.
Under `processors`, we specify the `batch` processor with the following parameters:

- `send_batch_max_size` configured for a maximum of 1,000 entries per batch
- `timeout` configured to always send data after 30 seconds, regardless of any other batch limits
- `send_batch_size` configured to always send data after 800 entries, regardless of any other batch limits
With this configuration, the Collector queues telemetry entries in batches and sends a batch either after 30 seconds have passed or at least 800 entries are queued.
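The interaction of these three parameters can be sketched in a few lines. The following is an illustrative simulation of the flush triggers, not Collector source code: a batch is sent once `send_batch_size` entries are queued or once `timeout` elapses, whichever comes first, and `send_batch_max_size` caps how many entries a single outgoing batch may contain. The `BatchQueue` class and its method names are hypothetical.

```python
import time

class BatchQueue:
    """Illustrative model of the batch processor's flush triggers."""

    def __init__(self, send_batch_size=800, send_batch_max_size=1000, timeout=30.0):
        self.send_batch_size = send_batch_size
        self.send_batch_max_size = send_batch_max_size
        self.timeout = timeout
        self.entries = []
        self.last_flush = time.monotonic()

    def add(self, entry):
        """Queue an entry; flush if the size trigger is reached."""
        self.entries.append(entry)
        if len(self.entries) >= self.send_batch_size:
            return self.flush()
        return None

    def tick(self):
        """Called periodically; flush if the timeout elapsed with entries queued."""
        if self.entries and time.monotonic() - self.last_flush >= self.timeout:
            return self.flush()
        return None

    def flush(self):
        # Never ship more than send_batch_max_size entries in one batch;
        # any overflow stays queued for the next flush.
        batch = self.entries[: self.send_batch_max_size]
        self.entries = self.entries[self.send_batch_max_size :]
        self.last_flush = time.monotonic()
        return batch
```

For example, with `send_batch_size=3`, the third `add()` call returns a batch immediately, while a sparse stream of entries is instead picked up by `tick()` after the timeout.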
Under `exporters`, we specify the default `otlphttp` exporter and configure it with our Dynatrace API URL and the required authentication token.
For this purpose, we set the following two environment variables and reference them in the configuration values for `endpoint` and `Authorization`:
- `DT_ENDPOINT` contains the base URL of your ActiveGate
- `DT_API_TOKEN` contains the API token
Under `service`, we assemble our receiver and exporter objects into pipelines for traces, metrics, and logs and enable our batch processor by referencing it under `processors` for each respective pipeline.
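Once the Collector is running with this configuration, you can verify that the pipeline accepts data by posting a minimal OTLP/HTTP JSON log record to the receiver. The sketch below uses only the Python standard library and assumes the Collector's default OTLP/HTTP port 4318 on localhost; the `send_test_log` helper is hypothetical, not part of any SDK.

```python
import json
import urllib.request
import urllib.error

def send_test_log(endpoint="http://localhost:4318/v1/logs"):
    """Post one minimal OTLP/HTTP JSON log record to the Collector.

    Returns the HTTP status code on success, or a description string
    if the Collector is not reachable.
    """
    # Minimal payload following the OTLP/HTTP JSON encoding for logs.
    payload = {
        "resourceLogs": [{
            "scopeLogs": [{
                "logRecords": [{"body": {"stringValue": "smoke test"}}]
            }]
        }]
    }
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.status
    except (urllib.error.URLError, OSError) as exc:
        return f"Collector not reachable: {exc}"
```

If the pipeline is wired up correctly, the batch processor queues this record and the `otlphttp` exporter forwards it to Dynatrace once a batch trigger fires.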