Monitor Prometheus metrics
Dynatrace version 1.232+
Prometheus is an open-source monitoring and alerting toolkit that is popular in the Kubernetes community. Prometheus scrapes metrics from a number of HTTP(S) endpoints that expose metrics in the OpenMetrics format. See the list of available exporters in the Prometheus documentation.
Dynatrace integrates gauge, counter, and, starting with ActiveGate version 1.245, summary metrics from Prometheus exporters in Kubernetes and makes them available for charting, alerting, and analysis. Starting with ActiveGate version 1.261, there is limited support for histogram metrics.
Note: A summary datatype is ingested as three metrics:
- A gauge-based metric with the same name as the original exported metric (for example, go_gc_duration_seconds), containing the quantiles as dimensions
- A counter-based metric for the sum, suffixed with _sum.count (for example, go_gc_duration_seconds_sum.count)
- A counter-based metric for the count, suffixed with _count (for example, go_gc_duration_seconds_count)
Note: For histogram support, a lightweight solution is provided, where a histogram datatype is ingested as two metrics:
- A counter-based metric for the sum, suffixed with _sum.count (for example, pilot_proxy_convergence_time_sum.count)
- A counter-based metric for the count, suffixed with _count (for example, pilot_proxy_convergence_time_count)
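For illustration, the raw exporter output for a summary such as the one above might look like the following sketch; the quantiles and sample values are made up for the example and don't come from a specific exporter.
# Prometheus text exposition of a summary. Following the scheme above, Dynatrace would
# ingest this as go_gc_duration_seconds (gauge, with the quantile as a dimension),
# go_gc_duration_seconds_sum.count (counter), and go_gc_duration_seconds_count (counter).
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0.5"} 0.000145
go_gc_duration_seconds{quantile="0.99"} 0.00041
go_gc_duration_seconds_sum 0.1042
go_gc_duration_seconds_count 612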
Prerequisites
- ActiveGate version 1.217+
  Note: We recommend that you use an ActiveGate that is running inside the Kubernetes cluster from which you want to scrape Prometheus metrics. If your ActiveGate is running outside the monitored cluster (for example, in a VM or in a different Kubernetes cluster), it won't be able to scrape the Prometheus endpoint on pods that require authentication (such as RBAC or client authentication). An ActiveGate running inside the cluster will also provide improved performance.
- In Dynatrace, go to your Kubernetes cluster settings page and enable
  - Monitor Kubernetes namespaces, services, workloads, and pods
  - Monitor annotated Prometheus exporters
- Annotated pod definitions (see below)
Annotate Prometheus exporter pods
Dynatrace collects metrics from any pods that are annotated with a metrics.dynatrace.com/scrape property set to true in the pod definition.
Depending on the actual exporter in a pod, you might need to add further annotations to the pod definition so that Dynatrace can properly ingest those metrics.
Enable metrics scraping required
Set metrics.dynatrace.com/scrape to 'true' to enable Dynatrace to collect Prometheus metrics exposed for this pod.
Metrics port required
By default, Prometheus metrics are available at the first exposed TCP port of the pod. Set metrics.dynatrace.com/port to the respective port.
Path to metrics endpoint optional
Use metrics.dynatrace.com/path to override the default (/metrics) Prometheus endpoint.
HTTP/HTTPS optional
Set metrics.dynatrace.com/secure to true if you want to collect metrics that are exposed by an exporter via HTTPS. The default value is false, because most exporters expose their metrics via HTTP.
Filter metrics optional
Use metrics.dynatrace.com/filter to define a filter that allows you to include ("mode": "include") or exclude ("mode": "exclude") a list of metrics. If no filter annotation is defined, all metrics are collected.
The filter syntax also supports the asterisk (*). This symbol allows you to filter metric keys that begin with, end with, or contain a particular sequence, such as:
- redis_db* filters all metrics starting with redis_db
- *db* filters all metrics containing db
- *bytes filters all metrics ending with bytes
Note: Using the * symbol within a filter, such as redis_*_bytes, is not supported.
This example shows a simple pod definition with annotations.
Note: The values for metrics.dynatrace.com/path, metrics.dynatrace.com/port, and metrics.dynatrace.com/secure depend on the exporter you use; adapt them to your requirements. To determine the port value, see Default port allocations for a list of common ports for known exporters.
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  annotations:
    metrics.dynatrace.com/scrape: 'true'
    metrics.dynatrace.com/path: '/path/to-metrics'
    metrics.dynatrace.com/port: '9001'
    metrics.dynatrace.com/secure: 'false'
    metrics.dynatrace.com/filter: |
      {
        "mode": "include",
        "names": [
          "redis_db_keys",
          "redis_db_values",
          "redis*"
        ]
      }
spec:
  containers:
    - name: mycontainer
      image: myregistry/myimage:mytag
For more information on how to annotate pods, see Annotation best practices.
Annotate Kubernetes services
Requirements: Add the permission to access services in the Kubernetes ClusterRole (not needed for Dynatrace Operator users, as this is enabled by default in clusterrole-kubernetes-monitoring.yaml).
You can also annotate services instead of pods. The pods corresponding to an annotated Kubernetes service are automatically discovered via the service's label selector, and all pods belonging to the service are scraped.
Note: The service and the corresponding pods need to be in the same namespace.
You can have annotations on services and pods at the same time. If the resulting metric endpoints are identical, they are only scraped once.
For more information on how to annotate services, see Annotation best practices.
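As an illustration, an annotated existing service might look like the following sketch; the service name, namespace, port, and selector are placeholders, not values from this guide.
apiVersion: v1
kind: Service
metadata:
  name: my-exporter                    # placeholder; an existing service you control
  namespace: my-namespace              # placeholder; must match the pods' namespace
  annotations:
    metrics.dynatrace.com/scrape: 'true'
    metrics.dynatrace.com/port: '9100'
    metrics.dynatrace.com/path: '/metrics'
spec:
  selector:
    app: my-exporter                   # pods matched by this selector are scraped
  ports:
    - port: 9100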
Client authentication optional
Requirements: Add the permissions to access secrets and configmaps in the Kubernetes ClusterRole.
Some systems require extra authentication before Dynatrace can scrape them. For such cases, you can set the following additional annotations:
- metrics.dynatrace.com/tls.ca.crt
- metrics.dynatrace.com/tls.crt
- metrics.dynatrace.com/tls.key
The required certificates/keys are automatically loaded from the secrets/configmaps specified in the annotation value.
The schema for the annotation values is <configmap|secret>:<namespace>:<resource_name>:<field_name_in_data_section>.
For example, for etcd, the annotations could look as follows:
metrics.dynatrace.com/tls.ca.crt='configmap:kubernetes-config:etcd-metric-serving-ca:ca-bundle.crt'
metrics.dynatrace.com/tls.crt='secret:kubernetes-config:etcd-metric-client:tls.crt'
metrics.dynatrace.com/tls.key='secret:kubernetes-config:etcd-metric-client:tls.key'
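As a sketch of the ClusterRole permissions mentioned in the requirements above, the additional rules could look like the following. The exact verbs and rule grouping depend on your existing ClusterRole (for Dynatrace Operator users, clusterrole-kubernetes-monitoring.yaml already covers services), so treat this as an assumption to adapt rather than a drop-in file.
# Excerpt of rules to merge into the Kubernetes monitoring ClusterRole (sketch).
- apiGroups: [""]
  resources: ["services"]                # needed for annotated services
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["secrets", "configmaps"]   # needed for client authentication annotations
  verbs: ["get", "list", "watch"]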
Role-based access control (RBAC) authorization for metric ingestion
Some exporter pods such as node-exporter, kube-state-metrics, and openshift-state-metrics require RBAC authorization. For these exporter pods, add the following annotation:
metrics.dynatrace.com/http.auth: 'builtin:default'
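For example, the annotations on such an exporter pod might look like the following sketch; the port and secure values are assumptions that depend on how the exporter is deployed in your cluster.
metadata:
  annotations:
    metrics.dynatrace.com/scrape: 'true'
    metrics.dynatrace.com/port: '8443'               # assumption; use your exporter's metrics port
    metrics.dynatrace.com/secure: 'true'             # assumption; set to 'false' for plain HTTP
    metrics.dynatrace.com/http.auth: 'builtin:default'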
Annotation best practices
There are multiple ways to place annotations on pods or services. See below to decide which approach fits your scenario best.
Recommended if you have full control
If you have full control over the pod template or service definition, we recommend adding the annotations by editing these files. This is the most reliable way to ensure the persistence of annotations. We recommend editing the pod template rather than the service definition, as this requires fewer permissions (for example, if you don't have access to services).
Pro: Annotations are persistent, so they don't need to be recreated if a pod is removed.
Options if you don't have full control
If you don't have full control over the pod template, you have the following options:
- Annotate an existing service (in YAML)
  Requirements: Have control over an existing YAML file and permission to edit the existing Kubernetes service object.
  Pro: Annotations are persistent.
  Con: None.
- Create a new service (in YAML)
  Requirements: The new service needs to have the prefix dynatrace-monitoring- and be in the same namespace as the pods, and you need permission to create a Kubernetes service object.
  Pro: You have control over the original workload/service.
  Con: A label selector sync is required. We support only the label selector.
  Example:
  Note: The values for metrics.dynatrace.com/path, metrics.dynatrace.com/port, and metrics.dynatrace.com/secure depend on the exporter you use; adapt them to your requirements. To determine the port value, see Default port allocations for a list of common ports for known exporters.
  kind: Service
  apiVersion: v1
  metadata:
    name: dynatrace-monitoring-node-exporter
    namespace: kubernetes-monitoring
    annotations:
      metrics.dynatrace.com/port: '9100'
      metrics.dynatrace.com/scrape: 'true'
      metrics.dynatrace.com/secure: 'true'
      metrics.dynatrace.com/path: '/metrics'
  spec:
    ports:
      - name: dynatrace-monitoring-node-exporter-port
        port: 9100
    selector:
      app.kubernetes.io/name: node-exporter
    clusterIP: None
- Annotate an existing service (in CLI)
  Requirements: Have permission to edit the existing Kubernetes service object.
  Pro: No label selector sync is required. See the example command after this list.
  Con: Annotations aren't persistent, so changes will overwrite the annotations. We support only the label selector.
- Annotate existing pods (in CLI)
  Requirements: None.
  Pro: You can quickly test metric ingestion.
  Con: Annotations aren't persistent, so changes will overwrite the annotations.
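For the CLI options above, a minimal sketch with kubectl could look like the following; the namespace, resource name, and port are placeholders to adapt to your environment.
# Annotate an existing service (use "pod" instead of "service" to annotate a pod directly).
kubectl annotate service my-exporter --namespace my-namespace \
  metrics.dynatrace.com/scrape='true' \
  metrics.dynatrace.com/port='9100' \
  metrics.dynatrace.com/path='/metrics'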
View metrics on a dashboard
Metrics from Prometheus exporters are available in the Data Explorer for custom charting. Select Create custom chart and select Try it out in the top banner. For more information, see Data explorer.
You can search for the metric keys of all available metrics and define how you'd like to analyze and chart them. After that, you can pin your charts to a dashboard.
Metric alerts
You can also create custom alerts based on the scraped Prometheus metrics. From the navigation menu, select Settings > Anomaly detection > Metric events and select Add metric event. On the Add metric event page, search for a Prometheus metric using its key and define your alert. For more information, see Metric events for alerting.
Limitations
The current limitations of the Prometheus metrics integration are as follows:
Prometheus metric types
Only the counter, gauge, and summary Prometheus metric types are fully supported; histogram metrics have limited support starting with ActiveGate version 1.261.
Multiple exporters in a pod
Multiple exporters currently aren't supported; you can only select the exporter that is being used with the metrics.dynatrace.com/port annotation.
Number of pods, metrics, and metric data points
This integration supports a maximum of
- 1,000 exporter pods
- 1,000 metrics per pod
- 200,000 metric data points
Note: Even though larger datasets are allowed, these can lead to ingestion gaps, as Dynatrace collects all metrics every minute before sending them to the cluster.
Monitoring consumption
Prometheus metrics in Kubernetes environments are subject to DDU consumption.
- Prometheus metrics from exporters that run on OneAgent-monitored hosts are first deducted from your quota of included metrics per host unit. Once this quota is exceeded, the remaining metrics consume DDUs.
- Prometheus metrics from exporters that run on hosts that aren't monitored by OneAgent always consume DDUs.
Troubleshoot
To troubleshoot Prometheus integration issues, download the Kubernetes Monitoring Statistics extension.