Monitor Prometheus metrics

Prometheus is an open-source monitoring and alerting toolkit that is popular in the Kubernetes community. Prometheus scrapes metrics from HTTP(S) endpoints that expose them in the OpenMetrics format.

Dynatrace integrates gauge and counter metrics from Prometheus exporters in Kubernetes and makes them available for charting, alerting, and analysis. See the list of available exporters in the Prometheus documentation.

Prerequisites

  • In the Dynatrace menu, go to Settings > Cloud and virtualization > Kubernetes and turn on Enable monitoring and Monitor Prometheus exporters.
  • Annotated pod definitions; see below.

Annotate Prometheus exporter pods

Dynatrace collects metrics from any pods that are annotated with a metrics.dynatrace.com/scrape property set to true in the pod definition.

Depending on the actual exporter in a pod, you might need to add further annotations to the pod definition so that Dynatrace can properly ingest the metrics.

Enable metrics scraping (required)

Set metrics.dynatrace.com/scrape to 'true' to enable Dynatrace to collect the Prometheus metrics exposed by this pod.

Path to metrics endpoint (optional)

Use metrics.dynatrace.com/path to override the default (/metrics) Prometheus endpoint.

Metrics port (required)

By default, Prometheus metrics are assumed to be available at the first exposed TCP port of the pod. Set metrics.dynatrace.com/port to the port on which the exporter exposes its metrics.

HTTP/HTTPS (optional)

Set metrics.dynatrace.com/secure to true if you want to collect metrics that are exposed by an exporter via HTTPS. The default value is false, because most exporters expose their metrics via HTTP.

Filter metrics (optional)

Use metrics.dynatrace.com/filter to define a filter that either includes ("mode": "include") or excludes ("mode": "exclude") a list of metrics. If no filter annotation is defined, all metrics are collected.

See below for an example of a simple pod definition with these annotations.

Note: The values for metrics.dynatrace.com/path, metrics.dynatrace.com/port, and metrics.dynatrace.com/secure depend on the exporter you use; adapt them to your requirements. To determine the port value, see Default port allocations for a list of common ports for known exporters.

apiVersion: v1
kind: Pod
metadata:
  name: mypod
  annotations:
    metrics.dynatrace.com/scrape: 'true'
    metrics.dynatrace.com/path: '/path/to-metrics'
    metrics.dynatrace.com/port: '9001'
    metrics.dynatrace.com/secure: 'false'
    metrics.dynatrace.com/filter: |
      {
        "mode": "include",
        "names": [
          "redis_db_keys",
          "redis_db_values"
        ]
      }
spec:
  containers:
  - name: mycontainer
    image: myregistry/myimage:mytag

For more information on how to annotate pods, see Annotation best practices.

Annotate Kubernetes services

You can also annotate services instead of pods. The pods belonging to a service are automatically discovered via the service's label selector, and all of them are scraped.

Note: The service and the corresponding pods need to be in the same namespace.

You can have annotations on services and pods at the same time. If the resulting metric endpoints are identical, they are only scraped once.
For more information on how to annotate services, see Annotation best practices.

Client authentication (optional)

Some systems require extra authentication before Dynatrace can scrape them. For such cases, you can set the following additional annotations:

  • metrics.dynatrace.com/tls.ca.crt
  • metrics.dynatrace.com/tls.crt
  • metrics.dynatrace.com/tls.key

The required certificates and keys are automatically loaded from the secrets or config maps specified in the annotation values.
The schema for the annotation values is <configmap|secret>:<namespace>:<resource_name>:<field_name_in_data_section>.

For example, for etcd, the annotations could look as follows:

metrics.dynatrace.com/tls.ca.crt='configmap:kubernetes-config:etcd-metric-serving-ca:ca-bundle.crt'
metrics.dynatrace.com/tls.crt='secret:kubernetes-config:etcd-metric-client:tls.crt'
metrics.dynatrace.com/tls.key='secret:kubernetes-config:etcd-metric-client:tls.key'
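
The last segment of each annotation value names a field in the data section of the referenced resource. A minimal sketch of the etcd-metric-client secret from the example above (field contents abbreviated; your resource names will differ):

apiVersion: v1
kind: Secret
metadata:
  name: etcd-metric-client
  namespace: kubernetes-config
type: kubernetes.io/tls
data:
  # Fields referenced by metrics.dynatrace.com/tls.crt and metrics.dynatrace.com/tls.key
  tls.crt: <base64-encoded client certificate>
  tls.key: <base64-encoded client key>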

Role-based access control (RBAC) authorization for metric ingestion

Some exporter pods such as node-exporter, kube-state-metrics, and openshift-state-metrics require RBAC authorization. For these exporter pods, add the following annotation:

metrics.dynatrace.com/http.auth: 'builtin:default'
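
For example, combined with the standard scraping annotations, the metadata of such an exporter pod or service could look like the following sketch (the port value is a placeholder; it depends on your deployment):

metadata:
  annotations:
    metrics.dynatrace.com/scrape: 'true'
    metrics.dynatrace.com/secure: 'true'
    metrics.dynatrace.com/port: '8443'
    metrics.dynatrace.com/http.auth: 'builtin:default'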

Annotation best practices

There are multiple ways to place annotations on pods or services. See below to decide which approach fits your scenario best.

If you have full control over the pod template or service definition, we recommend adding the annotations by editing these files directly. This is the most reliable way to keep annotations persistent. We recommend editing the pod template rather than the service definition, as it requires fewer permissions (for example, if you don't have access to services).
Pro: Annotations are persistent, so they don't need to be recreated if a pod is removed.

Options if you don't have full control

If you don't have full control over the pod template, you have the following options:

  • Annotate an existing service (in YAML)

Prerequisites: Have control over an existing YAML and the necessary service permission.
Pro: Annotations are persistent.
Con: None.

  • Create a new service (in YAML)

Prerequisites: The new service needs to have the prefix dynatrace-monitoring- and be in the same namespace as the pods, and you need the necessary service permission.
Pro: You have control over the original workload/service.
Con: A label selector sync is required. We support only the label selector.
Example:
Note: As in the pod example above, the values for metrics.dynatrace.com/path, metrics.dynatrace.com/port, and metrics.dynatrace.com/secure depend on the exporter you use; adapt them to your requirements. To determine the port value, see Default port allocations.

kind: Service
apiVersion: v1
metadata:
  name: dynatrace-monitoring-node-exporter
  namespace: kubernetes-monitoring
  annotations:
    metrics.dynatrace.com/port: '9100'
    metrics.dynatrace.com/scrape: 'true'
    metrics.dynatrace.com/secure: 'true'
    metrics.dynatrace.com/path: '/metrics'
spec:
  ports:
    - name: dynatrace-monitoring-node-exporter-port
      port: 9100
  selector:
    app.kubernetes.io/name: node-exporter
  clusterIP: None

  • Annotate an existing service (in CLI)

Prerequisites: Have the necessary service permission.
Pro: No label selector sync is required.
Con: Annotations aren't persistent, so changes to the service definition will overwrite them. We support only the label selector.
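
Example (a sketch; the service name my-exporter and namespace my-namespace are hypothetical placeholders):

kubectl annotate service my-exporter -n my-namespace \
  metrics.dynatrace.com/scrape='true' \
  metrics.dynatrace.com/port='9100' \
  metrics.dynatrace.com/path='/metrics'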

  • Annotate existing pods (in CLI)

Prerequisites: None.
Pro: You can quickly test metric ingestion.
Con: Annotations aren't persistent, so they are lost when the pod is recreated or redeployed.
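
Example (a sketch; mypod is a hypothetical pod name, and the --overwrite flag replaces any existing annotation value):

kubectl annotate pod mypod \
  metrics.dynatrace.com/scrape='true' \
  metrics.dynatrace.com/port='9001' \
  --overwrite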

View metrics on a dashboard

Metrics from Prometheus exporters are available in the Data Explorer for custom charting. Select Create custom chart and select Try it out in the top banner. For more information, see Data explorer.

Search for the metric keys of the available metrics and define how you'd like to analyze and chart them. You can then pin your charts to a dashboard.

Metric alerts

You can also create custom alerts based on the Prometheus scraped metrics. From the navigation menu, select Settings > Anomaly detection > Custom events for alerting and select Create custom event for alerting. In the Create custom event for alerting page, search for a Prometheus metric using its key and define your alert. For more information, see Metric events for alerting.

Limitations

The current limitations of the Prometheus metrics integration are as follows:

  • This integration supports only the counter and gauge Prometheus metric types.
  • If you run multiple exporters in a pod, you need to set the metrics.dynatrace.com/port annotation to direct Dynatrace to the one it should use.
  • This integration supports up to 1,000 pods with 200 metric data points each per minute.

Monitoring consumption

Prometheus metrics in Kubernetes environments are subject to DDU consumption.

  • Prometheus metrics from exporters that run on OneAgent-monitored hosts are first deducted from your quota of included metrics per host unit. Once this quota is exceeded, the remaining metrics consume DDUs.
  • Prometheus metrics from exporters that run on hosts that aren't monitored by OneAgent always consume DDUs.