Organize Kubernetes/OpenShift deployments by tags

Dynatrace automatically derives tags from your Kubernetes/OpenShift labels. This enables you to automatically organize and filter all your monitored Kubernetes/OpenShift application components.

Recommendation

We recommend that you define additional metadata in the deployed system itself. For Kubernetes-based applications, you can simply use Kubernetes annotations. Dynatrace automatically detects and retrieves all Kubernetes and OpenShift annotations for pods that are monitored with a OneAgent code module. This enables you to use automated tagging rules, based on existing or custom metadata, to define your filter sets for charts, alerting, and more. These tags and rules can be changed and adapted at any time and apply almost immediately, without any change to the monitored environment or applications.
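For instance, such annotations can be declared in the pod template of a deployment manifest. The annotation keys below (`owner`, `support-contact`) are purely illustrative examples of custom metadata, not names that Dynatrace requires:

```yaml
# Hypothetical deployment snippet: the annotation keys shown
# (owner, support-contact) are example metadata, not required names.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
      annotations:
        owner: "team-checkout"
        support-contact: "checkout-oncall@example.com"
    spec:
      containers:
      - name: example-app
        image: example/app:1.0
```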

Automatic detection of Kubernetes properties and annotations

Dynatrace detects Kubernetes properties and annotations. Such properties and annotations can be used when specifying automated rule-based tags.

Additionally, Dynatrace detects the following properties, which can be used for automated rule-based tags and property-based process group detection rules.

  • Kubernetes base pod name: User-provided name of the pod the container belongs to.
  • Kubernetes container: Name of the container that runs the process.
  • Kubernetes full pod name: Full name of the pod the container belongs to.
  • Kubernetes namespace: Namespace to which the containerized process is assigned.
  • Kubernetes pod UID: Unique ID of the related pod.

Leverage Kubernetes labels in Dynatrace

Kubernetes-based tags are searchable via Dynatrace search. This allows you to easily find and inspect the monitoring results of related processes running in your Kubernetes or OpenShift environment. You can also leverage Kubernetes tags to set up fine-grained alerting profiles. Kubernetes tags also integrate perfectly with Dynatrace filters.

Import your labels and annotations

Requirements

For OneAgent to detect Kubernetes annotations and properties, make sure that:

  • Pods are monitored with a code module
  • automountServiceAccountToken: false isn't set in your pod's spec
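To illustrate the second requirement, the pod spec below keeps the service account token mounted, which is also the default when the field is omitted entirely. Setting it to `false` would prevent OneAgent from reading pod metadata from the Kubernetes API (the pod and image names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  # Leave automountServiceAccountToken unset (it defaults to true),
  # or set it explicitly to true as shown here. Setting it to false
  # blocks OneAgent from querying the Kubernetes API for metadata.
  automountServiceAccountToken: true
  containers:
  - name: app
    image: example/app:1.0
```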

You can specify Kubernetes labels in the deployment definition of your application or you can update the labels of your Kubernetes resources using the command kubectl label.
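As a sketch, labels go into the pod template's metadata in the deployment definition; the label keys `stage` and `product` below are illustrative, not required names:

```yaml
# Pod template metadata fragment (illustrative label keys):
template:
  metadata:
    labels:
      stage: production
      product: checkout
```

The same labels could instead be applied after deployment, for example with `kubectl label pods <pod-name> stage=production` (add `--overwrite` to change an existing value).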

You can specify OpenShift labels in the Pod object definition of your application or you can update the labels of your OpenShift resources using the command oc label.

Dynatrace automatically detects all labels attached to pods at application deployment time. All you have to do is grant the pods sufficient privileges to read the metadata from the Kubernetes REST API endpoint. This way, the OneAgent code modules can read these labels directly from the pod.

Grant viewer role to service accounts

In Kubernetes, every pod is associated with a service account that's used to authenticate the pod's requests to the Kubernetes API. If not otherwise specified, the pod uses the default service account of its namespace.

Every namespace has its own set of service accounts and thus also its own namespace-scoped default service account. The labels of each pod for which the service account has view permissions will be imported into Dynatrace automatically.
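For context, a pod opts into a specific service account through the `serviceAccountName` field; if the field is omitted, the pod runs under the namespace's `default` service account. The account name below is an illustrative example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  namespace: namespace1
spec:
  # Omitting serviceAccountName means the pod runs under the
  # namespace's "default" service account instead.
  serviceAccountName: metadata-reader   # illustrative custom account
  containers:
  - name: app
    image: example/app:1.0
```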

The following steps show you how to add view privileges to the default service account in the namespace1 namespace. You need to repeat these steps for all service accounts and namespaces you want to enable for Dynatrace.

Create the following Role and RoleBinding, which allow the default service account to view the necessary metadata about your namespace namespace1 via the Kubernetes REST API:

yaml
# dynatrace-oneagent-metadata-viewer.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: namespace1
  name: dynatrace-oneagent-metadata-viewer
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dynatrace-oneagent-metadata-viewer-binding
  namespace: namespace1
subjects:
- kind: ServiceAccount
  name: default
  apiGroup: ""
roleRef:
  kind: Role
  name: dynatrace-oneagent-metadata-viewer
  apiGroup: ""
Apply the file to create the Role and RoleBinding:

bash
kubectl -n namespace1 create -f dynatrace-oneagent-metadata-viewer.yaml

In OpenShift, every pod is associated with a service account that's used to authenticate the pod's requests to the Kubernetes API. If not otherwise specified, the pod uses the default service account of its OpenShift project.

Each OpenShift project has its own set of service accounts and thus also its own project-scoped default service account. The labels of every pod whose service account has view permissions will be imported into Dynatrace automatically.

The following steps show you how to add view privileges to the default service account in the project1 project. You need to repeat these steps for all service accounts and projects you want to enable for Dynatrace.

Create the following Role, which will allow a service account to view the necessary metadata about your project1 project via the Kubernetes REST API:

yaml
# dynatrace-oneagent-metadata-viewer.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: project1
  name: dynatrace-oneagent-metadata-viewer
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get"]
Apply the file to create the Role:

bash
oc -n project1 create -f dynatrace-oneagent-metadata-viewer.yaml

Bind the Role to the default service account for the Role to take effect:

bash
oc -n project1 policy add-role-to-user dynatrace-oneagent-metadata-viewer --role-namespace="project1" -z default

Alternatively, you can bind the Role to all service accounts in a project:

bash
oc -n project1 policy add-role-to-group dynatrace-oneagent-metadata-viewer --role-namespace="project1" system:serviceaccounts:project1

As a result, the Kubernetes labels are attached as Kubernetes tags to the processes monitored in your Dynatrace environment. Note that Kubernetes tags are not evaluated for namespace, pod, and workload entities.

Related topics
  • Set up Dynatrace on Kubernetes/OpenShift

    Ways to deploy and configure Dynatrace on Kubernetes/OpenShift