Organize Kubernetes deployments by tags
Dynatrace automatically derives tags from your Kubernetes/OpenShift labels. This enables you to automatically organize and filter all your monitored Kubernetes/OpenShift application components.
We recommend defining additional metadata directly in the deployed system. For Kubernetes-based applications, you can simply use Kubernetes annotations. Dynatrace automatically detects and retrieves all Kubernetes and OpenShift annotations for pods that are monitored with a OneAgent code module. This enables you to use automated tagging rules, based on existing or custom metadata, to define your filter sets for charts, alerting, and more. These tags and rules can be changed and adapted at any time and take effect almost immediately, without any change to the monitored environment or applications.
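For illustration, custom annotations can be attached to the pod template of a deployment. This is a minimal sketch; the annotation keys `app.example.com/stage` and `app.example.com/owner` and all resource names are hypothetical examples, not keys required by Dynatrace:

```yaml
# Hypothetical deployment snippet: custom annotations on the pod template.
# For pods monitored with a OneAgent code module, Dynatrace retrieves these
# annotations, so they can drive automated tagging rules.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
      annotations:
        app.example.com/stage: "production"    # hypothetical key
        app.example.com/owner: "team-payments" # hypothetical key
    spec:
      containers:
      - name: example-app
        image: example/app:1.0
```

Note that annotations belong on the pod template (`spec.template.metadata`), not on the Deployment object itself, because Dynatrace reads metadata from the pods.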
Automatic detection of Kubernetes properties and annotations
- Kubernetes base pod name: User-provided name of the pod the container belongs to.
- Kubernetes container: Name of the container that runs the process.
- Kubernetes full pod name: Full name of the pod the container belongs to.
- Kubernetes namespace: Namespace to which the containerized process is assigned.
- Kubernetes pod UID: Unique ID of the related pod.
Leverage Kubernetes labels in Dynatrace
Kubernetes-based tags are searchable via Dynatrace search. This allows you to easily find and inspect the monitoring results of related processes running in your Kubernetes or OpenShift environment. You can also leverage Kubernetes tags to set up fine-grained alerting profiles. Kubernetes tags also integrate perfectly with Dynatrace filters.
Import your labels and annotations
Dynatrace automatically detects all labels attached to pods at application deployment time. All you need to do is grant the pods sufficient privileges to read metadata from the Kubernetes REST API endpoint. This allows the OneAgent code modules to read these labels directly from the pod.
Note: OneAgent will pick up annotations and labels only from pods that are monitored with a code module.
Grant viewer role to service accounts
In Kubernetes, every pod is associated with a service account, which is used to authenticate the pod's requests to the Kubernetes API. If not otherwise specified, the pod uses the default service account of its namespace. Every namespace has its own set of service accounts and thus its own namespace-scoped default service account. The labels of each pod whose service account has view permissions are imported into Dynatrace automatically.
The following steps show you how to add view privileges to the default service account in the namespace1 namespace. Repeat these steps for every service account and namespace you want to enable for Dynatrace.
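To see which service account a given pod actually uses (and therefore which account needs the view role), you can query the pod spec. This sketch assumes a running cluster and a pod named `my-pod`, which is a placeholder:

```shell
# Print the service account used by a pod in namespace1.
# An empty result means the pod uses the "default" service account.
kubectl -n namespace1 get pod my-pod -o jsonpath='{.spec.serviceAccountName}'
```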
Create the following Role and RoleBinding; these allow the default service account to view the necessary metadata about your namespace namespace1 via the Kubernetes REST API:
```yaml
# dynatrace-oneagent-metadata-viewer.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: namespace1
  name: dynatrace-oneagent-metadata-viewer
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dynatrace-oneagent-metadata-viewer-binding
  namespace: namespace1
subjects:
- kind: ServiceAccount
  name: default
  apiGroup: ""
roleRef:
  kind: Role
  name: dynatrace-oneagent-metadata-viewer
  apiGroup: "rbac.authorization.k8s.io"
```
Apply the configuration:

```shell
kubectl -n namespace1 create -f dynatrace-oneagent-metadata-viewer.yaml
```
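Once the Role and RoleBinding are in place, you can check that the default service account is allowed to read pod metadata using kubectl's built-in authorization check. This assumes access to a running cluster:

```shell
# Should print "yes" if the binding is effective.
kubectl -n namespace1 auth can-i get pods \
  --as=system:serviceaccount:namespace1:default
```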
Your Kubernetes labels will be automatically attached as Kubernetes tags to all monitored Kubernetes processes in your Dynatrace environment.