Configuration options for Dynatrace Operator on Kubernetes/OpenShift

See below for a list of configuration options available for Dynatrace Operator.

Configure build label propagation

As part of getting started with Kubernetes monitoring, you may want to configure build label propagation. When you're finished, you can return to the installation instructions for your manual (kubectl/oc) or helm deployment.

Build label propagation enables you to provide build and version metadata about newly deployed pods to the injected OneAgent. This information is then visible in the Properties and tags section of your entity pages.

How it works

You can reference the value of a metadata field in an environment variable.

Example:

yaml
env:
  - name: MY_POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace

OneAgent is then injected into the newly deployed pods and collects the metadata provided via these environment variables.

Enable feature

To enable build label propagation, you need to set feature.dynatrace.com/label-version-detection to true in DynaKube. Note that since enabling build label propagation requires webhook injection, it only works with applicationMonitoring and cloudNativeFullStack deployments.

Example:

yaml
apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
  annotations:
    feature.dynatrace.com/label-version-detection: "true"
spec:
  ...
  oneAgent:
    cloudNativeFullStack: {}

Default behavior

  • The DT_RELEASE_VERSION environment variable gets the value from metadata.labels['app.kubernetes.io/version'].
  • The DT_RELEASE_PRODUCT environment variable gets the value from metadata.labels['app.kubernetes.io/part-of'].

For example, if your application has the following pod:

yaml
apiVersion: v1
kind: Pod
metadata:
  ...
  labels:
    app.kubernetes.io/version: "1.0.0"
    app.kubernetes.io/part-of: "store"
spec:
  ...

the values of these labels are added to the environment variables of the injected containers:

yaml
apiVersion: v1
kind: Pod
metadata:
  ...
  labels:
    app.kubernetes.io/version: "1.0.0"
    app.kubernetes.io/part-of: "store"
spec:
  ...
  containers:
    - name: app
      ...
      env:
        - name: "DT_RELEASE_VERSION"
          valueFrom:
            fieldRef:
              fieldPath: metadata.labels['app.kubernetes.io/version']
        - name: "DT_RELEASE_PRODUCT"
          valueFrom:
            fieldRef:
              fieldPath: metadata.labels['app.kubernetes.io/part-of']

Note: If the DT_RELEASE_VERSION or DT_RELEASE_PRODUCT environment variables are already set on the container before the OneAgent injection, they will not be overwritten.
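 
For illustration, here is a minimal sketch of a pod that already pins its own version (all names are placeholders): because DT_RELEASE_VERSION is set explicitly, the webhook leaves it as is.

yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
  labels:
    app.kubernetes.io/version: "1.0.0"
spec:
  containers:
    - name: app
      image: my-registry/app:1.0.0
      env:
        # Already set on the container, so OneAgent injection won't overwrite it
        - name: DT_RELEASE_VERSION
          value: "2.0.0-rc1"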

Configuration options

You can annotate your namespace to provide further mappings or overrule the defaults for pods within that namespace.

  • Each annotation key is mapped to a specific environment variable.
  • Each annotation value is the reference path in fieldPath.
  • The available information for fieldPath is the same as for fieldRef.

Example to overwrite the default values for version and product, and enable stage and build-version:

yaml
annotations:
  mapping.release.dynatrace.com/version: "metadata.annotations['my-version']"
  mapping.release.dynatrace.com/product: "metadata.labels['app.kubernetes.io/name']"
  mapping.release.dynatrace.com/stage: "metadata.namespace"
  mapping.release.dynatrace.com/build-version: "metadata.labels['release.dynatrace.com/stage']"

Each of these annotations configures a different environment variable:

  • mapping.release.dynatrace.com/version holds the fieldPath used for DT_RELEASE_VERSION. If this annotation is missing, mapping falls back to the default behavior.
  • mapping.release.dynatrace.com/product holds the fieldPath used for DT_RELEASE_PRODUCT. If this annotation is missing, mapping falls back to the default behavior.
  • mapping.release.dynatrace.com/stage holds the fieldPath used for DT_RELEASE_STAGE.
  • mapping.release.dynatrace.com/build-version holds the fieldPath used for DT_RELEASE_BUILD_VERSION.

Note: The values aren't validated by Dynatrace Operator or the webhook, so make sure they are valid.
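 
For example, the same mapping annotations can be applied with kubectl instead of editing the namespace manifest; the namespace name my-namespace is a placeholder:

bash
kubectl annotate namespace my-namespace \
  mapping.release.dynatrace.com/stage="metadata.namespace" \
  mapping.release.dynatrace.com/version="metadata.annotations['my-version']"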

Add a custom properties file (optional)

As part of getting started with Kubernetes monitoring, you may want to add a custom properties file. When you're finished, you can return to the installation instructions for your manual (kubectl/oc) or helm deployment.

You can add a custom properties file by providing it as a value or by referencing it from a secret.

  • To add the custom properties file as a value, see the example below.

yaml
customProperties:
  value: |
    [kubernetes_monitoring]
    ...

  • To reference the custom properties file from a secret:

  1. Create a secret with the following content.

Note: The content of the secret has to be base64 encoded in order to work.

yaml
apiVersion: v1
kind: Secret
metadata:
  name: <customproperties-secret>
  namespace: dynatrace
data:
  customProperties: <base64 encoded properties>

  2. Add the secret to the custom properties.

yaml
customProperties:
  valueFrom: <customproperties-secret>
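 
As an alternative to writing the Secret manifest by hand, kubectl can create the secret directly from a local file and handles the base64 encoding for you. This is a sketch; the file name custom.properties is a placeholder:

bash
kubectl -n dynatrace create secret generic <customproperties-secret> \
  --from-file=customProperties=custom.properties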

Add a custom certificate for ActiveGate (optional)

As part of getting started with Kubernetes monitoring, you may want to add a custom certificate for ActiveGate. When you're finished, you can return to the installation instructions for your manual (kubectl/oc) or helm deployment.

To add a custom certificate for ActiveGate:

  1. Create a secret.

    bash
    kubectl -n dynatrace create secret generic dynakube-custom-certificate --from-file=server.p12 --from-literal=password=<password_to_server.p12> --from-file=server.crt
    bash
    oc -n dynatrace create secret generic dynakube-custom-certificate --from-file=server.p12 --from-literal=password=<password_to_server.p12> --from-file=server.crt
  2. In your custom resource, set the tlsSecretName parameter to the name of the secret you created.

    Example:

    yaml
    apiVersion: dynatrace.com/v1beta1
    kind: DynaKube
    metadata:
      name: dynakube
      namespace: dynatrace
    spec:
      apiUrl: https://FQDN/api
      activeGate:
        tlsSecretName: dynakube-custom-certificate
        capabilities:
          - kubernetes-monitoring

    Note: HTTP clients connecting to the ActiveGate REST endpoint must trust the provided certificates.
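 
If you don't have a .p12 keystore for step 1 yet, one way to build it from a certificate and private key is with openssl. This is a sketch; the file names and password are placeholders, and server.key is assumed to be the matching private key:

bash
openssl pkcs12 -export -in server.crt -inkey server.key -out server.p12 -password pass:<password_to_server.p12>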

Configure proxy (optional)

As part of getting started with Kubernetes monitoring, you may want to configure a proxy. When you're finished, you can return to the installation instructions for your manual (kubectl/oc) or helm deployment.

You can configure optional parameters like proxy settings in the DynaKube custom resource file in order to:

  • Download the OneAgent installer
  • Ensure communication between the OneAgent and your Dynatrace environment
  • Ensure communication between Dynatrace Operator and the Dynatrace API

There are two ways to provide the proxy, depending on whether your proxy uses credentials.

If you have a proxy that doesn't use credentials, enter your proxy URL directly in the value field for the proxy.

Example:

yaml
apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
spec:
  apiUrl: https://environmentid.live.dynatrace.com/api
  proxy:
    value: http://mysuperproxy

If your proxy uses credentials

  1. Create a secret with a field called proxy that holds your proxy URL together with the credentials. Note that the password must be URL-encoded (see the rules below).

    Example:

    bash
    kubectl -n dynatrace create secret generic myproxysecret --from-literal="proxy=http://<user>:<password>@<IP>:<PORT>"
    bash
    oc -n dynatrace create secret generic myproxysecret --from-literal="proxy=http://<user>:<password>@<IP>:<PORT>"
    Rules for the proxy password

    1. The proxy password needs to meet the following requirements:

       • Characters allowed: [A-Za-z0-9] and ! " # $ ( ) * - . / : ; < > ? @ [ ] ^ _ { | }
       • Characters not allowed: blank space and ' ` , & = + % \

    2. The password specified in the CR or in the proxy secret has to be a URL-encoded string (see the encoding sketch at the end of this section). For example, if the actual password is password!"#$()*-./:;<>?@[]^_{|}~, the corresponding URL-encoded string is password!%22%23%24()*-.%2F%3A%3B%3C%3E%3F%40%5B%5D%5E_%7B%7C%7D~.

  2. Provide the name of the secret in the valueFrom section.
    Example:

    yaml
    apiVersion: dynatrace.com/v1beta1
    kind: DynaKube
    metadata:
      name: dynakube
      namespace: dynatrace
    spec:
      apiUrl: https://environmentid.live.dynatrace.com/api
      proxy:
        valueFrom: myproxysecret
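 
To produce the URL-encoded password required in step 1, one quick option is Python's urllib from the shell; any URL-encoding tool works equally well:

bash
python3 -c 'import urllib.parse; print(urllib.parse.quote("<actual_password>", safe=""))'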

Read-only file systems support

Dynatrace Operator version 0.5.0+

cloudNativeFullStack

hostMonitoring

As part of getting started with Kubernetes monitoring, you may want to review the support for read-only file systems. When you're finished, you can return to the installation instructions for your manual (kubectl/oc) or helm deployment.

For read-only host file systems, support is enabled by default for cloudNativeFullStack and hostMonitoring with CSI driver configurations, so you don't need to set the ONEAGENT_ENABLE_VOLUME_STORAGE environment variable to true anymore.

To disable this feature, you can add the following annotation in your DynaKube custom resource.

yaml
apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
  annotations:
    feature.dynatrace.com/oneagent-readonly-host-fs: "false"

Configure monitoring for namespaces and pods

cloudNativeFullStack

applicationMonitoring

As part of getting started with Kubernetes monitoring, you may want to configure monitoring for namespaces and pods. When you're finished, you can return to the installation instructions for your manual (kubectl/oc) or helm deployment.

By default, Dynatrace Operator injects OneAgent into all namespaces, with the following exceptions:

  • Namespaces starting with kube- or openshift-.
  • The namespace where Dynatrace Operator was installed.

For more configuration options, see below.

Monitor all namespaces except selected pods

To disable monitoring for selected pods, annotate the pods that should be excluded, as in the example below.

yaml
...
metadata:
  annotations:
    ...
    oneagent.dynatrace.com/inject: "false"

For more pod annotation options, see Pod annotation list.

Monitor only specific namespaces

If you don't want Dynatrace Operator to inject OneAgent in all namespaces, you can set the namespaceSelector parameter in the DynaKube custom resource, and enable monitoring for specific namespaces that have the chosen label.

To label namespaces, use the command below, making sure to replace the placeholder with your own value.

sh
kubectl label namespace <my_namespace> monitor=app
sh
oc label namespace <my_namespace> monitor=app

To enable monitoring for the namespace you just labeled, edit the DynaKube custom resource file as in the example below.

yaml
...
spec:
  namespaceSelector:
    matchLabels:
      monitor: app

For details, see Labels and selectors.

Note: To add exceptions for specific pods within the selected namespaces, you can annotate the respective pods.

Exclude specific namespaces from being monitored

To enable this option, edit the DynaKube custom resource file as in the example below. Note that

  • key is the key of the label, for example monitor.
  • value is the value of the label, for example app.
 
yaml
...
spec:
  namespaceSelector:
    matchExpressions:
      - key: KEY
        operator: NotIn
        values:
          - VALUE

The webhook will inject into every namespace that matches all expressions of the namespaceSelector.

The operator property can have the following values: In and NotIn.

  • If you set In, the webhook will only inject the pods in the namespace that matches the namespace selector.
  • If you set NotIn, the webhook will only inject the pods in namespaces that don't match the namespace selector.

For details, see Resources that support set-based requirements.
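 
As a sketch, an In expression that injects only into namespaces labeled monitor=app or monitor=web would look like this (label key and values are illustrative):

yaml
...
spec:
  namespaceSelector:
    matchExpressions:
      - key: monitor
        operator: In
        values:
          - app
          - web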

Monitor only specific pods

Dynatrace Operator version 0.8.0+

Dynatrace Operator can be set to monitor namespaces without injecting into any pods, so you can choose which pods to monitor.

To enable this option:

  1. Disable automatic injection for namespaces that are monitored by this DynaKube.

Example:

yaml
apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
  annotations:
    feature.dynatrace.com/automatic-injection: "false"
spec:
  ...
  2. Annotate the pods that are to be monitored.

Example:

yaml
...
metadata:
  annotations:
    ...
    oneagent.dynatrace.com/inject: "true"

Pod annotation list

  • All applicable pod annotations for applicationMonitoring without CSI driver:

    • data-ingest.dynatrace.com/inject: <"false">. If set to false, no metric enrichment file will be added to the pod.

    • oneagent.dynatrace.com/inject: <"false">. If set to false, no modifications regarding OneAgent will be applied to the pod.

    • dynatrace.com/inject: <"false">. If set to false, the webhook will not modify the pod.

      • oneagent.dynatrace.com/inject: <"false">. If set to false, no modifications regarding OneAgent will be applied to the pod.
      • data-ingest.dynatrace.com/inject: <"false">. If set to false, no modifications regarding metric enrichment will be applied to the pod.
    • oneagent.dynatrace.com/flavor: <"default"> or <"musl">. If set, it indicates whether binaries for glibc or musl are to be downloaded. It defaults to glibc.
      Note: If your container uses musl (for example, Alpine base image), you must add the flavor annotation in order to monitor it.

    • oneagent.dynatrace.com/technologies: <"comma-separated technologies list">. If set, it filters which code modules are to be downloaded. It defaults to "all".

    • oneagent.dynatrace.com/install-path: <"path">. If set, it indicates the path where the unpacked OneAgent directory will be mounted. It defaults to "/opt/dynatrace/oneagent-paas".

    • oneagent.dynatrace.com/installer-url: <"url">. If set, it indicates the URL from where the OneAgent app-only package will be downloaded. It defaults to the Dynatrace environment API configured on the API URL of DynaKube.

  • All applicable pod annotations for applicationMonitoring with CSI driver:

    • data-ingest.dynatrace.com/inject: <"false">. If set to false, no metric enrichment file will be added to the pod.
    • oneagent.dynatrace.com/inject: <"false">. If set to false, no modifications regarding OneAgent will be applied to the pod.
    • dynatrace.com/inject: <"false">. If set to false, the webhook will not modify the pod.
      • oneagent.dynatrace.com/inject: <"false">. If set to false, no modifications regarding OneAgent will be applied to the pod.
      • data-ingest.dynatrace.com/inject: <"false">. If set to false, no modifications regarding metric enrichment will be applied to the pod.

Example annotations:

yaml
...
metadata:
  annotations:
    oneagent.dynatrace.com/technologies: "java,nginx"
    oneagent.dynatrace.com/flavor: "musl"
    oneagent.dynatrace.com/install-path: "/dynatrace"
    oneagent.dynatrace.com/installer-url: "https://my-custom-url/route/file.zip"

Import Kubernetes API certificates

As part of getting started with Kubernetes monitoring, you may want to check how importing Kubernetes API certificates works. When you're finished, you can return to the installation instructions for your manual (kubectl/oc) or helm deployment.

Starting with Dynatrace Operator version 0.3.0, Kubernetes API certificates are automatically imported for certificate validation checks. Kubernetes automatically creates a kube-root-ca.crt configmap in every namespace. This certificate is automatically mounted into every container at /var/run/secrets/kubernetes.io/serviceaccount/ca.crt and merged into the ActiveGate truststore file using an initContainer. To get this feature, be sure to update Dynatrace Operator if you're using an earlier version.
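 
To verify that the configmap is present in a namespace, you can inspect it directly; this is a read-only check, and the configmap name is fixed by Kubernetes:

bash
kubectl get configmap kube-root-ca.crt -n dynatrace -o yaml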

Configure security context constraints (OpenShift)

As part of getting started with Kubernetes monitoring, you may want to configure security context constraints (SCC) for OpenShift. When you're finished, you can return to the installation instructions for your manual (kubectl/oc) or helm deployment.

Configuring SCC is required for OpenShift for cloudNativeFullStack and applicationMonitoring with CSI driver deployments.

Dynatrace Operator needs permission to access the csi volumes, which are used to provide the necessary binaries to different pods. You must modify existing Security Context Constraints for your applications and make sure to add the csi volume entry. You can configure other entries according to your environment needs.

Example adding the csi volume:

yaml
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: <custom>
...
volumes:
  ...
  - csi

For more configuration options, see Example security context constraints.
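 
As a sketch, the csi volume entry can also be appended with a JSON patch instead of editing the SCC manually; the SCC name <custom> is a placeholder:

bash
oc patch scc <custom> --type=json -p '[{"op": "add", "path": "/volumes/-", "value": "csi"}]'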

Metadata metric enrichment

Dynatrace Operator version 0.4.0+

cloudNativeFullStack

applicationMonitoring

As part of getting started with Kubernetes monitoring, you may want to configure metadata metric enrichment. When you're finished, you can return to the installation instructions for your manual (kubectl/oc) or helm deployment.

Metadata metric enrichment leverages data from OneAgent and Dynatrace Operator by adding context and relevant metadata to the metrics sent. Enrichment means that logs and metrics are related back to entities (pods, processes, hosts). Every field prefixed with dt.entity is the result of metadata enrichment.

Starting with Dynatrace Operator version 0.4+, every application pod that is instrumented by the Dynatrace Webhook is automatically enriched with metric metadata.

Activate metadata enrichment

To activate metadata enrichment, you need to create a special token for data ingest and add it to the secret.

  1. Create a dataIngestToken token and enable the Ingest metrics permission (API v2).
  2. Follow the deployment instructions, making sure the dynakube secret you create in step 4 of the instructions includes the dataIngestToken token.
  3. Redeploy your monitored pods.

Note: You can add the dataIngestToken token manually at any time by editing the secret:

  1. Edit the existing secret.

    bash
    kubectl -n dynatrace edit secret <dynakube>
    bash
    oc -n dynatrace edit secret <dynakube>
  2. Add a new dataIngestToken key with your generated token to the secret, as in the example below:

    yaml
    apiVersion: v1
    kind: Secret
    metadata:
      name: dynakube
      namespace: dynatrace
    data:
      apiToken: <apiToken base64 encoded>
      dataIngestToken: <dataIngestToken base64 encoded>
    type: Opaque
  3. Redeploy your monitored pods.
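 
When editing the secret by hand, remember that values under data must be base64 encoded. As a sketch, assuming the secret is named dynakube as in the example above:

bash
# Encode the token for the data section (-n avoids a trailing newline)
echo -n '<dataIngestToken>' | base64
# Decode to double-check what is currently stored in the secret
kubectl get secret dynakube -n dynatrace -o jsonpath='{.data.dataIngestToken}' | base64 -d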

Disable metadata enrichment

To disable the metadata enrichments, add the following annotation to the DynaKube custom resource:

yaml
metadata:
  annotations:
    ...
    feature.dynatrace.com/disable-metadata-enrichment: "true"

Alternatively, you can disable the metadata enrichments by running the command below. Be sure to replace the placeholder (<...>) with the name of your DynaKube sample.

bash
kubectl annotate dynakube -n dynatrace <your_DynaKube_CR> feature.dynatrace.com/disable-metadata-enrichment="true"
bash
oc annotate dynakube -n dynatrace <your_DynaKube_CR> feature.dynatrace.com/disable-metadata-enrichment="true"

Enable AppArmor for enhanced security

Dynatrace Operator version 0.6.0+

As part of getting started with Kubernetes monitoring, you may want to enable AppArmor for enhanced security. When you're finished, you can return to the installation instructions for your manual (kubectl/oc) or helm deployment.

Enable AppArmor for Dynatrace Operator

You can make Dynatrace Operator more secure by enabling AppArmor. Depending on whether you set up monitoring manually (kubectl/oc) or with Helm, select one of the options below.

  1. Add the following annotation to your DynaKube file to deploy ActiveGate with AppArmor profile enabled:

    yaml
    apiVersion: dynatrace.com/v1beta1
    kind: DynaKube
    metadata:
      annotations:
        feature.dynatrace.com/activegate-apparmor: "true"
  2. Add the following annotations to your Kubernetes/OpenShift YAML to deploy the webhook and Dynatrace Operator with AppArmor profile enabled:

    yaml
    kind: Deployment
    metadata:
      name: dynatrace-webhook
    spec:
      template:
        metadata:
          annotations:
            container.apparmor.security.beta.kubernetes.io/webhook: runtime/default
    ---
    kind: Deployment
    metadata:
      name: dynatrace-operator
    spec:
      template:
        metadata:
          annotations:
            container.apparmor.security.beta.kubernetes.io/dynatrace-operator: runtime/default

For Helm deployments, add the following properties to the values.yaml file to deploy ActiveGate and Dynatrace Operator with AppArmor profile enabled:

yaml
operator:
  apparmor: true
webhook:
  apparmor: true
activeGate:
  apparmor: true
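 
After changing values.yaml, roll out the new settings with a Helm upgrade. The release and chart names below follow the common Dynatrace Helm setup and may differ in your environment:

bash
helm upgrade dynatrace-operator dynatrace/dynatrace-operator -n dynatrace -f values.yaml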

Enable a custom AppArmor profile for OneAgent

You can restrict the OneAgent access to a desired set of features. See below for how to enable a custom AppArmor profile and apply it to the OneAgent pods.

  1. Create a custom OneAgent AppArmor profile.
  2. Install the profile on all worker nodes.
  3. Enforce the profile on all OneAgent pods.

Create a custom OneAgent AppArmor profile

See Run OneAgent as a Docker container for details on how to create a custom AppArmor profile.

Install the profile on all worker nodes

OneAgent is deployed as a DaemonSet by default, which means that pods using the AppArmor profile run on every node. Therefore, you need to install the OneAgent AppArmor profile on all nodes. Depending on the environment, this can be achieved in several ways, for example with kube-apparmor-manager or the security-profiles-operator. Refer to the official documentation of these tools for how to apply them in your cluster.
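 
For a manual installation, a profile can be loaded (or reloaded) on each node with apparmor_parser. This is a sketch that assumes the profile is stored as /etc/apparmor.d/oneagent:

bash
sudo apparmor_parser -r /etc/apparmor.d/oneagent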

Enforce the profile on all OneAgent pods

To enable AppArmor for all the OneAgent pods, add the container.apparmor.security.beta.kubernetes.io/dynatrace-oneagent: localhost/oneagent annotation to one of the following fields, depending on your deployment:

  • oneAgent.classicFullStack.annotations
  • oneAgent.cloudNativeFullStack.annotations
  • oneAgent.hostMonitoring.annotations

Example for cloudNativeFullStack deployment:

yaml
apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
spec:
  apiUrl: https://ENVIRONMENTID.live.dynatrace.com/api
  oneAgent:
    cloudNativeFullStack:
      annotations:
        container.apparmor.security.beta.kubernetes.io/dynatrace-oneagent: localhost/oneagent

High availability mode for Helm deployments

Dynatrace Operator version 0.6+

As part of getting started with Kubernetes monitoring, you may want to configure high availability. When you're finished, you can return to the installation instructions for your helm deployment.

Note: For now, this feature is limited to Helm deployments.

The high availability mode offers the following capabilities:

  • Increases the webhook deployment to two replicas.
  • Adds pod topology spread constraints:
    • Pods are spread across different nodes, with the nodes in different zones where possible.
    • Multiple pods are allowed in the same zone.
  • Adds pod disruption budget:
    • It prevents the graceful shutdown of the webhook pod if it's the last remaining pod.

To enable this, you can add the following to the values.yaml:

yaml
webhook:
  highAvailability: true

Using priorityClass for critical Dynatrace components

Starting with Dynatrace Operator version 0.8.0, a priorityClass object is created by default when installing Dynatrace Operator. This priority class is initially set to a high value to ensure that the components that use it have a higher priority than other pods, and that critical components like the CSI driver are scheduled by Kubernetes. For details, see the Kubernetes documentation on PriorityClass.

You can change the default value of this parameter according to your environment and the individual use of priority classes within your cluster. Be aware that lowering the default value might impact the scheduling of the pods created by Dynatrace. priorityClass is used on the CSI driver pods by default, but it can also be used on OneAgent pods (see the priorityClassName parameter in DynaKube parameters).
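 
For example, to reuse the operator-created priority class on OneAgent pods, set priorityClassName in DynaKube. The class name below is the operator default at the time of writing; verify it in your cluster with kubectl get priorityclass:

yaml
...
spec:
  oneAgent:
    cloudNativeFullStack:
      priorityClassName: dynatrace-high-priority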

Set namespace-based isolation levels for pods

Kubernetes version 1.25+

You can set namespace-based isolation levels for pods using Pod Security Standards.

If the defaults property in the built-in admission controller is set to baseline or restricted, you need to mark the dynatrace namespace as privileged, as only the Privileged policy is supported by Dynatrace Operator (the CSI driver and OneAgent pods require more permissions than the Baseline or Restricted policies allow).

To do that, run the command below.

bash
kubectl label namespace dynatrace \
  pod-security.kubernetes.io/enforce=privileged \
  pod-security.kubernetes.io/audit=privileged \
  pod-security.kubernetes.io/warn=privileged
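 
To confirm the labels were applied, you can list them on the namespace:

bash
kubectl get namespace dynatrace --show-labels
 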
Related topics
  • Kubernetes/OpenShift monitoring

    Monitor Kubernetes/OpenShift with Dynatrace.