Configuration options for Dynatrace Operator on Kubernetes/OpenShift

See below for a list of configuration options available for Dynatrace Operator.

Add a custom properties file (optional)

As part of getting started with Kubernetes monitoring, you may want to add a custom properties file. When you are finished, you can return to the installation instructions for your kubectl/oc or helm deployment.

You can add a custom properties file by providing it as a value or by referencing it from a secret.

  • To add the custom properties file as a value, see the example below.
yaml
customProperties:
  value: |
    [kubernetes_monitoring]
    ...
  • To reference the custom properties file from a secret:
  1. Create a secret with the following content.

Note: The content of the secret has to be base64 encoded in order to work.

yaml
apiVersion: v1
kind: Secret
metadata:
  name: <customproperties-secret>
  namespace: dynatrace
data:
  customProperties: <base64 encoded properties>
  2. Add the secret to the custom properties.
yaml
customProperties:
  valueFrom: <customproperties-secret>
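
If you prefer not to base64-encode the properties yourself, a minimal sketch of creating the secret with kubectl, which encodes the file content for you (the file name custom.properties is a placeholder):

shell
kubectl -n dynatrace create secret generic <customproperties-secret> --from-file=customProperties=custom.properties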

Add a custom certificate for ActiveGate (optional)

As part of getting started with Kubernetes monitoring, you may want to add a custom certificate for ActiveGate. When you are finished, you can return to the installation instructions for your kubectl/oc or helm deployment.

To add a custom certificate for ActiveGate:

  1. Create a secret.

    shell
    kubectl -n dynatrace create secret generic dynakube-custom-certificate --from-file=server.p12 --from-literal=password=<password_to_server.p12> --from-file=server.crt
    shell
    oc -n dynatrace create secret generic dynakube-custom-certificate --from-file=server.p12 --from-literal=password=<password_to_server.p12> --from-file=server.crt
  2. In your custom resource, enable the tlsSecretName parameter and enter the name of the secret you created.

    Example:

    yaml
    apiVersion: dynatrace.com/v1beta1
    kind: DynaKube
    metadata:
      name: dynakube
      namespace: dynatrace
    spec:
      apiUrl: https://FQDN/api
      activeGate:
        tlsSecretName: dynakube-custom-certificate
        capabilities:
          - kubernetes-monitoring

    Note: HTTP clients connecting to the ActiveGate REST endpoint must trust the provided certificates.
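
Step 1 assumes you already have a server.p12 bundle. If you only have a certificate and a private key, one possible way to build the bundle with OpenSSL is sketched below (server.key is an assumed file name; reuse the same password when creating the secret):

shell
openssl pkcs12 -export -in server.crt -inkey server.key -out server.p12 -password pass:<password_to_server.p12>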

Configure proxy (optional)

As part of getting started with Kubernetes monitoring, you may want to configure proxy. When you are finished, you can return to the installation instructions for your kubectl/oc or helm deployment.

You can configure optional parameters like proxy settings in the DynaKube custom resource file in order to:

  • Download the OneAgent installer
  • Ensure communication between the OneAgent and your Dynatrace environment
  • Ensure communication between Dynatrace Operator and the Dynatrace API

There are two ways to provide the proxy, depending on whether your proxy uses credentials.

If you have a proxy that doesn't use credentials, enter your proxy URL directly in the value field for the proxy.

Example

yaml
apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
spec:
  apiUrl: https://environmentid.live.dynatrace.com/api
  proxy:
    value: http://mysuperproxy

If your proxy uses credentials:

  1. Create a secret with a field called proxy that holds your encrypted proxy URL with the credentials.

    Example.

    shell
    kubectl -n dynatrace create secret generic myproxysecret --from-literal="proxy=http://<user>:<password>@<IP>:<PORT>"
    shell
    oc -n dynatrace create secret generic myproxysecret --from-literal="proxy=http://<user>:<password>@<IP>:<PORT>"
    Rules for the proxy password

    1. The proxy password needs to meet the following requirements:

       Requirements             Corresponding characters
       Characters allowed       [A-Za-z0-9]
                                ! " # $ ( ) * - . / : ; < > ? @ [ ] ^ _ { | }
       Characters not allowed   blank space
                                ' ` , & = + % \

    2. The password specified in the CR or in the proxy secret has to be a URL-encoded string. For example, if the actual password is password!"#$()*-./:;<>?@[]^_{|}~, the corresponding URL-encoded string is password!%22%23%24()*-.%2F%3A%3B%3C%3E%3F%40%5B%5D%5E_%7B%7C%7D~.

  2. Provide the name of the secret in the valueFrom section.
    Example.

    yaml
    apiVersion: dynatrace.com/v1beta1
    kind: DynaKube
    metadata:
      name: dynakube
      namespace: dynatrace
    spec:
      apiUrl: https://environmentid.live.dynatrace.com/api
      proxy:
        valueFrom: myproxysecret

Read-only file systems support

Dynatrace Operator version 0.5.0+ | cloudNativeFullStack | hostMonitoring

As part of getting started with Kubernetes monitoring, you may want to review the support for read-only file systems. When you are finished, you can return to the installation instructions for your kubectl/oc or helm deployment.

For read-only host file systems, support is enabled by default for cloudNativeFullStack and hostMonitoring with CSI driver configurations, so you don't need to set the ONEAGENT_ENABLE_VOLUME_STORAGE environment variable to true anymore.

To disable this feature, you can add the following annotation in your DynaKube custom resource.

yaml
apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
  annotations:
    feature.dynatrace.com/disable-oneagent-readonly-host-fs: "true"
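
Alternatively, a sketch of setting the same annotation on an existing DynaKube from the command line, mirroring the annotate commands used elsewhere in this guide (replace the placeholder with the name of your DynaKube):

shell
kubectl annotate dynakube -n dynatrace <your_DynaKube_CR> feature.dynatrace.com/disable-oneagent-readonly-host-fs="true"
shell
oc annotate dynakube -n dynatrace <your_DynaKube_CR> feature.dynatrace.com/disable-oneagent-readonly-host-fs="true"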

Configure monitoring for namespaces and pods

cloudNativeFullStack | applicationMonitoring

As part of getting started with Kubernetes monitoring, you may want to configure monitoring for namespaces and pods. When you are finished, you can return to the installation instructions for your kubectl/oc or helm deployment.

By default, Dynatrace Operator injects OneAgent into all namespaces, with the following exceptions:

  • Namespaces starting with kube- or openshift-.
  • The namespace where Dynatrace Operator was installed.

For more configuration options, see below.

  • Option 1: Monitor all namespaces except selected pods.

To disable monitoring for selected pods, annotate the pods that should be excluded, as in the example below.

yaml
...
metadata:
  annotations:
    ...
    oneagent.dynatrace.com/inject: "false"

For more pod annotation options, see Pod annotation list.

  • Option 2: Monitor only specific namespaces.

If you don't want Dynatrace Operator to inject OneAgent in all namespaces, you can set the namespaceSelector parameter in the DynaKube custom resource, and enable monitoring for specific namespaces that have the chosen label.

To label namespaces, use the command below, making sure to replace the placeholder with your own value.

shell
kubectl label namespace <my_namespace> monitor=app
shell
oc label namespace <my_namespace> monitor=app

To enable monitoring for the namespace that was just labelled, edit the DynaKube custom resource file as in the example below.

yaml
...
namespaceSelector:
  matchLabels:
    monitor: app

For details, see Labels and selectors.

Note: To add exceptions for specific pods within the selected namespaces, you can annotate the respective pods.

  • Option 3: Exclude specific namespaces from being monitored.

To enable this option, edit the DynaKube custom resource file as in the example below. Note that:

  • key is the key of the label, for example monitor.
  • value is the value of the label, for example app.
yaml
...
spec:
  namespaceSelector:
    matchExpressions:
    - key: KEY
      operator: NotIn
      values:
      - VALUE

The webhook injects into every namespace that matches the namespaceSelector.

The operator property can have the following values: In and NotIn.

  • If you set In, the webhook only injects pods in namespaces that match the namespace selector.
  • If you set NotIn, the webhook only injects pods in namespaces that don't match the namespace selector. For details, see Resources that support set-based requirements.

Pod annotation list

  • All applicable pod annotations for applicationMonitoring without CSI driver:

    • oneagent.dynatrace.com/inject: <"false">. If set to false, no modifications will be applied to the pod. If not set, the default on the namespace is used.
    • oneagent.dynatrace.com/flavor: <"default"> or <"musl">. If set, it indicates whether binaries for glibc or musl are to be downloaded. It defaults to glibc.
      Note: If your container uses musl (for example, Alpine base image), you must add the flavor annotation in order to monitor it.
    • oneagent.dynatrace.com/technologies: <"comma-separated technologies list">. If set, it filters which code modules are to be downloaded. It defaults to "all".
    • oneagent.dynatrace.com/install-path: <"path">. If set, it indicates the path where the unpacked OneAgent directory will be mounted. It defaults to "/opt/dynatrace/oneagent-paas".
    • oneagent.dynatrace.com/installer-url: <"url">. If set, it indicates the URL from where the OneAgent app-only package will be downloaded. It defaults to the Dynatrace environment API configured as the API URL of the DynaKube.
  • All applicable pod annotations for applicationMonitoring with CSI driver:

    • oneagent.dynatrace.com/inject: <"false">. If set to false, no modifications will be applied to the pod. If not set, the default on the namespace is used.

Example annotations:

yaml
...
metadata:
  annotations:
    oneagent.dynatrace.com/technologies: "java,nginx"
    oneagent.dynatrace.com/flavor: "musl"
    oneagent.dynatrace.com/install-path: "/dynatrace"
    oneagent.dynatrace.com/installer-url: "https://my-custom-url/route/file.zip"
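
Because injection happens when pods are created, these annotations belong on the pod template of the owning workload rather than on an already running pod. A minimal sketch for a Deployment (the Deployment name and image are placeholders):

yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        oneagent.dynatrace.com/flavor: "musl"
    spec:
      containers:
      - name: my-app
        image: my-registry/my-app:latest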

Import Kubernetes API certificates

As part of getting started with Kubernetes monitoring, you may want to check how importing Kubernetes API certificates works. When you are finished, you can return to the installation instructions for your kubectl/oc or helm deployment.

Starting with Dynatrace Operator version 0.3.0, Kubernetes API certificates are automatically imported for certificate validation checks. Kubernetes automatically creates a kube-root-ca.crt configmap in every namespace. This certificate is automatically mounted into every container at /var/run/secrets/kubernetes.io/serviceaccount/ca.crt and merged into the ActiveGate truststore file using an initContainer. To get this feature, be sure to update Dynatrace Operator if you're using an earlier version.
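
On clusters that provide this configmap (Kubernetes 1.20+), you can confirm it exists in the Dynatrace namespace with, for example:

shell
kubectl -n dynatrace get configmap kube-root-ca.crt
shell
oc -n dynatrace get configmap kube-root-ca.crt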

Configure security context constraints (OpenShift)

As part of getting started with Kubernetes monitoring, you may want to configure security context constraints for OpenShift. When you are finished, you can return to the installation instructions for your kubectl/oc or helm deployment.

Note: Configuring security context constraints is required for OpenShift for cloudNativeFullStack and applicationMonitoring with CSI driver deployments.

Dynatrace Operator needs permission to access the CSI volumes, which are used to provide the necessary binaries to different pods. To allow pods access to the CSI volumes, you must add a security context constraint.

To add a security context constraint

  1. Create a file called restricted-csi.yaml with the following content.

    Note: You can configure the file according to your needs; just make sure you add csi to the volumes list.

    yaml
    apiVersion: security.openshift.io/v1
    kind: SecurityContextConstraints
    metadata:
      name: restricted-csi
    runAsUser:
      type: MustRunAsRange
    seLinuxContext:
      type: MustRunAs
    fsGroup:
      type: MustRunAs
    supplementalGroups:
      type: RunAsAny
    allowHostDirVolumePlugin: true
    allowHostIPC: false
    allowHostNetwork: false
    allowHostPID: false
    allowHostPorts: false
    allowPrivilegeEscalation: true
    allowPrivilegedContainer: true
    allowedCapabilities: null
    defaultAddCapabilities: null
    priority: null
    readOnlyRootFilesystem: false
    groups:
    - system:authenticated
    requiredDropCapabilities:
    - KILL
    - MKNOD
    - SETUID
    - SETGID
    users: []
    volumes:
    - configMap
    - downwardAPI
    - emptyDir
    - hostPath
    - persistentVolumeClaim
    - projected
    - secret
    - csi
  2. Save the file.

  3. Run the command below to create the security context constraint.

    shell
    oc apply -f restricted-csi.yaml
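
You can then verify that the security context constraint exists (an optional check):

shell
oc get scc restricted-csi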

Metadata metric enrichment

Dynatrace Operator version 0.4.0+ | cloudNativeFullStack | applicationMonitoring

As part of getting started with Kubernetes monitoring, you may want to configure metadata metric enrichment. When you are finished, you can return to the installation instructions for your kubectl/oc or helm deployment.

Metadata metric enrichment uses data from OneAgent and Dynatrace Operator to add context to the metrics that are sent: logs and metric data points are related back to entities (pods, processes, hosts). Every dt.entity prefix you see on metric data is the result of metadata enrichment.

Starting with Dynatrace Operator version 0.4+, every application pod that is instrumented by the Dynatrace Webhook is automatically enriched with metric metadata.

Activate metadata enrichment

To activate metadata enrichment, you need to create a special token for data ingest and add it to the secret.

  1. Create a dataIngestToken token and enable the Ingest metrics permission (API v2).
  2. Follow the deployment instructions, making sure the dynakube secret you create in step 4 of the instructions includes the dataIngestToken token.
  3. Redeploy your monitored pods.
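
For reference, a minimal sketch of creating the dynakube secret with both tokens in one command (the token values are placeholders; kubectl base64-encodes them for you):

shell
kubectl -n dynatrace create secret generic dynakube --from-literal=apiToken=<apiToken> --from-literal=dataIngestToken=<dataIngestToken>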

Note: You can add the dataIngestToken token manually at any time by editing the secret:

  1. Edit the existing secret.

    shell
    kubectl -n dynatrace edit secret <dynakube>
    shell
    oc -n dynatrace edit secret <dynakube>
  2. Add a new dataIngestToken key with your generated token to the secret, as in the example below:

    yaml
    apiVersion: v1
    kind: Secret
    metadata:
      name: dynakube
      namespace: dynatrace
    data:
      apiToken: <apiToken base64 encoded>
      dataIngestToken: <dataIngestToken base64 encoded>
    type: Opaque
  3. Redeploy your monitored pods.

Disable metadata enrichment

To disable metadata enrichment, add the following annotation to the DynaKube custom resource:

yaml
metadata:
  annotations:
    ...
    feature.dynatrace.com/disable-metadata-enrichment: "true"

Alternatively, you can disable metadata enrichment by running the command below. Be sure to replace the placeholder (<...>) with the name of your DynaKube custom resource.

shell
kubectl annotate dynakube -n dynatrace <your_DynaKube_CR> feature.dynatrace.com/disable-metadata-enrichment="true"
shell
oc annotate dynakube -n dynatrace <your_DynaKube_CR> feature.dynatrace.com/disable-metadata-enrichment="true"

Enable AppArmor for enhanced security

Dynatrace Operator version 0.6.0+

As part of getting started with Kubernetes monitoring, you may want to enable AppArmor for enhanced security. When you are finished, you can return to the installation instructions for your kubectl/oc or helm deployment.

You can make Dynatrace Operator more secure by enabling AppArmor. Depending on whether you set up monitoring using kubectl/oc or Helm, select one of the options below: for kubectl/oc deployments, follow steps 1 and 2; for Helm deployments, edit the values.yaml file as shown after the steps.

  1. Add the following annotation to your DynaKube file to deploy ActiveGate with AppArmor profile enabled:

    yaml
    apiVersion: dynatrace.com/v1beta1
    kind: DynaKube
    metadata:
      annotations:
        feature.dynatrace.com/activegate-apparmor: "true"
  2. Add the following annotations to your Kubernetes/OpenShift YAML to deploy the webhook and Dynatrace Operator with AppArmor profile enabled:

    yaml
    kind: Deployment
    metadata:
      name: dynatrace-webhook
    spec:
      template:
        metadata:
          annotations:
            container.apparmor.security.beta.kubernetes.io/webhook: runtime/default
    ---
    kind: Deployment
    metadata:
      name: dynatrace-operator
    spec:
      template:
        metadata:
          annotations:
            container.apparmor.security.beta.kubernetes.io/dynatrace-operator: runtime/default

For Helm deployments, add the following properties to the values.yaml file to deploy ActiveGate and Dynatrace Operator with AppArmor profile enabled:

yaml
operator:
  apparmor: true
webhook:
  apparmor: true
activeGate:
  apparmor: true

High availability mode for Helm deployments

Dynatrace Operator version 0.6+

As part of getting started with Kubernetes monitoring, you may want to configure high availability. When you are finished, you can return to the installation instructions for your kubectl/oc or helm deployment.

Note: For now, this feature is limited to Helm deployments.

The high availability mode offers the following capabilities:

  • Increases the webhook deployment to two replicas.
  • Adds pod topology spread constraints:
    • Pods are spread across different nodes, with the nodes in different zones where possible.
    • Multiple pods are allowed in the same zone.
  • Adds pod disruption budget:
    • Restricts graceful shutdowns of the webhook pod if it's the last remaining pod.

To enable this, you can add the following to the values.yaml:

yaml
webhook:
  highAvailability: true
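
To apply the change, re-run your Helm upgrade with the updated values file. A sketch is shown below; both the release name dynatrace-operator and the chart reference dynatrace/dynatrace-operator are placeholders for your own setup:

shell
helm upgrade dynatrace-operator dynatrace/dynatrace-operator -n dynatrace -f values.yaml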

Authenticate ActiveGate to the Dynatrace Cluster (optional)

Dynatrace Operator version 0.7.0+

Starting with Dynatrace Operator version 0.7.0+, you can create an authentication token for your ActiveGate and use it to connect to the Dynatrace Cluster. Adding the annotation shown below automatically creates an authentication token for the ActiveGate, which Dynatrace Operator rotates every 30 days. When an authentication token is rotated, the affected ActiveGate is automatically deleted and redeployed.

To create the authentication token

  1. Make sure to enable the activeGateTokenManagement.create permission (API v2) for your API token.

  2. Add the following annotation to the DynaKube custom resource.

    yaml
    apiVersion: dynatrace.com/v1beta1
    kind: DynaKube
    metadata:
      name: dynakube
      namespace: dynatrace
      annotations:
        feature.dynatrace.com/enable-activegate-authtoken: "true"

To disable the configuration, remove the annotation.
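
As with the other feature flags in this guide, you can also set the annotation on an existing DynaKube from the command line; a sketch (replace the placeholder with the name of your DynaKube):

shell
kubectl annotate dynakube -n dynatrace <your_DynaKube_CR> feature.dynatrace.com/enable-activegate-authtoken="true"
shell
oc annotate dynakube -n dynatrace <your_DynaKube_CR> feature.dynatrace.com/enable-activegate-authtoken="true"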

Using priorityClass for critical Dynatrace components

Starting with Dynatrace Operator version 0.8.0+, a priorityClass object is created by default when installing the Dynatrace Operator. This priority class is initially set to a high value to ensure that the components that use it have a higher priority than other pods, and that critical components like the CSI driver are scheduled by Kubernetes. For details, see the Kubernetes documentation on PriorityClass.

You can change the default value of this parameter according to your environment and the individual use of priority classes within your cluster. Be aware that lowering the default value might impact the scheduling of the pods created by Dynatrace. priorityClass is used on the CSI driver pods by default, but it can also be used on OneAgent pods (see the priorityClassName parameter in DynaKube parameters).
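
As an illustration of the OneAgent case, a sketch of referencing a priority class in a DynaKube that uses cloudNativeFullStack (the class name dynatrace-high-priority is a placeholder; use a priority class that exists in your cluster):

yaml
apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
spec:
  apiUrl: https://environmentid.live.dynatrace.com/api
  oneAgent:
    cloudNativeFullStack:
      priorityClassName: dynatrace-high-priority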

Related topics
  • Kubernetes/OpenShift monitoring

    Monitor Kubernetes/OpenShift with Dynatrace.