Configuration options for Dynatrace Operator on Kubernetes/OpenShift
See below for a list of configuration options available for Dynatrace Operator.
Configure build label propagation
As part of getting started with Kubernetes monitoring, you may want to configure build label propagation. When you're finished, you can return to the installation instructions for your manual (kubectl/oc) or helm deployment.
Build label propagation enables you to provide build and version metadata to the injected OneAgent for newly deployed pods. This information is then visible in the Properties and tags section of your entities pages.
How it works
You can reference the value of a metadata field in an environment variable.
Example:
env:
  - name: MY_POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
OneAgent is then injected into the newly deployed pods and collects the metadata provided via these environment variables.
Enable feature
To enable build label propagation, you need to set feature.dynatrace.com/label-version-detection to true in DynaKube. Note that because build label propagation requires webhook injection, it only works with applicationMonitoring and cloudNativeFullStack deployments.
Example:
apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
  annotations:
    feature.dynatrace.com/label-version-detection: "true"
spec:
  ...
  oneAgent:
    cloudNativeFullStack: {}
Default behavior
- The DT_RELEASE_VERSION environment variable gets the value from metadata.labels['app.kubernetes.io/version'].
- The DT_RELEASE_PRODUCT environment variable gets the value from metadata.labels['app.kubernetes.io/part-of'].
For example, if your application has the following pod:
apiVersion: v1
kind: Pod
metadata:
  ...
  labels:
    app.kubernetes.io/version: "1.0.0"
    app.kubernetes.io/part-of: "store"
spec:
  ...
the values of the labels are added to the environment variables of the injected containers:
apiVersion: v1
kind: Pod
metadata:
  ...
  labels:
    app.kubernetes.io/version: "1.0.0"
    app.kubernetes.io/part-of: "store"
spec:
  ...
  containers:
    - name: app
      ...
      env:
        - name: "DT_RELEASE_VERSION"
          valueFrom:
            fieldRef:
              fieldPath: metadata.labels['app.kubernetes.io/version']
        - name: "DT_RELEASE_PRODUCT"
          valueFrom:
            fieldRef:
              fieldPath: metadata.labels['app.kubernetes.io/part-of']
Note: If the DT_RELEASE_VERSION or DT_RELEASE_PRODUCT environment variables are already set on the container before the OneAgent injection, they will not be overwritten.
Configuration options
You can annotate your namespace to provide further mappings or overrule the defaults for pods within that namespace.
- Each annotation key is mapped to a specific environment variable.
- Each annotation value is the reference path in fieldPath.
- The available information for fieldPath is the same as for fieldRef.
Example to overwrite the default values for version and product, and enable stage and build-version:
annotations:
  mapping.release.dynatrace.com/version: "metadata.annotations['my-version']"
  mapping.release.dynatrace.com/product: "metadata.labels['app.kubernetes.io/name']"
  mapping.release.dynatrace.com/stage: "metadata.namespace"
  mapping.release.dynatrace.com/build-version: "metadata.labels['release.dynatrace.com/stage']"
Each of these annotations configures a different environment variable:
- mapping.release.dynatrace.com/version holds the fieldPath used for DT_RELEASE_VERSION. If this annotation is missing, mapping falls back to the default behavior.
- mapping.release.dynatrace.com/product holds the fieldPath used for DT_RELEASE_PRODUCT. If this annotation is missing, mapping falls back to the default behavior.
- mapping.release.dynatrace.com/stage holds the fieldPath used for DT_RELEASE_STAGE.
- mapping.release.dynatrace.com/build-version holds the fieldPath used for DT_RELEASE_BUILD_VERSION.
Note: The values aren't validated by Dynatrace Operator or the webhook, so make sure they are valid.
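For reference, here is a minimal sketch of a namespace manifest carrying two of these mapping annotations; the namespace name my-app and the my-version pod annotation are placeholders, not values from the product:
apiVersion: v1
kind: Namespace
metadata:
  name: my-app
  annotations:
    mapping.release.dynatrace.com/version: "metadata.annotations['my-version']"
    mapping.release.dynatrace.com/stage: "metadata.namespace"
With this in place, pods injected in my-app get DT_RELEASE_VERSION from their my-version pod annotation and DT_RELEASE_STAGE set to the namespace name.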
Add a custom properties file optional
As part of getting started with Kubernetes monitoring, you may want to add a custom properties file. When you're finished, you can return to the installation instructions for your manual (kubectl/oc) or helm deployment.
You can add a custom properties file by providing it as a value or by referencing it from a secret.
- To add the custom properties file as a value, see the example below.
customProperties:
  value: |
    [kubernetes_monitoring]
    ...
- To reference the custom properties file from a secret:
  - Create a secret with the following content.
    Note: The content of the secret has to be base64 encoded in order to work.
apiVersion: v1
kind: Secret
metadata:
  name: <customproperties-secret>
  namespace: dynatrace
data:
  customProperties: <base64 encoded properties>
- Add the secret to the custom properties.
customProperties:
  valueFrom: <customproperties-secret>
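Alternatively, you can create the secret with a single command; kubectl and oc handle the base64 encoding automatically. The local file name custom.properties is a placeholder:
kubectl -n dynatrace create secret generic customproperties-secret --from-file=customProperties=custom.properties
oc -n dynatrace create secret generic customproperties-secret --from-file=customProperties=custom.properties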
Add a custom certificate for ActiveGate optional
As part of getting started with Kubernetes monitoring, you may want to add a custom certificate for ActiveGate. When you're finished, you can return to the installation instructions for your manual (kubectl/oc) or helm deployment.
To add a custom certificate for ActiveGate:
- Create a secret.
kubectl -n dynatrace create secret generic dynakube-custom-certificate --from-file=server.p12 --from-literal=password=<password_to_server.p12> --from-file=server.crt
oc -n dynatrace create secret generic dynakube-custom-certificate --from-file=server.p12 --from-literal=password=<password_to_server.p12> --from-file=server.crt
- In your custom resource, enable the tlsSecretName parameter and enter the name of the secret you created. Example:
apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
spec:
  apiUrl: https://FQDN/api
  activeGate:
    tlsSecretName: dynakube-custom-certificate
    capabilities:
      - kubernetes-monitoring
Note: HTTP clients connecting to the ActiveGate REST endpoint must trust the provided certificates.
Configure proxy optional
As part of getting started with Kubernetes monitoring, you may want to configure a proxy. When you're finished, you can return to the installation instructions for your manual (kubectl/oc) or helm deployment.
You can configure optional parameters like proxy settings in the DynaKube custom resource file in order to:
- Download the OneAgent installer
- Ensure communication between the OneAgent and your Dynatrace environment
- Ensure communication between Dynatrace Operator and the Dynatrace API
There are two ways to provide the proxy, depending on whether your proxy uses credentials.
If you have a proxy that doesn't use credentials, enter your proxy URL directly in the value field for the proxy.
Example
apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
spec:
  apiUrl: https://environmentid.live.dynatrace.com/api
  proxy:
    value: http://mysuperproxy
If your proxy uses credentials
- Create a secret with a field called proxy that holds your encrypted proxy URL with the credentials. Example:
kubectl -n dynatrace create secret generic myproxysecret --from-literal="proxy=http://<user>:<password>@<IP>:<PORT>"
oc -n dynatrace create secret generic myproxysecret --from-literal="proxy=http://<user>:<password>@<IP>:<PORT>"
- Provide the name of the secret in the valueFrom section. Example:
apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
spec:
  apiUrl: https://environmentid.live.dynatrace.com/api
  proxy:
    valueFrom: myproxysecret
Read-only file systems support
cloudNativeFullStack
hostMonitoring
As part of getting started with Kubernetes monitoring, you may want to review the support for read-only file systems. When you're finished, you can return to the installation instructions for your manual (kubectl/oc) or helm deployment.
For read-only host file systems, support is enabled by default for cloudNativeFullStack and hostMonitoring with CSI driver configurations, so you don't need to set the ONEAGENT_ENABLE_VOLUME_STORAGE environment variable to true anymore.
To disable this feature, you can add the following annotation in your DynaKube custom resource.
apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
  annotations:
    feature.dynatrace.com/oneagent-readonly-host-fs: "false"
Configure monitoring for namespaces and pods
cloudNativeFullStack
applicationMonitoring
As part of getting started with Kubernetes monitoring, you may want to configure monitoring for namespaces and pods. When you're finished, you can return to the installation instructions for your manual (kubectl/oc) or helm deployment.
By default, Dynatrace Operator injects OneAgent into all namespaces, with the following exceptions:
- Namespaces starting with kube- or openshift-.
- The namespace where Dynatrace Operator was installed.
For more configuration options, see below.
Monitor all namespaces except selected pods
To disable monitoring for selected pods, annotate the pods that should be excluded, as in the example below.
...
metadata:
  annotations:
    ...
    oneagent.dynatrace.com/inject: "false"
For more pod annotation options, see Pod annotation list.
Monitor only specific namespaces
If you don't want Dynatrace Operator to inject OneAgent in all namespaces, you can set the namespaceSelector parameter in the DynaKube custom resource, and enable monitoring for specific namespaces that have the chosen label.
To label namespaces, use the command below, making sure to replace the placeholder with your own value.
kubectl label namespace <my_namespace> monitor=app
oc label namespace <my_namespace> monitor=app
To enable monitoring for the namespace that was just labeled, edit the DynaKube custom resource file as in the example below.
...
namespaceSelector:
  matchLabels:
    monitor: app
For details, see Labels and selectors.
Note: To add exceptions for specific pods within the selected namespaces, you can annotate the respective pods.
Exclude specific namespaces from being monitored
To enable this option, edit the DynaKube custom resource file as in the example below. Note that
- KEY is the key of the label, for example monitor.
- VALUE is the value of the label, for example app.
...
spec:
  namespaceSelector:
    matchExpressions:
      - key: KEY
        operator: NotIn
        values:
          - VALUE
The webhook injects into every namespace that matches the namespaceSelector.
The operator property can have the following values: In and NotIn.
- If you set In, the webhook only injects the pods in the namespaces that match the namespace selector.
- If you set NotIn, the webhook only injects the pods in all other namespaces that don't match the namespace selector.
For details, see Resources that support set-based requirements.
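For comparison, a minimal sketch of the same selector using the In operator, which injects only the namespaces labeled monitor: app:
...
spec:
  namespaceSelector:
    matchExpressions:
      - key: monitor
        operator: In
        values:
          - app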
Monitor only specific pods
Dynatrace Operator version 0.8.0+
Dynatrace Operator can be set to monitor namespaces without injecting into any pods, so you can choose which pods to monitor.
To enable this option:
- Disable automatic injection for namespaces that are monitored by this DynaKube.
Example:
apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
  annotations:
    feature.dynatrace.com/automatic-injection: "false"
spec:
  ...
- Annotate the pods that are to be monitored.
Example:
...
metadata:
  annotations:
    ...
    oneagent.dynatrace.com/inject: "true"
Pod annotation list
- All applicable pod annotations for applicationMonitoring without CSI driver:
  - dynatrace.com/inject: <"false">. If set to false, the webhook will not modify the pod.
  - oneagent.dynatrace.com/inject: <"false">. If set to false, no modifications regarding OneAgent will be applied to the pod.
  - data-ingest.dynatrace.com/inject: <"false">. If set to false, no metric enrichment file will be added to the pod.
  - oneagent.dynatrace.com/flavor: <"default"> or <"musl">. If set, it indicates whether binaries for glibc or musl are to be downloaded. It defaults to glibc.
    Note: If your container uses musl (for example, an Alpine base image), you must add the flavor annotation in order to monitor it.
  - oneagent.dynatrace.com/technologies: <"comma-separated technologies list">. If set, it filters which code modules are to be downloaded. It defaults to "all".
  - oneagent.dynatrace.com/install-path: <"path">. If set, it indicates the path where the unpacked OneAgent directory will be mounted. It defaults to "/opt/dynatrace/oneagent-paas".
  - oneagent.dynatrace.com/installer-url: <"url">. If set, it indicates the URL from which the OneAgent app-only package will be downloaded. It defaults to the Dynatrace environment API configured in the API URL of DynaKube.
- All applicable pod annotations for applicationMonitoring with CSI driver:
  - dynatrace.com/inject: <"false">. If set to false, the webhook will not modify the pod.
  - oneagent.dynatrace.com/inject: <"false">. If set to false, no modifications regarding OneAgent will be applied to the pod.
  - data-ingest.dynatrace.com/inject: <"false">. If set to false, no metric enrichment file will be added to the pod.
Example annotations:
...
metadata:
  annotations:
    oneagent.dynatrace.com/technologies: "java,nginx"
    oneagent.dynatrace.com/flavor: "musl"
    oneagent.dynatrace.com/install-path: "/dynatrace"
    oneagent.dynatrace.com/installer-url: "https://my-custom-url/route/file.zip"
Import Kubernetes API certificates
As part of getting started with Kubernetes monitoring, you may want to check how importing Kubernetes API certificates works. When you're finished, you can return to the installation instructions for your manual (kubectl/oc) or helm deployment.
Kubernetes API certificates are automatically imported for certificate validation checks. Kubernetes automatically creates a kube-root-ca.crt configmap in every namespace. This certificate is automatically mounted into every container at /var/run/secrets/kubernetes.io/serviceaccount/ca.crt and merged into the ActiveGate truststore file using an initContainer.
To get this feature, be sure to update Dynatrace Operator if you're using an earlier version.
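To check that the config map is present in a given namespace (the dynatrace namespace is used here only as an example), you can run:
kubectl -n dynatrace get configmap kube-root-ca.crt
oc -n dynatrace get configmap kube-root-ca.crt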
Configure security context constraints (OpenShift)
As part of getting started with Kubernetes monitoring, you may want to configure security context constraints (SCC) for OpenShift. When you're finished, you can return to the installation instructions for your manual (kubectl/oc) or helm deployment.
Configuring SCC is required on OpenShift for cloudNativeFullStack and applicationMonitoring with CSI driver deployments.
Dynatrace Operator needs permission to access the csi volumes, which are used to provide the necessary binaries to different pods. You must modify existing Security Context Constraints for your applications and make sure to add the csi volume entry. You can configure other entries according to your environment needs.
Example adding the csi volume:
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: <custom>
...
volumes:
  ...
  - csi
For more configuration options, see Example security context constraints.
Metadata metric enrichment
cloudNativeFullStack
applicationMonitoring
As part of getting started with Kubernetes monitoring, you may want to configure metadata metric enrichment. When you're finished, you can return to the installation instructions for your manual (kubectl/oc) or helm deployment.
Metadata metric enrichment leverages data from OneAgent and Dynatrace Operator by adding context and relevant data to the metrics sent. Enrichment means the logs and data are related back to entities (pods, processes, hosts). Every metric prefixed with dt.entity is due to metadata enrichment.
Every application pod that is instrumented by the Dynatrace webhook is automatically enriched with metric metadata.
Activate metadata enrichment
To activate metadata enrichment, you need to create a special token for data ingest and add it to the secret.
- Create a dataIngestToken token and enable the Ingest metrics permission (API v2).
- Follow the deployment instructions, making sure the dynakube secret you create in step 4 of the instructions includes the dataIngestToken token.
- Redeploy your monitored pods.
Note: You can add the dataIngestToken token manually at any time by editing the secret:
- Edit the existing secret.
kubectl edit secret <dynakube>
oc edit secret <dynakube>
- Add a new dataIngestToken key with your generated token to the secret, as in the example below:
apiVersion: v1
kind: Secret
metadata:
  name: dynakube
  namespace: dynatrace
data:
  apiToken: <apiToken base64 encoded>
  dataIngestToken: <dataIngestToken base64 encoded>
type: Opaque
- Redeploy your monitored pods.
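If you prefer not to edit the secret interactively, here is a sketch of recreating it non-interactively; the token values are placeholders, and kubectl handles the base64 encoding:
kubectl -n dynatrace create secret generic dynakube --from-literal=apiToken=<apiToken> --from-literal=dataIngestToken=<dataIngestToken> --dry-run=client -o yaml | kubectl apply -f -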
Disable metadata enrichment
To disable metadata enrichment, add the following annotation to the DynaKube custom resource:
metadata:
  annotations:
    ...
    feature.dynatrace.com/disable-metadata-enrichment: "true"
Alternatively, you can disable metadata enrichment by running the command below. Be sure to replace the placeholder (<...>) with the name of your DynaKube sample.
kubectl annotate dynakube -n dynatrace <your_DynaKube_CR> feature.dynatrace.com/disable-metadata-enrichment="true"
oc annotate dynakube -n dynatrace <your_DynaKube_CR> feature.dynatrace.com/disable-metadata-enrichment="true"
Enable AppArmor for enhanced security
As part of getting started with Kubernetes monitoring, you may want to enable AppArmor for enhanced security. When you're finished, you can return to the installation instructions for your manual (kubectl/oc) or helm deployment.
Enable AppArmor for Dynatrace Operator
You can make Dynatrace Operator more secure by enabling AppArmor. Depending on whether you set up monitoring using manual (kubectl/oc) or helm, select one of the options below.
- Add the following annotation to your DynaKube file to deploy ActiveGate with the AppArmor profile enabled:
apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
  annotations:
    feature.dynatrace.com/activegate-apparmor: "true"
- Add the following annotations to your Kubernetes/OpenShift YAML to deploy the webhook and Dynatrace Operator with the AppArmor profile enabled:
kind: Deployment
metadata:
  name: dynatrace-webhook
spec:
  template:
    metadata:
      annotations:
        container.apparmor.security.beta.kubernetes.io/webhook: runtime/default
---
kind: Deployment
metadata:
  name: dynatrace-operator
spec:
  template:
    metadata:
      annotations:
        container.apparmor.security.beta.kubernetes.io/dynatrace-operator: runtime/default
- Add the following properties to the values.yaml file to deploy ActiveGate and Dynatrace Operator with the AppArmor profile enabled:
operator:
  apparmor: true
webhook:
  apparmor: true
activeGate:
  apparmor: true
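Afterwards, apply the changed values to your release; this is a sketch assuming Dynatrace Operator was installed as the Helm release dynatrace-operator from a repository aliased dynatrace:
helm upgrade dynatrace-operator dynatrace/dynatrace-operator -n dynatrace -f values.yaml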
Enable a custom AppArmor profile for OneAgent
You can restrict OneAgent's access to a desired set of features. See below for how to enable a custom AppArmor profile and apply it to the OneAgent pods.
Create a custom OneAgent AppArmor profile
Install the profile on all worker nodes
Enforce the profile on all OneAgent pods
Create a custom OneAgent AppArmor profile
See Run OneAgent as a Docker container for details on how to create a custom AppArmor profile.
Install the profile on all worker nodes
OneAgent is deployed as a DaemonSet by default, which means that pods using the AppArmor profile run on every node. Therefore, you need to install the OneAgent AppArmor profile on all nodes. Depending on the environment, this can be achieved in several ways, such as by using kube-apparmor-manager or the security-profiles-operator. Refer to the official documentation of these tools for how to apply them in your cluster.
Enforce the profile on all OneAgent pods
To enable AppArmor for all the OneAgent pods, add the container.apparmor.security.beta.kubernetes.io/dynatrace-oneagent: localhost/oneagent annotation to one of the following fields, depending on your deployment:
- oneAgent.classicFullStack.annotations
- oneAgent.cloudNativeFullStack.annotations
- oneAgent.hostMonitoring.annotations
Example for a cloudNativeFullStack deployment:
apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
spec:
  apiUrl: https://ENVIRONMENTID.live.dynatrace.com/api
  oneAgent:
    cloudNativeFullStack:
      annotations:
        container.apparmor.security.beta.kubernetes.io/dynatrace-oneagent: localhost/oneagent
High availability mode for Helm deployments
As part of getting started with Kubernetes monitoring, you may want to configure high availability. When you're finished, you can return to the installation instructions for your helm deployment.
Note: For now, this feature is limited to Helm deployments.
The high availability mode offers the following capabilities:
- Increases the webhook deployment to two replicas.
- Adds pod topology spread constraints:
  - Pods are spread across different nodes, with the nodes in different zones where possible.
  - Multiple pods are allowed in the same zone.
- Adds a pod disruption budget:
  - It restricts graceful shutdowns of the webhook pod if it's the last remaining pod.
To enable this, you can add the following to the values.yaml:
webhook:
  highAvailability: true
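To verify the setting took effect, you can check that the webhook deployment (named dynatrace-webhook, as in the manifests above) now runs two replicas:
kubectl -n dynatrace get deployment dynatrace-webhook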
Using priorityClass for critical Dynatrace components
Starting with Dynatrace Operator version 0.8.0, a priorityClass object is created by default when installing Dynatrace Operator. This priority class is initially set to a high value to ensure that the components that use it have a higher priority than other pods, and that critical components like the CSI driver are scheduled by Kubernetes. For details, see the Kubernetes documentation on PriorityClass.
You can change the default value of this parameter according to your environment and the individual use of priority classes within your cluster. Be aware that lowering the default value might impact the scheduling of the pods created by Dynatrace. priorityClass is used on the CSI driver pods by default, but it can also be used on OneAgent pods (see the priorityClassName parameter in DynaKube parameters).
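As a sketch, assigning a priority class to OneAgent pods could look like the following; the class name high-priority is a placeholder for a PriorityClass that exists in your cluster:
apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
spec:
  ...
  oneAgent:
    cloudNativeFullStack:
      priorityClassName: high-priority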
Set namespace-based isolation levels for pods
Kubernetes version 1.25+
You can set namespace-based isolation levels for pods using Pod Security Standards.
If the defaults property in the built-in admission controller is set to baseline or restricted, you need to mark the dynatrace namespace as privileged, as only the Privileged policy is supported by Dynatrace Operator (the CSI driver and OneAgent pods require more permissions than the Baseline or Restricted policies allow).
To do that, run the command below.
kubectl label namespace dynatrace pod-security.kubernetes.io/enforce=privileged pod-security.kubernetes.io/audit=privileged pod-security.kubernetes.io/warn=privileged
Configure customPullSecret in DynaKube
To define a custom pull secret:
- Sign in to Docker with your Dynatrace environment ID as the username.
docker login <ADDRESS> -u <environmentID>
- Follow the instructions for creating a secret based on existing credentials.
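For example, here is a sketch of creating such a secret with kubectl and referencing it in DynaKube; my-pull-secret is a placeholder name:
kubectl -n dynatrace create secret docker-registry my-pull-secret --docker-server=<ADDRESS> --docker-username=<environmentID> --docker-password=<apiToken>
...
spec:
  customPullSecret: my-pull-secret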
Expected behavior
When a DynaKube custom resource is applied, a pull secret called <dynakube-name>-pull-secret is generated by default. This pull secret is used by:
- Kubernetes to pull OneAgent and ActiveGate images from the environment registry.
- Dynatrace Operator to check the manifests of the registry.
If you set the customPullSecret field in DynaKube, no pull secret is generated. To pull an image directly from the environment registry, the secret that customPullSecret points to needs to have auth credentials for the environment registry.
Example auth entry for the environment registry in .dockerconfigjson:
.dockerconfigjson: {
  "auths": {
    "<tenant-registry>": {
      "username": "<tenant-uid>",
      "password": "<apiToken>",
      "auth": "<tenant-uid>:<apiToken>" # <- base64 encoded; should be the one used by DynaKube.
    },
    ...
  }
}
Exclude selected URLs from proxy configuration
To set the list of URLs that should be excluded from the proxy configuration, add the following annotation to the DynaKube custom resource.
apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
  annotations:
    feature.dynatrace.com/no-proxy: "some.url.com,other.url.com"
The Dynatrace Operator then uses the no-proxy value when communicating with the Dynatrace environment. It does not affect communication with OneAgent or ActiveGate.
Configure minimum time between requests
Dynatrace Operator version 0.11.0+
Dynatrace Operator makes regular calls to Dynatrace to gather the information necessary to function properly.
The minimum time between requests from Dynatrace Operator, previously hard-coded to 15 minutes to reduce network load, can now be configured.
To set this time (in minutes), add the feature.dynatrace.com/dynatrace-api-request-threshold annotation to your DynaKube.
apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
  annotations:
    feature.dynatrace.com/dynatrace-api-request-threshold: "5"
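Alternatively, you can set the annotation on an existing DynaKube with the same command pattern used elsewhere on this page:
kubectl annotate dynakube -n dynatrace <your_DynaKube_CR> feature.dynatrace.com/dynatrace-api-request-threshold="5"
oc annotate dynakube -n dynatrace <your_DynaKube_CR> feature.dynatrace.com/dynatrace-api-request-threshold="5"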
The Operator makes three different types of requests:
- ActiveGate connection details.
- OneAgent connection details.
- Token scope verification.
The specified interval is counted independently for each of these request types.
Configure failure policy
Dynatrace Operator version 0.11.0+
The failure policy determines what happens when OneAgent injection fails for a particular pod in a Kubernetes cluster. By default, the failure policy is set to silent. You can override the failure policy for all injected pods that match the DynaKube by setting the feature.dynatrace.com/injection-failure-policy annotation to one of the following values.
- silent: if OneAgent injection fails for a particular pod, the pod continues to run without monitoring.
- fail: if OneAgent injection fails for a particular pod, the pod does not start, and the injection failure is treated as an error.
apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
  annotations:
    feature.dynatrace.com/injection-failure-policy: "fail" # or "silent"
Configure CSI Inline Ephemeral Volume Security
OpenShift version 4.13+
Dynatrace Operator version 0.11.1 and older
Starting with OpenShift 4.13, you need to set an additional label in order to use the CSI driver for the applicationMonitoring, hostMonitoring, or cloudNativeFullStack monitoring modes.
To configure the CSI driver, execute the following command:
kubectl label csidriver csi.oneagent.dynatrace.com security.openshift.io/csi-ephemeral-volume-profile=restricted