Configuration options for Dynatrace Operator on Kubernetes/OpenShift
See below for a list of configuration options available for Dynatrace Operator.
Add a custom properties file optional
As part of getting started with Kubernetes monitoring, you may want to add a custom properties file. When you are finished, you can return to the installation instructions for your kubectl/oc or helm deployment.
You can add a custom properties file by providing it as a value or by referencing it from a secret.
- To add the custom properties file as a value, see the example below.
customProperties:
  value: |
    [kubernetes_monitoring]
    ...
- To reference the custom properties file from a secret:
  - Create a secret with the following content.
    Note: The content of the secret has to be base64-encoded in order to work.
apiVersion: v1
kind: Secret
metadata:
  name: <customproperties-secret>
  namespace: dynatrace
data:
  customProperties: <base64 encoded properties>
- Add the secret to the custom properties.
  customProperties:
    valueFrom: <customproperties-secret>
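Building this secret by hand means base64-encoding the properties content yourself. A minimal sketch of the encode/decode round trip, using placeholder properties content:

```shell
# Placeholder properties content; substitute your real custom properties
props='[kubernetes_monitoring]'

# Encode for the secret's data field; strip the newlines base64 may insert
encoded=$(printf '%s' "$props" | base64 | tr -d '\n')
echo "$encoded"

# Decode to verify the round trip
printf '%s' "$encoded" | base64 -d
```

As an alternative, Kubernetes secrets also accept plain-text values under a stringData field, which the API server encodes for you.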
Add a custom certificate for ActiveGate optional
As part of getting started with Kubernetes monitoring, you may want to add a custom certificate for ActiveGate. When you are finished, you can return to the installation instructions for your kubectl/oc or helm deployment.
To add a custom certificate for ActiveGate:
- Create a secret.
  kubectl -n dynatrace create secret generic dynakube-custom-certificate --from-file=server.p12 --from-literal=password=<password_to_server.p12> --from-file=server.crt
  oc -n dynatrace create secret generic dynakube-custom-certificate --from-file=server.p12 --from-literal=password=<password_to_server.p12> --from-file=server.crt
- In your custom resource, enable the tlsSecretName parameter and enter the name of the secret you created. Example:
apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
spec:
  apiUrl: https://FQDN/api
  activeGate:
    tlsSecretName: dynakube-custom-certificate
    capabilities:
      - kubernetes-monitoring
Note: HTTP clients connecting to the ActiveGate REST endpoint must trust the provided certificates.
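If you need a server.p12 and server.crt pair for a quick test of this setup, one way to generate a self-signed pair is with openssl. This is only a sketch for testing; the subject name and password are placeholders, and production setups should use a CA-issued certificate:

```shell
# Generate a self-signed certificate and private key (testing only)
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout server.key -out server.crt -days 365 \
  -subj "/CN=dynakube-activegate.dynatrace"

# Bundle the certificate and key into a PKCS#12 keystore
openssl pkcs12 -export -in server.crt -inkey server.key \
  -out server.p12 -passout pass:changeit
```

The file names match those used in the secret creation command above; the password passed to -passout is what you would supply for the password literal in the secret.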
Configure proxy optional
As part of getting started with Kubernetes monitoring, you may want to configure a proxy. When you are finished, you can return to the installation instructions for your kubectl/oc or helm deployment.
You can configure optional parameters like proxy settings in the DynaKube custom resource file in order to:
- Download the OneAgent installer
- Ensure communication between the OneAgent and your Dynatrace environment
- Ensure communication between Dynatrace Operator and the Dynatrace API
There are two ways to provide the proxy, depending on whether your proxy uses credentials.
If you have a proxy that doesn't use credentials, enter your proxy URL directly in the value field for the proxy.
Example
apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
spec:
  apiUrl: https://environmentid.live.dynatrace.com/api
  proxy:
    value: http://mysuperproxy
If your proxy uses credentials:
- Create a secret with a field called proxy that holds your encrypted proxy URL with the credentials. Example:
  kubectl -n dynatrace create secret generic myproxysecret --from-literal="proxy=http://<user>:<password>@<IP>:<PORT>"
  oc -n dynatrace create secret generic myproxysecret --from-literal="proxy=http://<user>:<password>@<IP>:<PORT>"
- Provide the name of the secret in the valueFrom section. Example:
  apiVersion: dynatrace.com/v1beta1
  kind: DynaKube
  metadata:
    name: dynakube
    namespace: dynatrace
  spec:
    apiUrl: https://environmentid.live.dynatrace.com/api
    proxy:
      valueFrom: myproxysecret
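One detail to watch: if the user name or password contains characters that are special inside URLs (such as @, :, or /), they generally have to be percent-encoded before being embedded in the proxy URL. A sketch of that encoding step, assuming python3 is available; the password value is a made-up example:

```shell
# Hypothetical password containing URL-special characters
password='p@ss:word'

# Percent-encode it so it can be embedded safely in the URL
encoded=$(python3 -c "import sys, urllib.parse; print(urllib.parse.quote(sys.argv[1], safe=''))" "$password")

# Resulting proxy URL with encoded credentials (host and port are placeholders)
echo "http://user:${encoded}@proxy.example.com:8080"
```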
Read-only file systems support
Dynatrace Operator version 0.5.0+ | cloudNativeFullStack, hostMonitoring
As part of getting started with Kubernetes monitoring, you may want to review the support for read-only file systems. When you are finished, you can return to the installation instructions for your kubectl/oc or helm deployment.
For read-only host file systems, support is enabled by default for cloudNativeFullStack and hostMonitoring with CSI driver configurations, so you no longer need to set the ONEAGENT_ENABLE_VOLUME_STORAGE environment variable to true.
To disable this feature, you can add the following annotation in your DynaKube custom resource.
apiVersion: dynatrace.com/v1beta1
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
  annotations:
    feature.dynatrace.com/disable-oneagent-readonly-host-fs: "true"
Configure monitoring for namespaces and pods
cloudNativeFullStack, applicationMonitoring
As part of getting started with Kubernetes monitoring, you may want to configure monitoring for namespaces and pods. When you are finished, you can return to the installation instructions for your kubectl/oc or helm deployment.
By default, Dynatrace Operator injects OneAgent into all namespaces, with the following exceptions:
- Namespaces starting with kube- or openshift-.
- The namespace where Dynatrace Operator was installed.
For more configuration options, see below.
- Option 1: Monitor all namespaces except selected pods.
To disable monitoring for selected pods, annotate the pods that should be excluded, as in the example below.
...
metadata:
  annotations:
    ...
    oneagent.dynatrace.com/inject: "false"
For more pod annotation options, see Pod annotation list.
- Option 2: Monitor only specific namespaces.
If you don't want Dynatrace Operator to inject OneAgent in all namespaces, you can set the namespaceSelector parameter in the DynaKube custom resource, and enable monitoring for specific namespaces that have the chosen label.
To label namespaces, use the command below, making sure to replace the placeholder with your own value.
kubectl label namespace <my_namespace> monitor=app
oc label namespace <my_namespace> monitor=app
To enable monitoring for the namespace that was just labeled, edit the DynaKube custom resource file as in the example below.
...
namespaceSelector:
  matchLabels:
    monitor: app
For details, see Labels and selectors.
Note: To add exceptions for specific pods within the selected namespaces, you can annotate the respective pods.
- Option 3: Exclude specific namespaces from being monitored.
  To enable this option, edit the DynaKube custom resource file as in the example below. Note that:
  - key is the key of the label, for example monitor.
  - value is the value of the label, for example app.
...
spec:
  namespaceSelector:
    matchExpressions:
    - key: KEY
      operator: NotIn
      values:
      - VALUE
The webhook will inject OneAgent into every namespace that matches all expressions of the namespaceSelector.
The operator property can have the following values: In and NotIn.
- If you set In, the webhook will only inject the pods in namespaces that match the namespace selector.
- If you set NotIn, the webhook will only inject the pods in all other namespaces that don't match the namespace selector.
For details, see Resources that support set-based requirements.
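As a concrete sketch, a selector that monitors only namespaces labeled monitor=app, written with matchExpressions instead of matchLabels (label key and value are the examples used on this page):

```yaml
spec:
  namespaceSelector:
    matchExpressions:
    - key: monitor
      operator: In
      values:
      - app
```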
Pod annotation list
- All applicable pod annotations for applicationMonitoring without CSI driver:
  - oneagent.dynatrace.com/inject: <"false">. If set to false, no modifications will be applied to the pod. If not set, the default on the namespace is used.
  - oneagent.dynatrace.com/flavor: <"default"> or <"musl">. If set, it indicates whether binaries for glibc or musl are to be downloaded. It defaults to glibc.
    Note: If your container uses musl (for example, Alpine base image), you must add the flavor annotation in order to monitor it.
  - oneagent.dynatrace.com/technologies: <"comma-separated technologies list">. If set, it filters which code modules are to be downloaded. It defaults to "all".
  - oneagent.dynatrace.com/install-path: <"path">. If set, it indicates the path where the unpacked OneAgent directory will be mounted. It defaults to "/opt/dynatrace/oneagent-paas".
  - oneagent.dynatrace.com/installer-url: <"url">. If set, it indicates the URL from where the OneAgent app-only package will be downloaded. It defaults to the Dynatrace environment API configured in the API URL of DynaKube.
- All applicable pod annotations for applicationMonitoring with CSI driver:
  - oneagent.dynatrace.com/inject: <"false">. If set to false, no modifications will be applied to the pod. If not set, the default on the namespace is used.
Example annotations:
...
metadata:
  annotations:
    oneagent.dynatrace.com/technologies: "java,nginx"
    oneagent.dynatrace.com/flavor: "musl"
    oneagent.dynatrace.com/install-path: "/dynatrace"
    oneagent.dynatrace.com/installer-url: "https://my-custom-url/route/file.zip"
Import Kubernetes API certificates
As part of getting started with Kubernetes monitoring, you may want to check how importing Kubernetes API certificates works. When you are finished, you can return to the installation instructions for your kubectl/oc or helm deployment.
Starting with Dynatrace Operator version 0.3.0, Kubernetes API certificates are automatically imported for certificate validation checks. Kubernetes automatically creates a kube-root-ca.crt configmap in every namespace. This certificate is automatically mounted into every container at /var/run/secrets/kubernetes.io/serviceaccount/ca.crt and merged into the ActiveGate truststore file using an initContainer.
To get this feature, be sure to update Dynatrace Operator if you're using an earlier version.
Configure security context constraints (OpenShift)
As part of getting started with Kubernetes monitoring, you may want to configure security context constraints for OpenShift. When you are finished, you can return to the installation instructions for your kubectl/oc or helm deployment.
Note: Configuring security context constraints is required for OpenShift for cloudNativeFullStack and applicationMonitoring with CSI driver deployments.
Dynatrace Operator needs permission to access the csi volumes, which are used to provide the necessary binaries to different pods. To allow pods access to the csi volumes, you must add a security context constraint.
To add a security context constraint:
- Create a file called restricted-csi.yaml with the following content.
  Note: You can configure the file according to your needs; just make sure you add csi to the volumes.
  apiVersion: security.openshift.io/v1
  kind: SecurityContextConstraints
  metadata:
    name: restricted-csi
  runAsUser:
    type: MustRunAsRange
  seLinuxContext:
    type: MustRunAs
  fsGroup:
    type: MustRunAs
  supplementalGroups:
    type: RunAsAny
  allowHostDirVolumePlugin: true
  allowHostIPC: false
  allowHostNetwork: false
  allowHostPID: false
  allowHostPorts: false
  allowPrivilegeEscalation: true
  allowPrivilegedContainer: true
  allowedCapabilities: null
  defaultAddCapabilities: null
  priority: null
  readOnlyRootFilesystem: false
  groups:
  - system:authenticated
  requiredDropCapabilities:
  - KILL
  - MKNOD
  - SETUID
  - SETGID
  users: []
  volumes:
  - configMap
  - downwardAPI
  - emptyDir
  - hostPath
  - persistentVolumeClaim
  - projected
  - secret
  - csi
- Save the file.
- Run the command below to create the security context constraint.
  oc apply -f restricted-csi.yaml
Metadata metric enrichment
Dynatrace Operator version 0.4.0+ | cloudNativeFullStack, applicationMonitoring
As part of getting started with Kubernetes monitoring, you may want to configure metadata metric enrichment. When you are finished, you can return to the installation instructions for your kubectl/oc or helm deployment.
Metadata metric enrichment leverages data from OneAgent and Dynatrace Operator by adding additional context or relevant data to the metrics sent. Enrichment means the logs and data are related back to entities (pods, processes, hosts). Every metric prefixed with dt.entity is due to metadata enrichment.
Starting with Dynatrace Operator version 0.4+, every application pod that is instrumented by the Dynatrace Webhook is automatically enriched with metric metadata.
Activate metadata enrichment
To activate metadata enrichment, you need to create a special token for data ingest and add it to the secret.
- Create a dataIngestToken token and enable the Ingest metrics permission (API v2).
- Follow the deployment instructions, making sure the dynakube secret you create in step 4 of the instructions includes the dataIngestToken token.
- Redeploy your monitored pods.
Note: You can add the dataIngestToken token manually at any time by editing the secret:
- Edit the existing secret.
  kubectl edit secret <dynakube>
  oc edit secret <dynakube>
- Add a new dataIngestToken key with your generated token to the secret, as in the example below:
  apiVersion: v1
  kind: Secret
  metadata:
    name: dynakube
    namespace: dynatrace
  data:
    apiToken: <apiToken base64 encoded>
    dataIngestToken: <dataIngestToken base64 encoded>
  type: Opaque
- Redeploy your monitored pods.
Disable metadata enrichment
To disable metadata enrichment, add the following annotation to the DynaKube custom resource:
metadata:
  annotations:
    ...
    feature.dynatrace.com/disable-metadata-enrichment: "true"
Alternatively, you can disable metadata enrichment by running the command below. Be sure to replace the placeholder (<...>) with the name of your DynaKube sample.
kubectl annotate dynakube -n dynatrace <your_DynaKube_CR> feature.dynatrace.com/disable-metadata-enrichment="true"
oc annotate dynakube -n dynatrace <your_DynaKube_CR> feature.dynatrace.com/disable-metadata-enrichment="true"
Enable AppArmor for enhanced security
Dynatrace Operator version 0.6.0+
As part of getting started with Kubernetes monitoring, you may want to enable AppArmor for enhanced security. When you are finished, you can return to the installation instructions for your kubectl/oc or helm deployment.
You can make Dynatrace Operator more secure by enabling AppArmor. Depending on whether you set up monitoring using kubectl/oc or helm, select one of the options below.
- Add the following annotation to your DynaKube file to deploy ActiveGate with the AppArmor profile enabled:
  apiVersion: dynatrace.com/v1beta1
  kind: DynaKube
  metadata:
    annotations:
      feature.dynatrace.com/activegate-apparmor: "true"
- Add the following annotations to your Kubernetes/OpenShift YAML to deploy the webhook and Dynatrace Operator with the AppArmor profile enabled:
  kind: Deployment
  metadata:
    name: dynatrace-webhook
  spec:
    template:
      metadata:
        annotations:
          container.apparmor.security.beta.kubernetes.io/webhook: runtime/default
  ---
  kind: Deployment
  metadata:
    name: dynatrace-operator
  spec:
    template:
      metadata:
        annotations:
          container.apparmor.security.beta.kubernetes.io/dynatrace-operator: runtime/default
Add the following properties to the values.yaml file to deploy ActiveGate and Dynatrace Operator with the AppArmor profile enabled:
operator:
  apparmor: true
webhook:
  apparmor: true
activeGate:
  apparmor: true
High availability mode for Helm deployments
Dynatrace Operator version 0.6+
As part of getting started with Kubernetes monitoring, you may want to configure high availability. When you are finished, you can return to the installation instructions for your kubectl/oc or helm deployment.
Note: For now, this feature is limited to Helm deployments.
The high availability mode offers the following capabilities:
- Increases the webhook deployment to two replicas.
- Adds pod topology spread constraints:
- Pods are spread across different nodes, with the nodes in different zones where possible.
- Multiple pods are allowed in the same zone.
- Adds a pod disruption budget:
  - It restricts graceful shutdowns of the webhook pod if it's the last remaining pod.
To enable this, add the following to values.yaml:
webhook:
  highAvailability: true
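For orientation, a pod topology spread constraint of the kind described above generally has this shape; the values and the label selector are illustrative assumptions, and the manifest actually rendered by the Helm chart may differ:

```yaml
topologySpreadConstraints:
- maxSkew: 1
  topologyKey: topology.kubernetes.io/zone
  whenUnsatisfiable: ScheduleAnyway
  labelSelector:
    matchLabels:
      app.kubernetes.io/name: dynatrace-operator  # assumed label; check the rendered chart
```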
Authenticate ActiveGate to the Dynatrace Cluster optional
Dynatrace Operator version 0.7.0+
Starting with Dynatrace Operator version 0.7.0, you can create an authentication token for your ActiveGate and use it to connect to the Dynatrace Cluster. Adding the annotation below automatically creates an authentication token for the ActiveGate that is rotated by Dynatrace Operator every 30 days. When an authentication token is rotated, the affected ActiveGate is automatically deleted and redeployed.
To create the authentication token:
- Make sure to enable the activeGateTokenManagement.create permission (API v2) for your API token.
- Add the following annotation to the DynaKube custom resource.
  apiVersion: dynatrace.com/v1beta1
  kind: DynaKube
  metadata:
    name: dynakube
    namespace: dynatrace
    annotations:
      feature.dynatrace.com/enable-activegate-authtoken: "true"
To disable the configuration, remove the annotation.
Using priorityClass for critical Dynatrace components
Starting with Dynatrace Operator version 0.8.0, a priorityClass object is created by default when installing Dynatrace Operator. This priority class is initially set to a high value to ensure that the components that use it have a higher priority than other pods, and that critical components like the CSI driver are scheduled by Kubernetes. For details, see the Kubernetes documentation on PriorityClass.
You can change the default value of this parameter according to your environment and the individual use of priority classes within your cluster. Be aware that lowering the default value might impact the scheduling of the pods created by Dynatrace. priorityClass is used on the CSI driver pods by default, but it can also be used on OneAgent pods (see the priorityClassName parameter in DynaKube parameters).
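For reference, a PriorityClass object has the following shape; the name and value here are illustrative, not the defaults created by Dynatrace Operator:

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: dynatrace-high-priority   # illustrative name
value: 1000000                    # illustrative high value
globalDefault: false
description: "Keeps critical pods such as the CSI driver schedulable under pressure."
```

A pod opts in by setting priorityClassName to this name in its spec.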