Deploy OneAgent on Google Kubernetes Engine clusters

Google Kubernetes Engine (GKE) is a managed environment for operating Kubernetes clusters and running containerized workloads at scale.

For full-stack monitoring of Kubernetes clusters, you need to roll out Dynatrace OneAgent to each cluster node using OneAgent Operator, which requires Kubernetes 1.9 or higher.

While full-stack monitoring of Ubuntu-based GKE clusters is fully supported, monitoring of GKE clusters running Container-Optimized OS is currently in the Early Access Program (EAP). This means the following:

  • The solution is still in development.
  • We plan to make it available as Beta and GA in the near-to-mid term, but a specific date isn't defined yet.
  • Deploying the solution in a production environment isn't recommended.
  • It isn't covered by official Dynatrace SLAs.

Please review the limitations section below.

Prepare Dynatrace tokens for OneAgent Operator

OneAgent Operator requires two different tokens for interacting with Dynatrace servers. These two tokens are made available to OneAgent Operator by means of a Kubernetes secret, as explained in a later step.

  1. Get an API token for the Dynatrace API. This token is later referenced as API_TOKEN.
  2. Get a Platform-as-a-Service token. This token is later referenced as PAAS_TOKEN.
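If you script the later steps, you can keep the two tokens in shell variables. The values below are placeholders (assumptions), to be replaced with the tokens generated in your Dynatrace environment:

```shell
# Placeholder values (assumptions) -- substitute the tokens generated
# in your Dynatrace environment before running any later commands.
export API_TOKEN="REPLACE_WITH_API_TOKEN"
export PAAS_TOKEN="REPLACE_WITH_PAAS_TOKEN"

# Later commands can then reference "$API_TOKEN" and "$PAAS_TOKEN".
echo "API token set: ${API_TOKEN:+yes}"    # prints "API token set: yes"
echo "PaaS token set: ${PAAS_TOKEN:+yes}"  # prints "PaaS token set: yes"
```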

Install OneAgent Operator

Create a role binding that grants your GKE user the cluster-admin role; you need it to create the roles required by OneAgent Operator in later steps.

$ kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole=cluster-admin --user=$(gcloud config get-value account)

Create the necessary objects for OneAgent Operator. OneAgent Operator runs in its own namespace, dynatrace, which holds the operator deployment and all dependent objects such as permissions, custom resources, and the corresponding DaemonSet. You can also follow the logs of OneAgent Operator.

$ LATEST_RELEASE=$(curl -s https://api.github.com/repos/Dynatrace/dynatrace-oneagent-operator/releases/latest | grep tag_name | cut -d '"' -f 4)
$ kubectl create -f https://raw.githubusercontent.com/Dynatrace/dynatrace-oneagent-operator/$LATEST_RELEASE/deploy/kubernetes.yaml
$ kubectl -n dynatrace logs -f deployment/dynatrace-oneagent-operator

Create the secret that holds the API and PaaS tokens for authenticating to the Dynatrace cluster. The name of this secret is needed in a later step, when you configure the custom resource (.spec.tokens). In the following code snippet the name is oneagent. Be sure to replace API_TOKEN and PAAS_TOKEN with the token values from above.

$ kubectl -n dynatrace create secret generic oneagent --from-literal="apiToken=API_TOKEN" --from-literal="paasToken=PAAS_TOKEN"

The rollout of Dynatrace OneAgent is governed by a custom resource of type OneAgent. Save the following custom resource snippet to a file named cr.yaml.

Alternatively, you can use the snippet from the GitHub repository.

$ curl -o cr.yaml https://raw.githubusercontent.com/Dynatrace/dynatrace-oneagent-operator/$LATEST_RELEASE/deploy/cr.yaml

Adapt the values of the custom resource as follows:

  • apiUrl: Dynatrace SaaS: replace ENVIRONMENTID with your Dynatrace environment ID in https://ENVIRONMENTID.live.dynatrace.com/api. Dynatrace Managed: provide your Dynatrace Server URL (https://<YourDynatraceServerURL>/e/<ENVIRONMENTID>/api). No default value.
  • tokens: Name of the secret that holds the API and PaaS tokens from above. Defaults to the name of the custom resource if unset.
  • args: Parameters passed to the OneAgent installer. All command-line parameters of the installer are supported, except INSTALL_PATH. We recommend setting APP_LOG_CONTENT_ACCESS=1. Defaults to [].
  • env: Environment variables for the OneAgent container. Defaults to [].
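Putting these parameters together, a minimal cr.yaml might look like the following sketch. The apiVersion and kind shown here reflect the operator's OneAgent custom resource as published in its repository; the environment ID and the secret name oneagent are placeholders to replace with your own values.

```yaml
# Minimal sketch of a OneAgent custom resource (placeholder values).
apiVersion: dynatrace.com/v1alpha1
kind: OneAgent
metadata:
  # Name of the custom resource; also the default for .spec.tokens if unset.
  name: oneagent
  namespace: dynatrace
spec:
  # Dynatrace SaaS endpoint; replace ENVIRONMENTID with your environment ID.
  apiUrl: https://ENVIRONMENTID.live.dynatrace.com/api
  # Name of the secret created earlier that holds the API and PaaS tokens.
  tokens: "oneagent"
  # Installer parameters; APP_LOG_CONTENT_ACCESS=1 is recommended above.
  args:
    - APP_LOG_CONTENT_ACCESS=1
  # Environment variables for the OneAgent container.
  env: []
```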

If you want to participate in the Early Access Program and roll out Dynatrace OneAgent to GKE clusters running Container-Optimized OS, please be aware of the limitations and risks explained above. You'll need to add the following entry to the env section of the custom resource.
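As a sketch of that entry: to the best of my knowledge, the switch used by operator versions of this period was the ONEAGENT_ENABLE_VOLUME_STORAGE environment variable. Treat the variable name as an assumption and confirm it against the current Dynatrace documentation before relying on it.

```yaml
# Assumed env entry for Container-Optimized OS (verify the variable name
# against the current Dynatrace documentation).
env:
  - name: ONEAGENT_ENABLE_VOLUME_STORAGE
    value: "true"
```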

Create the custom resource.

$ kubectl create -f cr.yaml


The same limitations apply as when deploying OneAgent as a Docker container, except for auto-updates: the operator ensures that OneAgent instances are updated properly.

Limitations for Container-Optimized OS-based GKE clusters

  • Disks aren't detected properly, and therefore disk metrics aren't collected properly.
  • Only the local Docker volume driver has been tested and is supported.