General troubleshooting

This guide provides general troubleshooting steps and guidance for common issues encountered when using Dynatrace with Kubernetes. It covers how to use the troubleshoot subcommand, access debug logs, and generate a support archive.

Troubleshoot common Dynatrace Operator setup issues using the troubleshoot subcommand

Dynatrace Operator version 0.9.0+

Run the command below to retrieve basic output on the DynaKube status, covering the following checks:

  • Namespace: If the dynatrace namespace exists (name can be overridden via parameter)

  • DynaKube:

    • If CustomResourceDefinition exists
    • If CustomResource with the given name exists (name can be overridden via parameter)
    • If the API URL ends with /api
    • If the secret name is the same as DynaKube (or .spec.tokens if used)
    • If the secret has apiToken and paasToken set
    • If the secret for customPullSecret is defined
  • Environment: If your environment is reachable from the Dynatrace Operator pod using the same parameters as the Dynatrace Operator binary (such as proxy and certificate).

  • OneAgent and ActiveGate image: If the registry is accessible and the images can be pulled from the Dynatrace Operator pod, using the registry from the environment and the (custom) pull secret.

bash
kubectl exec deploy/dynatrace-operator -n dynatrace -- dynatrace-operator troubleshoot

If you use a different DynaKube name, add the --dynakube <your_dynakube_name> argument to the command.
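
For example, for a DynaKube named mycluster (a hypothetical name) in the default dynatrace namespace, the call looks like this:

bash
kubectl exec deploy/dynatrace-operator -n dynatrace -- dynatrace-operator troubleshoot --dynakube mycluster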

Example output if there are no errors for the above-mentioned fields:

bash
{"level":"info","ts":"2022-09-12T08:45:21.437Z","logger":"dynatrace-operator-version","msg":"dynatrace-operator","version":"<operator version>","gitCommit":"<commithash>","buildDate":"<release date>","goVersion":"<go version>","platform":"<platform>"} [namespace ] --- checking if namespace 'dynatrace' exists ... [namespace ] √ using namespace 'dynatrace' [dynakube ] --- checking if 'dynatrace:dynakube' Dynakube is configured correctly [dynakube ] CRD for Dynakube exists [dynakube ] using 'dynatrace:dynakube' Dynakube [dynakube ] checking if api url is valid [dynakube ] api url is valid [dynakube ] checking if secret is valid [dynakube ] 'dynatrace:dynakube' secret exists [dynakube ] secret token 'apiToken' exists [dynakube ] customPullSecret not used [dynakube ] pull secret 'dynatrace:dynakube-pull-secret' exists [dynakube ] secret token '.dockerconfigjson' exists [dynakube ] proxy secret not used [dynakube ] √ 'dynatrace:dynakube' Dynakube is valid [dtcluster ] --- checking if tenant is accessible ... [dtcluster ] √ tenant is accessible

Debug logs

By default, OneAgent logs are located in /var/log/dynatrace/oneagent.
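
These logs live on the node's filesystem. One way to inspect them without SSH access to the node is a node debug pod, which mounts the host filesystem under /host; in the minimal sketch below, the node name and the busybox image are placeholders:

bash
kubectl debug node/<node-name> -it --image=busybox -- ls -l /host/var/log/dynatrace/oneagent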

To debug Dynatrace Operator issues, run one of the following commands.

bash
kubectl -n dynatrace logs -f deployment/dynatrace-operator
bash
oc -n dynatrace logs -f deployment/dynatrace-operator

You might also want to check the logs from OneAgent pods deployed through Dynatrace Operator.

bash
kubectl get pods -n dynatrace
NAME                                  READY   STATUS    RESTARTS   AGE
dynatrace-operator-64865586d4-nk5ng   1/1     Running   0          1d
dynakube-oneagent-<id>                1/1     Running   0          22h
bash
kubectl logs dynakube-oneagent-<id> -n dynatrace
bash
oc get pods -n dynatrace
NAME                                  READY   STATUS    RESTARTS   AGE
dynatrace-operator-64865586d4-nk5ng   1/1     Running   0          1d
dynakube-classic-8r2kq                1/1     Running   0          22h
bash
oc logs dynakube-classic-8r2kq -n dynatrace
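
Alternatively, you can read the logs through the DaemonSet resource instead of looking up an individual pod name; kubectl then picks one of the DaemonSet's pods. This sketch assumes the DynaKube is named dynakube, so the OneAgent DaemonSet is dynakube-oneagent (as in the manifests shown in the support archive sample output below):

bash
kubectl -n dynatrace logs -f daemonset/dynakube-oneagent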

Generate a support archive using the support-archive subcommand

Dynatrace Operator version 0.11.0+

Use the support-archive subcommand to generate a support archive containing all the files that are potentially useful for RFA analysis:

  • operator-version.txt—a file containing the current Operator version information
  • logs—logs from all containers of the Dynatrace Operator pods in the Dynatrace Operator namespace (usually dynatrace); this also includes logs of previous containers, if available:
    • dynatrace-operator
    • dynatrace-webhook
    • dynatrace-oneagent-csi-driver
  • manifests—the Kubernetes manifests for Dynatrace Operator components and deployed DynaKubes in the Dynatrace Operator namespace
  • troubleshoot.txt—output of a troubleshooting command that is automatically executed by the support-archive subcommand
  • supportarchive_console.log—complete output of the support-archive subcommand

Usage

To create a support archive, execute the following command.

bash
kubectl exec -n dynatrace deployment/dynatrace-operator -- dynatrace-operator support-archive

The collected files are stored in a gzipped tarball and can be downloaded from the pod using the kubectl cp command.

bash
kubectl -n dynatrace cp <operator pod name>:/tmp/dynatrace-operator/operator-support-archive.tgz ./tmp/dynatrace-operator/operator-support-archive.tgz
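
If you don't know the Operator pod name to substitute into the command above, you can look it up by filtering the pod list; this sketch uses a plain grep rather than relying on specific labels:

bash
kubectl get pods -n dynatrace | grep dynatrace-operator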

The recommended approach is to use the --stdout command-line switch to stream the tarball directly to your disk.

bash
kubectl exec -n dynatrace deployment/dynatrace-operator -- dynatrace-operator support-archive --stdout > operator-support-archive.tgz

If you use the --stdout parameter, all output of the support-archive command is written to stderr so that it doesn't corrupt the support archive tar file.
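
To check that the download succeeded and see what was collected before sending the archive to Dynatrace support, you can list the contents of the tarball with standard tar tooling:

bash
tar -tzf operator-support-archive.tgz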

Sample output

The following is sample output from running support-archive with the --stdout parameter.

bash
kubectl exec -n dynatrace deployment/dynatrace-operator -- dynatrace-operator support-archive --stdout > operator-support-archive.tgz
plaintext
[support-archive] dynatrace-operator {"version": "v0.11.0", "gitCommit": "...", "buildDate": "...", "goVersion": "...", "platform": "linux/amd64"}
[support-archive] Storing operator version into operator-version.txt
[support-archive] Starting log collection
[support-archive] Successfully collected logs logs/dynatrace-oneagent-csi-driver-bdnpc/server.log
[support-archive] Successfully collected logs logs/dynatrace-oneagent-csi-driver-bdnpc/provisioner.log
[support-archive] Successfully collected logs logs/dynatrace-oneagent-csi-driver-bdnpc/registrar.log
[support-archive] Successfully collected logs logs/dynatrace-oneagent-csi-driver-bdnpc/liveness-probe.log
[support-archive] Successfully collected logs logs/dynatrace-oneagent-csi-driver-cb4pc/server.log
[support-archive] Successfully collected logs logs/dynatrace-oneagent-csi-driver-cb4pc/provisioner.log
[support-archive] Successfully collected logs logs/dynatrace-oneagent-csi-driver-cb4pc/registrar.log
[support-archive] Successfully collected logs logs/dynatrace-oneagent-csi-driver-cb4pc/liveness-probe.log
[support-archive] Successfully collected logs logs/dynatrace-oneagent-csi-driver-k8bl5/server.log
[support-archive] Successfully collected logs logs/dynatrace-oneagent-csi-driver-k8bl5/provisioner.log
[support-archive] Successfully collected logs logs/dynatrace-oneagent-csi-driver-k8bl5/registrar.log
[support-archive] Successfully collected logs logs/dynatrace-oneagent-csi-driver-k8bl5/liveness-probe.log
[support-archive] Successfully collected logs logs/dynatrace-operator-6d9fd9b9fc-sw5ll/dynatrace-operator.log
[support-archive] Successfully collected logs logs/dynatrace-webhook-7d84599455-bfkmp/webhook.log
[support-archive] Successfully collected logs logs/dynatrace-webhook-7d84599455-vhkrh/webhook.log
[support-archive] Starting K8S object collection
[support-archive] Collected manifest for manifests/injected_namespaces/Namespace-default.yaml
[support-archive] Collected manifest for manifests/dynatrace/Namespace-dynatrace.yaml
[support-archive] Collected manifest for manifests/dynatrace/Deployment-dynatrace-operator.yaml
[support-archive] Collected manifest for manifests/dynatrace/Deployment-dynatrace-webhook.yaml
[support-archive] Collected manifest for manifests/dynatrace/StatefulSet-dynakube-activegate.yaml
[support-archive] Collected manifest for manifests/dynatrace/DaemonSet-dynakube-oneagent.yaml
[support-archive] Collected manifest for manifests/dynatrace/DaemonSet-dynatrace-oneagent-csi-driver.yaml
[support-archive] Collected manifest for manifests/dynatrace/DynaKube-dynakube.yaml

Debug configuration and monitoring issues using the Kubernetes Monitoring Statistics extension

The Kubernetes Monitoring Statistics extension can help you:

  • Troubleshoot your Kubernetes monitoring setup
  • Troubleshoot your Prometheus integration setup
  • Get detailed insights into queries from Dynatrace to the Kubernetes API
  • Receive alerts when your Kubernetes monitoring setup experiences issues
  • Get alerted on slow response times of your Kubernetes API

Potential issues when changing the monitoring mode

  • Changing the monitoring mode from classicFullStack to cloudNativeFullStack affects the host ID calculation for monitored hosts, leading to new IDs being assigned and no connection between the old and new entities.
  • If you want to change the monitoring mode from applicationMonitoring or cloudNativeFullStack to classicFullStack or hostMonitoring, you need to restart all pods that were previously instrumented with applicationMonitoring or cloudNativeFullStack (see the sketch below).
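
A common way to restart the previously instrumented pods is to trigger a rolling restart of their workloads. The sketch below assumes the application runs as a Deployment; the namespace my-app and the Deployment name are placeholders:

bash
kubectl -n my-app rollout restart deployment <deployment-name>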