- Latest OneAgent image from Docker Hub with tag 1.38.1000+
- In your Dynatrace environment, go to Settings > Cloud and virtualization > Kubernetes and turn on "Enable monitoring" and "Show workloads and cloud applications".
Workload and cloud application support is available as an Early Adopter release. Workload metrics ingested into Dynatrace are subject to custom-metric licensing but are free of charge during the Early Adopter release phase.
Get an instant overview of your Kubernetes environment
Once you enable workload and cloud application support, you can easily see how much of the cluster's resources has been allocated to the workloads running on it.
Learn about your Kubernetes workloads with the cloud application view
Taking a closer look at the applications deployed in one of the namespaces, you can learn about their most important resource usage metrics, which the cloud application view surfaces per workload.
The CPU throttling metric tells you how long the application was throttled, so you can determine where more CPU time would have been needed for processing. Throttling usually happens when the CPU limits defined for the containers in the workload are set too low, which can degrade the performance of the processes and applications running inside those containers.
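CPU requests and limits are set per container in the workload definition. The sketch below shows where they live in a Deployment spec; the workload name, image, and values are illustrative, not taken from the article:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: paymentservice-v1            # illustrative workload name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: paymentservice
  template:
    metadata:
      labels:
        app: paymentservice
    spec:
      containers:
      - name: paymentservice
        image: example.com/paymentservice:1.0   # placeholder image
        resources:
          requests:
            cpu: "250m"    # CPU share the scheduler guarantees
          limits:
            cpu: "500m"    # above this, the container is throttled
```

If the CPU throttling metric stays high, raising the `cpu` limit (or removing it) is the usual remedy.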
You can also see the number of running pods versus desired pods for every cloud application.
Get deep visibility into your Kubernetes pods and containers
Especially in complex environments with many interdependent microservices and applications, you can find out if and where anomalous behavior has occurred.
In the example below, the paymentservice-v1 workload consists of more technologies than expected.
OneAgent automatically monitors all the technologies and microservices that run on your cluster. You can easily automate OneAgent rollout with Dynatrace Operator.
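With Dynatrace Operator, the rollout is driven by a DynaKube custom resource. The fragment below is a minimal sketch; the exact API version and field names depend on your operator release, and the environment URL is a placeholder you must replace:

```yaml
apiVersion: dynatrace.com/v1beta1    # assumption: may differ by operator release
kind: DynaKube
metadata:
  name: dynakube
  namespace: dynatrace
spec:
  # Replace with your own environment's API endpoint
  apiUrl: https://<ENVIRONMENT_ID>.live.dynatrace.com/api
  oneAgent:
    classicFullStack: {}             # roll OneAgent out to every cluster node
```

Applying this resource lets the operator deploy and update OneAgent on all nodes without any manual per-node installation.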
Find out if your applications are getting enough CPU resources
In addition to the auto-discovery and auto-tracing capabilities, OneAgent captures low-level container metrics to reflect the effect of container resource limits.
Generic resource metrics for all supported container runtimes on Linux are available in custom charting and grouped in Containers > CPU and Containers > Memory.
Metrics for the number of running and desired pods are also available under the Cloud Platform section.
The CPU throttled time and memory usage percentage show whether the resource limits in the Kubernetes pod specs are set correctly. If memory usage reaches 100% of the limit, containers or applications crash (out of memory) and need to be restarted.
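Unlike a CPU limit, which only throttles, a memory limit is enforced by killing the container. A hypothetical sketch (pod name, image, and values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: checkout-example             # hypothetical pod for illustration
spec:
  containers:
  - name: app
    image: example.com/checkout:1.0  # placeholder image
    resources:
      requests:
        memory: "256Mi"   # amount the scheduler reserves on the node
      limits:
        memory: "512Mi"   # exceeding this gets the container OOM-killed and restarted
```

Watching the memory usage percentage against the limit lets you raise it before the container is OOM-killed in production.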
Fine-grained control of visibility into namespaces and cloud applications via management zones
You can use management zones to control user access to the monitoring data of specific Kubernetes objects in your environment. For example, you can restrict individual cloud applications and namespaces to certain user groups. With this approach, you control which Dynatrace Kubernetes pages, custom charts, and dashboards each group can see.