Monitor OpenShift workloads and cloud applications


Note: Workload and cloud application support is available as part of an Early Adopter release. Workload metrics ingested into Dynatrace are subject to custom-metric licensing but are free of charge during the Early Adopter release phase.

Enable workloads integration

You can enable workload and cloud application support on the Kubernetes settings page where you set up Kubernetes API monitoring for your clusters. For details, see Monitor your OpenShift clusters with Dynatrace.

Note: Make sure the service account has the permissions required to access the necessary API endpoints. The service account provided in the instructions already covers these permissions.
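For illustration, a read-only ClusterRole along the following lines would grant a service account access to cluster and workload endpoints. The exact resources and verbs required depend on your Dynatrace version, so treat the names below as an assumed sketch and follow the official instructions for the real rule set.

```yaml
# Illustrative sketch only: resource and verb lists are assumptions;
# use the service account definition from the official instructions.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: dynatrace-monitoring   # assumed name
rules:
  - apiGroups: [""]
    resources: ["nodes", "pods", "namespaces", "replicationcontrollers"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apps"]
    resources: ["deployments", "replicasets", "daemonsets", "statefulsets"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apps.openshift.io"]
    resources: ["deploymentconfigs"]
    verbs: ["get", "list", "watch"]
```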

Get an instant overview of your OpenShift environment

You can see at a glance how many cluster resources are allocated to the workloads running on the cluster.

Learn about your workloads using cloud application view

Taking a closer look at the applications deployed in one of the namespaces, you can learn about their most important resource usage metrics. The cloud applications view covers workloads such as Deployment, DeploymentConfig, ReplicaSet, DaemonSet, StatefulSet, and StaticPod.

The CPU throttling metric tells you how long the application was throttled, so you can determine where more CPU time would have been needed for processing. Throttling usually occurs when the CPU limits defined for the containers in the workload definition are too low, which can degrade the performance of the processes and applications running inside the containers.
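CPU requests and limits are set per container in the workload definition; if the limit is set lower than the application actually needs, the container is throttled. A minimal illustrative Deployment fragment (names, image, and values are examples, not part of this article's environment):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: paymentservice-v1          # example workload name
spec:
  selector:
    matchLabels:
      app: paymentservice
  template:
    metadata:
      labels:
        app: paymentservice
    spec:
      containers:
        - name: paymentservice
          image: example/paymentservice:1.0   # illustrative image
          resources:
            requests:
              cpu: 250m            # CPU share the scheduler guarantees
            limits:
              cpu: 500m            # throttling begins when usage hits this cap
```

Raising the `cpu` limit (or removing an unrealistically tight one) is the usual remedy when the throttling metric stays consistently high.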

You can also see the number of running pods versus desired pods for every cloud application.


Get deep visibility into your OpenShift pods and containers

In complex environments with many interdependent microservices and applications, you can find out whether and where anomalous behavior has occurred.

In the example below, the paymentservice-v1 application consists of more technologies than expected.

OneAgent automatically monitors all the technologies and microservices that run on your cluster. You can easily automate OneAgent roll-out with a Helm chart or directly through the OneAgent Operator.
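As a rough sketch, a Helm-based roll-out boils down to installing the chart with your environment's connection details in a values file. The key names below are illustrative assumptions, not the chart's actual schema; consult the chart's own documentation for the real parameters.

```yaml
# Hypothetical values.yaml for a OneAgent Helm roll-out.
# All keys and placeholders below are illustrative assumptions.
platform: openshift
oneagent:
  apiUrl: https://<your-environment-id>.live.dynatrace.com/api
secret:
  apiToken: <API_TOKEN>
  paasToken: <PAAS_TOKEN>
```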

Find out if your applications are getting enough CPU resources

In addition to its auto-discovery and auto-tracing capabilities, OneAgent captures low-level container metrics that reflect the effect of container resource limits.
Generic resource metrics for all supported container runtimes on Linux are available in custom charting, grouped under Containers > CPU and Containers > Memory. Metrics for the number of running and desired pods are also available under the Cloud Platform section.


The CPU throttled time and memory usage percentage show whether the resource limits in the OpenShift pod specs are set correctly. If memory usage reaches 100%, containers or applications crash with out-of-memory errors and must be restarted.
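The 100% mark corresponds to the container's memory limit: once usage reaches it, the kernel OOM-kills the container and Kubernetes restarts it. A minimal illustrative container fragment (names and values are examples):

```yaml
# Illustrative pod-spec fragment; names and values are examples.
containers:
  - name: paymentservice
    image: example/paymentservice:1.0
    resources:
      requests:
        memory: 256Mi   # amount the scheduler reserves on the node
      limits:
        memory: 512Mi   # container is OOM-killed if usage reaches this
```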

Fine-grained control of visibility into namespaces and cloud applications via management zones

You can use management zones to control user access to the monitoring data of specific Kubernetes objects in your environment. For example, you can restrict access to particular cloud applications and namespaces to certain user groups. With this approach, you control user access to specific Dynatrace Kubernetes pages, custom charts, and dashboards.