Get deep Kubernetes observability with the new cloud application view (Early Adopter)

Explore your Kubernetes workloads and get deep visibility into your pods' containers and processes with the new cloud application view in Dynatrace.


Kubernetes is the platform of choice these days when it comes to automating and managing containerized workloads and applications. Projects that are built on top of Kubernetes (for example, Keptn, Argo, and Istio) add an extra layer of abstraction and convenience to support teams in their daily work of running and managing the life cycle of applications. However, at the end of the day, application and platform teams have two primary questions:

  • What applications and workloads are running in my Kubernetes environments?
  • Are the applications and workloads running in my Kubernetes environments performing well and properly providing value to customers?

To allow you to easily inspect and understand the workloads and applications you run on Kubernetes, Dynatrace is happy to introduce a new cloud application view for stateless and stateful workloads in Kubernetes.

Get an instant overview of your K8s environment

Dynatrace not only gives you a great overview of your entire Kubernetes cluster but also provides deep visibility into the workloads that are running in your environment. So you can easily see how much of a cluster's resources has been allocated to the workloads running on it.

Looking at the example below (a rather small environment), you can see that:

  • There are 35 Deployments and 3 DaemonSets in 8 Namespaces on 4 Cluster nodes.
  • In total, there are 59 running pods, and there are still 22 GB of memory and 3.68 CPU cores available on the cluster for more pods.
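
If you'd like to cross-check such numbers on the command line, standard kubectl queries give a comparable (if less convenient) view. These are plain Kubernetes commands, not Dynatrace-specific, and assume kubectl is configured against your cluster:

```shell
kubectl get deployments --all-namespaces --no-headers | wc -l   # count of Deployments
kubectl get daemonsets --all-namespaces --no-headers | wc -l    # count of DaemonSets
kubectl get namespaces --no-headers | wc -l                     # count of Namespaces
kubectl get nodes --no-headers | wc -l                          # count of cluster nodes
kubectl get pods --all-namespaces --no-headers \
  --field-selector=status.phase=Running | wc -l                 # count of running pods
```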

Kubernetes monitoring with cluster and workload insights

Better understand your Kubernetes workloads with the new cloud application view

If you take a closer look at the applications deployed in one of the namespaces (hipster-shop, a slightly adapted version of the Hipster Shop from Google’s microservices repository), you can learn about the most important resource usage metrics that Dynatrace provides. The new cloud application view covers workloads like Deployment, DeploymentConfig, ReplicaSet, DaemonSet, StatefulSet, StaticPod, and the rarely used ReplicationController.

The CPU throttling metric tells you how long the application was throttled, which indicates that the application may have needed more CPU time for processing than was available. This usually occurs when containers aren’t given adequate CPU resources (limits) in the workload specification, which can affect the performance of the processes and applications running inside.
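
For reference, CPU and memory requests and limits are set per container in the workload specification. A minimal sketch (all names and values here are illustrative, not taken from the example environment):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: paymentservice-v1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: paymentservice
  template:
    metadata:
      labels:
        app: paymentservice
    spec:
      containers:
        - name: server
          image: paymentservice:v1   # illustrative image name
          resources:
            requests:
              cpu: 100m              # guaranteed to the container by the scheduler
              memory: 128Mi
            limits:
              cpu: 250m              # beyond this, the container is CPU-throttled
              memory: 256Mi          # beyond this, the container is OOM-killed
```

Setting the CPU limit too low relative to actual demand is what drives the CPU throttling metric up.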

The cloud application view also shows you the number of running versus desired pods for each cloud application. In the future, we’ll extend this with information about failed and pending pods. However, the recently announced support for Kubernetes events can already give you, for example, information about why pods are in a pending state.
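
Outside of Dynatrace, the raw Kubernetes events behind a pending pod can also be inspected with kubectl (the pod name below is illustrative):

```shell
# Events for a specific pod are listed at the bottom of the describe output
kubectl describe pod paymentservice-v1-abc123 -n hipster-shop

# Or filter namespace events for scheduling failures directly
kubectl get events -n hipster-shop --field-selector reason=FailedScheduling
```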

List of workloads and cloud applications for K8s

Get deep visibility into your Kubernetes pods and containers

Especially for complex environments with many microservices and applications that depend on each other, you want to know if and where things deviate from the norm.

In our scenario, the paymentservice-v1 consists of far more technologies than one would initially expect. There’s a Node.js process that runs the main application in a container called server. This is the most important process because it serves the payment microservice. If you take a closer look, you’ll see that there are many more technologies involved here beyond just the main Node.js application.

  • As we run hipster-shop in an Istio service mesh, there’s also an Envoy proxy running next to the payment service in an istio-proxy container.
  • The Envoy proxy is deployed through Istio and is managed by the Istio pilot-agent, which also runs in the istio-proxy container.
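
You can verify which containers actually run inside such a pod with a jsonpath query (the pod name is illustrative):

```shell
kubectl get pod paymentservice-v1-abc123 -n hipster-shop \
  -o jsonpath='{.spec.containers[*].name}'
# for a meshed pod, this typically prints something like: server istio-proxy
```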

Technologies and processes running in Kubernetes workloads

To proactively optimize your environment, you first need to know what’s running inside your cloud applications and how. For this, OneAgent automatically monitors all the technologies and microservices that run on your cluster. You can easily automate OneAgent rollout using a Helm chart or directly using OneAgent Operator.
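
A Helm-based rollout looks roughly like the sketch below. The repository URL, chart name, and parameter names here are illustrative; consult Dynatrace Help for the exact, current values for your environment:

```shell
# Add the Dynatrace Helm repository (URL is illustrative)
helm repo add dynatrace https://raw.githubusercontent.com/Dynatrace/helm-charts/master/repos/stable

# Install the OneAgent Operator chart (all values below are placeholders)
helm install dynatrace-oneagent-operator dynatrace/dynatrace-oneagent-operator \
  --namespace dynatrace \
  --set platform="kubernetes" \
  --set oneagent.apiUrl="https://<environment-id>.live.dynatrace.com/api"
```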

Find out if your applications have adequate CPU resources

In addition to its auto-discovery and auto-tracing capabilities, OneAgent also captures low-level container metrics so you can understand the impact of container resource limits. For this, we introduced a new set of generic resource metrics for all supported container runtimes on Linux. All the newly introduced container metrics are available in custom charting and are grouped in the Containers > CPU and Containers > Memory categories. The Cloud platform category also has the new metrics Running pods and Desired pods.

Kubernetes dashboard with container-level and pod-level metrics

The CPU throttled containers tile (based on the CPU throttled time metric) and the Memory usage containers tile (based on the Memory usage % of limit metric) show you if the resource limits in the Kubernetes pod specifications are set correctly. If memory usage reaches 100%, containers and applications crash with out-of-memory (OOM) errors and then need to be restarted. The Memory usage paymentservice-v1 tile (bottom right on the dashboard) shows a typical memory-leak pattern: periods of increasing memory usage followed by steep drops each time the container crashes.
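
The "% of limit" arithmetic behind that tile is straightforward. The sketch below is illustrative only; the function and threshold names are assumptions, not Dynatrace APIs:

```python
def memory_usage_percent_of_limit(usage_bytes: int, limit_bytes: int) -> float:
    """Return container memory usage as a percentage of its configured limit."""
    if limit_bytes <= 0:
        raise ValueError("container has no memory limit set")
    return 100.0 * usage_bytes / limit_bytes


def oom_risk(usage_bytes: int, limit_bytes: int, warn_at: float = 90.0) -> bool:
    """Flag containers close to their limit; at 100% the kernel OOM-kills them."""
    return memory_usage_percent_of_limit(usage_bytes, limit_bytes) >= warn_at


# Example: a container using 480 MiB of a 512 MiB limit is at 93.75% --
# close enough to the limit that an OOM kill is likely on the next growth spurt.
usage = 480 * 1024 * 1024
limit = 512 * 1024 * 1024
print(round(memory_usage_percent_of_limit(usage, limit), 2))  # 93.75
print(oom_risk(usage, limit))  # True
```

In the memory-leak pattern described above, this percentage climbs steadily toward 100 and then resets after each crash and restart.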

Fine-grained control for visibility into namespaces and cloud applications via management zones

To help you control the visibility of Kubernetes objects for your users, we’ve also adapted management zones for the Kubernetes workloads view and the cloud applications page. As an example, you can limit access to cloud applications in a certain namespace to a specific user group.

To do this, you can now add rules for cloud applications and namespaces when you define a management zone. Kubernetes pages, custom charting, and your dashboards can all be filtered based on management zones.

Flexible control of visibility into Kubernetes workloads with Dynatrace

How to get started

You can enable the new workload and cloud application support on the Kubernetes settings page where you set up Kubernetes API monitoring for your clusters.

Note: Ensure that the respective service account has permissions to access the required API endpoints. For details, see Dynatrace Help.
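
The exact endpoints are documented in Dynatrace Help; conceptually, the service account needs read access to workload objects. An illustrative RBAC sketch (the role name and resource list are assumptions, not the official definition):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: dynatrace-monitoring   # illustrative name
rules:
  - apiGroups: ["", "apps"]    # "" covers core resources such as pods and namespaces
    resources:
      - pods
      - namespaces
      - deployments
      - replicasets
      - daemonsets
      - statefulsets
    verbs: ["get", "list", "watch"]
```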

The new support requires:

  • ActiveGate version 1.189+
  • OneAgent version 1.189+
  • Latest OneAgent image from Docker Hub with tag 1.38.1000+
  • Show workloads and cloud applications toggle enabled on the Kubernetes settings page

Note: Kubernetes workload and cloud application support is available as an Early Adopter release. Workload metrics ingested into Dynatrace are subject to custom metric licensing and are free of charge during the Early Adopter release phase.

What’s next

The Dynatrace roadmap for Kubernetes support is packed. We’re already working on the following enhancements:

  • Namespace resource quotas for workloads
  • Enhanced integration of pod and namespace labels
  • Reporting of pod phases like “pending” and “failed”
  • Ecosystem metrics from Prometheus exporters in Kubernetes

Note: As we move forward with our Kubernetes support, we plan to eventually deprecate the existing set of Docker-only container metrics.

There’s a lot more to come in Dynatrace Kubernetes support, and we’d like to include you in the process of prioritizing upcoming features. Please check out the planned enhancements on Dynatrace answers and share your feedback with us. We look forward to hearing from you.