Docker containerization is increasingly being used in production environments. How can these environments best be monitored? Monitoring Docker containers as if they were lightweight virtual machines (that is, capturing the usual operating-system metrics from within each container) is an insufficient approach. Docker containers can’t be treated as lightweight virtual machines; they must be treated as what they are: isolated processes running on hosts. Why? Because containers are processes that start and terminate quickly, whereas virtual machines aren’t designed to run for only a short time and then be terminated. Likewise, a process typically serves one specific task, while a virtual machine typically serves many.
Utilize Docker’s Remote API
Monitoring an environment at the container level is a great first step towards understanding the dynamics of containers in your environment. Many tools use the Docker Remote API to capture host resource consumption metrics related to CPU, memory, and network IO for each container. This is valuable information that operators can use when allocating host resources to containers.
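As a sketch of how such per-container metrics are derived: the stats endpoint of the Docker Remote API (GET /containers/{id}/stats) returns nested CPU counters for the current and previous sample, and the CPU percentage is computed from their deltas. The helper below assumes the payload shape documented for that endpoint; the sample numbers are made up.

```python
def cpu_percent(stats):
    """Compute container CPU usage (%) from a Docker stats API payload.

    Uses the deltas between the current sample ("cpu_stats") and the
    previous one ("precpu_stats") that the stats endpoint embeds.
    """
    cpu = stats["cpu_stats"]
    precpu = stats["precpu_stats"]
    cpu_delta = cpu["cpu_usage"]["total_usage"] - precpu["cpu_usage"]["total_usage"]
    system_delta = cpu["system_cpu_usage"] - precpu["system_cpu_usage"]
    online_cpus = cpu.get("online_cpus", 1)
    if system_delta <= 0 or cpu_delta < 0:
        return 0.0
    return (cpu_delta / system_delta) * online_cpus * 100.0

# Illustrative sample payload (counters are in nanoseconds of CPU time):
sample = {
    "cpu_stats": {"cpu_usage": {"total_usage": 400_000_000},
                  "system_cpu_usage": 2_000_000_000, "online_cpus": 2},
    "precpu_stats": {"cpu_usage": {"total_usage": 200_000_000},
                     "system_cpu_usage": 1_000_000_000},
}
print(round(cpu_percent(sample), 1))  # 40.0
```

Memory and network IO come from the same payload ("memory_stats" and "networks"), so one stats call per container covers the core resource metrics.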
Details of container dynamics in an environment can be captured by querying the Docker API of all Docker engines. For example, you can learn which hosts run containers that use a specific image. With the current move towards microservices, this becomes more important as Docker images are built for each service. You need to know on which machines the containers for a specific service are running.
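As an illustrative sketch (not any particular tool's implementation): an inventory assembled by listing containers on every engine can be inverted into an image-to-hosts index. The host and image names below are invented for the example.

```python
from collections import defaultdict

def hosts_by_image(inventory):
    """Map each image to the set of hosts running containers from it.

    `inventory` is a list of dicts like {"host": ..., "image": ...},
    e.g. assembled by querying GET /containers/json on every engine.
    The record shape is an assumption for illustration.
    """
    index = defaultdict(set)
    for container in inventory:
        index[container["image"]].add(container["host"])
    return index

inventory = [
    {"host": "node-1", "image": "shop/cart:1.4"},
    {"host": "node-2", "image": "shop/cart:1.4"},
    {"host": "node-2", "image": "shop/catalog:2.0"},
]
print(sorted(hosts_by_image(inventory)["shop/cart:1.4"]))  # ['node-1', 'node-2']
```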
Docker containers and orchestration technologies like Docker Swarm, Mesos/Marathon, and Kubernetes offer means of deploying, running, and scaling applications and microservices. The whole Docker ecosystem is a fantastic enabler for running microservices in dynamic cloud-based environments.
But how can you know whether the services you’ve deployed are healthy and working as designed? This is where application performance management comes into play.
It’s what’s running inside that counts
When it comes to application monitoring, you’re mostly interested in the services running inside containers rather than the containers themselves. You need application-centric information to ensure that the applications served by your containers are running as expected. You need CPU-time breakdowns for your application at the method level. You also need to inspect database queries, measure throughput and response times for services, and track communication between microservices across containers and hosts.
Monitoring microservices within containers
If you need to run your services at scale, Docker containers and orchestration tools are an ideal approach. Whether your services are stateless or stateful, properly configured load balancers send traffic to the respective containers.
To monitor the health of your application’s services, you need intuitive infographics that show the most important metrics for each service. With this approach you can track throughput, average response time, failure rate, and the most time-consuming requests processed by all containers of each service.
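As a minimal sketch of how such per-service metrics can be derived from raw request records (the record shape is an assumption for illustration, not any product's format):

```python
def service_summary(requests, window_seconds, top_n=3):
    """Summarize health metrics per service from request records.

    Each record is a dict like {"service", "duration_ms", "ok"} —
    a hypothetical shape chosen for this sketch.
    """
    by_service = {}
    for r in requests:
        by_service.setdefault(r["service"], []).append(r)
    summary = {}
    for service, recs in by_service.items():
        durations = sorted((r["duration_ms"] for r in recs), reverse=True)
        failures = sum(1 for r in recs if not r["ok"])
        summary[service] = {
            "throughput_rps": len(recs) / window_seconds,
            "avg_response_ms": sum(durations) / len(durations),
            "failure_rate": failures / len(recs),
            "slowest_ms": durations[:top_n],  # most time-consuming requests
        }
    return summary

requests = [
    {"service": "cart", "duration_ms": 40, "ok": True},
    {"service": "cart", "duration_ms": 200, "ok": False},
    {"service": "cart", "duration_ms": 60, "ok": True},
]
print(service_summary(requests, window_seconds=60))
```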
If you need deep insights about a specific condition, select a time frame and analyze the metrics from that period in detail.
Find performance hotspots at the method level
Deep application performance analysis includes the ability to identify hotspots that contribute to the response time of a request. This enables you to pinpoint the service methods that consume the most CPU, disk, or network time for each request. In our example below, you can see the method that consumes the most CPU time for a Java service running in Docker containers.
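For a self-contained illustration of what a method-level CPU breakdown looks like, Python's standard-library cProfile produces a comparable report, with the hottest methods sorted to the top. `hot_method` is a made-up stand-in workload, not part of the example in the post.

```python
import cProfile
import io
import pstats

def hot_method():
    # Stand-in for a service method that burns CPU time.
    return sum(i * i for i in range(200_000))

profiler = cProfile.Profile()
profiler.enable()
hot_method()
profiler.disable()

# Sort by total time spent inside each function to surface hotspots.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("tottime").print_stats(10)
report = out.getvalue()
print("hot_method" in report)  # True: the hotspot shows up in the report
```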
Measure database query execution times and frequencies
Analyzing queries to and responses from databases is an essential aspect of performance tuning and therefore a core feature of application monitoring. This also holds true for applications that run in containers, whether or not the databases themselves are served by containers.
Inspecting all SQL statements and NoSQL queries sent by an application tells you the average response time, execution frequency, number of fetched rows per execution, and failure rate of each query. With this information you can optimize caching and query behavior on the application side, not to mention tuning each database statement itself.
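A minimal sketch of how such per-statement metrics can be aggregated from raw execution records; the record shape and the SQL statements are invented for illustration.

```python
def query_stats(executions):
    """Aggregate per-statement metrics from query execution records.

    Record shape (an assumption for this sketch):
    {"sql": ..., "ms": ..., "rows": ..., "error": bool}.
    """
    stats = {}
    for e in executions:
        s = stats.setdefault(e["sql"], {"count": 0, "total_ms": 0.0,
                                        "rows": 0, "errors": 0})
        s["count"] += 1
        s["total_ms"] += e["ms"]
        s["rows"] += e["rows"]
        s["errors"] += int(bool(e["error"]))
    for s in stats.values():
        s["avg_ms"] = s["total_ms"] / s["count"]
        s["failure_rate"] = s["errors"] / s["count"]
    return stats

executions = [
    {"sql": "SELECT * FROM orders WHERE id = ?", "ms": 12.0, "rows": 1, "error": False},
    {"sql": "SELECT * FROM orders WHERE id = ?", "ms": 18.0, "rows": 1, "error": False},
    {"sql": "UPDATE stock SET qty = qty - 1", "ms": 40.0, "rows": 0, "error": True},
]
stats = query_stats(executions)
print(stats["SELECT * FROM orders WHERE id = ?"]["avg_ms"])  # 15.0
```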
Track JVM metrics in Docker containers
Tracking Java heap memory metrics enables you to see if your JVM’s garbage collection works as expected and if there is a memory shortage. Memory shortage is the #1 cause of increased garbage collection times. You can see how long a JVM is suspended due to garbage collection and then fine-tune memory settings accordingly. In our example below, you can see a JBoss process running within a Docker container on an AWS ECS cluster.
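A back-of-the-envelope sketch of the suspension metric: given the stop-the-world pause durations observed in a time window (e.g. parsed from GC logs), the suspension ratio is simply their sum divided by the window length. The numbers below are illustrative.

```python
def gc_suspension_ratio(pause_ms, window_ms):
    """Fraction of a time window the JVM spent suspended in stop-the-world
    GC pauses. Pause durations would come from GC logs or a metrics
    exporter; the input shape here is an assumption for illustration."""
    return sum(pause_ms) / window_ms

# Three pauses totaling 400 ms within a one-minute window:
ratio = gc_suspension_ratio([120, 80, 200], window_ms=60_000)
print(round(ratio * 100, 2))  # 0.67 (% of the window spent suspended)
```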
Full-stack Docker performance monitoring
Since you can use and run Docker containers virtually everywhere, and you can run almost anything within containers, monitoring needs to keep up with Docker’s dynamic and portable approach. Docker performance monitoring needs to cover many entities beyond just the container and application space.
The overview below shows how monitoring various aspects of your environment, including Docker containers, answers different questions about the performance of your applications. The relevant monitoring disciplines are server monitoring, network monitoring, Docker monitoring, application monitoring, and web monitoring.
What do you need for what?
Are all my machines healthy? (CPU usage, memory, disk latency)
Which components communicate with one another? (network connections between processes)
Are the processes responsive? (process response time and availability)
Does the network allow for proper process communication? (traffic, TCP requests, connection timeouts, retransmissions)
Are the containers healthy? (CPU usage, memory, network IO)
Which images have been deployed? (hosts with containers using the same image)
Where are new services deployed? (new instances, containers, service deployments)
Are my application services responsive? (response time, failure rate, workload)
Which code parts are critical? (CPU, disk, and network time spent in a method; exceptions)
Do the databases respond quickly? (query execution frequency, response time, and failure rate)
Are the message queues fast enough? (message response time, failure rate)
How does heap memory usage change over time? (memory used in the generations)
What is the average web response time experienced by users per region? (response time, number of user actions, Apdex rating)
Are my applications available and functional? (periodic availability checks and SLA reports)
Monitoring data captured for the entities listed above must be put into context and analyzed along with all other entities and their dependencies. For example, user action duration (web monitoring) for customers in a specific region may be high even though the web servers and backend services show low CPU usage (server or Docker monitoring). Let’s assume the network connections are also fast (network monitoring). The problem may then be too few worker threads in the Apache web servers within the containers (application monitoring), or an overloaded ESXi host with high CPU ready time for the respective VM (cloud monitoring).
In other words, full-stack monitoring requires that you monitor all entities with a single solution that can analyze and interpret monitoring data from across your technology stack.
Go for Dynatrace! Not convinced that Dynatrace can really monitor all the entities that I’ve outlined in this post? Then test drive Dynatrace for yourself! Simply sign up for the free trial, install OneAgent on your Docker hosts, and you’ll be all set for deep, full-stack monitoring of your Docker environment.