Easily find key performance metrics about your Hadoop cluster
Dynatrace auto-detects your Hadoop components and shows key metrics and a timeline chart specific to each component.
Dynatrace shows you Hadoop-specific metrics, providing both current and historical data.
Using big-data analytics of billions of dependencies within your application stack, Dynatrace directly pinpoints the components that are causing problems.
Dynatrace’s Hadoop server monitoring provides a high-level overview of the main Hadoop components within your cluster.
Hadoop-specific metrics are presented alongside all infrastructure measurements, providing you with in-depth Hadoop performance analysis of both current and historical data. Gain enhanced insight into components like HDFS and MapReduce.
With Hadoop monitoring enabled globally, Dynatrace automatically collects Hadoop metrics whenever a new host running Hadoop is detected in your environment.
In under five minutes, Dynatrace detects your Hadoop processes and shows metrics such as CPU usage, connectivity, retransmissions, suspension rate, and garbage-collection time.
Cluster charts and metrics show you key performance data about your Hadoop processes. Hadoop NameNode pages provide details about your HDFS capacity, usage, blocks, cache, files, and DataNode health.
For the full list of the provided NameNode and DataNode metrics, please see our detailed blog post about Hadoop monitoring.
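Independently of Dynatrace, the same NameNode metrics are exposed through Hadoop's built-in JMX JSON endpoint (`/jmx` on the NameNode web UI, port 9870 by default in Hadoop 3.x). A minimal sketch of reading HDFS capacity and block counts from it; the hostname and the abbreviated sample payload below are illustrative assumptions, not values from your cluster:

```python
import json
from urllib.request import urlopen

# The NameNode's JMX endpoint, filtered to the FSNamesystem bean.
# Port 9870 is the Hadoop 3.x default (Hadoop 2.x used 50070);
# the hostname is a placeholder.
NAMENODE_JMX = ("http://namenode.example.com:9870/jmx"
                "?qry=Hadoop:service=NameNode,name=FSNamesystem")

def parse_fsnamesystem(jmx_payload: dict) -> dict:
    """Extract key HDFS metrics from a /jmx JSON response."""
    bean = jmx_payload["beans"][0]
    return {
        "capacity_total_bytes": bean["CapacityTotal"],
        "capacity_used_bytes": bean["CapacityUsed"],
        "blocks_total": bean["BlocksTotal"],
        "files_total": bean["FilesTotal"],
    }

# Abbreviated, made-up sample of what the endpoint returns.
sample = {
    "beans": [{
        "name": "Hadoop:service=NameNode,name=FSNamesystem",
        "CapacityTotal": 1099511627776,
        "CapacityUsed": 274877906944,
        "BlocksTotal": 120000,
        "FilesTotal": 95000,
    }]
}

metrics = parse_fsnamesystem(sample)
print(metrics["blocks_total"])  # 120000

# Against a live cluster you would fetch the payload instead:
# metrics = parse_fsnamesystem(json.load(urlopen(NAMENODE_JMX)))
```

This is the raw data a monitoring tool charts for you; polling it yourself is useful mainly for spot checks or custom scripts.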
Dynatrace provides relevant performance metrics for your key Hadoop processes, including the NameNode and DataNodes.
The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models.
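The "simple programming models" the Hadoop library refers to are typified by MapReduce: a mapper emits key-value pairs and a reducer aggregates them per key. A minimal word-count sketch of that flow as plain Python functions (with Hadoop Streaming these would be two stdin/stdout scripts distributed across the cluster):

```python
from collections import Counter
from typing import Iterable, Iterator, Tuple

def mapper(lines: Iterable[str]) -> Iterator[Tuple[str, int]]:
    # Map step: emit (word, 1) for every word in the input.
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def reducer(pairs: Iterable[Tuple[str, int]]) -> dict:
    # Reduce step: sum the counts emitted for each word.
    counts = Counter()
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

result = reducer(mapper(["Hadoop stores data", "Hadoop processes data"]))
print(result["hadoop"])  # 2
```

In a real cluster the framework partitions the input across DataNodes, runs many mapper instances in parallel, and shuffles each key to a reducer, which is what makes the model scale to large data sets.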
Dynatrace monitors and analyzes the activity of your Hadoop processes, providing Hadoop-specific metrics alongside all infrastructure measurements.