Quickly find key performance metrics for your Apache Spark instance
Dynatrace auto-detects your Spark components and shows key metrics and a timeline chart specific to each component.
Dynatrace provides in-depth cluster performance analysis of both current and historical data.
Using big data analytics of billions of dependencies within your application stack, Dynatrace directly pinpoints the components that are causing problems.
In under five minutes, Dynatrace detects your Apache Spark processes and shows metrics like CPU usage, connectivity, retransmissions, suspension rate, and garbage-collection time.
Dynatrace shows performance metrics for the three main Spark components:
Apache Spark monitoring provides insight into the resource usage, job status, and performance of Spark Standalone clusters.
The Cluster charts section provides all the information you need regarding Jobs, Stages, Messages, Workers, and Message processing.
For the full list of the provided cluster metrics, please visit our detailed blog post about Apache Spark monitoring.
Apache Spark metrics are presented alongside other infrastructure measurements, enabling in-depth cluster performance analysis of both current and historical data.
Spark node/worker monitoring provides metrics including:
For the full list of the provided worker metrics, please visit our detailed blog post about Apache Spark monitoring.
Apache Spark is an open-source cluster-computing framework. Originally developed at the University of California, Berkeley’s AMPLab, the Spark codebase was later donated to the Apache Software Foundation, which has maintained it since. Spark provides an interface for programming entire clusters with implicit data parallelism and fault tolerance.
Dynatrace monitors and analyzes the activity of your Apache Spark processes, providing Spark-specific metrics alongside all infrastructure measurements.
With Spark monitoring enabled globally, Dynatrace automatically collects Spark metrics whenever a new host running Spark is detected in your environment.
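For background on where such metrics come from: Spark itself exposes its internal metrics through a pluggable metrics system configured in `conf/metrics.properties`. The fragment below is a sketch that enables Spark's built-in JMX sink and JVM sources (these are standard Spark configuration keys; how Dynatrace actually collects the metrics is not specified here and may differ):

```
# conf/metrics.properties — enable Spark's built-in JMX sink so that
# master, worker, driver, and executor metrics are exposed over JMX.
# The "*" instance prefix applies the sink to all Spark components.
*.sink.jmx.class=org.apache.spark.metrics.sink.JmxSink

# Also expose JVM source metrics (e.g. GC time, memory pools) for the
# driver and executors.
driver.source.jvm.class=org.apache.spark.metrics.source.JvmSource
executor.source.jvm.class=org.apache.spark.metrics.source.JvmSource
```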