Dynatrace Hub

Confluent Cloud (Kafka)

Remotely monitor your Confluent Cloud Kafka Clusters and other resources!

Extension

Overview

This extension provides the ability to remotely monitor your Confluent Cloud Kafka clusters, connectors, schema registries, and ksqlDB applications. Every minute, it uses the API provided by Confluent to ingest data about how your Confluent resources are performing.

This is intended for users who:

  • Would like to monitor the health state and performance of their Confluent Cloud resources.
  • Are looking for analysis support for Ops, IT, and network admins.

This enables you to:

  • Monitor infrastructure with a comprehensive dashboard
  • Detect usage anomalies and alert on them

Compatibility requirements: Confluent Cloud resource(s) and API user/token

Note: The metrics in the Kafka Lag Partition Metrics and Kafka Lag Consumer Group Metrics feature sets are not provided by the Confluent API. To obtain these metrics, the Kafka Lag Exporter is required. See the Use cases section for additional information.

Use cases

  • This extension provides monitoring of Confluent Cloud resources via their public API (see the Details section).

  • Also supported, via the Kafka Lag Partition Metrics and Kafka Lag Consumer Group Metrics feature sets, are metrics provided by the Kafka Lag Exporter (a minimal scrape sketch follows this list).

    • NOTE: This exporter is not supported by Dynatrace and needs to be set up and run independently of this extension.
    • Currently the extension only supports ingesting metrics from this exporter.
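
As a rough sketch only (not part of the extension), the snippet below shows one way to check what a running Kafka Lag Exporter exposes before wiring it up. The exporter host and port are assumptions for illustration; use your own deployment's address. The metric key prefixes correspond to the Kafka Lag feature sets listed further down.

```python
import requests  # assumes the 'requests' package is available

# Assumption: the Kafka Lag Exporter exposes its Prometheus endpoint here.
# Replace the host and port with your own deployment's address.
EXPORTER_URL = "http://kafka-lag-exporter.example.com:8000/metrics"

# Metric key prefixes matching the Kafka Lag Partition Metrics and
# Kafka Lag Consumer Group Metrics feature sets.
PREFIXES = ("kafka_partition_", "kafka_consumergroup_")

# Fetch the Prometheus exposition text and print only the lag metrics.
text = requests.get(EXPORTER_URL, timeout=10).text
for line in text.splitlines():
    if line.startswith(PREFIXES):
        print(line)  # e.g. kafka_consumergroup_group_lag{...} 42.0
```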

Get started

Simply activate the extension in your environment using the in-product Hub, provide the necessary device configuration, and you're all set.

Read more in the Prometheus Extension Documentation

Details

This extension uses the Confluent Metric Export API to gather metrics.

NOTE: This API has a fixed 5-minute offset, which the extension currently does not honor. This leads to metrics being out of sync by 5 minutes between Dynatrace and Confluent. For more information, see the 'Timestamp offset' header in the link above.

First, you will need to create either a Cloud or Cluster API key and secret. This can be done via the Confluent UI or their CLI. The MetricsViewer role is required to access the Confluent API. It is suggested to use the Organization scope for this role so it remains usable as clusters are created or destroyed.

Then, in Dynatrace, create a new Monitoring Configuration and select "Monitor Remotely without OneAgent" near the bottom of the Monitoring Source screen.

In the Dynatrace Monitoring Configuration, the Confluent Cloud API key and API secret are used as the basic auth user (API key) and password (API secret).

Next, create a URL with your resource types and IDs appended, similar to what is shown below. This URL supports multiple resources, but it is recommended to include no more than 5 to 10 per URL.

https://api.telemetry.confluent.cloud/v2/metrics/cloud/export?resource.kafka.id=lkc-XXXXX&resource.connector.id=lcc-XXXX1&resource.connector.id=lcc-XXXX2

Base URL: https://api.telemetry.confluent.cloud/v2/metrics/cloud/export?

  1. Confluent Kafka Cluster

    • resource.kafka.id=lkc-XXXXX
  2. Confluent Kafka Schema Registry

    • resource.schema_registry.id=lsrc-XXXXX
  3. Confluent Kafka Connector

    • resource.connector.id=lcc-XXXXX
  4. Confluent Kafka KSQL DB Application

    • resource.ksql.id=lksqlc-XXXXX
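
For reference, the sketch below shows the kind of request the extension effectively issues against this endpoint: the resource parameters are appended to the base URL, and the Confluent Cloud API key and secret are supplied as basic auth. The resource IDs and credentials are placeholders; this is only an illustration of the call, not part of the extension itself.

```python
import requests  # assumes the 'requests' package is available

# Placeholders: replace with your own Confluent Cloud credentials and resource IDs.
API_KEY = "YOUR_CLOUD_OR_CLUSTER_API_KEY"
API_SECRET = "YOUR_API_SECRET"

BASE_URL = "https://api.telemetry.confluent.cloud/v2/metrics/cloud/export"
params = [
    ("resource.kafka.id", "lkc-XXXXX"),             # Kafka cluster
    ("resource.connector.id", "lcc-XXXX1"),         # connector
    ("resource.schema_registry.id", "lsrc-XXXXX"),  # schema registry
    ("resource.ksql.id", "lksqlc-XXXXX"),           # ksqlDB application
]

# The API key and secret map to the basic auth user and password fields of the
# Dynatrace Monitoring Configuration. Remember the fixed 5-minute timestamp
# offset noted above: the returned samples trail real time.
response = requests.get(BASE_URL, params=params, auth=(API_KEY, API_SECRET), timeout=30)
response.raise_for_status()
print(response.text[:500])  # metrics are returned in Prometheus exposition format
```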

Extension content

Content type              Number of items included
alerts                    2
screen chart groups       13
screen layout             7
list screen layout        7
screen entities lists     12
generic type              7
screen properties         5
generic relationship      5
metric metadata           43
metric query              3
dashboards                2

Feature sets

Below is a complete list of the feature sets provided in this version. To ensure a good fit for your needs, individual feature sets can be activated and deactivated by your administrator during configuration.

Metrics are grouped by feature set and listed as: metric name (metric key): description. Unit.

  • Kafka Partition Earliest Offset (kafka_partition_earliest_offset): Earliest offset of a partition. Unit: Count
  • Kafka Partition Latest Offset (kafka_partition_latest_offset): Latest offset of a partition. Unit: Count

  • Kafka Ksql Streaming Unit Count (confluent_kafka_ksql_streaming_unit_count.gauge): The count of Confluent Streaming Units (CSUs) for this KSQL instance. The implicit time aggregation for this metric is MAX. Unit: Count
  • Kafka Ksql Query Saturation (confluent_kafka_ksql_query_saturation): The maximum saturation for a given ksqlDB query across all nodes. Returns a value between 0 and 1; a value close to 1 indicates that ksqlDB query processing is bottlenecked on available resources. Unit: Count
  • Kafka Ksql Task Stored Bytes (confluent_kafka_ksql_task_stored_bytes): The size of a given task's state stores in bytes. Unit: Byte
  • Kafka Ksql Storage Utilization (confluent_kafka_ksql_storage_utilization): The total storage utilization for a given ksqlDB application. Unit: Percent

  • Kafka Cluster Request Bytes (confluent_kafka_server_request_bytes): The delta count of total request bytes from the specified request types sent over the network. Each sample is the number of bytes sent since the previous data point. The count is sampled every 60 seconds. Unit: Byte
  • Kafka Cluster Response Bytes (confluent_kafka_server_response_bytes): The delta count of total response bytes from the specified response types sent over the network. Each sample is the number of bytes sent since the previous data point. The count is sampled every 60 seconds. Unit: Byte
  • Kafka Cluster Active Connection Count (confluent_kafka_server_active_connection_count.gauge): The count of active authenticated connections. Unit: Count
  • Kafka Cluster Request Count (confluent_kafka_server_request_count.gauge): The number of requests received over the network. Unit: Count
  • Kafka Cluster Successful Authentication Count (confluent_kafka_server_successful_authentication_count.gauge): The number of successful authentications. Unit: Count

  • Kafka Server Consumer Lag Offsets (confluent_kafka_server_consumer_lag_offsets): The lag between a group member's committed offset and the partition's high watermark. Unit: Count

  • Kafka Server Cluster Link Destination Response Bytes (confluent_kafka_server_cluster_link_destination_response_bytes): The delta count of cluster linking response bytes from all request types. Each sample is the number of bytes sent since the previous data point. The count is sampled every 60 seconds. Unit: Byte
  • Kafka Server Cluster Link Source Response Bytes (confluent_kafka_server_cluster_link_source_response_bytes): The delta count of cluster linking source response bytes from all request types. Each sample is the number of bytes sent since the previous data point. The count is sampled every 60 seconds. Unit: Byte
  • Kafka Server Cluster Link Count (confluent_kafka_server_cluster_link_count.gauge): The current count of cluster links. The count is sampled every 60 seconds. The implicit time aggregation for this metric is MAX. Unit: Count
  • Kafka Server Cluster Link Mirror Topic Count (confluent_kafka_server_cluster_link_mirror_topic_count.gauge): The cluster linking mirror topic count for a link. The count is sampled every 60 seconds. Unit: Count
  • Kafka Server Cluster Link Mirror Topic Offset Lag (confluent_kafka_server_cluster_link_mirror_topic_offset_lag): The maximum cluster linking mirror topic offset lag across all partitions. The lag is sampled every 60 seconds. Unit: Count
  • Kafka Server Cluster Link Mirror Topic Bytes (confluent_kafka_server_cluster_link_mirror_topic_bytes): The delta count of cluster linking mirror topic bytes. The count is sampled every 60 seconds. Unit: Byte

  • Kafka Connect Sent Records (confluent_kafka_connect_sent_records): The delta count of the total number of records sent from the transformations and written to Kafka for the source connector. Each sample is the number of records sent since the previous data point. Unit: Count
  • Kafka Connect Received Records (confluent_kafka_connect_received_records): The delta count of the total number of records received by the sink connector. Each sample is the number of records received since the previous data point. Unit: Count
  • Kafka Connect Sent Bytes (confluent_kafka_connect_sent_bytes): The delta count of total bytes sent from the transformations and written to Kafka for the source connector. Each sample is the number of bytes sent since the previous data point. Unit: Byte
  • Kafka Connect Received Bytes (confluent_kafka_connect_received_bytes): The delta count of total bytes received by the sink connector. Each sample is the number of bytes received since the previous data point. Unit: Byte
  • Kafka Connect Dead Letter Queue Records (confluent_kafka_connect_dead_letter_queue_records): The delta count of dead letter queue records written to Kafka for the sink connector. Unit: Count

  • Kafka Consumer Group Group Topic Sum Lag (kafka_consumergroup_group_topic_sum_lag): Sum of group offset lag across topic partitions. Unit: Count
  • Kafka Consumer Group Poll Time (ms) (kafka_consumergroup_poll_time_ms): Group poll time. Unit: MilliSecond
  • Kafka Consumer Group Group Offset (kafka_consumergroup_group_offset): Last group consumed offset of a partition. Unit: Count
  • Kafka Consumer Group Group Sum Lag (kafka_consumergroup_group_sum_lag): Sum of group offset lag. Unit: Count
  • Kafka Consumer Group Group Lag (kafka_consumergroup_group_lag): Group offset lag of a partition. Unit: Count
  • Kafka Consumer Group Group Lag Seconds (kafka_consumergroup_group_lag_seconds): Group time lag of a partition. Unit: Second
  • Kafka Consumer Group Group Max Lag (kafka_consumergroup_group_max_lag): Max group offset lag. Unit: Count
  • Kafka Consumer Group Group Max Lag Seconds (kafka_consumergroup_group_max_lag_seconds): Max group time lag. Unit: Second

  • Kafka Schema Registry Schema Count (confluent_kafka_schema_registry_schema_count.gauge): The number of registered schemas. Unit: Count
  • Kafka Schema Registry Request Count (confluent_kafka_schema_registry_request_count.gauge): The delta count of requests received by the schema registry server. Each sample is the number of requests received since the previous data point. The count is sampled every 60 seconds. Unit: Count

  • Kafka Cluster Received Bytes (confluent_kafka_server_received_bytes): The number of bytes of the customer's data received from the network. Unit: Byte
  • Kafka Cluster Sent Bytes (confluent_kafka_server_sent_bytes): The number of bytes of the customer's data sent over the network. Unit: Byte
  • Kafka Cluster Received Records (confluent_kafka_server_received_records): The number of records received. Unit: Count
  • Kafka Cluster Sent Records (confluent_kafka_server_sent_records): The number of records sent. Unit: Count
  • Kafka Cluster Retained Bytes (confluent_kafka_server_retained_bytes): The current number of bytes retained by the cluster. Unit: Byte
  • Kafka Cluster Partition Count (confluent_kafka_server_partition_count.gauge): The number of partitions. Unit: Count
  • Kafka Cluster Load Raw (confluent_kafka_server_cluster_load_percent): A measure of the utilization of the cluster. The value is between 0.0 and 1.0. Unit: Count

Full version history

For more information on how to install the downloaded package, follow the instructions on this page.

DXS-2054

  • Add new calculated metrics to correct aggregation issue
    • func:confluent_kafka_server_received_bytes_per_sec
    • func:confluent_kafka_server_sent_bytes_per_sec

v2.1.2

  • Added display names to metrics calculated on the different entity screens

v2.1.1

  • Updated the screens section to help with validation errors seen when activating v2.1.0

v2.1.0

  • Updates to metric selectors in screens to better match aggregations in Confluent's Web Portal

v2.0.0

  • IMPORTANT: Updated Dynatrace metric keys to match the metric keys from Prometheus.
    • This will cause existing Dashboards & Alerts (or anything that relies on the old metric keys) to stop working! Please update them accordingly.
    • Please update your Monitoring Configurations immediately once this new version is activated.
    • You can still view the old metrics via either the Confluent Kafka Overview (Deprecated Dashboard) or the Data Explorer.
  • Added new Cluster, Schema Registry and ksqlDB metrics.

v1.2.1

  • Updated to use Schema v1.256
  • Added Entity Type to metrics
  • Updated Cluster Count Dashboard Tile

v1.1.1

  • Added support for the confluent_kafka_server_cluster_load_percent metric

v1.1.0

  • Updates to metric metadata to correct units for Lag Offsets

v1.0.0

  • Initial Version to collect metrics from Confluent Cloud's API & the Kafka Lag Exporter
