Dynatrace monitoring consumption is based on various types of monitoring units that are consumed by your Dynatrace environment during the monitoring of your applications and related services. Details of these monitoring unit types and how these units are consumed are outlined below.
Unless otherwise stated, the consumption details explained here apply to both Dynatrace SaaS and Managed deployments. To get started using Dynatrace, contact Dynatrace Sales. Your sales representative will provide you with further details.
This page is provided for informational purposes only. The terms of the Dynatrace free trial offer and/or your Dynatrace license will be applied to any use of Dynatrace products or services.
Application and infrastructure monitoring
Dynatrace application and infrastructure monitoring is provided via installation of a single Dynatrace OneAgent on each monitored host in your environment. OneAgent is licensed on a per-host basis (virtual or physical server).
However, not all hosts are of equal size. Larger hosts consume more host units than do smaller-sized hosts. We use the amount of RAM on a monitored server as a measuring stick to determine the size of a host (i.e., how many host units it comprises). The advantage of this approach is its simplicity—we don’t take technology-specific factors into consideration (for example, the number of JVMs or the number of microservices that are hosted on a server). It doesn't matter if a host is .NET-based, Java-based, or something else. You can have 10 JVMs or 1,000 JVMs; such factors don't affect the amount of monitoring that an environment consumes.
OneAgent can operate in two different modes. By default, OneAgent operates in Full-Stack Monitoring mode. Alternatively, you can use Infrastructure Monitoring mode to monitor hosts that don't require full-stack visibility. Infrastructure mode consumes fewer host units than Full-Stack mode.
Refer to the host unit weighting table below to see how many host units are consumed based on the amount of RAM a monitored server has. Total host-unit consumption is calculated based on the sum of all host units of all modes and monitored systems.
|Max. RAM||Host units (Full-Stack*)||Host units (Infrastructure**)|
* When the amount of RAM on a host falls between the values listed in the table above, the number is rounded up. For example, a host with 12 GB RAM consumes 1 host unit because 12 GB falls between 8 GB and 16 GB.
** For Infrastructure Monitoring mode, the same rounding principle applies, but the number of host units consumed by a host is capped at 1.0. If you have an existing agreement that doesn't reflect the 1.0 cap on host units per host, please contact your Dynatrace Sales representative.
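The rounding and capping rules described above can be sketched in a few lines of Python. The tier boundaries below the 16 GB mark are illustrative assumptions for the sketch; the authoritative weights are in the host unit weighting table.

```python
import math

# Illustrative tier table: (max RAM in GB, host unit weight).
# Assumed values for this sketch -- consult the official weighting table.
TIERS = [(1.6, 0.1), (4, 0.25), (8, 0.5), (16, 1.0)]

def host_units(ram_gb, infrastructure_mode=False):
    """RAM between listed values rounds up to the next tier; above 16 GB,
    every additional 16 GB adds one host unit."""
    for max_ram, weight in TIERS:
        if ram_gb <= max_ram:
            units = weight
            break
    else:
        units = math.ceil(ram_gb / 16)  # e.g. 40 GB -> 3 host units
    if infrastructure_mode:
        units = min(units, 1.0)  # Infrastructure Monitoring caps at 1.0
    return units

print(host_units(12))        # 12 GB falls between 8 and 16 GB -> 1.0
print(host_units(64, True))  # capped at 1.0 in Infrastructure mode
```

For example, a 12 GB host evaluates to 1 host unit, matching the rounding example above.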
When OneAgent is integrated using universal injection, OneAgent operates at the application level (i.e., application-only monitoring) rather than the host level. Such cases occur when you don't have access to the underlying server or host, whether physical or virtual, and therefore can't install OneAgent directly on the host. Examples include, but are not limited to, AWS Fargate (serverless containers), Red Hat OpenShift Container Platform (PaaS), Pivotal Web Services (PaaS), and Solaris Zones.
For these technologies, host units are calculated based on memory detected at the operating system instance or container level. Calculations take into account the detected memory limit, such as container memory limits. If no memory limits are detected, calculations may use the underlying host memory, which may reflect a higher number of host units.
Note that Azure services running on the Azure App Service plan are licensed based upon the number and size of virtual machine instances, regardless of how many applications run on the instances.
- 4 serverless containers run concurrently for 1 hour. Each container has a memory limit of 1 GB RAM.
4 containers x 0.1 host unit weighting = 0.4 host units
- 2 Docker containers run for 1 hour in application-only monitoring mode on a host that has 16 GB RAM. Because no memory limit is detected, each container is measured at the full 16 GB of host RAM, so the two containers consume 2 host units in total.
2 containers x 1.0 host unit weighting = 2 host units
Host unit overages (optional)
If you've arranged for an allotment of host units to monitor your hosts and your account allows overages (i.e., you're entitled to exceed this number), the overages are calculated in host unit hours. For example, suppose you've arranged to monitor up to 10 host units (a maximum of 160 GB total RAM). If you connect another host that equates to 2 host units, you'll have 12 host units in total and will therefore exceed your quota by 2 host units. If you continue to monitor your hosts at 12 host units for a full week, you'll accrue an overage of 336 host unit hours.
2 (host units) x 24 (hours a day) x 7 (days) = 336 (host unit hours overage)
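The overage arithmetic above amounts to a single multiplication; a minimal sketch:

```python
def overage_host_unit_hours(used_host_units, quota_host_units, hours):
    """Host unit hours accrued above the quota during sustained over-usage."""
    return max(used_host_units - quota_host_units, 0) * hours

# 12 host units against a 10-host-unit quota, sustained for a full week:
print(overage_host_unit_hours(12, 10, 24 * 7))  # -> 336
```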
To add or remove overages from your account, contact Dynatrace Sales.
Host unit hours
A host unit hour represents the consumption of a host unit over a time period. 1 host unit hour equates to 1 host unit being consumed for 1 hour. A host with 16 GB of RAM (i.e., 1 host unit) running for a full day consumes 24 host unit hours.
For example, say you have 1,000 host unit hours available and you want to monitor a host that has 64 GB RAM (which equates to 4 host units). If you keep the host running for a full day, it will consume 96 host unit hours.
4 (host units) x 24 (hours a day) = 96 (host unit hours)
The 1,000 host unit hours will be consumed in slightly more than 10 days.
4 (host units) x 24 (hours) x 10 (days) = 960 host unit hours
Each minute, Dynatrace calculates host-unit consumption based on true concurrency. This means that two hosts running within the same hour consume two host units only if both hosts run at the same time. Host unit hours are counted in calendar hours (for example, 10:00 – 11:00 AM), not usage hours (for example, 10:23 – 11:23 AM).
If a host runs for less than 5 minutes, it doesn't count against your host unit hour quota. A host running for 5 minutes or longer is rounded up to 1 host unit hour.
When the monitoring of a host stops for any reason, that host's consumed host units are released and made available to another host within about five minutes.
You have a host with 16 GB RAM (which equals 1 host unit) running from 10:00-10:30 AM. At 10:30 you spin up another host of the same size. Dynatrace considers this a single host unit because the hosts don't run concurrently.
You start the first host at 10:00 AM and launch another host at 10:30 AM. Then, both hosts run together for 30 minutes and are shut down at the same time. Dynatrace considers this to be 2 host units because both hosts run at the same time.
One host of size 16 GB RAM is started and stopped three times within an hour:
12:10 - Server start
12:20 - Server stop
12:30 - Server start
12:40 - Server stop
12:50 - Start
13:00 - Stop
Such a scenario equates to 1 host unit hour because true concurrency is taken into account.
You have a host with 16 GB RAM (which equals 1 host unit) running from 10:23-11:23 AM. Since the host runs during 2 calendar hours (10:00 – 11:00 AM and 11:00 AM – 12:00 PM), it equates to 2 host unit hours.
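The calendar-hour and true-concurrency rules from the examples above can be approximated as follows. This is a simplified sketch of the behavior described here, not the exact billing algorithm; intervals are expressed in minutes since midnight for same-size hosts.

```python
def billed_host_unit_hours(intervals, host_units=1.0):
    """Approximate billed host unit hours for run intervals (start, stop),
    in minutes since midnight, billing each calendar hour at the peak
    number of concurrently running hosts within that hour."""
    # Runs shorter than 5 minutes don't count at all.
    runs = [(s, e) for s, e in intervals if e - s >= 5]
    if not runs:
        return 0.0
    first_hour = min(s for s, _ in runs) // 60
    last_hour = max(e - 1 for _, e in runs) // 60
    total = 0.0
    for hour in range(first_hour, last_hour + 1):
        # Peak concurrency across the minutes of this calendar hour.
        peak = 0
        for minute in range(hour * 60, (hour + 1) * 60):
            running = sum(1 for s, e in runs if s <= minute < e)
            peak = max(peak, running)
        total += peak * host_units
    return total

# One host started and stopped three times between 12:00 and 13:00:
print(billed_host_unit_hours([(730, 740), (750, 760), (770, 780)]))  # -> 1.0
# One host running 10:23-11:23 spans two calendar hours:
print(billed_host_unit_hours([(623, 683)]))  # -> 2.0
```

Two hosts running back to back (10:00-10:30 and 10:30-11:00) evaluate to 1 host unit hour, while the same two hosts overlapping evaluate to 2, matching the examples above.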
Host unit hours are used for Dynatrace free trials. When you sign up for a Dynatrace free trial, you receive a certain number of host unit hours to evaluate Dynatrace.
If you know in advance that your base quota of host units will be exceeded due to holiday demand or a short-lived project (for example, on Black Friday or during a one-time testing initiative), you can use host unit hours rather than host units to manage variable traffic spikes. For example, if you have a pool of 9,000 host unit hours and 100 host units, during Black Friday, you'll need more hosts to scale up for the increased traffic on your site. In such a case, you have the option of using all 9,000 host unit hours in a single day. This would enable you to connect an additional 375 host units (475 total maximum) to Dynatrace for one day.
9,000 (host unit hours) / 24 (hours) + 100 (base quota of host units) = 475 (max. host units)
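The burst-capacity calculation above can be sketched as a helper that spreads a host-unit-hour pool over a burst window on top of the base quota:

```python
def max_burst_host_units(hu_hour_pool, base_host_units, burst_hours=24):
    """Host units you can connect during a burst period if the whole
    host-unit-hour pool is spent evenly over that period, on top of the
    base quota of host units."""
    return hu_hour_pool / burst_hours + base_host_units

# 9,000 host unit hours spent in a single day on top of 100 base host units:
print(max_burst_host_units(9_000, 100))  # -> 475.0
```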
If your account has multiple monitoring environments, for example, one for development and the other one for production, then overages are calculated per account and not per environment. Only when the account quota is exceeded, then overages are incurred.
For example, you licensed 100 host units and you have two environments, one for production and one for testing. You assign 80 host units to the production environment and 20 host units to the testing environment. Your license entitles you for overages (you can see this in the account overview below the host units circle). If production uses 70 host units but testing uses 30 host units, the total account quota of 100 host units is not exceeded thus no overages are incurred. Only if both environments use more than 100 host units overages are incurred.
Cloud service monitoring
Beginning in early 2021, all cloud services consume DDUs. The amount of DDU consumption per service instance depends on the number of monitored metrics and their dimensions (each metric dimension results in the ingestion of 1 data point; 1 data point consumes 0.001 DDUs). The following DDU consumption estimates per service instance are based on the selection of recommended metrics only, predefined dimensions, and assumed dimension values.
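Since each ingested data point consumes 0.001 DDUs, a service instance's DDU rate is just its data-point rate scaled by that constant. A minimal sketch (the 54-data-points-per-minute instance below is a hypothetical illustration):

```python
DDU_PER_DATA_POINT = 0.001  # 1 ingested data point = 0.001 DDUs

def ddus_per_minute(data_points_per_minute):
    """Estimated DDU consumption for a service instance, where each reported
    metric dimension contributes one data point per minute."""
    return data_points_per_minute * DDU_PER_DATA_POINT

# A hypothetical instance reporting 54 data points per minute consumes
# roughly 0.054 DDUs per minute.
rate = ddus_per_minute(54)
```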
|Service name||DDU per minute per instance|
|AWS API Usage||0.054|
|AWS Direct Connect||0.016|
|AWS Elastic Beanstalk||0.008|
|AWS Elemental MediaConnect||0.012|
|AWS Site-to-Site VPN||0.003|
|AWS Storage Gateway||0.061|
|AWS Transit Gateway||0.008|
|Amazon CloudWatch Logs||0.038|
|Amazon Database Migration Service||0.078|
|Amazon DynamoDB Accelerator (DAX)||0.124|
|Amazon EC2 Auto Scaling||0.008|
|Amazon EC2 Spot Fleet||0.003|
|Amazon ECS ContainerInsights||0.033|
|Amazon Elastic File System (EFS)||0.003|
|Amazon Elastic Inference||0.024|
|Amazon Elastic Kubernetes Service (EKS)||0.117|
|Amazon Elastic Map Reduce (EMR)||0.016|
|Amazon Elastic Transcoder||0.007|
|Amazon Kinesis Data Analytics||0.028|
|Amazon Kinesis Data Streams||0.005|
|Amazon Kinesis Video Streams||0.005|
|Amazon Managed Streaming for Kafka||0.128|
|Amazon MediaPackage Live||0.024|
|Amazon MediaPackage Video on Demand||0.005|
|Amazon Route 53||0.002|
|Amazon Route 53 Resolver||0.003|
|Amazon SageMaker Endpoint Instances||0.015|
|Amazon SageMaker Endpoints||0.015|
|Amazon Simple Notification Service (SNS)||0.003|
|Amazon Simple Queue Service (SQS)||0.004|
|Amazon Transfer Family||0.002|
|Amazon WAF Classic||0.24|
|Service name||DDU per minute per instance|
|Event Hub Namespace||0.003|
|SQL Elastic Pool||0.012|
|Virtual Machine Scale Set||0.01|
|Service name||DDU per minute per instance|
|Azure Anomaly Detector||0.216|
|Azure Apache Spark Pool||0.011|
|Azure App Configuration||0.01|
|Azure App Service Plan||0.018|
|Azure Application Insights||0.079|
|Azure Automation Account||0.045|
|Azure Batch Account||0.013|
|Azure Bing Autosuggest||0.216|
|Azure Bing Custom Search||0.216|
|Azure Bing Entity Search||0.216|
|Azure Bing Search||0.216|
|Azure Bing Spell Check||0.216|
|Azure Blockchain Service||0.042|
|Azure CDN WAF Policy||0.009|
|Azure Cognitive Services - All in One||0.216|
|Azure Computer Vision||0.216|
|Azure Connection Monitors Preview||0.486|
|Azure Container Instance||0.008|
|Azure Container Registry||0.005|
|Azure Content Moderator||0.216|
|Azure Custom Vision Prediction||0.216|
|Azure Custom Vision Training||0.216|
|Azure DB for MariaDB||0.01|
|Azure DB for MySQL||0.01|
|Azure DB for PostgreSQL - Hyperscale||0.007|
|Azure DB for PostgreSQL - Server||0.01|
|Azure Data Explorer Cluster||0.049|
|Azure Data Factory v1||0.006|
|Azure Data Factory v2||0.024|
|Azure Data Lake Analytics||0.006|
|Azure Data Lake Storage Gen1||0.004|
|Azure Data Share||0.006|
|Azure Device Provisioning Service||0.063|
|Azure Event Grid Domain||0.382|
|Azure Event Grid System Topic||0.051|
|Azure Event Grid Topic||0.051|
|Azure Event Hubs Cluster||0.015|
|Azure ExpressRoute Circuit||0.01|
|Azure Front Door||0.189|
|Azure Function App Deployment Slot||0.03|
|Azure HDInsight Cluster||0.009|
|Azure Immersive Reader||0.216|
|Azure Ink Recognizer||0.216|
|Azure Integration Service Environment||0.029|
|Azure IoT Central Application||0.009|
|Azure Key Vault||0.037|
|Azure Kubernetes Service (AKS)||0.039|
|Azure Language Understanding (LUIS)||0.216|
|Azure Language Understanding Authoring (LUIS)||0.216|
|Azure Logic Apps||0.034|
|Azure Machine Learning Workspace||2.49|
|Azure Maps Account||0.09|
|Azure Mesh Application||0.054|
|Azure NetApp Capacity Pool||0.001|
|Azure NetApp Volume||0.001|
|Azure Network Interface||0.004|
|Azure Notification Hub||0.004|
|Azure Power BI Embedded||0.008|
|Azure Public IP Address||0.01|
|Azure QnA Maker||0.216|
|Azure SQL Managed Instance||0.007|
|Azure SQL Pool||0.028|
|Azure Search Service||0.003|
|Azure Spring Cloud||0.234|
|Azure Storage Account (classic)||0.189|
|Azure Storage Blob Services (classic)||0.189|
|Azure Storage File Services (classic)||0.567|
|Azure Storage Queue Services (classic)||0.189|
|Azure Storage Sync Service||0.111|
|Azure Storage Table Services (classic)||0.189|
|Azure Stream Analytics Job||0.126|
|Azure Streaming Endpoint||0.033|
|Azure Synapse Workspace||0.301|
|Azure Text Analytics||0.216|
|Azure Time Series Insights Environment||0.009|
|Azure Time Series Insights Event Source||0.007|
|Azure Traffic Manager Profile||0.006|
|Azure Virtual Machine (classic)||0.007|
|Azure Virtual Network Gateway||0.003|
|Azure Web App Deployment Slot||0.03|
|Service name||Configuration||DDU per minute per instance|
|Amazon EC2 Instance (via GCP)||cloud_tasks_queue/default||0.014|
|Cloud SQL Database||cloudsql_database/default||0.066|
|Google Apigee Environment||apigee.googleapis.com/Environment/default||0.027|
|Google Apigee Proxy (v2)||apigee.googleapis.com/ProxyV2/default||0.246|
|Google Apigee Proxy||apigee.googleapis.com/Proxy/default||0.207|
|Google App Engine Application||gae_app/default||0.101|
|Google App Engine Instance||gae_instance/default||0.005|
|Google Assistant Action Project||assistant_action_project/default||0.567|
|Google Cloud APIs||api/default||0.086|
|Google Cloud BigQuery BI Engine Model||bigquery_biengine_model/default||0.055|
|Google Cloud BigQuery Dataset||bigquery_dataset/default||0.147|
|Google Cloud BigQuery Project||bigquery_project/default||0.085|
|Google Cloud Bigtable Cluster||bigtable_cluster/default||0.018|
|Google Cloud Bigtable Table||bigtable_table/default||0.111|
|Google Cloud Composer Environment||cloud_composer_environment/default||0.129|
|Google Cloud DNS Query||dns_query/default||0.003|
|Google Cloud Data Loss Prevention Project||cloud_dlp_project/default||0.019|
|Google Cloud Dataproc Cluster||cloud_dataproc_cluster/default||0.081|
|Google Cloud Datastore||datastore_request/default||0.025|
|Google Cloud Function||Cloud Function/default||0.073|
|Google Cloud IoT Registry||cloudiot_device_registry/default||0.026|
|Google Cloud Logging export sink||logging_sink/default||0.003|
|Google Cloud ML Job||cloudml_job/default||0.162|
|Google Cloud ML Model Version||cloudml_model_version/default||0.038|
|Google Cloud Memorystore||redis_instance/default||0.169|
|Google Cloud Microsoft Active Directory Domain||microsoft_ad_domain/default||0.028|
|Google Cloud NAT Gateway||nat_gateway/default||0.04|
|Google Cloud Network TCP Load Balancer Rule||tcp_lb_rule/default||0.045|
|Google Cloud Network UDP Load Balancer Rule||udp_lb_rule/default||0.036|
|Google Cloud Pub/Sub Snapshot||pubsub_snapshot/default||0.021|
|Google Cloud Pub/Sub Subscription||pubsub_subscription/default||0.166|
|Google Cloud Pub/Sub Topic||pubsub_topic/default||0.049|
|Google Cloud Router||gce_router/default||0.032|
|Google Cloud Run Revision||cloud_run_revision/default||0.059|
|Google Cloud Run for Anthos Broker||knative_broker/default||0.27|
|Google Cloud Run for Anthos Revision||knative_revision/default||0.547|
|Google Cloud Run for Anthos Trigger||knative_trigger/default||0.57|
|Google Cloud Spanner Instance||spanner_instance/default||0.223|
|Google Cloud Storage bucket||gcs_bucket/default||0.185|
|Google Cloud TCP/SSL Proxy Rule||tcp_ssl_proxy_rule/default||0.054|
|Google Cloud TPU Worker||tpu_worker/default||0.013|
|Google Cloud Trace||cloudtrace.googleapis.com/CloudtraceProject/default||0.003|
|Google Cloud VPN Tunnel||vpn_gatewayv/||0.09|
|Google Consumed API||consumed_api/default||0.084|
|Google Consumer Quota||consumer_quota/default||0.021|
|Google Filestore Instance||filestore_instance/default||0.048|
|Google Firebase Realtime Database||firebase_namespace/default||0.104|
|Google Firestore Instance||firestore_instance/default||0.324|
|Google GKE Container||gke_container/default||0.109|
|Google IAM Service Account||iam_service_account/default||0.04|
|Google Instance Group||instance_group/default||0.001|
|Google Interconnect Attachment||interconnect_attachment/default||0.005|
|Google Internal HTTP/S Load Balancing Rule||internal_http_lb_rule/default||0.405|
|Google Internal TCP Load Balancer Rule||internal_tcp_lb_rule/default||0.135|
|Google Internal UDP Load Balancer Rule||internal_udp_lb_rule/default||0.108|
|Google Kubernetes Cluster||k8s_cluster/default||0.009|
|Google Kubernetes Container Agent||k8s_container/agent||0.021|
|Google Kubernetes Container Apigee||k8s_container/apigee||0.268|
|Google Kubernetes Container Nginx||k8s_container/nginx||0.017|
|Google Kubernetes Container||k8s_container/default||0.024|
|Google Kubernetes Node||k8s_node/default||0.039|
|Google NetApp CVS-SO||cloudvolumesgcp-api.netapp.com/NetAppCloudVolumeSO/default||0.013|
|Google NetApp Cloud Volume||netapp_cloud_volume/default||0.032|
|Google Network Security Policy||network_security_policy/default||0.018|
|Google Producer Quota||producer_quota/default||0.012|
|Google Pub/Sub Lite Subscription Partition||pubsublite_subscription_partition/default||0.006|
|Google Pub/Sub Lite Topic Partition||pubsublite_topic_partition/default||0.013|
|Google Transfer Service Agent||transfer_service_agent/default||0.002|
|Google VM Instance Firewall Insights||gce_instance/firewallinsights||0.18|
|Google VM Instance||gce_instance/appenginee||0.108|
|Google VPC Access Connector||vpc_access_connector/default||0.004|
|Google Zone Network Health||gce_zone_network_health/default||0.243|
|Google reCAPTCHA Key||recaptchaenterprise.googleapis.com/Key/default||0.003|
Application Security Monitoring
Application Security Monitoring helps you to visualize, analyze, and monitor security vulnerabilities in your environment that are related to third-party libraries at runtime.
Dynatrace Application Security is licensed based on the consumption of Application Security units. The number of Application Security units that an environment consumes is based on the amount of RAM that a monitored server has (see the table below) and the number of hours that those Application Security units are monitored. For example, running Application Security for a 16 GB host (1 Application Security unit) for a full year requires 8,760 Application Security units [
1 (Application Security unit) x 365 (days) x 24 (hours) = 8,760 (Application Security units)]. See the weighting table below for details.
|Host size (based on RAM GB)||Application Security unit weight|
|N x 16||N|
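Per the weighting table above, the unit weight scales linearly with RAM in 16 GB steps, so annual consumption is weight times monitored hours. A minimal sketch:

```python
def appsec_units(host_ram_gb, monitored_hours):
    """Application Security units = unit weight (RAM / 16 GB) x monitored hours."""
    weight = host_ram_gb / 16  # per the table above: N x 16 GB -> weight N
    return weight * monitored_hours

# A 32 GB host (weight 2) monitored around the clock for a year:
print(appsec_units(32, 24 * 365))  # -> 17520.0
```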
Application Security units are consumed in addition to the host unit hours used for Full-Stack and Infrastructure Monitoring. For example, you can monitor the security of an application that runs on a Tomcat server monitored only with Dynatrace Infrastructure Monitoring. While this approach doesn't provide the deeper performance insights of Full-Stack Monitoring mode, it does provide powerful Dynatrace Application Security insights while saving on costs.
The allocation of Application Security units is only applicable to hosts that run supported technologies. Please contact a Dynatrace product specialist via in-product chat or reach out to your account executive to learn more.
Digital Experience Monitoring
In addition to the application and infrastructure monitoring provided by OneAgent, you may also require Dynatrace Synthetic Monitoring, Real User Monitoring, and Session Replay. These capabilities are consumed based on Digital Experience Monitoring units, also known as DEM Units. The number of DEM Units you need depends on how many synthetic monitors you want to run and how many user sessions you need to monitor. The table below explains the rate at which DEM Units are consumed for each capability and unit of measure.
|Unit of measure||Capability||Consumption per unit of measure|
|Synthetic action||Browser monitors, browser clickpaths|| |
|User session per application*||Real User Monitoring (without Session Replay playback)||0.25 DEM units|
|User session per application*||Real User Monitoring session captured with Session Replay||1 DEM unit|
|Session property**||Real User Monitoring||0.01 DEM units|
|User action property**||Real User Monitoring||0.01 DEM units|
|Third-party synthetic result||Third-party synthetic API ingestion|| |
* User sessions are charged per application, even if a session spans multiple applications from the same domain. Only user sessions from real users are counted in your consumption of user sessions. User sessions from synthetic users and "robots" aren't counted when calculating your monitoring consumption.
** Data types for properties are weighted differently and affect billing, monitoring, and consumption. Short strings (fewer than 100 characters) and numeric (long, double, or date) data types are counted as 1 property each. Long string data types are counted as 1 property per 100 characters.
A single Real User Monitoring session (i.e., a "user session") is defined as a sequence of interactions between a user and a browser-based web application or a native iOS or Android mobile app within an interval and with at least two user actions. A user action is a button click or app start that triggers a web request (for example, a page load or a page-view navigation). Interactions that include only one user action are considered "bounced" and aren't counted. A user who interacts with more than one web application or app at the same time consumes one session for each of those web applications or apps, except when the interaction is considered "bounced". Interactions with hybrid mobile apps, which for technical reasons include both a web application and a mobile app, are counted as a single session.
A billed session ends either because it technically ended or after 60 minutes of continuous interaction with the web application or mobile app.
If you've set up an annual RUM sessions quota, your usage will reset annually.
Real User Monitoring DEM consumption example
Say, for example, that a user has been interacting with a web application or mobile app for a period of 4 continuous hours. From a license perspective, a session ends after 60 minutes of continuous interaction, after which a new session begins for the next 60 minutes. Therefore, a 4-hour interaction is the equivalent of 4 licensed sessions. Without Session Replay data, this session costs
4 * 0.25 = 1 DEM Unit. With Session Replay data, the session costs
4 * 1 = 4 DEM Units.
A browser monitor or browser clickpath “synthetic action” is an interaction with a synthetic browser that triggers a web request that includes a page load, navigation event, or action that triggers an XHR or Fetch request. Browser monitors perform a single synthetic interaction (for example, measuring the performance and availability of a single URL) and consume one synthetic action per execution. Clickpath monitors are sequences of pre-recorded synthetic actions. Clickpaths consume one action per each interaction that triggers a web request. Scroll downs, keystrokes, or clicks that don't trigger web requests aren't counted as actions.
An HTTP monitor consists of one or multiple HTTP(S) requests (for example, GET, POST, HEAD requests). Each request executed by an HTTP monitor equates to one synthetic request.
# Synthetic actions/requests consumed per monitor = (# Synthetic actions included in monitor) x (# Executions per hour) x (# Locations) x # Hours
XHR or Fetch requests that a synthetic browser makes as part of a user action (for example, requests triggered during a page load rather than directly by user input) don't count as separate user actions. Such XHR and Fetch calls are considered child requests of synthetic actions.
Synthetic actions/requests calculation example
For example, a recorded browser clickpath that navigates through 2 pages and clicks 1 button that triggers an XHR or Fetch request consumes 3 synthetic actions. If such a synthetic monitor runs every 15 minutes from 2 locations for 1 day, the browser clickpath will consume 576 synthetic actions per day.
3 (synthetic actions) x 4 (monitor executions per hour) x 2 (locations) x 24 (hours per day) = 576 (synthetic actions)
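The formula above translates directly into code; the clickpath example can be reproduced as:

```python
def synthetic_actions(actions_per_execution, executions_per_hour, locations, hours):
    """Actions consumed = actions per execution x executions/hour x locations x hours."""
    return actions_per_execution * executions_per_hour * locations * hours

# 3-action clickpath, every 15 minutes (4 runs/hour), from 2 locations, for a day:
print(synthetic_actions(3, 4, 2, 24))  # -> 576
```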
For more details, see Synthetic Monitoring.
If you've arranged for Digital Experience Monitoring overages (i.e., your account allows you to exceed the maximum limit of DEM Units), the units you consume as overage are counted just as with regular DEM Unit consumption; each additional overage session or synthetic test increases the amount of DEM Units consumed by your account. To add or remove overages from your account, contact Dynatrace Sales.
You can gain more information from a session or user action by configuring additional defined properties. We currently offer a free tier of 20 defined properties per application. As shown in the table, the DEM unit cost per session increases by 0.01 DEM units for each additional defined property.
For example, 100 sessions with 25 defined properties consume
100 * (25 - 20) * 0.01 = 5 DEM units
for the additional defined properties. The total DEM unit cost in this case is 30 DEM units:
25 DEM units (100 sessions x 0.25 DEM units) + 5 DEM units (additional defined properties) = 30 DEM units
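The session-splitting and defined-property rules can be combined into one hedged sketch. It assumes the property surcharge applies once per captured session; check your agreement for the authoritative rules.

```python
import math

FREE_PROPERTIES = 20       # free tier of defined properties per application
PROPERTY_COST = 0.01       # DEM units per additional defined property, per session
SESSION_COST = 0.25        # RUM session without Session Replay
REPLAY_SESSION_COST = 1.0  # RUM session captured with Session Replay

def dem_units(sessions, defined_properties=0, session_replay=False,
              duration_minutes=60):
    """DEM units for RUM sessions. Interactions longer than 60 minutes are
    split into one licensed session per started hour."""
    licensed = sessions * math.ceil(duration_minutes / 60)
    per_session = REPLAY_SESSION_COST if session_replay else SESSION_COST
    # Assumption: the property surcharge applies once per captured session.
    extra = max(defined_properties - FREE_PROPERTIES, 0) * PROPERTY_COST
    return licensed * per_session + sessions * extra

# 100 sessions with 25 defined properties -> 25 + 5 = 30 DEM units
print(dem_units(100, defined_properties=25))
# A 4-hour continuous interaction without Session Replay -> 1 DEM unit
print(dem_units(1, duration_minutes=240))
```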
Limited custom metric ingestion and analysis is provided with out-of-the-box Dynatrace technology support in the form of included metrics per host unit. To arrange for additional custom metric ingestion and analysis, contact Dynatrace Sales.
For full details on the setup and ingestion of custom metrics in Dynatrace, see Metric ingestion.
How custom metrics affect monitoring consumption
Custom metrics typically consume Davis data units (DDUs). However, custom metrics from OneAgent-monitored hosts are first deducted from your quota of included metrics per host unit, so they don't necessarily consume DDUs. For complete details, see Metric cost calculation (DDUs).
Dynatrace monitors serverless compute technologies through cloud platform provider integrations and OneAgent integrations.
Cloud services that are monitored by AWS CloudWatch and Azure Monitor integrations (including serverless functions and serverless containers) typically consume custom metrics. Limited custom metric ingestion and analysis is available out of the box. For details, see Custom metrics above.
AWS Lambda Serverless DDU pool cost calculation (DDUs)
For OneAgent AWS Lambda integrations, monitoring consumption is based on Davis data units. Dynatrace counts the total number of invocations (i.e., requests) of the monitored functions. For each invocation, 0.002 DDUs are deducted from your available quota.
For example, if you monitor 1 function with the OneAgent integration and that function is invoked 1 million times, DDU consumption will be calculated as follows:
1 AWS Lambda function x 1 million invocations x 0.002 DDU weight = 2,000 DDUs per month per function.
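The Lambda consumption calculation above is a straightforward product; a minimal sketch:

```python
DDU_PER_INVOCATION = 0.002  # DDUs deducted per monitored Lambda invocation

def lambda_ddus(functions, invocations_per_function):
    """DDUs consumed by OneAgent-traced AWS Lambda invocations."""
    return functions * invocations_per_function * DDU_PER_INVOCATION

# 1 function invoked 1 million times -> about 2,000 DDUs
print(lambda_ddus(1, 1_000_000))
```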
Note: In Full-Stack Monitoring mode, Dynatrace monitors AWS Lambda through both trace and CloudWatch integrations, which consume DDUs based on invocations and custom metrics, respectively.
For details on host unit calculation and monitoring consumption for serverless monitoring, see Application-only monitoring – including PaaS and some Serverless above.
The Azure Functions Consumption Plan is currently not supported.
Dynatrace provides OneAgent integrations for other serverless compute services, including container platforms such as AWS Fargate and Azure Container Instances, Kubernetes services (for example, Azure Kubernetes Service, Elastic Kubernetes Service), Elastic Compute Services (for example, Elastic Container Service, AWS Elastic Beanstalk), and Azure App Service. For these services, monitoring consumption is based on host units.
Dynatrace versions 1.207 and earlier
Log Monitoring consumption is based on anticipated GiB of annual average log storage size, which is calculated as the average annual daily ingestion of uncompressed log data multiplied by the number of days. Once this limit is reached, you need to contact Dynatrace Sales to arrange for additional capacity.
Annual average daily log storage = Actual annual storage / (# of days)
Log storage calculation example
Say, for example, that your Log Monitoring agreement is configured for 90 days and you've arranged for 450 GiB of annual daily average storage. The anticipated average daily ingestion of log data in this case would be 5 GiB.
450 (GiB; base quota of annual average storage) / 90 (days) = 5 (GiB; anticipated average daily ingestion)
Once annual ingestion reaches the equivalent of 1,825 GiB, the annual average storage size of 450 GiB is also reached.
5 (GiB; anticipated average daily ingestion) x 365 (days) = 1,825 (GiB; anticipated average annual ingestion)
Continuing with the example above, if after six months your actual log ingestion is only 912.5 GiB (50% of the anticipated 1,825 GiB), then you might decide to re-configure your Log Monitoring allotment down to 45 days while leaving the annual average storage capacity unchanged at 450 GiB. In this case, the anticipated average daily ingestion of log data for the subsequent six months would be 10 GiB.
450 (GiB; average annual capacity) / 45 (days) = 10 (GiB; anticipated average daily ingestion)
Once the annual equivalent of 2,737.5 GiB is ingested, the annual average storage size of 450 GiB is also reached.
(5 x 182.5) + (10 x 182.5) = 2,737.5
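The relationship between the annual average storage quota, the retention window, and daily ingestion from the example above can be sketched as:

```python
def average_daily_ingestion(annual_average_storage_gib, retention_days):
    """Daily ingestion rate that exactly fills the annual average storage quota."""
    return annual_average_storage_gib / retention_days

def annual_ingestion(periods):
    """Total GiB ingested over a list of (daily_rate_gib, number_of_days) periods."""
    return sum(rate * days for rate, days in periods)

print(average_daily_ingestion(450, 90))  # -> 5.0 GiB/day at 90-day retention
print(average_daily_ingestion(450, 45))  # -> 10.0 GiB/day after moving to 45 days
# Half a year at 5 GiB/day plus half a year at 10 GiB/day:
print(annual_ingestion([(5, 182.5), (10, 182.5)]))  # -> 2737.5 GiB
```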
If you've arranged for annual log storage capacity, your usage will reset annually (Dynatrace SaaS only).
Log Monitoring overages (optional) - SaaS deployments
If you've arranged for Log Monitoring overages so that you can exceed your agreed upon maximum limit of anticipated annual average storage size, your overages will be calculated based on the difference between your storage size limit and your actual storage size.
For example, if you have an agreed upon storage limit of 450 GiB and your actual consumption is 500 GiB, you'll have 50 GiB in overages.
500 (GiB; actual average storage size) - 450 (GiB; average storage size limit) = 50 (GiB; overages)
To add or remove overages from your account, contact Dynatrace Sales.
Log Monitoring consumption is based on anticipated GiB per day of annual average log ingestion, which is calculated as the average annual daily ingestion of uncompressed log data. Once this limit is reached, you need to contact Dynatrace Sales to arrange for additional capacity.
For example, if during an annual period the total log data sent to your Dynatrace Managed Cluster is 730 GiB, then the "per day" rate of annual average ingestion would be 2 GiB.
730 (GiB; actual annual ingestion) / 365 (days) = 2 (GiB; annual average daily ingestion)
If you've arranged for annual log storage capacity, your usage will reset annually.
Log Monitoring overages (optional) - Dynatrace Managed deployments
If you've arranged for Log Monitoring overages so that you can exceed your agreed upon maximum limit of daily log storage, your overages will be calculated based on the difference between your daily storage limit and your actual daily storage size.
For example, if you have an agreed upon storage limit of 10 GiB/day and your actual consumption is 12 GiB/day, you'll have 2 GiB/day in overages.
12 (GiB; actual daily log storage) - 10 (GiB; daily log storage limit) = 2 (GiB; daily overage)
To add or remove overages from your account, contact Dynatrace Sales.
Each Dynatrace environment (SaaS or Managed) comes with 5 GB of log data storage per year, at no cost to you.
Dynatrace versions 1.208+
To understand how Dynatrace calculates your consumption of Davis data units for the purposes of Log Monitoring, see Calculate Log Monitoring consumption.
Mainframe monitoring on IBM z/OS
Monitoring of OneAgent code modules that run on IBM z/OS (CICS, IMS, and Java) is based on the consumption of Million Service Units (MSUs). Therefore, mainframe monitoring doesn't contribute to the consumption of host units or host unit hours.
An MSU is a measurement of the amount of processing workload that an IBM mainframe performs per hour. The amount of consumed MSUs is calculated based on CPU usage, as derived from IBM System Management Facility (SMF) data per monitored Logical Partitions (LPARs), products, or regions.
Premium High Availability
The Premium High Availability deployment model is licensed separately based only on the concurrent host units limit. Premium High Availability doesn't contribute to the consumption of concurrent host units or host unit hours.