Platform extensions (DPS)
Learn how consumption of Dynatrace platform extensions is calculated using the Dynatrace Platform Subscription model.
Custom Metrics Classic
You can extend the value of Dynatrace by defining, enabling, or ingesting custom metrics. Dynatrace lets you integrate third-party data sources, ingest custom metrics via API, use extensions and cloud integrations, and more.
Here is a non-exhaustive list of custom metric types:
- Metrics ingested from Amazon CloudWatch, Azure Monitor, or Google Cloud Operations for cloud service monitoring
- Metrics ingested from remote extensions that monitor databases, network devices, queues, and more
- All API-ingested metrics
- Calculated service metrics, custom DEM metrics, and log metrics
The unit of measure for calculating custom metrics is a metric data point. A metric data point is a single measurement (value) of a custom metric. Each value reported for a custom metric consumes one metric data point.
To calculate your environment's custom metric consumption:
- Go to Dynatrace Hub and find the cloud service or extension you want to use (for example, Amazon S3, Azure Storage Account, Oracle Database, and F5).
- Determine how many custom metrics Dynatrace ingests for the service or extension.
- Determine the number of metric data points per custom metric.
- Use the example below as a guide.
If you have a single custom metric that is written once per minute, you will consume 525.6k metric data points annually:
1 metric data point x 60 min x 24 h x 365 days = 525.6k metric data points/year
Note that a single custom metric may have multiple dimensions. For example, if you have the same custom metric for 2 instances of your cloud service, you will consume 2 metric data points each time the metric is written:
cloud.aws.dynamo.requests.latency, dt.entity.dynamo_db_table=DYNAMO_DB_TABLE-41043ED33F90F271 21.78
cloud.aws.dynamo.requests.latency, dt.entity.dynamo_db_table=DYNAMO_DB_TABLE-707BF9DD5C975159 4.47
2 instances x 1 metric data point x 60 min x 24 h x 365 days = 1,051.2k metric data points/year
Metric data points are billed by the number of metric data points, not by the number of dimensions. If dimensions are added but the number of metric data points remains the same, billable metric data point usage does not change:
cloud.aws.dynamo.requests.latency, dt.entity.dynamo_db_table=DYNAMO_DB_TABLE-41043ED33F90F271, Operation='DeleteItem' 21.78
cloud.aws.dynamo.requests.latency, dt.entity.dynamo_db_table=DYNAMO_DB_TABLE-707BF9DD5C975159, Operation='DeleteItem' 4.47
Therefore, in this case, the same number of metric data points is consumed as shown in the calculation above.
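The following sketch illustrates the calculations above. It is not an official Dynatrace tool; it simply assumes, per the examples, that one data point is consumed per series (unique dimension-value combination) per write:

```python
# Sketch: estimate annual metric data point consumption.
# Assumption (from the examples above): one data point is consumed per
# series (unique dimension-value combination) each time the metric is
# written, and the metric is written once per minute.

WRITES_PER_YEAR = 60 * 24 * 365  # once per minute

def annual_data_points(series_count: int, writes_per_year: int = WRITES_PER_YEAR) -> int:
    return series_count * writes_per_year

# One custom metric, one series: 525,600 data points/year
print(annual_data_points(1))  # 525600

# Same metric reported for 2 DynamoDB tables (2 series): 1,051,200/year
print(annual_data_points(2))  # 1051200

# Adding a dimension key (e.g. Operation='DeleteItem') to the same
# 2 series does not add data points, so consumption is unchanged.
print(annual_data_points(2))  # 1051200
```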
Log Monitoring Classic
The unit of measure for Log Monitoring Classic is one log record. A log record is recognized in one of the following ways:
- Timestamp
- JSON Object
If you use Logs Powered by Grail, see Log Management and Analytics DPS.
Timestamps
Each timestamp is counted as a new log record.
For example, in the following log data (consumed via log file or generic ingestion), Dynatrace counts nine log records based on timestamp occurrence:
- Oct 18 05:56:11 INFO ip-10-176-34-132 DHCPREQUEST on eth0 to 10.176.34.1
- Oct 18 05:56:12 INFO ip-10-176-34-132 DHCPACK from 10.176.34.1
- Oct 18 05:56:13 INFO ip-10-176-34-132 bound to 10.176.34.132 -- renewal in 1551s
- Oct 18 05:56:13 INFO ip-10-176-34-132 [get_meta] Getting token for IMDSv
- Oct 18 05:56:16 INFO ip-10-176-34-132 [get_meta] Trying to get http://169.23.2.3
- Oct 18 05:56:18 INFO ip-10-176-34-132 [rewrite_aliases] Rewriting aliases
- Oct 18 06:22:06 INFO ip-10-176-34-132 DHCPREQUEST on eth0 to 10.176.34.1 port 67
- Oct 18 06:22:07 INFO ip-10-176-34-132 DHCPACK from 10.176.34.1 (xid=0x3a182c8c)
- Oct 18 06:22:10 INFO ip-10-176-34-132 bound to 10.176.34.132 -- renewal in 1364s
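To make the counting rule concrete, here is a minimal sketch of timestamp-based record counting. The timestamp pattern is an assumption matching the sample lines above, not Dynatrace's actual parser:

```python
import re

# Assumption: each record starts with a syslog-style "Mon DD HH:MM:SS" timestamp.
TIMESTAMP = re.compile(r"^[A-Z][a-z]{2} +\d{1,2} \d{2}:\d{2}:\d{2}\b")

def count_log_records(lines):
    """Count records by timestamp occurrence, as in the example above."""
    return sum(1 for line in lines if TIMESTAMP.match(line))

sample = [
    "Oct 18 05:56:11 INFO ip-10-176-34-132 DHCPREQUEST on eth0 to 10.176.34.1",
    "Oct 18 05:56:12 INFO ip-10-176-34-132 DHCPACK from 10.176.34.1",
]
print(count_log_records(sample))  # 2
```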
JSON Objects
Each JSON object is counted as a log record. A JSON file can contain multiple objects, each of which counts as a log record. For example, in the following log data, Dynatrace counts three log records based on JSON object occurrence:
{
"timestamp": "2021-07-29T10:54:40.962165022Z",
"level": "error",
"log.source": "/var/log/syslog",
"application.id": "PaymentService-Prod",
"content": "DHCPREQUEST on eth0 to 10.176.34.1"
},
{
"log.source": "/var/log/syslog",
"content": "[get\_meta] Getting token for IMDSv"
},
{
"content": "DHCPACK from 10.176.34.1 (xid=0x3a182c8c)"
}
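A comparable sketch for JSON-based counting, again as an illustration only, assuming the payload is a stream of top-level JSON objects like the sample above:

```python
import json

def count_json_records(payload: str) -> int:
    """Count top-level JSON objects in a stream; each object is one log record."""
    decoder = json.JSONDecoder()
    count, pos = 0, 0
    while pos < len(payload):
        # Skip whitespace and the commas separating objects.
        while pos < len(payload) and payload[pos] in " \t\r\n,":
            pos += 1
        if pos >= len(payload):
            break
        _, pos = decoder.raw_decode(payload, pos)
        count += 1
    return count

payload = '{"content": "DHCPACK"}, {"content": "[get_meta] Getting token"}'
print(count_json_records(payload))  # 2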
Custom Traces Classic
You can ingest traces into Dynatrace using OpenTelemetry exporters for applications running on hosts that don't have OneAgent installed. These distributed traces are sent via the Trace Ingest API.
The unit of measure for Custom Traces Classic is an ingested span. A span is a single operation within a distributed trace. To calculate the total consumption, multiply the number of ingested spans by the price per span.
Traces, including OpenTelemetry spans captured by OneAgent code modules or sent via the OneAgent local Trace API, are included with Full-Stack Monitoring, and therefore are not consumed as Custom Traces Classic.
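As a hedged illustration, spans from such hosts can be exported with the OpenTelemetry Python SDK via OTLP/HTTP. The endpoint path and token below are placeholders; check your environment's Trace Ingest API details:

```python
# Sketch: export spans via OTLP/HTTP using the OpenTelemetry Python SDK.
# The endpoint URL and API token are placeholders, not real values.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

exporter = OTLPSpanExporter(
    endpoint="https://<your-environment>.live.dynatrace.com/api/v2/otlp/v1/traces",
    headers={"Authorization": "Api-Token <your-token>"},
)
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("example")
with tracer.start_as_current_span("process-order"):
    pass  # each span exported this way counts as one ingested span
```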
Custom Events Classic
The unit of measure for calculating your environment's consumption of custom events is the custom event. While there are no additional costs or licensing involved in the default monitoring and reporting of built-in event types via OneAgent or cloud integrations, you can configure custom events and/or event-ingestion channels. Such event-related customizations result in additional consumption because they require significantly more processing and analytical power than the built-in event ingestion via OneAgent or cloud integrations.
Custom events that are created, ingested, or subscribed to in an environment include:
- Any custom event sent to Dynatrace using the Events API v2 (see the sketch after this list)
- Any custom event (such as a Kubernetes event) created from log messages by a log event extraction rule
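A minimal sketch of sending a custom event through the Events API v2 follows. The environment URL, token, and entity selector are placeholders, and the event fields shown are only an example payload:

```python
# Sketch: send a custom event via the Dynatrace Events API v2.
# The environment URL, token, and entity selector are placeholders.
import requests

resp = requests.post(
    "https://<your-environment>.live.dynatrace.com/api/v2/events/ingest",
    headers={"Authorization": "Api-Token <your-token>"},
    json={
        "eventType": "CUSTOM_INFO",
        "title": "Deployment finished",
        "entitySelector": 'type(HOST),entityName("my-host")',
    },
    timeout=10,
)
resp.raise_for_status()  # each accepted event consumes one custom event
```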
Serverless Functions Classic
Dynatrace enables end-to-end observability of serverless cloud functions based on monitoring data coming from traces, metrics, and logs.
Tracing of serverless functions, such as AWS Lambda, Azure Functions, and Google Functions operating on a consumption model, is based on the monitored function's total number of monitored invocations. The term "function invocations" is equivalent to "function requests" or "function executions."
Cloud functions monitored with metrics through cloud vendor integrations, such as Amazon CloudWatch, Azure Monitor, or Google Cloud Operations, consume custom metrics within Dynatrace. For details, see Custom Metrics Classic above.
Dynatrace also allows you to ingest logs from your serverless cloud functions. When using Dynatrace with Grail, serverless function consumption works as described in Log Management and Analytics. Using Dynatrace without Grail results in consumption via Log Monitoring Classic. With Dynatrace Managed, the Log Monitoring Classic consumption model is applied.
AWS Lambda tracing
For AWS Lambda tracing integration, monitoring consumption is based on the monitored functions' total number of monitored invocations (for example, requests).
Assuming an average of 1,000 invocations per Lambda function per month, monitoring 100 Lambda functions would result in a total of 100,000 invocations per month. Each invocation results in the consumption of one invocation from your DPS budget as per your rate card.
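A minimal sketch of this calculation, using the assumed averages from the example (the same arithmetic applies to the Azure and Google examples below):

```python
# Sketch: monthly invocation consumption for consumption-plan functions.
# The counts below are the assumed averages from the example above.
functions = 100
invocations_per_function = 1_000

monthly_invocations = functions * invocations_per_function
print(monthly_invocations)  # 100000 invocations deducted from the DPS budget
```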
Azure Function tracing
Azure Functions provide many different hosting options with various tracing integration possibilities. Tracing Azure Functions on the App Service (dedicated) plan is equivalent to Full-Stack Monitoring and consumes GiB-hours (depending on the memory size and the duration for which the App Service is monitored).
For tracing Azure Functions on the Azure Consumption plan, monitoring consumption is based on the monitored functions' total number of monitored invocations (for example, requests).
Assuming an average of 1,000 invocations per Azure function per month, monitoring 100 Azure functions would result in a total of 100,000 invocations per month. Each invocation is deducted from your available Dynatrace Platform Subscription budget as per your rate card.
Google Functions tracing
For Google Functions tracing integration, monitoring consumption is based on the monitored functions' total number of monitored invocations (for example, requests).
Assuming an average of 1,000 invocations per Google function per month, monitoring 100 Google functions would result in a total of 100,000 invocations per month. Each invocation is deducted from your available Dynatrace Platform Subscription budget as per your rate card.
When a Serverless Functions platform host is monitored with OneAgent (consuming GiB-hours), all monitored function invocations are part of the Full-Stack monitoring package and therefore don't result in additional consumption.