
Data retention periods

Dynatrace stores and retains different types of monitored data from your environments. The monitoring data is stored on the Dynatrace cluster. The following table shows the general retention periods for service data (distributed traces), Real User Monitoring (user actions and user sessions), synthetic monitors, Log Management and Analytics, and metric time series data.

| Data type | Dynatrace SaaS | Dynatrace Managed |
|---|---|---|
| Distributed traces, code-level insights, and errors | 10 days | Configurable, up to 365 days |
| Services: Requests and request attributes | 35 days | Configurable, up to 365 days |
| RUM: User action data | 35 days | Configurable, up to 35 days |
| RUM: User sessions | 35 days | 35 days |
| RUM: Mobile crashes | 35 days | 35 days |
| RUM: Session Replay | 35 days | Configurable, up to 35 days |
| Synthetic | 35 days | Configurable, up to 35 days |
| Log Management and Analytics | Configurable, up to 3 years | Not applicable |
| Log Monitoring Classic | 35 days | 35 days |
| Metrics | 5 years | 5 years |
| OneAgent diagnostics (support archives and analysis results) | 30 days | Configurable, default 30 days |
| Davis problems and events | 14 months | 14 months |
| OpenTelemetry ingested traces | 10 days | Configurable, up to 365 days |

Distributed traces, code-level insights, and errors

Dynatrace stores the complete details of every transaction (distributed traces) for 10 days. This enables you to analyze individual transactions and get all details, including RUM waterfall analysis and JavaScript errors. For trial users, an additional storage-size limit applies, which might lead to shorter retention times.

A short-timeframe analysis accesses code-level data that is available for 10 days. After 10 days, session data is optimized for aggregated views. Non-aggregated and aggregated code-level data produce comparable results for longer timeframes, while differences may be expected for shorter timeframes.

Services: Requests and request attributes

Short-term storage of the data related to service metrics used in multidimensional analysis and request charting. This data is available for 35 days with the following interval granularity levels:

| Timeframe | Interval granularity |
|---|---|
| Less than 20 minutes | 10 seconds |
| 20–40 minutes | 20 seconds |
| 40–60 minutes | 30 seconds |
| More than 1 hour | 1 minute |
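As an illustration only (this helper is not part of Dynatrace), the timeframe-to-granularity mapping in the table above can be sketched as a simple lookup:

```python
from datetime import timedelta

def request_data_granularity(timeframe: timedelta) -> timedelta:
    """Return the data-point interval for a given analysis timeframe,
    per the request/request-attribute granularity table."""
    if timeframe < timedelta(minutes=20):
        return timedelta(seconds=10)
    if timeframe <= timedelta(minutes=40):
        return timedelta(seconds=20)
    if timeframe <= timedelta(hours=1):
        return timedelta(seconds=30)
    return timedelta(minutes=1)
```

For example, a 30-minute analysis window is served with 20-second data points, while anything beyond one hour uses 1-minute data points.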

RUM: User action data

Aggregated user action metrics, which are used in tables like Top user actions and Top JavaScript errors, are available for 35 days. After 10 days, user action data is optimized for aggregated views, and some individual user actions become unavailable for individual analysis. However, the sample set remains large enough for statistically correct aggregations.

RUM: User sessions

Includes Session Replay data. All user session data is stored for 35 days. Note that waterfall analysis and JavaScript error data are stored with Distributed traces, code-level insights, and errors.

RUM: Mobile crashes

Includes all crash data and stack traces of mobile and custom applications. The data is stored for 35 days.

RUM: Session Replay

The minimum size of the required Session Replay storage volume is entirely load-dependent; no maximum size is required. In SaaS deployments, a dedicated disk is used for Session Replay data.

In Dynatrace Managed deployments, the Session Replay data storage directory is a dedicated file store that's used exclusively for Session Replay data. For more information about storage size recommendations, see Configure the secondary disk.

Log Management and Analytics

Log Management and Analytics enables you to ingest, process, retain, and analyze log data stored in the Grail data lakehouse in SaaS environments. With Grail storage, you don't have to worry about managing data storage performance, availability, or free space. Just select the desired retention period for your logs in the bucket configuration.
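As a hedged sketch of that bucket configuration: a custom Grail log bucket with its own retention period can be created through the platform storage-management API. The endpoint path, payload fields, and authentication shown here are assumptions based on the Grail bucket-definitions API; verify them against your environment's API reference before use.

```python
import json
from urllib import request

def build_bucket_definition(name: str, retention_days: int) -> dict:
    """Assemble a log-bucket definition. Grail log buckets may retain
    data for up to 3 years (~1095 days), per the retention table above."""
    if not 1 <= retention_days <= 3 * 365:
        raise ValueError("retention must be between 1 day and 3 years")
    return {"bucketName": name, "table": "logs", "retentionDays": retention_days}

def create_bucket(env_url: str, token: str, definition: dict) -> None:
    # Assumed endpoint path and token scheme; check your API reference.
    req = request.Request(
        f"{env_url}/platform/storage/management/v1/bucket-definitions",
        data=json.dumps(definition).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    request.urlopen(req)  # raises on HTTP errors
```

For example, `build_bucket_definition("audit_logs", 365)` describes a bucket that keeps its logs for one year.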

Log Monitoring Classic

Log Monitoring Classic enables you to store all logs centrally within external storage. This makes log data available independently of the log files themselves.

For Dynatrace SaaS customers, log files are stored in Amazon Elastic File System in the zone where your Dynatrace environment resides. You don't have to worry about storage performance, availability, or free space. Disk storage costs are included in your Log Monitoring Classic subscription.

On Dynatrace Managed, log files are stored centrally in the Elasticsearch storage of your cluster. To configure log storage, see Log storage configuration.

Log Monitoring v1

To store log files centrally on your Dynatrace Managed cluster you must provide a common Network File System (NFS) mount point (path) that is identical and available for all cluster nodes. With this approach, it's your responsibility to ensure appropriate levels of performance, availability, and free space on the mounted NFS volume.
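Since keeping the NFS volume healthy is your responsibility in this setup, a small check run on each cluster node can confirm the shared path is mounted, writable, and has free space. This script is purely illustrative (not a Dynatrace tool); the path and free-space threshold are example values.

```python
import os
import shutil
import uuid

def check_log_store(path: str = "/mnt/dynatrace-logs", min_free_gb: int = 50) -> None:
    """Verify the shared NFS log-store path on this node: it must exist,
    accept writes, and have at least min_free_gb GiB free."""
    if not os.path.isdir(path):
        raise RuntimeError(f"{path} is not mounted on this node")
    probe = os.path.join(path, f".probe-{uuid.uuid4().hex}")
    with open(probe, "w") as f:  # confirm write access
        f.write("ok")
    os.remove(probe)
    free_gb = shutil.disk_usage(path).free / 1024**3
    if free_gb < min_free_gb:
        raise RuntimeError(f"only {free_gb:.1f} GiB free on {path}")
```

Running the same check on every node also verifies the "identical path on all cluster nodes" requirement.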

Memory dumps

Memory dumps are immediately deleted from the disk once they're uploaded to ActiveGate. When an upload isn't possible, memory dumps up to 20 GB are stored on the disk for up to 2 hours.

Metrics

The following interval granularity levels are available for dashboarding and API access:

| Timeframe | Interval granularity |
|---|---|
| 0–14 days | 1 minute |
| 14–28 days | 5 minutes |
| 28–400 days | 1 hour |
| 400 days–5 years | 1 day |

To provide accurate quantile calculations for time-series metrics, Dynatrace uses the P² algorithm to estimate quantiles dynamically. This algorithm is known to yield good results, and it works well with values in the long tails of value distributions. However, the aggregation is neither associative ((a + b) + c == a + (b + c)) nor commutative (a + b + c == c + b + a). For some metrics, such as response times, this can lead to different quantile values each time the algorithm runs, or when the data is aggregated in different ways, for example, when one metric is split by URL and another by browser.
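A toy illustration of why split-dependent results are expected (this demonstrates the general order-dependence of quantile aggregation, not the P² algorithm itself): combining per-group medians yields different values depending on how the same data is split, and neither combination equals the overall median. The response times below are made up.

```python
from statistics import median

# The same ten response times, split two different ways.
times = [12, 15, 18, 20, 100, 110, 21, 16, 300, 14]
by_url = {"/a": times[:5], "/b": times[5:]}
by_browser = {"chrome": times[0::2], "firefox": times[1::2]}

overall = median(times)                                   # 19
via_url = median(median(v) for v in by_url.values())      # 19.5
via_browser = median(median(v) for v in by_browser.values())  # 18.5
# A median of medians is not the median: all three values differ.
```

The same effect applies to any non-mergeable quantile estimate, which is why a metric split by URL and the same metric split by browser can report slightly different percentile values.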

OneAgent and ActiveGate diagnostics

OneAgent diagnostics and ActiveGate diagnostics are optional features that enable you to collect and analyze support archives for anomalies.

Support archives are created by Dynatrace OneAgent or Dynatrace ActiveGate and stored in Cassandra, where they are automatically deleted after 30 days. When you allow Dynatrace to analyze an issue, an additional copy of the support archive is stored in the configured AWS S3 bucket. Results of the issue analysis and the support archive are also automatically deleted from the AWS S3 bucket after 30 days. Dynatrace OneAgent and Dynatrace ActiveGate do not keep copies of created support archives.
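As a hedged sketch, the 30-day cleanup in the S3 bucket described above has the shape of a standard S3 lifecycle expiration rule. The rule ID and key prefix here are placeholders for illustration; this is not Dynatrace's actual configuration.

```python
# Shape of an S3 lifecycle rule that expires objects after 30 days.
lifecycle_rule = {
    "ID": "expire-support-archives",            # placeholder rule ID
    "Status": "Enabled",
    "Filter": {"Prefix": "support-archives/"},  # placeholder key prefix
    "Expiration": {"Days": 30},                 # matches the 30-day retention
}
# With boto3, such a rule would be applied via
# put_bucket_lifecycle_configuration(Bucket=..., LifecycleConfiguration={"Rules": [lifecycle_rule]}).
```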

You can delete OneAgent or ActiveGate diagnostics issues at any time. If you delete an issue, the related support archive and analysis report are deleted from Cassandra and the AWS S3 bucket immediately. The analysis result in Dynatrace Health Control is deleted after 30 days.