Data retention periods

Dynatrace stores and retains different types of monitored data from your environments. The monitoring data is stored on the Dynatrace Server. The following table shows the general retention periods for service data (PurePath), Real User Monitoring (user actions and user sessions), synthetic monitors, Log Monitoring, and metric timeseries data.

Data retention by type


Data type | Dynatrace SaaS | Dynatrace Managed | Storage
--- | --- | --- | ---
Services: Distributed trace and code insights | 10 days | Configurable, with a maximum of 365 days | Proprietary; shared with non-aggregated RUM data
Services: Requests and request attributes | 35 days | Configurable, with a maximum of 365 days | Proprietary
RUM: Non-aggregated user action data (waterfall analysis, JavaScript errors, and crashes) | 10 days | Configurable, with a maximum of 35 days | Shared with distributed trace and code service insights
RUM: Aggregated user action data | 35 days | Configurable, with a maximum of 35 days | 
RUM: User sessions | 35 days | 35 days | 
RUM: Session Replay | 35 days | 35 days | 
Synthetic | 35 days | Configurable, with a maximum of 35 days | 
Log Monitoring | Configurable from 5 to 90 days; specific files can be included/excluded | Configurable from 5 to 90 days; specific files can be included/excluded | File-based NFS storage; storage requirements and costs
Timeseries metrics (key user actions and requests) | Unlimited | Unlimited | Cassandra

Services: Distributed trace and code insights

Includes PurePath data.

Services: Requests and request attributes

Chart data is stored at 10-second granularity for both key and non-key requests.

RUM: Non-aggregated user action data

Dynatrace stores the full details of every user action for 10 days. During this period, you can analyze individual user actions with all details, including waterfall analysis, JavaScript errors, and mobile crashes.

RUM: Aggregated user action data

Aggregated user action metrics (used in tables like Top user actions, Top JavaScript errors, and Top mobile crashes) are available for 35 days. After 10 days, user action data is optimized for aggregated views and some individual user actions become unavailable for individual analysis. However, the sample set remains large enough for statistically correct aggregations.

RUM: User sessions

Includes Session Replay data. All user session data is stored for 35 days. Note that waterfall analysis, JavaScript error, and crash data are stored with RUM non-aggregated user action data.

RUM: Session Replay

The minimum required size of the Session Replay storage volume is entirely load-dependent; no maximum size is required. In SaaS deployments, a dedicated disk is used for Session Replay data.

In Managed deployments, the Session Replay data storage directory is a dedicated file store that's used exclusively for Session Replay data.

Log Monitoring

Log Monitoring enables you to store all logs centrally within external storage. This makes log data available independently of the log files themselves.

For Dynatrace SaaS customers, log files are stored in Amazon Elastic File System in the zone where your Dynatrace environment resides. You don’t have to worry about storage performance, availability, or free space. Disk storage costs are included in your Log Monitoring subscription.

To store log files centrally on your Dynatrace Managed cluster, you must provide a common Network File System (NFS) mount point (path) that is identical and available from all cluster nodes. With this approach, it's your responsibility to ensure appropriate levels of performance, availability, and free space on the mounted NFS volume.
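As a rough sketch of that setup, an identical NFS mount could be created on each cluster node along the following lines. The server name, export path, and mount point below are hypothetical placeholders, not values prescribed by Dynatrace; consult your Dynatrace Managed configuration for the actual path to use.

```shell
# Hypothetical example: mount a shared NFS export at the same path on every node.
sudo mkdir -p /mnt/dynatrace-logs
sudo mount -t nfs nfs-server.example.com:/exports/dynatrace-logs /mnt/dynatrace-logs

# To persist the mount across reboots, add an entry like this to /etc/fstab:
# nfs-server.example.com:/exports/dynatrace-logs  /mnt/dynatrace-logs  nfs  defaults  0  0
```

Because the path must resolve identically on all cluster nodes, the same mount command (or fstab entry) would be applied on each node.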

For full details, see Log Monitoring.

Timeseries metrics

  • 0-14 days: 1-minute interval granularity available for dashboarding and API access.
  • 14-28 days: 5-minute interval granularity available for dashboarding and API access.
  • 28-400 days: 1-hour interval granularity available for dashboarding and API access.
  • 400+ days: 1-day interval granularity available for dashboarding and API access.
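The tiers above can be sketched as a small lookup helper. This is an illustrative function, not part of any Dynatrace API, and the half-open interval boundaries (a point exactly 14 days old falls into the 5-minute tier, and so on) are an assumption, since the list itself overlaps at the tier edges.

```python
def metric_granularity(age_days: int) -> str:
    """Map the age of a timeseries data point to the finest stored
    resolution, per the retention tiers listed above.
    Boundary handling (half-open intervals) is an assumption."""
    if age_days < 14:
        return "1 minute"
    if age_days < 28:
        return "5 minutes"
    if age_days < 400:
        return "1 hour"
    return "1 day"
```

For example, a data point from three weeks ago would chart at 5-minute resolution, while one from last year would chart at 1-day resolution.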

Note: To provide accurate calculations for timeseries metrics, Dynatrace uses the P2 algorithm to calculate quantiles dynamically. This algorithm is known to yield good results and works well with values in the long tails of value distributions. However, the aggregation is neither associative ((a + b) + c need not equal a + (b + c)) nor commutative (a + b + c need not equal c + b + a). This leads to slightly different response time estimates each time the algorithm runs, so small differences in the response time values you see in your environment (typically < 1%) are to be expected.
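To illustrate the order dependence described above, the following is a minimal sketch of the P2 (P-squared) streaming quantile estimator of Jain and Chlamtac. This is an independent illustration, not Dynatrace's implementation: feeding the same values in two different orders generally yields slightly different, but both accurate, estimates.

```python
import random
from itertools import islice

def p2_quantile(stream, p):
    """Estimate the p-quantile of a stream with the P2 algorithm,
    using five markers instead of storing the full data set."""
    it = iter(stream)
    q = sorted(islice(it, 5))                 # marker heights (first 5 values)
    n = [0, 1, 2, 3, 4]                       # actual marker positions
    d_pos = [0.0, 2 * p, 4 * p, 2 + 2 * p, 4] # desired marker positions
    inc = [0.0, p / 2, p, (1 + p) / 2, 1.0]   # per-observation position increments

    for x in it:
        # Locate the cell containing x, updating the extremes if needed.
        if x < q[0]:
            q[0] = x
            k = 0
        elif x >= q[4]:
            q[4] = x
            k = 3
        else:
            k = next(i for i in range(4) if q[i] <= x < q[i + 1])
        for i in range(k + 1, 5):
            n[i] += 1
        for i in range(5):
            d_pos[i] += inc[i]
        # Nudge interior markers that drifted off their desired positions.
        for i in (1, 2, 3):
            d = d_pos[i] - n[i]
            if (d >= 1 and n[i + 1] - n[i] > 1) or (d <= -1 and n[i - 1] - n[i] < -1):
                d = 1 if d > 0 else -1
                # Piecewise-parabolic height update (the "P2" in the name).
                qp = q[i] + d / (n[i + 1] - n[i - 1]) * (
                    (n[i] - n[i - 1] + d) * (q[i + 1] - q[i]) / (n[i + 1] - n[i])
                    + (n[i + 1] - n[i] - d) * (q[i] - q[i - 1]) / (n[i] - n[i - 1])
                )
                if q[i - 1] < qp < q[i + 1]:
                    q[i] = qp
                else:  # fall back to linear interpolation
                    q[i] = q[i] + d * (q[i + d] - q[i]) / (n[i + d] - n[i])
                n[i] += d
    return q[2]

random.seed(7)
data = [random.random() for _ in range(10000)]
a = p2_quantile(data, 0.5)        # median estimate, original order
b = p2_quantile(data[::-1], 0.5)  # median estimate, reversed order
# Both estimates land close to the true median (~0.5), but they are
# generally not bit-identical, which is why repeated aggregations can
# show slightly different response time values.
```

Because only five markers are kept, the estimate depends on the order in which values arrive, mirroring the small run-to-run differences described above.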