Dynatrace stores and retains different types of monitored data from your environments. The following table shows the general retention periods for service data (PurePath), Real User Monitoring (user actions and user sessions), synthetic monitors, Log Analytics, and metric timeseries data.
Data retention by type
| Data type | Dynatrace SaaS | Dynatrace Managed | Storage |
|---|---|---|---|
| Services: Distributed trace and code insights | | Configurable | Proprietary; shared with non-aggregated RUM data |
| Services: Requests and request attributes | | Configurable | Shared with distributed trace and code service insights |
| RUM: Aggregated user action data | | | |
| RUM: User sessions | | | |
| RUM: Session Replay | | | |
| Log Analytics | Configurable from 5 to 90 days; specific files can be included/excluded | Configurable from 5 to 90 days; specific files can be included/excluded | File-based NFS storage; storage requirements and costs |
| Timeseries metrics (key user actions and requests) | Unlimited | Unlimited | Cassandra |
Services: Distributed trace and code insights
Includes PurePath data.
Services: Requests and request attributes
Charts are available at 10-second granularity for both key and non-key requests.
RUM: Non-aggregated user action data
RUM: Aggregated user action data
RUM: User sessions
RUM: Session Replay
The minimum size of the required Session Replay storage volume is entirely load-dependent; no maximum size is required. In SaaS deployments, Session Replay data is stored on a dedicated disk.
In Managed deployments, the Session Replay data storage directory is a dedicated file store that's used exclusively for Session Replay data.
Log Analytics enables you to store all logs centrally in external storage, which makes log data available independently of the log files themselves.
For Dynatrace SaaS customers, log files are stored in Amazon Elastic File System in the zone where your Dynatrace environment resides. You don’t have to worry about storage performance, availability, or free space. Disk storage costs are included in your Log Analytics subscription.
To store log files centrally on your Dynatrace Managed cluster, you must provide a common Network File System (NFS) mount point (path) that is identical and available from all cluster nodes. With this approach, it's your responsibility to ensure appropriate levels of performance, availability, and free space on the mounted NFS volume.
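As an illustration of such a mount point, the commands below show one way to mount a shared NFS export at an identical path on each cluster node. The server name, export path, and mount point are placeholders for this sketch, not Dynatrace defaults; substitute the values appropriate for your environment.

```shell
# Placeholder values: substitute your own NFS server, export, and path.
# Run the same commands on every cluster node so the path is identical everywhere.
sudo mkdir -p /var/opt/dynatrace-logs
sudo mount -t nfs nfs-server.example.com:/exports/dynatrace-logs /var/opt/dynatrace-logs

# To persist the mount across reboots, add an equivalent /etc/fstab entry:
# nfs-server.example.com:/exports/dynatrace-logs  /var/opt/dynatrace-logs  nfs  defaults  0  0
```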
For full details, see Log Analytics.
0-14 days: 1-minute interval granularity available for dashboarding and API access.
14-28 days: 5-minute interval granularity available for dashboarding and API access.
28-400 days: 1-hour interval granularity available for dashboarding and API access.
400+ days: 1-day interval granularity available for dashboarding and API access.
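The tier boundaries above can be encoded in a small lookup helper. The function below is a hypothetical illustration of the mapping, not part of any Dynatrace API:

```python
def chart_granularity(age_days: float) -> str:
    """Return the finest interval granularity available for metric data
    of a given age, per the retention tiers listed above.
    (Hypothetical helper for illustration only, not a Dynatrace API.)"""
    if age_days < 0:
        raise ValueError("age_days must be non-negative")
    if age_days <= 14:
        return "1 minute"
    if age_days <= 28:
        return "5 minutes"
    if age_days <= 400:
        return "1 hour"
    return "1 day"

print(chart_granularity(7))    # 1 minute
print(chart_granularity(100))  # 1 hour
```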
Note: To provide accurate calculations for timeseries metrics, Dynatrace uses the P2 algorithm to calculate quantiles dynamically. This algorithm is known to yield good results and works well with values in the long tails of value distributions. However, the aggregation is neither associative (i.e., (a + b) + c == a + (b + c) does not hold) nor commutative (i.e., a + b + c == c + b + a does not hold), so the algorithm yields slightly different response time estimates each time it runs. Small differences in the response time values you see in your environment (typically < 1%) are therefore to be expected.
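To make that order sensitivity concrete, here is a minimal sketch of a P2 (P-squared) streaming quantile estimator, following the marker scheme from Jain and Chlamtac's 1985 description of the algorithm. The class name and simplifications are ours for illustration, not Dynatrace code; feeding the same values in a different order generally produces slightly different, but similarly accurate, estimates.

```python
import random

class P2Quantile:
    """Streaming estimator of the p-quantile using the P-squared algorithm:
    five markers track the quantile without storing all observations."""

    def __init__(self, p):
        self.p = p
        self.x = []      # buffer for the first five observations
        self.q = []      # marker heights
        self.n = []      # marker positions (1-based)
        self.count = 0

    def add(self, value):
        self.count += 1
        if self.count <= 5:
            self.x.append(value)
            if self.count == 5:
                self.q = sorted(self.x)
                self.n = [1, 2, 3, 4, 5]
            return
        q, n, p = self.q, self.n, self.p
        # 1. Find the cell containing the new value, clamping extremes.
        if value < q[0]:
            q[0] = value; k = 0
        elif value >= q[4]:
            q[4] = value; k = 3
        else:
            k = next(i for i in range(4) if q[i] <= value < q[i + 1])
        # 2. Shift positions of all markers above that cell.
        for i in range(k + 1, 5):
            n[i] += 1
        # 3. Nudge interior markers toward their desired positions.
        desired = [1,
                   1 + (self.count - 1) * p / 2,
                   1 + (self.count - 1) * p,
                   1 + (self.count - 1) * (1 + p) / 2,
                   self.count]
        for i in range(1, 4):
            d = desired[i] - n[i]
            if (d >= 1 and n[i + 1] - n[i] > 1) or (d <= -1 and n[i - 1] - n[i] < -1):
                s = 1 if d >= 1 else -1
                # Piecewise-parabolic interpolation of the marker height.
                cand = q[i] + s / (n[i + 1] - n[i - 1]) * (
                    (n[i] - n[i - 1] + s) * (q[i + 1] - q[i]) / (n[i + 1] - n[i])
                    + (n[i + 1] - n[i] - s) * (q[i] - q[i - 1]) / (n[i] - n[i - 1]))
                if q[i - 1] < cand < q[i + 1]:
                    q[i] = cand
                else:
                    # Fall back to linear interpolation toward the neighbor.
                    q[i] = q[i] + s * (q[i + s] - q[i]) / (n[i + s] - n[i])
                n[i] += s

    def estimate(self):
        if self.count >= 5:
            return self.q[2]          # the middle marker tracks the p-quantile
        return sorted(self.x)[len(self.x) // 2]

random.seed(1)
data = [random.gauss(200, 30) for _ in range(10_000)]

forward = P2Quantile(0.5)
for v in data:
    forward.add(v)

backward = P2Quantile(0.5)
for v in reversed(data):              # same values, different order
    backward.add(v)

# Both estimates land close to the exact median, but because the update
# is order-dependent they are generally not identical.
true_median = sorted(data)[len(data) // 2]
```

This mirrors the behavior described above: the accuracy is good, but re-running the aggregation over the same values in a different order yields slightly different results.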