This topic explains the hardware and operating system requirements for installing Dynatrace Managed.
It's not always possible to provision nodes that are sized exactly right, particularly if your environment is subject to ever-increasing traffic levels. While it's useful to do upfront analysis of required size, it's more important to have the ability to add more capacity to your Dynatrace Managed cluster should your monitoring needs increase in the future. To leverage the full benefits of the Dynatrace Managed architecture, be prepared to scale along the following dimensions:
- Horizontally, by adding more nodes
  - Managed version 1.168 and higher supports installations of up to 15 cluster nodes.
  - Earlier Managed versions support up to 6 cluster nodes.
- Vertically, by provisioning more RAM/CPU per node
- In data storage, by resizing disk volumes as required (see the recommended disk setup guidelines below)
The hardware requirements included in the following table are estimates based on typical environments and load patterns. Requirements for individual environments may vary. Estimates for specific columns take into account the following:
- Minimum node specifications
  CPU and RAM must be exclusively available to Dynatrace. CPU power-saving mode must be disabled. CPUs must run at a clock speed of at least 2 GHz, and the host must have at least 32 GB of RAM.
- Transaction Storage
  Transaction data is distributed across all nodes and is not stored redundantly. For multi-node clusters, the per-node storage requirement is the total storage divided by the number of nodes.
- Long-term Metrics Store
  For multi-node installations, three copies of the metrics store are kept. With 4 or more nodes, the per-node storage requirement is reduced.
|Node Type|| Max hosts
| Peak user
| Min node
| Disk IOPS
| Transaction Storage
(10 days code visibility)
(35 days retention)
|Trial||50||1000|| 4 vCPUs,
|Small||300||10000|| 8 vCPUs,
|Medium||600||25000|| 16 vCPUs,
|Large||1250||50000|| 32 vCPUs,
|XLarge||2500||100000|| 64 vCPUs,
To monitor 8,000 hosts with a peak load of 3,000 user actions per second, you need 3 XLarge nodes with a combined 4 TB of transaction storage and 30 TB for the long-term metrics store.
To monitor 200 hosts with a peak load of 500 user actions per second, you need 1 Medium node with 1 TB of transaction storage and 2.5 TB for the long-term metrics store. For failover, you can instead use 3 Small nodes.
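The per-node arithmetic behind these estimates can be sketched in shell. `per_node_storage` is a hypothetical helper, not part of Dynatrace; it assumes a cluster of three or more nodes, transaction data dividing evenly, and three replicas of the metrics store:

```shell
#!/bin/sh
# Hypothetical sizing helper: per-node storage for an N-node cluster.
# Transaction data is not stored redundantly, so it divides evenly across nodes;
# the long-term metrics store keeps 3 copies, so the raw requirement is 3x the total.
per_node_storage() {
  nodes=$1; txn_total_gb=$2; metrics_total_gb=$3
  echo "transaction storage per node: $(( txn_total_gb / nodes )) GB"
  echo "metrics store per node: $(( metrics_total_gb * 3 / nodes )) GB"
}

# Example: a 4-node cluster sized for 600 GB of transactions, 1000 GB of metrics
per_node_storage 4 600 1000
```

With four or more nodes, the metrics term falls below a full copy per node, which matches the note above that the per-node storage requirement is reduced for larger clusters.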
Dynatrace Managed stores multiple types of monitoring data, depending on the use case. For the disk setup, we recommend:
- Storing Dynatrace binaries and the data store on separate mount points to allow the data store to be resized independently.
- Not keeping Dynatrace data storage on the root volume to avoid additional complexity when resizing the disk later, if required.
- Mounting different types of data storage on separate disk volumes for maximum flexibility and performance.
- Creating resizable disk partitions (for example, by leveraging Logical Volume Manager [LVM]).
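A resizable LVM layout matching these recommendations could be sketched as follows. The device `/dev/sdb`, the volume group name `dt`, the sizes, and the mount point `/mnt/dynatrace-data` are placeholders for illustration, not Dynatrace defaults:

```shell
# Sketch only: a resizable logical volume for a Dynatrace data store,
# on a dedicated disk so it can grow independently of the root volume.
pvcreate /dev/sdb                      # register the disk with LVM
vgcreate dt /dev/sdb                   # volume group to draw space from
lvcreate -n data -L 500G dt            # logical volume for the data store
mkfs.xfs /dev/dt/data                  # fast local file system for database-like workloads
mkdir -p /mnt/dynatrace-data
mount /dev/dt/data /mnt/dynatrace-data

# Later, if monitoring needs increase, grow the volume and file system online:
lvextend -L +200G /dev/dt/data
xfs_growfs /mnt/dynatrace-data
```

Keeping each data store on its own logical volume like this is what makes the "resize as required" scaling dimension above practical.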
The directory paths included in the following table are the default paths. Actual paths may vary if you've installed to a custom directory.
If you customized the storage locations, ELASTICSEARCH_DATASTORE_PATH and the other data store paths must point to separate directories, and none of them may be a subdirectory of another.
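A quick way to verify that customized store paths are separate is a nesting check. `is_nested` is a hypothetical helper, not part of the installer, and the paths in the example are illustrative:

```shell
#!/bin/sh
# Hypothetical check: succeeds (exit 0) if either path is nested inside
# (or equal to) the other -- exactly the layout to avoid for custom store paths.
is_nested() {
  case "$2/" in "$1"/*) return 0 ;; esac
  case "$1/" in "$2"/*) return 0 ;; esac
  return 1
}

# Example with illustrative paths: the second is inside the first, so this warns.
if is_nested /var/opt/dynatrace-managed /var/opt/dynatrace-managed/elasticsearch; then
  echo "paths are nested: choose separate directories"
fi
```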
|Directory symbol|Directory path|Description|Required free disk space for installing|Required free disk space for upgrading|
|---|---|---|---|---|
| | |Main directory for Dynatrace Managed binaries|6 GB|4 GB|
| | |Main directory for Dynatrace Managed data|24 GB|3 GB|
| | |Logs of all Dynatrace Managed components, services, and tools|2 GB|1 GB|
| | |Metrics repository|25 GB|1 GB|
| | |Elasticsearch store|3 GB|1 GB|
| | |Transactions store|14 GB|1 GB|
| | |Session replay store|14 GB|1 GB|
| | |OneAgent installation packages (if downloaded by Dynatrace Server or installed from a standalone OneAgent package)|24 GB|1 GB|
| | |Dynatrace Managed installer for adding nodes to a cluster, prepared during installation/upgrade|2 GB|1 GB|
| | |Main directory for self-monitoring OneAgent binaries|4.8 GB|1.1 GB|

Also see Disk space for OneAgent.
OneAgent self-monitoring is enabled by default; an opt-out installation parameter is available:
We recommend multi-node setups for failover and data redundancy. A sufficiently sized 3-node cluster is the recommended setup. For Dynatrace Managed installations with more than one node, all nodes must:
- Have the same hardware configuration
- Be synchronized with NTP
- Be in the same time zone
- Be able to communicate over a private network on multiple ports
- Have inter-node latency of around 10 ms or less
We recommend that the system users created for Dynatrace Managed have the same UID:GID identifiers on all nodes.
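One way to keep the UID:GID identifiers identical is to create the system user with explicit IDs on every node before installing. The user name `dynatrace` and the ID 1999 below are arbitrary examples, not values the installer mandates:

```shell
# Sketch only: run the same commands on every cluster node so the system user
# gets identical UID:GID identifiers cluster-wide.
# The name "dynatrace" and the ID 1999 are arbitrary examples.
groupadd --gid 1999 dynatrace
useradd --uid 1999 --gid 1999 --system dynatrace
id dynatrace    # verify the IDs match on every node
```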
You'll need a dedicated host for Dynatrace Managed installation. This host must not run other services that are CPU or memory intensive, or that open ports used by Dynatrace Managed.
You need a 64-bit Linux distribution (see supported Linux distributions below). Note that installation on both physical and virtualized hosts is supported, but installation in containers isn't supported.
Dynatrace Server requires a fixed IP assignment.
Ensure that you've appropriately configured your firewall settings.
The libraries that are installed with Dynatrace Managed are locale-aware. For correct display of text and symbols, be sure to set your environment's system locale to an English language option (for example, `en_US.UTF-8`).
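On systemd-based distributions, the system locale can be set with `localectl`; `en_US.UTF-8` is one common English option, used here as an example:

```shell
# Sketch: set a system-wide English locale so locale-aware libraries
# render text and symbols correctly. en_US.UTF-8 is one common choice.
localectl set-locale LANG=en_US.UTF-8   # systemd-based distributions

# Verify the active locale settings:
localectl status
locale
```

On older distributions without systemd, the equivalent is typically an edit to the distribution's locale configuration file.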
Supported operating systems

|Operating system|Supported versions|
|---|---|
|Red Hat Enterprise Linux¹|6.x - 7.x|
|CentOS|6.x - 7.x|
|Ubuntu|12.04 - 18.x|
|openSUSE|12.x - 13.x|
|SUSE Enterprise Linux|11.3 - 12.x|
|Oracle Linux Server|6.x - 7.x|
|Amazon Linux AMI|2017.x - 2018.x, 2.x|

¹ Red Hat Enterprise Linux 7.4 and 7.5 must be amended.
Supported file systems
Dynatrace Managed operates on all common file systems. We recommend that you select fast local storage appropriate for database workloads. High latency remote volumes like NFS or CIFS aren't recommended. While NFS file systems are sufficient for backup purposes, we don't recommend them for primary storage.
We don't support or recommend Amazon Elastic File System (EFS) as a main storage for Elasticsearch. Such file systems don't offer the behavior that Elasticsearch requires, and this may lead to index corruption.
Please also check the requirements regarding ActiveGate.