Server and Performance Warehouse best practices

This page explains some of the additional considerations for the AppMon Server and the Performance Warehouse to help you get the most out of your AppMon installation.

AppMon Server best practices

Use the most recent operating system possible for your AppMon Server.

Server heap size

Garbage collector (GC) activity has a significant impact on the performance of the AppMon Server, as it does for any Java application. High correlation load on the AppMon Server typically increases the load on the GC in the following ways:

  • More transactions per second directly cause higher churn rates.
  • A bigger heap keeps the PurePath buffer duration constant as transactions per second increase, but it can also lead to more objects on the heap.

GC options set for the AppMon Server cover a broad range of deployment scenarios. You should not change these settings unless you have carefully tested them in pre-production and are sure that your settings are a reasonable improvement over the default settings in your specific scenario.

Session storage

To look up the details of a specific transaction after a period of time, you can store all transactions to a consecutive series of stored sessions using the live session recording feature. This requires careful planning due to the high data volume in production systems. The Session Store should normally be configured for less than 2 TB. If the Session Store estimate grows too large, change the configuration settings to keep it within reasonable boundaries.

If you have a continuous load of 500 transactions per second (in a non-UEM environment), then approximately 1 TB is required to store the transactions of a single day.

Base your storage capacity estimates on the assumption of continuous load. If you know the transaction load on your system is discontinuous (for example, if there is no load during the night), then you can reduce this estimate.

Don't use FTP to store session-related data. FTP has high latency and is designed for full file transfer (potentially including resuming), so it is not suitable for the random access that AppMon session storage requires. Likewise, don't use NFS, SMB, or CIFS to store session-related data: these protocols are designed for NAS environments and provide file-level access, which is also not suited for random file access.

In addition:

  • Install AppMon on a local disk.
  • If session storage must be on the network, use a SAN.
  • If a SAN is not possible, a NAS can be used, but expect poorer storage performance.

Consider a medium deployment scenario with 250 transactions per second and a back-office application used only during office hours (8 h/day). The baseline above is 1 TB/day for 500 transactions per second at 24 h/day, so a more realistic value is 1 TB / 500 tps × 250 tps / 24 h × 8 h ≈ 170 GB/day. A storage capacity of 1 TB is therefore sufficient to store about 6 days of data.
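
The worked example above can be expressed as a small calculation. This is a sketch only: the 1 TB/day baseline for 500 transactions per second (non-UEM) comes from this page, while the function and constant names are illustrative, not part of AppMon.

```python
BASELINE_TB_PER_DAY = 1.0  # session storage for 500 tps, 24 h/day (non-UEM)
BASELINE_TPS = 500

def session_storage_gb_per_day(tps, active_hours_per_day=24):
    """Estimate daily Session Store growth in GB for a given load."""
    tb = BASELINE_TB_PER_DAY * (tps / BASELINE_TPS) * (active_hours_per_day / 24)
    return tb * 1024  # convert TB to GB

# Medium deployment: 250 tps, back-office app used 8 h/day
daily_gb = session_storage_gb_per_day(250, active_hours_per_day=8)
print(round(daily_gb))        # ~171 GB/day
print(round(1024 / daily_gb)) # ~6 days fit in 1 TB
```

Adjust the baseline constants if your environment uses UEM or a different retention policy.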

Internal disks such as a RAID array are usually sufficient, because the Session Store continuously records all transactions into big data blocks. In most cases, a backup solution is not required, since the data is often kept only for a few days or even hours.

Performance Warehouse best practices

AppMon Server to PWH DBMS to DB relationship

Be careful with semantics: a DBMS (database management system, or database server) that uses SQL as its query language, such as Oracle, MS SQL Server, IBM DB2, or PostgreSQL, can host many databases in one instance.

You may connect each AppMon Server to a separate, differently named database, and all of these databases may be located on the same DB server instance, as long as that server can handle the load. However, even if you want to consolidate data or reports from AppMon Servers covering different geographical regions, you must not connect more than one AppMon Server to the same (named) database on the same DB server instance. Doing so causes unintended and possibly grave results with your data. Current consolidation options are:

  • Connect the stand-alone AppMon Client to your Servers, report and put the reports side-by-side.
  • Consolidate the data from different databases using your DBMS.
  • Export your data to a central analysis server.
  • Stream the data to Elasticsearch and then chart it.

Consult your Dynatrace sales representative for further details.

Performance Warehouse sizing

Embedded Derby DB restrictions

The embedded Derby database is only suitable for demonstration and evaluation purposes. For a production AppMon deployment, use a supported RDBMS for the Performance Warehouse. For details, see the Release notes for AppMon 2018 April or AppMon 2017 May.

To keep Performance Warehouse size in reasonable boundaries, AppMon uses an automated data-aging algorithm that aggregates measures in 3 steps from a very high live resolution of 10 seconds down to 1 minute, 1 hour, and 1 day.
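
The data-aging steps described above can be sketched as a simple age-to-resolution mapping. This is illustrative only, not AppMon's actual implementation: the durations follow the recommendations in the table below, and the assumption that data older than the low-resolution duration is aged out is ours.

```python
from datetime import timedelta

def storage_resolution(age: timedelta) -> str:
    """Return the aggregation resolution a measure of a given age is stored at."""
    if age <= timedelta(weeks=2):
        return "1 minute"   # high resolution, kept for 2 weeks
    if age <= timedelta(days=61):
        return "1 hour"     # mid resolution, kept for ~2 months
    if age <= timedelta(days=365):
        return "1 day"      # low resolution, kept for 1 year
    return "aged out"       # assumed: dropped after the low-resolution duration

print(storage_resolution(timedelta(days=3)))    # 1 minute
print(storage_resolution(timedelta(days=200)))  # 1 day
```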

The following table shows the recommended durations for these resolutions and the disk space necessary to store 1,000 measures in a PostgreSQL database (Oracle, DB2, and SQL Server need slightly less):

Database Sizing for 1000 Measures

Resolution | Resolution interval | Recommended duration | Rows stored                                                   | Disk size estimate
high       | 1 minute            | 2 weeks              | ~20 million rows                                              | ~2 GB
mid        | 1 hour              | 2 months             | 1.4 million rows (measures) + 0.1 million rows (percentiles)  | ~0.5 GB
low        | 1 day               | 1 year               | 0.37 million rows (measures) + 0.1 million rows (percentiles) | ~0.2 GB

If you follow the recommended durations for the three resolutions, you can roughly calculate with a total space of 2 GB per 1,000 measures. Ask your database administrator about the growth rate of these tables, and check the Deployment Sizing Calculator spreadsheet for the database transactions per second, to determine the actual disk space needed and the hardware requirements for the database.

To estimate the number of measures you need to store in the Performance Warehouse, you must add the following major contributors:

Number of typical measures per contributor

Component       | Number of typical measures | Subscribed measures
Agent           | 200                        | JVM metrics, PurePath metrics
Self-Monitoring | 1,000                      | Performance counters, PMI values

For a deployment with 20 Agents, calculate with 1,000+20*200 = 5,000 measures.
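
The estimate above, combined with the rough 2 GB per 1,000 measures figure, can be sketched as follows. The per-component counts (200 per Agent, 1,000 for self-monitoring) come from the table; the function names are illustrative.

```python
MEASURES_PER_AGENT = 200         # typical JVM and PurePath metrics per Agent
SELF_MONITORING_MEASURES = 1_000 # performance counters, PMI values
GB_PER_1000_MEASURES = 2.0       # rough total across all three resolutions

def estimated_measures(agent_count):
    """Estimate the number of measures stored in the Performance Warehouse."""
    return SELF_MONITORING_MEASURES + agent_count * MEASURES_PER_AGENT

def estimated_disk_gb(measures):
    """Rough Performance Warehouse disk estimate at the recommended durations."""
    return measures / 1_000 * GB_PER_1000_MEASURES

m = estimated_measures(20)
print(m, estimated_disk_gb(m))  # 5000 measures, ~10 GB
```

As the text below notes, a test run with your System Profile gives the real measure count and is preferable to this estimate.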

Database suggestions

For up to 30,000 measures, a PostgreSQL database is sufficient.

For more than 30,000 measures, an Oracle, DB2, or SQL Server database is recommended.

The best way is to perform a test run with your System Profile and read the number of measures from the chart. This gives you the real number, so you do not need to estimate.

Performance Warehouse database

AppMon stores very large amounts of measure data points in the Performance Warehouse. The Performance Warehouse is highly optimized and bursts data to the database every minute. Involving a database administrator (DBA) in database sizing is recommended: your DBA knows how many transactions the database can handle without conflicting with other applications.

There is an upper limit of 300,000 measurements per minute per instance. This is an upper limit for database sizing: it does not make much sense to invest in ultra-high-end server hardware beyond this limit for a single AppMon Server instance.
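
A quick sanity check against this ceiling might look like the following sketch. It assumes one aggregated data point per subscribed measure per minute, matching the 1-minute high resolution described earlier; the function name is hypothetical.

```python
MAX_MEASUREMENTS_PER_MINUTE = 300_000  # per AppMon Server instance

def within_sizing_limit(measures, points_per_measure_per_minute=1):
    """Check an estimated measure count against the per-instance ceiling."""
    return measures * points_per_measure_per_minute <= MAX_MEASUREMENTS_PER_MINUTE

print(within_sizing_limit(30_000))   # True: well under the limit
print(within_sizing_limit(400_000))  # False: exceeds a single instance's capacity
```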

SQL Server recommendations

  • The operating system of the SQL Server machine should reside on its own hard disk. Data files, transaction log files, and temporary tables should each reside on a separate disk system.
  • Data files (.mdf) have many random read/write operations. For high-end systems, a disk system with a RAID 1+0 configuration is recommended.
  • Transaction log files (.ldf) are written mainly sequentially. The transaction log heavily influences write performance. A RAID 1+0 disk system configuration is recommended.
  • Temporary tables (tempdb): for high-end systems, using an SSD for tempdb is recommended. Otherwise, use RAID 1+0.

Other considerations:
  • The more IOPS the better, but failing to use separate disk systems for data files, transaction log, temporary tables, and the OS hurts performance more than a low IOPS count does.
  • Disks rated at 10,000 RPM or higher are recommended, but again, failing to use separate disk systems hurts performance more than a lower RPM does.
  • When you create a new database using SQL Server Management Studio, the default size and growth increment for both data and transaction log files are too small. Pre-allocating both with much higher values is recommended, for example the data file with the estimated size of the Performance Warehouse, derived from the Deployment Sizing Calculator spreadsheet.
  • A standard Dell database server usually is sufficient.
  • For a small system, you can combine the data files and the temporary tables on one disk system. The transaction log files should remain on a separate disk system.

Oracle server recommendations

  • Evenly distribute database I/O requests across multiple physical disks. Prefer many smaller disks over a few large disks.
  • Ideally, put redo logs, archive logs, table data, index data, temp data, and control files on their own dedicated physical disks or disk groups.
  • Use different table spaces for persisting table data and for index data.
  • A RAID 1+0 disk system configuration is recommended.

Other considerations:
  • Generally, hardware investments should follow an I/O >> RAM >> CPU priority chain: scaling I/O and adding RAM are more important than doubling the CPUs, although a good balance between the three resource types is still important.
  • Disks rated at 10,000 RPM or higher are recommended.
  • The more RAM, the better. Also consider storing the temp data on a RAM disk.
  • For XLarge deployments, partitioning the high-data-volume tables MEASUREMENT_HIGH and PERCENTILES_HIGH must be taken into account.

Partitioning in Oracle must be licensed separately as an option on top of the Oracle Enterprise Edition.