Use the Performance Warehouse for long-term storage of measurements, incidents, thresholds, tests (see Test Automation Explained), and their configuration data. The Performance Warehouse does not store PurePaths or memory/thread dumps.
The AppMon Performance Warehouse uses a relational database to store long-term historical data. By default, AppMon installs and uses an embedded database for demo/testing purposes. Any production installation of AppMon must use one of the following database management systems with the specified version to host the Performance Warehouse database:
- Oracle 10g/11g/12c
- Microsoft SQL Server 2008 / 2012 / 2014 / 2016
- IBM DB2 Version 9.7 / 9.8 / 10.1 / 10.5 / 11.1
- PostgreSQL 9.2 / 9.3 / 9.4 / 9.5 / 9.6
AppMon supports partitioned tables on all supported databases for high-load scenarios. For all AppMon installations sized greater than Large, a partitioned database is mandatory. You can find more details in Performance Warehouse Partitioning for Dynatrace.
To determine disk-space requirements for the AppMon repository upfront, factor in 15 GB per 1,000 persistent measures (across all system profiles within the AppMon repository). You can find more details in the Deployment Guide.
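As a quick sketch of that sizing rule (the 15 GB per 1,000 persistent measures factor from above; the function name is illustrative, not part of any AppMon tooling):

```python
def estimate_repository_gb(persistent_measures: int, gb_per_thousand: float = 15.0) -> float:
    """Estimate Performance Warehouse disk space from the rule of thumb:
    15 GB per 1,000 persistent measures across all system profiles."""
    return persistent_measures / 1000.0 * gb_per_thousand

# 4,000 persistent measures across all system profiles -> 60.0 GB
print(estimate_repository_gb(4000))
```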
See below for information on using the embedded Apache Derby database.
Embedded Performance Warehouse usage restrictions
You can use the embedded database on developer machines only, because this database runs in the server process and directly affects server performance.
The following table lists the enforced restrictions.
Recommended database settings
The embedded database is fully preconfigured. You don't have to change the settings.
SQL Server, DB2, Oracle, PostgreSQL, SQL Azure
The following database settings must be configured:
Read/write permission: You must be able to read and write data.
Truncate tables: Table truncation must be allowed.
Create/drop tables, indexes: Schema modification rights, such as create table, are required during installation or migration from an earlier version. During operation, these rights are optional.
Create and execute stored procedures: Creation and execution of stored procedures is required during installation or migration from an earlier version. During operation, these rights are optional.
UTF-16 compatible character set / code page: A character set that is not UTF-16 compatible may cause problems with measure names that use the full UTF-16 character set spectrum. See Database Not Fully utf16 Capable and Default Char Set utf8 below.
Don't use special characters and accent marks. Use standard English alphabet characters, digits, and underscores for names.
Page Size 16k: Necessary on DB2. Create the database with the following command:
CREATE DB database_name PAGESIZE 16384
User Temporary Tablespace: Necessary on DB2. Create this with the following command:
CREATE USER TEMPORARY TABLESPACE user_temporary1 MANAGED BY AUTOMATIC STORAGE; GRANT USE OF TABLESPACE user_temporary1 TO USER DYNATRACEUSER;
Collation: Create the database with case-insensitive collation.
For a database admin who wants to fine tune the database, it is important to know the typical behavior of the Performance Warehouse.
Measurement writing supports high-load environments. Data is inserted once per minute into one of the measurement_temp tables. AppMon inserts into a temp table for 30 minutes, then switches and moves the data from the now inactive temp table to the main measurement table.
Keep the statistics (and indexes) up to date for the following tables (for databases other than embedded):
- test_expectation (only relevant if you use Test Automation).
Performance Warehouse resolution
For performance reasons, the measurement data is held in the following resolutions:
- High: All data received within a minute is aggregated to a single data point.
- Mid: All data received within an hour is aggregated to a single data point.
- Low: All data received within a day is aggregated to a single data point.
AppMon calculates and stores minimum, maximum, and average values for the aggregation interval in a single data point.
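A minimal sketch of that aggregation step (illustrative only, not AppMon's implementation): all raw values received within one interval collapse into a single data point holding minimum, maximum, and average.

```python
def aggregate(values):
    """Collapse all values of one aggregation interval (e.g. one minute
    for high resolution) into a single (min, max, avg) data point."""
    return (min(values), max(values), sum(values) / len(values))

# five response-time samples (ms) received within one minute
print(aggregate([120.0, 80.0, 100.0, 140.0, 60.0]))  # -> (60.0, 140.0, 100.0)
```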
Because percentiles are different in nature, percentile data is held in only the following two resolutions:
- High: All data received within an hour is used for calculating the percentile.
- Low: All data received within a day is used for calculating the percentile.
Performance Warehouse configuration
Configure the Performance Warehouse in the Performance Warehouse pane of the Dynatrace Server Settings dialog box. To access it, click Settings > Dynatrace Server > Performance Warehouse.
The Connection Details tab contains connection settings, and also allows you to create or rebuild a schema.
The embedded database is activated by default. Use an external database server to store more than four days of recorded data. See also Embedded Performance Warehouse usage restrictions above.
Using an external DB server
You must create the DB manually before you connect to the DB server. The Database Name field in the dialog box has a default value (dt4 on DB2), but you can use any valid DB name. If a schema is not available, the tables are created automatically at first connect. If the table/schema creation process fails, an error message appears.
How to create a schema
Ensure that the user has sufficient rights to create tables. If the schema is not created automatically at first connect, for example due to insufficient rights, you must create the schema manually. To do this, click the Create Schema button.
Rebuilding a schema
The Rebuild Schema button appears only if the AppMon Server is connected to a Performance Warehouse with a valid schema. The schema is dropped and recreated, which leads to the loss of all data in the Performance Warehouse.
System Profiles management
The System Profiles Management tab lists the System Profiles stored in the Performance Warehouse.
You can remove measurements and incidents for System Profiles manually, or remove System Profiles altogether. If a System Profile is deleted, Performance Warehouse access should also be disabled for it. If not, AppMon synchronizes and writes the System Profile back to the Performance Warehouse the next time the System Profile is saved.
Do not delete a specific time span or System Profile, or perform a calculation about the usage percentage of each System Profile in a production environment. These operations can lock the database.
The Storage Management tab contains settings for data storage duration.
Resolutions or data aging
The duration is the time that the data is kept in the corresponding resolutions.
For example, a duration of one week for high resolution means that data with a resolution of one minute is dropped if it is older than seven days. A duration of two months for mid resolution denotes that data with a resolution of one hour is deleted if it is older than 60 days. A duration of one year for low resolution means that data with a resolution of one day is deleted after 365 days.
Enable partitioning to speed up read and write performance and for high-load scenarios. See Performance Warehouse Partitioning for Dynatrace.
Cleanup deletes old dynamic measures from the Performance Warehouse. In other words, each dynamic measure for which no new measurements have been received for longer than the duration of the mid resolution is purged from the Performance Warehouse. Auto Purge helps to clean up unused measures in case of measure explosions or misconfigurations. If you think you need to tweak Auto Purge further, contact support.
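The purge rule above can be sketched as follows (illustrative only; the function and the timestamp bookkeeping are assumptions, not AppMon's actual schema):

```python
from datetime import datetime, timedelta

def measures_to_purge(last_measurement, mid_resolution_duration, now):
    """Return the dynamic measures whose newest measurement is older
    than the mid-resolution retention duration."""
    cutoff = now - mid_resolution_duration
    return [name for name, ts in last_measurement.items() if ts < cutoff]

now = datetime(2018, 6, 1)
last = {
    "url:/checkout": datetime(2018, 5, 30),  # still active -> kept
    "url:/legacy": datetime(2018, 1, 2),     # stale -> purged
}
print(measures_to_purge(last, timedelta(days=60), now))  # -> ['url:/legacy']
```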
Recommended settings for the different durations
When you set durations, you may have to make a tradeoff between performance and granularity. The longer the higher resolutions are kept in the Performance Warehouse, the more performance is negatively affected. The formulas below provide insight into the expected amount of storage in the different tables and their impact on performance:
An active measure is one for which measurements are actually taken. The expected number of active measures can only be estimated. It depends on the number of subscribed measures and how many of them are configured to create dynamic measures.
Below is an example for 1,000 active measures. The per-day figures follow from the resolution definitions above; the storage durations are whatever you configure.

| Resolution | DB entries per interval (1,000 measures) | Duration of storage | DB entries per day | DB entries per resolution |
|---|---|---|---|---|
| High | 1,000 per minute | configured duration | 1,440,000 | days kept × 1,440,000 |
| Mid | 1,000 per hour | configured duration | 24,000 | days kept × 24,000 |
| Low | 1,000 per day | configured duration | 1,000 | days kept × 1,000 |
The higher resolution data is kept in the Performance Warehouse, the bigger the impact on performance.
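The arithmetic behind those figures can be sketched as follows (illustrative helper; one data point per measure per aggregation interval):

```python
# data points per measure per day for each resolution
INTERVALS_PER_DAY = {"high": 24 * 60, "mid": 24, "low": 1}

def db_entries_per_day(active_measures: int, resolution: str) -> int:
    """Expected DB entries written per day: one data point per active
    measure per aggregation interval of the given resolution."""
    return active_measures * INTERVALS_PER_DAY[resolution]

print(db_entries_per_day(1000, "high"))  # -> 1440000
print(db_entries_per_day(1000, "mid"))   # -> 24000
print(db_entries_per_day(1000, "low"))   # -> 1000
```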
Performance Warehouse clean up task
Use the Clean Up Task to aggregate data for the different resolutions and to delete data after the specified duration. You can also track and schedule the task. To use it:
- AppMon 2017 In the Status Overview of the Cockpit click Tasks and Monitors.
- AppMon 2018 February In the Server section of the sidebar, click Tasks and Monitors.
In the Tasks and Monitors dialog box, find Performance Warehouse related tasks under Dynatrace Self-Monitoring > Performance Warehouse. Schedule these tasks for times when the load on the AppMon Server and database is expected to be low.
If the amount of data in the Performance Warehouse is expected to be very high, run the cleanup task at least once a day so that the DB server works on smaller amounts of data, which results in a lower load.
If you do not execute the task, aging is not performed and data is kept in high resolution only and is never purged.
See the Deployment Guide for information on disk space requirements.
The AppMon Server does not back up the database. You have to schedule backups manually on the database server itself.
Usage of PostgreSQL
Download PostgreSQL from http://www.postgresql.org/.
On Windows, execute the setup file. For performance and stability reasons, place the data directory on its own disk. Use the default locale. You do not have to install PL/pgSQL for the template database.
Schema/user creation for Performance Warehouse
If the PostgreSQL database instance is used only for the Performance Warehouse and no other applications perform write operations on it, create a user with read/write privileges for the public schema, or use the default postgres user. Otherwise, delete the public schema and create a separate schema for each user/role.
PostgreSQL clustered indexes
As of Fall 2015, PostgreSQL does not support automatic reorganization of clustered indexes. To maintain performance, you must schedule the CLUSTER command manually.
The CLUSTER command locks the affected tables. Run it during maintenance intervals when the Performance Warehouse is not connected.
As a workaround, you can use an application runnable by the pgAgent job scheduler for Postgres, which performs this reorganization task online, but uses almost double the storage. See Script pg_reorg for more information.
Setting statement timeout
Set the statement timeout on the Postgres database itself to prevent blocking database operations.
AppMon Server to PWH DBMS to DB relationship
Be careful with the semantics here: a DBMS (database management system / database server) such as Oracle, MS SQL Server, IBM DB2, or PostgreSQL uses SQL as its query language and can host many databases in one instance.
You may connect each AppMon Server to a separate, differently named database, where all databases may be located on the same DB server instance, as long as your DB server can take the load. Although you may want to consolidate data or reports from AppMon Servers covering different geographical regions, you must not connect more than one AppMon Server to the same (named) database on the same DB server instance. Doing so causes unintended and possibly grave results with your data. Current consolidation options are:
- Connect the stand-alone AppMon Client to your Servers, report and put the reports side-by-side.
- Consolidate the data from different databases using your DBMS.
- Export your data to a central analysis server.
- Stream the data to ElasticSearch and then chart.
Consult your Dynatrace sales representative for further details.
If you make any changes to server.config.xml, chances are that you need to apply them to frontendServer.config.xml as well, because the Frontend Server has its own (read-only) connection to the Performance Warehouse. For example, Windows authentication for SQL Server needs changes in both .ini files, as described below.
The test connection message reports that the database is not fully utf16 capable. In most cases this doesn't matter, but if you plan to use measure names, session names, or other data that uses the full spectrum of the utf16 character set, those names may not be stored properly in the Performance Warehouse.
Test Connection reports that the database is fully utf16 capable, but the default char set of the database is set to utf8, or a different single-byte character set.
The utf16 capability test is very simple: AppMon creates a temporary table, writes strings from the upper sections of the Java character set into it, and reads them back. If the strings read back are equal to those written, the Performance Warehouse can store every string without losing information.
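The round-trip idea behind that test can be sketched without a database (illustrative only; AppMon's actual check writes to a temporary table in the warehouse):

```python
def survives_round_trip(original: str, read_back: str) -> bool:
    """The capability test in a nutshell: a string written to the
    warehouse must read back unchanged, including characters outside
    the Basic Multilingual Plane (surrogate pairs in UTF-16)."""
    return original == read_back

sample = "measure-\U0001F600"  # contains a supplementary-plane character

# a UTF-16-capable database returns the string unchanged
print(survives_round_trip(sample, sample))  # -> True

# a single-byte character set mangles it (simulated with latin-1)
lossy = sample.encode("latin-1", "replace").decode("latin-1")
print(survives_round_trip(sample, lossy))   # -> False
```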
You must be able to select, insert, delete, update, and truncate on all tables that AppMon creates. Schema modification rights (like create table) are only necessary during installation or migration from an earlier version.
The Performance Warehouse tries to reconnect, and measurements are buffered for a maximum of 11 minutes. Therefore, short database outages do not affect daily business.
By default, the Performance Warehouse implements a timeout for the SQL Server, Oracle, and DB2 databases, after which a statement is automatically canceled. This can happen if some database management operation is running on the tables used by the Performance Warehouse and the AppMon Server tries to do a cleanup. The default value is two hours, but you can change it in server.config.xml, in the <Repository Config> section.
As of Fall 2015, the PostgreSQL JDBC driver does not implement the query timeout, so the timeout must be set explicitly in the database with the following statement. Note that PostgreSQL interprets a bare statement_timeout value as milliseconds unless a unit is specified:
ALTER DATABASE <databasename> SET statement_timeout=<timeout in milliseconds>;