Test automation explained

Application performance testing is often skipped during software development, or is done only at the end of a development cycle, because it is time-consuming, tests are long-running, and the results can often only be checked manually. However, the later in development a performance defect is detected, the higher the cost of fixing it. AppMon solves this problem by proactively monitoring tests executed in a Continuous Integration (CI) environment.

This page describes Integration and Performance Testing, which can be fully automated. See Load Tests for information about load testing.

Every test run captures various measures such as duration, number of executed database statements, and used CPU time. AppMon uses these measures to calculate a corridor for the expected minimum and maximum value of a measure. Values outside of this corridor indicate a performance change. Designated developers are automatically notified about these changes.
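
AppMon calculates the corridor internally; the short sketch below only illustrates the idea, assuming a simple corridor of the historical mean plus or minus a multiple of the standard deviation of a measure.

    # Illustrative sketch only -- not AppMon's actual corridor algorithm.
    # Builds an expected corridor from previous runs of a single measure.
    from statistics import mean, stdev

    def corridor(history, factor=3.0):
        m, s = mean(history), stdev(history)
        return m - factor * s, m + factor * s

    durations_ms = [101.0, 98.5, 103.2, 99.8, 100.4]   # durations of previous runs
    low, high = corridor(durations_ms)
    latest = 131.7
    if not (low <= latest <= high):
        print(f"Performance change: {latest} ms outside [{low:.1f}, {high:.1f}]")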

All performance data is available to different stakeholders within the organization. Developers and architects can use PurePath data to continuously monitor performance and track architectural metrics, identifying regressions early in the development stage. The automated baseline supports this task by identifying tests that exhibit performance problems. In AppMon, the Test Overview dashlet provides an entry point for analyzing performance problems in tests. The metrics collected by AppMon and displayed there are exposed through REST interfaces that allow integration with other tools. See Test Overview dashlet for more information.
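
For example, a CI job or reporting tool can read the recorded test run data back over REST. The sketch below assumes a GET counterpart of the test run registration endpoint described later on this page; verify the exact path and response fields against the REST documentation on your AppMon Server (https://<DynatraceServer>:8021/api-docs/index.html).

    # Sketch: read a registered test run and its measures over REST.
    # The path and response layout are assumptions -- verify them in api-docs.
    import requests

    SERVER = "https://dynatraceserver.example.com:8021"   # hypothetical server
    PROFILE = "MySystemProfile"                           # hypothetical System Profile
    TEST_RUN_ID = "c0ffee00-0000-0000-0000-000000000000"  # hypothetical test run ID

    resp = requests.get(
        f"{SERVER}/management/profiles/{PROFILE}/testruns/{TEST_RUN_ID}",
        auth=("apiuser", "password"),   # REST credentials
        verify=False)                   # common for self-signed test servers
    resp.raise_for_status()
    print(resp.json())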

Step-by-step process to use test automation in AppMon

Integrate into the automatic CI build process

In addition to the data captured by AppMon, you can include information such as the version number, a unique version identifier (such as the revision number in Subversion), and other custom data for a test run. Start session recording using the tools listed below to gather in-depth data for all performance test runs.

You can manually start and stop session recording.

Note

Make sure to trigger session storage from your test automation environment. Stored sessions make drilldown and comparison available.
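
One way to trigger session storage from a CI job is through the AppMon Server's REST interface. The sketch below assumes startrecording and stoprecording operations on the System Profile; confirm the exact paths and parameters in the REST documentation of your AppMon version before relying on them.

    # Sketch: wrap the automated test execution in session recording.
    # The startrecording/stoprecording paths and parameters are assumptions --
    # confirm them against your server's REST documentation.
    import requests

    SERVER = "https://dynatraceserver.example.com:8021"   # hypothetical server
    PROFILE = "MySystemProfile"                           # hypothetical System Profile
    AUTH = ("apiuser", "password")

    def start_recording(label):
        r = requests.post(
            f"{SERVER}/rest/management/profiles/{PROFILE}/startrecording",
            data={"presentableName": label, "description": "CI performance tests"},
            auth=AUTH, verify=False)
        r.raise_for_status()

    def stop_recording():
        r = requests.post(
            f"{SERVER}/rest/management/profiles/{PROFILE}/stoprecording",
            auth=AUTH, verify=False)
        r.raise_for_status()
        return r.text   # name of the stored session

    start_recording("Nightly performance tests")
    # ... execute the automated tests here ...
    print("Stored session:", stop_recording())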

Setting test information manually

If you need to register a new test run manually, open https://<DynatraceServer>:8021/api-docs/index.html, which provides a web interface for the test automation REST API. Create a new test run using the POST /management/profiles/{systemProfileName}/testruns endpoint.

Example form filled with data
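
The same registration can also be scripted as part of the CI build instead of using the web form. The sketch below posts version metadata to the endpoint named above; the payload field names and the response field carrying the test run ID are assumptions, so verify them on the api-docs page. The returned ID is then typically handed to the agents running the tests, for example through your build-tool integration.

    # Sketch: register a test run before the automated tests execute.
    # Payload field names and the response field holding the ID are
    # assumptions -- verify them against api-docs.
    import requests

    SERVER = "https://dynatraceserver.example.com:8021"   # hypothetical server
    PROFILE = "MySystemProfile"                           # hypothetical System Profile

    payload = {
        "versionMajor": "1",
        "versionMinor": "4",
        "versionRevision": "8731",          # e.g. Subversion revision number
        "versionBuild": "1234",             # CI build number
        "marker": "switched to new cache",  # shown as a marker in the charts
        "category": "performance",
    }

    resp = requests.post(
        f"{SERVER}/management/profiles/{PROFILE}/testruns",
        json=payload, auth=("apiuser", "password"), verify=False)
    resp.raise_for_status()
    test_run_id = resp.json().get("id")     # response field name is an assumption
    print("Pass this test run ID to the agents executing the tests:", test_run_id)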

Analyze test automation data

Test Results dashlet

The Test Results dashlet visualizes the results of various tests in your environment.

Test Results dashlet

This dashlet is only available in the AppMon Test Center edition.

Test case section

The upper section of the dashlet lists the test cases, organized by defined test category and package. Each test case entry shows an aggregated view of the KPIs.

Test status

Every test case has a status:

  •  OK: Test case executed correctly.
  •  Degrading: Test case runs have become slower.
  •  Improving: Test case runs have become faster.
  •  Volatile: Test case has a very volatile outcome — sometimes faster, sometimes slower — so changes in performance might not be recognized correctly. This especially happens with very short-running tests. Try to increase the duration of the test, perhaps by executing the same operation multiple times.
  •  Failing: Test case has a functional problem, so no performance data could be recorded.
  •  Invalidated: The last test run of this test case was manually invalidated by the user.

When you click a test status icon in the toolbar in the upper right corner of the Test Results dashlet, related test status columns display in the measure section. For example, click the Degraded icon to include the Degraded Runs column, or the Volatile icon to include the Volatility column.

Click the Group by packages icon to toggle grouping by package.

Viewing details

To view details for a test case, measure, or test run, right-click the item and select Details. You can copy information from the Details dialog box.

Details dialog box

Measures section

The measure section in the lower left of the dashlet shows the latest values of the KPIs associated with a test case, and indicates whether those values are within, above, or below the corridor.

Chart section

The chart section shows the historical values of a KPI for a test case, including the calculated performance corridor.

Viewing markers

Any marker set in the test metadata displays in the chart's heat field.

See the Test Automation FAQ for more information.

Assigning test cases

Test cases can be assigned to specific users in the system. Depending on the configuration, email notifications are sent for failing, degrading, improving, or volatile tests.

To assign a test case, right-click it and select Assign Test, then choose the users in the dialog box that appears. A test case can be assigned to one or more users.

Managing test runs

Comparing test runs

Comparing test runs and drilldown are only available if session recording was enabled during execution of the test case.

To compare two test runs, do one of the following:

  • Double-click a test run. A new dashboard comparing the selected test run with the last test run appears.
  • Select two test runs and click Compare. A new dashboard comparing the selected test runs appears.

Accepting test run changes

Sometimes a change in performance is unavoidable as functionality increases. In such cases, the performance change must be accepted manually. Right-click a test run in the chart and select Accept Change from here. In the dialog box that appears, select the appropriate option:

  • All Violated: All consecutive test runs beginning with the selected run are used to calculate a new corridor. Use this option when there are multiple changes in performance.
  • Only Selected: Only the selected test runs are used to calculate the new corridor.

Invalidating test runs

Often, changes in performance are not caused by changes in the system, but by random factors that affect only the given test run, such as a database cleanup job running at the same time as the test case. To exclude such an outlier from the regular corridor calculation, invalidate it by right-clicking the test run and selecting Invalidate Test Run.