To troubleshoot and resolve this issue:
- Make sure the Performance Warehouse is connected.
- For unit tests, make sure the Java Tests or .NET Tests Sensor Pack is enabled for your Agent Group.
- Review the Agents overview to make sure the Agent is connected to the Server and it is in the correct Agent Group.
- Make sure the measures listed on the Capture Performance Data from Tests page have the Create a measure for each agent option selected. Also see below: Why are some measures missing in the Metrics section of the Test Results dashlet?
- Make sure you are using a Pre-Production License. Test Automation is not available in Production Edition.
To troubleshoot a missing session recording:
- Check whether session recording was enabled for the test run.
- Check whether the session recording for this test run has already been deleted.
The Metrics section contains a limited set of measures. Each test category has its own list of measures, matching its characteristics. A list of measures assigned to each test category is on the Capture Performance Data from Tests page.
A measure might not appear in the Metrics section if it is not configured to be split by Agent. To enable that behavior, select the Create a measure for each agent option in the measure properties (Details panel, Measure Splitting section).
Measures are considered different if they come from Agents running on different hosts. Such measures are reported in separate rows (display the Host column in the metric table to verify). In larger CI environments where many hosts run the builds and execute the tests (a build farm), this approach may lead to unwanted measure duplication. To solve the problem, use the overridehostname Agent option, which makes the Agent report the given host name instead of the detected one. See the Java Agent Configuration page for details.
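As an illustration, the option can be appended to the Agent's startup arguments. The paths, agent name, and server address below are placeholders, so check the Java Agent Configuration page for the exact syntax in your environment:

```shell
# Hypothetical build-farm node: report every Agent under one logical host name
# ("buildfarm") so test measures from different machines land in one row.
java -agentpath:/opt/dynatrace/agent/lib64/libdtagent.so=name=UnitTests,server=appmon-collector:9998,overridehostname=buildfarm -jar tests.jar
```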
If the machines executing the builds have different performance characteristics and you force all of them to be reported as a single host, you may see shifts in performance-related metric values as consecutive test executions are reported from different machines. That may cause unwanted alerts on measure volatility or baseline violations.
Test Case: AppMon identifies test methods of unit tests from testing frameworks such as JUnit, NUnit, and MSTest as test cases. Every test method is listed as an individual test.
Corridor: The corridor is the expected range of values for a measure in a test case.
- The corridor is the (100 - False Positive %) confidence interval of the Student's t-distribution of a measure. By default, False Positive % is 1, so the default Confidence Interval % is 99.
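As a rough sketch of the idea (not AppMon's actual implementation), a 99% corridor around a measure's mean can be computed from the Student's t-distribution. The critical value below is a precomputed two-sided 99% quantile for 9 degrees of freedom, and the sample data is made up:

```python
import math
import statistics

# Precomputed two-sided 99% critical value of Student's t for df = 9.
T_CRIT_99 = 3.250

def corridor(values, t_crit):
    """Return (lower, upper) bounds of the confidence interval for the mean."""
    mean = statistics.mean(values)
    sem = statistics.stdev(values) / math.sqrt(len(values))  # standard error
    return mean - t_crit * sem, mean + t_crit * sem

# Ten hypothetical response-time measurements (ms) for one test case.
samples = [10, 11, 9, 10, 12, 10, 9, 11, 10, 10]
lower, upper = corridor(samples, T_CRIT_99)  # values outside are violations
```

A new measurement falling outside `(lower, upper)` would count as a corridor violation.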
- The False Positive % is an AppMon term used for setting the confidence interval.
- Volatility is the Coefficient of Variation: the standard deviation of a measure (the square root of the averaged squared deviations from the mean) divided by the mean. The Volatile % value in the Test Automation settings defines how high this coefficient must be for a test to be considered volatile.
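In other words (a sketch with made-up numbers, using the population standard deviation; AppMon's exact variant may differ):

```python
import statistics

def coefficient_of_variation(values):
    """Standard deviation divided by the mean, as a fraction."""
    return statistics.pstdev(values) / statistics.mean(values)

# Five hypothetical measure values; the mean is exactly 100,
# so the deviations (0, 2, -2, 1, -1) are easy to read.
cov = coefficient_of_variation([100, 102, 98, 101, 99])  # ~0.0141, i.e. ~1.4%
```

A test whose measures show a coefficient above the configured Volatile % would be flagged as volatile.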
- The calibration runs are applied every time changes are accepted.
- When accepting changes, the system behaves exactly as if the selected measurement were the first one ever observed: all previously recorded values are discarded from the corridor calculation and remain visible only in the chart. Selecting Accept Change instead of Accept Changes from Here uses the first measurement outside the corridor as the starting point for accepting changes.
Test Case Assignee: The test case assignee is the person responsible for the test performance. This person receives notification emails when a measure for a test case is outside the corridor for the number of tests configured in the System Profile - Test Automation settings.
Test Execution: A single execution of a test case, such as a unit test. It aggregates the measure values captured during the execution and provides additional statistics, such as the number of successful and failing executions.
Test Run: Represents a single test suite run and can be registered manually (using the REST API) or automatically. It aggregates all test executions launched as a separate process or assigned to a manually registered test run. It can also carry additional metadata such as version, build number, or a marker.
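As a minimal sketch of manual registration, the snippet below only builds the kind of version/build metadata payload a test run can carry; the field names and the idea of POSTing it are assumptions, so consult the AppMon Test Automation REST interface documentation for the actual endpoint and contract:

```python
import json

def build_testrun_payload(major, minor, build, category="unit", marker=None):
    """Assemble hypothetical test run metadata; field names are assumptions."""
    payload = {
        "versionMajor": major,
        "versionMinor": minor,
        "versionBuild": build,
        "category": category,
    }
    if marker is not None:
        payload["marker"] = marker  # e.g. a branch or pipeline label
    return payload

# The resulting JSON body would be POSTed to the Test Automation REST endpoint,
# which returns a test run ID that the Agents then attach to their executions.
body = json.dumps(build_testrun_payload("1", "4", "117", marker="nightly"))
```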