Performance dashboard

To go to the Performance dashboard, open the navigation menu and select Performance dashboard.

View the performance of your synthetic tests in the Performance dashboard. Use this dashboard to quickly determine the synthetic tests that are in a problem state and to view possible causes.

You can view data from selected dashboards and receive alert notifications on your mobile device through the Dynatrace Synthetic Mobile app, available for iOS and Android devices.

By default, the Performance dashboard displays the Top 10 Most Active Tests dashboard with data from the past 24 hours, excluding benchmark tests. While you cannot edit the tests for this dashboard, the Performance dashboard provides administrative pages to create and configure custom Performance dashboards, including dashboards with benchmark tests.

You can select a time frame for the displayed data, from Last 1 Hour through Last 48 Hours. The displayed time is based on the time zone configured for your account.

The information in the Performance dashboard refreshes automatically every five minutes. You can also refresh the data manually.

The dashboard displays aggregate and test-level data, where the most active tests are determined by the number of test runs, from the following test types:

  • Backbone
  • Last Mile
  • Private Last Mile
  • Mobile

Aggregate information

When the page is first opened, the dashboard displays aggregate information.

  • The left pane displays the test summary and test legend, which provide at-a-glance status of the tests. Tests are grouped in this pane by status: Severe, Warning, and Good. If alerts were triggered, the number of alerts is displayed at the top right corner of the test thumbnail.
  • The health map in the upper right pane provides a geographic view of performance, as determined by response time and availability.
  • The bottom pane displays these tabs:
    • Alerts – All alerts by type for the tests in this dashboard. This tab only appears if at least one test in the dashboard is a Backbone, Mobile, or Private Last Mile test. (Alerts aren't available for Last Mile tests.)
    • Errors – A list of error categories. Select a category to view the tests with this error.

Test information

Select a test in the Test pane on the left to view test-level information.

  • The health map in the upper right pane shows a geographic view of response time and availability for the selected test. For Backbone tests, you can choose to display Node or Host locations. The severity status of each location is indicated by its color.
  • The performance chart next to the health map shows the average response time as a line graph and availability as a bar graph.
  • These tabs in the bottom pane provide detailed information for this test:
    • Analysis – The availability and response time status of the test. The Availability pane lists the error categories: the error icon means errors occurred in the test runs; a green checkmark means no errors occurred. For Backbone tests, this pane also shows the status of the nodes where tests ran. The Response Time pane shows the status of the contributor groups and hosts, if any.
    • Alerts – The alerts, if any, for this test. The table lists the alert type, date/time, progression (e.g. Initial or Condition Improved), and the severity level (Warning, Severe, Improved). This tab appears for Backbone, Mobile, and Private Last Mile tests only.
    • Errors – All errors that occurred in test runs. The table lists the error type, category (e.g. Network or Content Match), and date/time.
    • Steps – Data by step. The table lists the number of errors, availability, and average response time for each step. Select a step to display a Performance chart for the step. This tab only appears if a test has multiple steps.

As of the 2017.08.09 release of the Dynatrace Portal, the Third party tab of the Performance dashboard is retired. Data for third-party objects is available in the Test overview page and the waterfall chart.

Legend and navigation

Use the test legend in the left pane to navigate the dashboard:

  • Click the test summary at the top to view aggregate information for this dashboard.
  • Click a thumbnail in the test list to view information for that test.

Test summary

The test summary provides the following information:

  • Number of tests in this dashboard
  • Test status bar – The status category of the tests:
    • Red – Severe
    • Orange – Warning
    • Green – Good
    • Gray – No data
    The width of each color in the bar is determined by the number of tests with that status.

Hover over a bar to view the number of tests with that status.

Click a color to display the test list with only tests of that category shown; all other categories are collapsed.

Click one of the following icons:

  • Add tests – Add or remove tests from a dashboard. This option is not available for the Top 10 Most Active Tests dashboard.

  • Configuration settings – Edit test thresholds. This option is not available for the Top 10 Most Active Tests dashboard.

  • Views – View the dashboard by:

    • Test status
    • Test name
    • Test type

Test list

The test list contains thumbnails for all tests in the dashboard.

Select a test to view test-level information.

By default, tests are grouped by their status. You can list tests by name or by test type by using the Views menu described above. You can collapse and expand the test categories:

  • Click the collapse icon next to a category label to hide the tests in that category.
  • Click the expand icon to show the tests.

For each test, the following information is shown:

  • Test status bar – The color of the left side of the thumbnail shows the status of this test:

    • Red – Severe
    • Orange – Warning
    • Green – Good
    • Gray – No data
      The status is determined by the metric (response time or availability) with the worse status; for example, if the response time is Severe and availability is Good (green icon), the test status bar is red for Severe (see the sketch after this list). See Response time and availability for more information.
  • The test type, identified by the icon in the top right corner:

    • Backbone
    • Last Mile
    • Private Last Mile
    • Mobile
  • Test name – If the name is truncated, hover over it to view the full test name in a tooltip.

  • Alert status – If the test has any active alerts, a number in the upper right corner shows the number of alerts.
    Hover over the value to view the following for each alert:

    • Alert type
    • Alert time
    • Level
      Click the value to go to the Alerts tab for the test.
  • Metric icons – Icons showing the status of the response time and availability metrics. See Response time and availability below.

  • A chart for the selected time period:

    • Bar graph – Availability
    • Line graph – Response time
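
The worse-of-the-two rule for the test status bar can be expressed as a small comparison. Below is a minimal Python sketch; the status names and the severity ordering are illustrative assumptions, not part of the product:

  # Hypothetical severity ordering; names are illustrative only.
  SEVERITY_ORDER = {"no data": 0, "good": 1, "warning": 2, "severe": 3}

  def test_status(response_time_status, availability_status):
      # The status bar takes the worse of the two metric statuses.
      return max(response_time_status, availability_status,
                 key=lambda s: SEVERITY_ORDER[s])

  # Example: Severe response time and Good availability yield a red (Severe) bar.
  assert test_status("severe", "good") == "severe"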

Response time and availability

The test list displays the status of response time and availability.

For each test, the average response time and availability are compared to thresholds. For the Top 10 Most Active Tests dashboard, the thresholds are:

  • Response Time:
    • Warning – 4 seconds
    • Severe – 7 seconds
  • Availability:
    • Warning – 98%
    • Severe – 95%

You can select the response time and availability thresholds for custom Performance dashboards.
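
As a rough illustration, these thresholds can be thought of as a small per-dashboard configuration. The following Python sketch models them as a data structure; the class and field names are hypothetical, not a Dynatrace API:

  from dataclasses import dataclass

  @dataclass
  class DashboardThresholds:
      # Hypothetical container for the per-dashboard thresholds described above.
      response_time_warning_s: float   # seconds
      response_time_severe_s: float    # seconds
      availability_warning_pct: float  # percent
      availability_severe_pct: float   # percent

  # Default values for the Top 10 Most Active Tests dashboard.
  TOP_10_DEFAULTS = DashboardThresholds(
      response_time_warning_s=4.0,
      response_time_severe_s=7.0,
      availability_warning_pct=98.0,
      availability_severe_pct=95.0,
  )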

Response time

Individual test runs are compared to the thresholds to determine whether each test run has a status of good, warning, or severe. Then the following rules are applied in order; the icon is determined by the first rule that applies.

  1. If no tests ran during the time frame, the icon is gray (no data).
  2. If at least one test ran, but there were no successful test runs, the icon is red (severe).
  3. If more than 50% of all test runs had a response time that surpassed the Severe threshold, the icon is red (severe).
  4. If more than 80% of all test runs had a response time less than the Warning level, the icon is green (good).
  5. If none of the above rules apply, the icon is yellow (warning).

The 50% and 80% thresholds used in Step 3 and Step 4 are configurable on an account basis in the Account Details page.
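
The rule ordering above is a first-match-wins cascade. The following minimal Python sketch applies the rules to a list of test runs, assuming the default 4-second/7-second thresholds and the configurable 50%/80% percentages; all names are illustrative:

  def response_time_status(runs, warning_s=4.0, severe_s=7.0,
                           severe_pct=50.0, good_pct=80.0):
      # `runs` is a list of (response_time_seconds, succeeded) tuples for the
      # selected time frame; the percentages mirror the account-level settings.
      if not runs:                                # Rule 1: no test runs
          return "no data"
      if not any(ok for _, ok in runs):           # Rule 2: no successful runs
          return "severe"
      total = len(runs)
      over_severe = sum(1 for t, _ in runs if t > severe_s)
      if over_severe / total * 100 > severe_pct:  # Rule 3: >50% over Severe
          return "severe"
      under_warning = sum(1 for t, _ in runs if t < warning_s)
      if under_warning / total * 100 > good_pct:  # Rule 4: >80% under Warning
          return "good"
      return "warning"                            # Rule 5: everything else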

Availability

Availability is defined as:

(Successful test runs / Total test runs) × 100

In a successful test run, each page in the test has an HTTP response status code of 200.

Individual test runs are compared to the thresholds to determine whether each test run is good, warning, or severe. Then the following rules are applied in order; the icon is determined by the first rule that applies.

  1. If no tests ran during the time frame, the icon is gray (no data).
  2. If availability is greater than 98%, the icon is green (good).
  3. If availability is less than or equal to 98% and greater than 95%, the icon is yellow (warning).
  4. If availability is less than or equal to 95%, the icon is red (severe).
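
Putting the formula and the rules together, the following minimal Python sketch computes availability and maps it to a status, assuming the default 98%/95% thresholds; the names are illustrative:

  def availability_status(successful_runs, total_runs,
                          warning_pct=98.0, severe_pct=95.0):
      # Availability = (successful runs / total runs) * 100.
      if total_runs == 0:              # Rule 1: no test runs
          return "no data"
      availability = successful_runs / total_runs * 100
      if availability > warning_pct:   # Rule 2: good
          return "good"
      if availability > severe_pct:    # Rule 3: warning
          return "warning"
      return "severe"                  # Rule 4: severe

  # Example: 97 successful runs out of 100 is 97% availability, i.e. Warning.
  assert availability_status(97, 100) == "warning"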

Drilldown for data analysis

You can drill down from the Performance dashboard through various analysis workflows, depending on the starting point, to the Waterfall summary page and from there to the waterfall chart. For details, see Analysis Workflow.