To view the Error analysis page:
- Select > Error analysis to display error data for all tests within a selected time frame.
- In the Operational summary page, click an Error value for a test to drill down to the Error analysis page for that test.
- In the Test overview page, select the Performance/Availability tile, then click View in error analysis under the error chart.
- In the Trend details page, click a data point with an error and select Error analysis from the popup window.
- In the Raw scatter page, click a value in the Errors column of the Summary tab below the scatter chart.
- In an Operations dashboard opened from the Dashboards page, click an Availability value that's less than 100%.
Use Error analysis to identify patterns of errors, focus on tests that have errors, and view information that can help you troubleshoot and resolve performance or availability problems. For a selected test or for all tests, you can view the availability percentage, error types and counts, and the error distribution over agent locations, tests, and test-run times. At a glance, you can identify the most frequently occurring errors, and the tests and locations with the most errors.
From the Error analysis page, you can go to the Error list to get more details about the errors. From that page, you can:
- Drill down to the Http summary page for an Availability test.
- Drill down through the Waterfall summary page for a Performance test to a waterfall chart for each step in the test execution.
- View screen captures if Screen Capture on Error is enabled for the selected Performance test.
Error analysis page
The Error analysis page displays:
- Aggregate test statistics.
- Circle charts for Error types, test Locations where errors occurred, and the Tests that have errors.
- An Error count chart that shows the number of errors over time.
When you drill down to the Error analysis page from the Operational summary page or Operations dashboard, the error statistics are calculated for the selected test. When you use the menu to go to the Error analysis page, the statistics are calculated based on all active tests that have errors.
These statistics are aggregated for the tests:
- Availability – The percentage of successful test executions, calculated as
(Total test executions – Failed test executions) / (Total test executions)
- Runs failing – The number of test runs that failed.
- Tests failing – The number of tests that failed.
- Runs total – The total number of test executions for all included tests.
- Tests total – The number of tests included in this Error analysis.
- Error types – The number of different error types that occurred in all the test executions.
- Locations failing – The number of measurement locations where test executions failed.
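The availability calculation above can be sketched in a few lines (an illustrative helper, not the product's API; the function name and the zero-run behavior are assumptions):

```python
def availability(total_runs: int, failed_runs: int) -> float:
    """Availability = (total executions - failed executions) / total executions,
    expressed as a percentage. Returns 0.0 when there are no runs (assumption)."""
    if total_runs == 0:
        return 0.0
    return (total_runs - failed_runs) / total_runs * 100

# 200 test executions with 5 failures -> 97.5% availability
print(availability(200, 5))
```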
The interactive circle charts show the top 10 items, by percentage, for Error types, measurement Locations with errors, and Tests with errors.
For example, the Locations graph shows the locations with the highest percentage of availability errors. The location name and the percentage of the total availability errors are shown for each segment. The segment size corresponds to the percentage of errors.
When the Error analysis page displays data for just one test, the Tests chart contains segments for the test's steps where errors occurred.
Click a chart segment to filter the page by that item, as described below.
If a circle chart has more than 10 items — for example, more than 10 Tests have errors — the top 9 items are graphed individually and the rest are grouped in a segment labeled Other. Click the Other segment to display a list of the items in the Other category. Click an item in the list to filter the Error analysis page.
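The top-10-plus-Other grouping described above can be sketched as follows (illustrative only; the product's actual aggregation logic is not documented here):

```python
from collections import Counter

def top_with_other(counts: dict, limit: int = 10):
    """Return at most `limit` (label, count) pairs, sorted by count.
    When there are more than `limit` items, keep the top limit - 1
    individually and merge the remainder into an "Other" entry."""
    items = Counter(counts).most_common()
    if len(items) <= limit:
        return items
    kept = items[:limit - 1]
    other = sum(count for _, count in items[limit - 1:])
    return kept + [("Other", other)]

# 12 error types: the top 9 are charted individually, the rest become "Other"
errors = {f"Error {i}": 13 - i for i in range(1, 13)}
segments = top_with_other(errors)
print(len(segments), segments[-1])
```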
The Error count chart at the bottom of the page shows the number of errors that occurred at specific times during the selected time frame. The chart's resolution depends on the time frame. For example:
- Last 1 hour – Every 15 minutes
- Last 48 hours – Every 6 hours
Hover over a bar to display the error count and exact time.
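The bar resolution amounts to flooring each error's timestamp to the start of a fixed-size bucket. A minimal sketch of that bucketing (assumed logic; the resolutions are taken from the examples above):

```python
from collections import Counter
from datetime import datetime, timedelta

def bucket_start(ts: datetime, resolution: timedelta) -> datetime:
    """Floor a timestamp to the start of its chart bar.
    Assumes the resolution divides a day evenly (e.g. 15 min, 6 h)."""
    day_start = ts.replace(hour=0, minute=0, second=0, microsecond=0)
    return day_start + (ts - day_start) // resolution * resolution

# Count errors per 15-minute bar, as in the "Last 1 hour" time frame
errors = [datetime(2024, 5, 1, 10, 7), datetime(2024, 5, 1, 10, 12),
          datetime(2024, 5, 1, 10, 40)]
bars = Counter(bucket_start(ts, timedelta(minutes=15)) for ts in errors)
print(bars[datetime(2024, 5, 1, 10, 0)])  # two errors fall in the 10:00 bar
```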
You can use the Error count chart to filter the data, as described below.
Filtering the error analysis data
You can filter the aggregate view by:
- Error types
- Measurement locations
When you filter the page, the filters are listed across the top of the page, with the exception of the time frame menu selection. If you drilled down from a selected test, the test name is listed as a filter.
Aggregate statistics are recalculated based on the filter. For example, when you filter by a Location, the statistics are aggregated for the tests that ran at that location.
Filtering by time frame
By default, the Error analysis page displays the time frame that's set for the Operational summary page.
To view error data for a specific time frame, select from the time frame menu at the top of the page.
Changing the time frame in the Error analysis page also changes it in the Operational summary page.
You can define a Quick or Custom time frame.
Filtering by error count timestamp
To focus on error data from test executions at a specific time, click the bar for that time in the Error count chart at the bottom of the page. When you filter the page this way, the filter is displayed at the top of the page.
Using the circle charts to filter
Click a segment of a circle chart to filter the page by that item. Chart filters are cumulative: you can filter the page by an error type, and a location, and a test/step.
Error types – By default, the error types are grouped into categories: Network errors, timeouts, HTTP errors, etc. When you click an error group, the circle chart displays the error types in that group; for example, Network may drill down to show that DNS Lookup Failures and Connection Timeouts occurred.
Locations – By default, this chart shows the continents where errors occurred. Drill down through countries and cities to the measurement locations (Backbone nodes, peer populations, mobile carriers/sites).
Tests – By default, the tests with errors are grouped into test types. Click a test type to drill down to the individual tests. The top 10 tests with errors are displayed; all other tests are grouped into "Other".
- Clicking a Mobile, Last Mile, or Private Last Mile test drills down to the batch groups that contain the test; clicking a batch group drills down to the test's steps. If the test is not in a batch group, or is in only one, clicking it drills down directly to the steps.
- Clicking a Backbone test drills down to the steps.
When a circle-chart filter is applied, the filter is listed at the top of the page and the test statistics are recalculated for the filtered tests.
As you apply each filter, the other circle charts are filtered automatically; for example, when you click a Location that ran only Mobile tests, the Tests chart is filtered to display only those Mobile tests.
Disable or remove filters
Click a filter to temporarily disable it; the text is dimmed to show the filter is disabled. Click the filter again to enable it.
Click the X on the right side of a filter to remove it.
If a circle-chart filter is applied, you can also click the center of the chart to remove the filter.
Error list
To view the Error list, click Inspect errors at the top right of the Error analysis page.
The Error list displays errors for the time frame selected in the Error analysis page. Inspect errors is available for a maximum 48-hour time frame:
- In the Quick time frame menu, select a time frame of Last 48 hours or less.
- In the Custom menu, select a start date within the last 45 days, with a maximum time frame of 48 hours.
If filters are applied to the Error analysis page, the same filters are applied to the Error list.
The error table lists the following information:
- Test – The test name, with icons that identify the test type and browser type.
- Step – The step in which the error occurred, if the test has more than one step.
- Error code – For more information, see Synthetic Classic return codes and error codes.
- Error – A brief description of the error.
- Test time – The date and time of the test execution.
To sort the table, click a column heading. By default, the table lists errors by Test time, from most recent to oldest.
Filtering the error list
To find specific errors in a long error list, you can filter the list by Error description, measurement Location, SCoE Available, Step name, and Test name.
Click the filter field to display the criteria list.
When you select a criterion, it is added to the field and a list of items matching that criterion appears. You can only select one item for each criterion.
Filters are cumulative. The list is immediately filtered when you add a criterion, so the next criterion you select displays only the items available in the filtered list. For example:
- The unfiltered list displays errors for a test consisting of three steps. When you filter the list for an error type and then select Step for the next filter criterion, the Step list only displays the step(s) where that error occurred.
- The selections for SCoE available are true (screen captures are available) and false. If none of the errors in the list have screen captures available, the selection list for SCoE available only displays false.
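This cumulative narrowing is a logical AND across criteria, where each new criterion's choices come from the already-filtered rows. A minimal sketch with made-up error rows (the field names and values are illustrative, not the product's data model):

```python
def apply_filters(rows, criteria):
    """Keep rows that match every selected criterion (logical AND)."""
    return [r for r in rows if all(r.get(k) == v for k, v in criteria.items())]

errors = [
    {"test": "Checkout", "step": "Login", "error": "DNS Lookup Failure"},
    {"test": "Checkout", "step": "Pay",   "error": "Connection Timeout"},
    {"test": "Home",     "step": "Load",  "error": "DNS Lookup Failure"},
]

filtered = apply_filters(errors, {"error": "DNS Lookup Failure"})
# The Step criterion now offers only steps present in the filtered rows
step_choices = sorted({r["step"] for r in filtered})
print(step_choices)  # ['Load', 'Login']
```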
Viewing error details
Click the expand icon for an error to see the geographic Location, the Site (node or peer population), and the number of Objects that failed.
If Screen Capture on Error is enabled in the test settings, a thumbnail of the first screen captured is displayed. Click the thumbnail to view the screen capture in a popup window.
Drilldown for data analysis
If SCoE is enabled for the test, click View screen capture to open the Screen capture page in a new browser tab. (You must allow popups for portal.dynatrace.com to be able to open the new tab.)
Click Http summary under the error details to go to the Http summary page for the test execution.