The Error analysis page offers an interactive view of the success of a selected test or all your tests taken together. For a given time range, you can view the availability of all your tests, the availability error types and counts, and error distribution over agent locations. At a glance, you can identify the most frequently occurring errors, tests, and locations with the most errors. Use Error analysis to troubleshoot tests or locations failing consistently or frequently occurring errors.
You can access the Error analysis page via the Analyze section of the menu.
On the Error analysis page, you can:
- Review aggregate statistics for the selected tests, locations, and error types.
- Drill down into the interactive circle graphs for Errors, Nodes (agent locations), or Tests (measurements) to filter by multiple factors, isolate a problem test or agent location, or get details on the incidence of an error.
- Review the Error count over time and choose one of the time intervals to view Error analysis for it.
Click Inspect errors to drill down from the Error analysis page to the Error list, a complete list of errors and run details for the chosen filters. From the Error list, you can view screen captures and drill into a page-level summary and waterfall graphs for a run.
You can adjust the Error analysis time range in the same way as for the Operational summary.
Click the time range icon and select a new custom or preset time range.
Preset ranges are relative to the current time and vary from the last 1 hour to the last year, with additional options for the current calendar day (Today) or previous calendar day (Yesterday).
You can specify a custom relative time range, that is, the number of hours, minutes, or days to look back, either from the current time or from a point a specified number of hours, minutes, or days before the current time.
For example, suppose you configure the relative time range as follows:
- Time range – 6 Hrs
- Ending – Custom
- Shift to past – 30 Min
At 12:40, the dashboard displays data from 6:10 through 12:10.
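The arithmetic behind a shifted relative range can be sketched as follows. This is purely illustrative: the function and parameter names are assumptions, not the product's API.

```python
from datetime import datetime, timedelta

def relative_window(now, range_hours=6, shift_past_minutes=30):
    """Compute the displayed window for a relative time range.

    'Shift to past' moves the window's end back from the current time;
    the time range then determines how far back the window starts.
    """
    end = now - timedelta(minutes=shift_past_minutes)
    start = end - timedelta(hours=range_hours)
    return start, end

# Example from the text: at 12:40, a 6-hour range shifted 30 minutes
# into the past covers 6:10 through 12:10.
start, end = relative_window(datetime(2024, 1, 1, 12, 40))
print(start.strftime("%H:%M"), end.strftime("%H:%M"))  # 06:10 12:10
```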
You can also specify absolute start and end dates and times in five-minute intervals.
The custom time you select is your local time. However, all times in the portal are displayed in the time zone configured for your account. For example, if your machine is in the Pacific Time zone but the account time zone is Eastern Time (three hours ahead), when you select 14:00 and 16:00 as the custom range, the displayed time range is 17:00 through 19:00.
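The time-zone conversion described above can be reproduced with standard zone arithmetic. The zone names below are assumptions chosen to match the Pacific/Eastern example, not product settings.

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# A custom range is entered in local (here, Pacific) time but displayed
# in the account's time zone (here, Eastern, three hours ahead).
local = ZoneInfo("America/Los_Angeles")
account = ZoneInfo("America/New_York")

selected_start = datetime(2024, 6, 1, 14, 0, tzinfo=local)
selected_end = datetime(2024, 6, 1, 16, 0, tzinfo=local)

displayed = [t.astimezone(account).strftime("%H:%M")
             for t in (selected_start, selected_end)]
print(displayed)  # ['17:00', '19:00']
```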
Changing the time range in the Error analysis page also changes it in the Operational summary page. The time range selected when you log out will be applied the next time you log in.
Aggregate error, test, run, and location statistics are displayed at the top of the page and are updated as you filter the interactive circle graphs by drilling down into them.
- Availability - Aggregate availability across included tests
- Runs failing - Number of runs with availability errors
- Tests failing - Number of unique tests with availability errors
- Runs total - Number of runs for all included tests
- Tests total - Number of unique tests deployed
- Error types - Number of unique errors generated
- Locations failing - Number of agent locations where tests have availability errors
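The relationships among these statistics can be sketched from a set of run records. The record shape and field names below are assumptions used only to illustrate how the aggregates relate to one another.

```python
# Hypothetical run records: each run has a test, an agent location,
# and an availability error (or None if the run succeeded).
runs = [
    {"test": "Checkout", "location": "Paris", "error": "DNS lookup failure"},
    {"test": "Checkout", "location": "Tokyo", "error": None},
    {"test": "Login", "location": "Paris", "error": "Connection timeout"},
    {"test": "Login", "location": "Dallas", "error": None},
    {"test": "Search", "location": "Tokyo", "error": None},
]

failing = [r for r in runs if r["error"]]
stats = {
    "Runs total": len(runs),
    "Runs failing": len(failing),
    "Tests total": len({r["test"] for r in runs}),
    "Tests failing": len({r["test"] for r in failing}),
    "Error types": len({r["error"] for r in failing}),
    "Locations failing": len({r["location"] for r in failing}),
    # Availability: the share of runs without availability errors.
    "Availability": f"{100 * (1 - len(failing) / len(runs)):.0f}%",
}
print(stats)
```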
Interactive circle graphs show the top ten items by percentage for Error types, agent Locations with errors, and Tests with errors. The segments of each circle graph add up to 100%. Segment size corresponds to the percentage of errors.
The Error types graph displays error categories, from which you can drill down to the specific error types triggered. The Locations graph shows the continents whose agent locations have the highest percentage of availability errors; drill down through countries and cities to agent sites. Tests are grouped by type (TxP, MWP, or ApP), from which you can drill into specific tests and pages.
If a circle graph has more than ten items, it shows segments for the top ten items, while the Other category groups the rest. For example, there are more than ten TxP tests in the Tests graph below: it displays the top ten tests plus the Other category.
Click Other to see the list of other tests with availability errors. A pop-up window lists these tests; the number of tests and their percentage of availability errors are displayed in the header. Click any test in this list (or in the circle graph) to filter Error analysis by it.
Drilling down, or filtering circle graphs
Select a segment of any circle graph to filter Error analysis by it. For example, if you notice that a single category accounts for a large percentage of availability errors, click it to filter Error types by it.
When you select a category, the Error types graph displays an additional layer of segments representing specific errors triggered. For the selected error category, you will see the corresponding Locations and Tests.
At the top of the page, test statistics are updated to match your filters.
Any time you filter by an error, Tests failing and Tests total are the same and Availability is 0%, since only runs with that error are included.
A filter breadcrumb at the top of the page shows the factor you have filtered by. Click the breadcrumb text to toggle the filter off and on; click its dismiss icon to clear the filter. You can also click the selected segment in the circle graph to clear the filter. Disabling and clearing a filter have the same effect, except that you can re-enable a disabled filter.
You can apply multiple filters in sequence. If you select a continent (Location) and then the largest error category, the result shows (a) the error types for the selected category in (b) the countries of the selected continent. Tests are also filtered accordingly.
The breadcrumb trail shows all the selected filters. You can disable or dismiss individual filters or use Clear all to remove them together. Note that if you filter by a test and then a test step, a single breadcrumb shows the step name.
The Error count chart at the bottom of the page shows the number of errors that occurred for specific intervals during the selected time range. The size of the interval depends on the time range. For example:
- If the time range is Last 1 hour, the interval size is 15 minutes.
- If the time range is Last 48 hours, the interval size is 6 hours.
Hover over a bar to display the error count for an interval ending at the time displayed.
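The bucketing behind the Error count chart can be sketched as follows, using the 15-minute interval that applies to a Last 1 hour range. The function and timestamps are illustrative assumptions, not the product's implementation; buckets here are keyed by interval start.

```python
from collections import Counter
from datetime import datetime

def bucket_15min(ts):
    """Truncate a timestamp to the start of its 15-minute interval
    (the interval size used for a 'Last 1 hour' range)."""
    return ts.replace(minute=ts.minute - ts.minute % 15,
                      second=0, microsecond=0)

# Hypothetical error timestamps within a one-hour range.
errors = [datetime(2024, 1, 1, 10, 7), datetime(2024, 1, 1, 10, 22),
          datetime(2024, 1, 1, 10, 55)]
counts = Counter(bucket_15min(t) for t in errors)
print({k.strftime("%H:%M"): v for k, v in sorted(counts.items())})
# {'10:00': 1, '10:15': 1, '10:45': 1}
```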
Drilling down, or filtering error count
You can use the Error count chart to filter the data. Select any bar, or time interval, to filter Error count and Error analysis by it.
A filter breadcrumb shows the exact time period filtered on, even if you selected a longer time range.
Depending on your initial time range for Error analysis, and hence the Error count interval size, you can drill into smaller, more specific intervals. For example, if you drill into an initial 6-hour Error count interval (see the image above), you can drill further into 30-minute and then 15-minute intervals. The Error analysis circle graphs, test statistics, error count, and error list are updated to match the chosen interval.
Clicking Inspect errors at the top right of the Error analysis page provides a detailed listing of each error run.
The Error list displays the following information:
- Test and step, if any, with error
- Error code and type
- Agent location
- Time stamp
Click Aggregate view to return to the Error analysis page.
Inspect errors is available when you are viewing data collected within the last 45 days, with a maximum 48-hour time range.
Click the expand icon for a run to see additional details such as:
- Location (city) of the agent
- Agent site within the city
- Number of objects downloaded
Click the thumbnail of the screenshot to see a full-sized image in a pop-up window.
Click View waterfall to go to the Waterfall summary page for the test execution. From the waterfall summary, you can drill down to the waterfall graph for a step.