The number of Backbone alerts generated within the given time period. Critical alerts are those with a Severe status.
For month-to-date reports, the number of measurements that were available for the number of days that have elapsed in the month. For previous month reports, the number of measurements that were available for the month of the report. For terms greater than one month, allowance is equal to:
(number of measurements remaining) / (number of months left on the contract)
Also called Step success rate. The percentage of steps conducted during a specific period of time that completed successfully. Calculated as:
(Number of Successful Steps) / (Number of Successful Steps + Number of Failed Steps)
The percentage of tests conducted during a specific period of time that completed successfully. Calculated as:
(Number of Successful Tests)/(Number of Successful Tests + Number of Failed Tests)
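The two availability formulas above can be sketched in a few lines of Python; the counts here are made-up example values, not data from any real report.

```python
# Hypothetical counts of successful/failed steps and tests for one time period.
successful_steps, failed_steps = 970, 30
successful_tests, failed_tests = 480, 20

# Availability as a percentage, per the formulas above.
step_availability = successful_steps / (successful_steps + failed_steps) * 100
test_availability = successful_tests / (successful_tests + failed_tests) * 100
```

With these counts, step availability is 97.0% and test availability is 96.0%.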
Average daily allowance
The average number of measurement units per day that can be consumed to stay within purchased measurements for the term. The value is calculated as:
(total plan measurements) / (number of days in all the months of the term)
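A minimal sketch of the calculation above, assuming the term is given as a list of (year, month) pairs; the plan size and term here are illustrative only.

```python
import calendar

def average_daily_allowance(total_plan_measurements, term_months):
    """term_months: list of (year, month) tuples covering the plan term."""
    # Sum the calendar days in every month of the term.
    days_in_term = sum(calendar.monthrange(y, m)[1] for y, m in term_months)
    return total_plan_measurements / days_in_term

# Example: a 90,000-measurement plan over Jan-Mar 2024 (31 + 29 + 31 = 91 days).
allowance = average_daily_allowance(90_000, [(2024, 1), (2024, 2), (2024, 3)])
```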
The average response time for the past 90 days for each test.
A network component, the time (in seconds) that it takes to connect to a web server across a network. This provides an excellent measure of the network round-trip delay due to network traffic.
The number of TCP connections opened to the IP addresses returned by the DNS lookup. Typically, each host can have multiple connections.
The consistency ranking indicates how variable the response times are for a single web page or a multi-step business process. It is calculated based on the standard deviation of response times for successfully completed requests. The lower the standard deviation, the higher the consistency ranking for the site.
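The definition above says the ranking is based on the standard deviation of successful response times; how the deviation maps to a ranking is not specified, but the spread itself can be computed as below (sample response times are invented for illustration).

```python
from statistics import pstdev

# Response times (seconds) of successfully completed requests.
# A lower standard deviation means a higher consistency ranking.
response_times = [1.8, 2.0, 1.9, 2.1, 2.0]
spread = pstdev(response_times)  # population standard deviation
```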
A network component, the time (in seconds) required to receive the content of a page or page component, starting with the receipt of the first content and ending with the last packet received.
Daily available to meet plan
The average number of measurement units an account can use daily to finish the plan term without overage. This metric is calculated once daily by dividing the measurements remaining at the start of the current day by the number of days remaining in the month (including the current day). This metric is not reported for past months.
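The once-daily calculation described above reduces to a single division; the unit counts below are hypothetical.

```python
def daily_available_to_meet_plan(measurements_remaining, days_remaining_incl_today):
    # Measurements remaining at the start of the current day, spread over the
    # days remaining in the month (the current day counts).
    return measurements_remaining / days_remaining_incl_today

# Example: 12,000 units left with 10 days (including today) remaining.
daily_budget = daily_available_to_meet_plan(12_000, 10)
```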
A network component, the time (in seconds) it takes to translate the host name into the IP address — often done by a third party. DNS response times that are consistently longer than two seconds typically indicate that one of the DNS servers is not responding.
W3C metric: The time elapsed from the start of page navigation to completion of page content processing.
W3C metric: Time from the start of the page navigation to DOM Interactive (current document status: interactive).
Also called the host. A unique name that identifies a website (for example, mywebsite.com).
End of month usage projection
Measurement usage estimate, calculated by projecting the previous day’s usage as the daily usage for the current day and the rest of the month.
Usage projections are not reported for past months.
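One reading of the projection above: yesterday's usage is assumed to repeat for today and every remaining day of the month. A sketch under that assumption, with invented numbers:

```python
def end_of_month_projection(usage_so_far, previous_day_usage, days_left_incl_today):
    # Project the previous day's usage forward over the rest of the month.
    return usage_so_far + previous_day_usage * days_left_incl_today

projected = end_of_month_projection(usage_so_far=40_000,
                                    previous_day_usage=1_500,
                                    days_left_incl_today=8)
```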
Also called failing objects. An object download fails when the agent was unable to download the specified object for one of the following reasons:
- The agent connected to the server the object allegedly resides on but was unable to find the object.
- The agent identified the server but could not connect to it.
- The agent could not find the server because the DNS lookup failed.
A test may be reported as successful even though one or more objects failed.
The total number of steps during the time period that did not complete successfully. A step is considered "failed" when any of these conditions exist:
- Zero objects downloaded with HTTP response status code of 200 (successful)
- Content match failure
- Byte limit failure
- User script failure
The total number of tests during the time period that did not complete successfully. A test is considered "failed" when a step within the test fails.
The failed test executions for a test as a percentage of total test executions:
(failed test executions) / (total test executions) * 100
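The percentage above, computed directly (the counts are illustrative):

```python
# Hypothetical execution counts for one test over the reporting period.
failed_executions, total_executions = 12, 400
failure_rate = failed_executions / total_executions * 100  # percent
```

With these counts the failure rate is 3.0%.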
W3C metric: The time elapsed from immediately before the check for any relevant application caches (if HTTP GET or an equivalent is used) or the time when the resource is fetched, until immediately before the DNS lookup occurs.
First (1st) byte time
A network component, the time it takes to receive the first byte of the page HTML, graphic object, or other web component after the TCP connection is completed. Overloaded web servers often have a long first byte time.
First paint time
W3C metric: The time from the start of page navigation to when page elements are first displayed.
The number of web servers that host content accessed by the tested web page. Host metrics can combine data from multiple host IP addresses.
The number of HTTP response status codes from 200 through 298 returned during the selected time frame. These codes indicate the action by the client was received, understood, accepted, and processed successfully.
The number of HTTP response status codes from 300 through 399 returned during the selected time frame. These codes indicate that the client must take additional action to complete the request.
This metric is sometimes called 300 Objects. The number of HTTP response status codes from 300 through 399 returned during the selected time frame. This is the (arithmetic) average number per test across all test executions. A status code of 3xx indicates that the client must take additional action to complete the request.
The total number of response status codes 400 or higher returned during the selected time frame. These include client, server, network, internal, and timeout errors.
Total number of kilobytes downloaded from the initial request until the last connection closes. The metric in this report is the (arithmetic) average per test across all test executions. Calculated as:
(Bytes Downloaded) / 1024
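The conversion above, sketched with an example byte count (any total will do):

```python
bytes_downloaded = 1_572_864           # total bytes for one test execution
kb_downloaded = bytes_downloaded / 1024  # kilobytes, per the formula above
```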
W3C metric: Time from start to end of the load event.
Where a particular test was run. Depending on the test type, this can be called a site, node, or peer population.
Total number of measurements remaining in the current plan term. This metric is not reported for past months.
Month to date allowance
The number of measurements that would be consumed if usage were equal to the Average Daily Allowance on each day.
Month to date usage
The number of measurements consumed during the current month. Measurement units are recorded hourly, so the Month to Date Usage will vary depending upon when the report is run. If usage is higher than the allowance, the usage will be reported in orange text. If usage is higher than the total plan measurements, the usage will be reported in red text and the overage will be reported.
Monthly measurement trend
The Monthly Measurement Trend tables report usage for past months. If the report is run before the plan end date, this table will include usage, average allowance, overage units (if any) and overage cost for each month in the plan up to 12 months. If the report is run outside of the plan term start and end dates, a maximum of 12 months will be reported, but there will be no allowance or overage metrics.
Network component metrics
Metrics that show how network time is spent, giving more insight into the time needed to download a web page. Times for each component are calculated as follows: each individual test execution sums the total time for a particular component within that execution, inclusive of all objects and connections. An average of those sums is then charted over the requested time breakdown. Network components include:
- DNS time
- Connect time
- SSL time
- First byte time (1st byte time)
- Content time
An object is a single downloaded file such as HTML page source, a GIF image, a Java application, or a response status code header.
In reports, the Total Objects metric comprises Successful Objects (response codes 200-298), Failed Objects (response codes 400 to 20099), and objects that were partially downloaded but the test initiated the next navigation before the download was complete (response code 299). This metric is the (arithmetic) average of the count in successful test executions during the time period. Object counts from failed steps are not included in the averages.
In interactive charts, the Total Objects metric comprises Successful Objects (response codes 200-298), 300 Objects, Failed Objects (response codes 400 to 20099), and objects that were partially downloaded but the test initiated the next navigation before the download was complete (response code 299). This metric is the (arithmetic) average of the count in successful test executions during the time period.
Percent of total successful steps that accessed content from this host.
Cost of each additional overage unit. The calculated cost may have higher decimal precision than the value displayed in the report.
The number of measurement units consumed in excess of the allowed measurements for the term length.
Page composition metrics
A set of metrics that indicate the complexity of a given web page. These include:
- KB (downloaded)
Plan end (usage)
Contract end date as recorded in the Usage system. Often this is reported as of the end of a month.
Total number of measurements that can be consumed over the specified term length. Plan measurements comprise base measurements and promotional measurements.
Plan start (usage)
Contract start date as recorded in the Usage system. Often this is reported as of the start of a month.
W3C metric: The time elapsed from the start time of a URL fetch that initiates a redirect until the last byte of the last redirect response is received.
Report time frame (usage report)
All usage time references are reported in Coordinated Universal Time (UTC).
W3C metric: The time elapsed from when the browser sends the request for the URL until the time just after the browser receives the first byte of the response.
For full-object tests, and for Firefox and Chrome no-object tests: The time, as measured in seconds, from when a user clicks on a link to when the content is completely downloaded. This includes the time to collect all objects on all steps of the test, including graphics, frames, third party content from offsite servers, and redirection.
For Internet Explorer no-object tests: the time, as measured in seconds, from when a user clicks on the link to when the root object is downloaded.
Response time average
The arithmetic mean of response times for all successful tests or steps in the selected time period.
Response time baseline
The average response time for the test, calculated from the test executions for the past 90 days.
Response time distribution
The percentage of test executions that have response times within the given response time ranges.
Response time (domain/host)
The time, as measured in seconds, to download all of the objects from this location.
Response time maximum
The longest response time for all successful tests in the selected time period.
Response time median
The middle value for response time for all successful tests/steps in the selected time period. If there is an even number of test executions, it is the average of the two middle numbers.
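The median rule above (middle value, or the average of the two middle values for an even count) can be sketched as:

```python
def response_time_median(times):
    # Middle value; with an even count, the average of the two middle values.
    ordered = sorted(times)
    n = len(ordered)
    mid = n // 2
    if n % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

median_odd = response_time_median([2.1, 1.8, 2.5])        # middle value
median_even = response_time_median([2.1, 1.8, 2.5, 1.9])  # mean of two middle values
```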
Response time minimum
The shortest response time for all successful tests in the selected time period.
Response time percentile
The response time below which X% of the response time measurements fall.
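The source does not say which percentile convention the report uses; the sketch below uses the nearest-rank method, one of several common choices, with invented sample times.

```python
import math

def response_time_percentile(times, pct):
    # Nearest-rank percentile: smallest value such that pct% of
    # measurements are at or below it.
    ordered = sorted(times)
    rank = math.ceil(pct / 100 * len(ordered))
    return ordered[max(rank, 1) - 1]

p90 = response_time_percentile(
    [1.2, 1.5, 1.8, 2.0, 2.2, 2.4, 2.6, 3.0, 3.5, 9.0], 90)
```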
Service level thresholds
Values entered by the user or calculated automatically by the report, used to categorize performance metrics as Good, Warning, or Severe.
A network component, the time (in seconds) it takes a client to send a request to connect to the server, the server to send the signed certificate, and the client to complete the SSL handshake with the server. When the machines that provide SSL termination at your website are overloaded, SSL times will increase.
Step success rate
See Availability (step).
The number of steps that executed successfully (did not fail) during the selected time period. Called valid steps in some reports.
Number of months over which purchased measurements can be used.
Term to date allowance
The number of measurements that would be consumed if usage were equal to the Average Daily Allowance on each day of the term to date.
Term to date usage
The number of measurements consumed during the current term. Measurement units are recorded hourly, so usage for the current month will vary depending upon when the report is run. If usage is higher than the allowance, the usage will be reported in orange text. If usage is higher than plan measurements, the usage will be reported in red text and the overage will be reported.
The number of test executions for the test during the time covered by the report.
Whether the test is currently active or inactive.
Test success rate
The successful test executions as a percentage of the total executions. Calculated as:
(successful test executions) / (total test executions) * 100
The kind of testing location. Options are Backbone nodes, Mobile nodes, and Last Mile or Private Last Mile peers.
Rate of content delivery from a given host. Calculated as:
(kilobytes delivered) / (response time)
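The throughput formula above in code, with hypothetical values for one host:

```python
kilobytes_delivered = 640.0   # content delivered by this host
response_time_s = 2.5         # seconds spent downloading it
throughput = kilobytes_delivered / response_time_s  # KB per second
```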
The total number of errors that occurred in all steps or test executions.
Total page load time
W3C metric: The time elapsed from immediately after the prompt to unload the previous document occurs until the load of the current document ends.
The number of steps executed in the selected time period.
Also called total test runs or total test executions. The number of test executions in the selected time period. Calculated as:
(successful tests) + (failed tests)
The kind of synthetic test being run: Backbone, Mobile, Last Mile (LM), or Private Last Mile (PLM).
W3C metric: The time elapsed from immediately before the start of the unload of the previous document until the time immediately after the unload finishes.
The number of measurements consumed during the month of the report. Measurement units are recorded hourly, so the reported usage will vary depending on what time of day the report is run.
The number of steps that executed successfully (did not fail) during the selected time period. Called successful steps in some reports.
The number of tests that completed successfully in the selected time period.
The XF measurements used for the reported item during the reporting period. The number of XF measurements consumed by a test depends on the test type, configuration settings, number of steps, number of locations, and amount of data transferred.
The XF measurements used by a test that was executed during the 24-hour period before the report was generated.