A modern web application is rarely used through a web frontend alone; it also exposes functionality that is consumed elsewhere, such as in mobile apps. Consider an e-commerce site today: a shop does not get its business exclusively from sales in its web shop, but also through mobile apps and through rich-client applications used to process orders in a call center. It may also expose certain functionality for other sites to use (for example, the OAuth APIs provided by Google).
In such an environment, it is important to realize that performance does not count only on the frontend. A shopping app on your phone can provide a great user experience only when the backend services it relies on are up and running and deliver results fast enough. This applies to smartphone apps and to any other software that consumes server-side Web APIs through SOAP or REST calls.
So how can we make sure our APIs actually live up to these performance expectations? Our blog series about performance metrics in Continuous Delivery has shown that starting to care about performance on the day of the first production deployment is too late! It is therefore important to have the proper tools in place to monitor performance continuously during development, and to include Web APIs in that monitoring.
Dynatrace 6.1 extends its lifecycle and testing capabilities with a new test category tailored for Web API testing. For these tests, we look at a number of relevant measures for each individual API call, calculate a baseline over historical data, and alert users to deviations from this baseline. The measures we analyze include response time, failure rate, and response size, so we can look at performance from the API users’ perspective: Am I getting data back for my calls in time? How does the response size change with different parameters? Does the API send a correct response code when parameters are missing or something goes wrong? These metrics are also relevant for production monitoring, where synthetic monitoring helps catch issues before your customers report them.
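To make the idea concrete, here is a minimal sketch in Python of the kind of per-call checks such a Web API test performs. The function and threshold names are illustrative assumptions for this example, not part of any Dynatrace API; a real test would feed measured values from actual API calls into checks like these.

```python
from dataclasses import dataclass

@dataclass
class ApiCallResult:
    """Measurements captured for one Web API call (illustrative)."""
    response_time_ms: float
    status_code: int
    response_bytes: int

def evaluate_call(result: ApiCallResult, baseline: dict) -> list:
    """Compare a single API call against baseline thresholds.

    Returns a list of violations; an empty list means the call
    stayed within its baseline.
    """
    violations = []
    if result.response_time_ms > baseline["max_response_time_ms"]:
        violations.append("response time above baseline")
    if result.status_code >= 400:
        violations.append(f"failure: HTTP {result.status_code}")
    if result.response_bytes > baseline["max_response_bytes"]:
        violations.append("response larger than expected")
    return violations

# Hypothetical baseline derived from historical data
baseline = {"max_response_time_ms": 500, "max_response_bytes": 64_000}

print(evaluate_call(ApiCallResult(120.0, 200, 10_240), baseline))  # []
print(evaluate_call(ApiCallResult(900.0, 200, 10_240), baseline))  # ['response time above baseline']
```

In a real setup the baseline would not be hand-written constants but calculated from historical measurements, as described above.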
However, we don’t want to stop at looking at the API from the outside. We also want to provide insight into the server-side processing: How many calls to a database or an internal web service are made per API call? How many exceptions are thrown? How many log messages are written? These metrics help us catch architectural regressions, so we can make sure that API performance was not adversely affected by changes in recent builds. By monitoring these numbers over time, we can continuously verify that our API meets our performance goals and catch potential problems early in the development lifecycle, giving developers the data they need to fix them.
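The server-side comparison described above can be sketched as a simple diff of per-call counts between builds. This is an assumption-laden illustration, not Dynatrace's implementation: the metric names, build data, and tolerance are made up for the example.

```python
def find_regressions(previous: dict, current: dict, tolerance: float = 0.2) -> dict:
    """Flag server-side metrics whose per-API-call count grew by more
    than `tolerance` (default 20%) relative to the previous build.

    Returns a mapping of metric name -> (old count, new count).
    """
    regressions = {}
    for metric, old_value in previous.items():
        new_value = current.get(metric, 0)
        # Skip metrics that were zero before to avoid division by zero
        if old_value and (new_value - old_value) / old_value > tolerance:
            regressions[metric] = (old_value, new_value)
    return regressions

# Hypothetical per-API-call counts measured for two consecutive builds
build_41 = {"db_calls": 3, "exceptions": 0, "log_messages": 5}
build_42 = {"db_calls": 12, "exceptions": 0, "log_messages": 5}

print(find_regressions(build_41, build_42))
# {'db_calls': (3, 12)}
```

Here the jump from 3 to 12 database calls per API call would be flagged as an architectural regression even if response times have not degraded yet, which is exactly the kind of early warning this monitoring aims for.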