Agile development practices have been widely adopted in R&D organizations. A core component is Continuous Integration, where code changes are continuously integrated and tested to achieve the goal of having "potentially shippable code" at the end of every Sprint/Iteration.
To verify code changes, agile team members write unit or functional tests that are executed against every build and every milestone. The results of these tests tell the team whether all features still work as expected and whether recent code changes have introduced a regression.
Besides functional verification, we can add performance and architectural validations. These let us answer additional questions such as:
- How much CPU and network bandwidth does it take to execute the search query?
- How many database statements are executed to retrieve the search result?
- Will product images be cached by the browser?
- Did the last code change violate any of these performance, scalability, or architectural rules?
Analyzing the execution of your existing tests allows you to answer these questions by verifying metrics such as the number of remoting calls against a defined architectural threshold. In addition to your JUnit or Selenium reports, you get a report telling you which rules have been violated:
[Figure: Automatic Baselining, Performance Validation of Unit and Browser Tests]
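To make the idea concrete, here is a minimal, self-contained sketch of such an architectural rule check in plain Java. `StatementCounter`, `searchProducts`, and the threshold of 5 statements are hypothetical illustrations, not dynaTrace or JUnit APIs; in a real setup the statement count would come from instrumenting the actual data access layer, and the assertion would live inside your existing test suite.

```java
import java.util.ArrayList;
import java.util.List;

public class ArchitecturalRuleCheck {

    // Records every SQL statement issued during a test run (hypothetical helper).
    static class StatementCounter {
        private final List<String> statements = new ArrayList<>();
        void record(String sql) { statements.add(sql); }
        int count() { return statements.size(); }
    }

    // Stand-in for the application code under test: a product search that
    // issues one extra statement per result (the classic N+1 query pattern).
    static void searchProducts(StatementCounter counter, int resultCount) {
        counter.record("SELECT id FROM products WHERE name LIKE ?");
        for (int i = 0; i < resultCount; i++) {
            counter.record("SELECT * FROM product_details WHERE id = ?");
        }
    }

    // Runs the search and returns how many statements it executed.
    static int runSearch(int resultCount) {
        StatementCounter counter = new StatementCounter();
        searchProducts(counter, resultCount);
        return counter.count();
    }

    public static void main(String[] args) {
        final int MAX_DB_STATEMENTS = 5; // architectural threshold for this use case
        int executed = runSearch(10);
        System.out.println("Statements executed: " + executed);
        if (executed > MAX_DB_STATEMENTS) {
            System.out.println("RULE VIOLATED: search exceeds the database statement budget");
        }
    }
}
```

Because a search returning 10 results executes 11 statements here, the check fails, which is exactly the point: the N+1 pattern is caught on the build that introduces it rather than under production load.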
Implementing this in Continuous Integration allows us to find many of the issues described in Top 10 Performance Problems early on, before they become real problems in production.
Want to know more? Read the full article on the dynaTrace Community Portal!