Enforce Development Best Practices
Chapter: Performance Engineering
In the previous sections of this chapter we have taken a detailed look at the technical aspects of performance engineering in development and as part of continuous integration. In this section we discuss the most common problem patterns and consider the organizational and conceptual aspects of each.
Plan and Define Quality
Quality starts with a well-thought-out definition of new features, along with a detailed description of the features and capabilities: the new-feature story. Software architects should review every story before it becomes part of a development sprint, enforcing the following additional requirements:
- Testability: Every new piece of code must be testable, and as a result of implementing the story, engineers must produce unit and functional tests. These tests help verify the functional correctness of the implementation. If you use code-coverage tools, it is recommended that you also specify a required code-coverage percentage for these tests.
- Architectural rules: Developers must make sure that their code adheres to the defined architectural rules, such as "No duplicate database query for one operation" or "Do not transfer more than 100 KB per remoting call." Having unit and functional tests allows you to automate rule validation, as discussed earlier in this chapter.
- Performance requirements: A new feature must have performance requirements, such as "The Save operation must not take longer than 500 ms with 100 concurrent users on the system." It is important to define both the performance target and the load conditions under which it must be met.
- Documentation: Code and end-user documentation improve quality. Enforcing a high level of code documentation allows developers to better understand what the code is supposed to do and, as a result, to make better code changes in the future. End-user documentation makes it possible to test new features as they're expected to be used. This enables better use-case testing and ensures that the functional quality of the product is high.
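Architectural rules like those above lend themselves to automated checks. The sketch below is a hypothetical example, not an actual Dynatrace implementation: it assumes the SQL statements issued by one operation have been recorded (for instance via a wrapped JDBC driver) and validates the "no duplicate database query" and "no more than 100 KB per remoting call" rules in a plain unit-test-style check.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch: validate architectural rules against a recorded
// trace of the SQL statements one operation executed.
public class ArchitectureRules {

    /** Returns the statements that occur more than once in the trace. */
    public static Set<String> duplicateQueries(List<String> executedSql) {
        Set<String> seen = new HashSet<>();
        Set<String> duplicates = new HashSet<>();
        for (String sql : executedSql) {
            if (!seen.add(sql)) {   // add() returns false on a repeat
                duplicates.add(sql);
            }
        }
        return duplicates;
    }

    /** Enforces the "no more than 100 KB per remoting call" rule. */
    public static boolean withinPayloadBudget(int payloadBytes) {
        return payloadBytes <= 100 * 1024;
    }

    public static void main(String[] args) {
        // Simulated trace of one "save order" operation.
        List<String> trace = new ArrayList<>();
        trace.add("SELECT * FROM orders WHERE id = ?");
        trace.add("UPDATE orders SET status = ? WHERE id = ?");
        trace.add("SELECT * FROM orders WHERE id = ?"); // duplicate query

        System.out.println("duplicate queries: " + duplicateQueries(trace).size());
        System.out.println("payload ok: " + withinPayloadBudget(80 * 1024));
    }
}
```

Wired into a unit-test suite, a check like this fails the build as soon as a code change violates one of the defined rules.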
Enable Your Engineers with Tools
As discussed throughout this chapter, developers need tool support to analyze the behavior of the implemented code, verify architectural correctness, and check performance and scalability. Everything starts with the developers, and developers require the right tools to ensure that their code adheres to all defined architectural rules. This includes the following:
- Java profiling tools: CPU or memory profiling allows developers to analyze the performance of their application code and to identify algorithms that don't perform well, overused or incorrectly used objects (possible causes of memory leaks), or high memory usage.
- Tracing and diagnostic tools: These tools show how code affects the application architecture and provide the input for architectural-rule validation. When developers have the same tools that architects use later on, they can examine performance metrics before committing their code changes as part of continuous integration.
- Testing tools: In addition to unit tests, there are many types of tests that developers should execute on their local workstations before committing any code changes. Make it standard practice to run functional tests that exercise code through the end-user interface, as well as small-scale load tests. Open-source tools such as Selenium and JMeter are good, easy-to-use options.
- Real vs. test data: For data-driven applications, using a local subsetted copy of the database is often not sufficient to detect many database-related problems. Giving developers access to a copy of a real production-like database helps them find and eliminate data-driven problems from the start.
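A small-scale load test of the kind developers can run locally might look like the following sketch. It is an illustrative stand-in, not a replacement for a real tool such as JMeter: the operation under test is stubbed out, and the concurrency and the 500 ms limit mirror the example performance requirement defined earlier.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch of a pre-commit, small-scale load test: drive an
// operation with a few concurrent users and compare the measured
// latencies against the performance requirement from the story.
public class MiniLoadTest {

    // Stand-in for the real operation under test.
    public static void saveOperation() throws InterruptedException {
        Thread.sleep(20); // simulated work
    }

    /** Runs the operation concurrently; returns all latencies in ms. */
    public static List<Long> run(int concurrentUsers, int callsPerUser)
            throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(concurrentUsers);
        List<Long> latencies = Collections.synchronizedList(new ArrayList<>());
        List<Future<?>> futures = new ArrayList<>();
        for (int u = 0; u < concurrentUsers; u++) {
            futures.add(pool.submit(() -> {
                for (int c = 0; c < callsPerUser; c++) {
                    long start = System.nanoTime();
                    try {
                        saveOperation();
                    } catch (InterruptedException e) {
                        return;
                    }
                    latencies.add((System.nanoTime() - start) / 1_000_000);
                }
            }));
        }
        for (Future<?> f : futures) f.get(); // wait for all users to finish
        pool.shutdown();
        return latencies;
    }

    public static void main(String[] args) throws Exception {
        List<Long> latencies = run(10, 5);
        long max = Collections.max(latencies);
        System.out.println("calls: " + latencies.size() + ", max ms: " + max);
        if (max > 500) {
            System.out.println("FAIL: performance requirement violated");
        }
    }
}
```

Even a crude check like this catches gross regressions (an accidental N+1 query, a missing index) before the change reaches continuous integration.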
Educate Your Engineers
Dynatrace continually educates engineering staff through different channels:
- Classroom training: Engineers get mandatory training to understand the application they will be working on. Even though engineers typically work on only a subset of the application, understanding the full application allows them to better predict where their code changes may have an impact.
- Access to literature: Architects pick literature that explains and teaches new development concepts. This literature must be made available to engineers so that they can keep up with the latest developments and advance their own skills. Deciding to use a third-party framework or library should trigger the purchase of literature about that framework to make sure it is used in the best possible way. Without proper information available, engineers may end up implementing code based on sample applications found on the Web instead of following the approach explained by experts.
- Code reviews: During code reviews (another great best practice adopted from agile development) senior engineers have the ability to coach junior engineers. Coaching encourages high-level knowledge transfer that's reinforced by the shared experience of real-life, cooperative engineering.
- Regular update meetings: We established regular engineering update meetings to discuss project statistics and software quality. This allows every engineer to see how much impact his or her work has. For instance, when we ship a poor-quality product, we see an increase in customer-support complaints. With high-quality code, we have more time to implement new features. A regular update allows everybody to focus again on what is currently important.
Education time is well-invested time. It helps your engineers create higher-quality code with less time spent finding and fixing problems.
Automate, Automate, Automate and Report
The more tasks you can automate, the better. You get faster results on code quality, which then allows you to tell your engineers whether they can continue coding new features or whether they need to fix problems to bring software quality back on track. Not everything can be automated, but thanks to continuous integration and the availability of ever-advancing toolsets, we can automate things like unit, component, integration, functional, and load tests. We can automate performance analysis as well as architecture validation. All of this is possible, but it requires the dedication of the engineering team:
- Engineers must write testable code along with the tests that verify it, and run those tests locally before committing changes.
- Architects must define and enforce architectural guidelines and rules, provide coaching, and help engineers implement the automation process.
- Automation engineers must invest in the continuous-integration process to automate test execution and provide meaningful reports to engineering.
The last piece of advice is on reporting: bring the results back to engineering as quickly as possible. At Dynatrace, we use several dashboards that show the status of current development. We see the number of tests executed, how many of them failed and succeeded, and whether we have a stable build or not. These dashboards are visible throughout the office so that every engineer can see what's going on upon entering or leaving the building, or even just when getting a cup of coffee.
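The aggregation behind such a dashboard can be very simple. The sketch below is a hypothetical illustration (the class and method names are invented): it condenses a CI run's test outcomes into the figures an office dashboard would display, including a build-stability flag.

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch: summarize a CI run's test outcomes into the
// numbers shown on a build dashboard (tests executed, passed, failed).
public class BuildDashboard {

    public static Map<String, Long> summarize(List<Boolean> testResults) {
        long passed = testResults.stream().filter(r -> r).count();
        long failed = testResults.size() - passed;
        return Map.of("executed", (long) testResults.size(),
                      "passed", passed,
                      "failed", failed);
    }

    /** A build is considered stable only when every test passed. */
    public static boolean isStable(Map<String, Long> summary) {
        return summary.get("failed") == 0;
    }

    public static void main(String[] args) {
        Map<String, Long> summary = summarize(List.of(true, true, false, true));
        System.out.println(summary.get("executed") + " executed, "
                + summary.get("failed") + " failed, stable=" + isStable(summary));
    }
}
```

The value is not in the arithmetic but in the feedback loop: the sooner these numbers reach engineers, the sooner a broken build gets fixed.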