Your Load Testing Questions, Answered: Part Three

As part of our series with performance testing expert Scott Barber, author of Web Load Testing for Dummies, Scott provides answers to your load testing questions.

Scott is the founder and Chief Technologist of PerfTestPlus, Inc. and is viewed by many as the world’s most prominent thought-leader in the area of software systems performance testing. He is also a respected leader in the advancement of the understanding and practice of testing software systems.

In part one and part two of this series, Scott explained how to establish performance goals and targets. In this next set of questions, he addresses when and how to performance test:

Q. When developing an internet application from the ground up, when should performance testing begin?

SB: The moment the project kicks off. Now, I define performance testing in the broadest sense. You don’t need an application to do performance testing. You don’t need hardware. You don’t even need a single story written. All you need is an idea. If you don’t consider activities like researching technologies to feed implementation decisions, thereby increasing the odds of having a well-performing application when it goes live, to be performance testing, so be it.

In that case, it starts the first time someone configures a piece of hardware and checks whether a setting is appropriate from a performance perspective, or when someone creates the first performance-related story, or the first time a developer executes their code and thinks, “Wow, that’s taking longer than I thought it would!”

It’s my opinion that *if* performance really matters, performance testing never begins or ends; it is simply part of how development is done.

A different question would be “When should the performance tester start recording and executing multi-user simulations?” Unfortunately, my answer is just as “squishy”: at the earliest moment that it makes sense and adds value to do so.

At the end of the day, “when”, “how much”, “how often”, and “what type(s)” are business decisions, and those decisions should be based on first asking “What is the cost associated with a poorly performing application?” and then asking “How much are we willing to spend to reduce the risk of poor performance by what degree?”

I say “cost” and “how much” on the presumption that your executive decision-makers know how to monetize benefits and risks. Only with that information can one make good business decisions regarding “when”, “how much”, “how often”, and “what type(s)” of performance testing are appropriate.

*Note:* If you don’t think your executives know how to monetize non-monetary value, or you’d like to learn how, I recommend Chapter 16, “Rightsizing the Cost of Testing: Tips for Executives,” in How to Reduce the Cost of Software Testing (Matthew Heusser and Govind Kulkarni, eds.; CRC Press, 2011), or you can read the pre-publication version of the chapter on my site.

Q. Can you elaborate on how to performance test at the unit test level? I thought you needed at least a semi-stable, integrated application to do performance tests?

SB: To load test the system end to end? Probably, but you don’t need load to performance test. When I say “performance unit tests,” I’m talking about code profiling and monitoring resource consumption at the unit, object, or component level.

I’m talking about sticking timers around modules, objects, functions, procedures, beans, or whatever a “block of code” is called in your favorite programming language, as part of your unit and build verification tests, so they spit execution times out into a .csv that you can pull up once in a while to see whether there are any interesting or negative trends from build to build.

Nothing complicated or time-consuming: just some quick things that can be done at the code level, every day, that will lead to far better-performing builds when those builds *are* ready to be slammed with some production simulations. And the performance issues you do encounter will be far easier to identify conclusively and tune.
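To make the “timers around blocks of code” idea concrete, here is a minimal sketch in Python (the language, decorator name, and CSV file name are illustrative assumptions, not anything Scott prescribes). A small decorator appends each function’s execution time to a .csv, which you can skim from build to build for trends:

```python
import csv
import time
from functools import wraps

def timed(csv_path="perf_times.csv"):
    """Decorator that appends (function name, elapsed seconds) to a CSV
    every time the wrapped function runs."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return func(*args, **kwargs)
            finally:
                elapsed = time.perf_counter() - start
                with open(csv_path, "a", newline="") as f:
                    csv.writer(f).writerow([func.__name__, f"{elapsed:.6f}"])
        return wrapper
    return decorator

# Hypothetical unit of work you want to watch across builds
@timed()
def build_report(n):
    return sum(i * i for i in range(n))
```

Run your existing unit tests with a decorator like this in place and the CSV accumulates one row per call; a quick spreadsheet chart is enough to spot a build where times suddenly jump.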

Q. What is the difference between load and stress testing?

SB: Load testing is multi-user testing under anticipated conditions with the intent of determining the acceptability of application characteristics and/or identifying application characteristics deemed to be unacceptable.

Stress testing is, pretty much, testing under any situation you can imagine that “stresses the system” beyond anticipated conditions to determine if, when, and/or how the system will fail so decisions can be made about whether to implement controls or mitigation measures against those modes of failure.
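The distinction can be illustrated with a toy multi-user driver (a sketch only; the function names are made up, and `fake_request` stands in for a real HTTP call). The same harness is a load test when `concurrent_users` matches anticipated conditions, and a stress test when you ramp it well beyond them to find where the system degrades or fails:

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request():
    """Stand-in for a real call to the system under test."""
    time.sleep(0.01)
    return 200

def run_load(concurrent_users, requests_per_user):
    """Simulate N concurrent users, each issuing a series of requests,
    and return (median latency, worst latency) in seconds."""
    latencies = []

    def user_session():
        for _ in range(requests_per_user):
            start = time.perf_counter()
            fake_request()
            latencies.append(time.perf_counter() - start)

    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        for _ in range(concurrent_users):
            pool.submit(user_session)
    return statistics.median(latencies), max(latencies)

# Load test: anticipated conditions, e.g. 10 concurrent users
median_s, worst_s = run_load(concurrent_users=10, requests_per_user=5)

# Stress test: keep increasing users (50, 100, 500, ...) past what you
# anticipate, and record if, when, and how the system fails
```

In a real tool the request, user counts, and think times would come from your workload model; the point is only that the mechanics are the same and the *intent* (acceptability under anticipated conditions vs. behavior beyond them) is what differs.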

In part four of this series, Scott will address test execution and tools. Read part one and part two of the series.

Scott Barber is System Performance Strategist and CTO of PerfTestPlus, Inc., and the author of Web Load Testing for Dummies. He is a thought-leader in delivering performant systems and software testing, best known as “one of the most energetic and entertaining” keynote speakers in the industry and as a prolific author (including his blog, over 100 articles, and 4 books).