Don't like to read? Watch my video where I walk through all the steps highlighted in this blog: Functional Test (R)evolution on YouTube

In the last couple of weeks I had the chance to meet a lot of testers in different cities and at different events, to name a few: TestExpo in Copenhagen, STPCon in San Diego, Grand Rapids Testers and the Sydney Testers Meetup. I presented my idea of a performance-metrics-driven approach to software engineering across the lifecycle. I want testers to level up their skill set and not only test for functional correctness but also look behind the scenes at things that might seem out of their league right now, such as # of elements on a page, # of XHR calls, # of SQL calls, Memory Allocation, CPU Hotspots or bad architectural behavior. Like this example we recently found in our Performance Clinic in Denver:

When a tested feature makes 24,889 (!) SQL calls, we need to consider this test FAILED even when it returns the right results.

Based on the feedback from the attendees of my presentations, my ideas seem to resonate well. Everybody understands that it is about time to contribute solutions to problems instead of just filing bugs. In a perfect world we would do all of this automatically in continuous integration. The side conversations I had at these events gave me a reality check, though: the majority of testers primarily execute manual tests, and most of them are afraid of using "tools a developer would use" to capture the metrics I taught them. Testers are not familiar with the terminology around JavaScript, CSS, XHR, Java, .NET, PHP, Nginx, node.js, Android or iOS, and therefore fear they won't be taken seriously by developers when presenting this new set of test results because they don't speak the same language.

I started my career as a tester before I became a developer. I understand the fear, but I also know that it is time to level up. Why? Because manual testing alone can easily be done by anybody out there in the world. Labor is cheap, and crowd services such as uTest and others make it even easier for organizations to outsource these tasks. So, testers: you have to be part of this (R)Evolution. It makes you more valuable to your organization because you produce higher-quality output in a world that demands more releases with higher quality, not just more bugs in JIRA.

Let me walk you through a couple of easy steps to start making this transition!

Step #1: Make Ctrl-Shift-I and F12 your friend

One example I always bring to my presentations is overloaded websites, as many testers out there actually test websites. My favorite example is the website of Pepsi, a sponsor of the American Super Bowl. Their mobile landing page for the 2014 Super Bowl forced my iPhone to download 434 (!) images and a total of 20MB. You can read the details in my Tips for Super Bowl Ad Landing Pages blog post I wrote back then. As you can see from the example below, their current website is still rather heavy!

With the tools we have these days, THERE IS NO LONGER ANY EXCUSE for an implementation like this to end up in a production deployment. First of all, web developers should know better: a page like this violates all the rules of Web Performance Optimization.

Using the Chrome DevTools (Ctrl-Shift-I) reveals some bad metrics: 433 roundtrips, 14.5MB in size and many HTTP 403s for loading pepsi.com

Testers are the next safety net, and there is also NO EXCUSE FOR TESTERS not to use the same built-in browser diagnostics tools that come with Firefox, Chrome, Safari and nowadays also Internet Explorer. If you use the latest version of Chrome or Firefox, simply press Ctrl-Shift-I and the DevTools will pop up at the bottom of your browser window. Now click on the Network tab and start executing your manual test. After every click, check the information in the status bar: it tells you how many roundtrips it took to load that page and how many bytes had to be downloaded. If you see > 100 resources and > 5MB in page size, report that test as FAILED. Why? Because if you let this build move on to the next testing phase or even into production, chances are very high that this web application will crash under load. Internet Explorer is similar: press F12 to open its DevTools, which appear in a separate window, then click on Network and Start Capturing. From then on, all activity in your test is recorded and the status bar shows you the overall number of items and their size.
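If you later want to automate this Step #1 check, here is a minimal sketch of how it could look, assuming Python with Selenium and Chrome; the URL and the exact thresholds are placeholders. It reads the browser's Resource Timing data, which carries roughly the same information the Network tab's status bar summarizes.

```python
# Minimal sketch: fail a test when a page needs too many roundtrips or bytes.
# Assumes Selenium 4 with Chrome; the URL and thresholds are placeholders.
from selenium import webdriver

RESOURCE_LIMIT = 100            # > 100 resources -> FAILED
SIZE_LIMIT = 5 * 1024 * 1024    # > 5MB page size -> FAILED

driver = webdriver.Chrome()
try:
    driver.get("https://www.example.com/")  # the page your manual test would open

    # The W3C Resource Timing API has one entry per downloaded resource.
    # Map to plain objects so WebDriver can serialize them back to Python.
    resources = driver.execute_script(
        "return performance.getEntriesByType('resource')"
        ".map(r => ({name: r.name, transferSize: r.transferSize}))"
    )
    # transferSize can be 0 for cached or cross-origin resources without a
    # Timing-Allow-Origin header, so treat the total as a lower bound.
    total_bytes = sum(r["transferSize"] for r in resources)

    print(f"{len(resources)} resources, {total_bytes / 1024 / 1024:.1f} MB")
    assert len(resources) <= RESOURCE_LIMIT, "Too many roundtrips - report as FAILED"
    assert total_bytes <= SIZE_LIMIT, "Page too heavy - report as FAILED"
finally:
    driver.quit()
```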

Here are some YouTube tutorials on both Firefox's and Chrome's DevTools, as well as 15 Minute Browser Diagnostics with Dynatrace, which provides the same functionality for Internet Explorer and Firefox on Windows.

Step #2: Perform Basic Software Architecture Checks

This is definitely a much bigger step than Step #1, but it is not as challenging as you might think. Look at the following screenshot of a Dynatrace Transaction Flow. Even if you don't know the internals of Java, .NET, PHP or whatever technology is used by the application you are testing, reading and understanding these high-level architectural metrics is easy:

Easy-to-read and understand architectural metrics captured for your functional tests: 177 SQL calls and 33 Web Service calls are clearly questionable

Getting to such a view is easy whether you use Dynatrace or one of the other tools such as NewRelic or AppDynamics. All of these tools are available for a free trial and are straightforward to install. If you decide to go with Dynatrace, check out my What Dynatrace Can Do For You And How To Install blog or my YouTube Tutorial on How Dynatrace Works.
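If the application you test happens to be one whose code your test suite can reach, you can also do a very basic version of this architectural check yourself without any APM tool. The sketch below is a hypothetical example that assumes a Python application using SQLAlchemy; `engine` and `add_item_to_cart` are placeholders for your application's database engine and the feature under test.

```python
# Minimal sketch: count the SQL statements one test action triggers and fail
# the test if the count explodes, even when the functional result is correct.
# Assumes SQLAlchemy; `engine` and `add_item_to_cart` are placeholders.
from sqlalchemy import event

def count_sql_calls(engine, action):
    """Run action() and return the number of SQL statements it executed."""
    statements = []

    def before_cursor_execute(conn, cursor, statement, parameters, context, executemany):
        statements.append(statement)

    event.listen(engine, "before_cursor_execute", before_cursor_execute)
    try:
        action()
    finally:
        event.remove(engine, "before_cursor_execute", before_cursor_execute)
    return len(statements)

def test_add_to_cart_does_not_flood_the_database(engine):
    calls = count_sql_calls(engine, lambda: add_item_to_cart(user="tester", item=42))
    # The feature from the Denver example made 24,889 calls - anything in that
    # order of magnitude must fail even if the cart total comes back correct.
    assert calls < 50, f"FAILED: {calls} SQL calls for a single add-to-cart action"
```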

Step #3: Perform End-to-End Test Case Sanity checks

When you execute a manual or even an automated test you typically do not just test a single URL; you execute several steps as part of that test. I therefore propose that you also perform an end-to-end sanity check on each action, not just on those that feel like they have a problem, e.g. those that are slow or failing. Why? Because very often the steps that lead up to a problem are the actual root cause. To give you two examples:

  1. The login assigned you the wrong user roles, and therefore subsequent requests fail as unauthorized. Knowing what actually happened during login, even if it seemed fine at first sight, gives you the actual root cause.
  2. Adding and removing items from the shopping cart didn't actually update the cart and kept stale state information in the user's session object. This results in a wrong shopping cart total, and over time it also causes high memory usage because these objects are never freed. Observing what actually happens in these seemingly correct add/remove cart actions is important to understand the root cause of actions that fail later on.

If you use a tool that can capture every single interaction you make while testing the application, you end up with the full Visit and all the User Actions captured. You can now explore every click end-to-end and perform the sanity checks based on some of the metrics I mentioned, e.g. # of objects allocated, # of SQL roundtrips, total shopping cart sum, … The following screenshot shows a Dynatrace Visit and all the captured User Actions. For every User Action, Dynatrace also captures all the details end-to-end and gives you the Transaction Flow shown earlier in this blog:

Capturing all Visitors and User Actions is a unique capability that makes your life as a tester much easier. From this high-level analysis you can drill into the more technical details.
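To make this end-to-end idea concrete, here is a minimal sketch of a per-action sanity check. The budgets are examples and `get_metrics_for_action` is a hypothetical helper you would implement against whatever your monitoring tool (Dynatrace, NewRelic, AppDynamics, ...) exposes for each captured User Action.

```python
# Minimal sketch: apply the same sanity budgets to EVERY user action of a test,
# not only to the ones that look slow or failing. `get_metrics_for_action` is
# a hypothetical helper that returns a dict of metrics for one user action.
SANITY_BUDGETS = {
    "sql_calls": 100,          # per user action
    "web_service_calls": 10,
    "response_time_ms": 3000,
}

def check_visit(user_actions, get_metrics_for_action):
    """Fail on the first action that breaks a budget - often the real root cause."""
    for action in user_actions:
        metrics = get_metrics_for_action(action)
        for name, budget in SANITY_BUDGETS.items():
            value = metrics.get(name, 0)
            assert value <= budget, (
                f"FAILED on '{action}': {name}={value} exceeds budget of {budget}"
            )

# Example: every step of a shopping test gets checked, including the ones that
# "seemed fine" at first sight, such as login and add-to-cart.
# check_visit(["login", "search", "add_to_cart", "checkout"], get_metrics_for_action)
```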

Step #4: Share, Collaborate and Learn with Developers

Don't keep these new insights to yourself. Instead, start sharing this data at your daily standup or sprint review meetings. If you come up with a wrong conclusion in front of developers, don't get discouraged; find the explanation together. This helps both sides learn about these problem patterns and how to spot them, and, more importantly for developers, how to prevent them.

In Dynatrace you can also export all the data you have captured into a single file; we call it a Dynatrace Session file (.dts). I see most of our users simply attach that file to the actual JIRA ticket (or whatever ticketing system they use) instead of attaching lots of screenshots and writing long textual descriptions of the test case and the problem they encountered. With the evidence you just collected, that is no longer necessary.
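Attaching that evidence can even be scripted. Here is a minimal sketch using Python and the requests library against JIRA's REST attachment endpoint; the JIRA URL, credentials, issue key and file name are placeholders, and your JIRA version or ticketing system may require a slightly different call.

```python
# Minimal sketch: attach an exported Dynatrace Session file to a JIRA ticket.
# URL, credentials, issue key and file name are placeholders.
import requests

JIRA_URL = "https://jira.example.com"
ISSUE_KEY = "WEB-1234"
SESSION_FILE = "failed_checkout_test.dts"   # the exported Dynatrace Session

with open(SESSION_FILE, "rb") as f:
    response = requests.post(
        f"{JIRA_URL}/rest/api/2/issue/{ISSUE_KEY}/attachments",
        headers={"X-Atlassian-Token": "no-check"},  # JIRA requires this for attachments
        files={"file": (SESSION_FILE, f)},
        auth=("tester", "secret"),                  # placeholder credentials
    )
response.raise_for_status()
print(f"Attached {SESSION_FILE} to {ISSUE_KEY}")
```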

The Dynatrace Session file is also the data I look at when people decide to participate in my Share Your PurePath program.

Let me help you make the first step

Writing a blog like this is easy for me. Why? Because I do this every day. Especially since I launched my Share Your PurePath program, where users send me their Dynatrace Session files, I see that most of you out there are testing applications that struggle with the same architectural, scalability, configuration and performance problems. If you feel intimidated by this new set of data, just take me up on my offer and let me have a look at your data. If you have already made the first step and want to share your results, let me know as well. Let's spread the word together on how we can (R)Evolutionize Testing.