Scenario: different test types target the same test machine

For smaller software projects – where deploying and configuring the application under test is easy – you often find separate installations for individual testers or test types. This allows every tester to work against their own installation without impacting other test activities.

For large enterprise software projects, however, it is very common to install the application under test in a central testing environment. This installation is then used for all types of test activities: manual, functional and load. This makes perfect sense, as deploying the same application multiple times just so that different types of tests can be executed would take too much effort and consume too many hardware and software resources.
The central application can then be used by automated functional tests to test the different use-case scenarios.
With the nearly endless mix of operating systems and browsers, it is very common to use virtual machines for the different OS/browser test combinations – running the same test script against the same centrally deployed application.

Scenario: Deployed Testing Apps and Virtualized Test Setups

Problem: how do we know which test caused which application log entry?

It’s 10 AM – the Agile development team meets for the daily stand-up, discussing the current backlog, current problems and the results of the nightly tests.
As part of the nightly build, the application was centrally deployed to the testing environment. In addition to unit tests, functional web tests were automatically executed in parallel on different virtual machines to test the basic functionality with the different OS/browser combinations. Parallel execution is necessary in order to run all tests in a timely manner.
Several tests on different virtual machines failed. Unfortunately, the test results produced by the testing tool didn’t provide enough information to isolate the problem. The application produced several errors that were written to log files. Due to the parallel execution of multiple tests, it’s not possible to correlate those application log entries with test-case executions, as a timestamp in the log file can match multiple test cases that failed at the same time.

The problem is that the errors reported by the test tools cannot be correlated with the errors logged by the application under test. This makes error analysis harder and requires additional effort when trying to reproduce the same problem.

Problem: No correlation of test and application logs

Solution: tag your functional test requests

In order to know which test script in which test configuration caused an error, we have to tag each individual request executed by the testing tools with information about the execution context (test case name, environment, OS, browser, …). This information can then also be captured on the server side when error logs are written.

How can we tag a web request?
We can use an additional HTTP Header that we send with each web request. The header value can contain the information we need to identify the test case.
Current functional testing tools “drive” the browser either by simulating user input (mouse and keyboard) or by driving the DOM within the browser. Many of these tools do not allow adding additional HTTP headers, as that would require modifying the HTTP request sent by the browser. For these tools we simply use a proxy approach that adds the HTTP header to the request after the browser has sent it. For tools that do allow adding HTTP headers, we simply specify the header in the test script.
Solution: Correlate data with a proxy approach

Example: Using Microsoft Fiddler for tagging requests
Fiddler is a web debugging proxy that not only logs the HTTP(S) traffic that is routed through it, but also allows you to write script rules (JScript.NET) to specifically analyze or modify HTTP requests.

Step 1: Download and install Fiddler (http://www.fiddler2.com/fiddler2/)
By default, Fiddler automatically changes your browser’s network configuration to use Fiddler as a proxy (this can be changed to a manual setup).

Step 2: Define a custom rule
Via the menu Rules->Customize Rules you get into the script editor. In my example I extend the OnBeforeRequest callback method and add a special dynaTrace HTTP header to the request:
Step2: Fiddler Script 1
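A minimal sketch of what such an addition to CustomRules.js could look like – the variable names and default values are placeholders, not the exact script from the screenshot:

// Static fields in the Handlers class that hold the current test context.
// They are updated at runtime via ExecAction in Step 3.
static var sTestName: String = "TestAction";
static var sPageContext: String = "TestCase";
static var sVirtualUser: String = "TestEnvironment";

static function OnBeforeRequest(oSession: Session) {
    // ... keep the existing rules of CustomRules.js here ...

    // Tag every outgoing request with the dynaTrace header so the server side
    // can correlate its log entries with the test that triggered the request.
    oSession.oRequest.headers.Add("dynaTrace",
        "NA=" + sTestName + ";PageContext=" + sPageContext + ";VU=" + sVirtualUser);
}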
The dynaTrace header is a special header that will be picked up on the server side by dynaTrace. The header not only allows specifying a logical name for the request (NA); it also allows you to specify a page context (PageContext – this could be used for a test case name) and a virtual user ID (VU – which can be used to uniquely identify the test environment).

Every HTTP Request that is sent now gets the following header added:
dynaTrace: NA=TestAction;PageContext=TestCase;VU=TestEnvironment

Step 3: Make the header configurable
Fiddler also allows you to call a script method via a command-line tool called ExecAction.exe (which can be found in the installation directory). Parameters that are passed to this command-line tool are forwarded to the OnExecAction method in the custom rules script file. In my case I make the three parts of the dynaTrace header configurable by adding a new handler to the sAction switch in the existing OnExecAction method:

Step 3: Fiddler Script 2
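A minimal sketch of the additional case in the sAction switch, reusing the static fields from the Step 2 sketch (again only an illustration, not the exact script from the screenshot):

// New handler inside the existing OnExecAction(sParams) switch on sAction
// (the stock CustomRules.js lowercases sParams[0] into sAction):
case "dynatrace":
    // called as: ExecAction "dynaTrace <name> <pageContext> <virtualUser>"
    if (sParams.Length > 1) { sTestName    = sParams[1]; }
    if (sParams.Length > 2) { sPageContext = sParams[2]; }
    if (sParams.Length > 3) { sVirtualUser = sParams[3]; }
    FiddlerObject.StatusText = "dynaTrace test context updated";
    break;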

With this modification I can now call ExecAction with up to three parameters to change the name, page context and virtual user ID. Here are some samples:
ExecAction "dynaTrace DoLogin Login IE7_WinXP"
ExecAction "dynaTrace LastMinute BrowseCatalog FF_Vista"

Step 4: Verify if Fiddler works
Run a simple test script. In Fiddler you will see every web request that is routed via the proxy as a separate node in the web sessions list. Click on a web request. In the Inspectors tab you can now verify whether the request was modified by looking for the additional header.
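With the rule from Step 2 active, the raw request shown in the Inspectors tab should contain a line like the following (host and path are placeholders; the values are still the defaults until ExecAction is called):

GET /mainpage HTTP/1.1
Host: testserver
dynaTrace: NA=TestAction;PageContext=TestCase;VU=TestEnvironment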


Step 5: Extend your test scripts to call ExecAction
In my functional test script I can now call ExecAction whenever I start a new test case – or even before individual actions within a test case – to provide as much context information as possible to the server side via the HTTP header. Most functional testing tools allow you to call command-line tools; please refer to your tool’s online help for how to do that in its scripting language.
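As an illustration only, a small Windows Script Host (JScript) helper could shell out to ExecAction.exe before each test case – the helper name, path and values below are hypothetical and depend on your Fiddler installation and test tool:

var shell = new ActiveXObject("WScript.Shell");

// Hypothetical helper: tells Fiddler (via ExecAction.exe) which test context the
// following requests belong to, so the custom rule from Step 3 updates the header.
function tagNextRequests(testName, pageContext, virtualUser) {
    shell.Run("\"C:\\Program Files\\Fiddler2\\ExecAction.exe\" \"dynaTrace " +
              testName + " " + pageContext + " " + virtualUser + "\"",
              0, true); // 0 = hidden window, true = wait for ExecAction to finish
}

// Example: tag the upcoming requests as the DoLogin action of the Login test on IE7/WinXP
tagNextRequests("DoLogin", "Login", "IE7_WinXP");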

Step 6: Analyze server-side information with the additional test context
Now it’s up to your application developers to consume this information on the server side and to log it in case of errors. When using dynaTrace this step is automated for any Java or .NET web application: each individual web request that is executed is recorded as a PurePath containing the additional context information we passed via the HTTP header.

The following illustration shows a dynaTrace dashboard with the two requests from the testing tools and their respective names. The dashboard also shows the SQL statements and log messages generated by each request. For performance analysis we also see the performance breakdown into application layers.

dynaTrace Dashboard showing application details of the functional tests

Hint for IE Developers/Testers when testing local applications
If you use a proxy approach like the one explained with Fiddler and you test an application deployed on your local machine, be aware that IE bypasses the proxy when you browse to http://localhost/mainpage. In order for IE to use the configured proxy you have to add a . (dot) after localhost, like this: http://localhost./mainpage.

Conclusion
There are easy ways to correlate your functional test results with your server-side logs. Whether you use an Application Performance Management solution like dynaTrace or custom application logs, correlating both sides helps you analyze problems, as you automatically get additional context information for your log output.