Synchronization is a necessary mechanism for controlling access to shared resources. Especially in multi-user environments such as web applications, where the same code can be executed by multiple threads at a time, it is essential to make sure that data access is protected. Whether you work in Java or .NET, the same performance guidelines for locks and synchronization blocks apply. Here are some articles that are worth reading:
- .NET 2.0 Performance Guidelines – Locking and Synchronization
- Thread Synchronization and the Java Monitor
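As a refresher, here is a minimal Java sketch of the kind of synchronized access those guidelines talk about (the class and field names are illustrative, not taken from any of the articles):

```java
public class SharedCounter {
    private final Object lock = new Object();
    private long count = 0;

    // Many worker threads may call this concurrently; only one at a time
    // can hold the monitor, the others block until it is released.
    public void increment() {
        synchronized (lock) {
            count++;
        }
    }

    public long get() {
        synchronized (lock) {
            return count;
        }
    }
}
```

Under load, the time threads spend blocked at `synchronized (lock)` is exactly the waiting time this post tries to make visible.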
A simple way to identify synchronization issues
I’ve tried to find a simple way to identify synchronization issues in code that is executed by multiple threads. My sample application has several methods and components that synchronize access to shared data, and the code is executed as part of a web application. The more users that concurrently access the page, the more time the individual worker threads will likely spend waiting to enter the synchronization blocks. The questions for me are:
- Which of my methods have a performance impact on the overall system because of synchronization?
- How big is the performance impact?
- Is there other external code in my application that has similar issues?
One method to answer these questions, although not 100% accurate, is very easy to apply. I simply take the CPU time and the execution time of a particular method, component or framework. Here is an explanation of these two values:
- CPU Time: the time the executed code spends on the CPU
- Execution Time: the total time the method takes to execute. This includes time spent on the CPU, I/O time, time waiting for external systems to respond, GC time and time spent waiting to enter synchronized code blocks
I chart these values while running increasing load against the web application and compare how CPU time relates to execution time. The gap between the two values indicates time spent waiting for I/O, external systems or entry into synchronized code blocks. As this gap includes more than just synchronization time, the method is, as mentioned above, not 100% accurate, but it gives a good indication of where your code has to wait. If you also know that your code makes no external calls to databases and does no I/O, you can be fairly sure the gap is mostly synchronization time.
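The same CPU vs. execution time comparison can be reproduced in plain Java with JMX thread timing. This is only a sketch, assuming a JVM where per-thread CPU measurement is supported; `TimingProbe` and `measure` are made-up names, not part of any tool mentioned in this post:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class TimingProbe {
    private static final ThreadMXBean THREADS =
            ManagementFactory.getThreadMXBean();

    // Runs the task on the current thread and returns
    // { execution time, CPU time } in milliseconds.
    public static long[] measure(Runnable task) {
        long wallStart = System.nanoTime();
        long cpuStart = THREADS.getCurrentThreadCpuTime();
        task.run();
        long cpuMs = (THREADS.getCurrentThreadCpuTime() - cpuStart) / 1_000_000;
        long wallMs = (System.nanoTime() - wallStart) / 1_000_000;
        // The difference wallMs - cpuMs is time spent off the CPU:
        // I/O, external calls, GC pauses or waiting on monitors.
        return new long[] { wallMs, cpuMs };
    }
}
```

A task that mostly sleeps or blocks will show a large gap between the two values, which is the signal the charts below are based on.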
I use dynaTrace to subscribe to the CPU and execution times of the method I want to analyze. Then I use a load testing tool to simulate increasing load against my web application, hitting the web page that executes the code in question. In my scenario I used SilkPerformer, which showed the following results:
From these results I drilled into the timings captured on the method level. The following graph shows CPU time (GREEN) increasing only very slightly, while execution time increases in a similar way to the overall response time of my web pages. Be aware that the graph below uses different scaling for CPU and execution time in order to make the difference more visible.
The following image shows the same graph with the same scaling for both values:
The fact that the overall execution time of my method increased rapidly while CPU time stayed almost constant means that my method spent a significant amount of time (the difference between CPU and execution time) waiting for I/O, external systems or synchronized blocks.
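This effect can be demonstrated in a small, self-contained Java program: several threads serialize on one monitor, so wall-clock time grows with the thread count while each thread's CPU time stays near zero. The names `ContentionDemo`, `LOCK` and `runWorkers` are illustrative:

```java
public class ContentionDemo {
    private static final Object LOCK = new Object();

    // Each worker holds the monitor for ~200 ms without using the CPU,
    // so the other workers spend their time blocked on monitor entry.
    static long runWorkers(int workers) throws InterruptedException {
        Thread[] threads = new Thread[workers];
        long wallStart = System.nanoTime();
        for (int i = 0; i < workers; i++) {
            threads[i] = new Thread(() -> {
                synchronized (LOCK) {
                    try {
                        Thread.sleep(200);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        // Total wall time is roughly workers * 200 ms because the
        // synchronized block serializes the sleeps.
        return (System.nanoTime() - wallStart) / 1_000_000;
    }
}
```

Doubling the number of workers roughly doubles the total wall-clock time even though almost no CPU is consumed, which is the same pattern the graphs above show under increasing load.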
Extending the analysis from methods to components or frameworks
The same approach also works if your tool can capture CPU and execution time for a complete component or framework library. The following image shows the same timings captured for the .NET Remoting components.
This is one approach for identifying code that spends a large portion of its time waiting for synchronization or other external components. In my next post I will show a different approach that looks directly at the time spent in the monitor enter methods.