Every time I work with one of our .NET customers to help them manage their application performance, I come across the same problems I have seen with other clients before: lots of ADO.NET queries, many hidden exceptions in core or 3rd-party .NET libraries, slow 3rd-party components, inefficient custom code, …

Too often we at Dynatrace get introduced when it is already very late in the development cycle. Most of the time we get called in when the first performance test results show bad response times and nobody understands why the application is that slow. In other cases we get called when there are problems in production and it has already taken too much time to figure out the root cause. Solving these problems at that point can become really expensive, as it sometimes involves changes to the architecture. Most of these problems can be prevented by following some basic principles from the start of the project. In this blog post I cover some of the problems I’ve seen, and I encourage everybody to read the paper I wrote on Performance Management for .NET Applications, which covers this problem domain in detail.

Why do performance problems leave development?

We have come a long way in adopting agile development principles, which put a strong focus on continuous testing and good test coverage. But still, many problems escape development. Here are 3 statements that I regularly hear when developers are confronted with their first load testing results. I am sure they sound familiar to you 🙂

  • “Our Unit Tests were all green and we executed them on every build”
  • “Everything ran perfectly fine on my local machine – I even executed some load”
  • “Everybody on the online forums for the 3rd-party framework we use seemed to have good experiences with performance”

The status quo in development, however, shows the following problems:

  • Unit tests only verify functionality but don’t take performance, scalability or architecture into account (see the sketch after this list for one way to add a timing budget).
  • Local performance tests are either not done at all or are run with unrealistic sample data.
  • 3rd-party frameworks like O/R mappers, logging frameworks, … are often used incorrectly for the use case at hand.
  • And last but not least: data access, whether against a database or via remoting protocols, is often done inefficiently, causing too many roundtrips or requesting more data than needed.
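To illustrate the first point, a functional unit test can be extended with a rough timing budget so that blatant performance regressions show up on every build. Below is a minimal sketch using NUnit and Stopwatch; the OrderRepository class, its LoadOpenOrders method and the 200 ms budget are made up for illustration.

```csharp
using System.Collections.Generic;
using System.Diagnostics;
using NUnit.Framework;

// Hypothetical class under test; replace with your own data-access code.
public class OrderRepository
{
    public IList<string> LoadOpenOrders()
    {
        // ... a real implementation would query the database ...
        return new List<string>();
    }
}

[TestFixture]
public class OrderRepositoryTests
{
    [Test]
    public void LoadOpenOrders_StaysWithinTimingBudget()
    {
        var repository = new OrderRepository();

        var watch = Stopwatch.StartNew();
        var orders = repository.LoadOpenOrders();
        watch.Stop();

        // Functional assertion as before ...
        Assert.IsNotNull(orders);

        // ... plus a coarse performance assertion. The 200 ms budget is an
        // assumption; tune it and run the test against realistic sample data.
        Assert.Less(watch.ElapsedMilliseconds, 200);
    }
}
```

Such an assertion is only a coarse safety net, but it at least forces the question of how fast a piece of code is supposed to be, and it fails loudly when a change makes it drastically slower.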

2 Examples of problems that can be prevented

My first example is database access. There are 3 scenarios that I often see (a short ADO.NET sketch after the list illustrates the last two):

  1. The same data is requested multiple times for the same request
  2. Data is requested inefficiently, e.g.: multiple SQL calls that could be aggregated into fewer calls
  3. More data is queried than actually needed
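To make scenarios 2 and 3 concrete, here is a small plain ADO.NET sketch contrasting a per-item query loop with a single query that only requests the columns it needs; the table, column and method names are made up for illustration.

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;

public static class OrderQueries
{
    // Inefficient: one SQL roundtrip per order ID, and SELECT * fetches
    // every column even though only a few are actually used.
    public static void LoadOrdersOneByOne(SqlConnection connection, IEnumerable<int> orderIds)
    {
        foreach (int orderId in orderIds)
        {
            using (var cmd = new SqlCommand("SELECT * FROM Orders WHERE Id = @id", connection))
            {
                cmd.Parameters.AddWithValue("@id", orderId);
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read()) { /* map the row */ }
                }
            }
        }
    }

    // Better: a single roundtrip that requests only the columns actually needed.
    public static void LoadOrdersForCustomer(SqlConnection connection, int customerId)
    {
        using (var cmd = new SqlCommand(
            "SELECT Id, Status, Total FROM Orders WHERE CustomerId = @customerId", connection))
        {
            cmd.Parameters.AddWithValue("@customerId", customerId);
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read()) { /* map the row */ }
            }
        }
    }
}
```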

My recent .NET engagement was on a SharePoint application. SharePoint provides an API to access the data stored in the SharePoint Content Database. I’ve blogged several times about how not to use the SharePoint API, but it seems that this problem is still out there. The following screenshot of a PurePath shows that iterating over all items in a SharePoint list actually executes the same SQL statement for every item in the list because the SharePoint API is used incorrectly.

Too many SQL Statements executed by the SharePoint API

Knowing that accessing the Items property on an SPList object queries the full list content every time allows you to make better decisions when accessing SharePoint data. You can either store the result of the first access to the Items property in a collection or use alternatives like SPQuery. Check out my blog posts about SharePoint list access for more specific examples. This example is SharePoint-specific, but the same principle holds true for other database access frameworks like popular O/R mappers or the ADO.NET Entity Framework: wrong usage leads to too many SQL statements executed under the hood.
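Here is a minimal sketch of both options, assuming server-side code that references Microsoft.SharePoint; the CAML query, field names and row limit are made up for illustration.

```csharp
using Microsoft.SharePoint;

public static class ListAccessExamples
{
    // Inefficient: every access to list.Items re-queries the full list content,
    // so this loop issues one expensive query per iteration.
    public static void IterateInefficiently(SPList list)
    {
        for (int i = 0; i < list.Items.Count; i++)
        {
            SPListItem item = list.Items[i];
            // ... work with item ...
        }
    }

    // Option 1: read the Items property once and iterate over the cached collection.
    public static void IterateWithCachedCollection(SPList list)
    {
        SPListItemCollection items = list.Items;
        foreach (SPListItem item in items)
        {
            // ... work with item ...
        }
    }

    // Option 2: use SPQuery so SharePoint only returns the rows and fields you need.
    public static SPListItemCollection QueryOpenItems(SPList list)
    {
        SPQuery query = new SPQuery
        {
            Query = "<Where><Eq><FieldRef Name='Status'/><Value Type='Text'>Open</Value></Eq></Where>",
            ViewFields = "<FieldRef Name='Title'/><FieldRef Name='Status'/>",
            RowLimit = 100
        };
        return list.GetItems(query);
    }
}
```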

My second example is hidden exceptions. By hidden exceptions I mean exceptions that are thrown but handled by the framework you are using, e.g. standard .NET libraries or 3rd-party libraries like NHibernate, Spring.NET, log4net, SharePoint, … These exceptions never make it to your custom code and can therefore never be investigated to see whether there is something we can do about them. Each exception, intentional or not, causes a certain amount of overhead. Often, hidden exceptions are caused by incorrect configuration of a framework, which then falls back to a default setting. Knowing that there is a problem allows you to correct the settings and prevent unnecessary exceptions. The following screenshot shows some hidden exceptions.

Hidden Exceptions in 3rd Party Code and WCF Communication Layer
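To make the configuration-fallback pattern concrete, here is an illustrative sketch, not taken from any particular framework, of how a library might swallow a configuration error and silently fall back to a default; the setting name and default value are made up.

```csharp
using System;
using System.Configuration; // requires a reference to System.Configuration.dll

public static class BatchSettings
{
    // Many frameworks read their settings like this: if the key is missing or
    // invalid, an exception is thrown, caught internally, and a default is used.
    // The caller never sees the exception, but its cost is paid on every call.
    public static int GetBatchSize()
    {
        try
        {
            // Hypothetical setting name used for illustration.
            return int.Parse(ConfigurationManager.AppSettings["BatchSize"]);
        }
        catch (Exception)
        {
            // Silent fallback hides the misconfiguration from the application.
            return 100;
        }
    }
}
```

A tool that records all thrown exceptions, handled or not, makes this pattern visible so the configuration can be corrected instead of relying on the fallback.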

I’ve seen hidden exceptions with configuration problems as described above as well as with deployment problems. When using ASP.NET Web Services, the client proxy classes can be pre-generated and deployed with your application, but this often does not happen. If the compiled proxy classes are not deployed, the .NET Framework creates them on the fly, which causes massive overhead for the first request that accesses the web service proxy.

There is more …

This was a short excerpt of the problems I see out there with clients. Check out the case study we did with BPA, who build CRM solutions on SharePoint, or the one with BUPA on how they use it in their .NET environment. Read my full paper on Continuous Application Performance Management for Enterprise .NET Applications if you want to know more about how to prevent problems early in the development cycle, how to support software architects, how to make better use of Visual Studio Team System for Testers, and how to speed up problem resolution for problems happening in production.