Technical Debt is what slows you down in bringing new features to your end users.
Why? Because you spend time firefighting and fixing old code instead of innovating. However, rather than getting better and faster at fixing problems, I suggest we start preventing problems by leveling up our sense of quality!
How? By learning from others to avoid the same mistakes!
3 Use Cases from our Community
In this blog I cover three problem patterns reported by members of our Dynatrace community who already use our latest Application Monitoring and User Experience Management 6.2 release for what I call Painless DevOps.
They also took advantage of my free “Share Your PurePath” program, where I help them analyze their quality problems and, in return, they have something great to share with the larger community (that’s you!).
The screenshots throughout this blog are taken from what I sent back as my performance review – here are the highlights covered in this blog:
- 3rd Party Frameworks: End User Impact caused by Atlassian’s CDSFramework for .NET
- NHibernate: Excessive Database Access impacts Oracle and End User
- Microservices: In-Process “Version Proxy” services won’t scale!
Your role in minimizing technical debt
We’ve all got a role to play in minimizing technical debt.
The following screenshot shows one of the dashboards I teach people to build and use in my Online Performance Clinics to more easily identify common problem patterns that others have encountered:
Thanks to this “collective intelligence” we can all do our part in preventing technical debt:
- Developers: Execute your unit tests and look at the captured PurePaths on your local machine before checking in code
- Testers: Level up your testing by also looking at key architectural metrics: the Transaction Flow is a good start and is also easy to understand
- Architects: Demand performance and architectural metrics in your code and sprint reviews. This enables you to correct bad implementations early.
- Biz & Ops: Sometimes bad things happen. Make sure you are aware when they do by capturing the metrics highlighted in this blog. That speeds up your mean time to recovery!
Now let’s get into these examples and learn!
Use Case 1: Bad Framework Usage impacts End Users
A closer look at the “Click here to attach a file” link showed that most of the frustrated users had performance issues with that particular action. Based on the waterfall diagram shown in the next screenshot, it seems that ASP.NET processing took almost 60s before finishing the transaction by calling 3 internal backend services:
A drill-down to the Hotspot view revealed that they were using the CDSFramework. The framework spent a lot of time mapping data from the database to its internal table structure – most of it in reflection and in copying large amounts of data between arrays. This is a good candidate for talking with the framework vendor about how to optimize the situation:
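The kind of per-row reflection overhead described above can be illustrated with a minimal, stdlib-only Java sketch. This is not CDSFramework code – the `Row` type and mapper methods are hypothetical stand-ins for a framework that maps database cells to an internal structure:

```java
import java.lang.reflect.Field;

public class ReflectionMappingDemo {
    // Hypothetical row type standing in for the framework's internal table structure.
    public static class Row { public String value; }

    // Direct mapping: a plain field assignment the JIT can inline.
    static Row mapDirect(String cell) {
        Row r = new Row();
        r.value = cell;
        return r;
    }

    // Reflective mapping: a field lookup plus access checks on every call --
    // the kind of per-row overhead a hotspot view attributes to the mapper.
    static Row mapReflective(String cell) {
        try {
            Row r = new Row();
            Field f = Row.class.getField("value");
            f.set(r, cell);
            return r;
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // Both mappings yield identical data; only the per-row cost differs.
        System.out.println(mapDirect("cell").value.equals(mapReflective("cell").value)); // prints "true"
    }
}
```

Caching the `Field` object once, or generating accessors up front, removes most of this cost – which is exactly the kind of fix worth discussing with the framework vendor.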
Use Case 2: Excessive Database Access by NHibernate
Hibernate is a topic we have covered on our blogs for over five years now. Yet I still run into applications that don’t use this popular object-relational mapper in the best way possible. The screenshot below shows the transaction flow, clearly highlighting that 1,812 (!) SQL statements were executed for a single request.
Looking at the actual SQL statements executed allows you to optimize Hibernate in terms of better cache settings and lazy vs. eager fetching strategies.
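The statement count above is the classic "1+N" shape: one query for a list of parents, then one more per child collection. A minimal, Hibernate-free sketch can simulate it with a statement counter (all method names here are hypothetical stand-ins for a data-access layer):

```java
import java.util.ArrayList;
import java.util.List;

public class NPlusOneDemo {
    // Stand-in for the database: every data-access call bumps this counter,
    // playing the role of the "# of SQL statements" metric.
    static int statements = 0;

    static List<Integer> loadOrderIds() {
        statements++; // SELECT id FROM orders
        List<Integer> ids = new ArrayList<>();
        for (int i = 0; i < 1811; i++) ids.add(i);
        return ids;
    }

    static String loadItemsFor(int orderId) {
        statements++; // SELECT * FROM items WHERE order_id = ?
        return "items-" + orderId;
    }

    static String loadItemsForAll(List<Integer> ids) {
        statements++; // one batched query (what JOIN FETCH or batch-size buys you)
        return "items-batch";
    }

    public static void main(String[] args) {
        // Lazy one-by-one loading: 1 query for the list + N for the children.
        statements = 0;
        for (int id : loadOrderIds()) loadItemsFor(id);
        System.out.println("lazy per-row: " + statements + " statements"); // 1812

        // Batched/eager loading of the same data: 2 statements total.
        statements = 0;
        loadItemsForAll(loadOrderIds());
        System.out.println("batched: " + statements + " statements"); // 2
    }
}
```

In real Hibernate code, the same reduction typically comes from a `JOIN FETCH` in the query, an eager fetch plan, or a collection `batch-size` setting – the metric to watch is the statement count per request.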
But accessing the database may not be the only problem. Start analyzing CPU hotspots within these frameworks to learn where time is spent – then consult the documentation and online forums, or contact the vendor, to figure out how to optimize situations like the following, where a lot of time is spent on CPU in core .NET Framework classes:
Use Case 3: In-Process Proxied Microservices
This last use case comes from an application with a service-oriented architecture. To handle service versioning, every service call first passes through an in-process proxy that determines the correct endpoint.
The PurePath in the following screenshot shows this “call chain” nicely. We see the incoming and outgoing calls on the different URLs, and that they are executed synchronously on different threads. This means that every web service call blocks an extra HTTP worker thread in the JVM. The proxy approach also impacts garbage collection: because each service call is proxied through a second service instance, the request/response content of the web service has to be parsed and kept in memory twice. Everything is duplicated!
If you have to deal with versioning of your services, make sure you read up on the best practices discussed on the web.
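One common alternative – sketched here under my own hypothetical names, not taken from the application above – is to resolve the version once at the routing layer, so the request is parsed and handled exactly once instead of being proxied through a second in-process service:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class VersionRoutingDemo {
    // Handlers per service version; in a real system these would be the
    // endpoint implementations, not lambdas.
    static final Map<String, Function<String, String>> handlers = new HashMap<>();
    static {
        handlers.put("v1", body -> "v1:" + body);
        handlers.put("v2", body -> "v2:" + body.toUpperCase());
    }

    // The request is dispatched in one hop: no second service instance
    // re-parses the payload, and no extra worker thread is blocked.
    static String dispatch(String version, String body) {
        Function<String, String> h = handlers.get(version);
        if (h == null) throw new IllegalArgumentException("unknown version " + version);
        return h.apply(body);
    }

    public static void main(String[] args) {
        System.out.println(dispatch("v1", "order")); // prints "v1:order"
        System.out.println(dispatch("v2", "order")); // prints "v2:ORDER"
    }
}
```

The design point is that version selection becomes a cheap table lookup instead of a full extra request/response cycle, so memory and worker threads are used once per call rather than twice.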
Are You Ready to Stop Technical Debt? Start Here!
From these and many other examples we have blogged about in the past, I strongly believe that we can all write better software right from the start. It starts WITH YOU on your workstation, by doing simple sanity checks before code gets checked in or promoted to production. The tool I used to analyze this data is free for you. After the 30-day trial, you have the option to keep using it for personal use on your local machine for as long as you want.
The good news is that you do not need to do everything manually. Most of these problem patterns can be identified by looking at simple metrics, e.g. the number of SQL calls, the number of service calls, or time spent in garbage collection. At Dynatrace we not only provide these metrics but also capture them and identify regressions automatically in your build pipeline. Check out our Jenkins or Bamboo plugins, or read up on this in our community: Continuous Delivery & Test Automation.
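Such a metric check can live directly in a unit test so a regression fails the build. A minimal sketch of the idea – the counter, methods, and budget below are hypothetical, not a Dynatrace API:

```java
public class SqlCallBudgetDemo {
    // Hypothetical counter a data-access layer would increment per statement.
    static int sqlCalls = 0;

    static void fetchOrder(int id) { sqlCalls++; }

    public static void main(String[] args) {
        sqlCalls = 0;
        for (int i = 0; i < 5; i++) fetchOrder(i);

        // The "architectural metric" assertion: fail the build if the
        // statement count for this use case regresses past its budget.
        int budget = 10;
        if (sqlCalls > budget) {
            throw new AssertionError("SQL call regression: " + sqlCalls + " > " + budget);
        }
        System.out.println("within budget: " + sqlCalls + "/" + budget);
    }
}
```

Had a code change introduced an N+1 pattern here, the count would blow past the budget and the build would fail immediately – long before the pattern reaches production.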
All the best with building better software. Let me know if you come across any new and interesting problem patterns. And remember my “Share Your PurePath” program – I am happy to look at your data.