Most organizations today are dealing with a diverse IT landscape. So is one of our customers, a large American bank, which needs to handle a more than ten-year-old Visual Basic 6 (VB6) application that is integrated into their modern .NET enterprise application. In this blog post I describe how we gained insight into their legacy application and enabled them to manage and optimize its performance.

One of the most significant characteristics of an enterprise-class Application Performance Management (APM) solution is support for numerous platforms. Why is that such a big deal? Because most of the landscapes we work on with our customers are not pure Java or .NET applications. Their application landscape simply grew over time, and at any given point they chose the best technology for each application and for the data exchange among them. Or they inherited an application through a merger and had to integrate it. To manage a complete enterprise application landscape, it is clearly essential to have insight into all your applications, and into how they exchange data, that is both global and deep.

The Scenario

About ten years ago, our customer acquired the VB6 source code for one of their enterprise applications from a vendor. They refined it and adapted it to their business needs. Over the years, this led to a very complex application that became harder and harder to understand and manage. In addition, the load on the application increased. They started to experience long response times and struggled with numerous SQL statements whose origins were hard and time-consuming to identify. When one application issue took them multiple days to pinpoint, they decided to look for an APM solution to bring visibility and transparency into their application.

The VB6 code is essentially used as a library that is accessed only from their modern .NET application. Thus, they needed an APM solution that not only provided deep transaction visibility into their .NET environment but also offered visibility into their VB6 code. This was crucial: having been adapted to their needs, this hard-to-manage legacy application supported their business activities, and they truly needed to gain insight into it. The customer now has exact information about:

  1. The execution time of about 1,500 functions and subs;
  2. All of their parameter values;
  3. The duration of each SQL statement called from the VB6 code; and
  4. The occurrence of every single one of their approximately 1,700 exceptions, including the throwing class and method as well as the error code and error message.

All of this is embedded in its exact context, i.e. they see the precise sequence of method invocations, including parameter values, that leads to a particular exception or SQL statement.

Knowing VB6 exception occurrences within the exact path, including a detailed error description, error code, and throwing class and method, makes root cause analysis quick and easy

To get this visibility, they used a semi-automated development kit that allowed them to instrument methods, exceptions, and SQL executions. The first thought one might have about such an approach is that the extra code and collection mechanism must cause runtime overhead. Yet the CPU overhead we measured with all instrumentation points in place was surprisingly small.
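To illustrate the idea, here is a minimal Python sketch of what such method instrumentation does conceptually: it wraps a function so that execution time, parameter values, and exception details are recorded. All names and structures here are illustrative assumptions, not the actual development kit's API.

```python
import functools
import time

# Hypothetical in-memory event store standing in for the APM agent's
# collection mechanism.
events = []

def instrument(func):
    """Record execution time, parameter values, and exception details."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        except Exception as exc:
            # Capture the throwing method plus error type and message,
            # analogous to the class/method/error-code data captured in VB6.
            events.append({
                "kind": "exception",
                "method": func.__name__,
                "error": type(exc).__name__,
                "message": str(exc),
            })
            raise
        finally:
            events.append({
                "kind": "call",
                "method": func.__name__,
                "args": args,
                "kwargs": kwargs,
                "duration_ms": (time.perf_counter() - start) * 1000,
            })
    return wrapper

@instrument
def lookup_account(account_id):
    # Hypothetical business function used only for this illustration.
    if account_id < 0:
        raise ValueError("invalid account id")
    return {"id": account_id}

lookup_account(42)
```

Because the wrapper only appends a small record per call, the per-invocation cost stays low, which matches the small CPU overhead described above.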

Our customer took one additional step and instrumented their application with multiple refinement levels. This enables them to manage the captured information and reduce monitoring overhead to 1% in their production environment. They use the finest level in test and development and switch off parameter capturing in production. If they now hit a production issue where they would like to see the parameters, they can easily go back to the test environment and run the test with the finest instrumentation level. This is only possible because they identified the exact path of the transaction through their application in the production environment!
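The refinement-level idea can be sketched as follows: the same instrumentation runs in every environment, but a level switch decides how much detail is kept. The level names and record layout are assumptions for illustration only.

```python
# Hypothetical refinement levels: higher levels keep more detail.
LEVELS = {"production": 1, "test": 2, "development": 3}

current_level = LEVELS["production"]

def record_call(method, duration_ms, params):
    """Keep timing at every level; keep parameter values only above production."""
    event = {"method": method, "duration_ms": duration_ms}
    if current_level >= LEVELS["test"]:
        # Parameter capturing is switched off in production to cut overhead.
        event["params"] = params
    return event

# In production only the timing survives...
prod_event = record_call("GetBalance", 12.5, {"account": "4711"})

# ...while the same call in test/development also keeps the parameters.
current_level = LEVELS["development"]
dev_event = record_call("GetBalance", 12.5, {"account": "4711"})
```

This is why reproducing a production issue at the finest level in test recovers the parameter values that production deliberately dropped.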

Findings

With deep transaction Application Performance Management in place, they can now monitor metrics such as:

  1. The number of database calls issued from .NET and VB6
  2. Response times and response time distribution
  3. The number of database calls matching certain criteria, e.g. those taking longer than 3 seconds
Visibility into database activity for .NET and VB6
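Metrics like these fall out of the recorded call durations. A small sketch of the derivation, using made-up durations in seconds:

```python
# Hypothetical recorded SQL call durations (seconds), for illustration only.
durations = [0.12, 0.45, 3.8, 0.09, 5.2, 0.3, 1.1]

# Total number of database calls.
total_calls = len(durations)

# Calls matching a criterion, e.g. those taking longer than 3 seconds.
slow_calls = [d for d in durations if d > 3.0]

# A coarse response-time distribution by bucket.
buckets = {"<1s": 0, "1-3s": 0, ">3s": 0}
for d in durations:
    if d < 1.0:
        buckets["<1s"] += 1
    elif d <= 3.0:
        buckets["1-3s"] += 1
    else:
        buckets[">3s"] += 1
```

The dashboards shown here present exactly this kind of aggregation, computed continuously over live transactions rather than a fixed list.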

We also capture the same metrics for:

  1. Stored procedure calls (see dashboard below); and
  2. Exception occurrences in VB6
The number of stored procedure calls, their total execution time, and their distribution provide another great insight into application behavior

They can also follow every transaction from .NET into VB6, which allows them to identify the VB6 portion of the response time, which database calls are issued, and which exceptions occur in VB6 for each .NET method execution.

The API breakdown shows which components contribute to response times. The bottom right chart shows that it was about 10% over the last 24 hours, the bottom left that it was approximately 95% at 10:36, and the top chart the actual time, i.e. roughly 4 seconds. Since we are looking at a test environment here, there is no permanent load

Summary

Diversity matters! For a market- and technology-leading APM solution, it is not enough to be the established player for one particular technology. It is crucial to have a cutting-edge product for all technologies that are currently used, or evolving to be used, for business-critical applications. Furthermore, it is important to offer an interface for implementing instrumentation for native applications and proprietary protocols.

For Dynatrace customers: visit our community and download the tool for instrumenting Visual Basic applications!

After a lengthy email correspondence, I was fortunate enough to be able to introduce one of the bank’s performance project leaders to our Product Managers at the recent Perform conference. Together, we discussed future directions for how APM will integrate with legacy technologies – an extremely productive meeting. Leave a comment below if you’d like to contribute to this discussion too!