For decades, mainframe experts have been tasked with tuning their systems, applications, databases and individual DB2 SQL executions. Traditional monitoring and Application Performance Management (APM) tools provide useful insight to support these tasks. APM for the Mainframe is therefore well established and saves more money by optimizing MIPS and MSUs than these performance tools cost to run. But there is more to APM than optimizing individual DB calls and CPU cycles.
Tackling the same architectural and performance problems as in the distributed world
The new generation of APM for the Mainframe is tasked with additional questions such as: Do we really need all these DB2 calls? Can’t we eliminate duplicated SQL executions even though the individual executions have been highly optimized? Who is making these calls into the Mainframe? Why does the distributed world make duplicated calls to the Mainframe?
A customer told us that he needed answers to these questions after he came to the conclusion that “highly optimized” on the Mainframe still means that unnecessary calls were being made and the database was still consuming CPU cycles. Based on his research he concluded that tracing is the way to identify unnecessary SQL executions, and that caching, eliminating and combining calls is the way to reduce SQL executions.
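The caching idea can be sketched as per-transaction memoization: within one logical transaction, each distinct statement (statement text plus host-variable values) is executed against DB2 only once, and repeated calls are served from memory. This is a minimal illustration, not the customer's actual implementation; the class and method names (`StatementCache`, `getOrQuery`) are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

// Hypothetical sketch: a per-transaction cache that executes each distinct
// SQL lookup only once and returns the cached result on repeated calls.
public class StatementCache {
    private final Map<String, Object> results = new HashMap<>();

    // Executes 'query' on the first call for 'sqlKey'; later calls with the
    // same key are served from the cache without touching the database.
    public Object getOrQuery(String sqlKey, Supplier<Object> query) {
        return results.computeIfAbsent(sqlKey, k -> query.get());
    }

    public static void main(String[] args) {
        StatementCache cache = new StatementCache();
        AtomicInteger executions = new AtomicInteger();

        // Simulate three identical lookups within one logical transaction.
        for (int i = 0; i < 3; i++) {
            cache.getOrQuery("SELECT NAME FROM CUST WHERE ID = 42",
                    () -> { executions.incrementAndGet(); return "ACME"; });
        }
        // Only one real DB2 execution happened; two calls hit the cache.
        System.out.println("executions=" + executions.get()); // prints executions=1
    }
}
```

The cache must be scoped to a single transaction (or given an explicit invalidation strategy) so that stale data is never served across transaction boundaries.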
These may seem like familiar application performance conclusions. They are! But mainframes add a special element to application performance management in distributed applications, as shown below.
The “manual trace” through the Mainframe
Let’s take a deeper look into these options to further optimize DB2 usage patterns in a Mainframe environment. To analyze logical transactions on the Mainframe, we started by analyzing SMF 110 records and combining them with the SQL call statistics provided by the SQL Monitor. For more detail, the DB2 Detail Trace option can be turned on. This trace option provides a more granular view of the programs executed, the SQL statements executed and the host variables involved. Based on that information, our customer was able to manually create a diagram showing an “End-to-End” view from transactions to DB2 calls. The following example shows a generated diagram (with sanitized data) which really helps Mainframe experts understand what type of interaction is going on in their system:
The next step is to “normalize” this call graph to make it easier to read and to highlight the potential performance optimization hotspots. Below is an illustration of the normalized diagram:
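The normalization step can be thought of as collapsing repeated caller-to-statement edges from the trace into a single edge that carries an execution count. The following is a simplified sketch under that assumption; the class name `CallGraphNormalizer` and the edge representation are hypothetical, not taken from any tool:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: collapse duplicate transaction -> SQL-statement edges
// from a raw trace into one edge per pair, annotated with a call count.
public class CallGraphNormalizer {
    public static Map<String, Integer> normalize(List<String[]> tracedCalls) {
        Map<String, Integer> edges = new LinkedHashMap<>();
        for (String[] call : tracedCalls) {
            String edge = call[0] + " -> " + call[1]; // caller -> statement
            edges.merge(edge, 1, Integer::sum);       // count duplicate edges
        }
        return edges;
    }

    public static void main(String[] args) {
        List<String[]> traced = List.of(
                new String[]{"TXN01", "SELECT ... FROM CUST"},
                new String[]{"TXN01", "SELECT ... FROM CUST"},
                new String[]{"TXN01", "SELECT ... FROM ORDERS"});
        // The duplicated CUST lookup collapses into one edge with count 2.
        System.out.println(normalize(traced));
    }
}
```

In the normalized diagram the counts on the edges, rather than the sheer number of arrows, point the expert at the duplicated executions worth eliminating.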
Based on the analysis and the data shown in the diagram, our customer was able to reduce the number of SQL executions from 90 to 60 per end-user transaction. That is a 33% reduction in SQL calls. The following chart visualizes the number of SQL calls over time and shows how applying the changes reduced the SQL calls:
Reducing the number of SQL executions also had a positive impact on overall CPU usage and response time. We saw about 2-3% less CPU usage, which directly translates into saved costs on the Mainframe. The following chart shows CPU utilization over the same timeframe as the chart above:
The facts are clear on why it pays off to go through a rather cumbersome and manual process of analyzing transactions and their DB interactions:
- 2-3% CPU Savings
- 10-15% CPU savings in case a given Mainframe application later gets ported to Java. Tuning applications on the Mainframe is therefore a good investment for future porting projects.
- This use case has additional optimization potential as there are many services that are called multiple times resulting in too many DB calls
The first questions that came up after going through the manual exercise explained above were, “Can I have more of these?” and “How can we automate that?”
Compuware automates End-to-End Tracing through the Mainframe
The scenario described above shows how valuable it is to get transactional insight into the Mainframe world. Combining that with the transactional view of the distributed world allows Compuware customers to follow transactions from the web browser all the way through the Mainframe into DB2. The following image shows the Transaction Flow visualization, provided by Dynatrace using its PurePath for z/OS, which helps operators gain insight into the Mainframe world. The image also highlights the problem pattern discovered by our customer, where about 390 DB calls are executed per transaction:
Focusing on these database calls shows us which DB calls get executed on which connection and how often these statements get called per single transaction. In the following screenshot we see that the top three statements are called up to 209 times per individual transaction. That means there’s great potential to optimize:
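The ranking behind such a view boils down to counting how often each statement occurs within a captured transaction and sorting descending. Here is a small sketch of that aggregation; the class name `HotStatementRanker` and the sample statements are illustrative assumptions, not part of the product:

```java
import java.util.Comparator;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical sketch: count executions per SQL statement within one captured
// transaction and keep only the most frequently called statements.
public class HotStatementRanker {
    public static Map<String, Long> rank(List<String> statements, int top) {
        return statements.stream()
                .collect(Collectors.groupingBy(s -> s, Collectors.counting()))
                .entrySet().stream()
                .sorted(Map.Entry.<String, Long>comparingByValue(Comparator.reverseOrder()))
                .limit(top)
                .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue,
                        (a, b) -> a, LinkedHashMap::new));
    }

    public static void main(String[] args) {
        // Simulated statement keys captured for one transaction.
        List<String> calls = List.of("Q1", "Q2", "Q1", "Q1", "Q3", "Q2");
        System.out.println(rank(calls, 2)); // prints {Q1=3, Q2=2}
    }
}
```

Statements with the highest per-transaction counts are the natural first candidates for the caching, eliminating and combining techniques described earlier.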
After the DB statements that are called very frequently are identified, it is only one click to drill down to the actual transactions that make these calls. The following screenshot shows one of the PurePaths which represents the End-to-End transaction starting in the Java Application Server – going through Message Broker all the way to the Mainframe and into DB2. The PurePath contains enough contextual information for the engineers to identify which transaction and which code is responsible for the multiple DB2 calls:
The automated capture of every end-to-end transaction and the built-in analysis capabilities of the Dynatrace solution make it easy to identify performance and architectural hotspots, not only in the Mainframe itself but also in the distributed applications, e.g. Java and .NET, that call into the Mainframe. More efficient mainframe transactions mean lower resource consumption and faster response times for the end user. In the case of the customer example above, this leads to saved operating costs (saved MIPS) on the Mainframe as well as higher transaction throughput in the distributed environment that connects to it.
Interested in more? Visit our Compuware PurePath for z/OS website.