In Part I of this blog series I gave a general overview of BizTalk – the components that are involved in message processing – and talked about how BizTalk-specific performance counters can help spot problematic areas. In this post we go beyond performance counters (even though we still need them) and take a deep dive into adapters and pipelines.

Step 2: Analyzing BizTalk Adapters

On the incoming or receiving side of BizTalk, adapters receive artifacts; the File Adapter, for example, reads files from disk, which are then handed to and processed by the receive pipeline. On the outgoing or sending side of BizTalk, adapters send artifacts, e.g. by calling a Web Service via SOAP.

BizTalk comes with several out-of-the-box adapters – such as File, HTTP, SOAP, SQL, SNMP, SMTP, FTP, POP3, SharePoint, …. In addition to these, BizTalk can be extended with custom adapters. The default adapters provide different performance counters that indicate how many artifacts or messages have been sent and received by the adapter.

Let’s have a closer look at the File Adapter and its performance counters: we can monitor the number of bytes sent and received, the total number of messages sent and received, and the number of lock failures that occurred. Unfortunately there is not a whole lot of documentation about the impact of the lock failures counter. I assume BizTalk tries to lock each file exclusively and retries if it is not successful – causing a delay in processing the input folder. The number of messages (in this case we are talking about files) and especially the number of bytes per second give us a good indication of whether the adapters are a bottleneck, because the file system can become the bottleneck here.

The following dashboard shows the counters for the File Adapter (top), SOAP Adapter (middle) and the I/O handle count and CPU time (bottom). The BizTalk scenario I monitor uses the File Adapter on both the receiving and sending side as well as a SOAP Adapter to call out to a Web Service. Once a file is received and transformed, a Web Service gets called with the transformed message. The response of this Web Service is written to an output file. In a “perfect world” I would therefore have one SOAP call and one sent file for every received file:

Monitoring FILE and SOAP Adapter Performance

In the top graph we can see how the increasing number of incoming files caused lock failures (be aware that the scale of Lock Failures is 1/100th of the other measures). Here is where I actually hope to get some insight from you, my fellow readers: it seems that BizTalk produces more lock failures/sec (1498) than actual messages received/sec (66). But does this mean that files that couldn’t be locked were actually not processed, or that BizTalk had to retry a couple of times to lock a file before reading it? It also seems to me that these counters are not necessarily accurate anyway. Why? I pushed 700 messages through the system, but this number is nowhere reflected by these performance counters.
The middle graph shows the number of sent SOAP messages. Every incoming message from the file system should trigger a SOAP call. In my case there is a difference of 3 messages, which would indicate that 3 messages were not processed correctly.
The bottom graphs show the I/O handle count of the BizTalk Host Instance as well as the CPU usage. The handle count obviously goes up as the number of messages in the receive location increases. CPU seems to be aligned with the SOAP calls, which also makes sense, as making SOAP calls is more CPU intensive than just reading files from disk.

The problem I have here is that my FILE Receive Adapter shows a high number of lock failures, which indicates a problem with my file system. I also question the performance counters, as they do not reflect the number of messages I sent through the system.
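If you want to keep an eye on these adapter counters outside of a monitoring dashboard, you can also sample them programmatically via System.Diagnostics. The following is only a minimal sketch: the category name "BizTalk:FILE Receive Adapter", the counter names and the host instance name used here are assumptions that you should verify against the counters actually available on your BizTalk server (e.g. in Performance Monitor):

using System;
using System.Diagnostics;
using System.Threading;

class FileAdapterCounterSampler
{
    static void Main()
    {
        // Assumed category/counter/instance names - verify them in Performance
        // Monitor on your BizTalk server before relying on this sketch.
        const string category = "BizTalk:FILE Receive Adapter";
        const string instance = "BizTalkServerApplication";

        using (var messages = new PerformanceCounter(category, "Messages received/Sec", instance, true))
        using (var bytes = new PerformanceCounter(category, "Bytes received/Sec", instance, true))
        using (var lockFailures = new PerformanceCounter(category, "Lock failures/Sec", instance, true))
        {
            // The first NextValue() call on a rate counter returns 0,
            // so sample in a loop and watch the trend.
            while (true)
            {
                Console.WriteLine(
                    "{0:T}  msgs/s={1,8:F1}  bytes/s={2,12:F0}  lock failures/s={3,8:F1}",
                    DateTime.Now, messages.NextValue(), bytes.NextValue(), lockFailures.NextValue());
                Thread.Sleep(1000);
            }
        }
    }
}

Sampled this way, the values can be logged or pushed into whatever monitoring solution you already use, which makes it easier to correlate lock failures with the load you put on the receive location.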

Step 3: Analyze Pipelines

In the Receive and Send Ports of BizTalk you configure which pipeline to use for handling a message before it gets put into the Message Box or before it is sent out. Pipelines have a significant impact on overall BizTalk performance as they perform actions on every single message that goes through BizTalk. Therefore it is important to understand the impact of the pipelines that are used. BizTalk comes with 4 default pipelines: PassThruReceive, PassThruTransmit, XMLReceive and XMLTransmit. If you have your own custom pipeline I recommend reading the article Optimizing Pipeline Performance.
In my scenario every message passes through one instance of each of the 4 default pipeline types. There are no performance counters for pipelines, which leaves us a bit blind from that perspective. You could turn on tracking for your individual pipelines, which will give you some insight into what’s going on within them. Before turning that on, please read the documentation about BizTalk tracking and make yourself familiar with the overhead you inherit with it. Also consider Business Activity Monitoring (BAM) as an alternative.
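If you write custom pipeline components yourself, you can at least measure your own component’s share of the per-message work. The following is only a minimal sketch of a timing wrapper inside a pipeline component’s Execute method; a deployable component additionally needs to implement IComponentUI and IPersistPropertyBag, and the trace target (plain Debug output) is just an assumption for illustration, not a recommendation:

using System.Diagnostics;
using Microsoft.BizTalk.Component.Interop;
using Microsoft.BizTalk.Message.Interop;

// Minimal sketch of a pipeline component that times its own Execute() call.
// A real component also implements IComponentUI and IPersistPropertyBag
// (omitted here for brevity).
public class TimingComponent : IBaseComponent, IComponent
{
    public string Name { get { return "TimingComponent"; } }
    public string Version { get { return "1.0"; } }
    public string Description { get { return "Logs per-message execution time"; } }

    public IBaseMessage Execute(IPipelineContext pContext, IBaseMessage pInMsg)
    {
        var watch = Stopwatch.StartNew();

        // ... the actual per-message work of your component would go here ...

        watch.Stop();
        Debug.WriteLine(string.Format(
            "TimingComponent processed message {0} in {1} ms",
            pInMsg.MessageID, watch.ElapsedMilliseconds));
        return pInMsg;
    }
}

Measured this way you only see your own component’s contribution, not the time spent in the built-in assembler and disassembler stages – which is where a transaction tracing approach comes in.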

One way to analyze what is really going on in pipelines is using an Application Performance Management solution with transactional tracing capabilities. I’ve installed dynaTrace in my BizTalk environment and enabled it while running my 700-message test. The following screenshot shows the pipeline activities dynaTrace traced for each individual pipeline:

BizTalk Activity grouped by Pipeline

Now I really see accurate numbers. My 700 messages made it through all 4 different pipeline types, with XMLTransmit being by far the slowest of them all. A drill into an individual transaction on XMLTransmit shows me that a) I have a huge variance in execution time (several ms to > 1s) and b) the most expensive methods are those that call into the native COM components of BizTalk (yes – big pieces of BizTalk are still “good” old COM):

Individual Pipeline PurePath showing where things are slow in XMLTransmit

If you turn on BizTalk Tracking you can even see those tracking messages along the PurePath (see the Argument column for the TraceMessage calls). The question now is: what is the problem in my scenario? Analyzing the individual PurePaths showed me that a few BizTalk methods contribute more than 90% of the processing time of individual transactions. Checking the execution time range (see the min & max columns) also showed that these methods don’t scale that well. And as BizTalk Tracking was turned on, I now have proof of what the overhead of this feature is (~10%):

What is slow in my pipeline processing?

All these method calls end up calling BizTalk COM components such as the Message Agent (BTSMessageAgent.dll). The lessons learned from this exercise are that a) BizTalk Tracking adds significant overhead to my message processing and b) XMLTransmit is a slow pipeline that spends most of its time in the native COM components.

Next Steps …

In my next blog post I will take a deep dive into the Orchestration Engine to analyze where things can slow down there. We will also look at external services that can get called by your business process. In my case I call a SOAP Web Service, which plays its role in the overall message processing performance.
I am sure that the majority of you are working in a Microsoft .NET centric environment. For topics like ASP.NET, SharePoint, .NET Services, … feel free to check out my White Papers about Continuous Application Performance for Enterprise .NET Systems.

Stay tuned for the next BizTalk blog post, and please share your own experience and best practices.