As a network professional, one of your newer roles is likely troubleshooting poor application performance. For most of us, the job has advanced beyond network “health” toward sharing, if not owning, responsibility for application delivery. There are many reasons for this, most of them more justifiable than the adage that the network is the first to be blamed for performance problems. (Your application and system peers feel they are first to be blamed as well.) Two related trends come to mind:
- Increased globalization, coupled with (in fact, facilitated by) inexpensive bandwidth, means that the network is becoming a more critical part of the business at the same time its primary constraint is shifting from bandwidth to latency.
- Many of the network devices – appliances – that sit in the path between remote offices and data centers are application-fluent, designed to enhance and speed application performance, often by spoofing application behaviors; in fact, many of these have evolved in response to problems introduced by increased network latency.
In an ideal world, your application performance management (APM) solution or your application-aware network performance management (AANPM) solution would automatically isolate the fault domain for you, providing all the diagnostic evidence you need to take the appropriate corrective actions. In reality, this isn’t always the case; intermittent problems, unexpected application or network behaviors, inefficient configuration settings, or simply a desire for more concrete proof mean that manual troubleshooting remains a frequent exercise. And although it may seem as if there is a near-unlimited number of root causes of poor application performance, and that trial and error, guesswork, and finger-pointing are valid paths toward resolution, the truth is much different. In a series of network triage blog posts, I’ll identify the very limited realm of possible performance constraints, explain how to measure and quantify their impact, illustrate them using network packet trace diagrams, and offer meaningful, supportable actions you might evaluate to correct the problem. Understanding how to detect these possible performance problems (there are twelve altogether) will help you troubleshoot faster and more accurately, with greater insight, while collaborating more effectively with your application and system peers.
In this introductory entry, I present the request/reply application paradigm that most of the analyses assume, illustrate key packet-level measurements, and provide a list of the 12 bottleneck categories we’ll discuss in future entries in the series.
Packet Flow Diagrams
Throughout the series, the packet flow diagrams follow these conventions:
- Each arrow represents one TCP packet
- Blue arrows are used to represent data packets
- Red arrows are used to represent TCP ACK packets
- The slope of the arrow represents network delay
- Time flows from top to bottom
We will frequently use the term “operation,” which we define as the unit of work that an application performs on behalf of a user; we sometimes describe it as “click (or Enter key) to screen update.” Business transactions are made up of one or more operations; for example, a user may click through a series of screens (operations) to complete a customer order update. Operations are an important demarcation point, as they represent the unique performance dimension important to the business, to the user, and to IT. The time a user waits for the system to execute an operation impacts business transaction performance and therefore productivity, and is dictated by the performance of lower-level IT-managed hardware, software, and services. Note that this terminology may differ somewhat from that of network probes, which often use the term “transaction” to refer to the session-layer request-response exchanges we discuss next.
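To make the distinction concrete, here is a minimal sketch of an operation spanning two request/reply exchanges; the exchange timestamps are made-up values, not measurements from any real trace.

```python
# Illustrative sketch: an "operation" (click to screen update) spans one or
# more request/reply exchanges; its duration is the time the user waits.
# The exchange timestamps (in seconds) are hypothetical values.

exchanges = [
    {"request_start": 0.00, "reply_end": 0.12},  # e.g., fetch the order header
    {"request_start": 0.13, "reply_end": 0.31},  # e.g., fetch the line items
]

# The operation lasts from the first request until the final screen update.
operation_time = exchanges[-1]["reply_end"] - exchanges[0]["request_start"]
print(f"User waits {operation_time:.2f} s for this operation")
```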
We assume a client/server or request/reply paradigm, with TCP as the transport; this covers virtually all of what we might refer to as interactive business applications. It would include, for example, web-based apps, “fat client” apps, file server access, file transfers, backups, etc. It specifically excludes voice and video streaming as well as the presentation tier of thin-client solutions that use protocols such as ICA and PCoIP.
For each operation, there will be at least one application-level request and one corresponding application-level reply. These can be considered application messages, sometimes referred to as application-layer protocol data units (PDUs). Consider a simple client-server operation. At the application layer, a request message is passed to the client’s TCP stack (TCP socket) for segmentation (into packets), addressing, and transmission; these lower-layer TCP stack functions are essentially transparent to the application. At the receiving end (the server), the data from the network packets is reassembled into the application-layer message and delivered to the listener service for processing. Once processing is complete, the server application passes the reply message to the server’s TCP stack, and the message contents are similarly segmented and transferred across the network to the client. The performance of these request/reply message exchanges is constrained by two factors: message processing (at the server or client) and message transmission (across the network).
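This request/reply exchange can be sketched with a toy example. The length-prefixed framing here (a 4-byte length header) is my own assumption for illustration; real applications define their own message formats, and the TCP stack handles segmentation into packets transparently beneath these calls.

```python
import socket
import struct
import threading

def send_message(sock, payload: bytes):
    # The application hands the whole message to the TCP stack; segmentation
    # into packets happens below this layer, transparently.
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_exact(sock, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-message")
        buf += chunk
    return buf

def recv_message(sock) -> bytes:
    # Reassemble the full application-layer message from the TCP byte stream.
    (length,) = struct.unpack(">I", recv_exact(sock, 4))
    return recv_exact(sock, length)

def server(listener):
    conn, _ = listener.accept()
    with conn:
        request = recv_message(conn)                # wait for the entire request flow
        send_message(conn, b"reply to " + request)  # then send the reply message

listener = socket.create_server(("127.0.0.1", 0))
threading.Thread(target=server, args=(listener,), daemon=True).start()

with socket.create_connection(listener.getsockname()) as client:
    send_message(client, b"request")
    reply = recv_message(client)
listener.close()
print(reply)
```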
It is helpful, then, to consider this request/reply message exchange as the basis for performance analysis; the reassembled messages represent our network-centric insight into the application, while the packets visible in the trace file inform us how efficiently the network transports these messages.
From Application Message to Network Packets
Most application-layer messages will require more than one data packet, as the content is typically larger than the maximum segment size (MSS), the TCP payload available per packet, commonly 1460 bytes. The packets associated with a request or reply message can be considered a flow; grouped together, their payloads represent the entire application-level message.
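A quick back-of-the-envelope sketch of that segmentation, assuming the common 1460-byte MSS (a 1500-byte Ethernet MTU minus 40 bytes of IP and TCP headers):

```python
import math

MSS = 1460  # typical TCP payload bytes per packet (1500-byte MTU - 40 header bytes)

def packets_in_flow(message_bytes: int, mss: int = MSS) -> int:
    """Number of data packets needed to carry one application message."""
    return max(1, math.ceil(message_bytes / mss))

print(packets_in_flow(4000))  # a 4,000-byte message needs 3 data packets
print(packets_in_flow(500))   # a small request still needs 1 packet
```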
The diagrams below illustrate a simple operation composed of two request/reply exchanges. The first is an application-centric view, showing the request and reply message exchanges while abstracting away the network packets. The second reveals the underlying network packet flows, where each message is transported in 3 data packets.
Allocating Operation Time
As we look to analyze application performance in general and operation performance in particular, we will want to examine the factors that influence the transfer of an operation’s request and reply messages across the network, distinguishing these from the node delays associated with client and server processing. Expanding the detail of our request/reply flow diagram, we can allocate an operation’s total delay into four initial categories:
- Client message sending time
- Server processing time
- Server message sending time
- Client processing time
The server node processing measurement begins at the point when the server has received the last packet in the client request flow (Callout 1); this packet represents the end of the request message. The server processing delay ends with the first packet of the reply flow (Callout 2); this packet represents the beginning of the reply message.
The server node sending measurement begins as the server transmits the first packet of the reply flow (Callout 2), and ends with the last packet of the reply flow (Callout 3); this flow, or grouping of packets, represents the entire message as transported across the network. Since the measurement is taken from the server’s perspective, we don’t include the time it takes for the last packet to traverse the network in the calculation of server sending time. Client node processing and node sending measurements are similarly calculated as illustrated.
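Given data-packet timestamps from a trace, these allocations reduce to simple subtractions. The sketch below is a hypothetical illustration that assumes a single capture point near the server and ignores ACK packets; the timestamps and field names are made up.

```python
def allocate_delays(request_ts, reply_ts, next_request_start=None):
    """Allocate one request/reply exchange's time into delay categories.

    request_ts / reply_ts: sorted data-packet timestamps (seconds) for the
    request and reply flows, as seen at a capture point near the server
    (so network transit is excluded from the server-side measurements).
    """
    delays = {
        "client_sending":    request_ts[-1] - request_ts[0],
        "server_processing": reply_ts[0] - request_ts[-1],  # callout 1 -> 2
        "server_sending":    reply_ts[-1] - reply_ts[0],    # callout 2 -> 3
    }
    if next_request_start is not None:
        # Client processing runs from the end of the reply flow until the
        # client sends the next request.
        delays["client_processing"] = next_request_start - reply_ts[-1]
    return delays

# Three-packet request and reply flows, as in the diagrams:
delays = allocate_delays(
    request_ts=[0.000, 0.001, 0.002],
    reply_ts=[0.052, 0.053, 0.054],
    next_request_start=0.060,
)
for name, seconds in delays.items():
    print(f"{name}: {seconds * 1000:.1f} ms")
```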
Performance Analysis Framework
We can use the basic measurements described above to introduce a performance analysis framework. This framework identifies 12 potential performance bottlenecks related to client, network, or server constraints. A few of these are uncommon, in that they are either not likely to occur or unlikely to have a significant impact on performance; they are included here for completeness, and because troubleshooting often implies diagnosing the unexpected. (If it were simple, you wouldn’t be reading these blogs.) Here is the checklist of potential performance bottlenecks:
Bottleneck Analysis Checklist
Server Processing Delays
Client Processing Delays
Receiver Flow Control (Window 0)
Window and the Bandwidth Delay Product
Chattiness and Latency
Starved for Data
The Nagle Algorithm
In this blog entry we covered two potential performance bottlenecks: Server and Client Processing Delays. In the upcoming blog entry, Part II of this series, we will look at performance constraints related to bandwidth and congestion. Stay tuned and feel free to comment below.