While the term “wire data” may be relatively new, the concept isn’t; probes and capture agents have been collecting and examining network traffic almost from the beginning of network time. The term has been used casually by many network analysts, is particularly popular with Wireshark users, and – in a very casual Google search – was referenced in vendor documentation as early as March 2012. In fact, wire data has a rich, progressive 20+ year history and a bright future driven by analytics. All the more curious, then, that vendors staking ownership claims appear overly and artificially offended.

Network General – of Sniffer fame – did a lot to promote the insights available from network packets, first by decoding packet headers, and later by attempting to turn packet data into monitoring information with some level of application insight (primarily port association, but also via pattern-matching within application headers).

Not to digress into a history lesson, but in the 1990s, a few companies began applying transactional insights to wire data – essentially creating decodes that could identify, extract, and correlate application-specific request and response messages. Others began extracting metadata from packet payloads – such as files accessed, user names, and database instances. Suddenly, wire data could offer more than just network performance; it had often-compelling application performance relevance. (As an aside, just what is “network performance” anyway? I’ll save that for a future blog.)

Extensions to simple request-response transactional insights came in rather rapid progression: userid, application error information, database instance, filenames, device types, application versions, customer names/numbers, and more. Today, wire data is generally considered to be more than just the information extracted via a packet-level decode; analytics transform wire data into wire information. Equally important is the ability to understand user session state; reconstructing complex transaction steps and gaining insights into user visits or click paths are two clear examples, but other types of metadata may also require session-state awareness. For example, it is possible (simple, even) to extract an error code as a string; this information becomes significantly more valuable if it can be associated with a userid, which may only be visible at the beginning of a session. Voila! Business relevance – and the progressive march of insights available from wire data continues.
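To make the session-state point concrete, here's a minimal sketch in Python (not any vendor's implementation; the field names session_id, userid, and error_code are illustrative assumptions). It assumes a decode step has already produced per-message fields, and simply remembers the userid seen at the start of each session so that later errors can be tied back to it:

```python
# Minimal sketch of session-state enrichment (illustrative field names).
# The userid seen at the start of a session is remembered so that later
# error records, which carry only a session ID and an error string, can be
# reported with business context.

def enrich_errors_with_userid(messages):
    """Yield error events enriched with the userid captured at session start."""
    session_users = {}  # session_id -> userid observed at login
    for msg in messages:
        if msg.get("userid"):                      # start-of-session message
            session_users[msg["session_id"]] = msg["userid"]
        if msg.get("error_code"):                  # error seen later in the session
            yield {
                "session_id": msg["session_id"],
                "userid": session_users.get(msg["session_id"], "unknown"),
                "error_code": msg["error_code"],
            }

# Toy decoded message stream: the userid is only visible in the first message.
decoded = [
    {"session_id": "s1", "userid": "jsmith"},
    {"session_id": "s1", "error_code": "ORA-00942"},
    {"session_id": "s2", "error_code": "HTTP 500"},  # no login seen: stays "unknown"
]

for event in enrich_errors_with_userid(decoded):
    print(event)
```

A real product would have to keep this state across long-lived sessions, load balancers, and encrypted identifiers, but the principle is the same: session state turns an anonymous error string into an answer to the question of who was affected.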

Who’s the Customer?

Early vendors, however, struggled with an inherent conflict: is this a network tool or an application tool? Network teams and application teams were cordoned off to opposite ends of the IT department, with fundamentally different charters. Why would the network team want responsibility for — or care about — application performance? What would the application team do with network data – could that source be trusted? In fact, the hostility between the two groups all too often led to aborted sales cycles: value without an owner.

Thankfully, that has changed (or is well on the way to changing). The lines between the network, the server, and the application (and the teams that support them) continue to blur in the modern dynamic data center, just as we saw the lines all but disappear between once-separate teams supporting LAN and WAN or data and voice. Buying centers are converging, and cross-functional, collaborative solutions are increasingly common. Finally, the network is the computer.

Wire Data and Application Fluency

Gartner suggests wire data is a way to help solve a key challenge: the difficulty of inferring the overall quality of a complex and volatile system from subsets of data. The suggestion stems from a recommendation to focus your approach to availability and performance on assessing the end-user experience. The network should be exploited as “a source of information and springboard for analysis of the entire IT infrastructure and application portfolio in production….” If that’s not a clear call for network teams to become application fluent, I don’t know what is.

What’s behind this recommendation? It’s a theme I’ve mentioned recently: the exploding complexity of application architectures combined with increasingly dynamic services. Together, these make it virtually impossible to gauge service quality from infrastructure and service monitoring alone, demanding instead the use of end-user experience (EUE) measurements as the primary quality metric.

There’s another factor adding to the importance of wire data: the multi-modal spectrum from stability to agility, the tension between new stack architectures and legacy platforms – and everything in between. The result is that wire data becomes even more critical to understanding EUE, given its potential to support multiple application protocols and its independence from client device types and server platforms.

Wire Data Access Challenges

Probe-sourced wire data has some clear logistical advantages: no agents to install, no APIs to maintain, and no impact on system or network performance. But the dynamic data center introduces new challenges as well, commensurate with the complexity of the application architecture and the intended uses of wire data. At the risk of over-simplification, there are two general use cases.

  • Wire data is used to understand the end user’s experience. This is a primary use case, and the one emphasized by Gartner. A probe’s ability to understand network behaviors adds important performance insights; after all, a network-related bottleneck is more likely on the WAN than inside the data center fabric.
  • Wire data is used to understand inter-tier transaction performance. Fundamentally, this is a fault-domain isolation use case, where the probe’s ability to transform – reassemble – wire data into application transactions for multiple protocols can pay significant dividends, provided these measurements can be correlated with the user’s experience (see the sketch after this list).
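To illustrate the kind of correlation the second use case depends on, here's a simplified Python sketch, not a description of any particular product. It assumes the reassembled tier-level records share a transaction ID with the front-end measurement; that ID is a hypothetical correlation key standing in for whatever header field or time-window logic a real analysis would use.

```python
# Simplified fault-domain isolation sketch: attribute a user transaction's
# response time to the tiers behind it. Assumes tier-level records share a
# transaction ID with the front-end measurement (a hypothetical correlation
# key used here purely for illustration).

from collections import defaultdict

user_txn = {"txn_id": "t-42", "response_time_ms": 1800}  # end-user measurement

tier_spans = [  # inter-tier measurements reassembled from wire data
    {"txn_id": "t-42", "tier": "web->app", "duration_ms": 120},
    {"txn_id": "t-42", "tier": "app->db",  "duration_ms": 1500},
    {"txn_id": "t-99", "tier": "app->db",  "duration_ms": 300},  # another user's txn
]

def attribute_time(user_txn, tier_spans):
    """Return per-tier time for one user transaction plus the unattributed remainder."""
    per_tier = defaultdict(float)
    for span in tier_spans:
        if span["txn_id"] == user_txn["txn_id"]:
            per_tier[span["tier"]] += span["duration_ms"]
    per_tier["other/unattributed"] = user_txn["response_time_ms"] - sum(per_tier.values())
    return dict(per_tier)

print(attribute_time(user_txn, tier_spans))
# -> {'web->app': 120.0, 'app->db': 1500.0, 'other/unattributed': 180.0}
```

However the correlation is done, the payoff is the same: a slow user transaction can be broken down by tier, pointing the investigation at the right team.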

For this first use case – end-user experience – the probe needs to see user traffic entering and leaving the data center, making probe placement decisions relatively simple. To support the second use case, the probe needs to see communications between servers. While just a few short years ago this wasn’t too difficult, modern data center architectures present new challenges; capturing inter-VM traffic within the same hypervisor and following VM migrations are two prime examples. As you might expect, network packet broker vendors such as Ixia are stepping up to meet these challenges.

Path to Wire Data Value: Extract, Analyze, Present

There are three fundamental steps to wire data value. The first is to extract the data of interest. By itself, this step has no inherent value; it must be combined with steps two and three. Step two is to analyze the data; approaches include correlation, pattern discovery, anomaly detection, and prediction. Step three is to make the analyses consumable, presenting the resulting insights in role-relevant dashboards. Whether you choose to build your own or opt for a turn-key solution such as Dynatrace DC RUM, each of these steps is required to realize the value of wire data.
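For those leaning toward building their own, here's a toy Python sketch of the three steps under heavy simplifying assumptions: extraction is stubbed out with pre-decoded transaction records, the analysis is a naive comparison against a per-URL average, and the "dashboard" is just a printed summary. Real analytics engines obviously do far more, but the shape of the pipeline is the same.

```python
# Toy extract -> analyze -> present pipeline for wire data (illustrative only).
# "Extraction" is stubbed out with pre-decoded transaction records; a real
# probe would reassemble these from packets on the wire.

from statistics import mean

# Step 1: extract -- per-transaction records assumed to come from the decode step.
transactions = [
    {"url": "/login",    "response_ms": 210},
    {"url": "/login",    "response_ms": 190},
    {"url": "/checkout", "response_ms": 450},
    {"url": "/checkout", "response_ms": 2300},  # candidate slowdown
]

# Step 2: analyze -- a naive check for responses well above the per-URL average.
def analyze(records, factor=1.5):
    by_url = {}
    for rec in records:
        by_url.setdefault(rec["url"], []).append(rec["response_ms"])
    findings = []
    for url, times in by_url.items():
        baseline = mean(times)
        worst = max(times)
        if worst > factor * baseline:
            findings.append((url, baseline, worst))
    return findings

# Step 3: present -- a stand-in for a role-relevant dashboard.
for url, baseline, worst in analyze(transactions):
    print(f"{url}: worst {worst} ms vs. baseline {baseline:.0f} ms - investigate")
```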