Kingshuk Mandal
Senior Director & Distinguished Technologist

In-band Network Telemetry - More Insight into the Network

March 1, 2019 by Kingshuk Mandal


In an information- and service-based economy, the demand for “always accessible services” is ever increasing. Momentary unavailability of a network is a significant concern for any modern data center. Hence, network monitoring is of paramount importance and acts as a first line of defense to ensure non-disruptive and optimized network function. Traditionally, operators monitor a network by placing taps in it and collecting packets. But this approach comes with limitations, including the number of taps you can deploy, whether data center east-west traffic flows through the taps, and the inability to track which path a packet traversed or how well each switch on that path treated it.

This is where in-band network telemetry (INT) provides a much-needed solution. INT provides the abstractions that enable each data packet to query every switch it traverses about internal state, such as egress queue occupancy at that point in time, the identity of the switch, the outgoing link used, and the switch’s packet-processing latency. This information is recorded into the data packet itself as it traverses the network. Because the internal state of the switches is recorded in the data packets themselves, this mechanism is called in-band telemetry.


Figure 1: A telemetry-capable network

Referring to the figure above, a telemetry-capable network has a few components. The first one is a telemetry source. This is a trusted network entity that creates and inserts telemetry headers into the packets it sends. The telemetry headers contain, at a minimum, instructions indicating what data to collect. Every telemetry transit hop on the path of the packet is a network node that appends its telemetry data to the telemetry header. In the above figure, router 1 is the telemetry source for the red and green flows. Router 2 is a telemetry transit hop. Finally, a telemetry sink is the network node that extracts the telemetry headers and collects the path state contained in them. The telemetry sink is responsible for removing telemetry headers to make telemetry transparent to upper layers and to the end user. A telemetry sink may send the collected metadata to a central analytics engine to extract actionable insight. In the above figure, node 4 acts as the telemetry sink.
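The three roles described above can be sketched in a few lines of Python. This is a conceptual model only: the packet is a plain dict and the field names (`payload`, `int_header`) are illustrative, not taken from any wire format.

```python
# Minimal sketch of the three INT roles: source, transit hop, and sink.
# Packets are modeled as dicts; field names are illustrative.

def source_insert(packet, instructions):
    """Telemetry source: attach a telemetry header with collection instructions."""
    packet["int_header"] = {"instructions": instructions, "metadata": []}
    return packet

def transit_append(packet, node_id, state):
    """Transit hop: append the requested metadata for this node."""
    hdr = packet.get("int_header")
    if hdr is not None:
        record = {"node_id": node_id}
        for item in hdr["instructions"]:
            record[item] = state[item]  # e.g. hop latency, queue depth
        hdr["metadata"].append(record)
    return packet

def sink_extract(packet):
    """Sink: strip the telemetry header so upper layers never see it,
    and return the collected per-hop metadata for the analytics engine."""
    hdr = packet.pop("int_header", None)
    return packet, (hdr["metadata"] if hdr else [])

pkt = source_insert({"payload": b"app-data"}, ["hop_latency"])   # router 1
pkt = transit_append(pkt, node_id=2, state={"hop_latency": 350}) # router 2
pkt = transit_append(pkt, node_id=3, state={"hop_latency": 900})
clean, records = sink_extract(pkt)                               # node 4
print(clean)    # telemetry is transparent: only the payload remains
print(records)  # per-hop metadata handed to the analytics engine
```

The key property the sketch demonstrates is transparency: after the sink runs, the packet is byte-for-byte what the application sent, while the analytics engine still receives the full per-hop record.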

In any active network, a transient micro-loop can occur for a very short period because of a router failure or because the network takes time to converge on a new route. A telemetry-capable network can give an indication of these micro-loops, which traditional flow monitoring may fail to detect. In the telemetry recordings, you may see a flow reaching the destination with N telemetry records and their node IDs. Then, for a short duration, a few packets may arrive with N+P telemetry records, with the same node ID(s) repeated. This indicates that a transient micro-loop occurred for a short period. The analytics engine can point out where and when the micro-loop occurred.
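The detection logic described above reduces to checking for a repeated node ID in a packet’s telemetry records. A minimal sketch, with illustrative node IDs:

```python
# Sketch: flag transient micro-loops from collected telemetry records.
# Each packet's telemetry is reduced to a list of node IDs in traversal
# order; a node ID appearing more than once means the packet revisited
# a switch, i.e. it looped.

def has_micro_loop(node_ids):
    return len(node_ids) != len(set(node_ids))

def find_loops(per_packet_paths):
    """Return (packet_index, path) for every packet that looped."""
    return [(i, p) for i, p in enumerate(per_packet_paths) if has_micro_loop(p)]

paths = [
    [1, 2, 4],        # normal: N = 3 records
    [1, 2, 3, 2, 4],  # N + P records, node 2 repeated -> micro-loop
    [1, 2, 4],
]
print(find_loops(paths))  # [(1, [1, 2, 3, 2, 4])]
```

In practice the analytics engine would also timestamp the offending packets, giving the “where and when” of the micro-loop.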

Now, let’s discuss in detail some practical applications of data center telemetry. Managing a large data center network is a daunting task because of traffic volume and complex networking device software stacks. Network faults are unavoidable and diverse in nature: packet drops, latency spikes, low throughput, and micro-loops. The existing tools to troubleshoot such scenarios, such as ping, traceroute, and device counters, are insufficient.
Let’s study two example scenarios and compare how traditional monitoring tools fail to debug such scenarios and how in-band telemetry does it with precision without much overhead to the network.

Case Study 1: Intermittent and Silent Packet Drop

Often, applications cannot reach their servers, resulting in a timeout followed by multiple retries. Shortly after, one of the retries succeeds and the application resumes functioning. This intermittent packet drop may not have a visible impact at the surface, but if ignored, it may worsen and lead to network failures later. So, it is important, as well as tricky, to detect exactly where packets are being dropped silently in the network.


Figure 2: Intermittent and silent packet drops are difficult to find, but often lead to future failure points

Debugging Using Existing Tools (Counters, Traceroute)

  1. One method is to check every device’s packet-drop counter along the entire path between the client and the server. But first, it will not show significant drops because the issue happens intermittently. Second, the drop counter will not tell you which flow or application was affected. Third, there are too many devices to check. And finally, there are multiple paths between the client and server, so you will not know which path to debug. This is not feasible for a sizable network, and the results are not conclusive enough to pinpoint the exact point of failure.
  2. Traceroute may be another option, but it is expensive in terms of CPU resources. A bigger problem is that the traceroute messages may travel along a different path than the one taken by the data packets, so you may get a false positive or negative.

Debugging Intermittent Packet Drop Using In-band Telemetry

  1. Packet-level in-band telemetry information is inserted into each data packet at select hops as it traverses the network. When needed, telemetry information can be collected at every hop. This makes in-band telemetry flexible in the amount of metadata recorded in the data packet.
  2. The recorded metadata shows link utilization of the nodes in the path traveled. To an analytics engine (telemetry processor), the link utilization data can give an indication of which links are experiencing higher latency and may start dropping packets soon. To prevent packet drops, the flow can be routed differently.
  3. Some vendors are proposing a special mode of telemetry reporting called postcard mode. If an intermediate switch encounters an unexpected event, such as a packet drop, it can send a postcard notification directly to the telemetry processor.
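The second step above, spotting links that are likely to start dropping packets, can be sketched as a simple scan over the per-hop records. The 90% threshold and the record field names are illustrative assumptions, not from any spec:

```python
# Sketch of the analytics-engine step: scan per-hop egress link
# utilization carried in telemetry records and flag links running hot,
# so the flow can be rerouted before drops begin. Threshold is illustrative.

HOT_THRESHOLD = 0.90  # 90% egress link utilization

def hot_links(records, threshold=HOT_THRESHOLD):
    """records: per-hop entries of {node_id, egress_port, utilization}."""
    return sorted({(r["node_id"], r["egress_port"])
                   for r in records if r["utilization"] >= threshold})

telemetry = [
    {"node_id": 1, "egress_port": 7, "utilization": 0.41},
    {"node_id": 2, "egress_port": 3, "utilization": 0.96},  # reroute candidate
    {"node_id": 4, "egress_port": 1, "utilization": 0.55},
]
print(hot_links(telemetry))  # [(2, 3)]
```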

In-band network telemetry provides greater visibility into the network, enabling the network operator to be proactive in preventing disruptions. 

Case Study 2: Microbursts Caused by Network Congestion

When there is congestion in the network, the network operator needs to find out which node has surpassed the latency watermark. For example, if the baseline latency in a switch is a few hundred nanoseconds and suddenly rises to microseconds, there is most likely congestion in the network. When this happens, you’ll need to know which nodes contributed to the congestion, which application is the aggressor flow that created the microbursts, and which other application flows are suffering because of it. For example, in Figure 3, the red link between the top-of-rack (TOR) and spine switch has become congested. The network monitor needs to pinpoint these congestion points with precision.


Figure 3: Pinpointing congestion points with precision
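The latency-watermark check described above is straightforward once per-hop latency is available in the telemetry records. A minimal sketch, with an illustrative 1 µs watermark and made-up latency values:

```python
# Sketch: detect which hop surpassed the latency watermark. Baseline is
# a few hundred nanoseconds per switch; a jump into the microseconds
# points at congestion. Watermark and values are illustrative.

WATERMARK_NS = 1_000  # 1 microsecond

def congested_hops(per_hop_latency_ns, watermark=WATERMARK_NS):
    """per_hop_latency_ns: {node_id: hop latency in ns} from one packet's records."""
    return [node for node, ns in per_hop_latency_ns.items() if ns > watermark]

sample = {1: 280, 2: 310, 3: 8_400, 4: 295}  # node 3 jumped to 8.4 us
print(congested_hops(sample))  # [3]
```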

Debugging Using Existing Tools (Device Counters, Ping)

  1. Device counters do not give a proper indication as they are aggregated over time and flow.
  2. Ping can indicate that the network is experiencing higher latency than before. But it cannot give hop-by-hop information, like the forwarding latency of each networking device in the network. Hence, it cannot pinpoint where the problem lies.
  3. A multipath network ping can give a false positive or negative if the ping packets and data packets traverse through different paths. 

Debugging Using In-Band Telemetry

  1. Using in-band telemetry, you can get hop-by-hop information such as forwarding latency or queue occupancy of each node at a per-flow level granularity.
  2. So, a thorough congestion analysis can be done when there is a problem in the network. The internal state (queue depth, egress link utilization) of every node will pinpoint the location where this specific flow experienced a problem.
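The per-flow, hop-by-hop analysis described above can be sketched as follows. For each flow we find the node where its packets saw the deepest queue; when the aggressor and the victim flows both point at the same node, the congestion point is pinpointed. Field names and values are illustrative:

```python
# Sketch: per-flow congestion analysis from hop-by-hop queue-occupancy
# telemetry. Returns, for each flow, the node where it saw the deepest
# queue and that queue's depth.

def worst_hop_per_flow(records):
    """records: telemetry entries of {flow, node_id, queue_depth}.
    Returns {flow: (node_id, queue_depth)} for the deepest queue seen."""
    worst = {}
    for r in records:
        cur = worst.get(r["flow"])
        if cur is None or r["queue_depth"] > cur[1]:
            worst[r["flow"]] = (r["node_id"], r["queue_depth"])
    return worst

telemetry = [
    {"flow": "red",   "node_id": 2, "queue_depth": 9500},  # burst at node 2
    {"flow": "red",   "node_id": 4, "queue_depth": 120},
    {"flow": "green", "node_id": 2, "queue_depth": 8700},  # victim, same queue
    {"flow": "green", "node_id": 4, "queue_depth": 90},
]
print(worst_hop_per_flow(telemetry))  # both flows point at node 2
```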

Telemetry Test Scenarios Using IxNetwork

Before deployment, it is important to thoroughly test and measure the efficiency of a network node implementing telemetry functions. Ixia’s IxNetwork is a test solution that generates different types of telemetry data packets that can be configured and transmitted at line rate. IxNetwork currently supports the following encapsulation types with INT headers and metadata:

  • INT over Geneve (as Geneve option)
  • INT over UDP (as payload)

The IxNetwork test tool can simulate the following INT use cases to validate an INT-capable device under test (DUT):

  • Validate the DUT’s capability to append INT metadata

As a data packet with INT information traverses the network, every INT transit node pushes its own INT information into the data traffic. The DUT, as shown in Figure 4, is playing the role of an INT transit hop. After receiving a data packet with an INT header, it checks the “Instruction Bitmap” to decide which metadata needs to be recorded and pushes it onto the top of the INT metadata stack accordingly. It must also calculate and update the transport protocol’s payload length and the INT length in the data packet. It is important to verify whether the DUT can do all of this flawlessly under different load conditions.

Figure 4: Validating a device’s ability to append INT metadata

IxNetwork offers functionality to configure and send data traffic with multi-node INT metadata of different types. After receiving the packet, the DUT inserts its own metadata and forwards it towards the INT sink emulated by another IxNetwork port. The INT header conformance can be validated from the packet capture by IxNetwork (refer to Figure 4).
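The transit-hop steps the DUT must perform can be sketched with plain Python structures. The bit positions follow the ordering in the INT v0.5 draft (bit 0 = switch ID, bit 2 = hop latency, bit 3 = queue occupancy), but treat them, and the assumption of one 4-byte word per metadata field, as illustrative rather than authoritative:

```python
# Sketch of INT transit-hop processing: read the instruction bitmap,
# push this node's metadata on top of the INT stack, and update the INT
# length (in 4-byte words) and the transport payload length (in bytes).

BIT_SWITCH_ID, BIT_PORT_IDS, BIT_HOP_LATENCY, BIT_QUEUE = 0, 1, 2, 3

def transit_process(int_hdr, node_state):
    """int_hdr: {'bitmap': int, 'int_len': words, 'payload_len': bytes,
    'stack': list}. Each metadata field is modeled as one 4-byte word."""
    record = []
    if int_hdr["bitmap"] & (1 << BIT_SWITCH_ID):
        record.append(("switch_id", node_state["switch_id"]))
    if int_hdr["bitmap"] & (1 << BIT_HOP_LATENCY):
        record.append(("hop_latency", node_state["hop_latency"]))
    if int_hdr["bitmap"] & (1 << BIT_QUEUE):
        record.append(("queue_depth", node_state["queue_depth"]))
    int_hdr["stack"].insert(0, record)         # newest hop on top of the stack
    int_hdr["int_len"] += len(record)          # in 4-byte words
    int_hdr["payload_len"] += 4 * len(record)  # transport payload bytes
    return int_hdr

hdr = {"bitmap": (1 << BIT_SWITCH_ID) | (1 << BIT_HOP_LATENCY),
       "int_len": 3, "payload_len": 112, "stack": []}
hdr = transit_process(hdr, {"switch_id": 7, "hop_latency": 420, "queue_depth": 0})
print(hdr["int_len"], hdr["payload_len"])  # 5 120 -- two new 4-byte words
```

A conformance test then amounts to checking exactly these invariants on captured packets: the new record matches the bitmap, and both length fields grew by the size of that record.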

  • Measure the impact of INT processing over DUT’s throughput

As an INT transit node, the DUT must do the additional work of inserting INT information into each data packet at a certain rate. This is bound to have some processing overhead that may impact the overall network throughput. This impact can be assessed using the IxNetwork INT solution. To test this use case, enable INT on the DUT. Send traffic with the INT instruction mapped for the desired metadata insertion, like hop latency, queue occupancy, or ingress/egress port ID (refer to Figure 4), and measure traffic throughput and latency/jitter. Then repeat the test with INT disabled on the DUT: send the same data traffic without the INT information and measure throughput and latency/jitter again. Comparing the results with and without INT headers gives the actual throughput impact of INT processing on the DUT.
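Beyond processing overhead on the DUT, INT also consumes line-rate capacity simply by enlarging each frame. A back-of-the-envelope sketch of that effect, using illustrative sizes (a 12-byte INT header and one 4-byte word per metadata field per hop):

```python
# Sketch: at a fixed line rate, every INT byte added to a frame reduces
# the goodput available to the original payload. Sizes are illustrative.

def int_overhead_bytes(hops, fields_per_hop, header_bytes=12, word_bytes=4):
    """Total INT bytes carried by one packet after `hops` transit nodes."""
    return header_bytes + hops * fields_per_hop * word_bytes

def goodput_ratio(frame_bytes, hops, fields_per_hop):
    """Fraction of line rate left for the original frame once INT is added."""
    return frame_bytes / (frame_bytes + int_overhead_bytes(hops, fields_per_hop))

# 512-byte frames, 5 transit hops each recording 3 metadata fields:
print(int_overhead_bytes(5, 3))            # 72 bytes of INT per packet
print(round(goodput_ratio(512, 5, 3), 3))  # 0.877 -- ~12% of capacity consumed
```

This bandwidth cost is separate from, and additive to, any per-packet processing latency the with/without comparison above reveals.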

  • Validate DUT’s behavior as INT sink

When the DUT is acting as the INT sink, it needs to strip the INT metadata from the received data packets and then forward them to the destination of the data traffic. To simulate this test, send INT traffic from the IxNetwork INT source port as shown in Figure 5. Then capture the traffic on the IxNetwork destination port on the other side of the DUT and verify that the received traffic does not have any INT header/metadata in it.


Figure 5: Validating in-band network telemetry sink
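The capture-side verification described above can be sketched as a scan over the captured packets. The marker-based detection below is purely illustrative; a real check would parse each capture according to the configured encapsulation (INT over UDP or Geneve):

```python
# Sketch: after the DUT (acting as INT sink) strips the telemetry, no
# packet on the destination port should still carry INT data. The marker
# is an assumed stand-in for real INT header parsing.

INT_MARKER = b"\x1f\x1f"  # illustrative probe marker, not a real INT field

def carries_int(packet_bytes):
    return INT_MARKER in packet_bytes

def verify_sink(captured_packets):
    """Return indices of packets that still carry INT data (expect [])."""
    return [i for i, p in enumerate(captured_packets) if carries_int(p)]

capture = [b"\x45\x00app-data", b"\x45\x00more-data"]  # INT stripped by DUT
print(verify_sink(capture))  # [] -- sink behavior validated
```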

Easy Configuration of Telemetry Header/Metadata in IxNetwork

IxNetwork provides full flexibility to craft telemetry packets using the visual editor with a built-in INT template. Users can easily set the instruction bitmap as needed for the test; for example, in Figure 6, Switch ID and Hop Latency are enabled. They can then optionally add multiple metadata stacks corresponding to multiple hops simulated by the Ixia INT ingress port. In Figure 6, three hops are added.


Figure 6: Configuring telemetry header/metadata in IxNetwork
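The instruction-bitmap setting shown in Figure 6, enabling Switch ID and Hop Latency, boils down to setting two bits. The bit positions below follow the INT v0.5 draft ordering but should be treated as illustrative:

```python
# Sketch: build an INT instruction bitmap from named fields.
# Bit positions are illustrative (drawn from the INT v0.5 draft ordering).

FIELDS = {"switch_id": 0, "port_ids": 1, "hop_latency": 2, "queue_occupancy": 3}

def build_bitmap(enabled):
    bitmap = 0
    for name in enabled:
        bitmap |= 1 << FIELDS[name]
    return bitmap

bm = build_bitmap(["switch_id", "hop_latency"])  # the Figure 6 selection
print(f"{bm:#06b}")  # 0b0101 -- bits 0 and 2 set
```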

IxNetwork also provides the capability to analyze packets at the bits/bytes level. This gives you visibility to check whether the DUT has inserted its own record according to the Instruction Bitmap set in the INT header.


Figure 7: With IxNetwork, you can analyze packets at the bits/bytes level


In-band network telemetry is a game-changer for understanding the treatment of packets as they traverse a network. The information recorded into the data packet itself helps troubleshoot potential problems even before they impact applications. But validation is critical to success in such systems to ensure proper configuration and high performance across the network.

You’ll need pre-deployment validation of a device implementing in-band network telemetry from both the conformance and performance perspectives. Ixia’s IxNetwork provides a feature-rich INT test solution that enables users to execute pre-deployment validation. Although specific use-case scenarios are depicted here for INT over UDP traffic, IxNetwork also supports similar use cases for INT over Geneve traffic. IxNetwork’s INT solution is designed based on the May 2018 INT draft specification.

For more details, see our IxNetwork Telemetry emulation demo: