Keith Bromley
Sr. Manager, Product Marketing at Ixia

How Network Performance Monitoring Is Really Being Used

August 30, 2019 by Keith Bromley

In this blog, we will explore how network performance monitoring (NPM) is really being used by IT personnel. If you are not familiar with NPM, it is a category of software-based monitoring tools that combine baseline analysis, flow data, and metrics pulled directly from your network devices to provide a complete picture of your network. Administrators need good monitoring data and network monitoring tools to help them discover, isolate, and solve problems as quickly as possible.

When it comes to NPM solutions, there are several use cases where this type of tool can be very beneficial. Here are some of the specific benefits that can be realized:

  • Expose critical network events – Total network visibility often requires capturing several types of data, such as client CPU utilization, data throughput, bandwidth consumed, and device memory consumed.
  • Quickly troubleshoot sporadic performance problems – NPM tools can navigate to the exact moment a problem happened and show a detailed packet-level view of before, during, and after the occurrence to aid in troubleshooting.
  • Provide geolocation of issues – Knowing that there is a problem is one part of the solution; the next is knowing where the problem is occurring. The location of anomaly-driven data flows allows NPM tools to quickly isolate potential problem sites.

This podcast also discusses the benefits of NPM solutions.

So, while there are several uses for NPM, how is it actually being used? Shamus McGillicuddy from Enterprise Management Associates released a study (Network Performance Management for Today’s Digital Enterprise) in May 2019 that answers this specific question.

The following list shows the most important NPM use cases and how often they are deployed:

  • Security monitoring and incident response – (44%)
  • Network and capacity planning – (38%)
  • Network monitoring and health and performance of the network – (36%)
  • Cloud application assessments – (29%)
  • Troubleshooting – (28%)

Security monitoring and incident response is the clear winner. With the plethora of network security attacks happening daily, this is not surprising. Capacity planning and health monitoring form a second tier of use cases: enterprises cited each of those about a third of the time, versus nearly the 50% mark for security monitoring. According to the EMA research, approximately 80% of respondents felt they were either successful or very successful in their current NPM practices across the five use case categories.

The EMA report also looked into what data is most valuable to NPM tools. Here are those results:

  • Management system APIs (events and time series data) – (41%)
  • Network flows (NetFlow, sFlow, IPFIX) – (29%)
  • Network device metrics (SNMP) – (27%)
  • Network packets – (20%)
  • Synthetic traffic – (20%)
  • Network tests (ping, traceroute) – (19%)
  • Log files (14%)

The largest category is data from management system APIs. While this might not be obvious at first, the network data you are interested in may not be readily available from the equipment itself. A few years back, much of this data was buried: it was deemed complicated and was accessible only to equipment vendors. With APIs, network engineers can now reach this data more readily and customize the output to exactly what they are interested in.
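As a rough illustration of what consuming such an API looks like, the sketch below pulls one metric's time-series points out of a JSON response. The endpoint shape, metric names, and payload structure here are hypothetical, not any particular vendor's API:

```python
import json

def extract_series(payload: str, metric: str):
    """Pull one metric's time-series points out of a (hypothetical)
    management-API JSON response shaped like
    {"series": [{"metric": "...", "points": [[ts, value], ...]}, ...]}."""
    doc = json.loads(payload)
    for series in doc.get("series", []):
        if series.get("metric") == metric:
            return series.get("points", [])
    return []

# Example response such an API might return (illustrative only):
sample = json.dumps({
    "series": [
        {"metric": "cpu_util", "points": [[1000, 12.5], [1060, 14.0]]},
        {"metric": "if_throughput_bps", "points": [[1000, 8.1e6]]},
    ]
})
print(extract_series(sample, "cpu_util"))  # → [[1000, 12.5], [1060, 14.0]]
```

The customization the blog mentions amounts to exactly this kind of selection: the engineer asks the API only for the series they care about instead of scraping everything the device exposes.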

The next group of data types is good for a quick snapshot of performance. NetFlow data, for instance, is well suited to fast, macroscopic views across the network, while SNMP is used to verify that network devices are working properly and for capacity planning purposes.
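The capacity-planning side of SNMP usually boils down to simple arithmetic on interface counters. This is a minimal sketch, assuming two polls of a standard ifInOctets/ifOutOctets counter (the sample values are illustrative):

```python
def link_utilization(octets_t0: int, octets_t1: int,
                     interval_s: float, if_speed_bps: int) -> float:
    """Percent link utilization from two samples of a 32-bit SNMP octet
    counter polled interval_s seconds apart; handles counter wraparound."""
    delta = (octets_t1 - octets_t0) % 2**32   # modulo absorbs a wrap
    bits = delta * 8                          # octets -> bits
    return 100.0 * bits / (interval_s * if_speed_bps)

# Two polls 60 s apart on a 100 Mbit/s link:
print(round(link_utilization(1_000_000, 76_000_000, 60, 100_000_000), 1))  # → 10.0
```

Trending that percentage over weeks is the "capacity planning" use; a single high reading is the "quick snapshot" use.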

Packet data is the buried gem on this list. It gives you the most complete view of the traffic on your network. However, capturing it takes some time and setup (if you are not already collecting it), so it is understandable that NPM engineers use it less often.
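Once captured, packet data typically lands in pcap files, which any tool can read because the format is fixed. As a sketch of that first step, the following parses the 24-byte global header of a classic libpcap file (the field layout and magic numbers are from the published pcap format; the sample header bytes are constructed for illustration):

```python
import struct

def parse_pcap_header(header: bytes):
    """Parse the 24-byte global header of a classic pcap file and return
    ((major, minor), snaplen, link_type). Raises ValueError on bad magic."""
    if len(header) < 24:
        raise ValueError("truncated pcap header")
    magic = struct.unpack("<I", header[:4])[0]
    if magic == 0xA1B2C3D4:        # written little-endian
        endian = "<"
    elif magic == 0xD4C3B2A1:      # written big-endian
        endian = ">"
    else:
        raise ValueError("not a classic pcap file")
    major, minor, _tz, _sig, snaplen, link = struct.unpack(
        endian + "HHiIII", header[4:24])
    return (major, minor), snaplen, link

# A minimal header a capture tool might write (LINKTYPE_ETHERNET = 1):
hdr = struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, 65535, 1)
print(parse_pcap_header(hdr))  # → ((2, 4), 65535, 1)
```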

Rounding out the list, synthetic traffic is useful for load testing and on-demand checks. This data can proactively surface problems or load test the performance of a new software build. Ping and traceroute are used for basic connectivity; they essentially give you a "thumbs up or down" that equipment and routes are working. Log files work well for smaller, less distributed networks, but reviewing them becomes very time consuming in a medium to large enterprise.
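That "thumbs up or down" is usually automated by running ping and scraping its summary lines. A minimal sketch, assuming iputils-style output (the regexes and sample output below reflect that common format, not every ping variant):

```python
import re

def parse_ping_summary(output: str):
    """Extract (packet_loss_pct, avg_rtt_ms) from iputils-style ping
    output; returns None if the summary lines are not found."""
    loss = re.search(r"(\d+(?:\.\d+)?)% packet loss", output)
    rtt = re.search(r"= [\d.]+/([\d.]+)/", output)  # min/avg/max/mdev line
    if not loss or not rtt:
        return None
    return float(loss.group(1)), float(rtt.group(1))

sample = (
    "4 packets transmitted, 4 received, 0% packet loss, time 3004ms\n"
    "rtt min/avg/max/mdev = 11.2/12.8/15.1/1.4 ms\n"
)
print(parse_ping_summary(sample))  # → (0.0, 12.8)
```

Nonzero loss or a missing summary is the "thumbs down" that triggers a deeper look with the richer data sources above.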

The last fundamental question is, “Once the engineers have the NPM data, what do they do with it?” The following data from the EMA report shows just exactly what happens:

  • We correlate these insights at a service management level – 22%
  • Our NPM tools correlate these insights intrinsically – 21%
  • The NPM tools integrate with 3rd party tools (APM, experience management, SIEM) to provide insights – 20%
  • We correlate these through direct integration with AIOps or advanced analytics platform – 13%
  • Ad hoc - we correlate these insights manually – 12%
  • We correlate these insights in a data lake with standalone analytics tools – 12%

The results from three of the top four responses show that IT teams are typically integrating or correlating the information with a management or APM solution. The one exception is the 21% of respondents deploying an NPM tool that generates network insights on its own, in a stand-alone manner.

For further information about NPM, watch this podcast. Information on Ixia's network performance, network security, and network visibility solutions, along with how they can help generate the insight needed for your business, is available on the Ixia network solutions webpage.

References

  1. EMA Research Report: Network Performance Management for Today's Digital Enterprise, Shamus McGillicuddy, Enterprise Management Associates, May 2019.