Keith Bromley
Sr. Manager, Product Marketing at Ixia

Blind Spots: The ABCs of Network Visibility

August 23, 2016 by Keith Bromley

What do people mean when they talk about network blind spots? And are these really that important? The answer to the second question is an overwhelming yes. Blind spots directly correlate to network problems and outages, increased network security risk, and potential regulatory compliance issues. As for the first question, blind spots are the hidden causes of a lack of network visibility: places where traffic and activity go unobserved.

Common Blind Spot Examples

Let’s look at some examples of blind spots. Here’s a list of examples I’ve compiled (although it’s not all-inclusive). While several of these may not apply to your organization, some probably do, right? Scan the list and see if anything matches your network.

  • Silo IT organizations – Security, Network IT, and Compliance groups don’t often talk or share data and can form silos in an enterprise. This can lead to inconsistent data and compliance policies, SPAN port contention issues, improper SPAN port programming that results in incorrect or missing data captures, and plain old data conflicts that arise from collecting data at the wrong places.
  • Use of virtualization technology – According to Gartner Research, up to 80% of virtualized data center traffic is east-west (i.e. inter- and intra-virtual machine traffic), so it never reaches the top of the rack where it can be monitored by traditional tap and SPAN technology. While virtual tap technology exists to address this gap, the Ixia 2015 virtualization study found that 51% of IT personnel don’t know about it. For instance, a healthcare insurance provider working with Ixia had zero visibility into 100+ virtual hosts; this was solved immediately once they installed the Ixia Phantom vTap.
  • SPAN port overloading – An Ixia case study describes the SPAN port contention problems a national pharmacy ran into, along with SPAN ports that were dropping packets and losing information under a data overload condition. SPAN ports can, and will, drop monitoring data if the switch CPU is overloaded. Beyond port contention, the case study also shows that the customer had problems splitting and filtering the data from its SPAN ports. For more information on general SPAN port visibility issues, see this article.
  • Rogue IT – When users add their own Ethernet switches, create access points (e.g. an iPhone hotspot), use offsite data storage (like Box), or attach anything else to the network, they often subvert company security policies, which opens the door to security, compliance, and liability issues. IT rarely knows anything about these devices, especially since they can appear sporadically, like Wi-Fi hot spots.
  • Mergers and acquisitions – Blending disparate equipment and systems often causes interoperability issues, which add to system/application downtime, lead to system capabilities being turned off to improve network performance, and result in the scaling back or elimination of network and application monitoring while extensive network re-architecting takes place. The result is very limited visibility (i.e. blind spots) because no one really knows what is happening.
  • Addition of new network equipment – When new equipment is added, there is often no record as to who owns it and what it does. The equipment can get “lost” and forgotten about, especially if IT key personnel leave the company or change departments. “Lost” equipment that is still functioning in the network can be a source of security vulnerabilities due to lack of proper software updates and unknown user access privileges.
  • New equipment complexity – New equipment is often complex to understand: what it does and how best to use it. For data networks, complexity never stops growing. David Cappuccio of Gartner characterized the rate of increase at a Gartner Symposium in late 2012: for every 25% increase in the functionality of a system, there is a 100% increase in complexity. If IT doesn’t have time to research new equipment and how to program it properly, they often stop using it and eventually forget about it. The equipment can remain running in the network even though it isn’t being utilized.
  • Network complexity – When new links and office locations are added, they can be set up with different VLANs, subnets, etc. to segment them geographically. These segmented networks often have separate equipment for remote logon, authentication, etc., which makes it hard to track what is happening at those locations.
  • Inconsistent monitoring/data collection policies – This can occur from multiple sources but one of the common effects is that virtual monitoring equipment policies and physical equipment monitoring policies are often different, which can cause compliance data mismatch, requisite data that is simply not captured, and security issues. See this case study for an example.
  • Network planning issues – In many cases, the requisite data just doesn’t exist at all. This is a common experience for organizations with external customers. For instance, service providers (especially wireless service providers) need good customer data (service holes, malfunctioning radios, poor coverage, and even customer dissatisfaction) to properly plan their networks and deliver a better quality of experience.
  • Network upgrades that are postponed – Postponing upgrades can mean continuing to run old, antiquated equipment that has limited use on a higher-speed network. Network performance then slows, which affects IT’s ability to solve problems as fast as required. More information is available in this ebook on upgrade issues.
  • Network upgrades that are implemented – Even necessary upgrades can create blind spots. For example, when new higher-speed equipment is added, it may overload various components of the network, especially monitoring and security tools, with too much data. This is especially true if monitoring and performance tools weren’t upgraded at the same time. These tools can become overloaded and lose (i.e. drop) data or overwrite buffers/logs faster than expected. In addition, tool dashboards are often limited in what they can see, which allows the blind spot to remain hidden. More information is available in this ebook on upgrade issues, and common vulnerabilities can be found in the CVE database.
  • Addition of new applications – A common blind spot for hospitals is access to application data and application performance trending. In this case study, the customer was using the EpicCare Ambulatory Electronic Medical Record (EMR) application from Epic but was having problems correlating all of the information from their different systems.
  • Security and network audits are postponed or rarely occur – Skipping audits often creates a safe and cozy harbor for various threats and malware on your network. It’s hard to say what will be hidden, but whatever it is, I’m sure you don’t want it. See this resource for more information.
  • Anomalies – Unexplained network events happen and are often addressed by IT, but if they are spurious and random in nature and go undiagnosed, they can turn into larger problems later on. Ixia has several customers who have eliminated their network anomalies and also reduced their mean time to repair (MTTR) by up to 80%.
  • Incorrect equipment programming rules – An example of this is firewall programming, which is rules-based and typically processed as an ordered access list. When traffic matches a rule, it is immediately forwarded, even if later rules exist that would handle the traffic more specifically. This can create gaps in network security because the packet is routed before the appropriate security tool ever sees it.
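The first-match behavior described in that last bullet can be sketched in a few lines. This is a minimal illustration, not any vendor's actual rule syntax; the rule fields and packet shape are hypothetical:

```python
def match(rule, packet):
    """A rule matches if every field it specifies equals the packet's value."""
    return all(packet.get(k) == v for k, v in rule["criteria"].items())

def evaluate(rules, packet):
    """Return the action of the FIRST matching rule; later rules never run."""
    for rule in rules:
        if match(rule, packet):
            return rule["action"]
    return "deny"  # common implicit default when nothing matches

rules = [
    {"criteria": {"dst_port": 443}, "action": "permit"},
    # This stricter rule is never reached for port-443 traffic, because the
    # broader rule above it always matches first -- the blind spot described.
    {"criteria": {"dst_port": 443, "src_ip": "10.0.0.5"}, "action": "inspect"},
]

print(evaluate(rules, {"dst_port": 443, "src_ip": "10.0.0.5"}))  # permit
```

Rule order, not rule specificity, decides the outcome, which is why a misordered access list can quietly route traffic around an inspection tool.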

How To Eliminate Blind Spots

So, when it comes to your specific network, where are your potential blind spots? If some of the blind spots listed above apply to you, you typically have two ways to respond: proactively or reactively. The reactive approach is straightforward: just wait until something happens, then go fix it. While it’s the simplest approach, it’s also usually the costliest, both in terms of locating exactly what issue the blind spot caused (which usually increases your mean time to repair) and because it often necessitates purchasing and implementing expensive long-term fixes or multiple “Band-Aid” fixes that never really fix the problem.

If you want to follow a proactive approach, the best solution is to design a visibility architecture. This involves more upfront cost and planning but will normally pay for itself very quickly. The visibility architecture is a plan you create for organizing exactly how you want your monitoring tools to connect to the network. This involves how they connect (taps or SPAN ports), where they connect (edge, core, which branches, etc.), and how you groom the monitoring data before you send the stream to a tool (packet filtering, application filtering, deduplication, packet trimming, decryption, aggregation, etc.).  If you want to learn more about designing a visibility architecture, check out this whitepaper.
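The data-grooming step mentioned above can be illustrated with a small sketch. This shows two common operations a packet broker might apply before traffic reaches a tool, filtering and deduplication; the packet representation and field names are illustrative only, not a real product's API:

```python
def groom(packets, wanted_port):
    """Forward only packets for wanted_port, dropping duplicate copies."""
    seen = set()
    for pkt in packets:
        if pkt["dst_port"] != wanted_port:       # packet filtering
            continue
        key = (pkt["src"], pkt["dst"], pkt["seq"])
        if key in seen:                          # deduplication, e.g. the same
            continue                             # packet captured on two SPAN
        seen.add(key)                            # sessions
        yield pkt

packets = [
    {"src": "a", "dst": "b", "seq": 1, "dst_port": 80},
    {"src": "a", "dst": "b", "seq": 1, "dst_port": 80},   # duplicate copy
    {"src": "a", "dst": "c", "seq": 2, "dst_port": 443},  # wrong port
]
print(list(groom(packets, 80)))  # only one packet survives
```

Grooming like this matters because monitoring tools are often licensed and sized by throughput; stripping traffic the tool doesn't need keeps it from being overloaded.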

To end blind spots in your network, you need to be able to see everything. Unknown issues and “soon to be problems” exist in every network to some degree, and a visibility architecture is how you root them out. It’s not hard or complicated, but it does require some planning. The sooner you take this step, the faster you can integrate a visibility architecture with your IT network, and the sooner you can realize cost and productivity savings.

Considerations When Researching Visibility Architectures

When considering visibility architectures, there are several items to investigate. Here is a short list of common items:

Flexibility, i.e. choice – You will want, and need, options. This includes the flexibility to deploy inline and out-of-band visibility solutions. It also includes the ability to monitor your physical and virtual data center traffic. Application Intelligence is another area to look for. While you may not want to engage in all of these activities right away, you should look for a solution that allows you to add the pieces you want, when you want, without a forklift upgrade.

Ease of Use – This is a critical component that will heavily influence your total cost of ownership (TCO). Look for a solution with a drag-and-drop GUI; configuring filters through a command line interface can take 10 times (or more) longer. The management system should also be able to handle everything, from global element management to policy and configuration management to data center automation and orchestration management. Flexible management of network components will be a determining factor in how well your network scales.

Technology – A third consideration is the technology itself. Buyer beware applies to this marketplace, just as it does to others you are used to. While vendor products may sound the same, they usually aren’t. In general, you should strongly consider purchasing network packet brokers (NPBs) that run at line rate under all conditions; only a very few NPBs do this, and anything less adds delay to your monitoring effort. For inline solutions, line-rate performance is absolutely critical, and you will also want failover technology that is as fast as possible.

Data Access – Data access is another area of concern. Consider using taps instead of SPAN ports for your data access technology. Taps are superior to SPANs for several reasons (see this analysis). One key difference is that SPANs provide summarized data rather than a complete copy of all traffic, so they can be missing key data you need for proper problem resolution. Another area to investigate is whether your tools need packet data or NetFlow data. One last thing to consider is whether your tools need additional data from application intelligence functions to further improve their performance.
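The packet-data-versus-NetFlow question comes down to granularity: flow records summarize many packets into one entry keyed by the classic 5-tuple, trading per-packet detail for volume. A rough sketch of that summarization, with illustrative field names rather than an actual NetFlow implementation:

```python
from collections import defaultdict

def to_flows(packets):
    """Collapse packets into flow records keyed by the 5-tuple."""
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for p in packets:
        key = (p["src"], p["dst"], p["sport"], p["dport"], p["proto"])
        flows[key]["packets"] += 1
        flows[key]["bytes"] += p["len"]   # payload detail is lost here
    return dict(flows)

packets = [
    {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 1234, "dport": 443,
     "proto": "tcp", "len": 100},
    {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 1234, "dport": 443,
     "proto": "tcp", "len": 200},
]
flows = to_flows(packets)  # two packets collapse into one flow record
```

A tool that only needs traffic volumes and conversation pairs is well served by flow records; a tool doing deep inspection or forensics needs the full packets, which is why the question belongs in your data-access planning.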

More Information on Visibility Architectures

When all components of a visibility architecture are combined, they eliminate the blind spots within your network that are harboring potential application performance and security issues. So what does a successful visibility architecture look like? Read this blog and/or check out the material available here.

Ixia’s entire series of blogs on visibility is available now in the e-book Visibility Architectures: The ABCs of Network Visibility.