Jeff Harris
Chief Marketing Officer

Virtualized monitoring: the public cloud dilemma

October 4, 2016

Public clouds seem to be becoming the new normal - at least, the conversations about them are. Cisco predicts that by 2019, over half (56%) of all cloud workloads will run in the public cloud, which amounts to 44% year-on-year growth. That is a lot, so perhaps we are beyond the conversation stage.

It is easy to see why public clouds' popularity is growing so dramatically: public cloud environments are cheaper than running your own, and they are both elastic and scalable. Like everything else, though, you have to give something up to get something, and that trade-off has mostly been in visibility. Security in the cloud has so far been largely a matter of trust, because public clouds are somewhat opaque. Yet businesses need to monitor their public virtualized environments with the same robustness and vigor they apply to their on-premises and private cloud environments.

The trouble is, most organizations do not. When we surveyed a range of businesses on their virtualization practices, we found that only 37% monitored their virtualized environments to the same standard as their physical ones. This is particularly problematic given that 67% of our respondents deployed business-critical applications in their public clouds - their day-to-day operations depend on an environment they cannot see into. There is a huge visibility gap when it comes to public clouds, and businesses desperately need to close it.

Fortunately, just four simple principles drive public cloud visibility. By putting these processes in place before transitioning to a public cloud, you can ensure straightforward yet effective security monitoring in your newly virtualized environment.

  1. Copy. Extract the traffic of interest from the virtual machines in your datacenter and make a copy of it for more detailed inspection. Traffic copying is simple to achieve with a virtual TAP (vTAP) or a packet capture agent.
  2. Filter. Control the volume of copied traffic that reaches your monitoring tools with a Network Packet Broker (NPB). This is where the distinction between east-west traffic (between machines in your own data center) and north-south traffic (entering and exiting your data center) matters: in a virtualized environment, east-west traffic makes up a far larger proportion of the total, so it must be filtered to stop the monitoring system from being overwhelmed.
  3. Manipulate and groom. This is all about breaking your data packets into manageable chunks, which in turn makes your security tools far more efficient. Packet manipulation, packet grooming, and brokering are all useful processes here.
  4. Analyze. Finally, deploy intelligent analysis tools and pass your traffic (now carefully broken and organized into manageable packets) through them. You may be able to do this on the same host network; with particularly large traffic volumes, the traffic may need to be analyzed externally, in which case a tunneling protocol can manage its exit from the host machine. The point of this stage is to inspect each data packet for threats and identify any key weaknesses to address before deploying your public cloud.
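To make the flow of the four steps concrete, here is a minimal, purely illustrative Python sketch. Packets are modeled as plain dictionaries, and every function name, field, and threshold below is a hypothetical stand-in: in a real deployment these roles are played by a vTAP, an NPB, and dedicated analysis tools, not application code.

```python
# Illustrative model of the copy -> filter -> groom -> analyze pipeline.
# All names and parameters are hypothetical; real deployments use
# dedicated appliances (vTAP, NPB, security tools), not Python.

def copy_traffic(packets):
    # 1. Copy: duplicate the traffic of interest (as a vTAP would),
    #    leaving the original stream untouched.
    return [dict(p) for p in packets]

def filter_traffic(packets, keep_ratio=0.1):
    # 2. Filter: east-west traffic dominates in virtualized environments,
    #    so sample it down while keeping all north-south traffic.
    north_south = [p for p in packets if p["direction"] == "north-south"]
    east_west = [p for p in packets if p["direction"] == "east-west"]
    step = max(1, int(1 / keep_ratio))
    return north_south + east_west[::step]

def groom(packets, chunk_size=2):
    # 3. Manipulate and groom: break the stream into manageable chunks
    #    so downstream tools work more efficiently.
    return [packets[i:i + chunk_size]
            for i in range(0, len(packets), chunk_size)]

def analyze(chunks, signatures=("exploit", "malware")):
    # 4. Analyze: inspect each packet for known-bad payloads.
    alerts = []
    for chunk in chunks:
        for p in chunk:
            if any(sig in p["payload"] for sig in signatures):
                alerts.append(p)
    return alerts

# Example: 20 routine east-west packets plus one suspicious
# north-south packet flowing through all four stages.
traffic = (
    [{"direction": "east-west", "payload": "normal"} for _ in range(20)]
    + [{"direction": "north-south", "payload": "malware dropper"}]
)
alerts = analyze(groom(filter_traffic(copy_traffic(traffic))))
```

The ordering is the point of the sketch: copying isolates monitoring from production, filtering before grooming keeps east-west volume from swamping the later stages, and analysis only ever sees traffic that has already been reduced and organized.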

Public cloud environments are the future of enterprise IT infrastructure. However, to realize their benefits fully (and to avoid introducing new risks), it is vital that you apply the same monitoring and visibility standards to your public cloud as to your physical network. Make sure you are scaling for your future architecture. Contact us today to discuss your cloud monitoring needs.