
Network Visibility in Hybrid Cloud

September 25, 2018 by LUIS CAZACU

What's a hybrid cloud?

More and more companies are moving to the public cloud or are at least investigating the pros and cons of doing so. For start-ups it's relatively easy, since they can start everything fresh in the cloud, but for companies that already run their own on-premises data center, things are not that simple. Most of them end up with a hybrid cloud: a mix of on-premises, private cloud and public cloud resources. I won't go into details on how a hybrid cloud works or whether you should go that route, but in short this type of architecture gives businesses greater flexibility and agility, cost-effective temporary scaling and more data deployment options, while letting them make these changes incrementally. Despite its benefits, hybrid cloud computing also presents technical challenges. The one we will focus on in this blog is how to avoid losing network visibility over the machines you move from on-premises to the cloud.

Classic network visibility deployment

Let's take a simple data center network visibility deployment as an example; we'll build on it throughout this post to make things easier to follow.

[Figure: common network visibility deployment]

So let's assume these are your servers and network infrastructure. You have spent a lot of time adding network visibility in your data center: you monitor application performance, run an intrusion detection system and have a packet recorder you can use to gather more insight in case one of the other tools raises an alert. You tap the traffic flowing through your switches, either with a SPAN port or a network tap, and redirect it to a network packet broker (NPB) that then sends the appropriate traffic to the right tool(s) for analysis.
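
If you've never set up port mirroring, here is a software stand-in for what a SPAN port does: a Linux box copying everything it sees on one interface to another with tc. The interface names are assumptions, and on a real switch you would use the vendor's SPAN/mirror feature instead:

 # Linux stand-in for a SPAN port: copy all packets arriving on eth0 to eth1,
 # where the NPB (or a capture tool) is attached
 sudo tc qdisc add dev eth0 handle ffff: ingress
 sudo tc filter add dev eth0 parent ffff: protocol all u32 match u32 0 0 \
     action mirred egress mirror dev eth1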

Now you want to extend, and you decide the way to go is to add a few more servers in the public cloud. What can you do so you won't lose, or have to redo, all the monitoring you've already put in place in your data center?

Network visibility in a hybrid cloud

Let's say you have added your servers in the public cloud, and your current challenge is how to keep leveraging the application and security monitoring you put in place. Even if you add a load balancer that distributes traffic between the on-premises servers and those in the public cloud, and decide to just tap the traffic reaching the load balancer, you will still have no east-west visibility for the servers in the cloud. Worse, the servers in the public cloud could serve a distinct scope/application from those on-premises, so redirecting all their traffic through your data center just adds burden and cost. Ideally you want to tap the traffic that reaches your servers in the public cloud and send it back to your data center for analysis.

Another option is to duplicate the monitoring tools in the public cloud, next to the servers, but such a deployment only makes sense if your public cloud footprint is big enough (you don't want to spend more on your monitoring infrastructure than you pay for your application servers). In the public cloud you usually pay for the network traffic that exits your virtual network, so this deployment becomes mandatory if sending the tapped traffic out would cost more than hosting a set of monitoring tools in the public cloud as well (assuming they are cloud-ready).
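
To put rough numbers on it (illustrative figures only, not any provider's actual pricing): at around $0.09 per GB of egress, sending 10 TB of tapped traffic per month back on-premises comes to roughly $900 a month, so if a couple of cloud instances running your monitoring tools would cost less than that, duplicating the tools in the cloud is the cheaper option.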

For this blog we assume the first option makes more sense for you, so we will focus on that: sending the tapped traffic back to your data center. For details on a full public cloud network visibility solution, check this blog.

Here is how the deployment should look:

[Figure: hybrid cloud visibility deployment]

So you need to tap the traffic on the VMs in the cloud and forward it back to your NPB. Since you do not have access to the underlying infrastructure in the public cloud, you install an agent inside the VMs that does the tapping at the OS level. The agent must also be able to send the traffic back to your data center.
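
To make the idea concrete, here is a crude do-it-yourself version of what such an agent automates: an OS-level capture streamed out of the VM. The interface name and collector hostname are assumptions, and a real agent adds tunneling, NAT traversal and filtering on top of this:

 # crude illustration only: stream a live pcap of everything eth0 sees
 # to a collector machine back in the data center
 sudo tcpdump -i eth0 -U -w - | ssh tap-collector.example.com 'cat > /tmp/vm1.pcap'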


Network visibility with CloudLens Public

CloudLens Public offers the agent you need: it taps the network from inside the VM and sends the traffic back to your NPB in the data center. CloudLens uses a Docker-based agent for packet capture, installed on each VM you want to monitor. One thing a bit different from other solutions: the CloudLens agent is also installed on the receiving end. If you have access to the NPB, you can install the agent directly on it; if not, you need another machine to install the receiving agent on, and you forward the traffic from there to the NPB.
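
Installation boils down to running the agent container on each VM. The exact invocation (image name, agent arguments, project key) comes from your CloudLens portal, so treat everything below as a placeholder sketch rather than the real command:

 # PLACEHOLDER: copy the real docker run command from your CloudLens project
 # page; the image name and agent arguments below are illustrative only.
 # Host networking + privileged mode let the container see the VM's NICs.
 sudo docker run -d --restart=always --net=host --privileged \
     --name cloudlens-agent \
     <cloudlens-agent-image> --apikey <YOUR_PROJECT_KEY>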

Behind the scenes, the capturing agent picks up network packets as they enter and leave the VM and packages them up in a tunnel. The CloudLens agents on both ends of this tunnel work together to get through firewalls or NAT, so CloudLens is very flexible about the network architecture. On the receiving end, the CloudLens agent creates a virtual network interface, named `cloudlens0` by default; it takes care of receiving the incoming packets, verifying they're from a trusted source, and aggregating them onto that virtual interface. If you installed the agent on the NPB, the `cloudlens0` interface behaves as if it were connected to a network tap, exactly like in a data center. If you don't have access to the NPB (the usual case), you need a Linux machine to receive the tapped traffic and forward it to the NPB, for example through a GRE tunnel (most NPBs offer the capability to terminate this kind of tunnel). So you first create a GRE tunnel called `cloudlens0` on the Linux machine (details here):

 # create a GRE tunnel interface named cloudlens0, pointing at the NPB
 sudo ip tunnel add cloudlens0 mode gre remote <NPB_IP> local <LOCAL_IP> ttl 255
 # bring the tunnel interface up
 sudo ip link set cloudlens0 up
 # give the local tunnel endpoint an address
 sudo ip addr add 10.10.10.1/24 dev cloudlens0

and after that you configure the receiving end of the tunnel on the NPB. The last step is to install the CloudLens agent on the Linux box. It's important to do the steps in this order, so that CloudLens uses the `cloudlens0` interface created by the tunnel instead of creating one of its own. You should also set a lower MTU on the `cloudlens0` interface than the one on your physical interface, to account for the 24-byte GRE tunnel overhead.
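
For example, a minimal sketch assuming a standard 1500-byte MTU on the physical interface:

 # 1500 (assumed physical MTU) - 24 (GRE overhead) = 1476; adjust to your MTU
 sudo ip link set cloudlens0 mtu 1476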

The CloudLens configuration is managed through the CloudLens backend service. All of the installed agents phone home to this service and report metadata about the host where they live. You can search through this list based on that information and save searches as groups. All of the configuration in CloudLens is based on these groups.

[Figure: CloudLens search screen]

So you search for the instances you want to monitor and save that search as a group, then search again for your receiving agent and save that as a tool group. Draw a line between them, and that's it: you have packet capture!
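
If you want to sanity-check the setup, a quick capture on the receiving interface should show the mirrored packets arriving:

 # on the receiving Linux box: confirm tapped traffic shows up on cloudlens0
 sudo tcpdump -i cloudlens0 -nn -c 10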

[Figure: CloudLens groups]

The great thing is that if you add more VMs, because you scale out or for any other reason, they'll automatically be added to your monitoring as long as they match your saved search. Also, because the capturing agent sits right in the VM you're monitoring, it's not limited to seeing only north-south traffic: your monitoring tools will see all of your traffic, even east-west traffic within the virtual network, as well as traffic going from your VMs back to the data center or out to the outside world.

That's it, you now have network visibility in your hybrid cloud deployment with minimal pain and effort.
