Containers, Docker and Kubernetes… Do I need network visibility between containers?
A long time ago, in a galaxy not so far away, companies used to run applications on physical servers. With this model, it was not easy to manage hardware resources allocated to an application unless you only ran one application per server, something that was not always practical and definitely was not particularly optimized or efficient. You could run multiple apps on the same server, but any of those apps could easily use more resources than planned and slow down the others, or should things go really poorly, bring down the whole server and the other apps with it. Not good!
Then came virtualization, altering the deal to allow you to run multiple virtual machines (VMs) on the same physical server, with each VM independent of the others. It is like having multiple servers or PCs, each with its own operating system and resources. An application running on VM1 is isolated and cannot interfere with the resources allocated to VM2. Multiple applications can run in a VM if needed, but more importantly, multiple VMs can run on the same physical server, thus optimizing the use of expensive physical resources.
Then containers came into the picture, altering the deal further. Actually, they had been around for a while, but it is with Docker and its ecosystem for container management that their popularity took off. That was an eternity ago… in 2013!
Containers are like smaller VMs: each has its own CPU and memory allocation and its own file system, but containers share the host operating system's kernel with one another. This results in a smaller footprint (remember, you are not storing another instance of the OS for each app/container), more independence from the underlying infrastructure, greater portability, and very fast startup. The same container can run in multiple environments, an elegant solution for a budget-constrained age.
So, containers are small entities, nimbler than VMs, that run applications or services; they can start quickly to cope with instant demand and scale at will. How are they managed and orchestrated? There are multiple solutions, including Kubernetes, Docker Swarm, and OpenShift, to list a few, but Kubernetes is leading the charge and spreading fast.
Another point to consider is that a single application is often deployed across multiple containers, and that a group of one or more tightly coupled containers is called a pod in Kubernetes terminology.
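To make the pod idea concrete, here is a minimal sketch of a multi-container pod manifest, expressed as a Python dict for illustration rather than the usual YAML; the pod name, container names, and images are hypothetical:

```python
# A minimal sketch of a Kubernetes Pod manifest, written as a Python
# dict for illustration. A pod groups one or more tightly coupled
# containers that are scheduled together and share one network
# namespace (and therefore one IP address).
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "web-app"},  # hypothetical pod name
    "spec": {
        "containers": [
            # Two cooperating containers in the same pod (hypothetical
            # images); they can reach each other over localhost.
            {"name": "frontend", "image": "nginx:1.25"},
            {"name": "api", "image": "example/api:1.0"},
        ]
    },
}

print(len(pod["spec"]["containers"]))  # → 2
```

Kubernetes schedules the pod as a single unit, which is why visibility solutions attach at the pod boundary rather than to individual containers.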
Now, back to the subject of this post: visibility between containers, and why you should care. You should care because, whatever the structure of your applications, a large monolithic block or a multitude of tiny components, offline analysis of network data is key to avoiding privacy breaches and compliance issues (HIPAA, etc.) and to supporting troubleshooting and QoS measurements. And in our collective experience, with network visibility there is no such thing as luck; it is all about packet-level monitoring, which is as crucial in virtualized environments as it is in physical ones. East-West, or lateral, traffic, about 80% of overall network traffic in a data center, may never leave the physical servers the VMs or containers are hosted on. It is nevertheless very important to monitor these “inter-application” communications. Running a containerized application does not make it safer, but it can make this traffic harder to see. Without inter-container, or inter-pod, visibility, your organization is at risk.
Pod Satellite/Sidecar vs Service Container
Mirroring inter-pod network traffic in a containerized environment can be done either by inserting a special container called a “pod satellite” inside the pod, or by deploying a service container on the host itself. The implementation will vary depending on the container orchestrator; here I am focusing on the most common, Kubernetes. In the first approach, the pod satellite, also referred to as a “sidecar”, can see all the ingress and egress traffic to and from the pod. In contrast, the service container sits next to the container pods and connects to the container network via a dedicated CNI (Container Network Interface) plug-in.
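As an illustration of the sidecar approach, the sketch below patches a pod spec to append a traffic-mirroring sidecar container. Everything here is an assumption for illustration: the helper function, the sidecar image name, and the capability settings are hypothetical, not any vendor's real API. A real deployment would use the vendor-supplied image and configuration.

```python
# Hedged sketch: add a hypothetical traffic-mirroring sidecar to a pod
# spec. Because containers in a pod share the network namespace, the
# sidecar can observe all of the pod's ingress and egress traffic.
import copy

def add_mirror_sidecar(pod_spec, sidecar_image="example/tap-sidecar:1.0"):
    """Return a copy of pod_spec with a mirroring sidecar appended.

    sidecar_image is a placeholder; the function does not modify the
    original spec.
    """
    patched = copy.deepcopy(pod_spec)
    patched["spec"]["containers"].append({
        "name": "traffic-mirror",  # hypothetical container name
        "image": sidecar_image,
        # Packet capture inside the shared network namespace typically
        # needs elevated network capabilities such as NET_ADMIN.
        "securityContext": {"capabilities": {"add": ["NET_ADMIN"]}},
    })
    return patched

# Hypothetical application pod before and after tapping.
app_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "billing"},
    "spec": {"containers": [{"name": "billing", "image": "example/billing:2.3"}]},
}

tapped = add_mirror_sidecar(app_pod)
print([c["name"] for c in tapped["spec"]["containers"]])
# → ['billing', 'traffic-mirror']
```

The key design point the sketch illustrates: the sidecar lives inside the pod spec itself, so it is created, scaled, and destroyed together with the pod it monitors, which is exactly the lifecycle coupling discussed below.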
Both options have their pros and cons; let me provide some highlights.
Scope & Flexibility: At first, the Service Container looks like the easier approach, with no need to bother installing and managing sidecars in the pods that require monitoring. Just install one Service Container on the host and let it do its job. Easy, right? Well, it is not that simple… First, you need a CNI plug-in for each network driver to be supported (Calico, Flannel, Cilium, Contiv, WeaveNet…). Then, resource allocation comes from the host pool and is accordingly less granular and less isolated than the resources of the sidecar, which are allocated from the tenant pool. A big plus of the sidecar is its independence from network drivers. This of course aids agility, and in dynamic environments that require on-demand/elastic provisioning, agility is key.
Maintenance and operations: The Service Container is heavier and requires downtime for updates; if it goes down, events will be missed. The sidecar is fully integrated with the container pod, follows the pod's very dynamic lifecycle, can be sized to fit the pod's requirements, and can support rolling updates of images, multi-versioning, flexible pod policies, and micro-segmentation.
Scalability, HA, Debug & Recovery: Any issue with the Service Container will affect all containers and VMs on the host, while the sidecar's lifecycle is tied to the pod: it is fully isolated and recoverable with the pod. Because the sidecar is highly distributed, it faces fewer resource constraints than the Service Container, which is subject to greater resource contention.
In case of an issue, the sidecar is easy to isolate. The Service Container, as a central entity, is more complex and will likely affect other containers on that host in the event of an issue. It can also create security exposure for the infrastructure.
While the Service Container can be appealing for greenfield deployments with lower requirements, you need to be fully aware of its implementation limitations. If you want a flexible and secure solution that can isolate proxy scenarios and provide full visibility of pod traffic, the pod satellite/sidecar is definitely the right option.
Ixia’s CloudLens offers full network visibility for containerized environments in private and public clouds. Contact us to request a demo on how to remove blind spots in your containerized environments.