
Kubernetes (post 4 in a series)

August 28, 2018 by Kris Raney

In my previous posts in this series, I laid out my plan to enable Threat Hunting in a scalable way for a cloud environment by integrating Bro IDS with CloudLens, hosted on Kubernetes, with Elasticsearch and Kibana as the user interface. I then discussed why Bro will serve as the intrusion detection system in my project, and how CloudLens provides the visibility into the cloud-hosted network.

In this post I'll give a high-level overview of Kubernetes. For those that might not be familiar, Kubernetes is a container-based cluster management solution. It was started by Google, drawing on their experience with their internal Borg cluster manager, and is now owned by the Cloud Native Computing Foundation, or CNCF.

This is a very basic overview, not a complete lesson on Kubernetes. It covers just the concepts relevant to this project.

Why Kubernetes

There are a few different reasons why I prefer Kubernetes as the hosting environment for this project.

At a technical level, I need my solution to be adaptable under dynamic conditions. I don't want to have to size everything for my worst-case conditions. It needs to be able to scale out to handle heavy load, and then scale back in when the load is light. I want to be able to manage it using automation; I don't want to have to manually tweak it as things change. And I want to keep the infrastructure as light as possible, investing most of my time and resources in the actual analysis I'm trying to do.

At a more practical level, I see Kubernetes as important because I want this solution to be applicable to a diverse, hybrid environment. I don't want to build something that locks me in to a particular provider. Kubernetes provides an abstraction layer, meaning I could shift from one place to another, with Kubernetes addressing the underlying differences so my solution still applies.

Finally, I chose Kubernetes because it's popular. I don't mean that in a "follow the crowd" kind of way. What I mean is, I want an option that has a strong community supporting it, with lots of people contributing updates and fixes. Kubernetes has that. This is one reason why it's able to provide an abstraction over many different environments - different teams are making it fit in different places, all in parallel. For any environment I decide to move to, even one that doesn't exist today, I can be fairly confident that somebody (probably the owners of that environment) will do the work necessary to adapt Kubernetes to work there. Somebody besides me, I mean. I don't want to spend my time on that; I want to spend my time on adding value for my customers.

Kubernetes Concepts

In order to understand Kubernetes and how it works, you need to understand some fundamental Kubernetes building blocks.

Container

The most basic workload element on Kubernetes is a container - usually a Docker container. A container has the following properties:

  • It’s based on an immutable filesystem image. Images are hosted such that they’re easily installable from a known location.
  • It has its own, separate, writable filesystem namespace.
  • It can mount directories or remote shares as volumes into its filesystem.
  • It uses namespaces to get completely separate network and process views from other containers, even on the same host. It’s almost like a completely separate VM, but lighter weight.

The way to think of a container is as a single application handling a single job, wrapped up neatly along with all of its dependencies into an easily launchable package.

Pod

An important unique attribute of Kubernetes compared to other clustering systems is that containers don’t stand alone. They’re grouped together into pods (like a “pod of whales,” in keeping with Kubernetes’ general nautical theme). A pod is a set of containers with the following properties:

  • They will run on the same physical host.
  • Being on the same host, they can share volumes.
  • They will share the same network namespace as well.

This is actually a powerful concept, in a subtle way. Having the native element of the system be a group of containers in this way, rather than just an individual container, helps to support the general philosophy that each container has just one job.

This is important enough that it’s worth digging into a practical example.

Suppose you want to run a webserver, and you’ve chosen nginx to do that job. At a simple level, you can just launch an nginx container. But now you want to monitor your web server by aggregating its logs up to Elasticsearch, so you need to run Logstash. How do you accomplish this?

One way would be to add Logstash to your nginx container, so it’s a single self-contained image. But now you’re mixing apps - you’ll likely have to custom-build your Docker image, and any time you want to update either of those apps, the image will have to change.

A more “Docker-like” approach is to have a separate Logstash container. You map a volume into your nginx container to hold the logs, and you map the same volume into your Logstash container so it can pick them up. They both see the same log file directory, but they’re otherwise isolated. You can update either container without affecting the other. And since each is just a single, un-customized app, you can probably just use off-the-shelf images from Docker Hub. You stay out of the business of building your own image.

Now if you want to run these partner containers on a cluster, it could be tricky. There’s more than one node where they might land, so you might have to write some rules to make sure they get put in the same place so they can share a directory easily. That is, unless you’re using Kubernetes. Kubernetes recognizes that this kind of container partnership is not just common, it’s the norm. You just put the two containers in a pod, and Kubernetes knows implicitly they need to go in the same place so they can share.
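To make this concrete, here’s a minimal sketch of what that two-container pod might look like as a manifest. The names, image tags, and paths here are illustrative assumptions, not from an actual deployment, and the Logstash container would still need a pipeline config telling it to tail the nginx logs:

    apiVersion: v1
    kind: Pod
    metadata:
      name: web
    spec:
      volumes:
        - name: logs                       # pod-scoped scratch volume both containers can mount
          emptyDir: {}
      containers:
        - name: nginx
          image: nginx:1.15
          volumeMounts:
            - name: logs
              mountPath: /var/log/nginx    # nginx writes its access/error logs here
        - name: logstash
          image: docker.elastic.co/logstash/logstash:6.4.0
          volumeMounts:
            - name: logs
              mountPath: /var/log/nginx    # Logstash reads the same files

Because both containers sit in one pod, Kubernetes schedules them onto the same node, and the emptyDir volume is shared between them automatically.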

ReplicaSet

If you just have a single container or pod to run, you don’t really need a cluster. The point of a cluster becomes apparent when you need to run multiple instances to distribute a workload and increase scale.

In that case, it’s the job of the ReplicaSet to make sure you have the number of instances that you want. You give it a replication count, and it will start that many pods. If it finds there aren’t enough (maybe a pod died, or a whole node died), it will start more. If it finds there are too many (maybe you started one manually, or you reduced the replication count), it will kill some off.

This sounds simple, even trivial, but it's important because it's an example of the general Kubernetes concept of a “Controller”. If you’re familiar with the old saying “If you give a man a fish, you feed him for a day. If you teach a man to fish, you feed him for a lifetime,” controllers are all about creating that fish-for-yourself kind of ability in the cluster.

In other words, rather than launching pods directly as a manual operation, where you would need to launch them again if some of them died, you just tell the ReplicaSet how many you want. It knows how to start them and kill them, and it will handle keeping the right number alive autonomously.

In general terms, Kubernetes clusters are managed by configuring them with what are called manifests. A manifest tells the cluster what state you want it to be in. Controllers understand the manifests and do the actual work to get the cluster into that state, and to keep it there long term.
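For example, a ReplicaSet manifest might look like the sketch below (the names, labels, and counts are illustrative). The replicas field is the desired state; the controller’s whole job is to keep the live pod count matching it:

    apiVersion: apps/v1
    kind: ReplicaSet
    metadata:
      name: web
    spec:
      replicas: 3              # desired state: keep three copies of this pod alive
      selector:
        matchLabels:
          app: web             # which pods this ReplicaSet watches over
      template:                # the pod to stamp out whenever more copies are needed
        metadata:
          labels:
            app: web
        spec:
          containers:
            - name: nginx
              image: nginx:1.15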

Deployment

A Deployment is another layer above ReplicaSet. Its job is to know which version of the pod you want. When the version changes, the Deployment object manages rolling out the change. It does this by setting up a new ReplicaSet for the new version, and gradually dialing up that count at the same time it dials down the count on the old ReplicaSet. Once all the old pods are gone, it deletes the old ReplicaSet for you.

For the most part, when you want to deploy an application to Kubernetes, what you’re going to create is a Deployment object. You tell it what the pod you want looks like. It manages the details of setting up a ReplicaSet, and of going through those upgrade steps as your configuration changes over time.

In standard controller style, Deployments get you out of the business of micromanaging these changes. The work boils down to a very simple interface: you tell the Deployment what version you want (with a manifest), and it handles the details.
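A Deployment manifest looks almost identical to a ReplicaSet manifest - you describe the pod template and the count, and the Deployment controller manages the underlying ReplicaSets for you. Again, a hedged sketch with illustrative names and tags:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
            - name: nginx
              image: nginx:1.15    # bump this tag and re-apply to trigger a rolling update

Changing the image tag and re-applying the manifest is all it takes to kick off the rolling ReplicaSet swap described above.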

ConfigMap

Once you take an application and distribute it across a bunch of separate nodes, you’re likely to hit a problem: the application needs a config file, and that file has to be distributed across those nodes too. ConfigMap handles that job.

You can define a ConfigMap containing one or more config files and map it into your pods as a volume (or alternatively as environment variables). Any time you update the ConfigMap, all of the pods will see the change, no matter which node they’re on.
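As a sketch (the file name and contents are illustrative assumptions), a ConfigMap carrying an nginx config might look like:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: nginx-config
    data:
      default.conf: |                      # each key becomes a file in the mounted volume
        server {
          listen 80;
          access_log /var/log/nginx/access.log;
        }

In the pod template, it then mounts like any other volume:

      volumes:
        - name: config
          configMap:
            name: nginx-config             # reference the ConfigMap by name
      containers:
        - name: nginx
          image: nginx:1.15
          volumeMounts:
            - name: config
              mountPath: /etc/nginx/conf.d # default.conf appears in this directory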

Again in Kubernetes style, you don't worry about copying files around to the right subset of nodes or setting up remote shares. You define the ConfigMap, and Kubernetes handles the details.

Service

In Kubernetes, a Service essentially represents a network port you want to expose from your application. It provides the ‘public interface’ of the application. Since your application may be a bunch of instances distributed across multiple nodes, the Service object defines which pods to load-balance connections across. Connections to the public interface get distributed over the available running pods, possibly by setting up whatever kind of load balancer your cloud provider offers. That's an implementation detail Kubernetes handles for you.
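A minimal Service manifest (illustrative names again) just pairs a label selector, identifying the pods to balance across, with the port to expose:

    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      type: LoadBalancer       # ask the cloud provider for an external load balancer
      selector:
        app: web               # route to any running pod carrying this label
      ports:
        - port: 80             # the port the Service exposes
          targetPort: 80       # the container port traffic is forwarded to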

Ingress

An Ingress accepts web connections and, based on the requested hostname or URL, forwards each request to a particular Service to be handled. It can act as a layer above your Services that allows multiple web resources to be served from the same IP and port. This can be important, since load balancers are generally a limited resource on cloud providers.
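Here’s a sketch of an Ingress manifest that fans two hostnames out to two different Services (the hostnames and Service names are illustrative, and the API group shown is the one current as of this writing):

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: web-ingress
    spec:
      rules:
        - host: www.example.com
          http:
            paths:
              - backend:
                  serviceName: web       # requests for www go to the web Service
                  servicePort: 80
        - host: kibana.example.com
          http:
            paths:
              - backend:
                  serviceName: kibana    # requests for kibana go to a different Service
                  servicePort: 5601

Both hostnames can resolve to the single IP of the Ingress controller’s load balancer, so one cloud load balancer serves multiple applications.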

What's Next

At this point in the series I've described Bro, CloudLens, and Kubernetes. In my next post I'll talk about Elasticsearch and Kibana. Then we'll finally be ready to start digging into the details of how it all fits together, and how you can replicate it in your environment.
