Cluster Setup for Bro (post 9 in a series)
In my previous posts in this series, I laid out my plan to enable Threat Hunting in a scalable way for a cloud environment by integrating Bro IDS with CloudLens, hosted on Kubernetes, with Elasticsearch and Kibana as the user interface. I also gave brief overviews of the key components, how to configure CloudLens to deliver network packets to Bro, and how Bro will be configured.
In this post I'll talk about how to stand up a cluster to use to host Bro.
If you already have a Kubernetes cluster, you should be able to skip this post. However, you may still want to skim through the Cluster Configuration section to see if there are any config tweaks you'll need for your existing cluster.
The general steps for setting up a Kubernetes cluster are covered well elsewhere, so I won't go into detail here. Instead, I'll link to other guides.
For my cluster, I decided to use Amazon's Elastic Kubernetes Service. This works well for me because I'm going to be monitoring AWS instances, so I might as well keep things close together. It would be just as reasonable, though, to use Google's GKE or Azure's AKS, or to set up your own cluster somewhere else, including your own data center.
Amazon's instructions for setting up an EKS cluster are located here.
As mentioned in the guide, once your cluster is up and running you should be able to interact with it by running commands like `kubectl get nodes` or `kubectl get pods`.
The AWS instructions to set up a generic EKS cluster get you most of the way there, but there are a few more things to set up in preparation for the Bro deployment.
Default Storage Class
Elasticsearch needs to store data, which means it needs a persistent volume. Generally speaking, cloud providers are able to automatically create volumes as needed. It's easy to get this working in AWS, but it's not set up by default. (Setting this up for self-hosted clusters may be a bit more tricky, and is not covered in this series.)
You will need to set up a default storage class so your cluster knows what kind of volumes to create.
When the Elasticsearch setup asks for a persistent volume by creating a PersistentVolumeClaim object, it doesn't request a specific storage class (unless you override this in your config). Kubernetes will assume it wants the default, if there is one. If you haven't set a default, Elasticsearch will wait indefinitely for persistent volumes that will never appear.
If you have set a default StorageClass, it contains the necessary information for Kubernetes to know where you want the persistent volume to come from. A new persistent volume will be created based on those settings, and allocated to Elasticsearch.
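As a concrete sketch, a default StorageClass for AWS EBS gp2 volumes looks something like the following. The `is-default-class` annotation is what makes it the default; the name and parameters here follow the EKS storage class guide linked below, but treat the specifics as illustrative for your own setup.

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2
  annotations:
    # This annotation marks the class as the cluster default,
    # so PersistentVolumeClaims with no storageClassName use it.
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: ext4
reclaimPolicy: Delete
```

After applying it with `kubectl apply -f`, running `kubectl get storageclass` should show the class with `(default)` next to its name.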
Helm
We're going to use a Kubernetes application called Helm to help manage our running components. Helm is a package manager for Kubernetes, roughly equivalent to Red Hat's `yum` or Ubuntu/Debian's `apt-get`. The packages you can install are called charts. Each chart contains all of the manifests needed to get an application fully up and running.
There is a default repository of charts, and there can be more in other locations. Helm takes care of tweaking the manifests when you install a chart, so that you can install multiple instances if you want and they don't conflict. Also, Helm charts can have dependencies on each other, so you can install one chart and get all of the components it needs as well.
Helm is pretty easy to install, but the install for EKS is a little special due to EKS's integration with IAM authentication. EKS-specific instructions can be found here.
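Once you've worked through the EKS-specific guide, the core of the Helm setup boils down to something like the commands below. This assumes Helm 2, which uses an in-cluster component called Tiller; if your guide or Helm version differs, follow it instead.

```shell
# Create a service account for Tiller, Helm 2's in-cluster component
kubectl -n kube-system create serviceaccount tiller

# Give it permission to install charts across the cluster
kubectl create clusterrolebinding tiller \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:tiller

# Install Tiller into the cluster using that service account
helm init --service-account tiller

# Verify that the client can talk to Tiller
helm version
```

Granting `cluster-admin` to Tiller is the simple route for a dedicated cluster like this one; a shared cluster would call for a more restrictive role.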
Ingress Controller
Finally, we'd like to have an ingress controller. An ingress controller takes a single IP address and can host multiple web services behind it. Using an Ingress is not strictly necessary; you could just expose each service directly. But then each application consumes a load balancer with its own IP address, and those are limited resources. It's more efficient to allocate one load balancer for the ingress controller and let it use host-based or path-based routing to direct requests to the right service behind the scenes.
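The host-based routing described above looks roughly like this as an Ingress resource. The hostnames and service names here are hypothetical stand-ins, and the `extensions/v1beta1` API version matches clusters of this vintage:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: bro-stack
  annotations:
    # Tell the nginx ingress controller to handle this resource
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  # Host-based routing: each hostname maps to a different service
  - host: kibana.example.com
    http:
      paths:
      - backend:
          serviceName: kibana
          servicePort: 5601
  - host: elastic.example.com
    http:
      paths:
      - backend:
          serviceName: elasticsearch
          servicePort: 9200
```

Both hostnames resolve to the same load balancer; the controller inspects the `Host` header and forwards each request to the matching service.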
With Helm in place, it's simple to install an nginx-based Ingress controller as a Helm chart. Here are instructions.
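With Helm 2 syntax, the install is a one-liner along these lines (the release name `nginx-ingress` is our choice; `rbac.create=true` is needed on RBAC-enabled clusters like EKS):

```shell
# Install the stable/nginx-ingress chart
helm install stable/nginx-ingress \
  --name nginx-ingress \
  --set rbac.create=true

# The controller is exposed as a LoadBalancer service; this lists
# the services so you can find the external address AWS assigned
kubectl get services
```

The external address of the controller's LoadBalancer service is where you'll point DNS for any hostnames used in your Ingress rules.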
Links
- Amazon EKS - https://aws.amazon.com/eks/
- Google Kubernetes Engine - https://cloud.google.com/kubernetes-engine/
- Azure Kubernetes Service - https://azure.microsoft.com/en-us/services/kubernetes-service/
- Getting Started with Amazon EKS - https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html
- Amazon EKS Storage Classes - https://docs.aws.amazon.com/eks/latest/userguide/storage-classes.html
- Helm Package Manager - https://helm.sh/
- Installing Helm for EKS - https://medium.com/@zhaimo/using-helm-to-install-application-onto-aws-eks-36840ff84555
- Kubernetes Ingress - https://kubernetes.io/docs/concepts/services-networking/ingress/
- Nginx Ingress Helm Chart - https://github.com/helm/charts/tree/master/stable/nginx-ingress
Now we have a cluster ready and waiting to host Bro. Next we'll need to stand up the components of our solution.