Bro Installation (post 10 in a series)
In my previous posts in this series, I laid out my plan to enable Threat Hunting in a scalable way for a cloud environment by integrating Bro IDS with CloudLens, hosted on Kubernetes, with Elasticsearch and Kibana as the user interface. I also gave brief overviews of the key components, how to configure CloudLens to deliver network packets to Bro, and how Bro will be configured. Then I gave instructions on how to set up a Kubernetes cluster that will host Bro.
In this post, finally, I will walk through the steps of getting Bro running on your cluster.
I mentioned it in my previous post, but as a reminder we will be relying on Helm to manage our installs. Be sure it's installed on your cluster as mentioned in the previous post.
Also, we will be relying on an Ingress. Be sure you have one installed on your cluster as mentioned in the previous post.
As I mentioned before, I wrote a Kubernetes controller to help with updating Bro's configuration to reflect the current state of the cluster.
In order to put this in place, you'll need to install ConfigMapTemplate, available at https://github.com/OpenIxia/kube-bro-configmaptemplate.
The simplest way to install it is by running
kubectl apply -f https://raw.githubusercontent.com/OpenIxia/kube-bro-configmaptemplate/master/artifacts/configmaptemplate.yaml
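Before moving on, it's worth a quick sanity check that the controller actually deployed. The grep pattern below is an assumption based on the project name; the exact resource names come from the manifest, so adjust if yours differ.

```shell
# Look for the ConfigMapTemplate controller resources created by the manifest.
# The name pattern is an assumption; inspect the manifest if nothing matches.
kubectl get all --all-namespaces | grep -i configmaptemplate
```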
Since we're going to store our Bro data in Elasticsearch, obviously we'll need Elasticsearch to be running somewhere. One option for this would be to use a hosted Elasticsearch service like the ones provided by Elastic or by AWS.
I'd rather have my ES cluster hosted very near the data it's going to ingest, so I want to run it right on my cluster. Fortunately, that's relatively easy to do. There is a Helm chart provided by Bitnami that makes setting up Elasticsearch a simple process.
Helm charts can be configured by setting values on the command line when you install, but for anything beyond the most basic settings it's handier to create a values.yaml file that pulls all of the settings together into a structured format. Bitnami's defaults are actually pretty decent for a basic ES cluster, but I have a few tweaks for my cluster.
- I want to pin the Elasticsearch version to a specific revision by specifying an image tag. That's partly so I can match Kibana to the same version when I install that later.
- My setup produces data pretty quickly, so I am expanding the amount of disk space used.
- At the same time, my cluster is small so I'm limiting the number of replicas. I can always grow this later if needed.
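The tweaks above might translate into a values file along these lines. This is a sketch only: the exact key names vary between versions of the Bitnami chart, so verify them against the version you install (e.g. with `helm inspect values bitnami/elasticsearch`), and the tag, size, and replica numbers here are placeholders.

```yaml
# es-values.yaml -- illustrative sketch; key names depend on the chart version
# (verify with: helm inspect values bitnami/elasticsearch)
image:
  tag: "6.5.4"       # placeholder: pin ES so Kibana can be matched to it later
data:
  replicas: 2        # small cluster, so fewer replicas; can grow later
  persistence:
    size: 50Gi       # extra disk, since this setup produces data quickly
master:
  replicas: 2
```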
With the settings all laid out, it's a simple matter to install Elasticsearch using Helm.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install bitnami/elasticsearch --version 2.0.3 --name bro-es -f es-values.yaml
These commands add the custom helm repository from Bitnami to my list of repositories, download the list of packages from it, and then install Elasticsearch using the settings file I created earlier.
It will take the Elasticsearch cluster a moment to come up. You can watch it coming up using
kubectl get pods -o wide
Once all of the Elasticsearch pods are in the 'Running' state, ES is mostly ready to go. (It can actually take a little time after that for the running processes to finish their startup work and actually start processing requests.)
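To confirm that the cluster is actually serving requests and not just showing 'Running', one option is to port-forward to the coordinating service and hit the cluster health API. The service name below assumes the `bro-es` release name used above and the chart's usual `<release>-elasticsearch-coordinating-only` naming convention.

```shell
# Forward local port 9200 to the ES coordinating service in the background
# (service name assumes the 'bro-es' release name used in the install above)
kubectl port-forward svc/bro-es-elasticsearch-coordinating-only 9200:9200 &

# Ask Elasticsearch for its cluster health; look for "status":"green" or "yellow"
curl -s http://localhost:9200/_cluster/health
```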
A brief warning
One thing to be aware of about Elasticsearch - it's quite the memory hog. Running it on your cluster this way, you'll need to be sure you have sufficient memory in your nodes to support it.
As discussed earlier, Elasticsearch by itself doesn't really have a graphical UI; it just provides APIs. So next I want to install Kibana, once again using Helm charts. The default stable repository for Helm already has Kibana available.
Similar to Elasticsearch, I need to provide my settings for Kibana in a YAML file.
- I need to set `ELASTICSEARCH_URL` to point Kibana to the ES cluster we just created. The hostname in this URL is based on the service name that gets created by Helm, and follows a typical Helm naming convention. It has four parts:
- bro-es - this is the name of the helm release we just created (a "release" is an installed chart.)
- elasticsearch - this is the name of the helm chart that we installed
- coordinating-only - this suffix is added by the chart to specify exactly which service we're talking about
- 9200 - This is the port number, as specified in the Elasticsearch default config.
- We specify the image tag to use. This is deliberately chosen to match the Elasticsearch image that we used.
- We tell the Helm chart to use an ingress controller. The ingress will use host-based routing, so we give it a hostname to watch for.
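Put together, the Kibana settings might look something like the file below. Again, this is a sketch: the key names are my best reading of the stable/kibana chart and should be verified against your chart version (e.g. `helm inspect values stable/kibana`), and the image tag is a placeholder.

```yaml
# kibana-values.yaml -- illustrative sketch; verify key names against the chart
env:
  # Service name follows the <release>-<chart>-coordinating-only convention
  ELASTICSEARCH_URL: http://bro-es-elasticsearch-coordinating-only:9200
image:
  tag: "6.5.4"           # placeholder: match the Elasticsearch image tag
ingress:
  enabled: true
  hosts:
    - bro-kibana.local   # host-based routing; must match your hosts-file entry
```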
With the settings in place, install is once again simple:
helm install stable/kibana --name bro-kibana -f kibana-values.yaml
Kibana actually starts up fairly quickly, assuming Elasticsearch is up and running already.
Since we're using an ingress with host-based routing, to access it you need to make sure the hostname `bro-kibana.local` that we set in our config maps to the IP address of the ingress. Here's how to do that.
First, you need to find the IP address of the ingress. You can do this using the command
kubectl get svc
This should list the services on your cluster. You'll want to look for the service for your ingress. If you used Helm to install the nginx controller as recommended in the previous post, the service name will look like `<servicename>-nginx-ingress-controller`, where `<servicename>` is whatever name you gave to helm when you installed nginx.
On the output line that shows that service, if you scan over you should see a column for `EXTERNAL-IP`. The IP address you want is whatever shows up in that column for this service.
└─▪ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
bro-es-elasticsearch-coordinating-only ClusterIP 10.105.197.22 <none> 9200/TCP 5h
bro-es-elasticsearch-discovery ClusterIP None <none> 9300/TCP 5h
bro-es-elasticsearch-master ClusterIP 10.96.210.49 <none> 9300/TCP 5h
bro-kibana ClusterIP 10.103.232.144 <none> 443/TCP 5d
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 33d
nginx-nginx-ingress-controller LoadBalancer 10.105.237.225 10.24.66.17 80:32145/TCP,443:30223/TCP 21d
nginx-nginx-ingress-default-backend ClusterIP 10.111.186.217 <none> 80/TCP 21d
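If you'd rather script this lookup than eyeball the table, `kubectl`'s jsonpath output format can pull the external IP directly. Substitute your own ingress service name for `nginx-nginx-ingress-controller`.

```shell
# Print just the EXTERNAL-IP of the ingress controller's LoadBalancer service
kubectl get svc nginx-nginx-ingress-controller \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```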
Once you know that IP address, you can add it to the hosts file for your desktop. For Mac or Linux, this file would be in `/etc/hosts`. For Windows, it's located under `C:\Windows\System32\Drivers\etc\hosts`. Open this file in your favorite editor, and add a line like:

10.24.66.17 bro-kibana.local

using whatever IP address you determined earlier (`10.24.66.17` in the example output above).
Having done this, you should be able to open your browser and navigate to http://bro-kibana.local/, and see the Kibana UI.
Finally, we need to get Bro running on the cluster. I have created a Helm chart for Bro, available at https://github.com/OpenIxia/kube-bro-helm-chart.
For the Bro chart, there are a few things that you will need to set:
- You need to explicitly accept the CloudLens EULA by setting the corresponding chart value.
- You need to supply the project key from your CloudLens project by setting its chart value.
- You need to point to the Elasticsearch cluster by setting `elasticsearch.url`.
Since these are simple values, I just set them on the command line:
helm repo add kube-bro https://openixia.github.io/kube-bro-helm-repo/
helm repo update
helm install kube-bro/kube-bro --name bro \
You can watch with `kubectl get pods` while Bro comes up. The start can be a bit bumpy, since Bro doesn't like it when `cluster-layout.bro` is incomplete, and it doesn't become complete until all pods are at least starting. Nevertheless, after a moment you should see all of the Bro pods go to a 'Running' state.
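One way to follow the startup without re-running the command is to watch the pods with `-w`. The label selector here is an assumption: many Helm charts label their pods with the release name, but check with `kubectl get pods --show-labels` and adjust to whatever labels the chart actually sets.

```shell
# Stream pod status changes as Bro comes up (Ctrl-C to stop).
# 'release=bro' is an assumed label; verify with: kubectl get pods --show-labels
kubectl get pods -l release=bro -w
```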
I already talked through the CloudLens configuration in an earlier post, but this is the point where it should kick in. Give Bro a bit of time to come up, and then we'll search for it in the CloudLens UI.
I mentioned in that post that you'll need to create a tool group that watches for your Bro instances by looking for a tag. The Helm chart for Bro already sets some tags for you. You should be able to find your Bro instance by searching for tag "module" with the value "kube-bro", which is taken from the name of the Helm chart. Then add a search for the tag "install" with a value that is the name you gave to helm when you installed Bro ('bro' in the example command above).
Your search should come back with a result representing the Bro pod you just started using Helm. You'll want to save this search as a tool group.
Don't forget to do a similar search for your source side - the VMs whose traffic you want to capture. Save that as a source group, and then draw a connection from that to your Bro tool group.
If you watch in the CloudLens UI, you should be able to see your Bro tool group go from an instance count of 0 to an instance count of 1, indicating that your Bro instance has been noticed and added to the group. At that point, you should also see the instances on the source side start sending their packets to your Bro instance (indicated as a thickening connection bar given sufficient traffic quantities.)
You should also be able to see Bro events start showing up in Kibana, but I'll dig into that next time.
- ConfigMapTemplate open source repository - https://github.com/OpenIxia/kube-bro-configmaptemplate
- Helm Charts for Elasticsearch - https://bitnami.com/stack/elasticsearch/helm
- Kibana Helm Chart - https://github.com/helm/charts/tree/master/stable/kibana
- Bro Helm Chart - https://github.com/OpenIxia/kube-bro-helm-chart
Now we finally have Bro running and collecting data into Elasticsearch. Next time I'll give a brief overview of how to dig around in this data to do Threat Hunting using Kibana.