Kris Raney
Distinguished Engineer

Bro Hosting (post 7 in a series)

September 10, 2018 by Kris Raney

In my previous posts in this series, I laid out my plan to enable Threat Hunting in a scalable way for a cloud environment by integrating Bro IDS with CloudLens, hosted on Kubernetes, with Elasticsearch and Kibana as the user interface. I also gave brief overviews of the key components, and how to configure CloudLens to deliver network packets to Bro.

In this post I'll talk about how Bro fits into a Kubernetes architecture.


Typically with Bro, you would have a static set of machines or VMs, each hosting one of the Bro components. A program called BroControl would help manage these elements using ssh to access the nodes of the Bro cluster.

This is pretty reasonable for a fairly small deployment that can just be maintained manually. But cloud infrastructure often doesn’t meet either of those criteria. It can be large scale, and that scale may change a lot. And definitely, no one wants to be managing cloud resources manually. You want to have automation and orchestration doing the management. Unfortunately, at least in my opinion, BroControl is a little clunky for managing a dynamic cloud environment.

So, I set out to convert Bro to be hosted on Kubernetes. For those who are not familiar with Kubernetes, I give an overview in this post.

As discussed in that post, one of the important principles of Kubernetes is "teaching the cluster how to fish" - that is, letting a controller within Kubernetes manage the details of operating in a clustered environment rather than micromanaging all of the orchestration directly. BroControl overlaps considerably with the cluster management provided by Kubernetes, and this creates some difficulties. Because of this, I decided to try to remove BroControl from the equation, letting Kubernetes handle the heavy lifting of managing the cluster.

These are the jobs BroControl handles:

  • It generates detailed config from high-level config (we'll just create the detailed config directly)
  • It distributes config files and policy scripts across the cluster (this is handled by ConfigMaps in Kubernetes)
  • It starts and stops the Bro processes (this is handled by Kubernetes controllers)
  • It gathers monitoring and statistical data (we'll tackle this in a later post)

Traditional Architecture

As you may recall from an earlier post, the elements of our Bro cluster look like this. Each box below CloudLens is a Bro component. Each of these is really just the same Bro executable, but running with different settings. A description of the various Bro components can be found here.


Translating this to Kubernetes

Logically for Kubernetes, each of these Bro processes translates to a container, where that container is part of a Pod.

Bro pods for Kubernetes
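As a concrete sketch of that idea, a single-component pod might look something like the manifest below. The image name, labels, and mount path are my own illustrative choices rather than values from the original post; the CLUSTER_NODE environment variable is what Bro's cluster framework consults to pick out its own entry from cluster-layout.bro.

```yaml
# Illustrative manifest, not from the original post: one Bro component
# (the manager) as a single-container pod. Every component runs the same
# image; only its role differs.
apiVersion: v1
kind: Pod
metadata:
  name: bro-manager
  labels:
    app: kube-bro
    component: manager
spec:
  containers:
  - name: bro
    image: bro:latest              # hypothetical image name
    env:
    - name: CLUSTER_NODE           # tells Bro which cluster-layout.bro entry it is
      value: manager
    volumeMounts:
    - name: bro-config
      mountPath: /bro/etc          # cluster config mounted from a ConfigMap
  volumes:
  - name: bro-config
    configMap:
      name: bro-cluster-config
```

The other single-container components (proxy, logger before its sidecar is added) would follow the same pattern with a different `component` label and CLUSTER_NODE value.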

Additionally, in order to use CloudLens as the Frontend / Load Balancer, the worker pod needs a CloudLens agent container added. CloudLens provides the network interface carrying the traffic to be monitored, for Bro to listen on. For more detail on this, see this post.

Bro worker pod for Kubernetes
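A minimal sketch of that worker pod, assuming hypothetical image names and a CloudLens API key stored in a Secret, might look like the following. Because containers in a pod share a network namespace, Bro can listen on the interface that the agent container provides.

```yaml
# Illustrative worker pod: a Bro container plus a CloudLens agent sidecar.
# Image names and the Secret are assumptions for the sake of the example.
apiVersion: v1
kind: Pod
metadata:
  name: bro-worker
  labels:
    app: kube-bro
    component: worker
spec:
  containers:
  - name: bro
    image: bro:latest
    env:
    - name: CLUSTER_NODE
      value: worker-1
  - name: cloudlens-agent
    image: cloudlens-agent:latest
    securityContext:
      privileged: true             # packet handling generally needs extra privileges
    env:
    - name: CLOUDLENS_API_KEY
      valueFrom:
        secretKeyRef:
          name: cloudlens
          key: api-key
```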

Also we will gather the Bro logs into Elasticsearch, so the Logger pod will need a Logstash container added. We'll discuss Logstash and how this fits together in more detail in a later post.

Bro logger pod for Kubernetes
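One way to wire this up, sketched below with illustrative names, is to have both containers share an emptyDir volume: Bro's logger writes its log files there, and the Logstash sidecar tails them and ships them onward to Elasticsearch.

```yaml
# Illustrative logger pod: the Bro logger and a Logstash sidecar share a volume.
apiVersion: v1
kind: Pod
metadata:
  name: bro-logger
  labels:
    app: kube-bro
    component: logger
spec:
  containers:
  - name: bro
    image: bro:latest
    env:
    - name: CLUSTER_NODE
      value: logger
    volumeMounts:
    - name: logs
      mountPath: /bro/logs         # Bro writes its logs here
  - name: logstash
    image: logstash:latest         # hypothetical tag; pin a real version in practice
    volumeMounts:
    - name: logs
      mountPath: /bro/logs
      readOnly: true               # Logstash only reads the files
  volumes:
  - name: logs
    emptyDir: {}
```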

And of course, each pod will actually be managed by a Deployment. I talk about the reasons why in this post.

Kubernetes Deployment object
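For instance, the worker pod could be wrapped in a Deployment along the lines of this sketch; scaling the number of workers then becomes a matter of changing `replicas` (or attaching an autoscaler). The names and replica count are illustrative.

```yaml
# Illustrative Deployment for the worker pods; Kubernetes keeps `replicas`
# copies of the pod template running and replaces any that fail.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bro-worker
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kube-bro
      component: worker
  template:
    metadata:
      labels:
        app: kube-bro
        component: worker
    spec:
      containers:
      - name: bro
        image: bro:latest          # hypothetical image name
```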

All of these Kubernetes objects are defined in a manifest, which is just one or more text files written in YAML.

In addition to the deployment and pod definitions, we need the configuration information that will be distributed around to the nodes in our cluster. This includes a set of policy scripts that define what Bro will watch for, and some configuration that tells Bro how to operate.

The set of policy scripts will go into one ConfigMap.

Bro policy ConfigMap
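As a sketch, such a ConfigMap might carry a local.bro entry point plus any site-specific scripts; the ConfigMap name and the particular @load lines below are illustrative choices, not the original project's contents.

```yaml
# Illustrative ConfigMap holding Bro policy scripts.
apiVersion: v1
kind: ConfigMap
metadata:
  name: bro-policy
data:
  local.bro: |
    # Site policy entry point; load whichever detections the cluster should run.
    @load protocols/ssh/detect-bruteforcing
    @load protocols/ssl/validate-certs
```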

And the Bro configuration will go into a separate ConfigMap.

Bro cluster ConfigMap

A Snag

However, with this last ConfigMap we run into a complication. One of the config files generated by BroControl defines the cluster itself - cluster-layout.bro. Bro appears to use this file for each node to know its own role and to know which other nodes to talk to.

Here's an example of it.

redef Cluster::manager_is_logger = F;
redef Cluster::nodes = {
    ["control"] = [$node_type=Cluster::CONTROL, $ip=,
                   $zone_id="", $p=47760/tcp],
    ["logger"] = [$node_type=Cluster::LOGGER, $ip=,
                  $zone_id="", $p=47761/tcp],
    ["manager"] = [$node_type=Cluster::MANAGER, $ip=,
                   $zone_id="", $p=47762/tcp,
                   $logger="logger", $workers=set("worker-1")],

    ["proxy-1"] = [$node_type=Cluster::PROXY, $ip=,
                   $zone_id="", $p=47763/tcp,
                   $logger="logger", $manager="manager",
    ["worker-1] = [$node_type=Cluster::WORKER, $ip=,
                   $zone_id="", $p=47764/tcp,
                   $interface="eth0", $logger="logger",
                   $manager="manager", $proxy="proxy-1"],

What makes this tricky is that the exact content of the file is not static; it depends entirely on the current state of the application on the cluster. More specifically, it depends on decisions Kubernetes will be making on our behalf: where pods are deployed, how many there are, and what IP addresses they have. There should be one entry for each pod, and each entry also references settings from that pod. ConfigMap has no native support for dynamic content like that.


Fortunately, Kubernetes allows the user to add extended functionality by creating new types of resources called “custom resources”. A custom resource can have an associated controller that knows what to do with that resource. In order to accommodate this dynamic config file while still maintaining a “light touch” approach to Bro, I decided to create a Kubernetes custom resource I called a ConfigMapTemplate.

This resource allows you to define a config file using Go templating. In your template, you can query things like a list of existing pods and the IP addresses of those Pods. Then a ConfigMapTemplate controller will pick up the template and instantiate it, creating a ConfigMap to hold the results. Any time the template changes or the set of pods changes, the controller will update the ConfigMap.
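Since the post doesn't show the resource itself, here is a rough sketch of what a ConfigMapTemplate object could look like. The apiVersion, field names, and target-ConfigMap convention all depend on the actual controller implementation and are assumptions here; only the embedded template syntax mirrors the snippets shown below.

```yaml
# Hypothetical ConfigMapTemplate resource; the real schema is defined by the
# controller, so every field name here is illustrative.
apiVersion: kubebro.example.com/v1
kind: ConfigMapTemplate
metadata:
  name: bro-cluster-layout
spec:
  targetConfigMap: bro-cluster-config   # ConfigMap the controller creates/updates
  data:
    cluster-layout.bro: |
      redef Cluster::manager_is_logger = F;
      redef Cluster::nodes = {
      {{- $loggerpods := pods "app=kube-bro,component=logger" }}
      {{- range $loggerpods }}
        ["logger"] = [$node_type=Cluster::LOGGER,
                      $ip={{- .Status.PodIP -}}, $p=47761/tcp],
      {{end -}}
      };
```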

So for example, this bit of template code will list all of the pods labeled with “component=logger” and save that list to a variable.

{{- $loggerpods := pods "app=kube-bro,component=logger" }}

While this snippet will iterate over that list, adding entries into the config file output and substituting the pod’s IP address.

{{- range $loggerpods }}
  ["logger"] = [$node_type=Cluster::LOGGER,
                $ip={{- .Status.PodIP -}},
{{end -}}
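Worker entries could be generated the same way. The sketch below is my own variant, assuming the same `pods` template helper and that the pod objects expose standard fields such as `.Name`:

```gotemplate
{{- range pods "app=kube-bro,component=worker" }}
  ["{{ .Name }}"] = [$node_type=Cluster::WORKER,
                     $ip={{- .Status.PodIP -}},
                     $interface="eth0",
                     $logger="logger", $manager="manager",
                     $proxy="proxy-1"],
{{end -}}
```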

The end result is a cluster-layout.bro config file that matches what BroControl would create, but that is generated dynamically from whatever pods happen to be running and is distributed across the cluster like any other ConfigMap.

ConfigMapTemplate is open source and available for use.

What's Next

I've described how I plan to structure Bro for the purposes of this project. However, there are some limitations to this approach. Before we get into cluster setup, next I'll describe a bit about these limitations and how I think they are best addressed long term.