Bro Frontend (post 6 in a series)
In my previous posts in this series, I laid out my plan to enable Threat Hunting in a scalable way for a cloud environment by integrating Bro IDS with CloudLens, hosted on Kubernetes, with Elasticsearch and Kibana as the user interface. I also gave brief overviews of the key components.
In this post I'll talk about how I'll be feeding Bro network traffic for it to analyze.
First, let's take a look at the typical Bro architecture for a scalable clustered deployment. This architecture assumes a network tap feeding a "frontend", which will load balance the traffic over some number of workers. (I kind of object to the term "frontend" here since I normally think of the user interface as the front end, but I'll go with Bro's terminology.)
As I mentioned in my earlier post in this series about CloudLens, it is theoretically possible to set up a VPC in the cloud to mimic this architecture. You’d put your resources in a private subnet and route all of their traffic through a VM implementing a NAT, and that VM serves as the tap. But that’s troublesome for a couple of key reasons.
- First, that NAT VM becomes a single point of failure and a network bottleneck.
- Second, you’ll only see north-south traffic that way. You’ll be blind to all the traffic traveling around inside your VPC, east-west.
So instead, I will use CloudLens to address this problem. In order to make this happen, I need to place a CloudLens agent alongside each worker. (In Kubernetes terms, I'll add a CloudLens agent container to the worker pod. But I'll get into that more in a later post.)
The CloudLens agent also goes onto each VM I'm trying to monitor. Those source agents then capture traffic at each of these VM endpoints, and tunnel it to an agent on a worker. For each worker, all of these tunnels get aggregated onto a single virtual interface, and the Bro worker just listens on that interface. As far as Bro is concerned, it’s just like listening on a tap.
First, it's necessary to sign up for a CloudLens account. CloudLens is a paid service, but there is a free trial. Once that expires, the cost for a small number of VMs is basically negligible.
In the CloudLens UI, I create a project. Each project has a project key, which I'll need when launching the agents. For each Bro worker, I start the CloudLens agent, giving it this project key. Once the agent comes up, it will show up in searches within the project.
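Launching an agent looks roughly like this. This is a sketch based on the pattern in the CloudLens deployment guide; the image name and flags may differ by agent version, and `<PROJECT_KEY>` is a placeholder for the key from the CloudLens UI:

```shell
# Run the CloudLens agent as a privileged container on the host network.
# (Sketch only - confirm the image name and flags against the current
# CloudLens deployment guide for your agent version.)
sudo docker run -d --name cloudlens-agent \
  --net=host --privileged --restart=on-failure \
  -v /:/host \
  -v /var/run/docker.sock:/var/run/docker.sock \
  ixiacom/cloudlens-agent \
  --accept_eula y \
  --apikey <PROJECT_KEY>
```

The same command, with the same project key, works on both the source VMs and the Bro workers; which role an agent plays is decided later by the groups you define in the UI.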
I need to create a source group that gathers together all of the source VMs I want to monitor. I can do this by searching by region, by OS, by EC2 tag, or any other metadata that's relevant to find the right set of VMs.
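If I'm selecting sources by EC2 tag, the VMs need that tag first. A hypothetical example with the AWS CLI (the tag key/value and instance ID are placeholders, not anything CloudLens requires):

```shell
# Tag the VMs to be monitored so the CloudLens source-group
# search can select them by EC2 tag. Key, value, and instance
# ID below are illustrative placeholders.
aws ec2 create-tags \
  --resources i-0123456789abcdef0 \
  --tags Key=Monitoring,Value=cloudlens-source
```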
Then I create a tool group that gathers together all of my Bro workers. I do this by setting a tag on the agent for the worker. Then I just search for that tag.
With both groups in place, I just draw a connection between them. It's as simple as that; a moment later, packets start to arrive. So with CloudLens as a frontend for Bro, we're in business. We have traffic arriving, ready for analysis, with no extra infrastructure or VMs to set up.
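A quick sanity check on a worker confirms the mirrored traffic is really flowing. This assumes the agent exposes the interface as `cloudlens0`, as it did in my setup:

```shell
# Confirm the CloudLens virtual interface exists...
ip link show cloudlens0

# ...and that mirrored packets are arriving on it.
# Capture ten packets and exit.
sudo tcpdump -i cloudlens0 -c 10 -nn
```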
The only other thing to do is configure Bro to listen on the network interface that the agent provides, named cloudlens0. That interface shows up in two places in Bro's config. The first is on the command line: when I run the Bro worker, I specify -i cloudlens0 as an argument. The second is in the cluster-layout.bro config file, in the entry for the worker. To be honest, I'm not sure why it's in two places or which takes precedence; I just followed in the footsteps of a BroCtl-managed configuration.
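For reference, a BroCtl-generated cluster layout looks roughly like this sketch. Node names, IPs, and ports are placeholders, and the exact record fields vary by Bro version; the relevant part is the worker's $interface:

```
# cluster-layout.bro (illustrative sketch; IPs and ports are placeholders)
redef Cluster::nodes = {
    ["manager"]  = [$node_type=Cluster::MANAGER, $ip=10.0.0.10, $p=47761/tcp,
                    $workers=set("worker-1")],
    ["proxy-1"]  = [$node_type=Cluster::PROXY,   $ip=10.0.0.10, $p=47762/tcp,
                    $manager="manager", $workers=set("worker-1")],
    ["worker-1"] = [$node_type=Cluster::WORKER,  $ip=10.0.0.11, $p=47763/tcp,
                    $manager="manager", $proxy="proxy-1",
                    $interface="cloudlens0"],
};
```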
That's pretty much it. With this simple setup, I can now monitor anything going on in my cloud infrastructure.
Now that I have CloudLens ready to deliver packets to my Bro IDS, the next step is to get Bro up and running.
- CloudLens - https://ixia.cloud/startup
- CloudLens AWS Deployment Guide - https://www.ixiacom.com/resources/cloudlens-aws-deployment-guide