You Can't Manage What You Can't Measure - an RSA 2019 Retrospective
This week, I had the privilege of attending an enlightening and wide-ranging panel discussion entitled “You can’t manage what you can’t measure: are you measuring the right stuff?” at the RSA 2019 conference. The panelists included Sean Cordero of Netskope, David Ginsburg of Cavirin, Accenture’s Justin Harvey, and Ixia’s own Steve McGregory. Marie Hattar, Keysight CMO, moderated the session.
It’s hard to recount the entire discussion, which spanned from 2FA to automation to machine learning to the gap between standards and reality, but several threads stood out as running through multiple topics during the hour. Chief among them was not a hot new technology or technique, but the importance of people in the security equation. As Steve McGregory pointed out, you can’t train humans not to be human. You have to assume they will make mistakes and can deal effectively with only a limited amount of data and complexity. Our role as security specialists and product developers is to make it easier for humans to do the right thing.
One example of this, offered by Sean Cordero, is two-factor authentication (2FA). Even with training, people choose weak passwords, and passwords can be compromised, so why not make 2FA easy and required everywhere?
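As an aside for the curious, the rotating six-digit codes behind most 2FA authenticator apps are time-based one-time passwords (TOTP, RFC 6238), and the algorithm is simple enough to sketch with the Python standard library. This is an illustrative sketch only, not production authentication code:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, now=None, digits=6, step=30):
    """Generate an RFC 6238 time-based one-time password (HMAC-SHA-1, 30 s steps)."""
    key = base64.b32decode(secret_b32)
    # The moving factor is the number of time steps since the Unix epoch.
    counter = int((time.time() if now is None else now) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset taken
    # from the low nibble of the last digest byte.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

For example, with the RFC 6238 test secret (the ASCII string `12345678901234567890`, base32-encoded) and a clock value of 59 seconds, an 8-digit code comes out as `94287082`, matching the test vector in the RFC's appendix.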
Another subject where the focus returned to people was, ironically, the role of machine learning and AI in security, though this did prompt some debate within the panel. At the end of the day, security breaches are typically detected by security administrators, not ML algorithms (and that detection time is often quite long). AI might better be thought of as “automation intelligence”: it can play a valuable role in winnowing the flood of events and alerts down to those worth inspecting, making life easier for the human at the console, but the algorithm isn’t going to find the bad guy for you.
One intertwined set of topics that came up was cloud migration and security tool selection. The panel largely agreed that Cloud Service Providers (CSPs) do a good job of securing their infrastructure and that security concerns should not prevent an enterprise from moving workloads to the cloud, as long as it’s done responsibly and with the selection of appropriate security tools. Different operating environments require different tools and techniques, and the enterprise firewall that served you well in the data center will be useless in the public cloud. Moving to the cloud will involve deploying additional tools. It is tempting to collapse down to a single vendor, whether by using the same tools for both data center and cloud security or by moving from best-of-breed to a less expensive single-vendor suite. But the wrong tools, ones that are inappropriate for the environment or that simply don’t work well, won’t make you very secure even if they save you some money.
Circling back to the importance of measurement and measuring the right things, it was noted that counting things like alerts or blocked attacks is largely irrelevant. As Justin Harvey pointed out, the only two metrics that really matter are Mean Time To Detect (MTTD) and Mean Time To Respond (MTTR). The only way to truly measure the effectiveness of a security deployment, including the critical human element, is by measuring how long it takes to detect a breach and how long it takes to effectively respond and remedy the situation. Training, simulation, and engaging a “red team” to attack your network are probably the best ways to evaluate and improve these metrics.
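To make the two metrics concrete, here is a minimal sketch of how MTTD and MTTR might be computed from incident timestamps. The incident records are hypothetical, invented purely for illustration:

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident log: when each breach began, was detected, and was resolved.
incidents = [
    {"start": datetime(2019, 1, 3, 9, 0),
     "detected": datetime(2019, 1, 5, 14, 0),
     "resolved": datetime(2019, 1, 8, 17, 0)},
    {"start": datetime(2019, 2, 10, 2, 0),
     "detected": datetime(2019, 2, 10, 22, 0),
     "resolved": datetime(2019, 2, 12, 6, 0)},
]

def mean_hours(deltas):
    """Average a sequence of timedeltas, expressed in hours."""
    return mean(d.total_seconds() / 3600 for d in deltas)

# MTTD: average time from the start of a breach to its detection.
mttd = mean_hours(i["detected"] - i["start"] for i in incidents)
# MTTR: average time from detection to resolution.
mttr = mean_hours(i["resolved"] - i["detected"] for i in incidents)

print(f"MTTD: {mttd:.1f} h, MTTR: {mttr:.1f} h")  # 36.5 h and 53.5 h for this data
```

Driving both numbers down over successive red-team exercises is what shows the deployment, people included, is actually improving.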
My thanks to all of the panelists and the many attendees. It was very illuminating to hear from a variety of security experts, developers, and practitioners sharing their perspectives, and refreshing to hear that despite the ongoing avalanche of new technologies, at the end of the day security really comes down to people.