Are log files enough to keep you safe in the cloud?
Once an enterprise has deployed production applications and workloads in the cloud, the IT department will need to re-evaluate enterprise security and make adjustments to keep assets in the cloud safe from attacks. Not having control over the physical infrastructure means IT can’t plug in taps or traffic recorders to see what’s happening and use trusted security solutions to analyze packets for threats and suspicious activity. As a substitute, the cloud provider may offer log files (at a price) for security monitoring, but are they enough to protect data and applications in the cloud?
The use of log files
Log files are automatic, time-stamped documentation of specific events and are routine in any system. In cloud environments, log files generate information useful for operating and managing clouds, such as the number of instances deployed, API calls, configuration changes, and billing events.
As part of a security architecture, log files (or rather the underlying events being logged) can be used to trigger a security alert for further investigation. Security administrators use log files to view, search, and compile a history of specific events that have occurred. However, though event logs are useful for identifying when a condition was triggered, they do not provide enough detail to determine what (and who) was responsible. More granular data is required to identify a breach or stop data loss.
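To make the trigger mechanism concrete, here is a minimal sketch of how a log event can raise an alert. The JSON field names (`event`, `result`, `sourceIP`) and the three-failure threshold are hypothetical, loosely modeled on the login-audit events most cloud providers emit, not any specific provider's schema:

```python
import json

# Hypothetical log lines; the schema is illustrative, not a real provider format.
LOG_LINES = [
    '{"time": "2018-06-01T12:00:00Z", "event": "ConsoleLogin", "result": "Failure", "sourceIP": "203.0.113.9"}',
    '{"time": "2018-06-01T12:00:05Z", "event": "ConsoleLogin", "result": "Failure", "sourceIP": "203.0.113.9"}',
    '{"time": "2018-06-01T12:00:09Z", "event": "ConsoleLogin", "result": "Failure", "sourceIP": "203.0.113.9"}',
]

def alerts(lines, threshold=3):
    """Raise one alert per source IP that fails to log in `threshold` times."""
    failures = {}
    triggered = []
    for line in lines:
        entry = json.loads(line)
        if entry["event"] == "ConsoleLogin" and entry["result"] == "Failure":
            ip = entry["sourceIP"]
            failures[ip] = failures.get(ip, 0) + 1
            if failures[ip] == threshold:
                triggered.append(ip)
    return triggered

print(alerts(LOG_LINES))  # the repeated failures from 203.0.113.9 trigger an alert
```

Note what the alert does and does not tell you: it flags *when* the threshold was crossed and from which address, but nothing about what the attacker did next. That gap is exactly why the investigation has to move down to packet data.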
Security solutions need granular, packet data
With the increasing sophistication of cyberattacks, threat detection requires more detailed information than what is available in standard log files. Multi-stage attacks are often composed of individual interactions that do not trigger alerts, so bad guys can carry out their plan without attracting attention. Details such as where a request originated from, which users are involved, which applications are used, and what data was accessed are embedded inside data packets and are essential to security detection and analysis.
Packet data is the gold standard for security investigations and provides the granular details required to identify a breach or perform root cause analysis. Packet analysis is also used to determine whether an attack spread laterally through the network. The rise of advanced persistent threats (APTs) is one example of a highly damaging class of attack that cannot be detected by looking at log files alone. The high-performance security solutions used to detect APTs require detailed, real-time data to have a chance of detecting and stopping these complex threats.
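The details described above, such as where a request originated and where it was headed, sit in the packet headers themselves. As a minimal illustration (a sketch, not a production parser, and handling only a bare 20-byte IPv4 header with no options), the source and destination addresses and the transport protocol can be unpacked directly from the raw bytes:

```python
import struct

def parse_ipv4_header(packet: bytes):
    """Extract key fields from a raw, option-free IPv4 header (first 20 bytes)."""
    (version_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", packet[:20])
    dotted = lambda b: ".".join(str(x) for x in b)
    return {
        "version": version_ihl >> 4,
        "protocol": proto,  # 6 = TCP, 17 = UDP
        "src": dotted(src),
        "dst": dotted(dst),
    }

# A hand-built sample header: IPv4, TCP, 192.0.2.1 -> 198.51.100.7
header = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 0, 0, 64, 6, 0,
                     bytes([192, 0, 2, 1]), bytes([198, 51, 100, 7]))
print(parse_ipv4_header(header))
```

Real investigations go much deeper (payload inspection, session reassembly), but even this header-level view yields the who-talked-to-whom detail that a log entry summarizes away.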
Research by Enterprise Management Associates confirmed the importance of packet analysis for attack remediation and found that 81% of enterprises preferred packet data for security investigations and 35% were currently using deep packet inspection for analytics and reporting (1). Deep packet inspection is now built into all good firewalls and widely considered a key ingredient of effective cybersecurity (2).
Accessing packet data in your clouds
In the event a security attack or data breach is suspected in a cloud application, the cloud user may be frustrated to discover that packet access is not included in the Service Level Agreement they have with their cloud provider. Fortunately, there are alternate ways for cloud users to access their packet data.
A cloud visibility platform uses container-based sensors embedded in each cloud instance as it is created to generate copies of all cloud packet data. Because the sensors are deployed automatically, packet data from every cloud is available and there are no blind spots where attackers can hide. Providers of visibility solutions work with the different cloud platforms to test and pre-configure access, so data packets can begin flowing as soon as a cloud is spun up.
RightScale reports that the average enterprise in 2018 is operating 3.1 different types of clouds and experimenting with 1.8 more (3). That means the most cost-effective visibility solutions will be those capable of seeing inside any brand of public cloud or hypervisor-based private cloud. To find out more about cloud-agnostic visibility, check out Ixia CloudLens.
Managing all the data
Having access to packet data from every transaction in all your clouds presents a secondary problem, though—volume that can overwhelm your monitoring solutions and even lead them to drop packets. Another valuable feature of a cloud visibility platform is intelligent data filtering: a way to condense data volume by removing duplicate packets, stripping out headers, and filtering out packets your security tools don’t need to analyze.
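One of those filtering steps, deduplication, is simple to sketch. When the same frame is captured at two points (for example, on both sides of a virtual switch), the monitoring tool would otherwise see it twice. A minimal approach, assuming exact byte-for-byte duplicates, is to remember a bounded window of recent packet digests and drop any repeat:

```python
import hashlib
from collections import deque

def dedupe(packets, window=1024):
    """Drop exact duplicate packets by remembering a sliding window
    of recent packet digests (a sketch of visibility-layer dedup)."""
    recent = deque()      # FIFO of digests, bounded to `window` entries
    recent_set = set()
    forwarded = []
    for pkt in packets:
        digest = hashlib.sha256(pkt).digest()
        if digest in recent_set:
            continue      # duplicate: filtered before it reaches the tool
        forwarded.append(pkt)
        recent.append(digest)
        recent_set.add(digest)
        if len(recent) > window:
            recent_set.discard(recent.popleft())
    return forwarded

stream = [b"pkt-A", b"pkt-B", b"pkt-A", b"pkt-C", b"pkt-B"]
print(len(dedupe(stream)))  # 3 unique packets forwarded
```

Production platforms do this in hardware or in the data plane and add header stripping and flow-based filtering on top, but the effect is the same: the security tools receive less data without losing any distinct packets.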
Justifying the cost
Critics of packet capture point to the cost of storing packets, even if just to ensure a 30-day investigation window. Of course, there are costs associated with data capture, filtering, and storage, but the value of being able to quickly respond to an alert could well outweigh those costs. The choice is really about what level of investment is justified to avoid the possible cost of a data breach.
Ultimately, log files are diagnostic tools. They are not security solutions and they cannot initiate an effective response to a security threat or breach. With the rise of advanced persistent threats and multi-stage attacks, effective security requires detailed packet-level data from every interaction that happens in the cloud. The cost of capturing and filtering packet data will be offset by the increased ability of the security team to detect attacks and accelerate incident response.
(1) EMA Enterprise Survey and Report: "Packet-Based Security Forensics – A Next-Generation Approach to Attack Remediation," November 10, 2016, accessed online.
(2) Information Security Buzz: “DPI – The secret ingredient of robust security solution,” An interview with Nicolas Bouthors, Chief Technology Officer, Qosmos, March 16, 2017, accessed online.
(3) RightScale: 2018 State of the Cloud Report, accessed online.