Public Cloud: The ABCs of Network Visibility
Cloud usage is widespread in enterprises today, with analysts predicting that a majority of workloads will be running in some form of cloud by 2018. IDC recently estimated that the shift to public cloud is entering mainstream adoption, with public cloud spending poised to triple to $239B by 2021. That could even prove conservative, since the estimate assumes cloud penetration only moves from 5% to 17% of the total. All of this raises the priority for organizations to learn how to gain visibility into data traffic in the public cloud.
NEED FOR CLOUD VISIBILITY
By now we all know that moving workloads to the cloud can increase agility, but there are a few challenges to work through along the way. When applications and services are deployed on public infrastructure, organizations sacrifice direct access to the data they usually use to monitor performance, security, and compliance. Simply accepting the cloud as a “blind spot” is not an option because the cloud is becoming the dominant mode of operation.
The use cases for public cloud visibility are no different from those listed in my blog on private cloud visibility: security and compliance, performance analysis, and troubleshooting. However, the challenges and solutions are different.
CLOUD-ENABLED VS. CLOUD-NATIVE
Visibility into data in the cloud starts by embedding a sensor in each cloud instance to track the data moving in and out. But our need for visibility has qualifiers. We want access to every bit of data, yet we don't need to process every bit through every one of our powerful security and monitoring solutions. That would be too much of a good thing: expensive and time-consuming.
Therefore, we want to sort and filter the data according to each type of monitoring task. One way is to send all the data from all the instances to a centralized processing engine for filtering. When the engine is located in the public cloud, you can think of this as cloud-enabled filtering. Another option is to do the filtering right there inside the cloud instance (using container technology). From there, the filtered data can be sent directly to security, monitoring, and analytics tools, or to an intelligence engine with more advanced processing capabilities, such as deduplication, packet trimming, active SSL decryption, or data masking. No large centralized processing engine is involved. This is referred to as cloud-native filtering, and it is the approach used by Ixia's CloudLens Public visibility solution.
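To make the idea concrete, here is a minimal conceptual sketch of in-instance filtering. This is illustrative only, not CloudLens code: packets are simplified to dictionaries, and the rule format is an assumption for the example. The point is that each instance applies its own rules and only matching traffic ever leaves the instance.

```python
# Conceptual sketch: a lightweight sensor inside each cloud instance
# selects only the traffic each tool needs before anything leaves the
# instance. A real sensor would read raw packets from the network
# interface; here packets are simplified to dictionaries.

def make_filter(rules):
    """Build a predicate from simple match rules.

    A packet is forwarded if it matches ANY rule, where a rule matches
    when ALL of its fields equal the packet's fields.
    """
    def matches(packet):
        return any(
            all(packet.get(k) == v for k, v in rule.items())
            for rule in rules
        )
    return matches

# Example: a security tool only wants DNS and HTTPS traffic.
security_filter = make_filter([
    {"protocol": "udp", "dst_port": 53},
    {"protocol": "tcp", "dst_port": 443},
])

traffic = [
    {"protocol": "udp", "dst_port": 53,  "src": "10.0.0.5"},
    {"protocol": "tcp", "dst_port": 22,  "src": "10.0.0.5"},  # SSH: dropped
    {"protocol": "tcp", "dst_port": 443, "src": "10.0.0.7"},
]

# Only the DNS and HTTPS packets are forwarded out of the instance.
forwarded = [p for p in traffic if security_filter(p)]
```

Because the predicate runs inside the instance, the SSH packet in this example is never extracted at all, which is the core efficiency argument for the cloud-native approach.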
CHALLENGES AND STRATEGIES FOR PUBLIC CLOUD VISIBILITY
In my previous blog, I listed many challenges to achieving visibility in the private cloud. In the public cloud, there are other unique challenges to consider.
1. Easy scalability. One of the key advantages of public cloud is how fast and easy it is to increase and decrease capacity as needs change, without the need for additional infrastructure. To keep up, you need your cloud visibility and filtering solution to scale just as easily. A cloud-native solution has the edge because it is automatically deployed along with each new cloud instance. A centralized filtering engine needs to be upgraded as volume increases.
2. Reliability. To maintain security and performance you need your visibility solution to be resilient against unexpected failures. In cloud-native solutions filtered data is sent directly to the relevant tools. This means that if any one cloud instance fails, the tool is unaffected and continues to process data from other instances. A centralized filtering engine, on the other hand, represents a single point of failure in a public cloud visibility solution. Not good.
3. Reducing data extraction. One negative of public cloud is that providers charge you to extract data. You can reduce the volume of data extraction, and the resulting costs, by filtering and processing traffic before you deliver it to monitoring tools. Ixia's cloud-native solution provides Visibility-as-a-Service within the public cloud, so you can use advanced data processing such as deduplication, packet trimming, and application-layer filtering to further streamline the data you send to your security and monitoring solutions.
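Two of the processing steps mentioned above, deduplication and packet trimming, are easy to sketch. This is a generic illustration under simplifying assumptions (packets as byte strings, duplicates detected by payload hash, a fixed header length), not the vendor's implementation, but it shows why both steps shrink the volume of data you pay to extract.

```python
import hashlib

def deduplicate(packets):
    """Drop packets whose content hash was already seen, e.g. the same
    packet captured at two tap points. Fewer packets = less egress."""
    seen, unique = set(), []
    for p in packets:
        digest = hashlib.sha256(p).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(p)
    return unique

def trim(packets, header_bytes=64):
    """Keep only the first header_bytes of each packet, for tools that
    need headers but not payload. Smaller packets = less egress."""
    return [p[:header_bytes] for p in packets]

# Example: three captured packets, one an exact duplicate.
packets = [b"\x01" * 100, b"\x01" * 100, b"\x02" * 100]
unique = deduplicate(packets)            # duplicate removed
trimmed = trim(unique, header_bytes=64)  # payload stripped
```

In this toy example the 300 bytes captured shrink to 128 bytes extracted, and the same logic scales to real traffic volumes.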
4. Supporting tools on-premises and in the cloud. The best cloud visibility solution is one that supports all your current security and monitoring solutions—those in the data center, as well as those in the cloud. With Ixia's Visibility-as-a-Service solution, you can deploy a virtual packet broker in the data center to receive the filtered and processed cloud data and deliver it to any existing on-premises security or monitoring tools.
5. Easy policy management. Managing the filters used to select data for your security and monitoring tools can be complex and time-consuming. Plus, you don’t want mistakes to lead to inadvertent blind spots hiding security threats. Look carefully at the management interface provided by your cloud visibility solution and note how easy it is to apply filtering rules to new cloud instances. Good solutions will help you maximize consistency and eliminate configuration errors.
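One common way to get the consistency described above is group-based policy: tag each instance with a role, and have every instance in that role inherit the same filter rules. The sketch below is hypothetical (the policy structure and role names are invented for illustration), but it shows how a newly launched instance is covered automatically instead of being hand-configured.

```python
# Hypothetical sketch of group-based filter policy. Instances carry role
# tags; each role maps to one shared rule set, so new instances inherit
# rules automatically and per-instance hand edits (and the blind spots
# they cause) are avoided.

POLICIES = {
    "web": [{"protocol": "tcp", "dst_port": 443}],
    "db":  [{"protocol": "tcp", "dst_port": 5432}],
}

def rules_for(instance_tags, policies=POLICIES):
    """Collect the filter rules for every role tag on an instance."""
    rules = []
    for tag in instance_tags:
        rules.extend(policies.get(tag, []))
    return rules

# A freshly launched instance tagged "web" gets the web rules with no
# manual configuration step.
new_instance_rules = rules_for(["web"])
```

The design point is that policy lives in one place: changing `POLICIES["web"]` updates every web instance consistently, which is exactly the kind of consistency to look for in a visibility solution's management interface.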
6. Support for multiple providers. Obviously, it is easier and less expensive to manage and support a single cloud visibility solution that can be used with multiple cloud providers. Avoiding vendor lock-in is always a good strategy. Solutions that support Amazon Web Services, Microsoft Azure, and Google Cloud will provide maximum flexibility as your cloud strategy evolves.
Applications running in a public cloud should maintain the same level of performance, security, and compliance as if they were running on-premises. A solution that can tap, aggregate, and filter data directly in the cloud provides the most efficient outcome. And the additional ability to deliver data to both cloud-based and on-premises monitoring tools offers the most flexibility.