
Using public cloud? Why you need cloud visibility.

October 30, 2018 by Lora O'Haver
You need visibility into your clouds

Public cloud usage is widespread in enterprises and public sector environments worldwide. The most recent RightScale 2018 State of the Cloud report found that 92 percent of enterprises surveyed used public cloud in production, and 38 percent specified public cloud as their top deployment platform. Because the cloud has become a critical processing platform, you need to stay on top of what is happening in your clouds, just as you do with your on-premises infrastructure. ‘Cloud visibility’ simply refers to being able to access detailed information from your clouds.

Public cloud infrastructure comes with certain trade-offs

Clearly, enterprises value the agility, elasticity, and easy scalability that cloud offers. At the same time, once you move a workload to public cloud, you lose some of the control you once had. Cloud providers do not give customers direct access to their multi-tenant infrastructure, due to privacy concerns. You can no longer deploy a network tap to mirror traffic and feed it to your monitoring tools. You can no longer deploy an intrusion prevention solution inline to screen live traffic. Essentially, you lose access to, and visibility into, the packets flowing through your clouds, along with the information embedded in those packets. With public cloud, you are more dependent on the security services and management tools available from your chosen provider.

Security and performance monitoring require packet data

One of the add-on services cloud providers offer is a copy of the metadata and log files for your cloud workloads. Providers position this as information you can use to protect your enterprise from security threats and malware attacks. That is true insofar as log files can be used to trigger security alerts. The problem is that to investigate the alerts, identify the source of malicious activity, and repair the related weakness, you need to perform deeper investigation and correlation analysis. The security forensics and detection solutions used to perform this work require packet data; high-level log files are not sufficient. Without packet analysis, you increase the risk that a security attack will go unnoticed in your environment longer and cause more damage.

Similarly, in a complex environment with multiple moving parts, you need granular data to identify the cause of performance bottlenecks. Being able to isolate the data based on the application in use, the user’s endpoint device, the user’s operating system, the geolocation of the traffic, and other details embedded in traffic packets helps you isolate issues faster. Faster identification and resolution, in turn, help you deliver a better customer experience.

Having visibility means eliminating blind spots

Many IT organizations think more about the capabilities of their security solutions and monitoring tools than they do about how to give those solutions all of the data they need to perform effectively. The diverse hardware and software platforms used in a typical enterprise can become ‘opaque containers,’ preventing your monitoring solutions from seeing the complete environment. The result is blind spots that can limit your ability to strengthen security and improve performance.

Blind spots can lead to:

  • Unchecked security threats and data breaches
  • Compliance issues
  • Network downtime and service disruptions
  • Degraded application performance and customer dissatisfaction

Achieving total visibility means being able to see traffic as it moves through all your physical, virtual, or cloud platforms. This information helps you determine which of your resources are in use, who is using them, where bottlenecks might exist, whether your infrastructure is secure, and much more.

A visibility platform delivers just the right packet data

As with all visibility solutions, cloud visibility platforms must first ensure no data is missed. Since cloud instances can be rapidly spun up on demand, you need a visibility solution that also scales automatically, without manual intervention. A cloud-native architecture embeds sensors inside every cloud instance you deploy, delivering access to every packet flowing through that instance. This distributed design ensures that visibility scales right along with your cloud instances and that no instance is overlooked.

Once all data is accessible, your visibility platform should also help you zero in on the most relevant data, to avoid overwhelming your monitoring solutions. Packet grooming, such as header stripping and de-duplication, along with intelligent filtering based on packet characteristics, reduces the volume of packets your solutions need to process. With fewer packets to process, monitoring solutions are more efficient, cost less to operate, and are less likely to suffer congestion or failure.
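To make the grooming idea concrete, here is a minimal, hypothetical sketch of two of the techniques mentioned above: de-duplication (dropping repeated copies of a packet by fingerprinting its payload within a short time window) and filtering on a packet characteristic (here, destination port). The packet representation, window length, and port filter are illustrative assumptions, not the design of any particular product.

```python
import hashlib
import time

DEDUP_WINDOW = 2.0  # seconds a fingerprint stays "recently seen" (assumed value)
seen = {}           # payload fingerprint -> timestamp last seen

def groom(packets, keep_port=443):
    """Yield packets that match the filter and are unique within the window."""
    for pkt in packets:
        # Intelligent filtering on a packet characteristic (destination port)
        if pkt["dst_port"] != keep_port:
            continue
        # De-duplication: fingerprint the payload to detect repeated copies
        fp = hashlib.sha256(pkt["payload"]).hexdigest()
        now = time.monotonic()
        if fp in seen and now - seen[fp] < DEDUP_WINDOW:
            continue  # duplicate seen recently; drop it
        seen[fp] = now
        yield pkt

stream = [
    {"dst_port": 443, "payload": b"GET /a"},
    {"dst_port": 80,  "payload": b"GET /b"},   # filtered out by port
    {"dst_port": 443, "payload": b"GET /a"},   # duplicate, dropped
]
print(len(list(groom(stream))))  # 1 packet reaches the monitoring tool
```

Real visibility platforms do this at line rate in hardware or optimized software, but the effect is the same: only unique, relevant packets reach the tools.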

A visibility platform also makes it easy to automatically send the same traffic to more than one monitoring solution at the same time, further accelerating the identification and isolation of issues and security threats.
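The fan-out described above can be sketched in a few lines: one groomed stream is replicated to several tool consumers so that, for example, a security tool and a performance tool analyze the same packets in parallel. The consumers here are simple stand-ins; any real tool feed (a capture interface, a tunnel, a queue) would take their place.

```python
def fan_out(packets, tools):
    """Deliver every packet to each registered tool consumer."""
    for pkt in packets:
        for tool in tools:
            tool(pkt)

security_feed, perf_feed = [], []   # stand-ins for real monitoring tool feeds
fan_out(
    [b"pkt-1", b"pkt-2"],
    [security_feed.append, perf_feed.append],
)
print(len(security_feed), len(perf_feed))  # 2 2
```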

Summary

The transfer of workloads to the public cloud is accompanied by a certain loss of IT control. The challenge is to maintain critical insights—to quickly detect and isolate problems across all tiers and services. A visibility solution that accesses, filters, and delivers key cloud data to monitoring solutions is crucial. With insight restored, IT can strengthen security and manage performance in diverse and complex processing environments.

Learn more about achieving cloud visibility in the blog Public Cloud: The ABCs of Network Visibility, or at Ixia Cloud Solutions.