Putting Performance and Resilience to the Test

June 2, 2016 by Jeff Harris

Testing is a crucial element in the rollout of new applications or services. Ignore it at your peril: the first sign of poor availability or incomplete security will be a barrage of customer complaints, or worse, a damaging cyberattack.

However, knowing how to test new applications for performance and resilience can be difficult. Not all testing tools and mechanisms are created equal, and not all are appropriate for the virtual environments and hybrid clouds that now make up the vast majority of enterprise architectures. Above all, not all reflect the real-world traffic volumes and fluctuations, or the malicious hacking techniques, that modern networks must contend with.

Growth and volume

The key point to understand is that traffic volumes and cyberattacks are both reaching significantly larger scales than we have ever seen before. Predictions vary as to the eventual size of the Internet of Things (IoT), but most analysts agree that we are talking about tens of billions of connected devices worldwide within just a few years. Clearly, this means greater volumes of online traffic globally (HD video and social media in particular are extremely bandwidth-hungry), but it also means more complex connectivity: many of the applications running on connected devices will have to speak to each other in ways that do not exist today.

Meanwhile, vast botnets are powering truly enormous distributed denial of service (DDoS) attacks, and attacks exceeding 100 Gbps in bandwidth are growing steadily more frequent.

All this adds up to a huge burden on enterprise networks. To meet these demands, massive data center upgrades, plus enhancements to core networks with 100 GbE technology, are in order. But it is difficult for businesses to guarantee that the expected benefits will be delivered under real-world conditions.

This is why it is so important for organizations undertaking a data center or network upgrade to test the entire delivery path of each application, end to end, under realistic, real-world conditions.

What do ‘real-world conditions’ look like?

How, then, do you emulate those real-world conditions? Here are some essential considerations:

  • Use a mixture of testing applications and traffic workloads (see the first sketch after this list)
  • Make sure the test environment reflects your organization’s real load and network traffic, including applications and protocols such as Facebook, Skype, Amazon EC2 / S3, SQL, SAP, Oracle, HTTP, or IPsec
  • Consider possible illegitimate traffic flows, such as DDoS attacks
  • Bear in mind that storage behaviors such as cache utilization, deduplication, and compression can only be exercised meaningfully with realistic data (see the second sketch after this list)
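
To make the first point concrete, here is a minimal sketch of a weighted traffic mix built with Locust, an open-source load generator. Every endpoint, payload, weight, and host below is a hypothetical placeholder for your own application mix, not a prescription:

    # mixed_traffic.py -- a minimal weighted-traffic-mix sketch using Locust.
    # All URLs, payloads, and task weights are hypothetical examples.
    from locust import HttpUser, task, between

    class MixedTrafficUser(HttpUser):
        # Simulated users pause 1-5 seconds between actions, like real visitors.
        wait_time = between(1, 5)

        @task(6)  # weight 6: most traffic is plain page browsing
        def browse(self):
            self.client.get("/")

        @task(3)  # weight 3: transactional, API-style traffic
        def search(self):
            self.client.post("/api/search", json={"q": "order status"})

        @task(1)  # weight 1: occasional large transfers that stress bandwidth
        def download_report(self):
            self.client.get("/reports/latest.pdf")

Run it headless against a staging host (for example: locust -f mixed_traffic.py --headless -u 500 -r 50 --host https://staging.example.com) and ramp the user count to approach the traffic volumes discussed above.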

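On the last point, nothing truly substitutes for real traffic, but when production captures are unavailable a common compromise is synthetic data with a controlled duplication ratio, so a deduplicating or compressing storage target has realistic patterns to detect. Here is a minimal sketch; the file path and block counts are arbitrary examples:

    # dedup_sample.py -- generate a file with a controlled block-level
    # duplication ratio so deduplication savings can be measured predictably.
    import os
    import random

    def write_dedup_sample(path, total_blocks=4096, block_size=4096,
                           unique_fraction=0.5):
        # Pre-generate a pool of unique random blocks; drawing from the pool
        # with replacement yields a file in which roughly (1 - unique_fraction)
        # of the blocks are duplicates the storage layer can detect.
        pool = [os.urandom(block_size)
                for _ in range(max(1, int(total_blocks * unique_fraction)))]
        with open(path, "wb") as f:
            for _ in range(total_blocks):
                f.write(random.choice(pool))

    # Example: a 16 MiB test file in which roughly 70% of blocks repeat.
    write_dedup_sample("/tmp/dedup_sample.bin", unique_fraction=0.3)
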
DIY or TaaS?

Another choice is whether to handle all your testing yourself or to bring in testing as a service (TaaS). Few organizations have an in-house team of highly qualified, dedicated test engineers, and your network and security architects may not be best placed to design testing programs for their own applications and systems.

Ixia’s TaaS offers a fast, reliable and accurate option, with a predictable, easily expensed cost and proven, repeatable test plans and methodologies. You can also read more about this topic in this article on ContinuityCentral.com.

Testing your applications and systems for performance and resilience is an essential part of both your security posture and your business continuity. And testing them under real-world conditions is the only way to ensure that your results are accurate, relevant and reliable.