Blog

The Evolution to 400GE

November 3, 2014 by Ixia Blog Team

Author: Jim Smith

To ensure an always-on user experience, enterprises, governments, mobile operators, and cloud providers are rapidly adopting new technologies across their increasingly complex data centers. They must keep pace with the explosion of bandwidth and service demands from mobile, cloud, social, and mission-critical business operations.

More functions in smaller space and power footprints, along with preparing for unknown future scalability needs are key requirements for ongoing success. To make all this possible, network equipment vendors are racing to develop new Layer 4-7 technologies to keep pace with the ever-shifting landscape. New content-aware products promise to accelerate, optimize, and secure the modern data center, but they also add new complexities.

With this added functionality and flexibility comes an increased need for raw throughput – more applications and services mean more data. Much more data.

Early on, data centers based on 1GE connections advanced networks admirably – demonstrating steady growth for nearly a decade. More recently, however, that growth has slowed to a crawl, while 10/40/100GE-based growth has seen a huge upswing, driven by the ever-increasing demand for low-cost throughput. Eventually, data center speeds will increase past the point of 100GE connections – requiring a new and, most likely, exponential increase in throughput.

Besides the raw data produced by applications and services, data centers require a constant flow of maintenance and monitoring data – an overhead that also keeps growing as systems become more complex and interwoven. Data center operators need to know what is producing which traffic, where, and for whom, at all points in the network and at all times.

The History of Speed

To ensure mission-critical service objectives can be met, operators of data centers and the manufacturers of data center equipment can’t skip or minimize the steps to validate and test these complexities under real-world end user environments at extreme scale. The price for being wrong, even once, is steep.

Reliance on networking permeates every aspect of our world, and bandwidth requirements from local area, data center, access, and metropolitan area networks are expanding at double-digit rates. As technologies such as cloud computing and streaming video necessitate larger and faster data centers, equipment manufacturers must evolve beyond 100Gbps capabilities to keep pace.

Since ratification of the 100Gbps Ethernet standard, global network traffic and bandwidth demand continue to experience massive growth. When the IEEE 802.3 Working Group last addressed the need for a new Ethernet rate, two new rates were created – 40GE and 100GE. 40GE was targeted at server connectivity, while 100GE was targeted at network aggregation applications.

While some of the market is still transitioning from 1GE to 10GE, 40GE is being deployed in data center networks, and 100GE is being deployed in network cores, service provider client connections, internet exchanges, etc.

A New Standard

According to the IEEE 802.3 Ethernet Bandwidth Assessment Ad hoc, industry bandwidth requirements are continuing to grow at an exponential pace. At such a rate, in fact, networks will need to support terabit-per-second capacities by 2015 – and 10-terabit-per-second capacities by 2020.

Recognizing this industry need, the IEEE 802.3 Working Group formed the IEEE 802.3 400 Gbps Ethernet (400GE) Study Group in May 2013. In May 2014 it received the “Task Force” designation and met for the first time at the IEEE 802.3 May 2014 Interim Session. There, it began work on defining the 400GE standard for enabling high-bandwidth solutions for web-scale data centers, video distribution infrastructures, service providers, and new application areas. The new standard is expected to reach data-transfer speeds of 400Gbps – fast enough for 50,000 simultaneous High Definition Netflix video streams.
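
As a quick sanity check on that figure, here is a minimal sketch in Python. The ~8 Mbps per HD stream is our own assumption (a common planning number), not something the standard specifies:

```python
# Back-of-the-envelope: how many HD streams fit in a 400 Gb/s pipe?
# Assumes ~8 Mb/s per HD stream (an illustrative assumption).
link_rate_bps = 400e9      # 400GE MAC data rate
stream_rate_bps = 8e6      # assumed HD stream bitrate

print(f"{link_rate_bps / stream_rate_bps:,.0f} simultaneous streams")  # -> 50,000
```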

Early drivers for 400GE include:

  • Bandwidth demand expanding at double-digit rates
  • Cloud providers continuing to require ever-larger data centers
  • Streaming video content driving bandwidth demands and the need for faster processors

The project is expected to define and support the following:

  • Support a MAC data rate of 400 Gbps
  • Support a Bit Error Rate (BER) better than or equal to 10⁻¹³ at the MAC/PLS service interface (see the sketch after this list for a sense of scale)
  • Support full-duplex operation only
  • Preserve the Ethernet frame format using the Ethernet MAC
  • Preserve the minimum and maximum frame sizes of the current Ethernet standard
  • Provide appropriate support for OTN
  • Support optional 400 Gb/s Attachment Unit Interfaces for chip-to-chip and chip-to-module applications
  • Provide physical layer specifications that support link distances of:
      • At least 100m over MMF
      • At least 500m over SMF
      • At least 2km over SMF
      • At least 10km over SMF
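
To give that BER target a sense of scale, here is a minimal sketch of the average time between bit errors at full line rate, assuming independent errors (a simplification):

```python
# Mean time between bit errors at 400 Gb/s with BER = 1e-13.
# Assumes errors are independent and uniformly distributed (a simplification).
ber = 1e-13               # worst-case target at the MAC/PLS service interface
line_rate_bps = 400e9     # 400 Gb/s

bits_per_error = 1 / ber                                           # 1e13 bits on average
print(f"~{bits_per_error / line_rate_bps:.0f} s between errors")   # -> ~25 s
```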

At this stage, the standard is just starting to be worked out. It’ll be a couple of years before we see wholesale approval and acceptance – but this doesn’t mean that vendor development is on hold.

PCS Layer Functions

One of the advancements of 40/100GE technology was the development of physical coding sublayer (PCS) lanes. These lanes, and how they are mapped and multiplexed across links, provide the mechanism by which the higher-speed technology works.

The requirements for the PCS layer include:

  • Provide frame delineation
  • Transport control signals
  • Provide the clock transitions needed by SerDes-style electrical and optical interfaces
  • Bond multiple lanes together through a striping or fragmentation methodology

Since the data must travel on multiple optical lanes as well as on multiple electrical lanes to the optical module, the striping mechanism should support:

  • Low frame overhead that is independent of frame size
  • Formatting that enables receiver-only lane deskew
  • Evolving media and media interface widths
  • Simple optical module implementations

The advancing electrical and optical technologies require the ability to handle differing and changing numbers of electrical interface lanes versus optical lanes. To handle the general case, the PCS baseline proposal calls for data to be distributed on a per-66-bit-block basis to a number of PCS lanes v, where the number of PCS lanes equals the least common multiple of the number of electrical lanes n and optical lanes m. Using the virtual lane concept, the optical module can be a very simple multiplexer, which merely bit-multiplexes the data from n electrical lanes down to m media lanes. The receiver’s PCS block can simply demultiplex the data back into the PCS lanes and then realign the skewed data.
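
A minimal sketch of that lane-count rule in Python, using the existing 100GE numbers as a reference point (100GE pairs a 10-lane electrical interface, CAUI-10, with 4-lane optics such as 100GBASE-LR4, giving 20 PCS lanes):

```python
from math import gcd

def pcs_lane_count(electrical_lanes: int, optical_lanes: int) -> int:
    """PCS lane count: the least common multiple of the electrical and
    optical lane counts, so blocks map evenly onto both interfaces."""
    return electrical_lanes * optical_lanes // gcd(electrical_lanes, optical_lanes)

print(pcs_lane_count(10, 4))   # 100GE: 10 electrical x 4 optical -> 20 PCS lanes
```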

A virtual lane is a continuous stream of 64/66b blocks. PCS lanes are created through a simple round-robin function that distributes 66-bit blocks, in order, to each virtual lane. In the case of an interface running with 20 PCS lanes, a single virtual lane would contain every 20th 66-bit block from the aggregate signal, as sketched below.
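
Here is a minimal illustration of that round-robin distribution (the integers stand in for 66-bit blocks):

```python
def distribute(blocks, num_lanes):
    """Round-robin distribution: block i goes to PCS lane i % num_lanes."""
    lanes = [[] for _ in range(num_lanes)]
    for i, block in enumerate(blocks):
        lanes[i % num_lanes].append(block)
    return lanes

lanes = distribute(range(100), 20)
print(lanes[0])   # -> [0, 20, 40, 60, 80]: every 20th block of the aggregate
```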

In order to allow the receiver to identify and deskew the individual PCS lanes, a unique alignment block is added to each virtual lane on a periodic basis. The alignment block is a special 66-bit control signal block that is unique to each virtual lane and cannot be duplicated in the data. The current proposal under consideration calls for an alignment block once every 16,384 blocks on each virtual lane.
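
And a sketch of the periodic marker insertion that makes receiver-side lane identification and deskew possible (the "AM<n>" strings are stand-ins for the real lane-unique 66-bit control blocks, whose exact encoding the proposal defines):

```python
ALIGNMENT_PERIOD = 16_384   # one alignment block every 16,384 blocks per lane

def insert_alignment_markers(lane_blocks, lane_id):
    """Insert a lane-unique alignment marker every ALIGNMENT_PERIOD blocks.
    The receiver searches for these markers to identify each lane and to
    measure (and remove) the relative skew between lanes."""
    out = []
    for i, block in enumerate(lane_blocks):
        if i % ALIGNMENT_PERIOD == 0:
            out.append(f"AM{lane_id}")   # stand-in for the 66-bit control block
        out.append(block)
    return out
```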

The number of PCS lanes required depends on the electrical and optical lane combinations that need to be supported. The number of PCS lanes is the least common multiple of the number of electrical lanes and the number of optical lanes, regardless of whether the lanes are based on numbers of wavelengths, numbers of fibers, or other considerations.

This constraint ensures that the total number of PCS lanes can be mapped evenly over both the number of electrical lanes and the number of optical lanes; therefore, the data from any particular virtual lane will always reside on the same electrical and media lane across the link. This guarantees that no skew can be introduced between the bits within a virtual lane, which would be impossible to remove at the receiver.

When the 400GE specification is finalized, vendors will need to verify that their PCS lane implementation works correctly, and performs according to the standard.

Conclusion

Vendors need to begin thinking about creating this next generation of products, and Ixia is ready now with its 400GE JumpStart Test System.

With the new 400GE JumpStart Test System, Ixia remains on the cutting edge of developing innovative networking technologies that allow our customers to meet demand and provide next-generation solutions to network providers.

Building on the company’s market leadership with the first 40 and 100GE testing solutions, Ixia has successfully completed the industry’s first 400GE interoperability test with Ciena, a global provider of open, programmable networking platforms and software.

The Ixia 400GE JumpStart Test System is a developer tool kit that helps network equipment manufacturers shorten development and test time – accelerating pre-standard 400GE networking hardware. Now customers can create, test, and verify the interoperability of their next Higher Speed Ethernet technologies while maximizing their 400GE test investment.

Additional Resources:

400GE Solution

Press Release: Ixia Introduces World's First 400GE Test Platform