Data Communication

Comparing Ethernet and RapidIO



The RapidIO Gen2 specifications introduced XON/XOFF, rate-based, and credit-based flow control mechanisms for flows based on Data Streaming packets. These mechanisms were designed for systems with multiple senders transmitting to a single destination. The protocol allows senders to communicate how much information they have available, and allows the destination to manage and schedule which senders are active and at what rate. While the flow control mechanisms can be supported in software, the packet format is simple enough that hardware can support them as well. When used correctly, these flow control mechanisms can avoid long-term congestion and allow applications to optimize their performance.
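The XON/XOFF idea can be illustrated with a minimal sketch. This is not the RapidIO Data Streaming packet format; the class, thresholds, and method names below are assumptions chosen for illustration: a receiver signals XOFF upstream when its buffer passes a high-water mark and XON once it drains below a low-water mark.

```python
from collections import deque

class XonXoffReceiver:
    """Illustrative XON/XOFF receiver (hypothetical names, not the
    RapidIO packet format): asserts XOFF at a high-water mark and
    XON again at a low-water mark."""

    def __init__(self, capacity=8, xoff_level=6, xon_level=2):
        self.buffer = deque()
        self.capacity = capacity
        self.xoff_level = xoff_level    # buffer depth that triggers XOFF
        self.xon_level = xon_level      # buffer depth that triggers XON
        self.sender_enabled = True      # last flow-control state sent upstream

    def accept(self, packet):
        """Receive one packet; update the XOFF state if the buffer fills."""
        if len(self.buffer) >= self.capacity:
            return False                # would overflow; flow control failed
        self.buffer.append(packet)
        if self.sender_enabled and len(self.buffer) >= self.xoff_level:
            self.sender_enabled = False # emit XOFF upstream
        return True

    def drain(self):
        """Consume one packet; re-enable the sender once drained enough."""
        if self.buffer:
            self.buffer.popleft()
        if not self.sender_enabled and len(self.buffer) <= self.xon_level:
            self.sender_enabled = True  # emit XON upstream
```

The two thresholds are deliberately separated (hysteresis) so the sender is not toggled on and off for every single packet, which mirrors why hardware implementations of such schemes stay simple.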

Edging out congestion

One approach to the problems of network congestion and flow control is to communicate the location of the network congestion. This action allows packet sources to schedule transmission of packets to uncongested areas of the network. RapidIO supports mechanisms for communicating the location of congestion in the network. RapidIO Gen2 defines a low-latency, control symbol-based mechanism, named Virtual Output Queue (VoQ) Backpressure. VoQ Backpressure defines a hierarchical method, implementable in hardware, of pushing congestion back to the edge of the network. This has two benefits:

  • It slows flows that are contributing to congestion points, freeing system resources.
  • It encourages transmission of other flows that are not contributing to congestion, increasing the throughput and balancing system load.
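Both benefits come from the same edge-side decision: hold only the queues feeding a congested point and keep serving everyone else. The sketch below is an assumption-laden illustration of that idea, not the VoQ Backpressure control symbol encoding; the class and method names are invented for this example. One virtual output queue is kept per destination, and a backpressure notification parks that destination's queue while uncongested flows continue.

```python
from collections import deque

class VoqEdgeScheduler:
    """Illustrative edge scheduler (hypothetical names): one virtual
    output queue per destination, round-robin service, and per-
    destination backpressure that holds only the congested flows."""

    def __init__(self, destinations):
        self.voqs = {d: deque() for d in destinations}
        self.congested = set()            # destinations under backpressure
        self._order = list(destinations)  # round-robin order
        self._next = 0

    def enqueue(self, dest, packet):
        self.voqs[dest].append(packet)

    def backpressure(self, dest, asserted):
        """Record a backpressure notification for one destination."""
        if asserted:
            self.congested.add(dest)
        else:
            self.congested.discard(dest)

    def transmit_one(self):
        """Serve the next non-congested, non-empty VoQ; None if all held."""
        for i in range(len(self._order)):
            d = self._order[(self._next + i) % len(self._order)]
            if d not in self.congested and self.voqs[d]:
                self._next = (self._next + i + 1) % len(self._order)
                return d, self.voqs[d].popleft()
        return None
```

Because the hold decision is per destination queue rather than per link, traffic to uncongested destinations never stalls behind a congested flow, which is exactly the head-of-line-blocking problem that virtual output queueing exists to avoid.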

In 2010, an IEEE task group began defining a similar mechanism, known as IEEE 802.1Qau Congestion Notification for Local and Metropolitan Area Networks. It is unclear if this mechanism will be simple enough to be supported by devices that are part of the DCE ecosystem.

While researching this article, I came across a piece by Stuart Cheshire, originally published in 1996, with the provocative title “It’s the Latency, Stupid”. While it addresses latency in the consumer Internet world, his key points ring true now:

  • Making more bandwidth is easy.
  • Making limited bandwidth go further is easy.
  • Once you have bad latency, you’re stuck with it.

Mass-market Ethernet, the Ethernet of the Internet, focuses on delivering more bandwidth for less money.

Data Center Ethernet has features that will increase throughput and improve the behavior of Internet Ethernet within the data center. However, these features have negative impacts on Ethernet chip design, and still require software and/or offload engines for transmission error recovery. The market for Data Center Ethernet is much smaller than that of Internet Ethernet, so the economies of scale that result in inexpensive Internet Ethernet technology do not apply to Data Center Ethernet.

The applications that have adopted RapidIO need low and predictable latency for chip-to-chip, board-to-board, backplane and box-to-box communication. The leading companies in these applications (for example, Ericsson, EMC, and Mercury) ensure that RapidIO will continue to have low latency combined with high throughput to ensure that the performance characteristics of these applications are met. RapidIO will continue to maintain bandwidth parity with other interconnects, while remaining the lowest latency, most efficient fabric available for chip-to-chip, board-to-board, backplane and up to 100m cable/optical connections.

* Barry Wood … is an Expert Applications Engineer with IDT, Canada.

(ID:28270470)