Comparing Ethernet and RapidIO

Two Ethernet markets

In a major change of direction, the Ethernet community has effectively split the Ethernet market into Internet and Data Center halves. In the planet-spanning network that is the Internet, network-level flow control, rather than link-level flow control, achieves higher throughput: the long latencies and the dynamic topology of the Internet make network-level flow control the most effective strategy. The vast majority of Ethernet technology is Internet Ethernet.

Internet Ethernet technology depends on processing power in the form of switch and router platforms. Within these platforms, Network Processor Units (NPUs) shape traffic to meet Service Level Agreements (SLAs) and implement network protocols such as MPLS. Figure 1 shows a typical Internet Ethernet router/switch box.
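
Traffic shaping of this kind is commonly built on token buckets: tokens accumulate at the committed SLA rate, and a packet is forwarded only when enough tokens are available to cover it. The C fragment below is a minimal sketch of that idea; the structure and function names are illustrative assumptions and do not describe any particular NPU's internals.

/* Minimal token-bucket shaper sketch (illustrative, not a real NPU API).
 * Tokens accumulate at the committed rate; a packet is sent only when
 * enough tokens are available to cover its length in bytes. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct token_bucket {
    double tokens;       /* current token count, in bytes             */
    double rate_bytes_s; /* committed rate from the SLA, bytes/second */
    double burst_bytes;  /* maximum burst size, in bytes              */
    double last_time;    /* time of the last update, in seconds       */
};

/* Refill tokens for the elapsed time, capped at the burst size. */
static void tb_update(struct token_bucket *tb, double now)
{
    tb->tokens += (now - tb->last_time) * tb->rate_bytes_s;
    if (tb->tokens > tb->burst_bytes)
        tb->tokens = tb->burst_bytes;
    tb->last_time = now;
}

/* Return true if the packet conforms to the SLA and may be sent now;
 * otherwise the caller queues (shapes) or drops (polices) the packet. */
static bool tb_allow(struct token_bucket *tb, double now, uint32_t pkt_bytes)
{
    tb_update(tb, now);
    if (tb->tokens >= pkt_bytes) {
        tb->tokens -= pkt_bytes;
        return true;
    }
    return false;
}

int main(void)
{
    /* Hypothetical SLA: 1 Gbit/s (125,000,000 bytes/s), 3 kB burst. */
    struct token_bucket tb = { 3000.0, 125000000.0, 3000.0, 0.0 };
    printf("1500-byte frame at t=0s: %s\n",
           tb_allow(&tb, 0.0, 1500) ? "send" : "hold");
    printf("another at t=0s:        %s\n",
           tb_allow(&tb, 0.0, 1500) ? "send" : "hold");
    printf("a third at t=0s:        %s\n",
           tb_allow(&tb, 0.0, 1500) ? "send" : "hold");
    return 0;
}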

Packet transfer without loss

Note that within an Internet router or switch platform there are a number of “short range” connections between chips. The low latency of these short-range connections can be exploited by simple flow control mechanisms to minimize packet loss. Notably, the leading vendors of Internet Ethernet switches and routers do not use Ethernet as the exclusive interconnect technology within their own platforms. Instead, they rely on interconnect standards such as the System Packet Interface (SPI-4.2), Interlaken, and RapidIO, which incorporate low-latency flow control mechanisms to ensure efficient packet transfer without loss.
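
Such low-latency flow control is typically credit-based: the transmitter sends only when the receiver has advertised free buffer space, so the link never has to drop a packet for lack of buffers. The C sketch below models this idea under simplified assumptions; the names and the single credit counter are hypothetical and omit the retry and error-recovery machinery of real SPI, Interlaken, or RapidIO links.

/* Simplified credit-based link-level flow control (illustrative sketch).
 * The transmitter holds a packet whenever no receive buffer is known to
 * be free, so the link itself never drops a packet. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct tx_link {
    uint32_t credits;   /* free receive buffers advertised by the peer */
};

/* Called when the receiver frees a buffer and returns a credit. */
static void credit_return(struct tx_link *link)
{
    link->credits++;
}

/* Send only if a receive buffer is guaranteed to exist; otherwise apply
 * back-pressure and keep the packet queued at the transmitter. */
static bool try_send(struct tx_link *link, const char *pkt)
{
    if (link->credits == 0)
        return false;          /* wait for a credit from the receiver */
    link->credits--;           /* one receive buffer is now reserved  */
    printf("transmitting %s\n", pkt);
    return true;
}

int main(void)
{
    struct tx_link link = { .credits = 2 };
    try_send(&link, "packet A");        /* sent, one credit left       */
    try_send(&link, "packet B");        /* sent, no credits left       */
    if (!try_send(&link, "packet C"))   /* held: receiver buffers full */
        printf("packet C held, waiting for credit\n");
    credit_return(&link);               /* receiver drained a buffer   */
    try_send(&link, "packet C");        /* now it can be sent          */
    return 0;
}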

Internet Ethernet switch chips can support the traffic management and protocol functions found in switch and router platforms. However, the lower-latency Data Center Ethernet (DCE) switches do not. Devices connected directly to the DCE switches are therefore responsible for these functions. This represents a significant shift from the simple transmit and receive architectures that characterize most consumer Internet Ethernet technology.

The advent of Lossless Ethernet signals the fracturing of the Ethernet market into Internet Ethernet and Data Center Ethernet segments. The implication is that Data Center Ethernet devices do not enjoy the same economies of scale that Internet Ethernet devices do.

Throughput, latency, and flow control

Within the data center network, network latency for control functions, such as committing file changes and confirming data transmission, can represent the real limit to system capacity. Similarly, control system latency for load balancing functions in servers can determine the utility and efficiency of the overall system.
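
A back-of-the-envelope calculation shows why: if each commit or acknowledgement must wait for one network round trip before the next one can be issued, the round-trip time alone caps the rate of such serialized operations at 1/RTT. The round-trip figures in the short program below are assumptions chosen only to illustrate the scale of the effect.

/* Illustrative only: how round-trip time (RTT) alone limits the rate of
 * serialized control operations such as synchronous file commits. */
#include <stdio.h>

int main(void)
{
    double rtt_us[] = { 2.0, 10.0, 100.0 };   /* assumed round-trip times */
    int n = sizeof rtt_us / sizeof rtt_us[0];

    for (int i = 0; i < n; i++) {
        double ops_per_sec = 1e6 / rtt_us[i]; /* one commit per RTT       */
        printf("RTT %6.1f us -> at most %8.0f commits/s per stream\n",
               rtt_us[i], ops_per_sec);
    }
    return 0;
}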

In this context, flow control mechanisms should enable control packets to make forward progress at the earliest opportunity. Under contention, buffers should be fully utilized to maximize scheduling options, increasing throughput and reducing latency. Ideally, buffers are managed so that high-priority flows can always make forward progress; one simple policy that achieves this is sketched below. A comparison of the RapidIO and DCE flow control strategies follows.
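
The sketch assumes a fixed per-port buffer of 64 packets (an arbitrary figure) and lets bulk traffic occupy only part of it, so that reserved headroom is always available to admit control packets. It is a simplified illustration of the principle, not a description of any specific switch.

/* Illustrative buffer-admission policy: bulk traffic may fill the buffer
 * only up to a threshold, leaving headroom so control packets can always
 * be accepted and scheduled ahead of bulk traffic. */
#include <stdbool.h>
#include <stdint.h>

#define BUF_SLOTS  64   /* total packet buffers per port (assumed)   */
#define BULK_LIMIT 56   /* bulk traffic may occupy at most this many */

enum prio { PRIO_CONTROL, PRIO_BULK };

struct port_buf {
    uint32_t used;      /* buffers currently occupied                */
};

/* Admit a packet into the output buffer according to its priority. */
static bool admit(struct port_buf *b, enum prio p)
{
    uint32_t limit = (p == PRIO_CONTROL) ? BUF_SLOTS : BULK_LIMIT;

    if (b->used >= limit)
        return false;   /* bulk hits its limit before control does   */
    b->used++;
    return true;
}

int main(void)
{
    struct port_buf b = { .used = BULK_LIMIT };     /* bulk limit reached */
    return admit(&b, PRIO_BULK) == false &&         /* bulk now refused   */
           admit(&b, PRIO_CONTROL) == true ? 0 : 1; /* control still fits */
}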

RapidIO implements a comprehensive flow control strategy to resolve short-, medium-, and long-term system congestion. RapidIO flow control mechanisms allow a transmitter to pack the receiver buffers completely full. This is important for several reasons:

The more packets a switch chip has in its buffers, the more options it has to increase throughput by routing packets to output ports that are not congested. This should lead to the maximum possible throughput with minimal latency.
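
The sketch below makes this concrete: a scheduler scans its buffered packets for one whose output port is currently uncongested, and the deeper the buffer, the more candidates it has to choose from. The port count and data structures are illustrative assumptions, not RapidIO switch internals.

/* Illustrative only: with more packets buffered, the scheduler is more
 * likely to find one destined for an output port that is not congested,
 * so links stay busy and throughput stays high. */
#include <stdbool.h>
#include <stdint.h>

#define NPORTS 4

struct pkt {
    uint8_t out_port;   /* output port this packet is routed to */
};

/* Return the index of the first buffered packet whose output port can
 * accept traffic right now, or -1 if every candidate is blocked. */
static int pick_packet(const struct pkt *buf, int count,
                       const bool port_congested[NPORTS])
{
    for (int i = 0; i < count; i++)
        if (!port_congested[buf[i].out_port])
            return i;
    return -1;          /* deeper buffers make this case rarer */
}

int main(void)
{
    bool congested[NPORTS] = { true, true, false, true };
    struct pkt buf[] = { { 0 }, { 1 }, { 2 } };   /* three queued packets */
    return pick_packet(buf, 3, congested) == 2 ? 0 : 1;
}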
