
White Paper

PCI Express Ethernet Networking

Smoothly Migrate Networking to the Next-generation, Industry-standard Serial I/O Architecture


Contents

Introduction
Evolution of the Platform
Performance
Reducing Latency
Scalability
Flexibility in Design
Conclusion


Introduction

PCI Express* architecture is a breakthrough for server, desktop, and notebook platforms, and it has significant positive impacts for networking. It offers high bandwidth with lower pin counts, lower latency, lower power consumption, and enhanced RAS (reliability, availability, serviceability) capabilities, and it allows for additional scaling in performance.

As processor, memory, and video speeds have increased, the shared PCI* bus has lagged behind. The PCI bus limits the network bandwidth of most desktop systems to 1 Gbps, while PCI Express architecture offers true bi-directional performance, beginning at 2 Gbps each way. For servers, PCI Express offers true multi-Gigabit Ethernet scalability.

There have been extensions to the PCI specification over the years to accommodate faster components. The PCI-X specification was developed and standardized in the late 1990s as a higher-frequency, wider-bus extension of the PCI bus, targeted mostly at server and high-end workstation applications. While it did extend the useful life of the PCI bus, the PCI-X specification did not solve some of the basic problems associated with the parallel nature of the shared-architecture PCI bus. Furthermore, the PCI-X specification does not provide the headroom in bus bandwidth needed for emerging technologies, including 10GbE and beyond. PCI-X also cannot easily be deployed across a wide range of platforms: notebook, desktop, server, and telecommunications.

This paper briefly reviews the evolving need for a new bus architecture, the PCI Express architecture, and its benefits for Ethernet networking.

Evolution of the Platform

Prior to the PCI specification, the ISA bus dominated PC architecture (Figure 1). In the early 1990s, PC manufacturers adopted the PCI specification, which offered users adequate bandwidth for networked applications. At the time, however, workers did not use their PCs to collaborate or access the Internet, so they were content with Ethernet networking throughput of 10 Mbps.

Fast Ethernet was introduced shortly after the PCI bus, and it offered ten times the performance of the existing Ethernet standard. Older ISA-based PCs, however, could not support Fast Ethernet. In addition, users didn't believe they would ever need that much bandwidth, so the faster network option didn't gain traction in the market.

As the ISA platforms were replaced with PCI bus-based PCs, Intel introduced the first 10/100 Mbps auto-negotiating PCI bus network adapter. The adoption of the PCI bus, combined with the ability to turn on the faster 100 Mbps when customers were ready, enabled the transition to Fast Ethernet. Today, over 90% of all desktop computers run at 100 Mbps or higher.

By the mid-1990s, e-mail, electronic calendaring and scheduling, the Internet, and Web-based tools such as HTML and Java were being deployed. People no longer use their computers only for simple word processing and spreadsheets; the need to communicate rapidly with peers and customers has become a requirement. Users access data in far-off databases, and a majority of network traffic is routed as data traverses multiple network segments. Tools are available to integrate the Internet with applications across multiple platforms to bring users closer together, and intranets are a pervasive, effective tool for companies and organizations to communicate with all employees.

Figure 1. Evolution of PC bus architectures. Signaling rate (GHz) by decade, from ISA, MCA, EISA, and VESA VL in the 1980s through PCI, AGP, and PCI-X in the 1990s to PCI Express I/O architecture at 2.5 GHz today, bounded by the 1 GHz parallel bus limit and the >12 GHz copper signaling limit, with optical interconnects beyond.


Workers now spend more of their time using their PCs as an enterprise and workgroup tool (Figure 2), and 10 Mbps Ethernet no longer supports the workflow.

Performance

Faster processors need faster and wider network connections in order to fulfill their promise of delivering a competitive edge. Therefore, the alignment of compute platform and communications capabilities becomes much more important as faster processors and network topologies are introduced. The PCI specification has continued to advance its network throughput performance, with the addition of PCI-X, wider buses, and higher bus speeds (Table 1), to bolster this needed alignment. Throughout this evolution, PCI-X has maintained backward compatibility with PCI 32-bit and 64-bit/33 MHz and 66 MHz devices and software. This makes the transition from PCI to PCI-X relatively easy. However, it is still a shared, multi-drop, parallel-bus architecture.

Reducing Latency

Bottlenecks can occur in many places in servers and desktop systems, and they are often attributed to the PCI bus. Let's take a look at some of these bottlenecks.

In a traditional PC architecture, data arriving through a PCI-based Gigabit Ethernet (GbE) network controller must travel over several buses before reaching the user (Figure 5). GbE network traffic at 1000 Mbps imposes throughput burdens that the 32-bit/33 MHz PCI bus, with its shared 133 MB/s of bandwidth, was never designed to handle. The resulting data flow bottleneck can lead to unused processor resources and slower overall system performance.
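To make that bottleneck concrete, here is a minimal back-of-the-envelope Python sketch (ours, not part of the original paper) comparing the Gigabit Ethernet line rate with the shared bandwidth of a 32-bit/33 MHz PCI bus; the constant names are illustrative.

```python
# Rough comparison of GbE line rate vs. the shared 32-bit/33 MHz PCI bus.
GBE_LINE_RATE_BPS = 1_000_000_000      # Gigabit Ethernet, bits per second
PCI_BUS_BYTES_PER_SEC = 133_000_000    # ~133 MB/s, shared by every device on the bus

gbe_bytes_per_sec = GBE_LINE_RATE_BPS / 8   # 125 MB/s of traffic in one direction

utilization = gbe_bytes_per_sec / PCI_BUS_BYTES_PER_SEC
print(f"A single GbE port can consume {utilization:.0%} of the shared PCI bus")
# -> roughly 94%, before protocol overhead and before any other device
#    (storage, audio, etc.) gets a share of the same bus.
```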

A similar performance bottleneck occurs with multi-GbE performance for servers, even with PCI-X bus network cards. Most PCI-X implementations are multi-drop, parallel-bus architectures. Multi-drop buses impose latencies for arbitration of the bus and reduce overall performance in certain cases (Figure 3). With the PCI specification, a device needs to arbitrate with other devices for the shared resource. Once the GbE controller has control of the PCI-X bus, it can have access for several cycles before it must release control to other devices. In addition, the PCI specification allows "clock-down," downshifting the entire bus segment to the slowest device installed and inhibiting the speed gains of faster devices connected to the bus.

PCI Express architecture helps to eliminate the bottleneck in the PCI bus by providing a dedicated path between the GbE network controller and the Memory Controller Hub (MCH) of the chipset. The high bandwidth of PCI Express architecture enables bi-directional GbE networking without imposing the burden of handling network traffic on other system resources.

Table 1. PCI/PCI-X Bus and Throughput Characteristics

Architecture   Bus Width   Bus Frequency   Raw Bandwidth (Bytes)   Raw Bandwidth (bits)   Max Pins
PCI            32-bit      33 MHz          133 MB/s                1 Gb/s                 49
PCI/PCI-X      64-bit      66 MHz          533 MB/s                4.2 Gb/s               102
PCI-X          64-bit      100 MHz         800 MB/s                6.4 Gb/s               102
PCI-X          64-bit      133 MHz         1 GB/s                  8 Gb/s                 102
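As a sanity check on Table 1, the short Python sketch below (our illustration, not from the paper) derives the raw figures from bus width times clock frequency; the function name is ours.

```python
def raw_bus_bandwidth_gbps(width_bits: int, freq_mhz: float) -> float:
    """Raw parallel-bus bandwidth in Gb/s: bus width multiplied by clock frequency."""
    return width_bits * freq_mhz / 1000.0

for name, width_bits, freq_mhz in [
    ("PCI 32-bit/33 MHz", 32, 33),
    ("PCI/PCI-X 64-bit/66 MHz", 64, 66),
    ("PCI-X 64-bit/100 MHz", 64, 100),
    ("PCI-X 64-bit/133 MHz", 64, 133),
]:
    gbps = raw_bus_bandwidth_gbps(width_bits, freq_mhz)
    print(f"{name}: {gbps:.1f} Gb/s raw, about {gbps * 1000 / 8:.0f} MB/s")
# Output tracks the table's figures (the table uses nominal clocks and rounds):
# from ~1 Gb/s / ~133 MB/s for PCI up to ~8.5 Gb/s / ~1 GB/s for PCI-X 133 MHz,
# and all of that bandwidth is shared by every device on the bus segment.
```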

Figure 2. Growth of network-centric applications. Share of PC use across productivity, workgroup, and enterprise applications, 1991–1996 versus 1996–2001.


PCI Express architecture reduces the number of "hops" that Ethernet packets need to traverse with its direct interconnect architecture. In Figure 5, the PCI-X bridge solution shows that the GbE network device must pass traffic through the PCI-X bridge to the MCH and on to memory. The CPU processes the data, and memory sends it back out through the MCH, through the PCI-X bridge, and out to the GbE controller.

The data must traverse five hops from start to finish. In contrast, with a PCI Express architecture-based network controller (or I/O device), the data traverses the MCH to memory and back through the MCH to the network interface. This reduces the number of hops from five to three, a minimum 40 percent reduction in latency. If more devices were on the PCI-X bus, the latency savings with PCI Express architecture would be even higher.
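The 40 percent figure follows directly from the hop counts. The small Python sketch below is our paraphrase of the Figure 5 paths (one way to count the intermediate stops), not a timing model.

```python
# Stops a packet makes between arriving at the GbE controller and leaving it
# again after CPU processing, per Figure 5.
pci_x_hops = ["PCI-X bridge", "MCH", "memory", "MCH", "PCI-X bridge"]  # 5 hops
pci_express_hops = ["MCH", "memory", "MCH"]                            # 3 hops

reduction = 1 - len(pci_express_hops) / len(pci_x_hops)
print(f"{len(pci_x_hops)} hops -> {len(pci_express_hops)} hops "
      f"({reduction:.0%} fewer)")
# -> 40% fewer hops; with more devices arbitrating for the shared PCI-X
#    segment, the real-world latency savings would be larger still.
```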

Scalability

Additionally, lanes (a lane comprises two differentially signaled,

uni-directional wire pairs – 4 wires per lane) can be combined

to provide various bandwidth-sizing options for maximum flexibility

and scalability. PCI Express lanes form a serial, point-to-point

topology with bi-directional lanes running at 2.5 Gbps uni-

directional, or 4 Gbps bi-directional, throughput per lane. Table 2

illustrates this lane (and port width) structure.

As more lanes are added to increase bandwidth, data is spread across these lanes to create greater throughput capacity. Lanes can be added to scale performance from 2 Gbps of peak uni-directional bandwidth with a x1 (by 1) interconnect to 32 Gbps with a x16 interconnect (Table 2). Additionally, link-layer protocol features improve fault tolerance.

Figure 3. Example of parallel or multi-drop PCI-X 1.0 bus designs. Examples of PCI-X hub motherboard layouts (the number of PCI-X bus segments varies by design): a Memory Control Hub and PCI-X Hub feeding one 133 MHz segment, two 100 MHz segments, or four 66 MHz segments.

Figure 4. PCI/PCI-X to PCI Express arbitration comparison. In both cases, four of eight clocks carry data (efficiency = 4/8), but PCI-X 133 spends those clocks at 133 MHz while PCI Express spends them at 2.5 GHz.


Flexibility in Design

It is not surprising that as high-speed serial (HSS) technologies have advanced over the years, serial interconnects have become more attractive for solving the bottleneck problems of communications (Table 3). By changing to a serial, lower-voltage, self-clocking I/O signaling methodology, the number of pins can be reduced, power can be lowered, and bandwidth can be increased. With the addition of link-layer communication protocols, improvements in data reliability and fault tolerance can also be realized, resulting in better RAS characteristics.

Parallel buses require many I/O signal pins. In addition, they require component, board, and system manufacturers to exactly match the propagation delays of a large number of signals and clocks across the entire system. The ability to precisely match propagation delays directly affects the maximum clock rate that can be achieved, and matching signals while maintaining backward compatibility with regard to voltage swings imposes large power penalties.

PCI-based designs need a large number of signal lines (up to 84 for 64-bit PCI) and pins for PCI chip devices. More lines and pins present a potential for signal-noise-induced problems when routing or laying out these data paths. More lines and signal pin-outs also mean higher cabling, connector, routing, chip die-size, and packaging costs. In most systems, platform designers can save up to 53 percent of board space when converting designs from PCI-X to PCI Express.

Lastly, PCI Express architecture-based products will support reuse of PCI software, allowing the extension of software from the vast, multi-billion-dollar PCI ecosystem to PCI Express architecture.

Figure 5. Data traversal for PCI-X and PCI Express architecture. The PCI-X bridge solution requires five hops for I/O to and from memory; PCI Express direct attach requires three, a 40 percent decrease.

Table 2. PCI Express architecture throughput capacities

Architecture      Width    Frequency   Est. Bandwidth (Bytes)   Est. Bandwidth (bits)   Max Pins
PCI Express x1    1-bit    2.5 GHz     312 MB/s                 2 Gb/s                  8
PCI Express x4    4-bit    2.5 GHz     1.25 GB/s                8 Gb/s                  20
PCI Express x8    8-bit    2.5 GHz     2.5 GB/s                 16 Gb/s                 40
PCI Express x16   16-bit   2.5 GHz     5 GB/s                   32 Gb/s                 80
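The scaling in Table 2 is straightforward to reproduce. The Python sketch below (ours) assumes PCI Express 1.x 8b/10b encoding, so roughly 80 percent of the 2.5 GHz signaling rate carries data, which is where the 2 Gb/s-per-lane-per-direction figure comes from.

```python
SIGNALING_GBPS_PER_LANE = 2.5   # per direction, per lane
ENCODING_EFFICIENCY = 0.8       # 8b/10b: 8 data bits for every 10 bits on the wire

for lanes in (1, 4, 8, 16):
    usable = lanes * SIGNALING_GBPS_PER_LANE * ENCODING_EFFICIENCY
    print(f"x{lanes:<2}: {usable:>4.0f} Gb/s per direction, "
          f"{2 * usable:>4.0f} Gb/s bi-directional")
# -> 2/8/16/32 Gb/s per direction, matching the "bits" column of Table 2;
#    the x8 case is the 4 GB/s full-duplex figure cited in Table 3.
```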

Table 3. Features and benefits of PCI Express architecture

Feature: 2.5 Gb/s signaling for PCI Express v1.0a, with transfer speeds of 4 GB/s full duplex (x8 lane).
Benefit: Throughput 4x that of PCI-X 133 MHz, and scalable to 5 GB/s for future investment protection.

Feature: Lower pin count (40 versus 150 for PCI-X 133 MHz).
Benefit: Easier routing, denser packaging, and reduced chip/device pin-out requirements and footprint.

Feature: Point-to-point connection and switching capability (with optional switches).
Benefit: Reduced latency and dedicated/scalable device connections; advanced peer-to-peer switching for intelligent subsystems and device control.

Feature: Advanced error logging/reporting, power management, built-in ease of testing, and quality-of-service features.
Benefit: Better overall data reliability and reporting, with predictable latencies and flexible data transfer characteristics.

Feature: Low-voltage and embedded-clock signaling; two uni-directional links per lane with no sideband signals.
Benefit: Superior differential voltage margins, with improved EMI and higher frequency/throughput scalability.


Conclusion

PCI Express architecture is the emerging serial I/O interconnect architecture designed to solve network and platform issues for both users and developers. For IT, PCI Express architecture alleviates networking bandwidth bottlenecks and adds much-needed RAS features over the parallel I/O bus architectures. For developers, PCI Express architecture provides smaller and more efficient packaging than PCI/PCI-X.

PCI Express architecture provides a roadmap for developers to migrate their platforms easily. PCI Express architecture-enabled platforms will support legacy PCI/PCI-X devices in the form of motherboard PCI-X bus segments/slots and PCI Express architecture-to-PCI-X bridge chips.

PCI Express architecture provides the breakthrough needed to address today's and tomorrow's I/O interconnect limitations and concerns, and it provides a clear transition and migration path from PCI-X 133 MHz-based solutions. Perhaps more importantly, PCI Express architecture brings a high degree of investment protection and technology convergence to IT and development managers, while still offering a relatively seamless transition from PCI and PCI-X architectures.

Figure 6. Board space savings with PCI Express architecture. Two PCI-X 64-bit slots plus a PCI-X bridge versus two PCI Express x8 slots with no bridge: board area is reduced by 53 percent, with opportunities to reduce board layer count and component count.


INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL® PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT. Intel products are not intended for use in medical, life saving, or life sustaining applications. Intel may make changes to specifications and product descriptions at any time, without notice.

* Other names and brands may be claimed as the property of others.

Copyright © 2003 Intel Corporation. All rights reserved. Intel and the Intel logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

Printed in USA. 1103/OC/TS/PDF Please Recycle 254108-002
