PCI EXPRESS ARCHITECTURE

PCI Express is positioned as the industry's third-generation I/O technology: the first generation was ISA, the second generation PCI, and the third generation is PCI Express. PCI Express is designed as a general-purpose serial I/O interconnect that can be used in multiple market segments, including desktop, mobile, server, storage and embedded communications. It can be used as a peripheral device interconnect, a chip-to-chip interconnect, and a bridge to other interconnects such as 1394b, USB 2.0 and Ethernet, and it can also be used in graphics chipsets for increased graphics bandwidth. PCI Express is an implementation of the PCI computer bus that retains existing PCI programming concepts and communications standards but bases them on a much faster serial communications system; it is intended to be used as a local bus only. A PCI Express topology contains a Host Bridge and several endpoints. This introduces a new element, the switch, into the system; the switch replaces PCI's multi-drop, parallel bus and is used to provide fan-out for the I/O bus. A PCI Express switch provides high-performance I/O, and PCI is expected to coexist in many platforms to support today's lower bandwidth applications until a compelling need, such as a new form factor, causes a full migration to a PCI Express-based platform. Because it builds on the existing PCI system, cards and systems can be converted to PCI Express by changing the physical layer only; existing software could boot on a PCI Express platform without ever noticing the change. The higher speeds of PCI Express allow it to replace almost all existing internal buses, including AGP and PCI.
CHAPTER 1
INTRODUCTION
The PCI bus has served us well for the last 10 years and it will play a major role in the next few years. However, today's and tomorrow's processors and I/O devices are demanding much higher I/O bandwidth than PCI or PCI-X can deliver, and it is time to engineer a new generation of PCI to serve as a standard I/O bus for future generation platforms. There have been several efforts to create higher bandwidth buses, and this has resulted in the PC platform supporting a variety of application-specific buses alongside the PCI I/O expansion bus, as shown in Figure 1.1.
Figure 1.1 Today's PC has multiple local buses with different requirements
The processor system bus continues to scale in both frequency and voltage at a rate that will continue for the foreseeable future. Memory bandwidths have increased to keep pace with the processor. Indeed, as shown in Figure 1.1, the chipset is typically partitioned as a memory hub and an I/O hub since the memory bus often changes with each processor generation. One of the major functions of the chipset is to isolate these ever-changing buses from the stable I/O bus.
Close investigation of the 1990s PCI signaling technology reveals a multi-drop, parallel bus implementation that is close to its practical limits of performance: it cannot be easily scaled up in frequency or down in voltage, and its synchronously clocked data transfer is signal-skew limited. All approaches to pushing these limits to create a higher bandwidth, general-purpose I/O bus result in large cost increases for little performance gain. The desktop solution of Figure 1.1 is only part of the problem of diverging local I/O bus standards. To PCI's credit, it has been used in applications not envisaged by the original specification writers, and variants and extensions of PCI can be found in the desktop, mobile, server and embedded communications market segments.
Today's software applications are more demanding of the platform hardware, particularly the I/O subsystems. Streaming data from various video and audio sources is now commonplace on desktop and mobile machines, yet there is no baseline support for this time-dependent data within the PCI or PCI-X specifications. Applications such as video-on-demand and audio re-distribution are putting real-time constraints on servers too, and many communications applications and embedded-PC control systems also process data in real time. Today's platforms, such as the example desktop PC shown in Figure 1.2, must also deal with multiple concurrent transfers at ever-increasing data rates. It is no longer acceptable to treat all data as equal: it is more important, for example, to process streaming data first, since late real-time data is as useless as no data. Data needs to be tagged so that an I/O system can prioritize its flow throughout the platform.
Applications such as Gigabit Ethernet and InfiniBand require higher bandwidth I/O. A third-generation I/O bus must therefore include additional features alongside increased bandwidth.
Figure 1.2 Multiple concurrent data transfers.
CHAPTER 2
PCI EXPRESS OVERVIEW
Recent advances in high-speed, low-pin-count, point-to-point technologies offer an attractive alternative for major bandwidth improvements. A PCI Express topology contains a Host Bridge and several endpoints (the I/O devices), as shown in Figure 2.1. Multiple point-to-point connections introduce a new element, the switch, into the I/O system topology, also shown in Figure 2.1.
Figure 2.1 A switch is added to the system topology.
The switch replaces the multi-drop bus and is used to provide fan-out for the I/O bus. A switch may provide peer-to-peer communication between different endpoints and this traffic, if it does not involve cache-coherent memory transfers, need not be forwarded to the host bridge. The switch is shown as a separate logical element but it could be integrated into a host bridge component.
The low signal-count, point-to-point connections may be constructed with connectors and cables. The PCI Express mechanicals will enable new classes of system partitioning (the boring beige box is no longer required!).
Figures 2.2 through 2.4 show typical 2003 platforms using the PCI Express Architecture.
Figure 2.2 General purpose desktop/mobile I/O
Interconnect for 2003 and beyond.
Figure 2.3 PCI Express-based Server/Workstation
System.
Figure 2.4 PCI Express-based Networking
Communications System.
The multiple, similar parallel buses of today's platform are replaced with PCI Express links with one or more lanes. Each link is individually scalable by adding more lanes, so that additional bandwidth may be applied to those links where it is required, such as graphics in the desktop platform and bus bridges (e.g., PCI Express to PCI-X) in the server platform. A PCI Express switch provides fan-out capability and enables a series of connectors for add-in, high-performance I/O. The switch is a logical element that may be implemented within a component that also contains a host bridge, or it may be implemented as a separate component. It is expected that PCI will coexist in many platforms to support today's lower bandwidth applications until a compelling need, such as a new form factor, causes a full migration to a fully PCI Express-based platform.
The server platform requires more I/O performance and connectivity, including high-bandwidth PCI Express links to PCI-X slots, Gigabit Ethernet and an InfiniBand fabric. Figure 2.3 shows how PCI Express provides many of the same advantages for servers as it does for desktop systems. The combination of PCI Express for inside-the-box I/O and InfiniBand fabrics for outside-the-box I/O and cluster interconnect allows servers to transition from parallel shared buses to high-speed serial interconnects. The networking communications platform could use multiple switches for increased connectivity and Quality of Service for differentiation of different traffic types. It too would benefit from multiple PCI Express links that could be constructed as a modular I/O system.
CHAPTER 3
PCI EXPRESS ARCHITECTURE
The PCI Express Architecture is specified in layers as shown in Figure 3.1. Compatibility with the PCI addressing model (a load-store architecture with a flat address space) is maintained to ensure that all existing applications and drivers operate unchanged. PCI Express configuration uses standard mechanisms as defined in the PCI Plug-and-Play specification. The software layers generate read and write requests that are transported by the transaction layer to the I/O devices using a packet-based, split-transaction protocol. The link layer adds sequence numbers and CRC to these packets to create a highly reliable data transfer mechanism. The basic physical layer consists of a dual-simplex channel that is implemented as a transmit pair and a receive pair. The initial speed of 2.5Gb/s/direction provides a 200MB/s communications channel that is close to twice the classic PCI data rate. The remainder of this section will look deeper into each layer, starting at the bottom of the stack.
Figure 3.1 The PCI Express Architecture is specified in layers.
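As a rough check on those numbers, the arithmetic can be sketched as below, assuming only the 8b/10b encoding overhead described in the physical layer section; the 200MB/s figure quoted above presumably also subtracts some link and transaction packet overhead, which is an assumption here.

#include <stdio.h>

int main(void) {
    double line_rate_gbps = 2.5;              /* raw signalling rate per direction */
    double encoding_efficiency = 8.0 / 10.0;  /* 8b/10b: 10 symbol bits carry 8 data bits */

    /* Usable byte rate per direction, before packet and protocol overhead. */
    double bytes_per_sec = line_rate_gbps * 1e9 * encoding_efficiency / 8.0;

    printf("x1 link, per direction: %.0f MB/s\n", bytes_per_sec / 1e6); /* ~250 MB/s */
    return 0;
}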
3.1 PHYSICAL LAYER
The fundamental PCI Express link consists of two low-voltage, differentially driven pairs of signals: a transmit pair and a receive pair, as shown in Figure 3.2. A data clock is embedded using the 8b/10b encoding scheme to achieve very high data rates. The initial frequency is 2.5Gb/s/direction, and this is expected to increase with silicon technology advances to 10Gb/s/direction (the practical maximum for signals in copper). The physical layer transports packets between the link layers of two PCI Express agents.
Figure 3.2 A PCI Express link uses transmit and receive signal pairs.
The bandwidth of a PCI Express link may be linearly scaled by adding signal pairs to form multiple lanes. The physical layer supports x1, x2, x4, x8, x12, x16 and x32 lane widths and splits the byte data across the lanes; each byte is transmitted, with 8b/10b encoding, across the lane(s). This data disassembly and reassembly is transparent to other layers. During initialization, each PCI Express link is set up following a negotiation of lane widths and frequency of operation by the two agents at each end of the link. No firmware or operating system software is involved.
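The fragment below is only an illustrative sketch of the byte-striping idea, under the assumption that bytes are distributed round-robin across the configured lanes; the function name and buffers are invented here, and the specification's exact striping and framing rules are more involved.

#include <stddef.h>
#include <stdint.h>

/* Round-robin byte striping across N lanes (x1, x2, x4, ...).  The real
   lane mapping and 8b/10b symbol framing follow the specification's rules;
   this only shows the general idea. */
void stripe_bytes(const uint8_t *data, size_t len,
                  uint8_t *lanes[], size_t lane_count, size_t lane_len[]) {
    for (size_t l = 0; l < lane_count; l++)
        lane_len[l] = 0;
    for (size_t i = 0; i < len; i++) {
        size_t lane = i % lane_count;           /* byte i goes to lane i mod N */
        lanes[lane][lane_len[lane]++] = data[i];
    }
}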
The PCI Express architecture comprehends future performance enhancements via speed upgrades and advanced encoding techniques. The future speeds, encoding techniques or media would only impact the physical layer.
3.2 DATA LINK LAYER
The primary role of the link layer is to ensure reliable delivery of the packet across the PCI Express link. The link layer is responsible for data integrity and adds a sequence number and a CRC to the transaction layer packet, as shown in Figure 3.3.
Figure 3.3 The Data Link Layer
adds data integrity features.
Most packets are initiated at the Transaction Layer (next section). A credit-based flow control protocol ensures that packets are only transmitted when it is known that a buffer is available to receive them at the other end. This eliminates packet retries due to resource constraints, and the associated waste of bus bandwidth. The Link Layer will automatically retry a packet that was signaled as corrupted.
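A simplified model of these two mechanisms might look like the sketch below; the structure fields are illustrative, the crc32() helper is assumed rather than defined, and the specification's actual 12-bit sequence number and 32-bit link CRC formats are not reproduced exactly.

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

uint32_t crc32(const uint8_t *buf, size_t len);   /* assumed helper, not defined here */

typedef struct {
    uint16_t seq;            /* sequence number used for acknowledgement and retry */
    uint8_t  payload[256];   /* transaction layer packet */
    size_t   payload_len;
    uint32_t lcrc;           /* link CRC over sequence number and payload */
} dll_packet;

/* Credit-based flow control: transmit only if the receiver has advertised
   enough free buffer space, so no packet is ever retried for lack of buffers. */
bool can_transmit(size_t credits_available, size_t packet_credits) {
    return credits_available >= packet_credits;
}

void dll_frame(dll_packet *p, uint16_t seq, const uint8_t *tlp, size_t len) {
    p->seq = seq;
    memcpy(p->payload, tlp, len);
    p->payload_len = len;
    p->lcrc = crc32((const uint8_t *)p, sizeof p->seq + len);  /* simplified coverage */
}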
3.3 TRANSACTION LAYER
The transaction layer receives read and write requests from the software layer and creates request packets for transmission to the link layer. All requests are implemented as split transactions, and some of the request packets will need a response packet. The transaction layer also receives response packets from the link layer and matches these with the original software requests. Each packet has a unique identifier that enables response packets to be directed to the correct originator. The packet format supports 32-bit memory addressing and extended 64-bit memory addressing. Packets also have attributes such as no-snoop, relaxed ordering and priority, which may be used to optimally route these packets through the I/O subsystem.
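A minimal sketch of how an originator might track outstanding requests and match completions by their identifier is given below; the field and function names are hypothetical and do not mirror the specification's exact header layout.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef struct {
    uint16_t requester_id;     /* identifies the originating device/function */
    uint8_t  tag;              /* unique per outstanding request from that originator */
    uint64_t address;          /* 32-bit or extended 64-bit addressing */
    bool     no_snoop;         /* attribute: bypass cache snooping */
    bool     relaxed_ordering; /* attribute: allow reordering for performance */
    bool     in_flight;
} tl_request;

#define MAX_OUTSTANDING 32
static tl_request outstanding[MAX_OUTSTANDING];

/* Match a completion back to the original split-transaction request. */
tl_request *match_completion(uint16_t requester_id, uint8_t tag) {
    for (int i = 0; i < MAX_OUTSTANDING; i++) {
        if (outstanding[i].in_flight &&
            outstanding[i].requester_id == requester_id &&
            outstanding[i].tag == tag) {
            outstanding[i].in_flight = false;
            return &outstanding[i];
        }
    }
    return NULL;   /* completion does not match any outstanding request */
}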
The transaction layer supports four address spaces: it includes the three PCI address spaces (memory, I/O and configuration) and adds a Message Space. PCI introduced an alternate method of propagating system interrupts called Message Signaled Interrupt (MSI). Here a special-format memory write transaction was used instead of a hard-wired sideband signal.
This was an optional capability in a PCI system. The PCI Express specification re-uses the MSI concept as a primary method for interrupt processing and uses Message Space to support all prior side-band signals, such as interrupts, power-management requests, resets, and so on, as in-band Messages. Other special cycles within the PCI specification, such as Interrupt Acknowledge, are also implemented as in-band Messages. You could think of PCI Express Messages as virtual wires since their effect is to eliminate the wide array of sideband signals currently used in a platform implementation.
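To make the "virtual wire" idea concrete, the toy definitions below model sideband events as in-band message codes; the enumerator names are descriptive inventions, not the message encodings defined by the specification.

/* Events that older buses carried on dedicated sideband pins become
   in-band message packets on PCI Express. */
typedef enum {
    MSG_INTERRUPT_ASSERT,
    MSG_INTERRUPT_DEASSERT,
    MSG_INTERRUPT_ACKNOWLEDGE,
    MSG_POWER_MANAGEMENT_EVENT,
    MSG_HOT_RESET
} message_code;

typedef struct {
    message_code code;     /* which sideband event this packet stands in for */
    unsigned     routing;  /* e.g., routed to the host bridge or broadcast */
} message_packet;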
3.4 SOFTWARE LAYERS
Software compatibility is of paramount importance for a third-generation, general-purpose I/O interconnect. There are two facets of software compatibility: initialization (or enumeration) and run time. PCI has a robust initialization model wherein the operating system can discover all of the add-in hardware devices present and then allocate system resources, such as memory, I/O space and interrupts, to create an optimal system environment. The PCI configuration space and the programmability of I/O devices are key concepts that are unchanged within the PCI Express Architecture; in fact, all operating systems will be able to boot without modification on a PCI Express-based platform. The run-time software model supported by PCI is a load-store, shared-memory model; this is maintained within the PCI Express Architecture, which enables all existing software to execute unchanged. New software may use new capabilities.
3.5 CONFIGURATION/OPERATING SYSTEM LAYER
This layer leverages the standard mechanisms defined in the PCI Plug-and-Play specification for device initialization, enumeration, and configuration. It communicates with the software layer by initiating a data transfer between peripherals or receiving data from an attached peripheral. PCI Express is designed to be compatible with existing operating systems, but future operating system support is required for many of the technology's advanced features.
CHAPTER 4
PCI EXPRESS ADVANCED FEATURES
PCI Express has advanced features that will be phased in as operating system and device support is developed and as customer applications require them:
• Advanced power management
• Support for real-time data traffic
• Hot plug and hot swap
• Data integrity and error handling
4.1 ADVANCED POWER MANAGEMENT
PCI Express has active-state power management, which lowers power consumption when the bus is not active (that is, no data is being sent between components or peripherals). On a parallel interface such as PCI, no transitions occur on the interface until data needs to be sent. In contrast, high-speed serial interfaces such as PCI Express require that the interface be active at all times so that the transmitter and receiver can maintain synchronization. This is accomplished by continuously sending idle characters when there is no data to send. The receiver decodes and discards the idle characters. This process consumes additional power, which impacts battery life on portable and handheld computers.
To address this issue, the PCI Express specification creates two low-power link states and the active-state power management (ASPM) protocol. When the PCI Express link goes idle, the link can transition to one of the two low-power states. These states save power when the link is idle, but require a recovery time to resynchronize the transmitter and receiver when data needs to be transmitted. The longer the recovery time (or latency), the lower the power usage. The most frequent implementation will be the low-power state with the shortest recovery time.
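The trade-off can be pictured with the small decision function below; the state names follow common ASPM usage (L0s and L1), but the latency thresholds are placeholder values, not numbers from the specification.

/* Deeper sleep saves more power but takes longer to resynchronize. */
typedef enum { LINK_L0_ACTIVE, LINK_L0S_STANDBY, LINK_L1_LOW_POWER } link_state;

link_state pick_idle_state(unsigned idle_us, unsigned max_tolerable_exit_latency_us) {
    const unsigned l0s_exit_latency_us = 1;    /* placeholder value */
    const unsigned l1_exit_latency_us  = 10;   /* placeholder value */

    if (idle_us > 100 && l1_exit_latency_us <= max_tolerable_exit_latency_us)
        return LINK_L1_LOW_POWER;              /* lowest power, longest recovery */
    if (l0s_exit_latency_us <= max_tolerable_exit_latency_us)
        return LINK_L0S_STANDBY;               /* modest savings, quick recovery */
    return LINK_L0_ACTIVE;
}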
4.2 SUPPORT FOR REAL-TIME DATA TRAFFIC
Unlike PCI, PCI Express includes native support for isochronous (or time-dependent) data transfers and various QoS levels. These features are implemented via virtual channels that are designed to guarantee that particular data packets arrive at their destination in a given period of time. PCI Express supports multiple isochronous virtual channels, each an independent communications session, per lane. Each channel may have a different QoS level. This end-to-end solution is designed for applications that require real-time delivery, such as real-time voice and video.
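As a rough illustration of traffic differentiation, the sketch below picks the highest-priority virtual channel with packets pending; real PCI Express arbitration schemes are configurable and more elaborate, and the structure here is purely hypothetical.

#include <stddef.h>

typedef struct {
    int priority;   /* higher value = more time-critical (e.g., isochronous) traffic */
    int pending;    /* packets waiting on this virtual channel */
} virtual_channel;

/* Serve the highest-priority channel that has traffic queued. */
int next_channel(const virtual_channel *vc, size_t count) {
    int best = -1;
    for (size_t i = 0; i < count; i++) {
        if (vc[i].pending &&
            (best < 0 || vc[i].priority > vc[best].priority))
            best = (int)i;
    }
    return best;    /* -1 when nothing is queued */
}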
4.3 HOT PLUG AND HOT SWAP
PCI-based systems do not have native (or built-in) support for hot plugging or hot swapping I/O cards. Instead, a few limited server and PC Card hot plug, hot swap implementations were developed as add-ons to PCI after the original bus definition. These solutions addressed pressing requirements of server and portable computer platforms:
• It is often difficult or impossible to schedule downtime on a server to replace or install peripheral cards. The ability to hot plug I/O devices minimizes downtime.
• Portable computer users need the ability to hot plug cards that provide I/O functions such as mobile disk drives and communications.
PCI Express has native support for hot plugging and hot swapping I/O peripherals. No sideband signals are required and a unified software model can be used for all PCI Express form factors.
4.4 DATA INTEGRITY AND ERROR HANDLING
PCI Express supports link-level data integrity for all types of transaction- and data-link packets. Thus, it is suitable for end-to-end data integrity for high-availability applications, particularly those running on server systems. PCI Express also supports PCI error handling and has advanced error reporting and handling to help improve fault isolation and recovery solutions.
CHAPTER 5
PCI EXPRESS FORM FACTORS
A number of PCI Express form factors address the requirements of client, server, and portable computer platforms:
• Standard and low-profile cards: desktops, workstations, and servers
• Mini card: portable computers
• ExpressCard: portable computers and desktops
• Server I/O module (SIOM): currently being defined by the PCI-SIG
5.1 PCI EXPRESS STANDARD AND LOW-PROFILE CARDS
Current PCI standard and low-profile cards are used in a variety of platforms, including servers, workstations, and desktops. PCI Express also defines standard and low-profile cards that can replace or coexist with legacy PCI cards. These cards have the same dimensions as PCI cards and are equipped with a rear bracket to accommodate external cable connections.
The differences between the PCI and PCI Express cards lie in their I/O connectors. A x1 PCI Express connector has 36 pins, compared to the 120 pins on a standard PCI connector. Figure 5.1 compares PCI and PCI Express low profile cards. The x1 PCI Express connector shown is much smaller than the connector on the PCI card. Next to the PCI Express connector is a small tab that precludes it from being inserted into a PCI slot. The standard and low-profile form factors also support x4, x8, and x16 implementations.
Figure 5.1 Comparison of PCI Express and PCI
Low-Profile Cards
Figure 5.2 compares the size of PCI Express connectors to the PCI, AGP8X, and PCI-X connectors they will replace on the system board.
Figure 5.2 PCI Express System Board Connector Size for
Standard and Low-Profile Cards
5.2 PCI EXPRESS MINI CARD
The PCI Express Mini Card replaces the Mini PCI card, which is a small internal card functionally identical to standard desktop computer PCI cards. Mini PCI cards are used mainly to add communications functions to portable computers that are built- or customized-to-order. The PCI Express Mini Card is half the size of the Mini PCI card as shown in Figure 5.3. This allows system designers to include one or two cards, depending on the size constraints of a particular portable computer.
Figure 5.3 PCI Express Mini versus Mini PCI
A PCI Express Mini Card socket on the system board must support both a x1 PCI Express link and a USB 2.0 link. A PCI Express Mini Card can use either PCI Express or USB 2.0 (or both). USB 2.0 support will help during the transition to PCI Express, because peripheral vendors will need time to design PCI Express into their chip sets. During the transition, PCI Express Mini Cards can be quickly implemented using USB 2.0.
5.3 EXPRESSCARD
ExpressCard is a small, modular add-in card designed to replace the PC Card over the next few years. The ExpressCard specification was developed by the Personal Computer Memory Card International Association (PCMCIA). The ExpressCard form factors shown in Figure 5.4 are designed to provide a small, less-expensive, and higher-bandwidth replacement for the PC Card. Like the PCI Express Mini Card, an ExpressCard module can support a x1 PCI Express and a USB 2.0 link. Its low cost also makes it feasible for small form-factor desktop systems. The ExpressCard module also has low power requirements and is hot pluggable. It is likely to be used for communications, hard-disk storage, and emerging I/O technologies.
Figure 5.4 ExpressCard Modules
5.4 PCI EXPRESS SERVER I/O MODULE
The SIOM specification is currently being defined. SIOMs are expected with the second generation of the PCI Express technology. The PCI Express SIOM will provide a robust form factor that can be easily installed or replaced. It will be modular, allowing I/O cards to be installed and serviced in a system while it is still operating and without opening the chassis.
The SIOM is a more radical form factor change than other PCI Express form factors. It will solve many of the problems with PCI and PCI-X cards in servers. It will be hot pluggable and its cover will protect the internal components. These features are designed to make the cards more reliable in data center environments where many people handle cards.
The module is also designed with forced-air cooling in mind because high-speed server devices tend to generate a lot of heat. The cooling air can originate from the back, top, or bottom of the module. This flexibility offers system designers more options when evaluating thermal solutions for rack-mounted systems equipped with SIOMs.
The largest SIOM form factor will accommodate relatively complicated functions and should be able to leverage the full range of PCI Express links.
CHAPTER 6
PERFORMANCE CHARACTERISTICS
PCI Express's differential, point-to-point connection provides a very high-speed interconnect using few signals. Its message space eliminates all prior sideband signals, resulting in a minimal number of implementation signals. Figure 6.1 shows a comparison of bandwidth per pin for a variety of buses. Bandwidth per pin was calculated as the peak bus bandwidth divided by the total number of pins at the component (data + address + control + required power and ground pins).
PCI @ 32b x 33MHz and 84 pins; PCI-X @ 64b x 133MHz and 150 pins; AGP4X @ 32b x 4x66MHz and 108 pins; Intel® Hub Architecture 1 @ 8b x 4x66MHz and 23 pins; Intel Hub Architecture 2 @ 16b x 8x66MHz and 40 pins; PCI Express @ 8b/direction x 2.5Gb/s/direction and 40 pins.
Figure 6.1 Comparing PCI Express's bandwidth per pin with other buses.
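The comparison can be reproduced approximately from the parameters listed above; the short program below divides a simple width-times-clock peak bandwidth (both directions counted for the x8 PCI Express link) by the quoted pin counts, so its rounding will differ slightly from the published figure.

#include <stdio.h>

int main(void) {
    struct { const char *name; double peak_mb_s; int pins; } bus[] = {
        { "PCI (32b x 33MHz)",            32 / 8.0 * 33,    84 },
        { "PCI-X (64b x 133MHz)",         64 / 8.0 * 133,  150 },
        { "AGP4X (32b x 4x66MHz)",        32 / 8.0 * 266,  108 },
        { "Hub Architecture 1 (8b)",       8 / 8.0 * 266,   23 },
        { "Hub Architecture 2 (16b)",     16 / 8.0 * 533,   40 },
        /* x8 link: 250 MB/s per lane per direction, both directions counted */
        { "PCI Express x8",                8 * 250.0 * 2,   40 },
    };
    for (size_t i = 0; i < sizeof bus / sizeof bus[0]; i++)
        printf("%-26s %6.0f MB/s / %3d pins = %5.1f MB/s per pin\n",
               bus[i].name, bus[i].peak_mb_s, bus[i].pins,
               bus[i].peak_mb_s / bus[i].pins);
    return 0;
}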
CHAPTER 7
ADVANTAGES
PCI Express has the following advantages over PCI:
• Serial technology providing scalable performance.
• High bandwidth: initially, 5 to 80 gigabits per second (Gbps) peak theoretical bandwidth, depending on the implementation (see the sketch after this list).
• A point-to-point link dedicated to each device, instead of the PCI shared bus.
• Opportunities for lower latency (or delay) in server architectures, because PCI Express provides a more direct connection to the chipset Northbridge than PCI-X.
• Small connectors and, in many cases, easier implementation for system designers.
• Advanced features: Quality of Service (QoS) via isochronous channels for guaranteed bandwidth delivery when required, advanced power management, and native hot plug/hot swap support.
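One way to arrive at the 5 to 80 Gbps range in the bandwidth bullet above is sketched below, assuming 2.5 Gb/s per lane per direction with both directions summed, from a x1 up to a x16 link; that this is how the range was derived is an assumption.

#include <stdio.h>

int main(void) {
    const double gbps_per_lane_per_dir = 2.5;
    int widths[] = { 1, 2, 4, 8, 12, 16 };        /* common link widths */
    for (int i = 0; i < 6; i++) {
        double total_gbps = gbps_per_lane_per_dir * widths[i] * 2;  /* both directions */
        printf("x%-2d link: %5.1f Gbps peak\n", widths[i], total_gbps);
    }
    return 0;   /* prints 5 Gbps for x1 up to 80 Gbps for x16 */
}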
CHAPTER 8
CONCLUSION
PCI Express will serve as a general-purpose I/O interconnect for a wide variety of future computing and communications platforms. Its advanced features and scalable performance will enable it to become a unifying I/O solution across a broad range of platforms: desktop, mobile, server, communications, workstations and embedded devices. A PCI Express link is implemented using multiple, point-to-point connections called lanes, and multiple lanes can be used to create an I/O interconnect whose bandwidth is linearly scalable. This interconnect will enable flexible system partitioning paradigms at or below the current PCI cost structure. PCI Express is software compatible with all existing PCI-based software, enabling smooth integration within future systems.