Seminar on HYPER TRANSPORT TECHNOLOGY
#1

HYPER TRANSPORT TECHNOLOGY
1. ABSTRACT

The HyperTransport protocol defines a high-performance, scalable interconnect between CPU, memory, and I/O devices. Conceptually, the architecture of the HyperTransport I/O link can be mapped onto five layers, in a structure similar to the Open Systems Interconnection (OSI) reference model. In HyperTransport technology:
1. The physical layer defines the physical and electrical characteristics of the protocol. This layer interfaces to the physical world and includes the data, control, and clock lines.
2. The data link layer includes the initialization and configuration sequence, periodic cyclic redundancy check (CRC), the disconnect/reconnect sequence, information packets for flow control and error management, and doubleword framing for other packets.
3. The protocol layer includes the commands, the virtual channels in which they run, and the ordering rules that govern their flow.
4. The transaction layer uses the elements provided by the protocol layer to perform actions, such as reads and writes.
5. The session layer includes rules for negotiating power management state changes, as well as interrupt and system management activities.
2. INTRODUCTION

The demand for faster processors, memory, and I/O is a familiar refrain in market applications ranging from personal computers and servers to networking systems, and from video games to office automation equipment. Once information is digitized, the speed at which it is processed becomes the foremost determinant of product success. Faster system speed leads to faster processing; faster processing leads to faster system performance; faster system performance results in greater success in the marketplace. This logic has led a generation of processor and memory designers to focus on one overriding objective: squeezing more speed from processors and memory devices.

Processor designers have responded with faster clock rates and super-pipelined architectures that use level 1 and level 2 caches to feed faster execution units even faster. Memory designers have responded with dual data rate memories that allow data access on both the leading and trailing clock edges, doubling data access. I/O developers have responded by designing faster and wider I/O channels and introducing new protocols to meet anticipated I/O needs. Today, processors hit the market with 2+ GHz clock rates, memory devices provide sub-5 ns access times, and standard I/O buses are 32 and 64 bits wide, with new higher-speed protocols on the horizon.

Increased processor speeds, faster memories, and wider I/O channels are not always practical answers to the need for speed. The main problem is integration of more and faster system elements. Faster execution units, faster memories, and wider, faster I/O buses crowd more high-speed signal lines onto the physical printed circuit board.

One aspect of the integration problem is the set of physical problems posed by speed. Faster signal speeds lead to manufacturing problems due to loss of signal integrity and greater susceptibility to noise. Very high-speed digital signals tend to become high-frequency radio waves, exhibiting the same problematic characteristics as high-frequency analog signals. This wreaks havoc on printed circuit boards manufactured using standard, low-cost materials and technologies. Signal integrity problems caused by signal crosstalk, signal and clock skew, and signal reflections increase dramatically as clock speed increases.

The other aspect of the integration problem is the I/O bottleneck that develops when multiple high-speed execution units are combined for greater performance. While faster execution units relieve processor performance bottlenecks, the bottleneck moves to the I/O links. More data sits idling, waiting for the processor and I/O buses to clear, and the movement of large amounts of data from one subsystem to another slows down overall system performance.
3. CAUSES LEADING TO DEVELOPMENT OF HYPERTRANSPORT TECHNOLOGY

Three problems drove the development of HyperTransport technology:

- The I/O bandwidth problem
- High pin count
- High power consumption

While microprocessor performance continues to double every eighteen months, the performance of the I/O bus architecture has lagged, doubling approximately every three years, as illustrated below.


This I/O bottleneck constrains system performance, resulting in diminished actual performance gains as the processor and memory subsystems evolve. Over the past 20 years, a number of legacy buses, such as ISA, VL-Bus, AGP, LPC, PCI-32/33, and PCI-X, have emerged that must be bridged together to support a varying array of devices. Servers and workstations require multiple high-speed buses, including PCI-64/66, AGP Pro, and SNA buses like InfiniBand. The hodge-podge of buses increases system complexity and adds many transistors devoted to bus arbitration and bridge logic, while delivering less than optimal performance.

A number of new technologies are responsible for the increasing demand for additional bandwidth. High-resolution, texture-mapped 3D graphics and high-definition streaming video are escalating bandwidth needs between CPUs and graphics processors. Technologies like high-speed networking (Gigabit Ethernet, InfiniBand, etc.) and wireless communications (Bluetooth) are allowing more devices to exchange growing amounts of data at rapidly increasing speeds. Software technologies are evolving, resulting in breakthrough methods of utilizing multiple system processors. As processor speeds rise, so will the need for very fast, high-volume inter-processor data traffic.

While these new technologies quickly exceed the capabilities of today's PCI bus, existing interface functions like MP3 audio, V.90 modems, USB, 1394, and 10/100 Ethernet are left to compete for the remaining bandwidth. These functions are now commonly integrated into core logic products. Higher integration is increasing the number of pins needed to bring these multiple buses into and out of the chip packages. Nearly all of these existing buses are single-ended, requiring additional power and ground pins to provide sufficient current return paths. High pin counts increase RF radiation, which makes it difficult for system designers to meet FCC and VDE requirements. Reducing pin count helps system designers to reduce power consumption and meet thermal requirements.

In response to these problems, AMD began developing the HyperTransport I/O link architecture in 1997. HyperTransport technology has been designed to provide system architects with significantly more bandwidth, low-latency responses, lower pin counts, compatibility with legacy PC buses, extensibility to new SNA buses, and transparency to operating system software, with little impact on peripheral drivers.

As CPUs advanced in terms of clock speed and processing power, the I/O subsystem that supports the processor could not keep up. In fact, different links developed at different rates within the subsystem. The basic elements found on a motherboard include the CPU, Northbridge, Southbridge, PCI bus, and system memory. Other components are found on a motherboard, such as network controllers and USB ports, but most generally communicate with the rest of the system through the Southbridge.


4. HYPER TRANSPORT TECHNOLOGY SOLUTION

HyperTransport technology, formerly codenamed Lightning Data Transport (LDT), was developed at AMD with the help of industry partners to provide a high-speed, high-performance, point-to-point link for interconnecting integrated circuits on a board. With a top signaling rate of 1.6 GHz on each wire pair, a HyperTransport technology link can support a peak aggregate bandwidth of 12.8 Gbytes/s.

The HyperTransport I/O link is a complementary technology for InfiniBand and 1Gb/10Gb Ethernet solutions. Both InfiniBand and high-speed Ethernet interfaces are high-performance networking protocols and box-to-box solutions, while HyperTransport is intended to support in-the-box connectivity. The HyperTransport specification provides both link- and system-level power management capabilities optimized for processors and other system devices. The ACPI-compliant power management scheme is primarily message-based, reducing pin-count requirements. HyperTransport technology is targeted at networking, telecommunications, computer, and high-performance embedded applications, and any other application in which high speed, low latency, and scalability are necessary.

HyperTransport technology addresses the I/O bottleneck by providing a point-to-point architecture that can support bandwidths of up to 51.2 Gbps in each direction. Not all devices will require this much bandwidth, which is why HyperTransport technology operates at many different frequencies and widths. Currently, the specification supports a frequency of up to 800 MHz (sampled twice per period) and a width of up to 32 bits in each direction. HyperTransport technology also implements fast switching mechanisms, so it provides low latency as well as high bandwidth. By providing up to 102.4 Gbps aggregate bandwidth, HyperTransport technology enables I/O-intensive applications to use the throughput they demand.

To ease implementation and provide stability, HyperTransport technology was designed to be transparent to existing software and operating systems. It supports plug-and-play features and PCI-like enumeration, so existing software can interface with a HyperTransport technology link the same way it does with current PCI buses. This interaction is designed to be reliable, because the same software will be used as before. In fact, it may become more reliable, as data transfers will benefit from the error detection features HyperTransport technology provides. Applications will benefit from HyperTransport technology without needing extra support or updates from the developer.

The physical implementation of HyperTransport technology is straightforward, as it requires no glue logic or additional hardware. The specifications also stress a low pin count. This helps to minimize cost, as fewer parts are required, and reduces electromagnetic interference (EMI), a common problem in board layout design. Because HyperTransport technology is designed to require no additional hardware, is transparent to existing software, and simplifies EMI issues, it is a relatively inexpensive, easy-to-implement technology.
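The bandwidth figures above follow directly from the link parameters. As a quick sanity check, here is a minimal C sketch (the helper name is my own) reproducing the arithmetic for a 32-bit link clocked at 800 MHz with data sampled on both clock edges:

```c
#include <stdio.h>

/* Peak bandwidth of one HyperTransport link direction: the link is
 * double-pumped, so each wire pair carries two bits per clock period. */
static double link_gbits_per_s(unsigned width_bits, double clock_mhz) {
    return width_bits * clock_mhz * 2.0 / 1000.0;
}

int main(void) {
    double per_dir = link_gbits_per_s(32, 800.0);
    printf("per direction: %.1f Gbit/s\n", per_dir);              /* 51.2  */
    printf("aggregate:     %.1f Gbit/s\n", per_dir * 2.0);        /* 102.4 */
    printf("aggregate:     %.1f Gbyte/s\n", per_dir * 2.0 / 8.0); /* 12.8  */
    return 0;
}
```

The 12.8 Gbytes/s peak aggregate quoted above is simply the 102.4 Gbps aggregate expressed in bytes.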
5. DESIGN GOALS

In developing HyperTransport technology, the architects considered the design goals presented in this section. They wanted to develop a new I/O protocol for in-the-box I/O connectivity that would:

1. Improve system performance
   - Provide increased I/O bandwidth
   - Reduce data bottlenecks by moving slower devices out of critical information paths
   - Ensure low-latency responses
   - Reduce power consumption
2. Simplify system design
   - Reduce the number of buses within the system
   - Use as few pins as possible to allow smaller packages and to reduce cost
3. Increase I/O flexibility
   - Provide a modular bridge architecture
   - Allow for differing upstream and downstream bandwidth requirements
4. Maintain compatibility with legacy systems
   - Complement standard external buses
   - Have little or no impact on existing operating systems and drivers
5. Ensure extensibility to new system network architecture (SNA) buses
6. Provide highly scalable multiprocessing systems
6. IMPLEMENTATION

HyperTransport technology supports multiple connection topologies, including daisy-chain, switch, and star topologies.
7. CONCLUSION

HyperTransport technology is a new high-speed, high-performance, point-to-point link for integrated circuits. It provides a universal connection designed to reduce the number of buses within the system, provide a high-performance link for embedded applications, and enable highly scalable multiprocessing systems. It is designed to enable the chips inside PCs and networking and communications devices to communicate with each other up to 48 times faster than with existing technologies. HyperTransport technology provides an extremely fast connection that complements externally visible bus standards like PCI, as well as emerging technologies like InfiniBand and Gigabit Ethernet. HyperTransport technology is truly the universal solution for in-the-box connectivity.
#2


HYPER TRANSPORT TECHNOLOGY
INTRODUCTION

HyperTransport technology is a very fast, low-latency, point-to-point link used for interconnecting integrated circuits on a board. HyperTransport, previously codenamed Lightning Data Transport (LDT), provides the bandwidth and flexibility critical for today's networking and computing platforms while retaining the fundamental programming model of PCI. HyperTransport was invented by AMD and perfected with the help of several partners throughout the industry.

HyperTransport was designed to support both CPU-to-CPU communications and CPU-to-I/O transfers; thus, it features very low latency. It provides up to 22.4 Gigabytes/second aggregate CPU-to-I/O or CPU-to-CPU bandwidth in a highly efficient chip-to-chip technology that replaces existing complex multi-level buses. Using enhanced 1.2-volt LVDS signaling reduces signal noise, using non-multiplexed lines cuts down on signal activity, and using dual data rate clocks lowers clock rates while increasing data throughput. It employs a packet-based data protocol to eliminate many sideband (control and command) signals and supports asymmetric, variable-width data paths.
New specifications are backward compatible with previous generations of the specification, extending the investment made in one generation of HyperTransport-enabled devices to future generations. HyperTransport devices are PCI software compatible, so they require little or no software overhead. The technology targets networking, telecommunications, computer, and embedded systems, and any application where high speed, low latency, and scalability are necessary.
The I/O Bandwidth Problem
While microprocessor performance continues to double every eighteen months, the performance of the I/O bus architecture has lagged, doubling in performance approximately every three years. This I/O bottleneck constrains system performance, resulting in diminished actual performance. Over the past 20 years, a number of legacy buses, such as ISA, VL-Bus, AGP, LPC, PCI-32/33, and PCI-X, have emerged that must be bridged together to support a varying array of devices. Servers and workstations require multiple high-speed buses, including PCI-64/66, AGP Pro, and SNA buses like InfiniBand. The hodge-podge of buses increases system complexity and adds many transistors devoted to bus arbitration and bridge logic, while delivering less than optimal performance.
A number of new technologies are responsible for the increasing demand for additional bandwidth.
- High-resolution, texture-mapped 3D graphics and high-definition streaming video are escalating bandwidth needs between CPUs and graphics processors.
- Technologies like high-speed networking (Gigabit Ethernet, InfiniBand, etc.) and wireless communications (Bluetooth) are allowing more devices to exchange growing amounts of data at rapidly increasing speeds.
- Software technologies are evolving, resulting in breakthrough methods of utilizing multiple system processors. As processor speeds rise, so will the need for very fast, high-volume inter-processor data traffic.
While these new technologies quickly exceed the capabilities of today's PCI bus, existing interface functions like MP3 audio, V.90 modems, USB, 1394, and 10/100 Ethernet are left to compete for the remaining bandwidth. These functions are now commonly integrated into core logic products.
Higher integration is increasing the number of pins needed to bring these multiple buses into and out of the chip packages. Nearly all of these existing buses are single-ended, requiring additional power and ground pins to provide sufficient current return paths. Reducing pin count helps system designers to reduce power consumption and meet thermal requirements.
In response to these problems, AMD began developing the HyperTransport I/O link architecture in 1997. HyperTransport technology has been designed to provide system architects with significantly more bandwidth, low-latency responses, lower pin counts, compatibility with legacy PC buses, extensibility to new SNA buses, and transparency to operating system software, with little impact on peripheral drivers.
The HyperTransport Technology Solution
HyperTransport technology, formerly codenamed Lightning Data Transport (LDT), was developed at AMD with the help of industry partners to provide a high-speed, high-performance, point-to-point link for interconnecting integrated circuits on a board. With a top signaling rate of 1.6 GHz on each wire pair, a HyperTransport technology link can support a peak aggregate bandwidth of 12.8 Gbytes/s. The HyperTransport specification provides both link- and system-level power management capabilities optimized for processors and other system devices. HyperTransport technology is targeted at networking, telecommunications, computer, and high-performance embedded applications, and any other application in which high speed, low latency, and scalability are necessary.
Original Design Goals
In developing HyperTransport technology, the architects of the technology considered the design goals presented in this section. They wanted to develop a new I/O protocol for in-the-box I/O connectivity that would:
- Improve system performance
  - Provide increased I/O bandwidth
  - Reduce data bottlenecks by moving slower devices out of critical information paths
  - Reduce the number of buses within the system
  - Ensure low-latency responses
  - Reduce power consumption
- Simplify system design
  - Use a common protocol for in-chassis connections to I/O and processors
  - Use as few pins as possible to allow smaller packages and to reduce cost
- Increase I/O flexibility
  - Provide a modular bridge architecture
  - Allow for differing upstream and downstream bandwidth requirements
- Maintain compatibility with legacy systems
  - Complement standard external buses
  - Have little or no impact on existing operating systems and drivers
- Ensure extensibility to new system network architecture (SNA) buses
- Provide highly scalable multiprocessing systems
Flexible I/O Architecture
The resulting protocol defines a high-performance, scalable interconnect between CPU, memory, and I/O devices. Conceptually, the architecture of the HyperTransport I/O link can be mapped onto five layers, in a structure similar to the Open Systems Interconnection (OSI) reference model.
In HyperTransport technology:
- The physical layer defines the physical and electrical characteristics of the protocol. This layer interfaces to the physical world and includes data, control, and clock lines.
- The data link layer includes the initialization and configuration sequence, periodic cyclic redundancy check (CRC), the disconnect/reconnect sequence, information packets for flow control and error management, and doubleword framing for other packets.
- The protocol layer includes the commands, the virtual channels in which they run, and the ordering rules that govern their flow.
- The transaction layer uses the elements provided by the protocol layer to perform actions, such as reads and writes.
- The session layer includes rules for negotiating power management state changes, as well as interrupt and system management activities.
Device Configurations
HyperTransport technology creates a packet-based link implemented on two independent, unidirectional sets of signals. It provides a broad range of system topologies built with three generic device types (see the sketch after this list):

- Cave: a single-link device at the end of the chain.
- Tunnel: a dual-link device that is not a bridge.
- Bridge: a device with a primary link upstream, in the direction of the host, and one or more secondary links.
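As a concrete picture of these building blocks, here is a minimal C sketch; the type and field names are my own illustration, not taken from the specification:

```c
/* Illustrative model of the three generic HyperTransport device types. */
enum ht_device_type {
    HT_CAVE,    /* single-link device terminating the chain     */
    HT_TUNNEL,  /* dual-link device that passes traffic through */
    HT_BRIDGE   /* primary link upstream, >=1 secondary links   */
};

struct ht_device {
    enum ht_device_type type;
    unsigned num_links;   /* 1 for a cave, 2 for a tunnel,
                             2 or more for a bridge */
};
```

A daisy chain is then simply a series of tunnels terminated by a cave, with a bridge at the host end.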
Technical Overview
Physical Layer
Each HyperTransport link consists of two point-to-point unidirectional data paths, as illustrated in the figure.
- Data path widths of 2, 4, 8, and 16 bits can be implemented either upstream or downstream, depending on the device-specific bandwidth requirements.
- Commands, addresses, and data (CAD) all use the same set of wires for signaling, dramatically reducing pin requirements.
HyperTransport Technology Data Paths
All HyperTransport technology commands, addresses, and data travel in packets. All packets are multiples of four bytes (32 bits) in length. If the link uses data paths narrower than 32 bits, successive bit-times are used to complete the packet transfers (see the sketch after the following list). The HyperTransport link was specifically designed to deliver a high-performance and scalable interconnect between CPU, memory, and I/O devices, while using as few pins as possible.
- To achieve very high data rates, the HyperTransport link uses low-swing differential signaling with on-die differential termination.
- To achieve scalable bandwidth, the HyperTransport link permits seamless scalability of both frequency and data width.
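Because every packet is a multiple of four bytes, the number of bit-times a packet occupies follows directly from the link width. A small C sketch of that relationship (the function name is my own):

```c
/* Bit-times needed to move one packet across a link of a given width.
 * Packets are always multiples of 4 bytes; widths of 2, 4, 8, 16, or
 * 32 bits are the ones named in the text. */
static unsigned bit_times(unsigned packet_bytes, unsigned width_bits) {
    return (packet_bytes * 8) / width_bits;
}

/* Example: a 4-byte command takes bit_times(4, 8) == 4 bit-times on an
 * 8-bit link, but only 1 bit-time on a full 32-bit link. */
```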
Minimal Pin Count
The designers of HyperTransport technology wanted to use as few pins as possible to enable smaller packages, reduced power consumption, and better thermal characteristics, while reducing total system cost. This goal is accomplished by using separate unidirectional data paths and very low-voltage differential signaling.
The signals used in HyperTransport technology are summarized in the table below.
- Commands, addresses, and data (CAD) all share the same bits.
- Each data path includes a Control (CTL) signal and one or more Clock (CLK) signals.
  - The CTL signal differentiates commands and addresses from data packets.
  - For every grouping of eight bits or less within the data path, there is a forwarded CLK signal. Clock forwarding reduces clock skew between the reference clock signal and the signals traveling on the link. Multiple forwarded clocks limit the number of signals that must be routed closely in wider HyperTransport links.
- For most signals, there are two pins per bit.
- In addition to CAD, Clock, Control, VLDT power, and ground pins, each HyperTransport device has Power OK (PWROK) and Reset (RESET#) pins. These pins are single-ended because of their low-frequency use.
- Devices that implement HyperTransport technology for use in lower power applications, such as notebook computers, should also implement Stop (LDTSTOP#) and Request (LDTREQ#). These power management signals are used to enter and exit low-power states.
Enhanced Low-Voltage Differential Signaling
The signaling technology used in HyperTransport technology is a type of low-voltage differential signaling (LVDS). However, it is not the conventional IEEE LVDS standard; it is an enhanced LVDS technique developed to evolve with the performance of future process technologies, which is designed to help ensure that the HyperTransport technology standard has a long lifespan. LVDS has been widely used in these types of applications because it requires fewer pins and wires. This also reduces cost and power requirements because the transceivers are built into the controller chips.
HyperTransport technology uses low-voltage differential signaling with a differential impedance (ZOD) of 100 ohms for CAD, Clock, and Control signals, as illustrated in the figure. Characteristic line impedance is 60 ohms. The driver supply voltage is 1.2 volts, instead of the conventional 2.5 volts for standard LVDS. Differential signaling and the chosen impedance provide a robust signaling system for use on low-cost printed circuit boards. Common four-layer PCB materials with specified dielectric, trace, and space dimensions and tolerances, or controlled impedance boards, are sufficient to implement a HyperTransport I/O link. The differential signaling permits trace lengths up to 24 inches for 800 Mbit/s operation.

Enhanced Low-Voltage Differential Signaling (LVDS)
At first glance, the signaling used to implement a HyperTransport I/O link would seem to increase pin counts because it requires two pins per bit and uses separate upstream and downstream data paths. However, the increase in signal pins is offset by two factors:
- By using separate data paths, HyperTransport I/O links are designed to operate at much higher frequencies than existing bus architectures. This means that buses delivering equivalent or better bandwidth can be implemented using fewer signals.
- Differential signaling provides a return current path for each signal, greatly reducing the number of power and ground pins required in each package.
Greatly Increased Bandwidth
Commands, addresses, and data traveling on a HyperTransport link are double-pumped: transfers take place on both the rising and falling edges of the clock signal. For example, if the link clock is 800 MHz, the data rate is 1600 MHz.
- An implementation of HyperTransport links with 16 CAD bits in each direction and a 1.6-GHz data rate provides bandwidth of 3.2 Gbytes per second in each direction, for an aggregate peak bandwidth of 6.4 Gbytes/s, or 48 times the peak bandwidth of a 33-MHz PCI bus.
- A low-cost, low-power HyperTransport link using two CAD bits in each direction and clocked at 400 MHz provides 200 Mbytes/s of bandwidth in each direction, or nearly four times the peak bandwidth of PCI 32/33 (see the sketch below).
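The sketch below (plain C, with a helper name of my own) reproduces these comparisons, including the 48x figure against 32-bit, 33-MHz PCI:

```c
#include <stdio.h>

/* Bandwidth of one double-pumped HyperTransport link direction,
 * in Mbytes/s: two transfers per clock, 8 bits per byte. */
static double ht_mbytes_per_s(unsigned width_bits, double clock_mhz) {
    return width_bits * clock_mhz * 2.0 / 8.0;
}

int main(void) {
    double pci  = 32 * 33.0 / 8.0;             /* ~132 Mbytes/s, PCI 32/33 */
    double ht16 = ht_mbytes_per_s(16, 800.0);  /* 3200 Mbytes/s per dir    */
    double ht2  = ht_mbytes_per_s(2, 400.0);   /* 200 Mbytes/s per dir     */

    printf("16-bit @ 800 MHz: %.0f Mbytes/s per direction, "
           "%.1f Gbytes/s aggregate (%.0fx PCI)\n",
           ht16, ht16 * 2.0 / 1000.0, ht16 * 2.0 / pci);
    printf("2-bit @ 400 MHz: %.0f Mbytes/s per direction\n", ht2);
    return 0;
}
```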
Data Link Layer
The data link layer includes the initialization and configuration sequence, periodic cyclic redundancy check (CRC), the disconnect/reconnect sequence, information packets for flow control and error management, and doubleword framing for other packets.
Initialization
HyperTransport technology-enabled devices with transmitter and receiver links of equal width can be easily and directly connected. Devices with asymmetric data paths can also be linked together easily: extra receiver pins are tied to logic 0, while extra transmitter pins are left open. During power-up, when RESET# is asserted and the Control signal is at logic 0, each device transmits a bit pattern indicating the width of its receiver. Logic within each device determines the maximum safe width for its transmitter. While this may be narrower than the optimal width, it provides reliable communications between devices until configuration software can optimize the link to the widest common width.
For applications that typically send the bulk of the data in one direction, component vendors can save costs by implementing a wide path for the majority of the traffic and a narrow path in the lesser used direction. Devices are not required to implement equal width upstream and downstream links.
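A minimal C sketch of this negotiation step, assuming hypothetical field names (the real mechanism is a bit pattern transmitted while RESET# is asserted):

```c
/* Each device advertises its receiver width at power-up; each
 * transmitter is then clamped to what the far-end receiver accepts. */
struct ht_port {
    unsigned tx_width_bits;  /* widest path this device can drive   */
    unsigned rx_width_bits;  /* widest path this device can receive */
};

/* Safe transmit width for 'self' talking to 'peer'. Firmware may
 * later reprogram the link to the widest common width. */
static unsigned negotiate_tx_width(const struct ht_port *self,
                                   const struct ht_port *peer) {
    return self->tx_width_bits < peer->rx_width_bits
         ? self->tx_width_bits
         : peer->rx_width_bits;
}
```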
Protocol and Transaction Layers
The protocol layer includes the commands, the virtual channels in which they run, and the ordering rules that govern their flow. The transaction layer uses the elements provided by the protocol layer to perform actions, such as read requests and responses.
Commands
All HyperTransport technology commands are either four or eight bytes long and begin with a 6-bit command type field. The most commonly used commands are Read Request, Read Response, and Write. A virtual channel contains requests or responses with the same ordering priority.
When the command requires an address, the last byte of the command is concatenated with an additional four bytes to create a 40-bit address.
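As an illustration of the command format just described, here is a C sketch; the exact bit and byte placement is an assumption made for readability, not the specification's field layout:

```c
#include <stdint.h>

/* Commands are 4 or 8 bytes and begin with a 6-bit command type.
 * For addressed commands, the last byte of the 4-byte command is
 * concatenated with four more bytes to form a 40-bit address. */
struct ht_command {
    uint8_t bytes[8];  /* 4-byte command + optional 4 address bytes */
};

static unsigned command_type(const struct ht_command *c) {
    return c->bytes[0] & 0x3F;  /* low 6 bits: command type */
}

static uint64_t command_address(const struct ht_command *c) {
    /* byte 3 supplies the top 8 bits; bytes 4..7 supply the rest
     * (assumed ordering for this sketch) */
    return ((uint64_t)c->bytes[3] << 32) | ((uint64_t)c->bytes[4] << 24) |
           ((uint64_t)c->bytes[5] << 16) | ((uint64_t)c->bytes[6] << 8) |
            (uint64_t)c->bytes[7];
}
```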
Data Packets
A Write command or a Read Response command is followed by data packets. Data packets are four to 64 bytes long in four-byte increments. Transfers of less than four bytes are padded to the four-byte minimum. Byte granularity reads and writes are supported with a four-byte mask field preceding the data. This is useful when transferring data to or from graphics frame buffers where the application should only affect certain bytes that may correspond to one primary color or other characteristics of the displayed pixels. A control bit in the command indicates whether the writes are byte or doubleword granularity.
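To make the byte-mask mechanism concrete, here is a hedged C sketch: the four-byte (32-bit) mask carries one bit per data byte, so only the flagged bytes, for example those holding a single color channel in a frame buffer, are touched. Names are illustrative:

```c
#include <stdint.h>
#include <stddef.h>

/* Apply a byte-granularity write: mask bit i set means byte i of the
 * payload is actually written. A 32-bit mask covers up to 32 bytes. */
static void apply_masked_write(uint8_t *dest, const uint8_t *src,
                               uint32_t mask, size_t nbytes) {
    for (size_t i = 0; i < nbytes && i < 32; i++)
        if (mask & (1u << i))
            dest[i] = src[i];
}
```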

Address Mapping
Reads and writes to PCI I/O space are mapped into a separate address range, eliminating the need for separate memory and I/O control lines or control bits in read and write commands.
Additional address ranges are used for in-band signaling of interrupts and system management messages. A device signaling an interrupt performs a byte-granularity write command targeted at the reserved address space. The host bridge is responsible for delivery of the interrupt to the internal target.
I/O Stream Identification
Communications between the HyperTransport host bridge and other HyperTransport technology-enabled devices use the concept of streams. A HyperTransport link can handle multiple streams between devices simultaneously. HyperTransport technology devices are daisy-chained, so some streams may be passed through one node to the next.
Packets are identified as belonging to a stream by the Unit ID field in the packet header. There can be up to 32 unique IDs within a HyperTransport chain. Nodes within a HyperTransport chain may contain multiple units. It is the responsibility of each node to determine whether information sent to it is targeted at a device within it. If not, the information is passed through to the next node. If a device is located at the end of the chain and it is not the target device, an error response is passed back to the host bridge.
Commands and responses sent from the host bridge have a Unit ID of zero. Commands and responses sent from other HyperTransport technology devices on the chain have their own unique ID.
If a bus-mastering HyperTransport technology device like a RAID controller sends a write command to memory above the host bridge, the command will be sent with the Unit ID of the RAID controller. HyperTransport technology permits posted write operations, so these devices do not wait for an acknowledgement before proceeding. This is useful for large data transfers that will be buffered at the receiving end.
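The forwarding rule in the preceding paragraphs can be summarized in a few lines of C; the types and names here are my own illustration of the described behavior:

```c
#include <stdbool.h>

#define MAX_UNIT_IDS 32  /* up to 32 unique unit IDs per chain */

struct ht_node {
    bool has_unit[MAX_UNIT_IDS];  /* unit IDs hosted by this node */
    struct ht_node *next;         /* next node in the daisy chain */
};

enum route_result { ACCEPT, FORWARD, ERROR_TO_HOST };

static enum route_result route_packet(const struct ht_node *node,
                                      unsigned unit_id) {
    if (unit_id < MAX_UNIT_IDS && node->has_unit[unit_id])
        return ACCEPT;             /* targeted at a unit in this node */
    if (node->next)
        return FORWARD;            /* pass through to the next node   */
    return ERROR_TO_HOST;          /* end of chain and not the target */
}
```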

I/O Streams Use Unit IDs
Ordering Rules
Within streams, the HyperTransport I/O link protocol implements the same basic ordering rules as PCI. Additionally, there are features that allow these ordering rules to be relaxed: a Fence command aligns posted cycles in all streams, and a Flush command flushes the posted write channel in one stream. These features are helpful in handling protocols for bridges to other buses such as PCI, InfiniBand, and AGP.
Session Layer
The session layer includes link width optimization and link frequency optimization, along with interrupt and power state capabilities.
Standard Plug 'n Play Conventions
Devices enabled with HyperTransport technology use standard Plug 'n Play conventions for exposing the control registers that enable configuration routines to optimize the width of each data path. AMD registered the HyperTransport Specific Capabilities Block with the PCI SIG. This Capabilities Block, illustrated in the figure, permits devices enabled with HyperTransport technology to be configured by any operating system that supports a PCI architecture.
HyperTransport Technology Capabilities Block
Since system enumeration and power-up are implementation-specific, it is assumed that system firmware will recognize the Capabilities Block and use the information within it to configure all HyperTransport host bridges in the system.
Once the host bridges are identified, devices enabled with HyperTransport technology that are connected to the bridges can be enumerated just as they are for PCI devices. The configuration information that is collected, and the structures created by this process, will look to a Plug 'n Play-aware operating system (OS) just like those of PCI devices. In short, a Plug 'n Play-aware OS does not require any modification to recognize and configure devices enabled with HyperTransport technology.
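Because the Capabilities Block is exposed through standard PCI configuration space, enumeration code can locate it by walking the usual capabilities list. A hedged C sketch: read_config8 is a hypothetical platform accessor, and 0x08 is the PCI capability ID assigned to HyperTransport:

```c
#include <stdint.h>

/* Walk a device's PCI capabilities list looking for the
 * HyperTransport Capabilities Block. */
extern uint8_t read_config8(unsigned bus, unsigned dev, unsigned off);

#define CAP_ID_HYPERTRANSPORT 0x08
#define CAP_PTR_OFFSET        0x34  /* standard capabilities pointer */

static int find_ht_capability(unsigned bus, unsigned dev) {
    uint8_t ptr = read_config8(bus, dev, CAP_PTR_OFFSET);
    while (ptr != 0) {
        if (read_config8(bus, dev, ptr) == CAP_ID_HYPERTRANSPORT)
            return ptr;                         /* offset of the block */
        ptr = read_config8(bus, dev, ptr + 1);  /* next capability     */
    }
    return -1;  /* no HyperTransport capability found */
}
```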
Minimal Device Driver Porting
Drivers for devices enabled with HyperTransport technology are unique to the devices, just as they are for PCI I/O devices, but the similarities are great. Companies that build a PCI I/O device and then create an equivalent device enabled with HyperTransport technology should have no problems porting the driver. To make porting easier, the chain from a host bridge is enumerated like a PCI bus, and devices and functions within a device enabled with HyperTransport technology are enumerated like PCI devices and functions, as shown in the figure.

Link Width Optimization
The initial link-width negotiation sequence may result in links that do not operate at their maximum width potential. All 16-bit, 32-bit, and asymmetrically-sized configurations must be enabled by a software initialization step. At cold reset, all links power up and synchronize according to the protocol. Firmware (or BIOS) then interrogates all the links in the system, reprograms them to the desired width, and takes the system through a warm reset to change the link widths. Devices that implement the LDTSTOP# signal can disconnect and reconnect rather than enter warm reset to invoke link width changes.
Link Frequency Initialization
At cold reset, all links power up with 200-MHz clocks. For each link, firmware reads a specific register of each device to determine the supported clock frequencies. The reported frequency capability, combined with system-specific information about the board layout and power requirements, is used to determine the frequency to be used for each link. Firmware then writes the two frequency registers to set the frequency for each link. Once all devices have been configured, firmware initiates an LDTSTOP# disconnect or RESET# of the affected chain to cause the new frequency to take effect. The sketch below puts this together with the width-optimization step from the previous section.
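Here is a hedged C sketch of the whole bring-up flow; the register access and names are abstractions introduced for illustration, not spec-defined interfaces:

```c
/* Firmware bring-up of one link after cold reset: links come up narrow
 * and at 200 MHz; firmware reads capabilities, programs width and
 * frequency, then triggers an LDTSTOP# disconnect (or warm reset) so
 * the new settings take effect on reconnect. */
struct ht_link_regs {
    unsigned cap_width_bits;   /* widest width the devices support  */
    unsigned cap_freq_mhz;     /* highest clock the devices support */
    unsigned prog_width_bits;  /* width firmware programs           */
    unsigned prog_freq_mhz;    /* frequency firmware programs       */
};

static void optimize_link(struct ht_link_regs *link,
                          unsigned board_limit_mhz) {
    /* board layout and power constraints may cap the clock */
    link->prog_freq_mhz = link->cap_freq_mhz < board_limit_mhz
                        ? link->cap_freq_mhz : board_limit_mhz;
    link->prog_width_bits = link->cap_width_bits; /* widest common width */
    /* ...then assert LDTSTOP# (disconnect/reconnect) or warm RESET#
     * on the affected chain to apply the new width and frequency */
}
```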
Implementation Examples
Daisy Chain
HyperTransport technology has a daisy-chain topology, giving the opportunity to connect multiple HyperTransport input/output bridges to a single channel. HyperTransport technology is designed to support up to 32 devices per channel and can mix and match components with different link widths and speeds. This capability makes it possible to create HyperTransport technology devices that are building blocks capable of spanning a range of platforms and market segments. For example, a low-cost entry in a mainstream PC product line might be designed with an AMD Duron processor. With very little redesign work, as shown in the figure, this PC design could be upgraded to a high-end workstation by substituting high-end AMD Athlon processors and bridges with HyperTransport technology to expand the platform's I/O capabilities. The figure also illustrates the concept of tunnels, in which multiple HyperTransport tunnels can be daisy-chained onto a single I/O link. A tunnel can be viewed as a basic building block for complex system designs.
Switched Environment
A number of industry partners are developing HyperTransport switches, allowing engineers to have a great deal of flexibility in their system designs. In this type of configuration, a HyperTransport I/O switch handles multiple HyperTransport I/O data streams and manages the interconnection between the attached HyperTransport devices. For example, a four-port HyperTransport switch could aggregate data from multiple downstream ports into a single high-speed uplink, or it could route port-to-port connections. A switched environment allows multiple high-speed data paths to be linked while simultaneously supporting slower speed buses.
HYPER TRANSPORT TECHNOLOGY CONSORTIUM
The Consortium is a non-profit organization whose membership is open to any commercial or educational organization. It manages the HyperTransport Specification and promotes the technology to the industry at large. Promoter and Contributor members are eligible for membership in Technical and Marketing Task Force groups that manage the specification and direct the marketing outreach programs.
Conclusion
HyperTransport technology is a new high-speed, high-performance, point-to-point link for integrated circuits. It provides a universal connection designed to reduce the number of buses within the system, provide a high-performance link for embedded applications, and enable highly scalable multiprocessing systems. It is designed to enable the chips inside PCs and networking and communications devices to communicate with each other up to 48 times faster than with existing technologies. HyperTransport technology provides an extremely fast connection that complements externally visible bus standards like PCI, as well as emerging technologies like InfiniBand and Gigabit Ethernet. HyperTransport technology is truly the universal solution for in-the-box connectivity. Doubtless, the future will see tremendous advancement of HyperTransport technology, and it will bring a revolution in bus architecture.