02-04-2011, 02:34 PM
[attachment=11575]
ABSTRACT
Intel’s introduction of Hyper-Threading technology represents a significant improvement in processor utilization and performance. This technology boosts system performance without going to a higher clock rate or adding more processors. This improvement is achieved by making multiple instruction streams, called threads, internally available to a single processor at the same time. These threads allow the processor the opportunity to better schedule the use of internal resources and improve utilization.
HP servers offer this new technology by making use of Intel® Xeon processors, which incorporate Hyper-Threading. This technology brief describes the Hyper-Threading concept, as well as its benefits and limitations for the user. Some HP performance test results are also included to show the improvement gained by using Hyper-Threading.
Introduction
Hyper-Threading technology is a groundbreaking innovation from Intel that enables multi-threaded server software applications to execute threads in parallel within each processor in a server platform. The Intel® Xeon™ processor family uses Hyper-Threading technology, along with the Intel® NetBurst™ microarchitecture, to increase computing power and throughput for today’s Internet, e-Business, and enterprise server applications. This level of threading technology has never been seen before in a general-purpose microprocessor. Hyper-Threading technology helps increase transaction rates, reduce end-user response times, and enhance business productivity, providing a competitive edge to e-Businesses and the enterprise. The Intel® Xeon™ processor family for servers represents the next leap forward in processor design and performance, being the first Intel® processor to support thread-level parallelism on a single processor.
With processor and application parallelism becoming more prevalent, today’s server platforms are increasingly turning to threading as a way of increasing overall system performance. Server applications have been threaded (split into multiple streams of instructions) to take advantage of multiple processors. Multi-processing-aware operating systems can schedule these threads for processing in parallel, across multiple processors within the server system. These same applications can run unmodified on the Intel® Xeon™ processor family for servers and take advantage of thread-level-parallelism on each processor in the system. Hyper-Threading technology complements traditional multi-processing by offering greater parallelism and performance headroom for threaded software.
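The kind of threading described above — splitting an application into multiple instruction streams that the OS can schedule in parallel — can be sketched in Python with the standard-library `ThreadPoolExecutor`. The request-handling function and data here are illustrative, not taken from the brief:

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(request_id):
    # Stand-in for per-request server work; each call runs in its own
    # thread, an independent instruction stream the OS may schedule on
    # any available (logical or physical) processor.
    return request_id * 2

# Four worker threads service eight requests; a multi-processing-aware
# OS is free to run these threads in parallel across processors.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(handle_request, range(8)))

print(results)  # [0, 2, 4, 6, 8, 10, 12, 14]
```

Because `pool.map` preserves input order, the results come back in request order even though the threads may complete out of order.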
Hyper-Threading
This new technology from Intel enables multi-threaded applications to execute threads in parallel on each individual processor. Available on Intel Xeon processors, Hyper-Threading provides the user with increased computing power to meet the needs of today’s server applications.
Need for the technology
Improving processor utilization has been an industry goal for years. Processor speeds have advanced until a typical processor today can run at frequencies over 2 gigahertz, but much of the rest of the system is not capable of running at that speed. To enable performance improvements, memory caches have been integrated into the processor to minimize the long delays that can result from accessing main memory. Xeon processors, for example, now include three cache levels on the die.
Large server-based applications tend to be memory intensive because their access patterns are difficult to predict, and their working data sets are quite large. Together, these factors can create bottlenecks regardless of memory prefetching techniques, and the resulting latency only gets worse when pointer-intensive applications are executed. Any misprediction can force the pipeline to be cleared, incurring a delay while it is refilled.
It is this latency that drives processor utilization down. Despite improvements in application development and parallel-processing implementations, higher utilization rates have remained an unmet goal.
What is Hyper-Threading?
Hyper-Threading Technology enables one physical processor to execute two separate threads at the same time. To achieve this, Intel designed the Xeon processor with the usual processor core, but with two Architectural State devices. Each Architectural State (AS) tracks the flow of a thread being executed by core resources.
After power-up and initialization, these two internal Architectural States define two logical processors. Individually they can be halted, interrupted, or can execute a specific thread independently of the other logical processor. Each AS has an instruction pointer, advanced programmable interrupt controller (APIC) registers, general-purpose registers, and machine state registers.
The two logical processors then share the remaining physical execution resources. An application or operating system (OS) can submit threads to two different logical processors just as it would in a traditional multiprocessor system.
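From user space, these logical processors are simply what the OS reports as its processor count. A small sketch using Python's standard-library `os.cpu_count()`, which counts logical processors rather than physical packages (the doubling assumes Hyper-Threading is enabled on the machine):

```python
import os

# cpu_count() reports *logical* processors: on a dual-processor Xeon
# server with Hyper-Threading enabled it would report 4, i.e. two
# logical processors per physical processor.
logical = os.cpu_count()
print(f"OS sees {logical} logical processor(s)")
```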
Operating Systems and Applications
A system with processors that use Hyper-Threading technology appears to the operating system and application software to have twice the number of processors than it physically has. Operating systems manage logical processors as they do physical processors, scheduling runnable tasks or threads to logical processors. However, for best performance, the operating system should implement two optimizations.
The first is to use the HALT instruction when one logical processor is active and the other is not. HALT allows the processor to transition to either ST0 or ST1 mode, in which only one logical processor remains active. An operating system that does not use this optimization would instead execute, on the idle logical processor, a sequence of instructions that repeatedly checks for work to do. This so-called “idle loop” can consume significant execution resources that could otherwise be used to make faster progress on the other, active logical processor.
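HALT itself is a privileged instruction, but the resource cost of an idle loop has a rough user-space analogue: a spin loop keeps consuming execution resources, while a blocking wait yields them until work arrives. A sketch of that contrast in Python — an analogy only, not the actual OS mechanism, and the names are our own:

```python
import threading

work_ready = threading.Event()
progress = {"spins": 0}

def idle_loop():
    # Repeatedly checking for work, like an OS that never issues HALT:
    # every iteration consumes execution resources that the sibling
    # logical processor could otherwise use.
    while not work_ready.is_set():
        progress["spins"] += 1

def halted_wait():
    # Blocking until work arrives -- a user-space analogue of HALT:
    # the thread is descheduled and shared resources are freed.
    work_ready.wait()

spinner = threading.Thread(target=idle_loop)
sleeper = threading.Thread(target=halted_wait)
spinner.start()
sleeper.start()

work_ready.set()   # work arrives; both threads can now finish
spinner.join()
sleeper.join()
```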
The second optimization is in scheduling software threads to logical processors. In general, for best performance, the operating system should schedule threads to logical processors on different physical processors before scheduling multiple threads to the same physical processor. This optimization allows software threads to use different physical execution resources when possible.
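Operating systems expose this placement decision to applications through processor affinity. On Linux, for example, Python's `os.sched_setaffinity` can pin a process to chosen logical processors; the helper below is a minimal sketch (Linux-only, and the function name is our own):

```python
import os

def pin_to_logical_processors(cpus):
    """Pin the calling process to the given set of logical CPU ids.

    Returns the resulting affinity set, or None where the call is
    unavailable (e.g. macOS and Windows lack sched_setaffinity).
    """
    if not hasattr(os, "sched_setaffinity"):
        return None
    os.sched_setaffinity(0, cpus)   # 0 means "the calling process"
    return os.sched_getaffinity(0)

# Example: restrict this process to logical processor 0. A scheduler
# following the optimization above would instead spread threads across
# different physical processors first.
affinity = pin_to_logical_processors({0})
```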
Benefits of Hyper-Threading Technology
Hyper-Threading technology can result in many benefits to e-Business and the enterprise:
• Improved reaction and response times for end-users and customers
• Increased number of users that a server system can support
• Ability to handle increased server workloads
• Higher transaction rates for e-Businesses
• Greater end-user and business productivity
• Compatibility with existing server applications and operating systems
• Headroom to take advantage of enhancements offered by future software releases
• Headroom for future business growth and new solution capabilities