dual core processing full report
#1


ABSTRACT
Computational photography combines plentiful computing, digital sensors, modern optics, actuators, probes and smart lights to escape the limitations of traditional film cameras and enables novel imaging applications. Unbounded dynamic range, variable focus, resolution, and depth of field, hints about shape, reflectance, and lighting, and new interactive forms of photos that are partly snapshots and partly videos are just some of the new applications found in Computational Photography.
Computational photography extends digital photography by providing the capability to record much more information and by offering the possibility of processing this information afterward.
INTRODUCTION
Computational photography combines plentiful computing, digital sensors, modern optics, actuators, probes and smart lights to escape the limitations of traditional film cameras and enables novel imaging applications. Unbounded dynamic range, variable focus, resolution, and depth of field, hints about shape, reflectance, and lighting, and new interactive forms of photos that are partly snapshots and partly videos are just some of the new applications found in Computational Photography. The computational techniques encompass methods from modification of imaging parameters during capture to sophisticated reconstructions from indirect measurements. Many ideas in computational photography are still relatively new to digital artists and programmers. A larger problem is that a multi-disciplinary field that combines ideas from computational methods and modern digital photography involves a steep learning curve. For example, photographers are not always familiar with advanced algorithms now emerging to capture high dynamic range images, but image processing researchers face difficulty in understanding the capture and noise issues in digital cameras. The new capture methods include sophisticated sensors, electromechanical actuators and on-board processing. The methods can achieve a 'photomontage' by optimally fusing information from multiple images, improve signal to noise ratio and extract scene features such as depth edges.
Computational photography extends digital photography by providing the capability to record much more information and by offering the possibility of processing this information afterward.
TRADITIONAL FILM-LIKE PHOTOGRAPHY
In traditional film-like digital photography, camera images represent a view of the scene via a 2D array of pixels. With film-like photography, the captured image is a 2D projection of the scene. Due to limited capabilities of the camera, the recorded image is a partial representation of the view. Nevertheless, the captured image is ready for human consumption: what you see is what you almost get in the photo.
Analog and digital photography share one main limitation: They only record intensities and colors of light rays that a simple lens system projects linearly onto the image plane at a single point in time and under a fixed scene illumination. This is still mainly the principle of the camera obscura that has been known since antiquity. Thus, most of the light rays that are propagated through space and time are not recorded.
Figure: A lens focuses scene rays onto a detector; each image pixel (e.g., pixel B) records the light arriving at one sensor location.

HISTORY & EVOLUTION OF COMPUTATIONAL PHOTOGRAPHY
The term was first used by Steve Mann, and possibly others, to describe their own research. More recently its definition has evolved to cover a number of subject areas in computer graphics, computer vision, and applied optics, organized according to a taxonomy proposed by Shree Nayar.
Dennis Gabor and Gabriel Jonas Lippmann, for example, addressed part of this problem on the analog side when they invented holography and what is known as Lippmann photography. Yet, digitizing photographic recordings does allow postprocessing them digitally. Therefore, computational photography will enable features such as 3D recording, digital refocusing, synthetic re-illumination, improved motion compensation and noise reduction, and much more.
The transition from analog to digital photography has certainly been a big step, and it is almost complete, although a few professionals still prefer film. Digital photography has opened many new possibilities, such as immediate image preview, postediting, or recording of short movie clips. Today's megapixel resolution of digital cameras can easily keep up with the quality of analog film for a broad range of consumer and professional applications.
This was reason enough for some of the major camera manufacturers such as Kodak, Canon, and Nikon to downscale or to cease their production of analog film cameras and film. Yet another big step lies ahead of us, and it is not too far off in the distance. It is called computational photography.

COMPUTATIONAL PHOTOGRAPHY: OPTICS, SENSORS AND COMPUTATIONS
Computational imaging refers to any image formation method that involves a digital computer. Computational photography refers broadly to computational imaging techniques that enhance or extend the capabilities of digital photography. The output of these techniques is an ordinary photograph, but one that could not have been taken by a traditional camera.
There are four elements of Computational Photography.
(i) Generalized Optics
(ii) Generalized Sensor
(iii) Processing, and
(iv) Generalized Illumination
Figure: Elements of computational photography: generalized optics (a 4D ray bender), a generalized sensor (up to a 4D ray sampler), and computations (ray reconstruction) that produce the final picture.

PIXELS VERSUS RAYS
In traditional film-like digital photography, camera images represent a view of the scene via a 2D array of pixels. Computational Photography attempts to understand and analyze a ray-based representation of the scene. The camera optics encode the scene by bending the rays, the sensor samples the rays over time, and the final 'picture' is decoded from these encoded samples. The lighting (scene illumination) follows a similar path from the source to the scene via optional spatio-temporal modulators and optics. In addition, the processing may adaptively control the parameters of the optics, sensor and illumination.
-Lighting: ray sources
-Optics: ray bending/folding devices
-Sensor: measure light
-Processing: assess it
-Display: reproduce it
The ancient Greeks said that 'eye rays' sweep across the world to feel its contents...
THE GREEKS: Photography seems natural because what we gather can be described by ray geometry. If we think of our retina as a sensory organ, we 'sweep' it across the scene, as if light let our retina reach out and touch what is around us. So let's look further into that: let's consider light as a way of exploring our surroundings without contact, a magical way of transporting the perceivable properties of our surroundings into our brain. Even the Greeks knew this idea well; they used rays in their explorations of vision, and described how rays going through a small aperture map angle to position.
ENCODING AND DECODING
The encoding and decoding process differentiates Computational Photography from traditional 'film-like digital photography'. With film-like photography, the captured image is a 2D projection of the scene. Due to limited capabilities of the camera, the recorded image is a partial representation of the view. Nevertheless, the captured image is ready for human consumption: what you see is what you almost get in the photo. In Computational Photography, the goal is to achieve a potentially richer representation of the scene during the encoding process. In some cases, Computational Photography reduces to 'Epsilon Photography', where the scene is recorded via multiple images, each captured by epsilon variation of the camera parameters. For example, successive images (or neighboring pixels) may have a different exposure, focus, aperture, view, illumination, or instant of capture. Each setting allows recording of partial information about the scene and the final image is reconstructed from these multiple observations. In other cases, Computational Photography techniques lead to Coded Photography where the recorded image appears distorted or random to a human observer. But the corresponding decoding recovers valuable information about the scene.
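To make the epsilon photography idea concrete, here is a minimal sketch in C++ of fusing several differently exposed frames of the same scene into one radiance estimate; it is an illustration, not code from any particular system, and the function name, the hat-shaped weighting, and the toy data are all invented for the example.

    #include <cmath>
    #include <iostream>
    #include <vector>

    // Fuse several exposures of one scene into a per-pixel radiance estimate.
    // frames[k][i] is pixel i of frame k, scaled to [0,1]; times[k] is the
    // exposure time of frame k. (Hypothetical helper for illustration only.)
    std::vector<double> fuseExposures(const std::vector<std::vector<double>>& frames,
                                      const std::vector<double>& times) {
        const size_t n = frames[0].size();
        std::vector<double> radiance(n, 0.0), wsum(n, 0.0);
        for (size_t k = 0; k < frames.size(); ++k) {
            for (size_t i = 0; i < n; ++i) {
                const double v = frames[k][i];
                // Hat weight: trust mid-tones, distrust under/over-exposed pixels.
                const double w = 1.0 - std::fabs(2.0 * v - 1.0);
                radiance[i] += w * (v / times[k]); // exposure-normalized value
                wsum[i] += w;
            }
        }
        for (size_t i = 0; i < n; ++i)
            if (wsum[i] > 0.0) radiance[i] /= wsum[i];
        return radiance;
    }

    int main() {
        // Three toy two-pixel "images" of the same scene at different exposures.
        std::vector<std::vector<double>> frames = {{0.1, 0.7}, {0.2, 0.9}, {0.4, 0.98}};
        std::vector<double> times = {0.25, 0.5, 1.0};
        for (double r : fuseExposures(frames, times)) std::cout << r << '\n';
        return 0;
    }

Each input frame records only partial information (dark frames capture highlights, bright frames capture shadows); the decoded output combines them, which is exactly the encode-then-decode structure described above.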
COMPUTATIONAL PHOTOGRAPHY - COMPONENTS
There are four elements of Computational Photography.
(i) Generalized Optics
(ii) Generalized Sensor
(iii) Processing, and
(iv) Generalized Illumination

Generalized Optics : It consists of the SAMP camera and the camera array.
The SAMP camera is a Single Axis Multiple Parameters camera; we can have multiple parameters in a single-axis camera. The parameters varied include focus, exposure, and aperture.
A camera array is an array of cameras of the same type that are arranged in a particular order to get a continuous image of an object. We can create a foreground segmentation of the picture, and from that we can extract the foreground image from the picture.
Generalized Sensor : It consists of gradient sensing and the flutter shutter. A gradient-sensing camera senses the difference between neighboring pixels, and high dynamic range images can be obtained by sensing pixel differences with locally adaptive gain.
Processing : It mainly consists of image fusion. Different views of the same picture are taken by the same camera by changing a few parameters of the camera, and finally these images are fused together to form a new real-world capture.
Generalized Illumination : It mainly consists of multi-flash illumination, a new illumination method built around a light source that is also called a programmable light.
CONCLUSION
Computational photography is an extension of digital photography. It provides the capability to record much more information and offers the possibility of processing this information afterward. Computational imaging refers to any image formation method that involves a digital computer. Computational photography refers broadly to computational imaging techniques that enhance or extend the capabilities of digital photography.
Computational photography transforms photography from single-instant-direction toward multiple-instant-direction imaging and illumination. It combines plentiful computing, digital sensors, modern optics, actuators, probes and smart lights to escape the limitations of traditional film cameras and enables novel imaging applications. Unbounded dynamic range, variable focus, resolution, and depth of field, hints about shape, reflectance, and lighting are just some of the new applications found in Computational Photography.
FUTURE SCOPE
Computational photography extends digital photography. Digital photography has opened many new possibilities, such as immediate image preview, post-editing, or recording of short movie clips. Today's megapixel resolution of digital cameras can easily keep up with the quality of analog film for a broad range of consumer and professional applications.
This was reason enough for some of the major camera manufacturers such as Kodak, Canon, and Nikon to downscale or to cease their production of analog film cameras and film. Yet another big step lies ahead of us, and it is not too far off in the distance. It is called computational photography.
Most digital cameras allow capturing small movie sequences. Instead of a simple playback, however, future cameras will register the corresponding video frames into a spacetime slab. This data structure, together with appropriate processing techniques, offers higher image quality (less noise, larger depth of field, higher dynamic range) and opens completely new possibilities, such as consistent group shooting, motion-invariant image stitching, or playback of motion loops. Michael F. Cohen and Richard Szeliski describe this technique in "The Moment Camera."
New interactive forms of photos that are partly snapshots and partly videos are just some of the new applications found in Computational Photography.
New imaging devices capture a much larger number of light rays traveling in many parameterized directions, called the 4D light field. This novel concept truly revolutionizes digital imaging in many areas and enables new applications, such as multiperspective panoramas and synthetic aperture photography.
Research breakthroughs in 2D image analysis/synthesis, coupled with the growth of digital photography as a practical and artistic medium, are creating a convergence between vision, graphics, and photography. A similar trend is occurring with digital video. At the same time, new sensing modalities and faster CPUs have given rise to new computational imaging techniques in many scientific disciplines. Finally, just as CAD/CAM and visualization were the driving markets for computer graphics research in the 1970s and 1980s, and entertainment and gaming are the driving markets today, a driving market 10 years from now will be consumer digital photography and video. In light of these trends, we can consider computational photography to be the next big step in computer graphics.
BIBLIOGRAPHY
1. Special issue on Computational Photography, IEEE Computer, August 2006.
2. Symposium on Computational Photography and Video, MIT, May 23-25, 2005.
3. http://photo.csail.mit.edu/
4. wikipedia.org
CONTENTS
INTRODUCTION
TRADITIONAL FILM-LIKE PHOTOGRAPHY
HISTORY AND EVOLUTION OF COMPUTATIONAL PHOTOGRAPHY
COMPUTATIONAL PHOTOGRAPHY: OPTICS, SENSORS AND COMPUTATIONS
PIXELS VERSUS RAYS
ENCODING AND DECODING
COMPUTATIONAL PHOTOGRAPHY: COMPONENTS
CONCLUSION
FUTURE SCOPE
BIBLIOGRAPHY
ABSTRACT
A dual-core CPU combines two independent processors and their respective caches and cache controllers onto a single silicon chip, or integrated circuit.
The dual-core type of processor falls into the architectural class of a tightly-coupled multiprocessor. International Business Machines (IBM)'s POWER4, released in 2000, was the first dual-core microprocessor on the market.
Mainly two technologies are used in dual core processors: SMP (symmetric multi-processing) and Hyper-Threading.
Proximity of two CPU cores on the same die has the advantage that the cache coherency circuitry can operate at a much higher clock rate than is possible if the signals have to travel off-chip, so combining equivalent CPUs on a single die significantly improves the performance of cache snoop operations.
A dual-core processor uses slightly less power than two coupled single-core processors, principally because of the increased power required to drive signals external to the chip and because the smaller silicon process geometry allows the cores to operate at lower voltages.



1. Introduction
A dual-core CPU combines two independent processors and their respective caches and cache controllers onto a single silicon chip, or integrated circuit. IBM's POWER4 was the first microprocessor to incorporate two cores on a single die. Various dual-core CPUs are being developed by companies such as Motorola, Intel and AMD, and began to appear in consumer products in 2005. Dual-core CPU technology first became practically viable in 2001, as 180-nm CMOS process technology became feasible for volume production. At this size, multiple copies of the largest microprocessor architectures could be incorporated onto a single production die. (Alternative uses of this newly available "real estate" include widening the bus and internal registers of existing CPU cores, or incorporating more high-speed cache memory on-chip.)
Figure: Conceptual diagram of a dual-core chip, with CPU-local Level 1 caches and shared, on-chip Level 2 caches behind a common bus interface.


2. Multi-Core Processor Architecture
Explained most simply, multi-core processor architecture entails silicon design engineers placing two or more Pentium® processor-based "execution cores," or computational engines, within a single processor. This multi-core processor plugs directly into a single processor socket, but the operating system perceives each of its execution cores as a discrete logical processor, with all the associated execution resources.
The idea behind this implementation of the chip's internal architecture is in essence a "divide and conquer" strategy. In other words, by divvying up the computational work performed by the single Pentium microprocessor core in traditional microprocessors and spreading it over multiple execution cores, a multi-core processor can perform more work within a given clock cycle. Thus, it is designed to deliver a better overall user experience. To enable this improvement, the software running on the platform must be written such that it can spread its workload across multiple execution cores. This functionality is called thread-level parallelism or "threading," and applications and operating systems (such as Microsoft Windows* XP) that are written to support it are referred to as "threaded" or "multi-threaded."
A processor equipped with thread-level parallelism can execute completely separate threads of code. This can mean one thread running from an application and a second thread running from an operating system, or parallel threads running from within a single application. (Multimedia applications are especially conducive to thread-level parallelism because many of their operations can run in parallel.)
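As a minimal illustration of thread-level parallelism (a sketch of the concept, not Intel code), the standard C++ program below starts two threads, one standing in for an application task and one for a background task; both workloads are invented for the example, and on a multi-core processor the operating system is free to schedule each thread on its own execution core.

    #include <iostream>
    #include <thread>

    // Stand-in for an application workload (hypothetical).
    void applicationTask() {
        long sum = 0;
        for (long i = 0; i < 100000000; ++i) sum += i;
        std::cout << "application task done, sum = " << sum << '\n';
    }

    // Stand-in for a background task such as a virus scan (hypothetical).
    void backgroundTask() {
        long acc = 0;
        for (long i = 1; i < 100000000; ++i) acc ^= i;
        std::cout << "background task done, acc = " << acc << '\n';
    }

    int main() {
        // Two runnable threads: a multi-core OS can place one on each core.
        std::thread t1(applicationTask);
        std::thread t2(backgroundTask);
        t1.join();
        t2.join();
        return 0;
    }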


As software developers continue to design more threaded applications that capitalize on this architecture, multi-core processors can be expected to provide new and innovative benefits for PC users, at home and at work. Multi-core capability can also enhance the user experience in multitasking environments, namely, where a number of foreground applications run concurrently with a number of background applications such as virus protection and security, wireless, management, compression, encryption and synchronization.
Like other hardware-enhanced threaded capabilities advanced at Intel, multi-core capability reflects a shift to parallel processing, a concept originally conceived in the supercomputing world. For example, Hyper-Threading Technology (HT Technology), introduced by Intel in 2002, enables processors to execute tasks in parallel by weaving together multiple "threads" in a single-core processor. But whereas HT Technology is limited to a single core using existing execution resources more efficiently to better enable threading, multi-core capability provides two or more complete sets of execution resources to increase compute throughput.
In a technical nutshell, Intel believes multi-core processing could support several key capabilities that will enhance the user experience, including increasing the number of PC tasks a user can do at one time, supporting multiple bandwidth-intensive activities, and increasing the number of users utilizing the same PC.


3. Technologies used in dual core processors
Mainly two technologies are used in dual core processors: SMP (symmetric multi-processing) and Hyper-Threading.
3.1 SMP (Symmetric Multi-Processing)
SMP is the most common approach to creating a multi-processor system, in which two or more separate processors work together on the same motherboard. The processors co-ordinate and share information through the system bus, and the processors arbitrate the workload amongst themselves with the help of the motherboard chipset and the operating system.
The OS treats both processors more or less equally, assigning work as needed. Both AMD and Intel's new dual-core chips can be considered SMP capable (internally). AMD's dual-core Opteron server processors can be linked to other dual-core chips externally also, but this capability is not present in either company's desktop dual-core lines.
The major limitations of SMP have to do with software and operating system support. Many operating systems (such as Windows XP Home) are not SMP capable and will not make use of the second physical processor. Also, most modern programs are single-threaded, meaning that there is only ever one current set of linked instructions and data for them. This means that only one processor can effectively work on them at a time. Multi-threaded programs do exist, and can take better advantage of the potential power of dual- or multi-CPU configurations, but are not as common as we might like. No other current mainstream desktop processors are SMP capable, as Intel and AMD tend to restrict cutting edge technologies to the higher-end server processors such as the Opteron and Xeon. In the past though, mainstream processors have been SMP capable, most notably the later Intel Pentium 3 processors.


3.2 Highway to Hyperthreading
Hyperthreading was Intel's pre-emptive take on multi-core CPUs. The company cloned the front end of its high-end Pentium 4 CPUs, allowing the Pentium 4-HT to begin two operations at once. Once in process, the twin operation 'threads' both share the same set of execution resources, but one thread can take advantage of sections left idle by the other. The idea of Hyperthreading is to double the amount of activity in the chip in order to reduce the problem of 'missed' memory cache requests slowing down the operation of the processor. It also theoretically ensures that less of the processor's resources will be left idle at any given time. While Hyperthreaded CPUs appear as two logical processors to most operating systems, they are not comparable with true dual-core CPUs, since each parallel pair of threads being worked on shares the same execution pipeline and same set of L1 and L2 cache memory. Essentially, Hyperthreading is smoke-and-mirrors multitasking, since a single Hyperthreaded processor cannot actually perform two identical actions at the same time.


Hyperthreading does speed up certain operations which would be multi-processor capable, but never as much as a true multi-processor system, dual core or not.
One of Intel's new dual-core chips, the higher-end Pentium Extreme Edition 840 processor, also supports Hyperthreading within each core, meaning that to an operating system it would appear as four logical processors on a single die. How this will work out remains to be seen.
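For reference, portable code can query how many logical processors the operating system exposes; a Hyperthreaded single core, a plain dual core, and a Hyperthreaded dual core like the Extreme Edition 840 would typically report 2, 2 and 4 respectively. A minimal C++ sketch (hardware_concurrency() is only a hint and may return 0 when the count is unknown):

    #include <iostream>
    #include <thread>

    int main() {
        // Logical processors visible to the OS: physical cores multiplied by
        // Hyperthreading threads per core (0 if the value cannot be determined).
        unsigned n = std::thread::hardware_concurrency();
        std::cout << "Logical processors: " << n << '\n';
        return 0;
    }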
4. Two Chips on One Die... Why?
So why are both Intel and AMD suddenly peddling dual-core pushcarts so quickly down the aisle?
Several reasons. First of all: competition, competition, competition. As we will explore later in more detail, AMD built the potential for dual-core capability into its 64-bit processors right from the start. The necessary I/O structure for the second core already exists, even on single-core chips. Neither company can afford to let the other get much of an edge, and AMD has already stolen way too much attention for Intel's comfort with its incredibly successful line of 64-bit processors. It is imperative for Intel to launch a 'pre-emptive strike' and get its own dual-core technology to market quickly, lest market share flutter away. As for why dual-core processors are being developed in the first place, read on to reason number three.
Secondly, performance. Certain 'multi-threaded' applications can already benefit greatly by allowing more than one processor to work on them at once. Dual processor systems also gain from a general decline in latency. Simply put, while there is no current way to share the current operating system load evenly between two processors, the second processor can step in and keep the system running smoothly while the first is maxed out to 100% burning a CD or encoding a file (or from a software error).


Obviously, if dual-core systems become mainstream, which it looks like they are going to, future operating systems and applications will be designed with the feature in mind, leading to better functionality down the road. Thirdly, and less obviously, AMD and Intel are desperate. Both companies have run into barriers when it comes to increasing the raw speed of processors, or decreasing the die size. Until these roadblocks are cleared, or until the general buying public understands that GHz does not directly translate to performance, both companies will be scrambling to discover any new improvements that will improve processor performance... without actually boosting core speed. This is why the idea of dual-core processors is now a reality, I'm willing to bet.
5. Commercial examples
International Business Machines (IBM)'s POWER4, released in 2000, was the first dual-core microprocessor on the market. IBM's POWER5 dual-core chip is now in production, and the company has a PowerPC 970MP dual-core processor in development. Intel released its dual-core desktop x86 64-bit processors to OEMs on 12 April 2005. Its dual-core Xeon processors, code-named Paxville and Dempsey, are expected to ship to OEMs in the second half of 2005. The company is also currently developing dual-core versions of its Itanium high-end server CPU architecture.
AMD, Intel's chief rival, released its dual-core Opteron server/workstation processors on 22 April 2005, and its dual-core desktop processors, the Athlon 64 X2 family, were released on 31 May 2005. Motorola/Freescale has dual-core ICs based on the PowerPC e600 and e700 cores in development.


6. Architectural class
The dual-core type of processor falls into the architectural class of a tightly-coupled multiprocessor. In this class, each processing unit, with an independent instruction stream, executes code from a pool of shared memory. Contention for the memory as a resource is managed by arbitration and by the per-processing-unit caches. The localized caches make the architecture viable, since modern CPUs are highly optimized to maximize bandwidth to the memory interface; without them, each CPU would run near 50% efficiency. Multiple caches of the same resource must be managed with a cache coherency protocol. Beyond dual-core processors, there are examples of chips with more cores. Such chips include network processors, which may have a large number of cores or microengines that operate independently on different packet-processing tasks within a networking application.
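The cost of keeping those caches coherent can be made visible with a small experiment. In the sketch below (an illustration only; the 64-byte cache line size is an assumption, not a detail from the text), two threads increment adjacent counters that land on the same cache line, so every write forces the coherency protocol to bounce the line between the cores; padding each counter onto its own line removes the contention and typically runs several times faster on a dual-core machine.

    #include <chrono>
    #include <iostream>
    #include <thread>

    constexpr long kIters = 100000000;

    long adjacent[2];                      // two longs, likely sharing one cache line
    struct alignas(64) Padded { long v; }; // assumes 64-byte cache lines
    Padded padded[2];                      // one counter per cache line

    // Run f(0) and f(1) on two threads and return the elapsed seconds.
    template <class F>
    double timedPair(F f) {
        auto t0 = std::chrono::steady_clock::now();
        std::thread a(f, 0), b(f, 1);
        a.join();
        b.join();
        return std::chrono::duration<double>(std::chrono::steady_clock::now() - t0).count();
    }

    int main() {
        // Same cache line: the coherency protocol moves it on nearly every write.
        double sharedLine = timedPair([](int i) { for (long k = 0; k < kIters; ++k) ++adjacent[i]; });
        // Separate cache lines: each core keeps its own line in its cache.
        double paddedLine = timedPair([](int i) { for (long k = 0; k < kIters; ++k) ++padded[i].v; });
        std::cout << "shared line: " << sharedLine << " s, padded: " << paddedLine << " s\n";
        return 0;
    }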
7. AMD's Approach to Dual Core
The current form factor of the Athlon 64 processor is very conducive to a dual core design. The fact that the memory controller and HyperTransport links are built right into the die of the chip means that supporting a second full processor core is no huge logistical feat, either for making the die of the chip or for the motherboards it will operate in. This is not the biggest advantage that the Athlon 64 architecture has for dual operation, though.
Due to the Northbridge-like provisions that AMD had to add to the Athlon 64 die in order to support the onboard memory controller and hypertransport link, it is possible for the dual cores to communicate with each other inside the processor itself.


While this might seem like an obvious thing, Intel dual-core processors cannot do this at all (currently). Intel's solution must relay all information over the external 'frontside bus' link that connects the processor to the rest of the system. AMD's dual-core Athlon 64 X2 processors more than double the transistor count of previous Athlon 64 processors. The Athlon 64 X2 4800+ chip sports 233 million transistors, as opposed to the 106 million or so of the Athlon 64 FX-55. Since the new dual-core chips use the 90nm fabrication method, though, overall chip size has just barely increased. Operating voltage will be 1.35 to 1.4V, and heat output will be just slightly increased over the high-end Athlon FX processors at 110W.
Each processor core has its own L1 and L2 cache memory, 128KB for L1, and between 512KB and 1MB of L2, depending on the specific model.

Figure: AMD Athlon 64 X2 dual-core processor design.


On paper, it really appears that AMD has done its homework. More than that, its engineers appear to have done it months ago when the company first introduced the 64-bit Opteron processor.
The "crossbar switch" that accumulates and distributes address and data transfers from each core to the other core and the rest of the system already had an available connection for a second core.
Where AMD scores major points for paying attention to its user base is with the fact that the first run of dual-core Athlon 64 X2 chips will be compatible with any current Socket 939/940 motherboard, provided the manufacturer updates the BIOS to support the new feature.
Given the havoc that the company wreaked on its users and its bottom line by shifting to Socket 939 so early in the life of the Athlon 64, essentially orphaning Socket 754, this is the kind of good PR that AMD really needs. Here's hoping this is a return to the glory days of Socket A and the huge range of processors that that platform ended up supporting. Selling a dual-core desktop processor as a direct upgrade will be a lot easier than trying to persuade home users to update their motherboards... yet again.


8. Intel's Approach to Dual-Core: Glue and Brown Paper
Since Intel did not have a nice pre-existing resource space in which to add a second processor core like AMD did, it has been forced to improvise. The Pentium D essentially takes two identical P4 'Prescott' processor dies and sticks 'em together. This has the marked advantage of providing each processor with its own L1 and L2 cache memory.

Figure: Intel dual-core architecture.


Unfortunately, Intel's approach also has the marked disadvantage of forcing both processors to communicate through the Northbridge and FSB outside the processor, while AMD's dual-core approach allows the twin cores to exchange information within the processor itself.
Transistor counts for the new chips hit a high of 230 million, and the heat output is a hefty 130W for the Pentium Extreme Edition 840 and the fastest Pentium D processors.
Alas, unlike AMD's almost Santa Claus-like decision to keep to the existing Socket 939 platform for its dual-core CPUs, Intel's dual-core solution requires a new pair of supporting chipsets, the Intel 955X and 945P. nVidia's recently released nForce4 SLI Intel Edition will also support dual-core processors, but this support must be added by the manufacturers, so early adopters are likely out of luck. What it adds up to is that you are going to have to buy a new motherboard if you want to take advantage of Intel's approach to dual-core processors.


9. Heat and Bandwidth: Enemies of Dual-Core Processors
While everything we've looked at so far has been positive, let's take a look at some issues that may affect the performance of dual-core Athlons and Pentium 4s relative to their single-core siblings. Given that everything inside the AMD processor is already adapted for dual-core, as we pointed out above, there are surprisingly few possible pitfalls, but we're going to look at a couple. Intel's road isn't quite so straightforward.
Heat is one obvious worry. Single-core Athlon 64 processors can crank out a fair bit of heat by themselves, as witnessed by the enormous retail heatsinks they ship with. The same goes for the latest Pentium 4 chips. What's going to happen when you combine two cores capable of giving off that much heat in such a small space? Are we going to need mandatory water cooling? Apparently not, as AMD defines almost the same thermal envelope for its dual-core processors as for the single-core versions. The Intel dual-core chips give off more heat, but not drastically so.
How is this possible? Well, there are a few factors to consider. First of all, don't forget that both the AMD and Intel dual-core chips are being produced on the 90nm fabrication process, like the newest single-core chips. As always, smaller fab size = less power = relatively less heat.
The second factor to consider when trying to figure out why dual-core processors are not thermal bombs is the slightly slower speed of the planned processors as compared to their single-core equivalents. While not a huge difference, 400MHz or so less does make them a little cooler.
Finally, there's the fact that at least one core is going to be running at less than full capacity most of the time.
Bandwidth is a more troubling concern. It's not an enemy of the dual-core processors per se, but more of a limiting factor. Intel has to contend with using a conventional FSB/Northbridge setup to allow its dual-core processor to communicate with itself, and even with support for DDR2-667 in the new 955X chipset, this may bog things down. It just doesn't feel like an efficient way to do things.
On the other hand, the AMD dual-core offerings' internal communications abilities are doubtless more efficient, but in some ways unproven. We already know how dual CPUs talk to each other over a conventional FSB, but AMD's method of in-chip communication has yet to be really tested. Also, the 6.4GBps memory bandwidth of the dual-core Athlons is exactly the same as that of their single-core brethren.
One major concern when dual-core processors were first announced was how they would work with operating systems like Windows XP Home, which is limited to one physical processor. Apparently, though, Home users do not need to fret, as both AMD and Intel's dual-core chips should work just fine. The operating system will see a single physical chip with two logical cores, just as it currently does with Hyperthreading-capable Pentium 4 processors. No word on how the Pentium Extreme Edition 840, with its dual cores AND Hyperthreading, will work though...


10. Dual Single-Core vs. Single Dual-Core
AMD's Opteron chip is capable of SMP due to its multiple HyperTransport links, so which is faster: a single dual-core chip or two single-core chips? On paper, dual Opterons should be faster than a single dual-core Opteron at equivalent clock speed for one major reason: due to the built-in memory controller, each Opteron has exclusive access to its own set of system memory.

Figure: AMD Opteron processor design.


The dual-core designs have to share the memory controller, leading to competition for resources that will inevitably drag down comparative performance.
Intel SMP systems do not gain this advantage over dual-core siblings since they already share a single memory controller over the front-side bus of the motherboard. It's difficult to tell whether either design has any performance advantage in Intel's implementation.
The data has a shorter path to travel with the dual-core chips, but not so much as to make a radical difference. Certainly Intel dual-core chips should have a pricing advantage over SMP solutions, especially when you factor in the price premium that dual-socket motherboards demand.
It's time to talk money. At first glance, basic economics suggests that dual-core processors should be more affordable than buying a pair of single-core processors. After all, the companies are integrating two cores into a single die, saving manufacturing effort.
Besides, there would be no point in charging extra money for the second core of a dual-core chip; no one would buy it, right? Maybe, but let's not forget what dual-core chips have to offer besides convenience. The picture is quite different for Intel as opposed to AMD, so let's run through each company's pricing strategies for these chips.


11. AMD's Dual Core Lineup
As we mentioned, AMD's dual-core desktop processor is going to be known as 'Athlon 64 X2'. These CPUs will be available shortly at initial speeds of 2.2GHz, with only one other speed, 2.4GHz, currently expected for release. Besides speed, L2 cache memory will differentiate the various versions of the processor, with some models having 512KB and some having a full 1MB cache. Let's look at the names and the feature breakdown. As you can see, the initial offering will be broken down into 'Toledo' core chips with 1MB of L2 memory per core and 'Manchester' core chips with 512KB.
Athlon 64 X2 4800+: (Toledo core) 2.4GHz, 1MB L2 cache memory, $1001
Athlon 64 X2 4600+: (Manchester core) 2.4GHz, 512KB L2 cache memory, $803
Athlon 64 X2 4400+: (Toledo core) 2.2GHz, 1MB L2 cache memory, $581
Athlon 64 X2 4200+: (Manchester core) 2.2GHz, 512KB L2 cache memory, $537
As you can see, AMD's introductory dual-core prices are roughly twice those of its single-core chips, especially at the lower end. This seems reasonable, except for one thing: Intel has no such price-doubling plans.


12. Intel's Dual Core Lineup
Like AMD, Intel is going to split its dual-core desktop offerings into two lines, though there will be more of a brand distinction between the two than we see in AMD's lineup. First up (and most expensive) will be the Pentium 840EE Extreme Edition dual-core processor (note that Intel has dropped the '4' from the title), clocked at 3.2GHz and featuring 1MB of L2 cache for each core. It will also feature Hyperthreading support, allowing it to execute up to four threads at once. It is currently available from some retailers, tilting the scales at a laughable $1100 USD.

Figure: An Intel dual-core processor with HT Technology enabled can execute four threads in parallel. (Source: PCSTATS)

Following that will be the launch of the 'Pentium D' line of processors, still with 1MB of L2 cache per core, running from 2.8GHz to 3.2GHz in speed, but without Hyperthreading. Intel's prices for these chips look to be much more aggressive than the somewhat ridiculous cost of the high-end dual cores from both Intel and AMD. The chips will start in the mid-$200s, shading up to the mid-$500s for the 3.2GHz model. All processors listed here support Intel's EM64T 64-bit instructions.
So what we've got from Intel looks like this:
Pentium Extreme Edition 840: 3.2GHz, 1MB L2 cache memory, Hyperthreading, $1100
Pentium D: 3.2GHz, 1MB L2 cache memory, $530
Pentium D: 3.0GHz, 1MB L2 cache memory, $320
Pentium D: 2.8GHz, 1MB L2 cache memory, $240
If the projected prices for the Pentium D chips hold true, AMD had better hope that its dual-core Athlon64 X2 has a considerable performance advantage, otherwise the company's hard-won processor advantage could be lost. Even if AMD blows Intel away in performance, the chip giant will still own the 'low end' dual-core market by default. This could get interesting.


13. Development motivation
13.1 Technical pressures
As CMOS process technologies continue to shrink, the constraints on how much complexity can be placed on a single die recede. In terms of CPU designs, the choice becomes adding more functions to the device (e.g. an Ethernet controller, memory controller, or high-speed CPU cache), or adding complexity to increase CPU throughput. Generally speaking, shrinking the features on the IC also means that they can run at lower power and at a higher clock rate.
Various potential architectures contend for the additional "real estate" on the die. One option is to widen the registers and/or the bus interface of an existing processor architecture. Widening the bus interface alone leads to superscalar processor architectures, and widening both usually requires new programming models. Other options include adding multiple levels of memory cache, and developing system-on-a-chip solutions.
13.2 Commercial incentives
Several business motives drive the development of dual-core architectures. Since multiple-CPU SMP designs have long been implemented using discrete CPUs, the issues regarding implementing the architecture and supporting it in software are well known. Additionally, utilizing a proven processing core design (e.g. Freescale's e700 core) without architectural changes reduces design risk significantly. Finally, the connotations of the terminology "dual-core" (and other multiples) lend themselves to marketing efforts.


Additionally, for general-purpose processors, much of the motivation for dual-core processors comes from the increasing difficulty of improving processor performance by increasing the operating frequency (frequency-scaling). In order to continue delivering regular performance improvements for general-purpose processors, manufacturers such as Intel have turned to dual-core designs, sacrificing lower manufacturing costs for higher performance in some applications and systems.
It should be noted that while dual-core architectures are being developed, so are the alternatives. An especially strong contender for established markets is to integrate more peripheral functions into the chip.


14. Advantages
Proximity of two CPU cores on the same die has the advantage that the cache coherency circuitry can operate at a much higher clock rate than is possible if the signals have to travel off-chip, so combining equivalent CPUs on a single die significantly improves the performance of cache snoop operations. Assuming that the die can fit into the package physically, dual-core CPU designs require much less PCB space than multi-chip SMP designs. A dual-core processor uses slightly less power than two coupled single-core processors, principally because of the increased power required to drive signals external to the chip and because the smaller silicon process geometry allows the cores to operate at lower voltages.
In terms of competing technologies for the available silicon die area, the dual-core design can make use of proven CPU core library designs and produce a product with lower risk of design error than devising a new wider core design. Also, adding more cache suffers from diminishing returns.


15. Disadvantages
Dual-core processors require operating system (OS) support to make optimal use of the second computing resource. Also, making optimal use of multiprocessing in a desktop context requires application software support. The higher integration of the dual-core chip drives production yields down, and such chips are more difficult to manage thermally than lower-density single-chip designs.
From an architectural point of view, ultimately, single-CPU designs may make better use of the silicon surface area than multiprocessing cores, so a development commitment to this architecture may carry the risk of obsolescence. Scaling efficiency is largely dependent on the application or problem set. For example, applications that require processing large amounts of data with low compute-overhead algorithms may find this architecture has an I/O bottleneck, underutilizing the device.
If a dual-core processor has only one memory bus (which is often the case), the available memory bandwidth per core is half that available to each processor in a dual-processor mono-core system; for example, two cores sharing a 6.4GBps bus get at most 3.2GBps each when both are streaming data.
Another issue that has surfaced in recent business development is the controversy over whether dual core processors should be treated as two separate CPUs for software licensing requirements. Typically enterprise server software is licensed per processor, and some software manufacturers feel that dual core processors, while a single CPU, should be treated as two processors and the customer should be charged for two licenses - one for each core. This has been challenged by some since not all dual core processor systems are running Operating Systems that can support the added dual core functionality. This remains an unresolved and thorny issue for software companies and customers.


16. Conclusion
This paper gives an overview of dual-core processor issues. A dual-core CPU combines two independent processors and their respective caches and cache controllers onto a single silicon chip, or integrated circuit. The benefits of a dual-core processor are higher performance through parallelism and reduced power consumption. In the future, due to the impact of dual-core processors, there will be advancements in PC security and virtualization technologies and increased utility of home PCs.


17. References
1. http://amd.com
2. http://intel.com
3. http://newsfactor.com
4. http://short-media.com
5. http://pcworld.com
6. http://en.wikipedia.org
7. http://hardwaresecrets.com
8. http://silentpcreview.com
9. http://pcstats.com
10. http://tbreak.com
INDEX
1. Introduction
2. Multi-Core Processor Architecture
3. Technologies used in dual core processors
3.1. SMP (Symmetric Multi-Processing)
3.2. Highway to Hyperthreading
4. Two Chips on One Die... Why?
5. Commercial examples
6. Architectural class
7. AMD's Approach to Dual Core
8. Intel's Approach to Dual-Core: Glue and Brown Paper
9. Heat and Bandwidth: Enemies of Dual-Core Processors
10. Dual Single-Core vs. Single Dual-Core
11. AMD's Dual Core Lineup
12. Intel's Dual Core Lineup
13. Development motivation
14. Advantages
15. Disadvantages
16. Conclusion
17. References
#2

DUAL CORE TECHNOLOGY
What is Dual Core?

Dual-core refers to a CPU that includes two complete execution cores per physical processor.
It combines two processors and their caches and cache controllers onto a single integrated circuit (silicon chip).
Dual core processors are well-suited for multitasking environments because there are two complete execution cores instead of one, each with an independent interface to the front side bus.
How was Dual Core developed?
The dual core processor is a development of the Pentium 4 processor.
We can think of it as two processors in one chip.
It requires a new chipset to work: the i945 and i955.
This developed version can deliver up to double the performance of a single-core processor.
Dual-core processors provide two complete execution cores instead of one, each with an independent interface to the front side bus.


Pentium 4 and Dual Core

Pentium D is a Pentium 4 with dual-core technology. But there is a very important difference between Pentium 4 and Pentium D besides this new technology. The new Pentium D doesn't have Hyper Threading technology.
Hyper-Threading makes the operating system think that there are two CPUs installed on the system. Thus, when you use a Pentium 4 with this technology, Windows XP recognizes it as if two CPUs were installed on the system.
Look inside Dual Core
The most interesting thing about Intel's dual core technology is how it is manufactured. With the technology available today for manufacturing processors, called 90 nm, the silicon chips for dual core CPUs must be "together", i.e. side-by-side and cut together from the wafer.
With the future 65 nm technology it will be possible to manufacture each silicon chip separately and then put them together, i.e. it will be possible to pick each silicon chip from different positions on the wafer, not requiring them to be originally together. This manufacturing process is more efficient.

Dual core working
New chipset with Dual core processor
Chipset for Dual core

Dual core requires a different chipset for its functioning.
Together with these two new processors two new chipset series were released, i945 and i955. The current Intel chipsets aren't compatible with dual core technology because they don't support multiprocessor systems. So, even if you have a high-end socket 775 motherboard based on the latest Intel 925X chipset it won't be compatible with the new dual-core Pentium processors. To upgrade your system you will need to replace the motherboard as well.

Block Diagram
Different models of Dual core

Three Pentium D models were announced:
Pentium D 820: 2.8 GHz, 1 MB L2 memory cache for each core
Pentium D 830: 3.0 GHz, 1 MB L2 memory cache for each core.
Pentium D 840: 3.2 GHz, 1 MB L2 memory cache for each core.
All of them use an 800 MHz external bus and the Intel 64-bit extensions (EM64T), so they are based on the Pentium 4 6xx series.
Dual Core vs Core Duo
Core Duo is Intel's first dual-core mobile CPU. It is a new Intel architecture that places two cores on a single die, giving simply two cores in a single chip. Unlike earlier CPUs, it conserves substantial power rather than ratcheting up clock speed, while still giving impressive performance.
The Core 2 Duo is not fundamentally different, but it has a somewhat faster clock speed of about 2.67 GHz and a more refined design that gives it more sophisticated processing abilities.
Both exist in desktop and laptop versions.
Advantages.

The close proximity of multiple CPU cores on the same die has the advantage of allowing the cache coherency circuitry to operate at a much higher clock rate than is possible if the signals have to travel off-chip.
A dual-core processor uses slightly less power than two coupled single-core processors, principally because of the increased power required to drive signals external to the chip and because the smaller silicon process geometry allows the cores to operate at lower voltages; the shorter on-die signal paths also reduce latency.
Disadvantages.

Although a dual core has more processing power, single-threaded applications are incapable of using the two processors at the same time.
For upgrading to dual core we have to replace the chipset (and thus the motherboard).
About Thread level parallelism
Complete optimization for the dual-core processor requires both the operating system and application running on the computer to support a technology called thread-level parallelism, or TLP.
Thread-level parallelism is the part of the OS or application that runs multiple threads simultaneously, where threads refer to the parts of a program that can execute independently of other parts.
AMD Dual processors.

AMD also announced its line of desktop dual-core processors, the AMD Athlon 64 X2 processor family. The initial model numbers in the new family include the 4200+, 4400+, 4600+ and 4800+ (2.2GHz to 2.4GHz). The processors are based on AMD64 technology and are compatible with the existing base of x86 software, whether single-threaded or multithreaded. Software applications will be able to support AMD64 dual-core processors with a simple BIOS upgrade and no substantial code changes.
CONCLUSION

The dual core processor is a major innovation in microprocessors, and it points the way toward multi-core processors.
#3


Dual Core Processor Technology
Abstract
Computer processor design has evolved at a constant pace for the last 20 years. The proliferation of computers into the mass market and the tasks we ask of them continue to push the need for more powerful processors. The market requirement for higher performing processors is linked to the demand for more sophisticated software applications. E-mail, for instance, which is now used globally, was only a limited and expensive technology 10 years ago. Today, software applications span everything from helping large corporations better manage and protect their business-critical data and networks to allowing PCs in the home to edit home videos, manipulate digital photographs, and burn downloaded music to CDs. Tomorrow, software applications might create real-world simulations that are so vivid it will be difficult for people to know if they are looking at a computer monitor or out the window; however, advancements like this will only come with significant performance increases from readily available and inexpensive computer technologies.
Multi-core processors represent a major evolution in computing technology. This important development is coming at a time when businesses and consumers are beginning to require the benefits offered by these processors due to the exponential growth of digital data and the globalization of the Internet. Multi-core processors will eventually become the pervasive computing model because they offer performance and productivity benefits beyond the capabilities of today's single-core processors.
I. Moore's Law Isn't Really Dead
A quick search on Moore's law on major news sites in the last few months will undoubtedly bring up a slew of hits on how Moore's law cannot keep up and how the days of doubling performance and the doubling of transistors are over. AMD's take on Moore's law is that it will continue for some time yet, but the approach will be different. Instead of doubling performance through the doubling of clock speed, the doubling of performance will come from an increase in the number of cores; doubling the number of cores on a processor will also lead to the number of transistors doubling, so while clock speed may not be scaling as it did in the heyday of the early 2000s, performance should still theoretically double.
A dual core processor is a CPU with two separate cores on the same die, each with its own cache. It's the equivalent of getting two microprocessors in one.
In a single-core or traditional processor the CPU is fed strings of instructions it must order, execute, then selectively store in its cache for quick retrieval. When data outside the cache is required, it is retrieved through the system bus from random access memory (RAM) or from storage devices. Accessing these slows down performance to the maximum speed the bus, RAM or storage device will allow, which is far slower than the speed of the CPU. The situation is compounded when multi-tasking. In this case the processor must switch back and forth between two or more sets of data streams and programs. CPU resources are depleted and performance suffers.
II. Dual Core Processor
In a dual core processor each core handles incoming data strings simultaneously to improve efficiency. Just as two heads are better than one, so are two hands. Now when one is executing the other can be accessing the system bus or executing its own code. Adding to this favorable scenario, both AMD and Intel's dual-core flagships are 64-bit.
To utilize a dual core processor, the operating system must be able to recognize multi-threading and the software must have simultaneous multi-threading technology (SMT) written into its code. SMT enables parallel multi-threading wherein the cores are served multi-threaded instructions in parallel. Without SMT the software will only recognize one core. Adobe Photoshop is an example of SMT-aware software. SMT is also used with multi-processor systems common to servers.
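To show what "parallel threads running from within a single application" can look like, here is a hedged C++ sketch (the image buffer and the brighten operation are invented for the example, not taken from any SMT-aware product): one multimedia-style operation is split into two halves, and on a dual core processor each half can run on its own core.

    #include <algorithm>
    #include <thread>
    #include <vector>

    // Hypothetical multimedia workload: brighten a range of an 8-bit image buffer.
    void brighten(std::vector<unsigned char>& img, size_t lo, size_t hi) {
        for (size_t i = lo; i < hi; ++i)
            img[i] = static_cast<unsigned char>(std::min(255, img[i] + 30));
    }

    int main() {
        std::vector<unsigned char> image(4096 * 4096, 100); // dummy 16 MB image
        const size_t mid = image.size() / 2;

        // The same operation issued as two parallel threads over disjoint halves.
        std::thread t1(brighten, std::ref(image), size_t{0}, mid);
        std::thread t2(brighten, std::ref(image), mid, image.size());
        t1.join();
        t2.join();
        return 0;
    }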
A dual core processor is different from a multi-processor system. In the latter there are two separate CPUs with their own resources. In the former, resources are shared and the cores reside on the same chip.
A multi-processor system is faster than a system with a dual core processor, while a dual core system is faster than a single-core system, all else being equal.
An attractive feature of dual core processors is that they do not require a new motherboard, but can be used in existing boards that feature the correct socket. For the average user the difference in performance will be most noticeable in multi-tasking until more software is SMT aware. Servers running multiple dual core processors will see an appreciable increase in performance. Note that the two threads may require different resources within the execution unit, which could permit a greater performance increase.
Multi-core processors are the goal and as technology shrinks, there is more "real-estate" available on the die. In the fall of 2004 Bill Siu of Intel predicted that current accommodating motherboards would be here to stay until 4-core CPUs eventually force a changeover to incorporate a new memory controller that will be required for handling 4 or more cores.
As more transistors fit on a chip, total power consumption, and heat, increase. Growing demand for mobile, multifunctional devices adds efficient battery usage to the power equation.
Intel was one of the first companies to anticipate the trends and clarify the scope of the power challenge. Smaller transistors consume less power, but as transistor density and speed rise, the overall chip consumes more power and generates more heat. In addition, power leakage, the continued flow of current even when the transistor is "off", becomes more problematic, wasting a higher portion of total device power.
Figure 1: Architectural Performance (chart of performance over time)
III. Dual Processor System
A traditional dual processor system contains two separate physical computer processors in the same chassis. The two processors are usually located on the same circuit board (motherboard) but occasionally will be located on separate circuit boards. In this case, each of the processors will reside in its own socket. A dual-processor (DP) system can also be considered a subset of the larger set of symmetric multiprocessor (SMP) systems. A multi-processor capable operating system can schedule two separate computer processes, or two threads within a process, to run simultaneously on these separate processors.
Figure 4: Dual Processor System
"Dual Core"-this term refers to integrated circuit (IC) chips that contain two complete physical computer processors (cores) in the same IC package.
Typically, this means that two identical processors are manufactured so they reside side-by-side on the same die. It is also possible
Dual Core Processor Technology
to (vertically) stack two separate processor die and place them in the same
IC package. Each of the physical processor cores has its own resources
(architectural state, registers, execution units, etc.). The multiple cores on-
die may or may not share several layers of the on-die cache.
A dual core processor design could: (1) give each core its own on-die cache; (2) have the on-die cache shared by the two cores; or (3) give each core a private portion of on-die cache plus a portion shared between the two cores. The two processors in a dual core package could have an on-die communication path between them, so that putting snoops and requests out on the front-side bus (FSB) is not necessary; both processors must still have a communication path to the system front-side bus. Note that dual core processors could also incorporate HT Technology, which would enable a single processor IC package, containing two physical processors, to appear as four logical processors capable of running four programs or threads simultaneously. At the Intel Developer Forum in Fall 2004, Intel publicly announced that it plans to release dual core processors for mobile, desktop and server platforms in 2005.
Figure 5: Dual Core System (two processors, each with its own architectural state, on one die)
Performance Through Parallelism
Figure 6: Dual Core Processor
The idea behind this implementation of the chip's internal architecture is in essence a "divide and conquer" strategy. In other words, by dividing the computational work performed by the single Pentium microprocessor core in traditional microprocessors and spreading it over multiple execution cores, a dual-core processor can perform more work within a given clock cycle. Thus, it is designed to deliver a better overall user experience. To enable this improvement, the software running on the platform must be written such that it can spread its workload across the execution cores. This functionality is called thread-level parallelism or "threading," and applications and operating systems (such as Microsoft Windows XP) that are written to support it are referred to as "threaded" or "multi-threaded."
A processor equipped with thread-level parallelism can execute completely separate threads of code. This can mean one thread running from an application and a second thread running from an operating system, or parallel threads running from within a single application. Multimedia applications are especially conducive to thread-level parallelism because many of their operations can run in parallel.
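As a concrete (and purely illustrative) example of parallel threads within a single application, the sketch below brightens the two halves of an image buffer on separate threads; on a dual core processor each half can be processed on its own core. The brighten routine, buffer size and brightness offset are all hypothetical.

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <thread>
#include <vector>

// Brighten pixels [lo, hi) by a fixed amount, clamping at 255.
void brighten(std::vector<std::uint8_t>& pixels, std::size_t lo, std::size_t hi) {
    for (std::size_t i = lo; i < hi; ++i)
        pixels[i] = static_cast<std::uint8_t>(std::min(255, pixels[i] + 40));
}

int main() {
    std::vector<std::uint8_t> image(1024 * 768, 100);  // hypothetical grayscale frame
    std::size_t mid = image.size() / 2;

    // The two halves touch disjoint data, so no locking is needed.
    std::thread top(brighten, std::ref(image), std::size_t{0}, mid);
    std::thread bottom(brighten, std::ref(image), mid, image.size());
    top.join();
    bottom.join();
    return 0;
}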
As software developers continue to design more threaded applications that capitalize on this architecture, dual-core processors can be expected to provide new and innovative benefits for PC users, at home and at work. Dual-core capability can also enhance the user experience in multitasking environments, where a number of foreground applications run concurrently with background applications such as virus protection and security, wireless management, compression, encryption and synchronization.
IV. Memory Performance
Memory performance is often viewed in two ways. The first is random access latency; the second is the throughput of sequential accesses. Each is relevant to a different workload, and each tells a different story. Random access latency was determined by measuring how long it takes to chase a chain of pointers through memory. Only a single chain was followed at a time, so this is a measure of unloaded latency. Each chain stores only one pointer in a cache line, and each cache line is chosen at random from a pool of memory. The pool of memory simulates the working set of an application. When the memory pool is small enough to fit within cache, the benchmark measures the latency required to fetch data from cache. By adjusting the size of the memory pool one can measure the latency to any specific level of cache, or to main memory, by making the pool larger than all levels of cache.
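The report does not include the benchmark's source, but the pointer-chasing method it describes can be sketched roughly as follows (modern C++ for brevity; the 64-byte cache-line size and the pool size are assumptions, not details from the report):

#include <algorithm>
#include <chrono>
#include <cstddef>
#include <iostream>
#include <numeric>
#include <random>
#include <vector>

// One node per cache line (assumes 64-byte lines).
struct Node { Node* next; char pad[64 - sizeof(Node*)]; };

int main() {
    const std::size_t n = 1 << 20;   // 1M nodes * 64 B = 64 MB pool, larger than any cache;
                                     // shrink the pool to target a specific cache level instead
    std::vector<Node> pool(n);

    // Link the nodes into one cycle in random order, so hardware
    // prefetching cannot predict the next line.
    std::vector<std::size_t> order(n);
    std::iota(order.begin(), order.end(), 0);
    std::shuffle(order.begin(), order.end(), std::mt19937{42});
    for (std::size_t i = 0; i < n; ++i)
        pool[order[i]].next = &pool[order[(i + 1) % n]];

    // Chase the chain: every load depends on the previous one, so the
    // average time per hop approximates the unloaded access latency.
    Node* p = &pool[order[0]];
    auto t0 = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < n; ++i) p = p->next;
    auto t1 = std::chrono::steady_clock::now();

    std::cout << std::chrono::duration<double, std::nano>(t1 - t0).count() / n
              << " ns per access (" << p << ")\n";  // print p so the loop is not optimized away
    return 0;
}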
We measured unloaded latency using a 2.6 GHz single-core Opteron processor and a 2.2 GHz dual-core Opteron processor. These processors were selected because they represent the fastest processors of each type available at this time. Based on the processor frequencies alone, one would expect the 2.6 GHz single-core processor to have lower latencies to cache and to memory, and from the figure we can see that this is the case. The 2.6 GHz processor has a latency to L1 cache of about 1.2 nanoseconds, whereas the L1 cache latency of the 2.2 GHz processor is about 1.45 nanoseconds. This corresponds exactly to the difference one would expect based on the frequency difference alone. The L2 cache latencies are 5.6 and 6.6 nanoseconds, respectively, which also show perfect frequency scaling.
The surprise is that for each level in the memory hierarchy there are multiple latencies that may occur. Latencies are not spread randomly across a range, but gravitate to specific discrete values. This suggests that each latency value reflects a different path through the system, but at this time the idea is only speculation. It definitely bears further investigation. One observation is that the benchmark left memory affinitization implicitly to the operating system, so it is possible that the higher latencies are references to remote memory, while the lower latencies are to local memory. If true this would reflect an unexpected failure on the part of the benchmark or the operating system, and it would not explain why there are multiple latencies to cache. As mentioned before this will be investigated further, but what is important is to note that multiple latencies do happen.
The next few charts, Figure 5 and Figure 6, show the memory throughput for sequential memory accesses. The surprise is that memory throughput is sometimes better for dual-core than for single-core. It has been speculated that dual-core would have lower memory throughput because of its lower clock frequency, and it was thought that the additional cores would cause additional memory contention. Figure 5 shows that the lower frequency clearly does have an effect when only a single processor or core is being used. But this should occur infrequently, seeing how these systems are used as heavily loaded servers rather than as lightly loaded workstations.
Figure 7: Single Processor e326 Memory Throughput
Memory contention, if it occurred, would be seen in Figure 6. In this figure each system is loaded with a single thread per core that generates memory references as fast as it is able, one write operation for every two read operations. The operating system allocates physical memory in such a way that references access local memory. If memory contention were occurring, the dual-core system would have lower throughput than the single-core system. But we can see from Figure 6 that the dual-core system has higher throughput, not lower, so we conclude something different is occurring. Instead of causing contention, the additional load allows the memory controller to operate more effectively, doing a better job of scheduling the commands to memory. So even though the memory controller is clocked more slowly and the memory it's connected to is the same, it is still able to achieve up to 10% higher bandwidth.
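The load just described, each thread streaming through its own memory and issuing one write for every two reads, resembles a STREAM-style add kernel and can be sketched as follows (a hedged reconstruction, not the authors' benchmark; the array sizes are arbitrary so long as they exceed all caches):

#include <chrono>
#include <cstddef>
#include <functional>
#include <iostream>
#include <thread>
#include <vector>

// c[i] = a[i] + b[i]: two reads and one write per element.
void stream_add(const std::vector<double>& a, const std::vector<double>& b,
                std::vector<double>& c) {
    for (std::size_t i = 0; i < a.size(); ++i)
        c[i] = a[i] + b[i];
}

int main() {
    const std::size_t n = 1 << 22;   // ~32 MB per array, well beyond cache
    unsigned cores = std::thread::hardware_concurrency();
    if (cores == 0) cores = 2;       // fall back if the count is unknown

    // Private arrays per thread, so the threads contend only for the
    // memory controller, as in the experiment described above.
    std::vector<std::vector<double>> a(cores, std::vector<double>(n, 1.0));
    std::vector<std::vector<double>> b(cores, std::vector<double>(n, 2.0));
    std::vector<std::vector<double>> c(cores, std::vector<double>(n, 0.0));

    auto t0 = std::chrono::steady_clock::now();
    std::vector<std::thread> threads;
    for (unsigned t = 0; t < cores; ++t)
        threads.emplace_back(stream_add, std::cref(a[t]), std::cref(b[t]), std::ref(c[t]));
    for (auto& th : threads) th.join();
    auto t1 = std::chrono::steady_clock::now();

    double secs = std::chrono::duration<double>(t1 - t0).count();
    double bytes = 3.0 * n * sizeof(double) * cores;  // 2 reads + 1 write per element
    std::cout << bytes / secs / 1e9 << " GB/s aggregate\n";
    return 0;
}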
V. Hyper-Threading vs. Dual-Core
HT Technology is limited to a single core using existing execution resources more efficiently to better enable threading, whereas multi-core capability provides two or more complete sets of execution resources to increase compute throughput. Any application that has been optimized for HT Technology will deliver great performance when run on an Intel dual-core processor-based system. So users will be able to take advantage of many existing applications that are already optimized for two threads from the earliest days of Intel's dual-core ramp.
Multi-core chips do more work per clock cycle, and thus can be designed to operate at lower frequencies, than their single-core counterparts. Since dynamic power consumption rises with frequency, multi-core architecture gives engineers the means to address the problem of runaway power and cooling requirements. Multi-core processing may enhance the user experience in many ways, such as improving the performance of compute- and bandwidth-intensive activities, boosting the number of PC tasks that can be performed simultaneously and increasing the number of users that can utilize the same PC or server. The architecture's flexibility will scale out to meet the new usage models that are bound to arise as the volume of digitized data keeps growing.
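The frequency argument can be made concrete with the standard first-order model for dynamic CMOS power (the numbers below are purely illustrative, not from the report):

    P_{\mathrm{dyn}} \approx \alpha \, C \, V^{2} f

where \alpha is the activity factor, C the switched capacitance, V the supply voltage and f the clock frequency. If a second core is added and both cores run at 80% of the original frequency, and the lower frequency permits the supply voltage to drop by about 15%, each core dissipates roughly 0.8 \times 0.85^{2} \approx 0.58 of the original power. The two cores together then draw about 1.16 times the single core's power while offering up to 2 \times 0.8 = 1.6 times the throughput on well-threaded workloads.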
VI. New Benefits for Both Home and Business
A multitasking scenario can be as simple as a home user editing photos while recording a TV show through a digital video recorder, while a child in another room of the house streams a music file from the PC. In a business setting, users could increase their productivity by performing multiple tasks more efficiently, such as working on several office-related applications while the system runs virus software in the background. Keep reading for specific scenarios.
In the Digital Enterprise
Today the steady increase in the density of systems in data centers is creating power and cooling challenges for many IT organizations. Part of the answer will be Intel multi-core server platforms. By enabling a single processor form factor to serve multiple processor cores, these platforms will provide superior performance and scalability while remaining relatively constant in power consumption, heat and space requirements. As a result, more processing capacity can be concentrated into fewer servers. This means greater density and fewer servers to manage.
In the Digital Office
Multi-core processors hold the promise of continuing the enormous increases in computer performance seen over the last quarter century. What will this performance mean for office productivity? Graphic designers, for instance, can render images much more quickly on multi-core systems. The greater responsiveness of multi-core platforms translates into less waiting for everyone in the digital office. For people like stockbrokers this could literally mean dollars, as their computers enable more-informed investment decisions and faster trades.
In the Digital Home
The digital home, with ever-growing numbers of networked PC and consumer electronics devices, will increasingly depend on the multitasking capabilities of multi-core processors to handle the demands of orchestrating the different networked TVs, stereos, cameras, and other devices and appliances in the household. Multi-core is also taking gaming to a whole new level and will make multiparty gaming ubiquitous. Tomorrow's computers will be powerful enough to run multiparty gaming and collaboration on their own. No longer will games have to be housed in huge servers; they will be distributed across the Internet. That should enable greater proliferation and access, plus inspire new forms of games and collaboration.
For Mobile Users
Intel Centrino mobile technology has taken mobile computing to places we never dreamed of. Who would have envisioned working remotely in a coffee shop via Wi-Fi just 5 years ago? Adding multi-core processors to the mobile mix will expand horizons even more. Incredible new mobile technologies will enable doctors in cities to remotely diagnose patients living in isolated locations, and that's just one scenario. You can be sure there are others we haven't even dreamed of yet.
Some other important advantages:
• Multi-tasking productivity: multi-core processor PC users will experience exceptional performance while executing multiple tasks simultaneously. The ability to run complex, multi-tasked workloads, such as creating professional digital content while checking and writing e-mails in the foreground, and also running firewall software or downloading audio files off the Web in the background, will allow consumers and workers to do more work in less time.
• PC security can be enhanced because multi-core processors can run more sophisticated virus, spam, and hacker protection in the background without performance penalties.
• Cool and quiet: the enhanced performance offered by multi-core processors will come without the additional heat and fan noise that would likely accompany performance increases with single-core processor machines.
VII. Conclusions
The appearance of dual-core processors, and of solutions with additional new features, once again indicates that the clock frequency race is no longer the main battleground. I believe that in the near future we will not select processors according to clock rate alone but will take into account a whole range of characteristics: not only the working frequency, but also the number of processor cores, cache memory size, support for additional technologies, and maybe more.
As time passes and OS and software developers adapt to the new working conditions, multi-core processor architectures can lead the industry to new performance levels.