3D TV Technology Full Report
#1

[attachment=1998]
[attachment=1999]

A 3D television is a television that employs techniques of 3D presentation, such as stereoscopic capture, multi-view capture, or 2D-plus-depth, and a 3D display, a special viewing device that projects a television program into a realistic three-dimensional field. The 3D video camera to be developed is based on ZCam™; its operation is based on generating a "light wall" that moves along the field of view. Stereoscopy is the most widely accepted method for capturing and delivering 3D video. It involves capturing stereo pairs in a two-view setup, with cameras mounted side by side and separated by the same distance as a person's pupils. The light-wall procedure is as follows (a small depth-recovery sketch follows the list):
(a) Light wall moving from camera to scene
(b) Imprinted light wall back to camera
(c) Truncated light wall containing depth information from the source.
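The depth recovery behind the light-wall idea can be illustrated with the basic time-of-flight relation. The following Python sketch is illustrative only; it assumes the per-pixel round-trip delay is already measured, whereas the real ZCam infers it from a fast gated shutter.

```python
# Hypothetical sketch: recovering depth from the round-trip time of a
# reflected "light wall" pulse (time-of-flight principle). The real ZCam
# gates the returning pulse with a fast shutter; here the round-trip
# delay per pixel is simply assumed to be known.

C = 299_792_458.0  # speed of light, m/s

def depth_from_round_trip(delay_seconds: float) -> float:
    """Depth = (speed of light x round-trip delay) / 2."""
    return C * delay_seconds / 2.0

if __name__ == "__main__":
    # Example: a pulse returning after 20 ns corresponds to roughly 3 m depth.
    for delay_ns in (10, 20, 40):
        d = depth_from_round_trip(delay_ns * 1e-9)
        print(f"round trip {delay_ns} ns -> depth {d:.2f} m")
```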

ADVANTAGES
• Shows high-resolution (1024 × 768 pixel) stereoscopic color images for multiple viewpoints without special glasses.
• Is completely scalable and backward compatible in the number of acquired, transmitted and displayed views.
• The large number of views (16) and the large physical dimensions (6′ × 4′) of the display lead to a very immersive 3D experience.
• The projector-based 3D display has a native resolution of 12 million pixels, which is greater than the largest currently available high-resolution flat panel, the IBM T221 LCD with 9 million pixels.
• The overall delay in the system, from acquisition to display, is less than one second.


DISADVANTAGES
• The graphics cards and projectors are not synchronized, which leads to increased motion blur for fast movements in the scene.
• The rear-projection system has lower image quality than the front-projection system.
• Eye strain, headache and other unpleasant side effects.


FUTURE WORK
• Most of the key ideas for the 3D TV system presented in this paper have been known for decades.
• There is still much that can be done to improve the quality, sharpness and optical characteristics of the 3D display.
• High-dynamic-range cameras have yet to be developed commercially.
• True high-dynamic-range displays have also been developed.
#2
[attachment=3175]

Presented By:
Peerbasha.P
3D TELEVISION

Abstract---Three-dimensional TV is expected to be the next revolution in TV history. They implemented a 3D TV prototype system with real-time acquisition, transmission, & 3D display of dynamic scenes. They developed a distributed, scalable architecture to manage the high computation & bandwidth demands. The 3D display shows high-resolution stereoscopic color images for multiple viewpoints without special glasses. This is the first real-time end-to-end 3D TV system with enough views & resolution to provide a truly immersive 3D experience. Japan plans to make this futuristic television a commercial reality by 2020 as part of a broad national project that will bring together researchers from the government, technology companies and academia. The targeted "virtual reality" television would allow people to view high-definition images in 3D from any angle, in addition to being able to touch and smell the objects being projected upwards from a screen to the floor.
Keywords--- parallax, display, perception, holographic images
I. INTRODUCTION
Three-dimensional TV is expected to be the next revolution in TV history. They implemented a 3D TV prototype system with real-time acquisition, transmission, & 3D display of dynamic scenes. They developed a distributed, scalable architecture to manage the high computation & bandwidth demands. The 3D display shows high-resolution stereoscopic color images for multiple viewpoints without special glasses. This is the first real-time end-to-end 3D TV system with enough views & resolution to provide a truly immersive 3D experience.
Why 3D TV
The evolution of visual media such as cinema and television is one of the major hallmarks of our modern civilization. In many ways, these visual media now define our modern life style. Many of us are curious: what is our life style going to be in a few years? What kind of films and television are we going to see? Although cinema and television both evolved over decades, there were stages, which, in fact, were once seen as revolutions:
1) at first, films were silent, then sound was added;
2) cinema and television were initially black-and-white, then color was introduced;
3) computer imaging and digital special effects have been the latest major novelty.
II. BASICS OF 3D TV
Humans gain three-dimensional information from a variety of cues. Two of the most important ones are binocular parallax & motion parallax.
A. Binocular Parallax
It means that for any point you fixate on, the images in the two eyes must be slightly different. Yet the two different images still allow us to perceive a stable visual world. Binocular parallax refers to the ability of the eyes to see a solid object and a continuous surface behind that object even though the eyes see two different views.
B. Motion Parallax
It refers to the changes in the image at the retina caused by the relative movement of objects as the observer moves to the side (or moves his head sideways). Motion parallax varies depending on the distance of the observer from the objects. The observer's movement also causes occlusion (the covering of one object by another), and as the movement changes, so does the occlusion. This can give a powerful cue to the distance of objects from the observer.
C. Depth perception
It is the visual ability to perceive the world in three dimensions. It is a trait common to many higher animals. Depth perception allows the beholder to accurately gauge the distance to an object. The small distance between our eyes gives us stereoscopic depth perception [7]. The brain combines the two slightly different images into one 3D image. It works most effectively for distances up to 18 feet. For objects at a greater distance, our brain uses relative size and motion to determine depth. As shown in the figure, each eye captures its own view and the two separate images are sent on to the brain for processing. When the two images arrive simultaneously in the back of the brain, they are united into one picture. The mind combines the two images by matching up the similarities and adding in the small differences. The small differences between the two images add up to a big difference in the final picture! The combined image is more than the sum of its parts. It is a three-dimensional stereo picture.


Fig.3.1 Depth Perception
D. Stereographic Images
It means two pictures taken with a spatial or time separation that are then arranged to be viewed simultaneously [5]. When so viewed, they provide the sense of a three-dimensional scene using the innate capability of the human visual system to detect three dimensions. As you can see, a stereoscopic image is composed of a right perspective frame and a left perspective frame, one for each eye. When your right eye views the right frame and your left eye views the left frame, your brain will perceive a true 3D view.
Figure 2 shows the stereographic images.
E. Stereoscope
It is an optical device for creating stereoscopic (or three-dimensional) effects from flat (two-dimensional) images; D. Brewster first constructed the stereoscope in 1844. It is provided with lenses, under which two equal images are placed, so that one is viewed with the right eye and the other with the left [5]. Observed at the same time, the two images merge into a single virtual image, which, as a consequence of our binocular vision, appears to be three-dimensional.
F. Holographic Images
A holographic image is a luminous, 3D, transparent, colored and non-material image appearing out of a 2D medium, called a hologram. A holographic image cannot be viewed without the proper lighting.
III. ARCHITECTURE OF 3D TV
Figure 5 shows the schematic representation of 3D TV system.
The whole system consists mainly of three blocks:
1. Acquisition
2. Transmission
3. Display Unit
A. Acquisition
The acquisition stage consists of an array of hardware-synchronized cameras. Small clusters of cameras are connected to the producer PCs. The producers capture live, uncompressed video streams & encode them using standard MPEG coding. The compressed video is then broadcast on separate channels over a transmission network, which could be digital cable, satellite TV or the Internet.
They use 16 Basler A101fc color cameras with 1300×1030, 8-bit-per-pixel CCD sensors.
1) CCD Image Sensors: Charge coupled devices are electronic devices that are capable of transforming a light pattern (image) into an electric charge pattern (an electronic image).
Figure 6 shows CCD sensors.
Fig.5.2 CCD Image Sensor
2) MPEG-2 Encoding: MPEG-2 is an extension of the MPEG-1 international standard for digital compression of audio and video signals. MPEG-2 is directed at broadcast formats at higher data rates; it provides extra algorithmic 'tools' for efficiently coding interlaced video, supports a wide range of bit rates and provides for multichannel surround sound coding. MPEG-2 aims to be a generic video coding system supporting a diverse range of applications. They have built a PCI card with a custom programmable logic device (CPLD) that generates the synchronization signal for all the cameras. So, what is a PCI card?
3) PCI Card:
Fig.5.3 PCI Card
There's one element that often escapes notice: the bus. Essentially, a bus is a channel or path between the components in a computer. We will concentrate on the bus known as the Peripheral Component Interconnect (PCI). We'll talk about what PCI is, how it operates and how it is used, and we'll look into the future of bus technology.
All 16 cameras are individually connected to the card, which is plugged into one of the producer PCs. Although it is possible to use software synchronization, they consider precise hardware synchronization essential for dynamic scenes. Note that the price of the acquisition cameras can be high, since they will mostly be used in TV studios. They arranged the 16 cameras in a regularly spaced linear array.
Fig.5.4 Arrays of 16 Cameras
B. Transmission
Transmitting 16 uncompressed video streams with 1300×1030 resolution & 24 bits per pixel at 30 frames per second requires 14.4 Gb/sec bandwidth, which is well beyond current broadcast capabilities. For compression & transmission of dynamic multiview video data there are two basic design choices. Either the data from multiple cameras is compressed using spatial or spatio-temporal encoding, or each video stream is compressed individually using temporal encoding. The first option offers higher compression, since there is a lot of coherence between the views. However, it requires that a centralized processor compress multiple video streams. This compression-hub architecture is not scalable, since the addition of more views will eventually overwhelm the internal bandwidth of the encoder. So, they decided to use temporal encoding of individual video streams on distributed processors. This strategy has other advantages. Existing broadband protocols & compression standards do not need to be changed for immediate real-world 3D TV experiments. This system can plug into today's digital TV broadcast infrastructure & co-exist in perfect harmony with 2D TV. Since they did not have access to digital broadcast equipment, they implemented the modified architecture shown in figure 9.
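As a quick sanity check on the quoted figure, the raw data rate can be recomputed from the numbers above; the sketch below is plain arithmetic, and the 14.4 value lines up once gigabits are counted as 2^30 bits.

```python
# Back-of-the-envelope check of the raw multiview data rate (arithmetic only).
cameras = 16
width, height = 1300, 1030
bits_per_pixel = 24
fps = 30

bits_per_second = cameras * width * height * bits_per_pixel * fps
print(f"{bits_per_second / 1e9:.1f} Gb/s (decimal gigabits)")   # ~15.4 Gb/s
print(f"{bits_per_second / 2**30:.1f} Gb/s (binary gigabits)")  # ~14.4 Gb/s, matching the figure above
print(f"{bits_per_second / cameras / 1e6:.0f} Mb/s per camera before MPEG-2 compression")
```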
Eight producer PCs are connected by gigabit Ethernet to eight consumer PCs. Video streams at full camera resolution (1300×1030) are encoded with MPEG-2 & immediately decoded on the producer PCs. This essentially corresponds to a broadband network with infinite bandwidth & almost zero delay. The gigabit Ethernet provides all-to-all connectivity between decoders & consumers, which is important for the distributed rendering & display implementation. So, what is gigabit Ethernet?
Fig.5.5 Modified System
1) Gigabit Ethernet: It is a transmission technology that delivers enhanced network performance. Gigabit Ethernet is a high-speed form of Ethernet (the most widely installed LAN technology) that can provide data transfer rates of about 1 gigabit per second (Gbps). Gigabit Ethernet provides the capacity for server interconnection, campus backbone architecture and the next generation of super user workstations, with a seamless upgrade path from existing Ethernet implementations.
2) Decoder & Consumer Processing: The receiver side is responsible for generating the appropriate images to be displayed. The system needs to be able to provide all possible views to the end users at every instant. The decoder receives a compressed video stream, decodes it, and stores the current uncompressed source frame in a buffer as shown in figure 10. Each consumer has a virtual video buffer (VVB) with data from all current source frames (i.e., all acquired views at a particular time instant).
Fig.5.6 Block Diagram of Decoder and Consumer processing
The consumer then generates a complete output image by processing image pixels from multiple frames in the VVB. Due to bandwidth & processing limitations it would be impossible for each consumer to receive the complete set of source frames from all the decoders. This would also limit the scalability of the system. Here there is a one-to-one mapping between cameras & projectors.
IV. MULTIVIEW AUTO STEREOSCOPIC DISPLAY
A. Holographic Displays
It is widely acknowledged that Dennis Gabor invented the hologram in 1948 while he was working on an electron microscope. He coined the word and received a Nobel Prize for inventing holography in 1971. The holographic image is truly three-dimensional: it can be viewed from different angles without glasses.
Figure shows the holographic image.
Fig.6.1 Holographic Image
All current holo-video devices use single-color laser light. To reduce the amount of display data they provide only horizontal parallax. The display hardware is very large in relation to the size of the image. So this cannot be done in real time.
B. Holographic Movies
We have developed the world's first holographic equipment with the capability of projecting genuine 3-dimensional holographic films as well as holographic slides and real objects, for multiple viewers simultaneously. Our holographic technology was primarily designed for cinema.
C. Volumetric Displays
Volumetric displays use a medium to fill or scan a three-dimensional space & individually address & illuminate small voxels. However, volumetric systems produce transparent images that do not provide a fully convincing three-dimensional experience. Furthermore, they cannot correctly reproduce the light field of a natural scene because of their limited color reproduction & lack of occlusions. The design of large-size volumetric displays also poses some difficult obstacles.
D.Parallax Displays
Parallax displays emit spatially varying directional light. Much of the early 3D display research focused on improvements to Wheatstone's stereoscope. In 1903, F. Ives used a plate with vertical slits as a barrier over an image with alternating strips of left-eye/right-eye images. The resulting device is called a parallax stereogram. To extend the limited viewing angle & restricted viewing position of the stereogram, Kanolt & H. Ives used narrower slits & a smaller pitch between the alternating image strips. These multiview images are called parallax panoramagrams. Stereograms & panoramagrams provide only horizontal parallax. Lippmann proposed using an array of spherical lenses instead of slits. This is frequently called a "fly's eye" lens sheet, & the resulting image is called an integral photograph. An integral is a true planar light field with directionally varying radiance per pixel. Integrals sacrifice significant spatial resolution in both dimensions to gain full parallax. Researchers in the 1930s introduced the lenticular sheet, a linear array of narrow cylindrical lenses called lenticules. Lenticular images found widespread use for advertising, CD covers, & postcards. To improve the native resolution of the display, H. Ives invented the multi-projector lenticular display in 1931. He painted the back of a lenticular sheet with diffuse paint & used it as a projection surface for 39 slide projectors. Finally, the high output resolution, the large number of views & the large physical dimensions of our display lead to a very immersive 3D display. Other research in parallax displays includes time-multiplexed & tracking-based systems. In time multiplexing, multiple views are projected at different time instances using a sliding window or LCD shutter. This inherently reduces the frame rate of the display & may lead to noticeable flickering. Head-tracking designs are mostly used to display stereo images, although they could also be used to introduce some vertical parallax in multiview lenticular displays. Today's commercial autostereoscopic displays use variations of parallax barriers or lenticular sheets placed on top of LCD or plasma screens. Parallax barriers generally reduce some of the brightness & sharpness of the image. Here, this projector-based 3D display currently has a native resolution of 12 million pixels.
Fig.6.2 Images of a scene from the viewer side of the display (top row) and
as seen from some of the cameras (bottom row).
E. Multi-Projector Displays
These displays offer very high resolution, flexibility, excellent cost performance, scalability, & large-format images. Graphics rendering for multi-projector systems can be efficiently parallelized on clusters of PCs using, for example, the Chromium API. Projectors also provide the necessary flexibility to adapt to non-planar display geometries. Precise manual alignment of the projector array is tedious & becomes downright impossible for more than a handful of projectors or for non-planar screens. Some systems use cameras in the loop to automatically compute relative projector poses for automatic alignment. Here they use a static camera for automatic image alignment & brightness adjustment of the projectors.
V. 3D DISPLAY
This is a brief explanation that we hope sorts out some of the confusion about the many 3D display options that are available today. We'll tell you how they work, and what the relative tradeoffs of each technique are. Those of you who are just interested in comparing different liquid crystal shutter glasses techniques can skip to the section at the end. Of course, we are always happy to answer your questions personally, and point you to other leading experts in the field [4]. The figure shows a diagram of the multi-projector 3D display with lenticular sheets.
Fig.7.1 Projection-type lenticular 3D displays
They use 16 NEC LT-170 projectors with 1024×768 native output resolution. This is less than the resolution of the acquired & transmitted video, which has 1300×1030 pixels. However, HDTV projectors are much more expensive than commodity projectors, and commodity projectors have a compact form factor. Of the eight consumer PCs, one is dedicated as the controller. The consumers are identical to the producers except for a dual-output graphics card that is connected to two projectors. The graphics card is used only as an output device. For the rear-projection system shown in the figure, two lenticular sheets are mounted back-to-back with optical diffuser material in the center. The front-projection system uses only one lenticular sheet with a retro-reflective front-projection screen material made from flexible fabric mounted on the back. Photographs show the rear and front projection.
Fig.7.2 Rear Projection and Front Projection
The projection-side lenticular sheet of the rear-projection display acts as a light multiplexer, focusing the projected light as thin vertical stripes onto the diffuser. A close-up of the lenticular sheet is shown in figure 6. Considering each lenticule to be an ideal pinhole camera, the stripes capture the view-dependent radiance of a three-dimensional light field. The viewer-side lenticular sheet acts as a light de-multiplexer & projects the view-dependent radiance back to the viewer. The single lenticular sheet of the front-projection screen both multiplexes & demultiplexes the light. The two key parameters of lenticular sheets are the field of view (FOV) & the number of lenticules per inch (LPI). Here, 72″ × 48″ lenticular sheets with a 30-degree FOV & 15 LPI are used. The optical design of the lenticules is optimized for multiview 3D display. The number of viewing zones of a lenticular display is related to its FOV; for example, a FOV of 30 degrees leads to 180/30 = 6 viewing zones.
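A small sketch of the geometry implied by these numbers (the 180/FOV rule and the sheet dimensions are taken directly from the figures quoted above):

```python
# Lenticular screen geometry, using the figures quoted above.
fov_degrees = 30          # field of view of each lenticule
lenticules_per_inch = 15  # LPI
sheet_width_inches = 72   # 72" x 48" sheet
num_views = 16            # one view stripe per projector behind each lenticule

viewing_zones = 180 // fov_degrees                           # 180/30 = 6 zones, as stated
lenticule_count = sheet_width_inches * lenticules_per_inch   # 1080 lenticules across the sheet
view_stripes = lenticule_count * num_views                   # stripes the projectors must lay down

print(viewing_zones, lenticule_count, view_stripes)          # 6 1080 17280
```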
VI. CONCLUSION
Most of the key ideas for 3D TV systems presented in this paper have been known for decades, such as lenticular screens, multi-projector 3D displays, and camera arrays for acquisition. This system is the first to provide enough viewpoints and enough pixels per viewpoint to produce an immersive and convincing 3D experience. One area of future research is to improve the optical characteristics of the 3D display computationally; this concept is called a computational display. Another area of future research is precise color reproduction of natural scenes on a multiview display.
REFERENCES
[1] An Assessment of 3DTV Technologies, Levent Onural (Bilkent Univ.), Thomas Sikora (Tech. Univ. of Berlin), Jorn Ostermann (Univ. of Hanover), Aljoscha Smolic (Fraunhofer Inst. HHI), M. Reha Civanlar (Koc Univ.), John Watson (Univ. of Aberdeen), NAB 2006, Las Vegas, 26 April 2006. © Copyright 2006.
[2] T. Capin, K. Pulli, and T. Akenine-Moller, "The State of the Art in Mobile Graphics Research," IEEE Computer Graphics and Applications, vol. 28, no. 4, pp. 74-84, 2008.
[3] K. Muller, P. Merkle, and T. Wiegand, "Compressing 3D Visual Content," IEEE Signal Processing Magazine, vol. 24, no. 6, pp. 58-65, November 2007.
[4] T. Okoshi, "Three Dimensional Displays," Proceedings of the IEEE, vol. 68, pp. 548-564, 1980.
[5] I. Sexton and P. Surman, "Stereoscopic and Autostereoscopic Display Systems," IEEE Signal Processing Magazine, vol. 16, no. 3, pp. 85-99, 1999.
[6] C. Fehn, P. Kauff, M. Op De Beeck, F. Ernst, W. IJsselsteijn, M. Pollefeys, L. Van Gool, E. Ofek, and I. Sexton, "An Evolutionary and Optimized Approach on 3D-TV," Proc. of International Broadcast Conference, 2002.
[7] C. Fehn, "A 3D-TV Approach Using Depth-Image-Based Rendering (DIBR)," Proc. of VIIP 2003.
TABLE OF CONTENTS
1. INTRODUCTION
2. BASICS OF 3D TV
3. ARCHITECTURE OF 3D TV
4. STEREOGRAPHIC DISPLAY
5. 3D DISPLAY
6. CONCLUSION
7. REFERENCES
#3
I want a seminar report on 3D TV. Please mail me the appropriate and correct info on 3D.
#4
[attachment=4055]

Project Aims to Create 3D Television by 2020

Tokyo - Imagine watching a football match on a TV that not only shows the players in three dimensions but also lets you experience the smells of the stadium and maybe even pat a goal scorer on the back.
Japan plans to make this futuristic television a commercial reality by 2020 as part of a broad national project that will bring together researchers from the government, technology companies and academia.
The targeted "virtual reality" television would allow people to view high definition images in 3D from any angle, in addition to being able to touch and smell the objects being projected upwards from a screen to the floor.
"Can you imagine hovering over your TV to watch Japan versus Brazil in the finals of the World Cup as if you are really there?" asked Yoshiaki Takeuchi, development at Japan's Ministry of Internal Affairs and Communications.
While companies, universities and research institutes around the world have made some progress on reproducing 3D images suitable for TV, developing the technologies to create the sensations of touch and smell could prove the most challenging, Takeuchi said in an interview with Reuters.
Researchers are looking into ultrasound, electric stimulation and wind pressure as potential technologies for touch.
Such a TV would have a wide range of potential uses. It could be used in home-shopping programs, allowing viewers to "feel" a handbag before placing their order, or in the medical industry, enabling doctors to view or even perform simulated surgery on 3D images of someone's heart.
The future TV is part of a larger national project under which Japan aims to promote "universal communication," a concept whereby information is shared smoothly and intelligently regardless of location or language.

Takeuchi said an open forum covering a broad range of technologies related to universal communication, such as language translation and advanced Web search techniques, could be established by the end of this year.
Researchers from several top firms including Matsushita Electric Industrial Co. Ltd. and Sony Corp. are members of a panel that issued a report on the project last month.
The ministry plans to request a budget of more than 1 billion yen to help fund the project in the next fiscal year starting in April 2006.

INTRODUCTION

Three-dimensional TV is expected to be the next revolution in TV history. They implemented a 3D TV prototype system with real-time acquisition, transmission, & 3D display of dynamic scenes. They developed a distributed, scalable architecture to manage the high computation & bandwidth demands. The 3D display shows high-resolution stereoscopic color images for multiple viewpoints without special glasses. This is the first real-time end-to-end 3D TV system with enough views & resolution to provide a truly immersive 3D experience.

2.1 Why 3D TV

The evolution of visual media such as cinema and television is one of the major hallmarks of our modern civilization. In many ways, these visual media now define our modern life style. Many of us are curious: what is our life style going to be in a few years? What kind of films and television are we going to see? Although cinema and television both evolved over decades, there were stages, which, in fact, were once seen as revolutions:
1) at first, films were silent, then sound was added;
2) cinema and television were initially black-and-white, then color was introduced;
3) computer imaging and digital special effects have been the latest major novelty.
So the question is: what is the next revolution in cinema and television going to be?
If we look at these stages precisely, we can notice that all types of visual media have been evolving closer to the way we see things in real life. Sound, colors and computer graphics brought a good part of it, but in real life we constantly see objects around us at close range, we sense their location in space, and we see them from different angles as we change position. This has not been possible in ordinary cinema. Movie images lack true dimensionality and limit our sense that what we are seeing is real.
Nearly a century ago, in the 1920s, the great film director Sergei Eisenstein said that the future of cinematography was the 3D motion picture. Many other cinema pioneers thought the same way. Even the Lumiere brothers experimented with three-dimensional (stereoscopic) images using two films painted in red and blue (or green) colors and projected simultaneously onto the screen. Viewers saw stereoscopic images through glasses painted in the opposite colors. But the resulting image was black-and-white, as in the first feature stereoscopic film "The Power of Love" (1922, USA, dir. H. Fairall).

Basics of 3D TV

Humans gain three-dimensional information from a variety of cues. Two of the most important ones are binocular parallax & motion parallax.
3.1 Binocular Parallax
It means that for any point you fixate on, the images in the two eyes must be slightly different. Yet the two different images still allow us to perceive a stable visual world. Binocular parallax refers to the ability of the eyes to see a solid object and a continuous surface behind that object even though the eyes see two different views.

3.2 Motion Parallax

It refers to the changes in the image at the retina caused by the relative movement of objects as the observer moves to the side (or moves his head sideways). Motion parallax varies depending on the distance of the observer from the objects. The observer's movement also causes occlusion (the covering of one object by another), and as the movement changes, so does the occlusion. This can give a powerful cue to the distance of objects from the observer. For example, when you are sitting in a train, the trees appear to move in the direction opposite to you. Wheatstone was able to scientifically prove the link between parallax & depth perception using a stereoscope, the world's first three-dimensional display device. So, there may be a question in your mind: what are depth perception, stereoscopic images & stereoscopes? Let's understand these terms.

3.2.1 Depth perception

It is the visual ability to perceive the world in three dimensions. It is a trait common to many higher animals. Depth perception allows the beholder to accurately gauge the distance to an object.
The small distance between our eyes gives us stereoscopic depth perception. The brain combines the two slightly different images into one 3D image. It works most effectively for distances up to 18 feet. For objects at a greater distance, our brain uses relative size and motion to determine depth. Depth perception is, in short, the ability to distinguish objects in a visual field. Figure 1 shows depth perception.
As shown in the figure, each eye captures its own view and the two separate images are sent on to the brain for processing. When the two images arrive simultaneously in the back of the brain, they are united into one picture. The mind combines the two images by matching up the similarities and adding in the small differences. The small differences between the two images add up to a big difference in the final picture! The combined image is more than the sum of its parts. It is a three-dimensional stereo picture.
The word "stereo" comes from the Greek word "stereos", which means firm or solid. With stereo vision you see an object as solid in three spatial dimensions (width, height and depth, or x, y and z). It is the added perception of the depth dimension that makes stereo vision so rich and special.

3.2.2 Stereographic Images

It means two pictures taken with a spatial or time separation that are then arranged to be viewed simultaneously. When so viewed they provide the sense of a three-dimensional scene using the innate capability of the human visual system to detect three dimensions. Figure 2 shows the stereographic images.
As you can see, a stereoscopic image is composed of a right perspective frame and a left perspective frame - one for each eye.
When your right eye views the right frame and the left frame is viewed by your left eye, your brain will perceive a true 3D view.

Fig.3.3 Stereoscopes

3.2.3 Stereoscope

It is an optical device for creating stereoscopic (or three dimensional) effects from flat (two-dimensional) images; D.Brewster first constructed the stereoscope in 1844. It is provided with lenses, under which two equal images are placed, so that one is viewed with the right eye and the other with the left. Observed at the same time, the two images merge into a single virtual image, which, as a consequence of our binocular vision, appears to be three-dimensional.
For those wondering what "stereoscopic" is all about, viewing stereoscopic images gives an enhanced depth perception. This is similar to the depth perception we get in real life, the same effect IMAX 3D and many computer games now provide.

3.3 Holographic Images

A holographic image is a luminous, 3D, transparent, colored and non-material image appearing out of a 2D medium, called a hologram. A holographic image cannot be viewed without the proper lighting. Holographic images can be viewed in virtual space (behind the film plane), in real space (in front of the film plane), or in both at once. They may be orthoscopic, that is, have the same appearance of depth and parallax as the original 3D image, or pseudoscopic, in which the scene depth is inverted. Holographic images do not create a shadow, since they are non-material, and they can only be viewed under suitable lighting.


OVERVIEW OF THE SYSTEM

3D video usually refers to stored animated sequences, whereas 3D TV includes real-time acquisition, coding & transmission of the dynamic scene. In this seminar we present the first end-to-end 3D TV system with 16 independent high-resolution views & an autostereoscopic display. They have used hardware-synchronized cameras to capture multiple perspective scenes. They have developed a fully distributed architecture with clusters of PCs on the sender & receiver sides. The system is scalable in the number of acquired, transmitted, & displayed video streams. The system architecture is flexible enough to enable a broad range of research in 3D TV. This system provides enough viewpoints & enough pixels per viewpoint to produce a believable & immersive 3D experience. The system makes the following contributions:
1. Distributed architecture
2. Scalability
3. Multiview video rendering
4. High-resolution 3D display
5. Computational alignment for 3D display

4.1 Model Based System

One approach to 3D TV is to acquire multiview video from sparsely arranged cameras & to use some model of the scene for view interpolation.
Typical scene models are per-pixel depth maps, the visual hull, or a prior model of the acquired objects, such as human body shapes as shown in the figure 4.
It has been shown that even coarse scene models improve the image quality during view synthesis. It is possible to achieve very high image quality with a two-layer image representation that includes automatically extracted boundary mattes near depth discontinuities. The Blue-C system consists of a room-sized environment with real-time capture & a spatially immersive display. All 3D video systems provide the ability to interactively control the viewpoint, a feature that has been termed free-viewpoint video by the MPEG Ad-Hoc Group on 3D Audio & Video (3DAV). Real-time acquisition of scene models for general, real-world scenes is very difficult. Many systems do not provide real-time end-to-end performance, and if they do, they are limited to simple scenes with only a handful of objects. A dense light field representation, on the other hand, does not require a scene model, but dense light fields require more storage & transmission bandwidth. So, related to such light field systems is our next topic.

4.2 Light Field System

A light field represents radiance as a function of position & direction in regions of space free of occluders. The light field describes the amount of light traveling through every point in 3D space in every possible direction. It varies with the wavelength λ, the position x & the unit direction vector ω. In this system, the ultimate goal, which Gavin Miller called the "hyper display", is to capture a time-varying light field passing through a surface & to emit the same light field through another surface with minimum delay. Acquisition of dense, dynamic light fields has only recently become feasible. Some systems use a bundle of optical fibers in front of a high-definition camera to capture multiple views simultaneously. The problem with a single camera is that its limited resolution greatly reduces the number & resolution of the acquired views. A dense array of synchronized cameras gives high-resolution light fields. These cameras are connected to a cluster of PCs. Camera arrays consist of up to 128 cameras & special-purpose hardware to compress & store all the video data in real time. Most light field cameras allow interactive navigation & manipulation of the dynamic scene. Now, let's move on to the architecture of the 3D TV.
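As a rough illustration of what "radiance as a function of position and direction" means in practice, the sketch below uses the common two-plane light-field parameterization with the camera-array dimensions mentioned later in this report; it is an assumption-laden sketch, not the system's actual data structure. The raw size it implies (~64 MB per set of 16 frames) is one reason dense light fields demand so much storage and bandwidth.

```python
import numpy as np

# Two-plane light-field sketch: radiance indexed by camera position (a 1D
# linear camera array here) and pixel direction (u, v). Dimensions follow the
# 16-camera, 1300x1030 setup described later in this report.
num_cams, height, width = 16, 1030, 1300
light_field = np.zeros((num_cams, height, width, 3), dtype=np.uint8)  # RGB radiance samples

def radiance(cam: int, u: int, v: int) -> np.ndarray:
    """RGB radiance sample seen by camera `cam` through pixel (u, v)."""
    return light_field[cam, v, u]

print(light_field.nbytes / 1e6, "MB of raw samples for one time instant")
```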


ARCHITECTURE OF 3D TV

Figure 5 shows the schematic representation of 3D TV system.

The whole system consists mainly of three blocks:
1. Acquisition
2. Transmission
3. Display Unit
The system consists mostly of commodity components that are readily available today. Note that the overall architecture of system accommodates different display types. Let's understand the three blocks one after another.

5.1 Acquisition

The acquisition stage consists of an array of hardware-synchronized cameras. Small clusters of cameras are connected to the producer PCs. The producers capture live, uncompressed video streams & encode them using standard MPEG coding. The compressed video is then broadcast on separate channels over a transmission network, which could be digital cable, satellite TV or the Internet.
As explained above, each camera captures progressive high-definition video in real time. They use 16 Basler A101fc color cameras with 1300×1030, 8-bit-per-pixel CCD sensors. You might be wondering what CCD image sensors & MPEG coding are.

5.1.1 CCD Image Sensors

Charge-coupled devices are electronic devices that are capable of transforming a light pattern (image) into an electric charge pattern (an electronic image). The CCD consists of several individual elements that have the capability of collecting, storing and transporting electrical charge from one element to another. This, together with the photosensitive properties of silicon, is used to design image sensors. Each photosensitive element then represents a picture element (pixel). With semiconductor technologies and design rules, structures are made that form lines or matrices of pixels. One or more output amplifiers at the edge of the chip collect the signals from the CCD. An electronic image can be obtained by, after having exposed the sensor with a light pattern, applying a series of pulses that transfer the charge of one pixel after another to the output amplifier, line after line. The output amplifier converts the charge into a voltage. External electronics will transform this output signal into a form suitable for monitors or frame grabbers. CCDs have extremely low noise figures. Figure 6 shows CCD sensors.


Fig.5.2 CCD Image Sensor

CCD image sensors can be color sensors or monochrome sensors. In a color image sensor, an integral RGB color filter array provides color responsivity and separation. A monochrome image sensor senses only in black and white. An important environmental parameter to consider is the operating temperature.
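The readout sequence described above (row shift into a serial register, then pixel-by-pixel transfer to the output amplifier) can be mimicked with a toy simulation. This is purely illustrative; it ignores noise, charge-transfer inefficiency and timing.

```python
import numpy as np

# Toy CCD readout: the charge image is shifted one row at a time into a
# serial register, and each charge packet is then clocked out through the
# output amplifier, which converts charge to a voltage sample.

def read_out(charge_image: np.ndarray, volts_per_electron: float = 1e-6) -> np.ndarray:
    rows, cols = charge_image.shape
    samples = []
    for r in range(rows):                      # parallel shift: one row into the serial register
        serial_register = charge_image[r].copy()
        for c in range(cols):                  # serial shift: one pixel at a time to the amplifier
            samples.append(serial_register[c] * volts_per_electron)
    return np.array(samples).reshape(rows, cols)

frame = np.random.poisson(500, size=(4, 6))    # tiny mock exposure (electrons per pixel)
print(read_out(frame))
```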

5.1.2 MPEG-2 Encoding

MPEG-2 is an extension of the MPEG-1 international standard for digital compression of audio and video signals. MPEG-2 is directed at broadcast formats at higher data rates; it provides extra algorithmic 'tools' for efficiently coding interlaced video, supports a wide range of bit rates and provides for multichannel surround sound coding. MPEG- 2 aims to be a generic video coding system supporting a diverse range of applications. Different algorithmic 'tools', developed for many applications, have been integrated into the full standard. To implement all the features of the standard in all decoders is unnecessarily complex and a waste of bandwidth, so a small number of subsets of the full standard, known as profiles and levels, have been defined. A profile is a subset of algorithmic tools and a level identifies a set of constraints on parameter values (such as picture size and bit rate). A decoder, which supports a particular profile and level, is only required to support the corresponding subset of the full standard and set of parameter constraints.
Now, the cameras are connected by the IEEE-1394 High Performance Serial Bus to the producer PCs. The maximum transmitted frame rate at full resolution is 12 frames per second. Two cameras are connected to each of the eight producer PCs. All PCs in this prototype have 3 GHz Pentium 4 processors, 2 GB of RAM, & run Windows XP.
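A rough back-of-the-envelope check (assuming raw 8-bit frames and the original ~400 Mb/s IEEE-1394 bus; neither assumption is stated explicitly in the report) shows why two cameras per bus at 12 fps is a comfortable fit:

```python
# Rough consistency check of the camera-to-PC link (arithmetic only).
width, height, bits_per_pixel, fps, cams_per_bus = 1300, 1030, 8, 12, 2

per_camera_mbps = width * height * bits_per_pixel * fps / 1e6   # ~128.5 Mb/s
total_mbps = per_camera_mbps * cams_per_bus                      # ~257 Mb/s
print(f"{per_camera_mbps:.0f} Mb/s per camera, {total_mbps:.0f} Mb/s for two cameras "
      f"-> fits under a ~400 Mb/s FireWire bus, with headroom for protocol overhead")
```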
They chose the Basler cameras primarily because they have an external trigger that allows complete control over the video timing. They have built a PCI card with a custom programmable logic device (CPLD) that generates the synchronization signal for all the cameras. So, what is a PCI card?

5.1.3 PCI Card

The power and speed of computer components have increased at a steady rate since desktop computers were first developed decades ago. Software makers create new applications capable of utilizing the latest advances in processor speed and hard drive capacity, while hardware makers rush to improve components and design new technologies to keep up with the demands of high-end software.
There's one element, however, that often escapes notice - the bus. Essentially, a bus is a channel or path between the components in a computer. Having a high-speed bus is as important as having a good transmission in a car. If you have a 700-horsepower engine combined with a cheap transmission, you can't get all that power to the road. There are many different types of buses. In this article, you will learn about some of those buses. We will concentrate on the bus known as the Peripheral Component Interconnect (PCI). We'll talk about what PCI is, how it operates and how it is used, and we'll look into the future of bus technology.
All 16 cameras are individually connected to the card, which is plugged into one of the producer PCs. Although it is possible to use software synchronization, they consider precise hardware synchronization essential for dynamic scenes. Note that the price of the acquisition cameras can be high, since they will mostly be used in TV studios. They arranged the 16 cameras in a regularly spaced linear array. See figure 8.
The optical axis of each camera is roughly perpendicular to a common camera plane. It is impossible to align multiple cameras precisely, so they use standard calibration procedures to determine the intrinsic & extrinsic camera parameters. In general, the cameras can be arranged arbitrarily, because light field rendering is used in the consumers to synthesize new views. A densely spaced array provides the best light field capture, but high-quality reconstruction filters could be used if the light field is undersampled.
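The report does not say which calibration tool was used. As one example of a "standard calibration procedure", the sketch below estimates a single camera's intrinsics from checkerboard images with OpenCV; the file pattern and board size are hypothetical.

```python
import glob
import cv2
import numpy as np

# One possible intrinsic calibration per camera, using a checkerboard target.
# File names and board dimensions are hypothetical; extrinsics between cameras
# could then be estimated, e.g. with cv2.stereoCalibrate.
board_cols, board_rows = 9, 6
objp = np.zeros((board_rows * board_cols, 3), np.float32)
objp[:, :2] = np.mgrid[0:board_cols, 0:board_rows].T.reshape(-1, 2)

obj_points, img_points, image_size = [], [], None
for path in glob.glob("calib_cam00_*.png"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, (board_cols, board_rows))
    if found:
        obj_points.append(objp)
        img_points.append(corners)
        image_size = gray.shape[::-1]

assert img_points, "no usable checkerboard images found"
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, image_size, None, None)
print("intrinsic matrix:\n", K)
```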

5.2 Transmission

Transmitting 16 uncompressed video streams with 1300×1030 resolution & 24 bits per pixel at 30 frames per second requires 14.4 Gb/sec bandwidth, which is well beyond current broadcast capabilities. For compression & transmission of dynamic multiview video data there are two basic design choices. Either the data from multiple cameras is compressed using spatial or spatio-temporal encoding, or each video stream is compressed individually using temporal encoding. The first option offers higher compression, since there is a lot of coherence between the views. However, it requires that a centralized processor compress multiple video streams. This compression-hub architecture is not scalable, since the addition of more views will eventually overwhelm the internal bandwidth of the encoder. So, they decided to use temporal encoding of individual video streams on distributed processors.
This strategy has other advantages. Existing broadband protocols & compression standards do not need to be changed for immediate real world 3D TV experiments. This system can plug into today's digital TV broadcast infrastructure & co-exist in perfect harmony with 2D TV.
Since they did not have access to digital broadcast equipment, they implemented the modified architecture shown in figure 9.
Eight producer PCs are connected by gigabit Ethernet to eight consumer PCs. Video streams at full camera resolution (1300×1030) are encoded with MPEG-2 & immediately decoded on the producer PCs. This essentially corresponds to a broadband network with infinite bandwidth & almost zero delay. The gigabit Ethernet provides all-to-all connectivity between decoders & consumers, which is important for the distributed rendering & display implementation. So, what is gigabit Ethernet?
5.2.1 Gigabit Ethernet
It is a transmission technology that delivers enhanced network performance. Gigabit Ethernet is a high-speed form of Ethernet (the most widely installed LAN technology) that can provide data transfer rates of about 1 gigabit per second (Gbps).
Gigabit Ethernet provides the capacity for server interconnection, campus backbone architecture and the next generation of super user workstations with a seamless upgrade path from existing Ethernet implementations.

5.3 Decoder & Consumer Processing

The receiver side is responsible for generating the appropriate images to be displayed. The system needs to be able to provide all possible views to the end users at every instant. The decoder receives a compressed video stream, decodes it, and stores the current uncompressed source frame in a buffer as shown in figure 10. Each consumer has a virtual video buffer (VVB) with data from all current source frames (i.e., all acquired views at a particular time instant).

Fig.5.6 Block Diagram of Decoder and Consumer processing
The consumer then generates a complete output image by processing image pixels from multiple frames in the VVB. Due to bandwidth & processing limitations it would be impossible for each consumer to receive the complete set of source frames from all the decoders. This would also limit the scalability of the system.
Here there is a one-to-one mapping between cameras & projectors. But it is not very flexible. For example, the cameras need to be equally spaced, which is hard to achieve in practice. Moreover, this method cannot handle the case when the number of cameras & projectors is not the same.
Another, more flexible approach is to use image-based rendering to synthesize views at the correct virtual camera positions. They use unstructured lumigraph rendering on the consumer side. They choose the plane that is roughly in the center of the depth of field. The virtual viewpoints for the projected images are chosen at even spacing. Now focus on the processing for one particular consumer, i.e., one particular view. For each pixel o(u, v) in the output image, the display controller can determine the view number v & the position (x, y) of each source pixel s(v, x, y) that contributes to it.
To generate output views from incoming video streams, each output pixel is a linear combination of k source pixels:
o(u, v) = Σ_{i=1}^{k} w_i · s(v_i, x_i, y_i)    (1)
The blending weights w can be precomputed by the controller based on the virtual view information. The controller sends the position (x, y) of the k source pixels to each decoder v for pixel selection. The index c of the requesting consumer is sent to the decoder for pixel routing from decoders to the consumer. Optionally, multiple pixels can be buffered in the decoder for pixel-block compression before being sent over the network. The consumer decompresses the pixel blocks & stores each pixel in VVB number v at position (x, y). Each output pixel requires data from k source frames. That means that the maximum bandwidth on the network to the VVB is k times the size of the output image times the number of frames per second (fps). This can be substantially reduced if pixel-block compression is used, at the expense of more processing. So, to provide scalability it is important that this bandwidth is independent of the total number of transmitted views. The processing requirements in the consumer are extremely simple: it needs to compute equation (1) for each output pixel. The weights are precomputed & stored in a lookup table. The memory requirements are k times the size of the output image. Assuming simple pixel-block compression, consumers can easily be implemented in hardware. That means decoders, networks, & consumers could be combined on a single printed circuit board. Let's move on to the different types of display.
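A minimal sketch of the consumer's job as defined by equation (1), assuming the controller has already filled per-pixel lookup tables of view indices, source coordinates and weights (array names are hypothetical):

```python
import numpy as np

# Consumer-side blending per equation (1): each output pixel is a weighted sum
# of k source pixels taken from different views in the virtual video buffer (VVB).

def render_output(vvb, lut_view, lut_x, lut_y, lut_w):
    """
    vvb      : (num_views, src_h, src_w, 3) current source frames
    lut_view : (out_h, out_w, k) view index of each contributing source pixel
    lut_x/y  : (out_h, out_w, k) source coordinates of each contributing pixel
    lut_w    : (out_h, out_w, k) precomputed blending weights (summing to 1 per pixel)
    returns  : (out_h, out_w, 3) blended output image for one projector
    """
    out_h, out_w, k = lut_w.shape
    out = np.zeros((out_h, out_w, 3), dtype=np.float32)
    for i in range(k):  # accumulate the k weighted source pixels per output pixel
        out += lut_w[..., i, None] * vvb[lut_view[..., i], lut_y[..., i], lut_x[..., i]]
    return np.clip(out, 0, 255).astype(np.uint8)
```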


MULTIVIEW AUTO STEREOSCOPIC DISPLAY
6.1 Holographic Displays

It is widely acknowledged that Dennis Gabor invented the hologram in 1948 while he was working on an electron microscope. He coined the word and received a Nobel Prize for inventing holography in 1971. The holographic image is truly three-dimensional: it can be viewed from different angles without glasses. This innovation could be a new revolution, a new era of holographic cinema and of holographic media as a whole.
Holographic techniques were first applied to image display by Leith & Upatnieks in 1962. In holographic reproduction, interference fringes on the holographic surface diffract light from an illumination source to reconstruct the light wave front of the original object. A hologram that displays a continuous analog light field has long been considered the "holy grail" of 3D TV. The most recent device, the Mark-2 Holographic Video Display, uses acousto-optic modulators, beam splitters, moving mirrors & lenses to create interactive holograms. In more recent systems, moving parts have been eliminated by replacing the acousto-optic modulators with LCDs, focused light arrays, optically addressed spatial light modulators and digital micromirror devices. The figure shows a holographic image.
All current holo-video devices use single-color laser light. To reduce the amount of display data they provide only horizontal parallax. The display hardware is very large in relation to the size of the image. So this cannot be done in real time.

6.2 Holographic Movies

We have developed the world's first holographic equipment with the capability of projecting genuine 3-dimensional holographic films as well as holographic slides and real objects, for multiple viewers simultaneously. Our holographic technology was primarily designed for cinema. However, it has many uses in advertising and show business as well.
At the same time we have developed a new 3D digital image processing and projecting technology. It can be used to create modern 3D digital movie theaters and for computer modeling of 3D virtual realities as well. On the same principle we have already tested a 3D color TV system. In all cases the audience can see colorful 3D images without inconvenient accessories.
Developed in the Holographic Laboratories of Professor Victor Komar (NIKFI), these technologies have received worldwide recognition, including an Oscar for Technical Achievement in Hollywood, a Nika Film Award in Moscow, endorsement from MIT's Media Lab and many others.

6.2.1 Volumetric Displays

Volumetric displays use a medium to fill or scan a three-dimensional space & individually address & illuminate small voxels. However, volumetric systems produce transparent images that do not provide a fully convincing three-dimensional experience. Furthermore, they cannot correctly reproduce the light field of a natural scene because of their limited color reproduction & lack of occlusions. The design of large-size volumetric displays also poses some difficult obstacles.

6.2.2 Parallax Displays

Parallax displays emit spatially varying directional light. Much of the early 3D display research focused on improvements to Wheatstone's stereoscope. In 1903, F. Ives used a plate with vertical slits as a barrier over an image with alternating strips of left-eye/right-eye images. The resulting device is called a parallax stereogram. To extend the limited viewing angle & restricted viewing position of the stereogram, Kanolt & H. Ives used narrower slits & a smaller pitch between the alternating image strips. These multiview images are called parallax panoramagrams.
Stereograms & panoramagrams provide only horizontal parallax. Lippmann proposed using an array of spherical lenses instead of slits. This is frequently called a "fly's eye" lens sheet, & the resulting image is called an integral photograph. An integral is a true planar light field with directionally varying radiance per pixel. Integrals sacrifice significant spatial resolution in both dimensions to gain full parallax. Researchers in the 1930s introduced the lenticular sheet, a linear array of narrow cylindrical lenses called lenticules. Lenticular images found widespread use for advertising, CD covers, & postcards. To improve the native resolution of the display, H. Ives invented the multi-projector lenticular display in 1931. He painted the back of a lenticular sheet with diffuse paint & used it as a projection surface for 39 slide projectors. Finally, the high output resolution, the large number of views & the large physical dimensions of our display lead to a very immersive 3D display. Other research in parallax displays includes time-multiplexed & tracking-based systems. In time multiplexing, multiple views are projected at different time instances using a sliding window or LCD shutter. This inherently reduces the frame rate of the display & may lead to noticeable flickering. Head-tracking designs are mostly used to display stereo images, although they could also be used to introduce some vertical parallax in multiview lenticular displays. Today's commercial autostereoscopic displays use variations of parallax barriers or lenticular sheets placed on top of LCD or plasma screens. Parallax barriers generally reduce some of the brightness & sharpness of the image. Here, this projector-based 3D display currently has a native resolution of 12 million pixels.

6.2.3 Multi-Projector Displays

These displays offer very high resolution, flexibility, excellent cost performance, scalability, & large-format images. Graphics rendering for multi-projector systems can be efficiently parallelized on clusters of PCs using, for example, the Chromium API. Projectors also provide the necessary flexibility to adapt to non-planar display geometries. Precise manual alignment of the projector array is tedious & becomes downright impossible for more than a handful of projectors or for non-planar screens. Some systems use cameras in the loop to automatically compute relative projector poses for automatic alignment. Here they use a static camera for automatic image alignment & brightness adjustment of the projectors.


3D DISPLAY

This is a brief explanation that we hope sorts out some of the confusion about the many 3D display options that are available today. We'll tell you how they work, and what the relative tradeoffs of each technique are. Those of you that are just interested in comparing different Liquid Crystal Shutter glasses techniques can skip to the section at the end. Of course, we are always happy to answer your questions personally, and point you to other leading experts in the field.
Figure shows a diagram of the multi-projector 3D displays with lenticular sheets.


Fig.7.1 Projection-type lenticular 3D displays

They use 16 NEC LT-170 projectors with 1024×768 native output resolution. This is less than the resolution of the acquired & transmitted video, which has 1300×1030 pixels. However, HDTV projectors are much more expensive than commodity projectors, and commodity projectors have a compact form factor. Of the eight consumer PCs, one is dedicated as the controller. The consumers are identical to the producers except for a dual-output graphics card that is connected to two projectors. The graphics card is used only as an output device.
For the rear-projection system shown in the figure, two lenticular sheets are mounted back-to-back with optical diffuser material in the center. The front-projection system uses only one lenticular sheet with a retro-reflective front-projection screen material made from flexible fabric mounted on the back. Photographs show the rear and front projection.
The projection-side lenticular sheet of the rear-projection display acts as a light multiplexer, focusing the projected light as thin vertical stripes onto the diffuser. A close-up of the lenticular sheet is shown in figure 6. Considering each lenticule to be an ideal pinhole camera, the stripes capture the view-dependent radiance of a three-dimensional light field. The viewer-side lenticular sheet acts as a light de-multiplexer & projects the view-dependent radiance back to the viewer. The single lenticular sheet of the front-projection screen both multiplexes & demultiplexes the light.
The two key parameters of lenticular sheets are the field of view (FOV) & the number of lenticules per inch (LPI). Here, 72″ × 48″ lenticular sheets with a 30-degree FOV & 15 LPI are used. The optical design of the lenticules is optimized for multiview 3D display. The number of viewing zones of a lenticular display is related to its FOV; for example, a FOV of 30 degrees leads to 180/30 = 6 viewing zones.

7.1 3D TV for 21st Century

Interest in 3D has never been greater. The amount of research and development on 3D photographic, motion picture and television systems is staggering. Over 1000 patent applications have been filed in these areas in the last ten years. There are also hundreds of technical papers and many unpublished projects.
I have worked with numerous systems for 3D video and 3D graphics over the last 20 years and have developed and marketed many products. In order to give some historical perspective, I'll start with an account of my 1985 visit to Exposition '85 in Tsukuba, Japan. I spent a month in Japan visiting 3D researchers and attending the many 3D exhibits at the Tsukuba Science Exposition. The exposition was one of the major film and video events of the century, with a good chunk of its 2 1/2 billion dollar cost devoted to state-of-the-art audiovisual systems in more than 25 pavilions.

There was the world's largest IMAX screen, Cinema-U (a Japanese version of IMAX), OMNIMAX (a dome projection version of IMAX using fisheye lenses) in 3D, numerous 5, 8 and 10 perforation 70mm systems - several with fisheye lens projection onto domes and one in 3D - single, double and triple 8 perforation 35mm systems, live high definition (1125 line) TV viewed on HDTV sets and HDTV video projectors (and played on HDTV video discs and VTRs), and giant outdoor video screens culminating in Sony's 30 meter diagonal Jumbotron (also presented in 3D).

Included in the 3D feast at the exposition were four 3D movie systems, two 3DTV systems (one without glasses), a 3D slide show, a Pulfrich demonstration (synthetic 3D created by a dark filter in front of one eye), about 100 holograms of every type, size and quality (the Russians' were best), and 3D slide sets, lenticular prints and embossed holograms for purchase. Most of the technology, from a robot that read music and played the piano to the world's largest tomato plant, was developed in Japan in the two years before the exposition, but most of the 3D hardware and software was the result of collaboration between California and Japan. It was the chance of a lifetime to compare practically all of the state-of-the-art 2D and 3D motion picture and video systems, tweaked to perfection and running 12 hours a day, seven days a week.

After describing the systems at Tsukuba, I will survey some of the recent work elsewhere in the world and suggest likely developments during the next decade.

CONCLUSION

Most of the key ideas for the 3D TV system presented in this paper have been known for decades, such as lenticular screens, multi-projector 3D displays, and camera arrays for acquisition. This system is the first to provide enough viewpoints, and enough pixels per viewpoint, to produce an immersive and convincing 3D experience. One area of future research is to improve the optical characteristics of the 3D display computationally; this concept is called the computational display. Another area of future research is precise color reproduction of natural scenes on multiview displays.

REFERENCES

1. L. Onural (Bilkent Univ.), T. Sikora (Tech. Univ. of Berlin), J. Ostermann (Univ. of Hannover), A. Smolic (Fraunhofer HHI), M. R. Civanlar (Koc Univ.), and J. Watson (Univ. of Aberdeen), "An Assessment of 3DTV Technologies", NAB 2006, Las Vegas, 26 April 2006.
2. T. Capin, K. Pulli, and T. Akenine-Moller, "The State of the Art in Mobile Graphics Research", IEEE Computer Graphics and Applications, vol. 28, no. 4, pp. 74-84, 2008.
3. K. Muller, P. Merkle, and T. Wiegand, "Compressing 3D Visual Content", IEEE Signal Processing Magazine, vol. 24, no. 6, pp. 58-65, November 2007.
4. T. Okoshi, "Three-dimensional displays", Proceedings of the IEEE, vol. 68, pp. 548-564, 1980.
5. I. Sexton and P. Surman, "Stereoscopic and autostereoscopic display systems", IEEE Signal Processing Magazine, vol. 16, no. 3, pp. 85-99, 1999.
6. C. Fehn, P. Kauff, M. Op de Beeck, F. Ernst, W. IJsselsteijn, M. Pollefeys, L. Van Gool, E. Ofek, and I. Sexton, "An Evolutionary and Optimized Approach on 3D-TV", Proc. of International Broadcast Conference, 2002.
7. C. Fehn, "A 3D-TV Approach Using Depth-Image-Based Rendering (DIBR)", Proc. of VIIP 2003.
8. D. Florencio and C. Zhang, "Multiview Video Compression and Streaming Based on Predicted Viewer Position", Proc. ICASSP 2009.
9. P. Merkle, A. Smolic, K. Muller, and T. Wiegand, "Multi-view Video plus Depth Representation and Coding", Proc. IEEE International Conference on Image Processing (ICIP'07), San Antonio, TX, USA, pp. 201-204, September 2007.
10. A. Nurminen, "Mobile 3D City Maps", IEEE Computer Graphics and Applications, vol. 28, no. 4, pp. 20-31, 2008.
Reply
#5
Please help. Hey, can you give me the full report on 3D television?
Reply
#6
Hi,
This is Dhivya Priya, Doing MBA in Technology Management.
Can you please send across me the full report of the 3Dtv technology.

Thanks in Advance!!!

Regards,
Dhivya

My Mail id: dhivyapriyakm[at]gmail.com
Reply
#7
Hi,
the full report of this topic is posted in the first page of this thread. please download it.
Reply
#8
For more information about 3d tv please follow the link:
http://studentbank.in/report-3d-tv-conte...ull-report
Reply
#9
[attachment=4279]
AIM TO PROJECT 3DTV


The ultimate goal of the viewing experience is to create the illusion of a real environment in its absence. If this goal is fully achieved, there is no way for an observer to distinguish whether what he sees is real or an optical illusion.
Project Aims to Create 3D Television by 2020. Japan plans to make this futuristic television a commercial reality as part of a broad national project.
The targeted "virtual reality" television would allow people to view high definition images in 3D from any angle, in addition to being able to touch and smell the objects being projected upwards from a screen to the floor.

INTRODUCTION

Three-dimensional TV is expected to be the next revolution in the TV history. A 3D TV prototype system with Real-time Acquisition, Transmission, and 3D display of dynamic scenes is implemented.
This is the first real-time end-to-end 3D TV system with enough views and resolution to provide a truly immersive 3D experience.
3D display shows high-resolution stereoscopic color images for multiple viewpoints without special glasses.
Reply
#10

[attachment=6440]


Introduction
A 3D television (3D-TV) is a television set that employs techniques of 3D presentation, such as stereoscopic capture, multi-view capture, or 2D plus depth, and a 3D display—a special viewing device to project a television program into a realistic three-dimensional field.
The technology is around the corner and we must be prepared to embrace it; today a 3D television model is difficult to find in the market, but we can't say we haven't heard of it. 3D is the new concept that multimedia experts are working with nowadays, and the reason is that three-dimensional devices will be the future, and we are getting closer to it day after day. What is a 3D television model? It is a TV capable of reproducing images in three dimensions. What is the best benefit we get? The idea of feeling ourselves inside the movie or video game we are watching or playing.
Gamers and multimedia fans affirm that this new concept will change our way of watching reality, since a simple movie will become part of our real life. We will be able to be the star of a film, or simply turn into our favorite game character. A 3D television model is an LCD or plasma screen that provides a sense of depth without using special 3D glasses. How do we get this effect? The answer is quite complicated, but we will explain the process in understandable language.
Due to small mirrors placed in each screen pixel, we obtain an image for each of our eyes (much as in holography), and in this way our brain uses the difference between both images to achieve the effect we are looking for. Another feature we must remark on in a 3D television model is the stereo sound, which is sharper and clearer, helping the image achieve the necessary impact. 3D television is the new way of experiencing multimedia content; thanks to a 3D television model, we will be able to have our cinema at home and, besides, be part of a horror, action or adventure movie. Although these devices cannot be found in the open market yet, since just a few brands have achieved perfection, multimedia experts believe that by the end of 2010 we will be able to buy some prototypes prepared to entertain our families.
Reply
#11
Please send me the seminar report with PPT on 3D TV.
Thank you.
Reply
#12
Hey remrik, no one will send it to you, but you can download the seminar report with PPT on 3D TV from the attachments at http://studentbank.in/report-3dtv-techno...ull-report
Reply
#13
A lot of information on '3dtv technology' is given above, along with attachments. Please download the report.
Reply
#14


[attachment=8413]

Authors: 1) Gaurish Mhatre 2) Akshada Burambadkar 3) Gaurav Kadam 4) Aniket Avhad
Vivekananda Education Society's Polytechnic



Abstract
Three-dimensional TV is one of the greatest revolutions in the history of television. There is great excitement among people to know how 3D TV works and what it requires. This paper presents the technologies behind 3D and their implementation in 3D TV. Some details and prices of 3D TVs are also given.

1. Content
1.1 Introduction
Many manufacturers are starting to make or at least announce televisions that will bring your viewing experience into the 3rd dimension. Some of you may remember going down to the local 7-11 to pick up those paper glasses with the red and blue cellophane lenses to watch The Creature from the Black Lagoon in 3D on your old CRT set. The idea behind those paper 3D glasses is actually much older (the first 3D film was made in 1922) but the basic concept continues today. 3D TV works because we have 2 eyes and space between them. Our eyes see objects at a slightly different angle and our brain uses this information to reliably calculate distance or depth, especially with objects within 20 feet. 3D media uses this natural depth perception by sending a different image to each eye; our brains do the rest.
For example, if you look at a key on your keyboard with only your left eye open, and then your right eye, you will see pretty much the same image, except that each eye gives you a slightly shifted perspective of the same object. This is referred to as parallax and is crucial in our ability to perceive depth. The human brain is wired such that when it simultaneously receives images from the left and right eye each possessing a slightly shifted perspective, it is able to combine these images such that we are able to perceive the depth or distance of an object.
To take this a step further, try moving a small object close to your face and look at it with only one open eye at a time. You will notice that there is a larger shift in the position of the object for your left and right eye image when it is closer to you. If on the other hand, you place the object a few feet away from you and try the same experiment, you’ll notice that there is a very tiny shift in the position of the object in your left and right eye image. This gives you a hint at how our brain perceives depth from these visual cues.
Now that we understand why we perceive in 3D, it follows that any display technology that is to trick your eyes into believing you are viewing a 3D image will need to provide a slightly different image to each eye via some technological trickery. Read on to learn about the different technologies being used in 3D TVs today and in the near future.
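To put a rough number on the "larger shift for closer objects" observation above, here is a small illustrative calculation using the standard pinhole-stereo relation disparity = baseline × focal length / depth; the eye spacing and focal length values are assumptions for illustration, not figures from this report.

```python
# Why nearer objects produce a larger left/right image shift (parallax).
baseline_cm = 6.5      # typical spacing between the pupils (assumed)
focal_px = 1500        # assumed focal length expressed in pixels

for depth_cm in (30, 100, 600):        # near, arm's length, across the room
    disparity_px = baseline_cm * focal_px / depth_cm
    print(f"depth {depth_cm:4d} cm -> shift of about {disparity_px:5.1f} px")
# ~325 px at 30 cm, ~97 px at 1 m, ~16 px at 6 m: the shift shrinks roughly
# as 1/distance, which is why binocular depth cues fade with range.
```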

1.2 Technologies in 3D

1.2.1 Anaglyphic Glasses
Projected 3D images work on the principle of sending a slightly different image to each of your eyes. With the old red and blue glasses, the two different images were projected in red and blue and the glasses filtered out one or the other to create the 3D effect. The 3D effect is fairly crude and the picture has to be essentially monochrome, since the entire effect is created by filtering colors. A minimal sketch of this color-channel trick follows.
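The following is a minimal sketch (not from this report) of how a red-cyan anaglyph can be composed from a left/right image pair, assuming both views are available as equally sized RGB NumPy arrays:

```python
import numpy as np

def make_anaglyph(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Combine a stereo pair into a red-cyan anaglyph (uint8 RGB arrays)."""
    out = np.empty_like(left)
    out[..., 0] = left[..., 0]    # red channel carries the left view
    out[..., 1] = right[..., 1]   # green channel carries the right view
    out[..., 2] = right[..., 2]   # blue channel carries the right view
    return out
```

Viewed through the glasses, the red lens passes only the left-view channel and the cyan lens only the right-view channels, which is also why color fidelity suffers with this method.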

1.2.2 Polarized Method
Another method of producing a 3D image uses polarized lenses in the 3D glasses. Two differently polarized images are shown, and each lens blocks out one of them, providing a full-color 3D effect that is superior to the red and blue glasses. This is the method used in IMAX and other theaters showing 3D movies. Polarization works by using filters that only let through light waves that are aligned with the filter. Each projected image is polarized to match one of the lenses, and only that lens lets the image through, so a full-color picture can be displayed in 3D. The polarized 3D glasses look like any average pair of sunglasses you'd find on a store rack.
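How well a polarizing lens separates the two images can be described by Malus's law, I = I0·cos²(θ), where θ is the angle between the light's polarization and the lens's axis. The short sketch below illustrates this; it assumes ideal linear polarizers and ignores the circular polarization and coatings used in real cinema systems.

```python
import math

# Fraction of light transmitted through a polarizing lens vs. misalignment
# angle, using Malus's law I = I0 * cos^2(theta). Idealized illustration.
for theta_deg in (0, 10, 45, 90):
    fraction = math.cos(math.radians(theta_deg)) ** 2
    print(f"{theta_deg:3d} deg -> {fraction:.3f} of the light gets through")
# At 90 degrees essentially nothing leaks through, which keeps each eye's
# image separate; small misalignments show up as ghosting between the views.
```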

1.2.3 Active 3D
A third method of creating a 3D image involves shutter glasses. This method, often called "frame sequential display", has the user wearing powered glasses with LCD lenses that open and close like a camera shutter. The two different images are shown in an alternating fashion while the glasses are synced via outboard hardware to open and close each lens separately, providing one image to each eye. This effectively halves the frame rate of the media being shown, so a 60 Hz LCD TV used with shutter glasses appears at 30 Hz per eye and a 120 Hz set at 60 Hz. With fast-moving images this 3D technology can suffer from flickering; the tint of the glasses also effectively lowers the screen brightness by up to 50%.
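A tiny sketch of the frame-rate arithmetic just described (for illustration only):

```python
# Effective per-eye refresh under frame-sequential (shutter-glasses) 3D:
# each eye only ever sees every other displayed frame.
for panel_hz in (60, 120, 240):
    print(f"{panel_hz:3d} Hz panel -> {panel_hz // 2:3d} Hz per eye")
# 60 -> 30, 120 -> 60, 240 -> 120; the lower per-eye rates are what make
# fast motion appear to flicker with this technique.
```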

1.3.1 Dual camera: The picture information is captured using a camera with two lenses; one lens captures the image for the right eye and the other for the left eye. The images are captured alternately, i.e., switched at a rate of 120 Hz, for a smooth picture (to avoid flickering).
1.3.2 3D processor: This combines the images captured for the left and right eyes.
1.3.3 Encoder: Modulates the signals to be transmitted.
1.3.4 Broadcast: The signals are broadcast using antennas.
1.3.5 Reception: Antennas receive these signals, which are then processed in the TV for viewing. (A minimal sketch of this chain follows the list.)
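Here is that chain expressed as a toy Python sketch; the function names and frame labels are purely illustrative stand-ins (real systems carry compressed video, not strings):

```python
def dual_camera(n_pairs):
    # 1.3.1: alternate left/right captures (120 Hz total -> 60 pairs per second)
    for i in range(n_pairs):
        yield ("L", i)
        yield ("R", i)

def encode(frames):
    # 1.3.2 / 1.3.3: combine and encode the stream before broadcast
    for eye, idx in frames:
        yield f"pkt-{eye}{idx}"

def display(packets):
    # 1.3.4 / 1.3.5: receive, decode and route each frame to the matching eye
    for pkt in packets:
        eye = "left" if "-L" in pkt else "right"
        print(f"{pkt} -> {eye} eye")

display(encode(dual_camera(3)))   # prints six frames, alternating L/R
```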

1.4 More about 3D TV
Everything you watch on your regular TV you can also watch on a 3D TV. This might come as a pleasant surprise if you thought you could only watch the limited amount of 3D material that's been released so far. When you watch regular 2D video on a 3D TV, you won't have to wear 3D glasses and you won't see 3D effects. But you will see a superb picture because 3D screens are more technologically advanced, and produce the best-looking 2D pictures currently available.
More and more 3D Blu-ray movies will come out this year. We're also expecting to see 3D channels from networks like ESPN and Discovery. Ask your cable or satellite provider for details.
Also, most 3D TVs can upconvert 2D video to 3D. Of course, this won't look as good as true 3D — that is, movies and TV shows originally shot in 3D — but it will let you enjoy your TV's 3D capabilities more often.
All 3D TVs will display current 2D content with no problem and no glasses required, and we don't expect their picture quality in 2D to be any worse than on an equivalent 2D HDTV. The Blu-ray 3D specification calls for all such discs to also include a 2D version of the movie, allowing current 2D players to play them with no problem.

1.5 "3D-ready" and a "3D-capable" TV
A 3D-ready TV includes the necessary infrared (IR) emitter that sends control signals to compatible 3D glasses. The emitter is actually built into the TV bezel, so you can't see it. A 3D-capable TV doesn't have the emitter built-in — if you want to watch 3D video on these TVs, you'll need to buy an emitter box separately.
1.6 Need of Glasses
It is all because of how 3D TV works. A 3D TV alternates between "left eye" and "right eye" versions of an image very, very quickly. The glasses receive a signal from your 3D TV, ensuring that the correct eye sees the correct image at all times. If you don't have the special "active shutter" 3D glasses, the image will look blurry.

1.7 Stereo Blindness
Between 5 percent and 10 percent of Americans suffer from stereo blindness, according to the College of Optometrists in Vision Development. They often have good depth perception--which relies on more than just stereopsis--but cannot perceive the depth dimension of 3D video presentations. Some stereo-blind viewers can watch 3D material with no problem as long as they wear glasses; it simply appears as 2D to them. Others may experience headaches, eye fatigue or other problems.

1.8 Power Consumption of 3D TV
• Panasonic's TC-P50VT25 50-inch plasma used 160.91 watts in 2D mode and 260.53 watts in 3D mode, an increase of 38.24%.
• Panasonic's TC-P65VT25 65-inch plasma used 176.84 watts in 2D mode and 354.71 watts in 3D mode, an increase of 50.15%.
• Samsung's UN55C8000 55-inch LED used 118.73 watts in 2D mode and 152.89 watts in 3D mode, an increase of 22.34%.
• The exception was Sony's XBR-HX909 LED TV, which used less power than any of the other sets in either mode, clocking in at 106.66 watts in 2D mode and 104.65 watts in 3D mode, a decrease of 1.92%. (The arithmetic behind these percentages is sketched below.)
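The quoted percentages match the change expressed as a fraction of the 3D-mode figure, i.e. (P3D - P2D)/P3D; the sketch below reproduces them and also shows the more common convention of expressing the change relative to the 2D figure, for comparison:

```python
# Reproducing the percentage figures quoted above.
sets = [
    ("Panasonic TC-P50VT25", 160.91, 260.53),
    ("Panasonic TC-P65VT25", 176.84, 354.71),
    ("Samsung UN55C8000",    118.73, 152.89),
    ("Sony XBR-HX909",       106.66, 104.65),
]
for name, p2d, p3d in sets:
    vs_3d = (p3d - p2d) / p3d * 100   # convention used in the list above
    vs_2d = (p3d - p2d) / p2d * 100   # change relative to the 2D draw
    print(f"{name}: {vs_3d:+6.2f}% of the 3D draw, {vs_2d:+6.2f}% vs 2D")
```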

1.9 Future 3D TV Technology
Autostereoscopy is any method of displaying stereoscopic images without the use of special headgear or glasses on the part of the viewer. It includes two broad classes of displays: those that use head-tracking to ensure that each of the viewer's two eyes sees a different image on the screen, and those that display multiple views so that the display does not need to know where the viewers' eyes are.

Glasses are widely considered to be the weak link in the 3D chain. Consumers would much rather be able to view 3D without having to don any accessories. This is especially true of people who already wear corrective lenses and would have to wear glasses over their glasses to view 3D TV.
Autostereoscopy provides a 3D effect without the use of glasses, but it is currently only used on small monitors due to the strictly defined viewing-distance limitations involved. If the viewer moves out of the sweet spot, the 3D effect will fall apart or even invert. These screens, and the media that utilizes them, are still very rare. Larger versions of this technology are probably on the way, but it suffers from two issues: price, and the very limited viewing angle and distance.

2. Conclusion:
Thus, we have seen the various technologies used for 3D imaging and viewing.
We also explained the technology of 3D TV and discussed its various aspects and features.
The cost and power-consumption factors will help you decide which 3D TV to go for if you are buying a new one.




Reply
#15
[attachment=8802]
3D TELEVISION
Abstract---Three-dimensional TV is expected to be the next revolution in the TV history. They implemented a 3D TV prototype system with real-time acquisition transmission, & 3D display of dynamic scenes. They developed a distributed scalable architecture to manage the high computation & bandwidth demands. 3D display shows high-resolution stereoscopic color images for multiple viewpoints without special glasses. This is first real time end-to-end 3D TV system with enough views & resolution to provide a truly immersive 3D experience.Japan plans to make this futuristic television a commercial reality by 2020as part of abroad national project that will bring together researchers from the government, technology companies and academia. The targeted "virtual reality" television would allow people to view high definitionimages in 3D from any angle, in addition to being able to touch and smell the objects being projected upwards from a screen to the floor.
I.INTRODUCTION
Three-dimensional TV is expected to be the next revolution in the TV history. They implemented a 3D TV prototype system with real-time acquisition transmission, & 3D display of dynamic scenes. They developed a distributed scalable architecture to manage the high computation & bandwidth demands. 3D display shows high-resolution stereoscopic color images for multiple viewpoints without special glasses. This is first real time end-to-end 3D TV system with enough views & resolution to provide a truly immersive 3D experience.
Why 3D TV
The evolution of visual media such as cinema and television is one of the major hallmarks of our modern civilization. In many ways, these visual media now define our modern life style. Many of us are curious: what is our life style going to be in a few years? What kind of films and television are we going to see? Although cinema and television both evolved over decades, there were stages, which, in fact, were once seen as revolutions:
1) at first, films were silent, then sound was added;
2) cinema and television were initially black-and-white, then color was introduced;
3) computer imaging and digital special effects have been the latest major novelty.
II. BASICS OF 3D TV
Humans gain three-dimensional information from a variety of cues. Two of the most important ones are binocular parallax & motion parallax.
A. Binocular Parallax
It means that, for any point you fixate on, the images on the two eyes are slightly different, yet the two different images still allow us to perceive a stable visual world. Binocular parallax refers to the ability of the eyes to see a solid object and a continuous surface behind that object even though each eye sees a different view.
B. Motion Parallax
It refers to the change in the retinal image caused by the relative movement of objects as the observer moves to the side (or his head moves sideways). Motion parallax varies depending on the distance of the observer from the objects. The observer's movement also causes occlusion (the covering of one object by another), and as the movement changes, so too does the occlusion. This can give a powerful cue to the distance of objects from the observer.
C. Depth perception
It is the visual ability to perceive the world in three dimensions. It is a trait common to many higher animals. Depth perception allows the beholder to accurately gauge the distance to an object. The small distance between our eyes gives us stereoscopic depth perception [7]. The brain combines the two slightly different images into one 3D image. It works most effectively for distances up to 18 feet. For objects at a greater distance, our brain uses relative size and motion. As shown in the figure, each eye captures its own view and the two separate images are sent to the brain for processing. When the two images arrive simultaneously at the back of the brain, they are united into one picture. The mind combines the two images by matching up the similarities and adding in the small differences. The small differences between the two images add up to a big difference in the final picture! The combined image is more than the sum of its parts.
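To see why stereoscopic depth perception fades beyond roughly 18 feet, the sketch below computes the vergence angle subtended by the eyes at different distances; the 6.5 cm eye separation is a typical value assumed for illustration, and the exact perceptual threshold is not taken from this report.

```python
import math

# Vergence angle subtended by the two eyes for an object at a given distance.
eye_sep_m = 0.065                      # assumed interpupillary distance
for dist_m in (0.5, 2.0, 5.5, 20.0):   # 5.5 m is roughly the 18 ft in the text
    angle_deg = math.degrees(2 * math.atan(eye_sep_m / (2 * dist_m)))
    print(f"{dist_m:5.1f} m -> vergence angle {angle_deg:6.3f} deg")
# The angle falls roughly as 1/distance, so beyond a few metres the difference
# between the two eyes' views becomes too small to be a strong depth cue.
```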
D. Stereographic Images
It means two pictures taken with a spatial or time separation that are then arranged to be viewed simultaneously [5]. When so viewed, they provide the sense of a three-dimensional scene, using the innate capability of the human visual system to detect three dimensions. As you can see, a stereoscopic image is composed of a right perspective frame and a left perspective frame - one for each eye. When your right eye views the right frame and the left frame is viewed by your left eye, your brain will perceive a true 3D view.
E. Stereoscope
It is an optical device for creating stereoscopic (or three-dimensional) effects from flat (two-dimensional) images; D. Brewster first constructed the stereoscope in 1844. It is provided with lenses, under which two equal images are placed, so that one is viewed with the right eye and the other with the left [5]. Observed at the same time, the two images merge into a single virtual image which, as a consequence of our binocular vision, appears to be three-dimensional.
F. Holographic Images
A holographic image is a luminous, 3D, transparent, colored, nonmaterial image appearing out of a 2D medium called a hologram. A holographic image cannot be viewed without the proper lighting.
The whole system consists mainly of three blocks:
1. Acquisition
2. Transmission
3. Display Unit
A. Acquisition
The acquisition stage consists of an array of hardware-synchronized cameras. Small clusters of cameras are connected to the producer PCs. The producers capture live, uncompressed video streams & encode them using standard MPEG coding. The compressed video is then broadcast on separate channels over a transmission network, which could be digital cable, satellite TV or the Internet.
They use 16 Basler A101fc color cameras with 1300×1030, 8-bit-per-pixel CCD sensors. (A rough estimate of the raw data rate this produces is sketched after the sensor description below.)
1) CCD Image Sensors: Charge-coupled devices are electronic devices capable of transforming a light pattern (an image) into an electric charge pattern (an electronic image).
Figure 6 shows CCD sensors.
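As a rough illustration of why compression is needed before broadcast, the sketch below estimates the raw data rate of the camera array described above; the 30 fps frame rate is an assumption for illustration, since it is not stated in this section.

```python
# Raw data-rate estimate for the acquisition array: 16 cameras,
# 1300 x 1030 sensors, 8 bits per pixel. Frame rate of 30 fps is assumed.
cameras, width, height = 16, 1300, 1030
bits_per_pixel, fps = 8, 30

raw_bits_per_s = cameras * width * height * bits_per_pixel * fps
print(f"about {raw_bits_per_s / 1e9:.1f} Gbit/s of raw video")   # ~5.1 Gbit/s
# Far more than any broadcast channel can carry, which is why each producer
# PC compresses its streams (standard MPEG coding) before transmission.
```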
Reply
#17
Submitted By :-
SACHIN MEHRA
ROHIT SHARMA

[attachment=9529]
Television evolved
LCD, plasma (155 cm), HDTV
1922 release – "The Power of Love"
1950-dozens of b movies,
Producers wanted people to move back to theaters.
Installation of vibrating plates in theater seats to simulate electric shocks.
Goofy glasses are pretty tame.
Not a great impact.
Television episodes and specials on 3-D
Popular exhibits at the 2009 Consumer Electronics Show
An object in the real world looks 3-D, but why does the same object look flat on TV?
It all has to do with the way we focus on objects.
Our eyes absorb light reflected off of the items.
Our brains interpret the light and create a picture in our minds.
Two cases:
When the object is far away, the light traveling to one eye is parallel with the light traveling to the other eye.
As the object gets closer, the lines are no longer parallel -- they converge and our eyes shift to compensate.
Focusing on an object
The brain takes into account the effort required to adjust your eyes to focus on it, as well as how much your eyes had to converge. Together, this information allows you to estimate how far away the object is.
The secret to 3-D television and movies
By showing each eye the same image in two different locations, you can trick your brain into thinking the flat image you're viewing has depth.
This means different convergence & focal points: while your eyes may converge upon two images that seem to be one object right in front of you, they're actually focusing on a screen that's further away.
The 4 most common 3-D TV technologies
How it shows
How does a 3-D TV work?

One has to produce two separate, moving images and send one of them to the viewer's left eye and the other to the right. To give the proper illusion of 3D, the left eye's image mustn't be seen by the right eye, while the right eye's image mustn't be seen by the left.
Anaglyph glasses
Polarizing lenses
Active and passive shutters

Reply
#18
I want to see more about the 3D television system.
Thanks!
Reply
#19

[attachment=14545]
AIMS TO PROJECT 3D TV
Project Aims to Create 3D Television by 2020
Tokyo - Imagine watching a football match on a TV that not only shows the players in three dimensions but also lets you experience the smells of the stadium and maybe even pat a goal scorer on the back.

Japan plans to make this futuristic television a commercial reality by 2020 as part of a broad national project that will bring together researchers from the government, technology companies and academia.

The targeted "virtual reality" television would allow people to view high-definition images in 3D from any angle, in addition to being able to touch and smell the objects being projected upwards from a screen to the floor.

"Can you imagine hovering over your TV to watch Japan versus Brazil in the finals of the World Cup as if you are really there?" asked Yoshiaki Takeuchi, development at Japan's Ministry of Internal Affairs and Communications.

While companies, universities and research institutes around the world have made some progress on reproducing 3D images suitable for TV, developing the technologies to create the sensations of touch and smell could prove the most challenging, Takeuchi said in an interview with Reuters.

Researchers are looking into ultrasound, electric stimulation and wind pressure as potential technologies for touch.

Such a TV would have a wide range of potential uses. It could be used in home-shopping programs, allowing viewers to "feel" a handbag before placing their order, or in the medical industry, enabling doctors to view or even perform simulated surgery on 3D images of someone's heart.

The future TV is part of a larger national project under which Japan aims to promote "universal communication," a concept whereby information is shared smoothly and intelligently regardless of location or language.

Takeuchi said an open forum covering a broad range of technologies related to universal communication, such as language translation and advanced Web search techniques, could be established by the end of this year.

Researchers from several top firms, including Matsushita Electric Industrial Co. Ltd. and Sony Corp., are members of a panel that issued a report on the project last month.

The ministry plans to request a budget of more than 1 billion yen to help fund the project in the next fiscal year starting in April 2006.
Reply
