AUGMENTED REALITY

1. ABSTRACT
This paper surveys the field of Augmented Reality, in which 3-D virtual objects are integrated into a 3-D real environment in real time. It describes the medical, manufacturing, visualization, path planning, entertainment and military applications that have been explored. This paper describes the characteristics of Augmented Reality systems, including a detailed discussion of the tradeoffs between optical and video blending approaches. Registration and sensing errors are two of the biggest problems in building effective Augmented Reality systems, so this paper summarizes current efforts to overcome these problems. Future directions and areas requiring further research are discussed. This survey provides a starting point for anyone interested in researching or using Augmented Reality.
2. INTRODUCTION
Augmented reality (AR) refers to computer displays that add virtual information to a user's sensory perceptions. Most AR research focuses on see-through devices, usually worn on the head, that overlay graphics and text on the user's view of his or her surroundings. In general, AR superimposes graphics over a real-world environment in real time.
Getting the right information at the right time and the right place is key in all these applications. Personal digital assistants such as the Palm and the Pocket PC can provide timely information using wireless networking and Global Positioning System (GPS) receivers that constantly track the handheld devices. But what makes augmented reality different is how the information is presented: not on a separate display but integrated with the user's perceptions. This kind of interface minimizes the extra mental effort that a user has to expend when switching his or her attention back and forth between real-world tasks and a computer screen. In augmented reality, the user's view of the world and the computer interface literally become one.
[Figure: Milgram's Reality-Virtuality Continuum, the Mixed Reality spectrum spanning Real Environment, Augmented Reality, Augmented Virtuality, and Virtual Environment]
Between the extremes of real life and Virtual Reality lies the spectrum of Mixed Reality, in which views of the real world are combined in some proportion with views of a virtual environment. Combining direct view, stereoscopic video, and stereoscopic graphics, Augmented Reality describes that class of displays that consists primarily of a real environment, with graphic enhancements or augmentations.
In Augmented Virtuality, real objects are added to a virtual environment. In Augmented Reality, virtual objects are added to the real world.
An AR system supplements the real world with virtual (computer generated) objects that appear to co-exist in the same space as the real world. Virtual Reality, in contrast, is an entirely synthetic environment that replaces the user's real surroundings.
2.1 Comparison between AR and virtual environments
The overall requirements of AR can be summarized by comparing them against the requirements for Virtual Environments, for the three basic subsystems that they require.
1) Scene generator: Rendering is not currently one of the major problems in AR. VE systems have much higher requirements for realistic images because they completely replace the real world with the virtual environment. In AR, the virtual images only supplement the real world. Therefore, fewer virtual objects need to be drawn, and they do not necessarily have to be realistically rendered in order to serve the purposes of the application.
2) Display device: The display devices used in AR may have less stringent requirements than VE systems demand, again because AR does not replace the real world. For example, monochrome displays may be adequate for some AR applications, while virtually all VE systems today use full color. Optical see-through HMDs with a small field-of-view may be satisfactory because the user can still see the real world with his peripheral vision; the see-through HMD does not shut off the user's normal field-of-view. Furthermore, the resolution of the monitor in an optical see-through HMD might be lower than what a user would tolerate in a VE application, since the optical see-through HMD does not reduce the resolution of the real environment.
3) Tracking and sensing: While in the previous two cases AR had lower requirements than VE, that is not the case for tracking and sensing. In this area, the requirements for AR are much stricter than those for VE systems. A major reason for this is the registration problem.
3. DEVELOPMENTS
• Although augmented reality may seem like the stuff of science fiction, researchers have been building prototype systems for more than three decades. The first was developed in the 1960s by computer graphics pioneer Ivan Sutherland and his students at Harvard University and the University of Utah.
• In the 1970s and 1980s a small number of researchers studied augmented reality at institutions such as the U.S. Air Force's Armstrong Laboratory, the NASA Ames Research Center and the University of North Carolina at Chapel Hill.
• It wasn't until the early 1990s that the term "augmented reality" was coined by scientists at Boeing who were developing an experimental AR system to help workers assemble wiring harnesses.
• In 1996 developers at Columbia University developed 'The Touring Machine'.
• In 2001 MIT came up with a very compact AR system known as 'MIThril'.
• Presently research is being done in developing BARS (Battlefield Augmented Reality Systems) by engineers at the Naval Research Laboratory, Washington D.C.
4. WORKING
AR systems track the position and orientation of the user's head so that the overlaid material can be aligned with the user's view of the world. Through this process, known as registration, graphics software can place a three-dimensional image of a teacup, for example, on top of a real saucer and keep the virtual cup fixed in that position as the user moves about the room. AR systems employ some of the same hardware technologies used in virtual-reality research, but there's a crucial difference: whereas virtual reality brashly aims to replace the real world, augmented reality respectfully supplements it.
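The registration step described above can be sketched in a few lines of Python. This is only an illustrative model, not any real AR system's code: it assumes a yaw-only head rotation and a simple pinhole camera, and all names and parameters (such as the focal length in pixels) are hypothetical.

```python
import math

def project_point(p_world, head_pos, head_yaw, focal_px=800.0):
    """Project a world-fixed 3-D point into screen pixels for a tracked head.

    Registration means that as the head moves, the same world point must
    land on whatever pixel lines up with the real object behind it.
    """
    # Transform the world point into head/camera coordinates.
    dx = p_world[0] - head_pos[0]
    dy = p_world[1] - head_pos[1]
    dz = p_world[2] - head_pos[2]
    c, s = math.cos(-head_yaw), math.sin(-head_yaw)
    cam_x = c * dx + s * dz       # rotate about the vertical (y) axis
    cam_z = -s * dx + c * dz
    cam_y = dy
    # Pinhole projection onto the image plane (z is the viewing direction).
    u = focal_px * cam_x / cam_z
    v = focal_px * cam_y / cam_z
    return u, v

# A virtual teacup fixed at (0, 0, 2) metres in the room:
cup = (0.0, 0.0, 2.0)
u0, v0 = project_point(cup, head_pos=(0.0, 0.0, 0.0), head_yaw=0.0)
# After the user steps half a metre sideways, the overlay must shift
# on screen so the cup stays on the real saucer:
u1, v1 = project_point(cup, head_pos=(0.5, 0.0, 0.0), head_yaw=0.0)
```

The point of the sketch is the dependency: the screen position of the overlay is recomputed from the tracked head pose every frame, which is why tracking accuracy and latency dominate AR system design.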
Augmented reality is still in an early stage of research and development at various universities and high-tech companies. Eventually, possibly by the end of this decade, we will see the first mass-marketed augmented-reality system, which one researcher calls "the Walkman of the 21st century." What augmented reality attempts to do is not only superimpose graphics over a real environment in real-time, but also change those graphics to accommodate a user's head and eye movements, so that the graphics always fit the perspective. Here are the three components needed to make an augmented-reality system work:
Head Mounted Display
Tracking System
Mobile Computing System
4.1 Head Mounted Displays
Just as monitors allow us to see text and graphics generated by computers, head-mounted displays (HMDs) will enable us to view graphics and text created by augmented-reality systems. There are two basic types of HMDs:
• optical see-through
• video see-through
4.1.1 Optical see-through displays:
A simple approach to optical see-through display employs a mirror beam splitter, a half-silvered mirror that both reflects and transmits light. If properly oriented in front of the user's eye, the beam splitter can reflect the image of a computer display into the user's line of sight yet still allow light from the surrounding world to pass through. Such beam splitters, which are called combiners, have long been used in "head-up" displays for fighter-jet pilots (and, more recently, for drivers of luxury cars). Lenses can be placed between the beam splitter and the computer display to focus the image so that it appears at a comfortable viewing distance. If a display and optics are provided for each eye, the view can be in stereo. Sony makes a see-through display that some researchers use, called the Glasstron.
[Figure: Optical see-through display, showing the liquid crystal display, compensating prism, tracking targets, and the wearable computer]
4.1.2 Video see-through displays:
In contrast, a video see-through display uses video mixing technology, originally developed for television special effects, to combine the image from a head-worn camera with synthesized graphics. The merged image is typically presented on an opaque head-worn display. With careful design, the camera can be positioned so that its optical path is close to that of the user's eye; the video image thus approximates what the user would normally see. As with optical see-through displays, a separate system can be provided for each eye to support stereo vision.
Video composition can be done in more than one way. A simple way is to use chroma-keying, a technique used in many video special effects. The background of the computer graphic images is set to a specific color, say green, which none of the virtual objects use. Then the combining step replaces all green areas with the corresponding parts from the video of the real world. This has the effect of superimposing the virtual objects over the real world. A more sophisticated composition would use depth information. If the system had depth information at each pixel for the real world images, it could combine the real and virtual images by a pixel-by-pixel depth comparison, which would allow real objects to occlude virtual objects and vice versa.
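Both composition strategies are simple to state per pixel. The following sketch is purely illustrative (frames are modeled as flat lists of RGB tuples, and the pure-green key color is an assumption), but it shows the logic of chroma-keying and of depth-based compositing.

```python
def chroma_key(virtual, real, key=(0, 255, 0)):
    """Per-pixel composite: wherever the rendered frame shows the key
    color (pure green here), pass the real-world video through;
    everywhere else keep the virtual object's pixel."""
    return [r if v == key else v for v, r in zip(virtual, real)]

def depth_composite(virtual, vdepth, real, rdepth):
    """Depth-based alternative: at each pixel keep whichever image is
    nearer, so real objects can correctly occlude virtual ones."""
    return [v if dv < dr else r
            for v, dv, r, dr in zip(virtual, vdepth, real, rdepth)]

# One scan line of 4 pixels: a red virtual object covers pixels 1-2,
# and the key color fills the rest of the rendered frame.
virt = [(0, 255, 0), (200, 10, 10), (200, 10, 10), (0, 255, 0)]
live = [(90, 90, 90)] * 4
composited = chroma_key(virt, live)

# Depth compositing: the virtual pixel wins only where it is nearer.
merged = depth_composite([(200, 10, 10)] * 2, [1.0, 3.0],
                         [(90, 90, 90)] * 2, [2.0, 2.0])
```

Chroma-keying always paints the virtual object on top; only the depth-based version can let a real object pass in front of a virtual one, which is why depth information is the more powerful (and more demanding) approach.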
A different approach is the virtual retinal display, which forms images directly on the retina. These displays, which Microvision is developing commercially, literally draw on the retina with low-power lasers whose modulated beams are scanned by microelectromechanical mirror assemblies that sweep the beam horizontally and vertically. Potential advantages include high brightness and contrast, low power consumption, and large depth of field.
Each of the approaches to see-through display design has its pluses and minuses. Optical see-through systems allow the user to see the real world with full resolution and field of view. But the overlaid graphics in current optical see-through systems are not opaque and therefore cannot completely obscure the physical objects behind them. As a result, the superimposed text may be hard to read against some backgrounds, and the three-dimensional graphics may not produce a convincing illusion. Furthermore, although a user focuses physical objects depending on their distance, virtual objects are all focused in the plane of the display. This means that a virtual object that is intended to be at the same position as a physical object may have a geometrically correct projection, yet the user may not be able to view both objects in focus at the same time.
In video see-through systems, virtual objects can fully obscure physical ones and can be combined with them using a rich variety of graphical effects. There is also no discrepancy between how the eye focuses virtual and physical objects, because both are viewed on the same plane. The limitations of current video technology, however, mean that the quality of the visual experience of the real world is significantly decreased, essentially to the level of the synthesized graphics, with everything focusing at the same apparent distance. At present, a video camera and display are no match for the human eye.
[Figure: Video see-through display, showing the computer (worn on hip), liquid crystal display, display and tracking sensors, and tracking targets]
4.2 An optical approach has the following advantages over a video approach:
1) Simplicity: Optical blending is simpler and cheaper than video blending. Optical approaches have only one "stream" of video to worry about: the graphic images. The real world is seen directly through the combiners, and that time delay is generally a few nanoseconds. Video blending, on the other hand, must deal with separate video streams for the real and virtual images. The two streams of real and virtual images must be properly synchronized or temporal distortion results. Also, optical see-through HMDs with narrow field-of-view combiners offer views of the real world that have little distortion. Video cameras almost always have some amount of distortion that must be compensated for, along with any distortion from the optics in front of the display devices. Since video requires cameras and combiners that optical approaches do not need, video will probably be more expensive and complicated to build than optical-based systems.
2) Resolution: Video blending limits the resolution of what the user sees, both real and virtual, to the resolution of the display devices. With current displays, this resolution is far less than the resolving power of the fovea. Optical see-through also shows the graphic images at the resolution of the display device, but the user's view of the real world is not degraded. Thus, video reduces the resolution of the real world, while optical see-through does not.
3) Safety: Video see-through HMDs are essentially modified closed-view HMDs. If the power is cut off, the user is effectively blind. This is a safety concern in some applications. In contrast, when power is removed from an optical see-through HMD, the user still has a direct view of the real world. The HMD then becomes a pair of heavy sunglasses, but the user can still see.
4) No eye offset: With video see-through, the user's view of the real world is provided by the video cameras. In essence, this puts his "eyes" where the video cameras are. In most configurations, the cameras are not located exactly where the user's eyes are, creating an offset between the cameras and the real eyes. The distance separating the cameras may also not be exactly the same as the user's interpupillary distance (IPD). This difference between camera locations and eye locations introduces displacements from what the user sees compared to what he expects to see. For example, if the cameras are above the user's eyes, he will see the world from a vantage point slightly taller than he is used to.
4.3 Video blending offers the following advantages over optical blending:
1) Flexibility in composition strategies: A basic problem with optical see-through is that the virtual objects do not completely obscure the real world objects, because the optical combiners allow light from both virtual and real sources. Building an optical see-through HMD that can selectively shut out the light from the real world is difficult. Any filter that would selectively block out light must be placed in the optical path at a point where the image is in focus, which obviously cannot be the user's eye. Therefore, the optical system must have two places where the image is in focus: at the user's eye and at the point of the hypothetical filter. This makes the optical design much more difficult and complex. No existing optical see-through HMD blocks incoming light in this fashion. Thus, the virtual objects appear ghost-like and semi-transparent. This damages the illusion of reality because occlusion is one of the strongest depth cues. In contrast, video see-through is far more flexible about how it merges the real and virtual images. Since both the real and virtual are available in digital form, video see-through compositors can, on a pixel-by-pixel basis, take the real, or the virtual, or some blend between the two to simulate transparency. Because of this flexibility, video see-through may ultimately produce more compelling environments than optical see-through approaches.
2) Wide field-of-view: Distortions in optical systems are a function of the radial distance away from the optical axis. The further one looks away from the center of the view, the larger the distortions get. A digitized image taken through a distorted optical system can be undistorted by applying image processing techniques to unwarp the image, provided that the optical distortion is well characterized. This requires significant amounts of computation, but this constraint will be less important in the future as computers become faster. It is harder to build wide field-of-view displays with optical see-through techniques. Any distortions of the user's view of the real world must be corrected optically, rather than digitally, because the system has no digitized image of the real world to manipulate. Complex optics are expensive and add weight to the HMD. Wide field-of-view systems are an exception to the general trend of optical approaches being simpler and cheaper than video approaches.
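The digital unwarping mentioned above can be illustrated with a one-term radial distortion model. Real camera calibrations use several distortion coefficients, so this is only a sketch under simplified assumptions; the function names and the coefficient value are hypothetical.

```python
def distort(x, y, k1=0.1):
    """Simple radial (barrel-style) model: a point at normalized image
    coordinates (x, y) is displaced outward in proportion to k1 * r^2."""
    r2 = x * x + y * y
    f = 1.0 + k1 * r2
    return x * f, y * f

def undistort(xd, yd, k1=0.1, iters=5):
    """Invert the model numerically: repeatedly solve
    x = xd / (1 + k1 * r^2) using the current estimate of r^2.
    This is the kind of per-pixel computation video see-through
    systems can apply once the distortion is characterized."""
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        x = xd / (1.0 + k1 * r2)
        y = yd / (1.0 + k1 * r2)
    return x, y

# A point imaged through the distorted optics...
xd, yd = distort(0.5, 0.25)
# ...is recovered by the digital unwarping step.
x, y = undistort(xd, yd)
```

The optical see-through case has no digitized real-world image to feed through `undistort`, which is exactly why its distortions must be corrected with (heavy, expensive) optics instead.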
3) Real and virtual view delays can be matched: Video offers an approach for reducing or avoiding problems caused by temporal mismatches between the real and virtual images. Optical see-through HMDs offer an almost instantaneous view of the real world but a delayed view of the virtual. This temporal mismatch can cause problems. With video approaches, it is possible to delay the video of the real world to match the delay from the virtual image stream.
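At frame granularity, delaying the real-world stream to match the renderer amounts to buffering camera frames. A minimal sketch, assuming a fixed renderer latency measured in whole frames (the function name and numbers are illustrative):

```python
from collections import deque

def match_delays(camera_frames, render_latency_frames):
    """Hold each camera frame in a FIFO buffer until the graphics
    rendered for that same moment are ready, so the real and virtual
    streams leave the compositor with equal delay."""
    buffer = deque()
    out = []
    for frame in camera_frames:
        buffer.append(frame)
        if len(buffer) > render_latency_frames:
            out.append(buffer.popleft())
    return out

# If the renderer lags 2 frames behind, the camera frame emitted at
# time t is the one captured 2 frames earlier, matching the graphics.
delayed = match_delays([0, 1, 2, 3, 4], render_latency_frames=2)
```

The cost of this trick is that the user's entire view, real and virtual, now lags behind reality, which optical see-through avoids for the real world at the price of the temporal mismatch described above.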
4) Additional registration strategies: In optical see-through, the only information the system has about the user's head location comes from the head tracker. Video blending provides another source of information: the digitized image of the real scene. This digitized image means that video approaches can employ additional registration strategies unavailable to optical approaches.
5) Easier to match the brightness of real and virtual objects: Since the real world is digitized in a video approach, the brightness and contrast of the real and virtual imagery can be adjusted together in the composited result.

Both optical and video technologies have their roles, and the choice of technology depends on the application requirements. Many of the mechanical assembly and repair prototypes use optical approaches, possibly because of the cost and safety issues. If successful, the equipment would have to be replicated in large numbers to equip workers on a factory floor. In contrast, most of the prototypes for medical applications use video approaches, probably for the flexibility in blending real and virtual and for the additional registration strategies offered.
5. TRACKING AND ORIENTATION
The biggest challenge facing developers of augmented reality is the need to know where the user is located in reference to his or her surroundings. There's also the additional problem of tracking the movement of users' eyes and heads. A tracking system has to recognize these movements and project the graphics related to the real-world environment the user is seeing at any given moment. Currently, both video see-through and optical see-through displays typically have lag in the overlaid material due to the tracking technologies currently available.
5.1 INDOOR TRACKING:
Tracking is easier in small spaces than in large spaces. Trackers typically have two parts: one worn by the tracked person or object and the other built into the surrounding environment, usually within the same room. In optical trackers, the targets (LEDs or reflectors, for instance) can be attached to the tracked person or object, and an array of optical sensors can be embedded in the room's ceiling. Alternatively, the tracked users can wear the sensors, and the targets can be fixed to the ceiling. By calculating the distance to each visible target, the sensors can determine the user's position and orientation.
Researchers at the University of North Carolina-Chapel Hill have developed a very precise system that works within 500 square feet. The HiBall Tracking System is an optoelectronic tracking system made of two parts:
• six user-mounted optical sensors
• infrared-light-emitting diodes (LEDs) embedded in special ceiling panels
The system uses the known location of the LEDs, the known geometry of the user-mounted optical sensors and a special algorithm to compute and report the user's position and orientation. The system resolves linear motion of less than 0.2 millimeters and angular motion of less than 0.03 degrees. It has an update rate of more than 1500 Hz, and latency is kept at about one millisecond.
In everyday life, people rely on several senses (including what they see, cues from their inner ears and gravity's pull on their bodies) to maintain their bearings. In a similar fashion, "hybrid trackers" draw on several sources of sensory information. For example, the wearer of an AR display can be equipped with inertial sensors (gyroscopes and accelerometers) to record changes in head orientation. Combining this information with data from the optical, video or ultrasonic devices greatly improves the accuracy of the tracking.
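One common way to fuse a fast-but-drifting inertial sensor with a slow-but-absolute reference is a complementary filter. The sketch below is a simplified illustration (the blending coefficient, sample rate and bias value are all assumptions, not measurements from any real tracker): the gyro reading is integrated every step, while the optical measurement continuously pulls the estimate back, so a constant gyro bias cannot accumulate.

```python
def complementary_filter(gyro_rates, optical_angles, dt=0.01, alpha=0.98):
    """Fuse gyro angular rates (deg/s, fast but biased) with absolute
    optical angle measurements (deg, slow but drift-free).  Each step
    trusts the integrated gyro with weight alpha and the optical
    reading with weight (1 - alpha)."""
    angle = optical_angles[0]
    fused = []
    for rate, absolute in zip(gyro_rates, optical_angles):
        angle = alpha * (angle + rate * dt) + (1.0 - alpha) * absolute
        fused.append(angle)
    return fused

# The head is actually still (optical tracker reads 0 degrees), but the
# gyro reports a constant 0.5 deg/s bias for 200 samples (2 seconds).
fused = complementary_filter([0.5] * 200, [0.0] * 200)

# Integrating the gyro alone would have drifted a full degree:
raw_drift = 0.5 * 0.01 * 200
```

Integration alone drifts without bound, while the fused estimate settles near a small bounded error; that bounded error is the sense in which hybrid tracking "greatly improves accuracy."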
5.2 OUTDOOR TRACKING:
Head orientation is determined with a commercially available hybrid tracker that combines gyroscopes and accelerometers with a magnetometer that measures the earth's magnetic field. For position tracking, outdoor systems can take advantage of a high-precision version of the increasingly popular Global Positioning System receiver.
A GPS receiver determines its position by monitoring radio signals from navigation satellites. GPS receivers have an accuracy of about 10 to 30 meters. An augmented-reality system would be worthless if the graphics projected were of something 10 to 30 meters away from what you were actually looking at.
Users can get better results with a technique known as differential GPS. In this method, the mobile GPS receiver also monitors signals from another GPS receiver and a radio transmitter at a fixed location on the earth. This transmitter broadcasts corrections based on the difference between the stationary GPS antenna's known and computed positions. By using these signals to correct the satellite signals, differential GPS can reduce the margin of error to less than one meter. Centimeter-level accuracy is achievable by employing real-time kinematic GPS, a more sophisticated form of differential GPS that also compares the phases of the signals at the fixed and mobile receivers.
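The correction step at the heart of differential GPS is simple arithmetic, as this sketch shows. The 2-D coordinates (in metres) and every numeric value are hypothetical; a real receiver corrects the underlying satellite measurements rather than the final fix, so treat this as a conceptual model only.

```python
def differential_correction(raw_fix, base_known, base_computed):
    """Differential GPS idea: the base station's surveyed position is
    known exactly, so the error in its computed fix approximates the
    error at a nearby mobile receiver and can be subtracted out."""
    err = (base_computed[0] - base_known[0],
           base_computed[1] - base_known[1])
    return (raw_fix[0] - err[0], raw_fix[1] - err[1])

# The base station is surveyed at (100, 200) m but currently computes
# (112, 191) m, so today's satellite-signal error is (+12, -9) m.
# The same error is assumed to corrupt the mobile receiver's raw fix.
corrected = differential_correction((512.0, 291.0),
                                    (100.0, 200.0),
                                    (112.0, 191.0))
```

Because both receivers see nearly the same satellites through nearly the same atmosphere, subtracting the base station's error removes most of the mobile receiver's error too.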
Augmented-reality systems place extraordinarily high demands on the accuracy, resolution, repeatability and speed of tracking technologies. Hardware and software delays introduce a lag between the user's movement and the update of the display. As a result, virtual objects will not remain in their proper positions as the user moves about or turns his or her head. One technique for combating such errors is to equip AR systems with software that makes short-term predictions about the user's future motions by extrapolating from previous movements. And in the long run, hybrid trackers that include computer vision technologies may be able to trigger appropriate graphics overlays when the devices recognize certain objects in the user's view.
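The short-term prediction mentioned above can be as simple as linear extrapolation of the last two tracker samples across the known system lag. The function name, the 10 ms sample spacing and the 30 ms lag below are illustrative assumptions; real systems use richer motion models and filters.

```python
def predict_position(samples, lag):
    """Extrapolate the most recent two tracker samples, given as
    (time_seconds, value) pairs, linearly across the system lag so
    the display is rendered for where the head WILL be."""
    (t0, p0), (t1, p1) = samples[-2:]
    velocity = (p1 - p0) / (t1 - t0)
    return p1 + velocity * lag

# Head yaw sampled 10 ms apart; render for 30 ms in the future to hide
# the tracking-to-display latency.
predicted = predict_position([(0.00, 10.0), (0.01, 10.5)], lag=0.03)
```

Prediction helps only while the motion stays roughly constant over the lag interval; sudden accelerations still produce registration error, which is why hybrid, vision-assisted tracking remains an active research direction.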
6. MOBILE COMPUTING POWER
For a wearable augmented reality system, there is still not enough computing power to create stereo 3-D graphics. So researchers are using whatever they can get out of laptops and personal computers, for now. Laptops are just now starting to be equipped with graphics processing units (GPUs).
Toshiba just added an NVidia GPU to their notebooks that is able to process more than 17-million triangles per second and 286-million pixels per second, which can enable CPU-intensive programs, such as 3-D games. But still, notebooks lag far behind: NVidia has developed a custom 300-MHz 3-D graphics processor for Microsoft's Xbox game console that can produce 150 million polygons per second, and polygons are more complicated than triangles. So you can see how far mobile graphics chips have to go before they can create smooth graphics like the ones you see on your home video-game system.
7. APPLICATION DOMAINS
Only recently have the capabilities of real-time video image processing, computer graphic systems and new display technologies converged to make possible the display of a virtual graphical image correctly registered with a view of the 3D environment surrounding the user. Researchers working with augmented reality systems have proposed them as solutions in many domains. The areas that have been discussed range from entertainment to military training. Many of the domains, such as medicine, are also proposed for traditional virtual reality systems. This section will highlight some of the proposed applications for augmented reality.
7.1 MEDICAL
Because imaging technology is so pervasive throughout the medical field, it is not surprising that this domain is viewed as one of the more important for augmented reality systems. Most of the medical applications deal with image guided surgery. Pre-operative imaging studies, such as CT or MRI scans, of the patient provide the surgeon with the necessary view of the internal anatomy. From these images the surgery is planned. Visualization of the path through the anatomy to the affected area where, for example, a tumor must be removed is done by first creating a 3D model from the multiple views and slices in the preoperative study. This is most often done mentally though some systems will create 3D volume visualizations from the image study. Augmented reality can be applied so that the surgical team can see the CT or MRI data correctly registered on the patient in the operating theater while the procedure is progressing. Being able to accurately register the images at this point will enhance the performance of the surgical team.
Another application for augmented reality in the medical domain is in ultrasound imaging. Using an optical see-through display the ultrasound technician can view a volumetric rendered image of the fetus overlaid on the abdomen of the pregnant woman. The image appears as if it was inside of the abdomen and is correctly rendered as the user moves.
7.2 ENTERTAINMENT
A simple form of augmented reality has been in use in the entertainment and news business for quite some time. Whenever you are watching the evening weather report the weather reporter is shown standing in front of changing weather maps. In the studio the reporter is actually standing in front of a blue or green screen. This real image is augmented with computer generated maps using a technique called chroma-keying. It is also possible to create a virtual studio environment so that the actors can appear to be positioned in a studio with computer generated decorating.
Movie special effects make use of digital compositing to create illusions. Strictly speaking, with current technology this may not be considered augmented reality because it is not generated in real-time. Most special effects are created off-line, frame by frame, with a substantial amount of user interaction and computer graphics system rendering. But some work is progressing in computer analysis of the live action images to determine the camera parameters and use this to drive the generation of the virtual graphics objects to be merged.
Princeton Electronic Billboard has developed an augmented reality system that allows broadcasters to insert advertisements into specific areas of the broadcast image. For example, while broadcasting a baseball game this system would be able to place an advertisement in the image so that it appears on the outfield wall of the stadium. The electronic billboard requires calibration to the stadium by taking images from typical camera angles and zoom settings in order to build a map of the stadium including the locations in the images where advertisements will be inserted. By using pre-specified reference points in the stadium, the system automatically determines the camera angle being used and referring to the pre-defined stadium map inserts the advertisement into the correct place.
ARQuake, a mobile AR system, blends users in the real world with those in a purely virtual environment. A mobile AR user plays as a combatant in the computer game 'Quake', where the game runs with a virtual model of the real environment.
7.3 MILITARY TRAINING
The military has been using displays in cockpits that present information to the pilot on the windshield of the cockpit or the visor of their flight helmet. This is a form of augmented reality display. SIMNET, a distributed war games simulation system, is also embracing augmented reality technology. By equipping military personnel with helmet mounted visor displays or a special purpose rangefinder the activities of other units participating in the exercise can be imaged. While looking at the horizon, for example, the display equipped soldier could see a helicopter rising above the tree line. This helicopter could be being flown in simulation by another participant. In wartime, the display of the real battlefield scene could be augmented with annotation information or highlighting to emphasize hidden enemy units.
7.4 ENGINEERING DESIGN
Imagine that a group of designers are working on the model of a complex device for their clients. The designers and clients want to do a joint design review even though they are physically separated. If each of them had a conference room that was equipped with an augmented reality display this could be accomplished. The physical prototype that the designers have mocked up is imaged and displayed in the client's conference room in 3D. The clients can walk around the display looking at different aspects of it. To hold discussions the client can point at the prototype to highlight sections and this will be reflected on the real model in the augmented display that the designers are using. Or perhaps in an earlier stage of the design, before a prototype is built, the view in each conference room is augmented with a computer generated image of the current design built from the CAD files describing it. This would allow real time interactions with elements of the design so that either side can make adjustments and changes that are reflected in the view seen by both groups.
7.5 ROBOT PATH PLANNING
[Figure: Virtual lines show a planned motion of a robot arm]
Teleoperation of a robot is often a difficult problem, especially when the robot is far away and there are long delays in the communication link. Under these circumstances, instead of controlling the robot directly, it may be preferable to control a virtual version of the robot. The user plans and specifies the robot's actions by manipulating the local virtual version in real time. The results are displayed directly on the real world view. Once the plan is tested and determined, the user tells the real robot to execute the specified plan. This avoids pilot-induced oscillations caused by the lengthy delays. The virtual versions can also predict the effects of manipulating the environment, thus serving as a planning and previewing tool to aid the user in performing the desired task. The ARGOS system has demonstrated that stereoscopic AR is an easier and more accurate way of doing robot path planning than traditional monoscopic interfaces. Others have also used registered overlays with telepresence systems.
7.6 Manufacturing, Maintenance and Repair
When the maintenance technician approaches a new or unfamiliar piece of equipment, instead of opening several repair manuals they could put on an augmented reality display. In this display the image of the equipment would be augmented with annotations and information pertinent to the repair. For example, the location of fasteners and attachment hardware that must be removed would be highlighted. Then the inside view of the machine would highlight the boards that need to be replaced. The military has developed a wireless vest worn by personnel that is attached to an optical see-through display. The wireless connection allows the soldier to access repair manuals and images of the equipment. Future versions might register those images on the live scene and provide animation to show the procedures that must be performed.
[Figure: External view of the Columbia printer maintenance application. Note that all objects must be tracked.]
Boeing researchers are developing an augmented reality display to replace the large work frames used for making wiring harnesses for their aircraft. Using this experimental system, the technicians are guided by the augmented display that shows the routing of the cables on a generic frame used for all harnesses. The augmented display allows a single fixture to be used for making the multiple harnesses.
7.7 Consumer Design
Virtual reality systems are already used for consumer design. Using perhaps more of a graphics system than true virtual reality, a typical home store, when you want to add a new deck to your house, will show you a graphical picture of what the deck will look like. It is conceivable that a future system would let you bring in a videotape of your house, shot from various viewpoints in your backyard, and in real time augment that view to show the new deck in its finished form attached to your house. Or bring in a tape of your current kitchen, and the augmented reality processor would replace your current cabinetry with virtual images of the new kitchen that you are designing.
Applications in the fashion and beauty industry that would benefit from an augmented reality system can also be imagined. If the dress store does not have a particular style of dress in your size, a dress of the appropriate size could be used to augment your image. As you looked in the three-sided mirror, you would see the image of the new dress on your body. Changes in hem length, shoulder style, or other particulars of the design could be viewed on you before you place the order. In some high-tech beauty shops today you can already see what a new hairstyle would look like on a digitized image of yourself, but with an advanced augmented reality system you would be able to see the view as you moved. If the dynamics of hair were included in the description of the virtual object, you would also see the motion of your hair as your head moved.
7.8 Instant Information
Tourists and students could use these systems to learn more about a certain historical event. Imagine walking onto a Civil War battlefield and seeing a re-creation of historical events on a head-mounted, augmented-reality display. It would immerse you in the event, and the view would be panoramic. The recently started Archeoguide project is developing a wearable AR system for providing tourists with information about a historical site in Olympia, Greece.
7.9 Portability
In almost all Virtual Environment systems, the user is not encouraged to walk around much. Instead, the user navigates by "flying" through the environment, walking on a treadmill, or driving some mockup of a vehicle. Whatever the technology, the result is that the user stays in one place in the real world. Some AR applications, however, will need to support a user who walks around a large environment. AR requires that the user actually be at the place where the task is to take place. "Flying," as performed in a VE system, is no longer an option. If a mechanic needs to go to the other side of a jet engine, she must physically move herself and the display devices she wears. Therefore, AR systems will place a premium on portability, especially the ability to walk around outdoors, away from controlled environments. The scene generator, the HMD, and the tracking system must all be self-contained and capable of surviving exposure to the environment. If this capability is achieved, many applications that have not yet been tried will become available. For example, the ability to annotate the surrounding environment could be useful to soldiers, hikers, or tourists in an unfamiliar location.
8. LIMITATIONS
8.1 Technological Limitations
Although we've seen much progress in the basic enabling technologies, their limitations still prevent the deployment of many AR applications. Displays, trackers, and AR systems in general need to become more accurate, lighter, cheaper, and less power hungry. By describing problems from our common experiences in building outdoor AR systems, we hope to impart a sense of the many areas that still need improvement. Displays such as the Sony Glasstron are intended for indoor consumer use and aren't ideal for outdoor use. The display isn't very bright and washes out completely in bright sunlight. The image has a fixed focus that makes it appear several feet away from the user, which is often closer than the outdoor landmarks. The equipment isn't nearly as portable as desired. Since the user must wear the PC, sensors, display, batteries, and everything else required, the end result is a cumbersome and heavy backpack. Laptops today have only one CPU, limiting the amount of visual and hybrid tracking that we can do. Operating systems aimed at the consumer market aren't built to support real-time computing, while specialized real-time operating systems lack the drivers to support the sensors and graphics in modern hardware. Tracking in unprepared environments remains an enormous challenge. Outdoor demonstrations today have shown good tracking only with significant restrictions in operating range, often with sensor suites that are too bulky and expensive for practical use. Today's systems generally require extensive calibration procedures that an end user would find unacceptably complicated. Many connectors, such as universal serial bus (USB) connectors, aren't rugged enough for outdoor operation and are prone to breaking.
While we expect some improvements to naturally occur from other fields such as wearable computing, research in AR can reduce these difficulties through improved tracking in unprepared environments and calibration- free or auto calibration approaches to minimize set-up requirements.
8.2 User interface limitations
We need a better understanding of how to display data to a user and how the user should interact with the data. Most existing research concentrates on low-level perceptual issues, such as properly perceiving depth or how latency affects manipulation tasks. However, AR also introduces many high-level tasks, such as the need to identify what information should be provided, what's the appropriate representation for that data, and how the user should make queries and reports. For example, a user might want to walk down a street, look in a shop window, and query the inventory of that shop. To date, few have studied such issues. However, we expect significant growth in this area because research AR systems with sufficient capabilities are now more commonly available. For example, recent work suggests that the creation and presentation of narrative performances and structures may lead to more realistic and richer AR experiences.
9. FUTURE DIRECTIONS
This section identifies areas and approaches that require further research to produce improved AR systems.
• Hybrid approaches: Future tracking systems may be hybrids, because combining approaches can cover weaknesses. The same may be true for other problems in AR. For example, current registration strategies generally focus on a single strategy. Future systems may be more robust if several techniques are combined. An example is combining vision-based techniques with prediction. If the fiducials are not available, the system switches to open-loop prediction to reduce the registration errors, rather than breaking down completely. The predicted viewpoints in turn produce a more accurate initial location estimate for the vision-based techniques.
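The hybrid fallback described above can be sketched as a small decision function. This is an illustrative one-dimensional sketch under assumed names (`estimate_pose`, a scalar pose, constant-velocity prediction), not a real tracking implementation.

```python
# Illustrative sketch of the hybrid strategy: use a vision-based fix when
# fiducials are visible, otherwise fall back to open-loop prediction from
# the last known motion instead of breaking down completely.
# Pose is simplified to a single scalar for clarity.

def estimate_pose(fiducial_fix, last_pose, velocity, dt):
    """Return (pose, source). fiducial_fix is None when no markers are seen."""
    if fiducial_fix is not None:
        # Closed-loop correction from the vision-based tracker.
        return fiducial_fix, "vision"
    # Open-loop constant-velocity prediction; this predicted pose would also
    # seed the vision tracker's search when fiducials reappear.
    return last_pose + velocity * dt, "prediction"

pose, src = estimate_pose(None, last_pose=10.0, velocity=2.0, dt=0.1)
print(pose, src)   # 10.2 prediction
pose, src = estimate_pose(10.15, last_pose=10.0, velocity=2.0, dt=0.1)
print(pose, src)   # 10.15 vision
```

The point of the structure is graceful degradation: registration error grows slowly during prediction rather than failing abruptly when markers leave the field of view.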
• Real-time systems and time-critical computing: Many VE systems do not truly run in real time. Instead, it is common to build the system, often on UNIX, and then see how fast it runs. This may be sufficient for some VE applications: since everything is virtual, all the objects are automatically synchronized with each other. AR is a different story. Now the virtual and real must be synchronized, and the real world "runs" in real time. Therefore, effective AR systems must be built with real-time performance in mind. Accurate timestamps must be available. Operating systems must not arbitrarily swap out the AR software process at any time, for arbitrary durations. Systems must be built to guarantee completion within specified time budgets, rather than just "running as quickly as possible." These are characteristics of flight simulators and a few VE systems. Constructing and debugging real-time systems is often painful and difficult, but the requirements for AR demand real-time performance.
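The difference between "running as quickly as possible" and completing within a time budget can be sketched as follows. This is a minimal sketch with an assumed 60 Hz budget; a real AR system would use an OS with real-time scheduling guarantees, which plain Python cannot provide.

```python
# Minimal sketch of per-frame time budgeting: each frame gets a fixed
# budget, and the system measures whether it met the deadline rather than
# simply running as fast as it can. The 60 Hz figure is an assumption.

import time

FRAME_BUDGET = 1.0 / 60.0   # seconds available per frame at 60 Hz

def run_frame(work):
    """Run one frame's worth of work and report whether it met its budget."""
    start = time.monotonic()        # accurate timestamp for this frame
    work()                          # tracking + rendering for one frame
    elapsed = time.monotonic() - start
    return elapsed <= FRAME_BUDGET  # deadline met?

met = run_frame(lambda: sum(range(1000)))
print(met)
```

A system built this way can degrade deliberately (for example, by simplifying the scene) when frames start missing their deadlines, instead of letting the virtual imagery silently fall behind the real world.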
• Perceptual and psychophysical studies: Augmented Reality is an area ripe for psychophysical studies. How much lag can a user detect? How much registration error is detectable when the head is moving? Besides questions on perception, psychological experiments that explore performance issues are also needed. How much does head-motion prediction improve user performance on a specific task? How much registration error is tolerable for a specific application before performance on that task degrades substantially? Is the allowable error larger while the user moves her head than when she stands still? Furthermore, not much is known about potential optical illusions caused by errors or conflicts in the simultaneous display of real and virtual objects.
• Portability: It is essential that potential AR applications give the user the ability to walk around large environments, even outdoors. This requires making the equipment self-contained and portable. Existing tracking technology is not capable of tracking a user outdoors at the required accuracy.
• Multimodal displays: Almost all work in AR has focused on the visual sense: virtual graphic objects and overlays. But augmentation might apply to all other senses as well. In particular, adding and removing 3-D sound is a capability that could be useful in some AR applications.
10. CONCLUSION
Augmented Reality is far behind Virtual Environments in maturity. Several commercial vendors sell complete, turnkey Virtual Environment systems, but no commercial vendor currently sells an HMD-based Augmented Reality system. A few monitor-based "virtual set" systems are available, but today AR systems are primarily found in academic and industrial research laboratories. The first deployed HMD-based AR systems will probably be in aircraft manufacturing, where both optical and video approaches are being pursued. Boeing has performed trial runs with workers using a prototype system but has not yet made any deployment decisions. Annotation and visualization applications in restricted, limited-range environments are deployable today, although much more work needs to be done to make them cost effective and flexible. Applications in medical visualization will take longer. Prototype visualization aids have been used on an experimental basis, but the stringent registration requirements and the ramifications of mistakes will postpone common usage for many years. AR will probably be used for medical training before it is commonly used in surgery. The next generation of combat aircraft will have Helmet-Mounted Sights with graphics registered to targets in the environment. These displays, combined with short-range steerable missiles that can shoot at targets off-boresight, give a tremendous combat advantage to pilots in dogfights. Instead of having to be directly behind his target in order to shoot at it, a pilot can now shoot at anything within a 60-90 degree cone of his aircraft's forward centerline.
One area where a breakthrough is required is tracking an HMD outdoors at the accuracy required by AR. If this is accomplished, several interesting applications will become possible. Two examples are described here: navigation maps and visualization of past and future environments. The first application is a navigation aid to people walking outdoors. These individuals could be soldiers advancing upon their objective, hikers lost in the woods, or tourists seeking directions to their intended destination. Today, these individuals must pull out a physical map and associate what they see in the real environment around them with the markings on the 2-D map. If landmarks are not easily identifiable, this association can be difficult to perform, as
anyone lost in the woods can attest. An AR system makes navigation easier by performing the association step automatically. If the user's position and orientation are known, and the AR system has access to a digital map of the area, then the AR system can draw the map in 3-D directly upon the user's view. Tourists and students walking around the grounds with such AR displays would gain a much better understanding of these historical sites and the important events that took place there. Similarly, AR displays could show what proposed architectural changes would look like before they are carried out. An urban designer could show clients and politicians what a new stadium would look like as they walked around the adjoining neighborhood, to better understand how the stadium project will affect nearby residents.
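The "association step" that the AR system automates reduces, at its core, to geometry: given the user's position and heading, compute where a map waypoint falls relative to the user's gaze. The following is a hedged 2-D sketch of that computation; the function name and the flat-ground, compass-heading model are assumptions, and a real system would use the full 6-degree-of-freedom pose.

```python
# Illustrative 2-D sketch of the map-to-view association step: compute the
# angle of a waypoint relative to the user's gaze direction, which tells the
# overlay where (or whether) to draw it. No real AR toolkit is used.

import math

def bearing_to(user_xy, user_heading_deg, target_xy):
    """Angle of target relative to the user's gaze, in degrees.
    0 = straight ahead, positive = to the user's right.
    Headings are compass-style: 0 deg = north (+y), 90 deg = east (+x)."""
    dx = target_xy[0] - user_xy[0]
    dy = target_xy[1] - user_xy[1]
    absolute = math.degrees(math.atan2(dx, dy))       # compass bearing
    # Wrap the relative angle into (-180, 180].
    return (absolute - user_heading_deg + 180) % 360 - 180

# User at the origin facing north; a waypoint due east is 90 deg to the right.
print(round(bearing_to((0, 0), 0.0, (100, 0)), 1))   # 90.0
```

An overlay would draw the waypoint marker only when this relative bearing falls within the display's field of view, and point an arrow toward it otherwise.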
After the basic problems with AR are solved, the ultimate goal will be to generate virtual objects that are so realistic that they are virtually indistinguishable from the real environment. Photorealism has been demonstrated in feature films, but accomplishing this in an interactive application will be much harder. Lighting conditions, surface reflections, and other properties must be measured automatically, in real time. More sophisticated lighting, texturing, and shading capabilities must run at interactive rates in future scene generators. Registration must be nearly perfect, without manual intervention or adjustments. While these are difficult problems, they are probably not insurmountable. It took about 25 years to progress from drawing stick figures on a screen to the photorealistic dinosaurs in "Jurassic Park." Within another 25 years, we should be able to wear a pair of AR glasses outdoors to see and interact with photorealistic dinosaurs eating a tree in our backyard.