PROJECT OXYGEN full report

ABSTRACT

In the future, computation will be human-centered. It will be freely available everywhere, like batteries and power sockets, or oxygen in the air we breathe. It will enter the human world, handling our goals and needs and helping us to do more while doing less. We will not need to carry our own devices around with us. Instead, configurable generic devices, either handheld or embedded in the environment, will bring computation to us, whenever we need it and wherever we might be. As we interact with these "anonymous" devices, they will adopt our information personalities. They will respect our desires for privacy and security.

New systems will boost our productivity. They will help us automate repetitive human tasks, control a wealth of physical devices in the environment, find the information we need (when we need it, without forcing our eyes to examine thousands of search-engine hits), and enable us to work together with other people through space and time.

The Oxygen system must be accessible anywhere. It must adapt to change, both in user requirements and in operating conditions. It must never shut down or reboot; components may come and go in response to demand, errors, and upgrades, but Oxygen as a whole must be available all the time.

THE APPROACH

INTEGRATED TECHNOLOGIES THAT ADDRESS HUMAN NEEDS

Oxygen enables pervasive, human-centered computing through a combination of specific user and system technologies.

Oxygen's user technologies directly address human needs. Speech and vision technologies enable us to communicate with Oxygen as if we're interacting with another person, saving much time and effort. Automation, individualized knowledge access, and collaboration technologies help us perform a wide variety of tasks that we want to do in the ways we like to do them.

Oxygen's system technologies dramatically extend our range by delivering user technologies to us at home, at work, or on the go. Computational devices, called Enviro21s (E21s), embedded in our homes, offices, and cars sense and affect our immediate environment. Hand-held devices, called Handy21s (H21s), empower us to communicate and compute no matter where we are. Dynamic networks (N21s) help our machines locate each other as well as the people, services, and resources we want to reach.
The Oxygen technologies work together and pay attention to several important themes:
• Distribution and mobility - for people, resources, and services.
• Semantic content - what we mean, not just what we say.
• Adaptation and change - essential features of an increasingly dynamic world.
• Information personalities - the privacy, security, and form of our individual interactions with Oxygen.

Oxygen is an integrated software system that will reside in the public domain. Its development is sponsored by DARPA and the Oxygen Alliance industrial partners, who share its goal of pervasive, human-centered computing. Realizing that goal will require a great deal of creativity and innovation, which will come from researchers, students, and others who use Oxygen technologies for their daily work during the course of the project. The lessons they derive from this experience will enable Oxygen to better serve human needs.

SYSTEM TECHNOLOGIES

DEVICES AND NETWORKS

People access Oxygen through stationary devices (E21s) embedded in the environment or via portable hand-held devices (H21s). These universally accessible devices supply power for computation, communication, and perception in much the same way that wall outlets and batteries deliver power to electrical appliances. Although not customized to any particular user, they can adapt automatically or be modified explicitly to address specific user preferences. Like power outlets and batteries, these devices differ mainly in how much energy they can supply.

E21 STATIONARY DEVICES

Embedded in offices, buildings, homes, and vehicles, E21s enable us to create situated entities, often linked to local sensors and actuators, that perform various functions on our behalf, even in our absence. For example, we can create entities and situate them to monitor and change the temperature of a room, close a garage door, or redirect email to colleagues, even when we are thousands of miles away. E21s provide large amounts of embedded computation, as well as interfaces to camera and microphone arrays, thereby enabling us to communicate naturally, using speech and gesture, in the spaces they define.
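
The report does not define a programming interface for these situated entities, so the following Python sketch is only illustrative: it assumes hypothetical sensor and actuator objects exposed by an E21 and shows how such an entity could keep a room at a target temperature even while its owner is away.

    # Illustrative sketch of a situated entity hosted on an E21.
    # The sensor and thermostat objects are hypothetical stand-ins for the
    # local sensors and actuators an E21 would expose; this is not a
    # published Oxygen API.
    import time

    class RoomTemperatureEntity:
        def __init__(self, sensor, thermostat, target_celsius=21.0):
            self.sensor = sensor            # reads the room temperature
            self.thermostat = thermostat    # actuator that heats or cools the room
            self.target = target_celsius

        def step(self):
            reading = self.sensor.read()
            if reading < self.target - 1.0:
                self.thermostat.heat()
            elif reading > self.target + 1.0:
                self.thermostat.cool()

        def run(self, period_seconds=60):
            # The entity keeps acting on our behalf, even in our absence.
            while True:
                self.step()
                time.sleep(period_seconds)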

E21s provide sufficient computational power throughout the environment

• To communicate with people using natural perceptual resources, such as speech and vision,
• To support Oxygen's user technologies wherever people may be, and
• To monitor and control their environment.

E21s, as well as H21s, are universal communication and computation appliances. E21s leverage the same hardware components as the H21s so that the same software can run on both devices. E21s differ from H21s mainly in

• Their connections to the physical world,
• The computational power they provide, and
• The policies adopted by the software that runs on the devices.

CONNECTIONS TO THE PHYSICAL WORLD

E21s connect directly to a greater number and wider variety of sensors, actuators, and appliances than do H21s. These connections enable applications built with Oxygen's perceptual and user technologies to monitor and control the environment.











An E21 might control an array of microphones, which Oxygen's perceptual resources use to improve communication with speakers by filtering out background noise. Similarly, it might control an array of antennas to permit improved communication with nearby H21s that, as a result of a better signal-to-noise ratio, use less power. Multiple antennas mounted on the roof of a building, as well as incoming terrestrial lines, connect through E21s to high-bandwidth, local-area N21 networks.

Through the N21 network, an E21 can connect unobtrusively to H21s in the hands or pockets of people in an intelligent space. It can display information on an H21 display in a person's hand or on a nearby wall-mounted display; it may even suggest that the person step a few feet down the hall.

H21 HAND-HELD DEVICES

Users can select hand-held devices, called H21s, appropriate to the tasks they wish to perform. These devices accept speech and visual input, can reconfigure themselves to perform a variety of useful functions, and support a range of communication protocols. Among other things, H21s can serve as cellular phones, beepers, radios, televisions, geographical positioning systems, cameras, or personal digital assistants, thereby reducing the number of special-purpose gadgets we must carry. To conserve power, they may offload communication and computation onto nearby E21s.

Handheld devices, called H21s, provide flexibility in a lightweight design. They are anonymous devices that do not carry a large amount of permanent local state. Instead, they configure themselves through software to be used in a wide range of environments for a wide variety of purposes. For example, when a user picks up an anonymous H21, the H21 will customize itself to the user's preferred configuration. The H21s contain board-level antennas that enable them to couple with a wireless N21 network, embedded E21 devices, or nearby H21s to form collaborative regions.
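
A minimal sketch, assuming a hypothetical networked profile store and invented field names, of how an anonymous H21 might adopt and later discard a user's preferred configuration:

    # Sketch only: the profile store, its contents, and the method names are
    # assumptions for illustration; the report does not specify them.
    from dataclasses import dataclass, field

    @dataclass
    class UserProfile:
        user_id: str
        language: str = "en"
        preferred_apps: list = field(default_factory=list)

    class H21:
        def __init__(self):
            self.profile = None   # anonymous: no large permanent local state

        def on_user_detected(self, user_id, profile_store):
            # Pull the user's preferences over the N21 network and
            # reconfigure the device's software accordingly.
            self.profile = profile_store.get(user_id, UserProfile(user_id))
            self.configure(self.profile)

        def configure(self, profile):
            print(f"Loading {profile.preferred_apps} for {profile.user_id}")

        def on_user_left(self):
            # Drop the adopted personality; the device becomes anonymous again.
            self.profile = None

    # Usage: an in-memory dictionary stands in for networked profile storage.
    store = {"alice": UserProfile("alice", preferred_apps=["calendar", "mail"])}
    device = H21()
    device.on_user_detected("alice", store)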




H21s, like E21s, are universal communication and computation appliances. They leverage the same hardware components as the E21s so that the same software can run on both devices. H21s differ from E21s mainly in

• Their connections to the physical world,
• The computational power they provide, and
• The policies adopted by the software that runs on the devices.

CONNECTIONS TO THE PHYSICAL WORLD

Because handheld devices must be small, lightweight, and power-efficient, H21s come equipped with only a few perceptual and communication transducers, plus a low-power network that extends the set of I/O devices to which they can connect. In particular, H21s are not equipped with keyboards and large displays, although they may be connected to such devices. Through the N21 network, an H21 can connect unobtrusively to nearby, more powerful E21s, which provide additional connections to the physical world. The H21 contains multiple antennas for multiple communication protocols that depend on the transmission range, for example, building-wide, campus-wide, or point-to-point.

NETWORK AND SOFTWARE INFRASTRUCTURE

People use Oxygen to accomplish tasks that are part of their daily lives. Universally available network connectivity and computational power enable decentralized Oxygen components to perform these tasks by communicating and cooperating much as humans do in organizations. Components can be delegated to find resources, to link them together in useful ways, to monitor their progress, and to respond to change.

N21 NETWORKS

N21s support dynamically changing configurations of self-identifying mobile and stationary devices. They allow us to identify devices and services by how we intend to use them, not just by where they are located. They enable us to access the information and services we need, securely and privately, so that we are comfortable integrating Oxygen into our personal lives. N21s support multiple communication protocols for low-power local, building-wide, and campus-wide communication, enabling us to form collaborative regions that arise, adapt, and collapse as needed.












Flexible, decentralized networks, called N21s, connect dynamically changing configurations of self-identifying mobile and stationary devices. N21s integrate different wireless, terrestrial, and satellite networks into one seamless internet. Through algorithms, protocols, and middleware, they

• Configure collaborative regions automatically, creating topologies and adapting them to mobility and change.
• Provide automatic resource and location discovery, without manual configuration and administration.
• Provide secure, authenticated, and private access to networked resources.
• Adapt to changing network conditions, including congestion, wireless errors, latency variations, and heterogeneous traffic (e.g., audio, video, and data), by balancing bandwidth, latency, energy consumption, and application requirements.

COLLABORATIVE REGIONS

Collaborative regions are self-organizing collections of computers and/or devices that share some degree of trust. Computers and devices may belong to several regions at the same time. Membership is dynamic: mobile devices may enter and leave different regions as they move around. Collaborative regions employ different protocols for intra-space and inter-space communication because of the need to maintain trust.



RESOURCE AND LOCATION DISCOVERY

N21 networks enable applications to use intentional names, not just location-based names, to describe the information and functionality they are looking for. Intentional names support resource discovery by providing access to entities that cannot be named statically, such as a full soda machine or surveillance cameras that have recently detected suspicious activity.

N21 networks integrate name resolution and routing. Intra-space routing protocols perform resolution and forwarding based on queries that express the characteristics of the desired data or resources in a collaborative region. Late binding between names and addresses (i.e., at delivery time) supports mobility and multicast. Early binding supports high bandwidth streams and anycast. Wide-area routing uses a scalable resolver architecture; techniques for soft state and caching provide scalability and fault tolerance.
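
As a rough illustration of intentional naming with early and late binding, the toy resolver below matches attribute-based queries against advertised resources; the attribute names and the interface are invented for this example and are not the actual N21 protocol.

    # Resources advertise attributes; clients ask by intent, not by address.
    def matches(advertisement, query):
        return all(advertisement.get(k) == v for k, v in query.items())

    class Resolver:
        def __init__(self):
            self.advertisements = []   # (attributes, address) pairs

        def advertise(self, attributes, address):
            self.advertisements.append((attributes, address))

        def resolve(self, query):
            # Early binding: return addresses now (suits high-bandwidth streams).
            return [addr for attrs, addr in self.advertisements if matches(attrs, query)]

        def send(self, query, message, network):
            # Late binding: re-resolve at delivery time, which tolerates mobility.
            for addr in self.resolve(query):
                network.deliver(addr, message)

    r = Resolver()
    r.advertise({"service": "printer", "building": "NE43", "floor": 2}, "10.0.0.17")
    print(r.resolve({"service": "printer", "building": "NE43"}))   # ['10.0.0.17']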

N21 networks support location discovery through proximity to named physical objects (for example, low-power RF beacons embedded in the walls of buildings). Location discovery enables mobile devices to access and present location-specific information. For example, an H21 might help visitors navigate to their destination with spoken right-left instructions; held up next to a paper or an electronic poster of an old talk, it could provide access to stored audio and video fragments of the talk; pointed to a door, it could provide information about what is happening behind the door.

SECURITY

A collaborative region is a set of devices that have been instructed by their owners to trust each other to a specified degree. A collaborative region that defines a meeting, for example, has a set of trust and authorization rules that specify what happens during a meeting (how working materials and presentation illustrations are shared, who can print on the local printer). Typically, trust rules for a meeting do not allow participants to write arbitrary information anywhere in the region. However, once users know what the trust rules are, they can introduce their devices into the meeting's collaborative region, with confidence that only the expected range of actions will happen, even if the details of the interactions are left to automatic configuration.
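
A minimal sketch of such trust rules, using an invented rule vocabulary: a small policy table for a meeting region is consulted before any action is allowed.

    # Hypothetical trust rules for a meeting's collaborative region.
    MEETING_RULES = {
        "share_slides":       {"participant", "presenter"},
        "print_local":        {"participant", "presenter"},
        "write_shared_notes": {"presenter"},   # arbitrary writes are not permitted
    }

    def allowed(action, role, rules=MEETING_RULES):
        return role in rules.get(action, set())

    assert allowed("print_local", "participant")
    assert not allowed("write_shared_notes", "participant")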

Resource and location discovery systems address privacy issues by giving resources and users control over how much to reveal. Rather than tracking the identity, location, and characteristics of all resources and users at all times, these systems accept and propagate only the information that resources and users choose to advertise. Self-certifying names enable clients of discovery systems to trust the advertised information.

ADAPTATION

N21 networks allow devices to use multiple communication protocols. Vertical handoffs among these protocols allow H21 devices to provide seamless and power-efficient connectivity across a wide range of domains, for example, building-wide, campus-wide, and point-to-point. They also enable applications to adapt to changes in channel conditions (e.g., congestion and packet loss) and in their own requirements (e.g., for bandwidth, latency, or reliability). They provide interfaces to monitoring mechanisms, which allow end-host transport agents to learn about congestion or about packet losses caused by wireless channel errors. This enables end-to-end resource management based on a unified congestion manager, which provides different flows with "shared state learning" and allows applications to adapt to congestion in ways that accommodate the heterogeneous nature of streams. Unlike the standard TCP protocol, which is tuned for bulk data transfers, the congestion manager efficiently handles congestion due to audio, video, and other real-time streaming applications, as well as to multiple short connections. N21 networks provide interfaces to control mechanisms, which enable applications to influence the way their packets are routed.
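
To illustrate the "shared state learning" idea only, the sketch below lets several flows to the same destination consult one congestion estimate and adapt their own quality; the callback interface and the numbers are assumptions, not the actual congestion manager.

    class CongestionManager:
        def __init__(self):
            self.rate_bps = 1_000_000   # shared estimate for all registered flows
            self.flows = []

        def register(self, flow):
            self.flows.append(flow)

        def on_feedback(self, loss_seen):
            # Crude additive-increase / multiplicative-decrease estimate.
            self.rate_bps = self.rate_bps / 2 if loss_seen else self.rate_bps + 50_000
            share = self.rate_bps / max(len(self.flows), 1)
            for flow in self.flows:
                flow.adapt(share)

    class VideoFlow:
        def adapt(self, rate_bps):
            # A real-time stream lowers its quality instead of stalling.
            self.quality = "high" if rate_bps > 400_000 else "low"

    cm = CongestionManager()
    video = VideoFlow()
    cm.register(video)
    cm.on_feedback(loss_seen=True)   # 1,000,000 -> 500,000; still "high"
    cm.on_feedback(loss_seen=True)   # 500,000 -> 250,000; flow drops quality
    print(video.quality)             # "low"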

SOFTWARE ARCHITECTURE

Oxygen's software architecture supports change above the device and network levels. The software architecture matches current user goals with currently available software services, configuring those services to achieve the desired goals. When necessary, it adapts the resulting configurations to changes in goals, available services, or operating conditions. It thereby relieves users of the burden of directing and monitoring the operation of the system as it accomplishes their goals.



USER TECHNOLOGIES

Several important technologies harness Oxygen's pervasive computational, communication, and perceptual resources to advance the human-centered goal of enabling people to accomplish more with less effort.

SPOKEN LANGUAGE, SKETCHING AND VISUAL CUES

Spoken language and visual cues, rather than keyboards and mice, define the main modes of interaction with Oxygen. By integrating these two technologies, Oxygen can better discern our intentions, for example, by using vision to augment speech understanding through the recognition of facial expressions, gestures, lip movements, and gaze. These perceptual technologies are part of the core of Oxygen, not just afterthoughts or interfaces to separate applications.

They can be customized quickly in Oxygen applications to make selected human-machine interactions easy and natural. Graceful switching between different domains (e.g., from a conversation about the weather in Rome to one about airline reservations) supports seamless integration of applications.



KNOWLEDGE ACCESS

Individualized knowledge access technologies offer greatly improved access to information, customized to the needs of people, applications, and software systems. Universal access to information is facilitated through annotations that allow content-based comparisons and manipulations of data represented in different formats and using different terminologies. Users may access their own knowledge bases, those of friends and associates, and other information publicly available on the Web.

The individualized knowledge access subsystem supports the natural ways people use to access information. In particular, it supports personalized, collaborative, and communal knowledge, "triangulating" among these three sources of information to find the information people need. It observes and adapts to its users, so as to better meet their needs. The subsystem integrates the following components to gather and store data, to monitor user access patterns, and to answer queries and interpret data.

DATA REPRESENTATION

The subsystem stores information encountered by its users using an extensible data model that links arbitrary objects via arbitrarily named arcs. There are no restrictions on object types or names. Users and the system alike can aggregate useful information regardless of its form (text, speech, images, video). The arcs, which are also objects, represent relational (database-type) information as well as associative (hypertext-like) linkage. For example, objects and arcs in A's data model can represent B's knowledge of interest to A, and vice versa.
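
A minimal sketch of this data model, with illustrative class names: arbitrary objects linked by named arcs, where an arc is itself an object that can be linked and annotated.

    class Obj:
        def __init__(self, value=None):
            self.value = value
            self.arcs = []          # outgoing Arc objects

    class Arc(Obj):                 # an arc is also an object ...
        def __init__(self, name, source, target):
            super().__init__(value=name)
            self.source, self.target = source, target
            source.arcs.append(self)

    paper = Obj("Project Oxygen overview")
    alice = Obj("Alice")
    authored = Arc("author-of", alice, paper)
    Arc("annotated-by", authored, Obj("added by B"))   # ... so it can be annotated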

DATA ACQUISITION

The subsystem gathers as much data as possible about the information of interest to a user. It does so through raw acquisition of data objects, by analyzing the acquired information, by observing people's use of it, by encouraging direct human input, and by tuning access to the user.

AUTOMATIC ACCESS METHODS

The arrival of new data triggers automated services, which, in turn, obtain further data or trigger other services. Automatic services fetch web pages, extract text from postscript documents, identify authors and titles in a document, recognize pairs of similar documents, and create document summaries that can be displayed as a result of a query. The system allows users to script and add more services, as they are needed.
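
A rough sketch of this triggering idea, with a made-up service and object schema: the arrival of a new object runs every registered service, and users could register further services of their own.

    services = []

    def service(fn):                # registration hook for automatic services
        services.append(fn)
        return fn

    @service
    def summarize(obj, store):
        if obj.get("kind") == "document":
            store.append({"kind": "summary", "of": obj["title"],
                          "text": obj["text"][:100]})

    def on_new_object(obj, store):
        store.append(obj)
        for fn in list(services):
            fn(obj, store)          # services may add derived objects to the store

    store = []
    on_new_object({"kind": "document", "title": "Oxygen", "text": "Pervasive..."}, store)
    print(len(store))               # 2: the document plus its generated summary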

HUMAN ACCESS METHODS

Since automated services can go only so far in carrying out these tasks, the system allows users to provide higher quality annotations on the information they are using, via text, speech, and other human interaction modalities.

AUTOMATED OBSERVERS

Subsystems watch the queries that users make, the results they dwell upon, the files they edit, the mail they send and receive, the documents they read, and the information they save. The system exploits observations of query behavior by converting query results into objects that can be annotated further. New observers can be added to exploit additional opportunities. In all cases, the observations are used to tune the data representation according to usage patterns.

AUTOMATION

The automation subsystem provides technologies for encapsulating objects, both physical and virtual, so that their actions can be automated. It also provides scripting technologies that automate new processes in response to direct commands, or by observing, imitating, and fine-tuning established processes.

BASIC AUTOMATION OBJECTS

Basic automation objects are "black boxes" of low-level actions that can be managed by higher-level automation processes. The objects can be either physical or virtual. A basic physical object senses or actuates a physical entity: it may sense the temperature or whether an office door is open, and it may crank up the heat or send an image to a display. A basic virtual object collects, generates, or transforms information: it may extract designated items from incoming electronic forms, operate on them in a designated manner, and send the results to a particular device.
A common intelligent interface connects basic physical objects to the network. The interface consists of a chip containing a microprocessor, a network adapter, main memory, and non-volatile storage. It makes different sensors, actuators, and appliances more powerful, provides device status information, reduces the bandwidth they require, and downloads commands and new low-level software.
Any software object can be a basic virtual object. Electronic forms are particularly common basic virtual objects, because they serve as convenient "interfaces" for exchanging information among people and organizations.
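
The following sketch shows one way such basic objects could present a common interface to higher-level automation; the method names are assumptions made for this example, not the report's interface.

    from abc import ABC, abstractmethod

    class BasicObject(ABC):
        @abstractmethod
        def status(self) -> dict: ...
        @abstractmethod
        def command(self, name: str, **args): ...

    class DoorSensor(BasicObject):              # physical: senses the world
        def __init__(self):
            self.open = False
        def status(self):
            return {"open": self.open}
        def command(self, name, **args):
            pass                                # a pure sensor accepts no commands

    class ExpenseForm(BasicObject):             # virtual: transforms information
        def __init__(self, fields):
            self.fields = fields
        def status(self):
            return {"total": sum(self.fields.values())}
        def command(self, name, **args):
            if name == "add_item":
                self.fields[args["item"]] = args["amount"]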

CONTROL OVER COMBINED OBJECTS

The automation architecture provides mechanisms for composing modular components, such as speech, vision, and appliances, and for controlling their behavior based on user scripts. The architecture allows distributed objects (or agents) to refer to one another by function and capability, without respect to their location. Objects communicate using a universal streaming data "bus" standard. They can move around, be re-connected dynamically, and seamlessly resume previously established connections with one another. The scripting language enables users to specify easily and rapidly the tasks they wish to automate.

The automation subsystem uses a top-level watch-reason-automate "loop" to monitor and filter information of interest to the automation process, to select appropriate automation regimes for given tasks, and to implement those regimes. The scripting language enables users to customize automation regimes in response to context changes and other factors too complicated to handle automatically, either in the original script or in the watch-reason-automate loop.
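
A compact sketch of a watch-reason-automate loop over the kind of basic objects sketched above; the event fields and the single automation regime are invented for illustration.

    def watch(event):
        # Filter: only events of interest to the automation process.
        return event.get("kind") in {"door", "thermostat", "form"}

    def reason(event):
        # Select an automation regime for the situation, if any applies.
        if event["kind"] == "door" and event["open"] and event["hour"] >= 22:
            return "close_and_notify"
        return None

    def automate(regime, actuators):
        if regime == "close_and_notify":
            actuators["door"].command("close")
            actuators["pager"].command("send", text="Door closed for the night")

    def loop(events, actuators):
        for event in events:
            if watch(event):
                regime = reason(event)
                if regime:
                    automate(regime, actuators)

    class EchoActuator:                          # stand-in for a real device
        def command(self, name, **args):
            print(name, args)

    loop([{"kind": "door", "open": True, "hour": 23}],
         {"door": EchoActuator(), "pager": EchoActuator()})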

COLLABORATION

The collaboration subsystem uses the knowledge access subsystem and the automation subsystem to support collaboration. The collaboration subsystem adds to the "semantic web" of the knowledge access subsystem by recording the context of human-to-human interactions. It informs the automation and knowledge access subsystems when we are engaged in a collaborative task so that the responses of these subsystems can be tailored appropriately to all those participating in the task.

MAINTAINING COLLABORATION CONTEXT

The collaboration subsystem uses the individualized knowledge access subsystem to represent and acquire information about human interactions, for example, by using the vision subsystem to determine who is present at a discussion and to observe physical gestures, by using the spoken language subsystem to track what people say to each other, and by observing human interactions with software applications. The collaboration subsystem remembers how a group arranges its workspace, and it creates virtual work places for distributed groups. It maintains the context of each collaborative group in an individualized knowledge database, so that it can be recalled to continue the discussion at a future time or in another place. Automated observers track features of interest to the collaboration and add to the knowledge database. Semantic links in the database maintain the history of the discussion and identify issues, alternative courses of action, arguments for and against each alternative, and resolutions to pursue particular alternatives. Human input helps guide the indexing process, by identifying critical decisions and linking them to the rationale behind them.
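
A minimal sketch of the kind of record such semantic links could form (issues, alternatives, arguments, resolutions); the schema is an assumption for illustration only.

    class DiscussionRecord:
        def __init__(self):
            self.nodes = []     # (id, type, text)
            self.links = []     # (from_id, relation, to_id)

        def add(self, node_type, text):
            node_id = len(self.nodes)
            self.nodes.append((node_id, node_type, text))
            return node_id

        def link(self, src, relation, dst):
            self.links.append((src, relation, dst))

    rec = DiscussionRecord()
    issue = rec.add("issue", "Which wireless protocol for the demo room?")
    alt = rec.add("alternative", "Use the building-wide protocol")
    pro = rec.add("argument-for", "Lower power draw on the H21s")
    rec.link(alt, "addresses", issue)
    rec.link(pro, "supports", alt)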

AUTOMATING COLLABORATIVE TASKS

The collaboration subsystem uses the automation subsystem, together with Bayesian techniques for analysis and knowledge-based techniques for process management, to act as a coordinator and mediate interactions among members of a collaborative team. It knows the interests, organizational roles, and skills of all team members, and it understands the application domain within which the team functions. For example, it tracks action items within the group and dependencies with other groups, retrieving relevant information and bringing it to the attention of the most appropriate individuals. The collaboration system plays the role of an active participant, noticing tasks that need to be undertaken, noticing when information required for those tasks has been developed, and making conclusions when appropriate.







SOFTWARE TECHNOLOGIES

Project Oxygen's software architecture provides mechanisms for
• Building applications using composable, distributed components,
• Customizing, adapting, and altering component behavior,
• Replacing components, at different degrees of granularity, in a consistent fashion,
• Person-centric, rather than device-centric, security, and
• Disconnected operation and nomadic code.
Oxygen's software architecture relies heavily on abstraction to support change through adaptation and customization, on specification to support components that use these abstractions, and on persistent object stores with transactional semantics to provide operational support for change.

ABSTRACTION
Computations are modular, as is storage. Abstractions characterize components that carry out computations and objects used in computations. In Oxygen, abstractions support the use of adaptable components and objects by providing

• Application access to components traditionally hidden beneath intervening layers of software, so as to observe and influence their behavior.
• Intent-based interfaces, not just syntax or address-based interfaces, so as to facilitate component and object use, adjustment, replacement, and upgrade.
• Stream-oriented interfaces that treat speech, vision, and sensor data as first-class objects, so as to enable compilers to manage low-level pipelining concurrency and multithreaded programs to adjust their behavior correctly at runtime in response to changes in the number of streams or the interactions among them.
• Constraint and event abstractions, which separate computation from control, trigger what is processed when, and provide flexibility for modifying behavior at runtime without compromising system integrity.
• Cutpoints, so as to provide safe fallbacks and to enable "eternal computation".

SPECIFICATIONS

Specifications make abstractions explicit, exposing features to other system components. In Oxygen, specifications support adaptation and change by providing information about the following (a brief sketch follows the list):
• system configurations, to determine what modules and capabilities are available locally,
• module repositories, to provide code over the network for installation on handheld and other devices,
• module dependencies, to support complete and consistent installations or upgrades,
• module capabilities, to support other components and applications in scripting their use, and
• module behavior, to support their safe use through a combination of static and runtime checks.
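
A brief sketch, assuming invented field names rather than a defined Oxygen format, of a module specification and a dependency check before installation on a handheld:

    SPEC = {
        "name": "speech-lite",
        "version": "1.2",
        "requires": ["audio-driver>=2.0", "lang-model-en"],
        "capabilities": ["recognize", "synthesize"],
    }

    def installable(spec, installed):
        """installed maps module name -> version string.
        Presence check only; version comparison is omitted in this sketch."""
        for requirement in spec["requires"]:
            name = requirement.split(">=")[0]
            if name not in installed:
                return False, f"missing {name}"
        return True, "ok"

    print(installable(SPEC, {"audio-driver": "2.1", "lang-model-en": "1.0"}))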



PERSISTENT OBJECT STORE WITH TRANSACTIONAL SEMANTICS

Code, data objects, and specifications reside in a common object-oriented store, which supports all Oxygen technologies (i.e., user, perceptual, system, and device technologies). Object-orientation helps maintain the integrity of the store by restricting updates to those performed by methods in the store. The store has transactional semantics, which enables concurrent access, rollback and recovery, and consistent updates to modules and data. It also operates efficiently, using techniques such as optimistic concurrency, pre-fetching, and lazy updates and garbage collection, which defer the costs of modifying the store as long as possible or until there is time to spare.
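
A toy in-memory model, not the actual Oxygen store, showing what transactional semantics buy: staged changes either commit atomically or roll back on failure.

    import copy

    class Store:
        def __init__(self):
            self.objects = {}

        def transaction(self):
            return Transaction(self)

    class Transaction:
        def __init__(self, store):
            self.store = store
            self.snapshot = copy.deepcopy(store.objects)

        def __enter__(self):
            return self.store.objects

        def __exit__(self, exc_type, exc, tb):
            if exc_type is not None:
                self.store.objects = self.snapshot   # roll back on error
            return False                             # re-raise any exception

    store = Store()
    try:
        with store.transaction() as objects:
            objects["profile/alice"] = {"language": "en"}
            raise RuntimeError("simulated failure")
    except RuntimeError:
        pass
    print(store.objects)   # {} - the failed update was rolled back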



HOW DOES OXYGEN WORK







[Figure: H21, N21, and E21 communications]






PERCEPTUAL TECHNOLOGIES

SPEECH

The spoken language subsystem provides a number of limited-domain interfaces, as well as mechanisms for users to navigate effortlessly from one domain to another. Thus, for example, a user can inquire about flights and hotel information when planning a trip, then switch seamlessly to obtaining weather and tourist information. The spoken language subsystem stitches together a set of useful domains, thereby providing a virtual, broad-domain quilt that satisfies the needs of many users most of the time. Although the system can interact with users in real-time, users can also delegate tasks for the system to perform offline.

The spoken language subsystem is an integral part of Oxygen's infrastructure, not just a set of applications or external interfaces. Four components, with well-defined interfaces, interact with each other and with Oxygen's device, network, and knowledge access technologies to provide real-time conversational capabilities.


SPEECH RECOGNITION

The speech recognition component converts the user's speech into a sentence of distinct words, by matching acoustic signals against a library of phonemes, the irreducible units of sound that make up a word. The component delivers a ranked list of candidate sentences, either to the language-understanding component or directly to an application. This component uses acoustic processing (e.g., embedded microphone arrays), visual cues, and application-supplied vocabularies to improve its performance.
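
A toy illustration of the ranked candidate list and of re-scoring it with an application-supplied vocabulary; the scores and words are made up, since real recognition works on acoustic models rather than string lists.

    candidates = [
        ("turn on the light", 0.61),
        ("turn on the night", 0.58),
        ("burn on the light", 0.22),
    ]

    APP_VOCABULARY = {"turn", "on", "off", "the", "light", "fan"}

    def rescore(candidates, vocabulary, bonus=0.1):
        # Favour candidates whose words the application actually expects.
        rescored = []
        for text, score in candidates:
            words = text.split()
            in_vocab = sum(word in vocabulary for word in words) / len(words)
            rescored.append((text, score + bonus * in_vocab))
        return sorted(rescored, key=lambda pair: pair[1], reverse=True)

    print(rescore(candidates, APP_VOCABULARY)[0][0])   # "turn on the light"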

LANGUAGE UNDERSTANDING

The language-understanding component breaks down recognized sequences of words grammatically, and it systematically represents their meaning. The component is easy to customize, thereby easing integration into applications. It generates limited-domain vocabularies and grammars from application-supplied examples, and it uses these vocabularies and grammars to transform spoken input into a stream of commands for delivery to the application. It also improves language understanding by listening throughout a conversation, not just to explicit commands, and remembering what has been said.

Lite speech systems, with user-defined vocabularies and actions, can be tailored quickly to specific applications and integrated with other parts of the Oxygen system in a modular fashion.
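
A minimal sketch of such a lite interface, assuming a hypothetical phrase-pattern table: the application supplies a small vocabulary of phrases mapped to actions, and anything outside the domain is rejected.

    import re

    ACTIONS = {
        r"close the (?P<what>door|garage)":   lambda what: f"closing {what}",
        r"set temperature to (?P<deg>\d+)":   lambda deg: f"target {deg} C",
    }

    def dispatch(utterance):
        for pattern, action in ACTIONS.items():
            match = re.fullmatch(pattern, utterance.strip().lower())
            if match:
                return action(**match.groupdict())
        return "not understood in this domain"

    print(dispatch("Close the garage"))        # closing garage
    print(dispatch("Set temperature to 21"))   # target 21 C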

LANGUAGE GENERATION

The language generation component builds sentences that present application-generated data in the user's preferred language.

SPEECH SYNTHESIS

A commercial speech synthesizer converts sentences, obtained either from the language generation component or directly from the application, into speech.

VISION

The visual processing system contains visual perception and visual rendering subsystems. The visual perception subsystem recognizes and classifies objects and actions in still and video images. It augments the spoken language subsystem, for example, by tracking the direction of participants' gaze to determine what or whom they are looking at during a conversation, thereby improving the overall quality of user interaction. The visual rendering subsystem enables scenes and actions to be reconstructed in three dimensions from a small number of sample images without an intermediate 3D model. It can be used to provide macroscopic views of application-supplied data.

Like the spoken language subsystem, the visual subsystem is an integral part of Oxygen's infrastructure. Its components have well-defined interfaces, which enable them to interact with each other and with Oxygen's device, network, and knowledge access technologies. Like lite speech systems, lite vision systems provide user-defined visual recognition, for example, of faces and handwriting.

OBJECT RECOGNITION

A trainable object recognition component automatically learns to detect limited-domain objects (e.g., people or different kinds of vehicles) in unconstrained scenes using a supervised learning technology. This learning technology generates domain models from as little information as one or two sample images, either supplied by applications or acquired without calibration during operation. The component recognizes objects even if they are new to the system or move freely in an arbitrary setting against an arbitrary background. As people do, it adapts to objects, their physical characteristics, and their actions, thereby learning to improve object-specific performance over time.

For high-security transactions, where face recognition is not a reliable solution, a vision-based biometrics approach (e.g., fingerprint recognition) integrates sensors in handheld devices transparently with the Oxygen privacy and security environment to obtain cryptographic keys directly from biometrics measurements.

ACTIVITY MONITORING AND CLASSIFICATION

An unobtrusive, embedded vision component observes and tracks moving objects in its field of view. It calibrates itself automatically, using tracking data obtained from an array of cameras, to learn relationships among nearby sensors, create rough site models, categorize activities in a variety of ways, and recognize unusual events.


CHALLENGES

To support highly dynamic and varied human activities, the Oxygen system must master many technical challenges. It must be

pervasive: it must be everywhere, with every portal reaching into the same information base;
embedded: it must live in our world, sensing and affecting it;
nomadic: it must allow users and computations to move around freely, according to their needs;
adaptable: it must provide flexibility and spontaneity, in response to changes in user requirements and operating conditions;
powerful, yet efficient: it must free itself from constraints imposed by bounded hardware resources, addressing instead system constraints imposed by user demands and available power or communication bandwidth;
intentional: it must enable people to name services and software objects by intent, for example, "the nearest printer," as opposed to by address;
eternal: it must never shut down or reboot; components may come and go in response to demand, errors, and upgrades, but Oxygen as a whole must be available all the time.


CONCLUSION

Widespread use of Oxygen and its advanced technologies will yield a profound leap in human productivity, one even more revolutionary than the move from mainframes to desktops. By enabling people to use spoken and visual cues to automate routine tasks, access knowledge, and collaborate with others anywhere, anytime, Oxygen stands to significantly amplify human capabilities throughout the world.




REFERENCES


1. IEEE Spectrum, March 2002
2. www.lcs.mit.edu
3. www.ai.mit.edu
4. www.global.acer.com
5. www.deltaca.com
6. www.hp.com
7. www.nokia.com
8. www.research.philips.com




CONTENTS


1. THE APPROACH
2. SYSTEM TECHNOLOGIES
3. USER TECHNOLOGIES
4. SOFTWARE TECHNOLOGIES
5. HOW DOES OXYGEN WORK
6. PERCEPTUAL TECHNOLOGIES
7. CHALLENGES
8. CONCLUSION
9. REFERENCES




ACKNOWLEDGEMENT

I extend my sincere gratitude to Prof. P. Sukumaran, Head of Department, for sharing his invaluable knowledge and wonderful technical guidance.

I express my thanks to Mr. Muhammed Kutty, our group tutor, and to our staff advisor, Ms. Biji Paul, for their kind co-operation and guidance in preparing and presenting this seminar.

I also thank all the other faculty members of the AEI department and my friends for their help and support.