Artificial Passenger: Download The Seminar Report
#1

The AP is an artificial intelligence-based companion that will be
resident in software and chips embedded in the automobile dashboard. The heart of the
system is a conversation planner that holds a profile of you, including details of your
interests and profession.

A microphone picks up your answer and breaks it down into separate words with
speech-recognition software. A camera built into the dashboard also tracks your lip
movements to improve the accuracy of the speech recognition. A voice analyzer then
looks for signs of tiredness by checking to see if the answer matches your profile. Slow
responses and a lack of intonation are signs of fatigue.

This research suggests that we can make predictions about various aspects of driver
performance based on what we glean from the movements of a driver's eyes, and that a
system can eventually be developed to capture this data and use it to alert people when
their driving has become significantly impaired by fatigue.

The natural dialog car system analyzes a driver's answer and the
contents of the answer together with his voice patterns to determine if he is alert while
driving. The system warns the driver or changes the topic of conversation if the system
determines that the driver is about to fall asleep. The system may also detect whether a
driver is affected by alcohol or drugs.


Download The Report
http://rapidsharefiles/215576210/ARTIFICIAL_PASSENGER.rar
Reply
#2
[attachment=2478]
[attachment=2479]


ABSTRACT
The AP is an artificial intelligence-based companion that will be resident in
software and chips embedded in the automobile dashboard. The heart of the system is a
conversation planner that holds a profile of you, including details of your interests and
profession.
A microphone picks up your answer and breaks it down into separate words with
speech-recognition software. A camera built into the dashboard also tracks your lip
movements to improve the accuracy of the speech recognition. A voice analyzer then
looks for signs of tiredness by checking to see if the answer matches your profile. Slow
responses and a lack of intonation are signs of fatigue.
This research suggests that we can make predictions about various aspects of driver
performance based on what we glean from the movements of a driver's eyes and that a
system can eventually be developed to capture this data and use it to alert people when
their driving has become significantly impaired by fatigue.
CHAPTER 1
Introduction
The AP is an artificial intelligence-based companion that will be
resident in software and chips embedded in the automobile dashboard. The heart of the
system is a conversation planner that holds a profile of you, including details of your
interests and profession.
A microphone picks up your answer and breaks it down into separate words with
speech-recognition software. A camera built into the dashboard also tracks your lip
movements to improve the accuracy of the speech recognition. A voice analyzer then
looks for signs of tiredness by checking to see if the answer matches your profile. Slow
responses and a lack of intonation are signs of fatigue.
This research suggests that we can make predictions about various aspects of driver
performance based on what we glean from the movements of a driver's eyes and that a
system can eventually be developed to capture this data and use it to alert people when
their driving has become significantly impaired by fatigue.
The natural dialog car system analyzes a driver's answer and the
contents of the answer together with his voice patterns to determine if he is alert while
driving. The system warns the driver or changes the topic of conversation if the system
determines that the driver is about to fall asleep. The system may also detect whether a
driver is affected by alcohol or drugs.

CHAPTER 2
2.1 What is an artificial passenger?
• Natural language e-companion.
• Sleep preventive device in cars to overcome drowsiness.
• Life safety system.
2.2 What does it do?
• Detects alarm conditions through sensors.
• Broadcasts pre-stored voice messages over the speakers.
• Captures images of the driver.
CHAPTER 3
3.1 Field of invention
The present invention relates to a system and method for determining three-dimensional head pose, eye gaze direction, eye closure amount, blink detection and flexible feature detection on the human face using image analysis from multiple video sources. Additionally, the invention relates to systems and methods that make decisions using passive video analysis of a human head and face. These methods can be used in areas of application such as human-performance measurement, operator monitoring and interactive multi-media.
3.2 Background of the invention
Early techniques for determining head-pose used devices that were fixed to the head of the subject to be tracked. For example, reflective devices were attached to the subject's head and, using a light source to illuminate the reflectors, the reflector locations were determined. As such reflective devices are more easily tracked than the head itself, the problem of tracking head-pose was simplified greatly.
Virtual-reality headsets are another example of the subject wearing a device for the purpose of head-pose tracking. These devices typically rely on a directional antenna and radio-frequency sources, or directional magnetic measurement, to determine head-pose.
Wearing a device of any sort is clearly a disadvantage, as the user's competence and acceptance in wearing the device then directly affects the reliability of the system. Devices are generally intrusive and will affect a user's behaviour, preventing natural motion or operation.
Structured light techniques that project patterns of light onto the face in order to determine head-pose are also known. The light patterns are structured to facilitate the recovery of 3D information using simple image processing. However, the technique is prone to error in conditions of lighting variation and is therefore unsuitable for use under natural lighting conditions.
3.3 Examples of systems that use this style of technique
Examples of systems that use this style of technique can be seen in "A Robust Model-Based Approach for 3D Head Tracking in Video Sequences" by Marius Malciu and Francoise Preteux, and "Robust 3D Head Tracking Under Partial Occlusion" by Ye Zhang and Chandra Kambhamettu, both from the Conference on Automatic Face and Gesture Recognition 2000, Grenoble, France.
CHAPTER 4
4.1 Devices that are used in AP
The main devices that are used in this artificial passenger are:
1) Eye tracker.
2) Voice recognizer or speech recognizer.
4.2 How does eye tracking work?
Collecting eye movement data requires hardware and software specifically designed to
perform this function. Eye-tracking hardware is either mounted on a user's head or mounted
remotely. Both systems measure the corneal reflection of an infrared light emitting diode (LED),
which illuminates and generates a reflection off the surface of the eye. This action causes the
pupil to appear as a bright disk in contrast to the surrounding iris and creates a small glint
underneath the pupil. It is this glint that head-mounted and remote systems use for calibration and tracking.
4.2.1 Hardware: Head-mounted and remote systems
The difference between the head-mounted and remote eye systems is how the eye
tracker collects eye movement data. Head-mounted systems, since they are fixed on a user's head and therefore allow for head movement, use multiple data points to record eye movement.
To differentiate eye movement from head movement, these systems measure the pupil glint from
multiple angles. Since the unit is attached to the head, a person can move about when operating
a car or flying a plane, for example.
For instance, human factors researchers have used head-mounted eye-tracking
systems to study pilots' eye movements as they used cockpit controls and instruments to land
airplanes (Fitts, Jones, and Milton 1950). These findings led to cockpit redesigns that improved
usability and significantly reduced the likelihood of incidents caused by human error. More
recently, head-mounted eye-tracking systems have been used by technical communicators to
study the visual relationship between personal digital assistant (PDA) screen layout and eye
movement.
Remote systems, by contrast, measure the orientation of the eye relative to a fixed unit such as a camera mounted underneath a computer monitor. Because remote units do not
measure the pupil glint from multiple angles, a person's head must remain almost motionless
during task performance. Although head restriction may seem like a significant hurdle to
overcome, Jacob and Karn (2003) attribute the popularity of remote systems in usability to their
relatively low cost and high durability compared with head-mounted systems.
Since remote systems are usually fixed to a computer screen, they are often used
for studying onscreen eye motion. For example, cognitive psychologists have used remote eye-
tracking systems to study the relationship between cognitive scanning styles and search
strategies (Crosby and Peterson 1991). Such eye-tracking studies have been used to develop
and test existing visual search cognitive models. More recently, human-computer interaction
(HCI) researchers have used remote systems to study computer and Web interface usability.
Through recent advances in remote eye-tracking equipment, a range of head
movement can now be accommodated. For instance, eye-tracking hardware manufacturer Tobii
Technology now offers a remote system that uses several smaller fixed sensors placed in the
computer monitor frame so that the glint underneath the pupil is measured from multiple angles.
This advance will eliminate the need for participants in eye-tracking studies to remain perfectly
still during testing, making it possible for longer studies to be conducted using remote systems.
4.2.2 Software: Data collection, analysis, and representation
Data collection and analysis is handled by eye-tracking software. Although some
software is more sophisticated than others, all share common features. Software catalogs eye-
tracking data in one of two ways. In the first, data are stored in video format. ERICA's Eye
Gaze[TM] software, for instance, uses a small red x to represent eye movement that is useful for
observing such movement in relation to external factors such as user verbalizations. In the other,
data are stored as a series of x/y coordinates related to specific grid points on the computer
screen.
Data can be organized in various ways (by task or participant, for example) and broken down into fixations and saccades that can be visually represented onscreen. Fixations, which typically last between 250 and 500 milliseconds, occur when the eye is focused on a particular point on a screen. Fixations are most commonly measured according to duration and frequency. If, for instance, a banner ad on a Web page receives lengthy and numerous fixations, it is reasonable to conclude that the ad is successful in attracting attention. Saccades, which usually last between 25 and 100 milliseconds, move the eye from one fixation to the next fixation. When saccades and fixations are sequentially organized, they produce scanpaths. If, for
example, a company would like to know why people are not clicking on an important page link in what
the company feels is a prominent part of the page, a scanpath analysis would show how people
visually progress through the page. In this case, such an analysis might show that the page link is
poorly placed because it is located on a part of the screen that does not receive much eye traffic.
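To make the fixation/saccade distinction above concrete, here is a minimal Python sketch of a dispersion-threshold (I-DT style) pass over logged (x, y) gaze samples. The sampling rate, dispersion limit and minimum duration are illustrative assumptions (the 250 ms minimum matches the fixation range quoted above); commercial eye-tracking software ships its own, more robust classifiers.

SAMPLE_HZ = 60          # assumed tracker sampling rate
MAX_DISPERSION = 25     # pixels: maximum spread of points within one fixation
MIN_DURATION_MS = 250   # fixations typically last 250-500 ms

def dispersion(points):
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def find_fixations(samples):
    """samples: list of (x, y) screen coordinates, one per tracker tick.
    Returns (start, end, centroid) triples; gaps between fixations are saccades."""
    min_len = int(MIN_DURATION_MS * SAMPLE_HZ / 1000)
    fixations, i = [], 0
    while i + min_len <= len(samples):
        if dispersion(samples[i:i + min_len]) <= MAX_DISPERSION:
            j = i + min_len
            # grow the window while the points stay tightly clustered
            while j < len(samples) and dispersion(samples[i:j + 1]) <= MAX_DISPERSION:
                j += 1
            cx = sum(p[0] for p in samples[i:j]) / (j - i)
            cy = sum(p[1] for p in samples[i:j]) / (j - i)
            fixations.append((i, j, (cx, cy)))
            i = j
        else:
            i += 1
    return fixations

Sequentially ordered fixation centroids from this function are exactly the scanpaths described above.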

Lake Fong, Post-Gazette
Former CMU professor Richard Grace is shown on a TV monitor while testing a DD 850, a dashboard-mounted infrared camera that can detect when a driver is starting to fall asleep. The device will beep to alarm a sleepy driver.
CHAPTER 5
5.1 Algorithm for monitoring head/eye motion for driver alertness with one camera
Visual methods and systems are described for detecting the alertness and
vigilance of persons under conditions of fatigue, lack of sleep, and exposure to mind-altering
substances such as alcohol and drugs. In particular, the invention has
particular applications for truck drivers, bus drivers, train operators, pilots, watercraft
controllers and stationary heavy equipment operators, and for students and employees,
during either daytime or nighttime conditions. The invention robustly tracks a person's
head and facial features with a single on-board camera in a fully automatic system
that can initialize automatically, reinitialize when it needs to, and provide
outputs in real time. The system can classify rotation in all viewing directions, detect
eye/mouth occlusion, detect eye blinking, and recover the 3D gaze of the eyes. In
addition, the system is able to track both through occlusion like eye blinking and also
through occlusion like rotation. Outputs can be visual and sound alarms to the driver
directly. Additional outputs can slow down the vehicle and/or cause the vehicle to
come to a full stop. Further outputs can send data on driver, operator, student and
employee vigilance to remote locales as needed for alarms and for initiating other actions.
5.2 REPRESENTATIVE IMAGE:
This invention relates to visual monitoring systems, and in particular to
systems and methods that use digital cameras to monitor head motion and eye motion
with computer vision algorithms, for monitoring driver alertness and vigilance for drivers
of vehicles, trucks, buses, planes, trains and boats, and operators of movable and
stationary heavy equipment, against driver fatigue and driver loss of sleep,
and effects from alcohol and drugs, as well as for monitoring students and employees
during educational, training and workstation activities.
A driver alertness system comprising:
(a) a single camera within a vehicle aimed at a head region of a driver;
(b) means for simultaneously monitoring head rotation, yawning and full eye occlusion of
the driver with said camera, the head rotation including nodding up and down, and
moving left to right, and the full eye occlusion including eye blinking and complete eye
closure, the monitoring means including means for determining left-to-right rotation and
up-and-down nodding from examining approximately 10 frames out of approximately
20 frames; and
(c) alarm means for activating an alarm in real time when a threshold condition in the
monitoring means has been reached, whereby the driver is alerted into driver vigilance.
The monitoring means includes: means for determining gaze direction of the
driver; a detected condition selected from at least one of lack of sleep of the driver,
driver fatigue, and alcohol and drug effects on the driver; initializing means to find the
face of the driver; grabbing means to grab a frame; tracking means to track the head of the
driver; measuring means to measure rotation and nodding of the driver; detecting means
to detect eye blinking and eye closures of the driver; and yawning means to detect yawning of
the driver.
5.3 Method of detecting driver vigilance comprises the following steps
1) Aiming a single camera at the head of a driver of a vehicle, and detecting
the frequency of up-and-down nodding and left-to-right rotations of the head of the
driver within a selected time period with the camera;
2) Determining the frequency of eye blinkings and eye closings of the driver within
the selected time period with the camera;
3) Determining the left-to-right head rotations and the up-and-down head nodding
from examining approximately 10 frames out of approximately 20 frames;
4) Determining the frequency of yawning of the driver within the selected time
period with the camera;
5) Generating an alarm signal in real time if the frequency of the up-and-down
nodding, the left-to-right rotations, the eye blinkings, the eye closings, or the
yawning exceeds a selected threshold value.
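As an illustration of step 5, the Python sketch below fuses per-window event counts into an alarm decision. The event names and numeric limits are assumptions chosen for the example; the method above only requires that each frequency be compared against a selected threshold value.

# Hypothetical per-window limits; the method text only says each
# frequency is compared against "a selected threshold value".
THRESHOLDS = {
    "nods": 3,
    "rotations": 5,
    "blinks": 15,
    "closures": 2,
    "yawns": 2,
}

def check_vigilance(counts):
    """counts: event frequencies observed in the selected time period.
    Returns the behaviours whose frequency exceeded their threshold."""
    return [name for name, limit in THRESHOLDS.items()
            if counts.get(name, 0) > limit]

tripped = check_vigilance({"nods": 4, "blinks": 9, "yawns": 1})
if tripped:
    print("ALARM: driver vigilance low:", ", ".join(tripped))  # step 5 alarm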
CHAPTER 6
Detailed description of preferred embodiments
Before explaining the disclosed embodiment of the present invention in detail, it is to be
understood that the invention is not limited in its application to the details of the
particular arrangement shown, since the invention is capable of other embodiments. Also,
the terminology used herein is for the purpose of description and not of limitation.
The novel invention can analyze video sequences of a driver to determine
when the driver is not paying adequate attention to the road. The invention collects data
with a single camera that can be placed on the car dashboard. The system can
focus on rotation of the head and eye blinking, two important cues for determining driver
alertness, to determine the driver's vigilance level. The head tracker consists
of tracking the lip corners, eye centers, and sides of the face. Automatic initialization of
all features is achieved using color predicates and a connected components algorithm. A
connected component algorithm is one in which every element in the component has a
given property. Each element in the component is adjacent to another element either by
being to the left, right, above, or below. Other types of connectivity can also be allowed.
An example of a connected component algorithm follows: If we are given various land
masses, then one could say that each land mass is a connected component because the
water separates the land masses. However, if a bridge was built between two land masses
then the bridge would connect them into one land mass. So a connected component is
one in which every element in the component is accessible from any other element in the
component.
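The sketch below is a standard 4-connected flood-fill labelling of a binary mask, the same idea as the land-mass analogy above. It is illustrative only; the text does not specify the invention's exact implementation.

def connected_components(mask):
    """mask: 2D list of 0/1 values. Returns a label grid in which every
    pixel of one component shares a label (0 = background)."""
    rows, cols = len(mask), len(mask[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not labels[r][c]:
                current += 1
                stack = [(r, c)]
                labels[r][c] = current
                while stack:
                    y, x = stack.pop()
                    # neighbours to the left, right, above and below only
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = current
                            stack.append((ny, nx))
    return labels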
In this invention, occlusion of the eyes and mouth often occurs when
the head rotates or the eyes close, so the system tracks through such occlusion and can
automatically reinitialize when it mis-tracks. Also, the system performs blink detection
and determines 3-D direction of gaze. These are necessary components for monitoring
driver alertness.
The novel method and system can track through local lip motion like yawning, and
presents a robust tracking method of the face, and in particular, the lips, and can be
extended to track during yawning or opening of the mouth.
A general overview of the novel method and system for daytime conditions is
given below; it can include the following steps:
1. Automatically initialize the lips and eyes using color predicates and connected
components.
2. Track the lip corners using the dark line between the lips and the color predicate, even
through large mouth movement like yawning.
3. Track the eyes using affine motion and color predicates.
4. Construct a bounding box of the head.
5. Determine rotation using distances between the eye and lip feature points and the sides
of the face.
6. Determine eye blinking and eye closing using the number and intensity of pixels in the
eye region.
7. Determine the driver's vigilance level using all acquired information.
The above steps can be modified for night time conditions.
The novel invention can provide quick, substantially real-time monitoring responses.
For example, driver vigilance can be determined within as few as approximately 20
frames, which would be within approximately 2/3 of a second under some
conditions (when the camera is taking pictures at a rate of approximately 30 frames per
second). Prior art systems usually require substantial amounts of time, such as at least
400 frames, which can take in excess of 13 seconds if the camera is taking pictures at
approximately 30 frames per second. Thus, the invention is vastly superior to prior art
systems.
The video sequences throughout the invention were acquired using a video
camera placed on a car dashboard. The system runs on an UltraSparc using 320×240 size
images with 30 fps video.
The system will first determine day or night status. It is nighttime if, for example, a
camera clock time period is set to be between 18:00 and 07:00 hours.
Alternatively, day or night status can be checked by whether the driver has his nighttime
driving headlights on, by wiring the system to the headlight controls of the vehicle. Additionally,
night status can be set if the intensity of the image is below a threshold, in which
case it must be dark. For example, if the intensity of the image (intensity is defined
in many ways; one such way is the average of all RGB (Red, Green, Blue) values) is
below approximately 40, then the nighttime method could be used. The possible range of
values for the average RGB value is 0 to approximately 255, with the units being
arbitrarily selected for the scale.
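A small Python sketch of the three day/night tests just described follows. The frame layout (rows of (R, G, B) tuples) and the helper names are assumptions; the 18:00-07:00 window and the threshold of approximately 40 come from the text above.

NIGHT_START, NIGHT_END = 18, 7      # 18:00 to 07:00 hours
INTENSITY_THRESHOLD = 40            # average RGB below this => dark

def average_intensity(frame):
    """Intensity here is the average of all R, G, B values (one of
    several possible definitions, as the text notes). Range 0..255."""
    total = count = 0
    for row in frame:
        for (r, g, b) in row:
            total += r + g + b
            count += 3
    return total / count

def is_nighttime(hour, headlights_on, frame):
    if hour >= NIGHT_START or hour < NIGHT_END:
        return True                  # camera clock test
    if headlights_on:
        return True                  # wired to the headlight controls
    return average_intensity(frame) < INTENSITY_THRESHOLD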

If daytime is determined, then the left side of the flow chart depicted in
the figure is followed: first, initialize to find the face. A frame is grabbed from the video
output. Tracking of the feature points is performed in steps. Measurement of the rotation
and orientation of the face occurs. Eye occlusion such as blinking and eye closure is
examined. Whether yawning occurs is determined. The rotation, eye occlusion and yawning
information is used to measure the driver's vigilance.
If nighttime is determined, then the right flow chart series of steps
occurs: first, initialize to find the face. Next, a frame is grabbed from the video
output. Tracking of the lip corners and eye pupils is performed. The rotation and
orientation of the face are measured. The feature points are corrected if necessary. Eye
occlusion such as blinking and eye closure is examined. Whether yawning is occurring is
determined. The rotation, eye occlusion and yawning information is used to measure the
driver's vigilance.
DAYTIME CONDITIONS
For the daytime scenario, initialization is performed to find the face feature
points. A frame is taken from a video stream of frames. Tracking is then done in stages.
Lip tracking is done. There are multiple stages in the eye tracker. Stage 1 and Stage 2
operate independently. A bounding box around the face is constructed and then the
facial orientation can be computed. Eye occlusion is determined. Yawning is detected.
The rotation, eye occlusion, and yawning information is fused to determine the vigilance
level of the operator. This is repeated, which allows the method and system to grab
another frame from the video stream of frames and continue again.
The system initializes itself. The lip and eye colors (RGB: Red, Green, Blue) are
marked in the image offline. The colors in the image are marked so that they can be
recognized by the system. Marking the lip pixels in the image is important; all other
pixel values in the image are considered unimportant. Each pixel has a Red (R),
Green (G), and Blue (B) component. For a pixel that is marked as important, go to its
location in the RGB array, indexing on the R, G, B components. This array location can
be incremented by equation (1):
exp(-1.0*( j*j+k*k+i*i )/(2*sigma*sigma)); (1)
where: sigma is approximately 2;
j refers to the component in the y direction and can go from approximately -2 to approximately 2;
k refers to the component in the z direction and can go from approximately -2 to approximately 2;
i refers to the component in the x direction and can go from approximately -2 to approximately 2.
Thus, simply run the offsets in the x, y, and z directions from approximately -2 to
approximately +2 pixels, using the above function. As an example running through
equation (1), given that sigma is 2, let i=0, j=1, and k=-1; then the function evaluates to
exp(-1.0*(1+1+0)/(2*2*2)) = exp(-1*2/8) = 0.77880, where exp is the standard
exponential function (e^x).
Equation (1) is run through for every pixel that is marked as important. If a color, or pixel
value, is marked as important multiple times, its new value can be added to the current
value. Pixel values that are marked as unimportant can decrease the value of the RGB
indexed location via equation (2) as follows:
exp(-1.0*( j*j+k*k+i*i )/(2*(sigma-1)*(sigma-1))); (2)
where: sigma is approximately 2;
j refers to the component in the y direction and can go from approximately -2 to approximately 2;
k refers to the component in the z direction and can go from approximately -2 to approximately 2;
i refers to the component in the x direction and can go from approximately -2 to approximately 2.
Thus, simply run the offsets in the x, y, and z directions from approximately -2 to
approximately +2 pixels, using the above function. As an example running through
equation (2), given that sigma is 2, let i=0, j=1, and k=-1; then the function evaluates to
exp(-1.0*(1+1+0)/(2*1*1)) = exp(-1*2/2) = 0.36788, where exp is the standard
exponential function (e^x).
The values in the array which are above a threshold are marked as being one of the
specified colors. The values in the array below the threshold are marked as not being of
the specified color. An RGB (Red, Green, Blue) array of the lip colors is generated,
and the endpoints of the biggest lip-colored component are selected as the mouth corners.
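The two equations above amount to building a smoothed 3D histogram over RGB space. The Python sketch below trains such a color predicate using equations (1) and (2); the sparse dictionary standing in for the full 256x256x256 array, the bounds handling and the final threshold value are assumptions made for illustration.

import math
from collections import defaultdict

SIGMA = 2

def build_color_predicate(image, important, threshold=1.0):
    """Train the RGB lookup table of equations (1) and (2).
    image: 2D list of (r, g, b) pixels; important: same-shape 2D list of
    booleans marking lip (or skin) pixels. Returns the set of (r, g, b)
    values accepted as the target color."""
    table = defaultdict(float)
    for row, flags in zip(image, important):
        for (r, g, b), marked in zip(row, flags):
            # equation (1) spreads weight around important pixels,
            # equation (2) removes weight around unimportant ones
            denom = 2 * SIGMA * SIGMA if marked else 2 * (SIGMA - 1) * (SIGMA - 1)
            for i in range(-2, 3):
                for j in range(-2, 3):
                    for k in range(-2, 3):
                        key = (r + i, g + j, b + k)
                        w = math.exp(-1.0 * (i * i + j * j + k * k) / denom)
                        table[key] += w if marked else -w
    # values above the threshold are marked as the specified color
    return {rgb for rgb, v in table.items() if v > threshold}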
The driver's skin is marked as important, and all other pixel values in the image are
considered unimportant. Each pixel has an R, G, B component, so for a pixel that is
marked as important, go to its location in the RGB array, indexing on the R, G, B
components. Increment this array location by equation (1), given and explained above; it
is written here for convenience: exp(-1.0*(j*j+k*k+i*i)/(2*sigma*sigma)), where
sigma is 2. Run the offsets in the x, y, and z directions from approximately -2 to
approximately +2, using equation (1). Do this for every pixel that is marked as important.
If a color, or pixel value, is marked as important multiple times, its new value is added
to the current value.
Pixel values that are marked as unimportant decrease the value of the RGB
indexed location via equation (2), given and explained above, and written here for
convenience: exp(-1.0*(j*j+k*k+i*i)/(2*(sigma-1)*(sigma-1))). The values in the array which are
above a threshold are marked as being one of the specified colors. Another RGB array is
generated of the skin colors, and the largest non-skin components above the lips are
marked as the eyes. The program then starts looking above the lips in a vertical
manner until it finds two non-skin regions, which are between approximately 15 and
approximately 800 pixels in area. The marking of pixels can occur automatically by
considering the common colors of various skin/lip tones.
NIGHTTIME CONDITIONS
If it is nighttime, perform the following steps. To determine whether it is night, any of
three conditions can occur: the camera clock is between 18:00 and 07:00 hours, the
driver has his nighttime driving headlights on, or the intensity of the image is below a
threshold (in which case it must be dark); if so, use the nighttime algorithm steps.
The invention initializes the eyes by finding the bright spots with dark regions around them.
In the first two frames, the system finds the brightest pixels with dark regions around
them. These points are marked as the eye centers. In subsequent frames, these brightest
regions are referred to as the eye bright tracker estimates. If these estimates are too far
from the previous values, the old values are retained as the new eye location estimates. The
next frame is then grabbed.
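A minimal sketch of this bright-pupil initialization follows, assuming a grayscale frame and illustrative intensity and ring-offset values, none of which are specified in the text.

BRIGHT = 200   # assumed intensity for a candidate pupil pixel
DARK = 60      # assumed maximum intensity for the dark surround
RING = 3       # pixels: offset of the dark ring from the candidate

def find_eye_candidates(gray):
    """gray: 2D list of 0..255 intensities. Returns (row, col) points
    that are bright but whose four ring neighbours are all dark."""
    rows, cols = len(gray), len(gray[0])
    hits = []
    for r in range(RING, rows - RING):
        for c in range(RING, cols - RING):
            if gray[r][c] >= BRIGHT and all(
                gray[r + dr][c + dc] <= DARK
                for dr, dc in ((-RING, 0), (RING, 0), (0, -RING), (0, RING))
            ):
                hits.append((r, c))
    return hits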
The system runs two independent subsystems. Starting with the left subsystem,
first the dark pixel is located and tested to see if it is close enough to the previous eye
location. If these estimates are too far from the previous values, the system retains the
old values as the new eye location estimates. If the new estimates are close to the
previous values, then these new estimates are kept.

The second subsystem finds the image transform. This stage tries to find a
common function between two images in which the camera moved some amount. This
function would transform all the pixels in one image to the corresponding points in the
other image. This function is called an affine function; it has six parameters, and it is a
motion estimation equation.
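For reference, the six-parameter affine model mentioned here can be written out directly. Estimating the parameters from two frames (for example, by least squares over many point correspondences) is the substantial part, which this sketch deliberately leaves out.

def affine_transform(params, x, y):
    """params = (a, b, c, d, e, f): the six affine parameters.
    x' = a*x + b*y + c
    y' = d*x + e*y + f"""
    a, b, c, d, e, f = params
    return a * x + b * y + c, d * x + e * y + f

# Identity motion (camera did not move): points map to themselves.
assert affine_transform((1, 0, 0, 0, 1, 0), 10, 20) == (10, 20)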

CHAPTER 7

Other applications of the same method:
1) Cabins in airplanes.
2) Water craft such as boats.
3) Trains and subways.
SUMMARY
Summary of the invention

A primary objective of the invention is to provide a system and method for
monitoring driver alertness with a single camera focused on the face of the driver to
monitor for conditions of driver fatigue and lack of sleep.
A secondary objective of the invention is to provide a system and method for
monitoring driver alertness which operates in real time, allowing sufficient time to
avert an accident.
A third objective of the invention is to provide a system and method for
monitoring driver alertness that uses computer vision to monitor both the eyes and the
rotation of the driver's head through video sequences.
BIBLIOGRAPHY
[1] http://freepatentsonline4682348.html
[2] Mark Roth, Pittsburgh Post-Gazette.
[3] SAE Technical Paper Series, #942321, "Estimate of Driver's Alertness Level Using Fuzzy Method."
[4] Crosby and Peterson, 1991.
[5] New Scientist.
Reply
#3
please send seminar report........
Reply
#4
please send the seminar report
Reply
#5
hi
Reply
#6


[attachment=5177]


Artificial Passenger: Download The Seminar Report

Submitted By:-
JITHESH PT CUAHMCA009


The AP is an artificial intelligence-based companion that will be
resident in software and chips embedded in the automobile dashboard. The heart of the
system is a conversation planner that holds a profile of you, including details of your
interests and profession.

A microphone picks up your answer and breaks it down into separate words with
speech-recognition software. A camera built into the dashboard also tracks your lip
movements to improve the accuracy of the speech recognition. A voice analyzer then
looks for signs of tiredness by checking to see if the answer matches your profile. Slow
responses and a lack of intonation are signs of fatigue.

This research suggests that we can make predictions about various aspects of driver
performance based on what we glean from the movements of a driver's eyes and that a
system can eventually be developed to capture this data and use it to alert people when
their driving has become significantly impaired by fatigue.

The natural dialog car system analyzes a driver's answer and the
contents of the answer together with his voice patterns to determine if he is alert while
driving. The system warns the driver or changes the topic of conversation if the system
determines that the driver is about to fall asleep. The system may also detect whether a
driver is affected by alcohol or drugs.


Download The Report
http://rapidsharefiles/215576210/AR...SENGER.rar


Reference: http://studentbank.in/report-artificial-...z11ZFJIQfY
Reply
#7
pls give the detailed report of artificial passenger
Reply
#8
PLZ GIVE A DETAILED SEMINAR REPORT ABOUT ARTIFICIAL PASSENGER
Reply
#9
for more details on ARTIFICIAL PASSENGER, go through the following threads

http://studentbank.in/report-artificial-passenger
http://studentbank.in/report-an-artificial-passenger-ap
Reply
#10
hey, too much thanks to u.
u really did a great job.
try to keep it on.......
thank you
-
prafull
Reply
#11
Plz send me the complete report of this artificial passenger seminar topic
Reply
#12
Require Artificial intelligence complete documentation n abstract n pdf immediately
Reply
#13
[attachment=9784]
1. INTRODUCTION
Studies of road safety found that human error was the sole cause in more than half of all accidents. One of the reasons why humans commit so many errors lies in the inherent limitations of human information processing. With the increase in popularity of Telematics services in cars (like navigation, cellular telephone, internet access), there is more information that drivers need to process and more devices that drivers need to control, which might contribute to additional driving errors. This paper is devoted to a discussion of these and other aspects of driver safety. Telematics typically is any integrated use of telecommunications and informatics, also known as Information and Communications Technology (ICT). Artificial Intelligence is defined by Webster's as a branch of computer science dealing with the simulation of intelligent behavior in computers. It can also be defined as the capability of a machine to imitate intelligent human behavior. From that we can tell that any program that can strike up a conversation with a human (such as the artificial passenger) would be artificial intelligence.
The Artificial Passenger was introduced by Dimitry Kanevsky and Wlodek Zadrozny of IBM labs. IBM got the patent for this in 2001. The artificial passenger is a type of artificial intelligence.
Field of invention
The present invention relates to a system and method for determining three dimensional head pose, eye gaze direction, eye closure amount, blink detection and flexible feature detection on the human face using image analysis from multiple video sources.
Additionally, the invention relates to systems and methods that make decisions using passive video analysis of a human head and face. These methods can be used in areas of application such as human-performance measurement, operator monitoring and interactive multi-media.
Background of the invention
Early techniques for determining head-pose used devices that were fixed to the head of the subject to be tracked. For example, reflective devices were attached to the subject's head and, using a light source to illuminate the reflectors, the reflector locations were determined.
As such reflective devices are more easily tracked than the head itself, the problem of tracking head-pose was simplified greatly.
Virtual-reality headsets are another example of the subject wearing a device for the purpose of head-pose tracking. These devices typically rely on a directional antenna and radio-frequency sources, or directional magnetic measurement to determine head-pose.
Wearing a device of any sort is clearly a disadvantage, as the user's competence and acceptance to wearing the device then directly affects the reliability of the system. Devices are generally intrusive and will affect a user's behavior, preventing natural motion or operation.
Structured light techniques that project patterns of light onto the face in order to determine head-pose are also known. The light patterns are structured to facilitate the recovery of 3D information using simple image processing. However, the technique is prone to error in conditions of lighting variation and is therefore unsuitable for use under natural lighting conditions.
Summary of the Invention
The present invention provides apparatus and techniques for providing an alarm indication to an owner or driver of a vehicle to indicate potentially hazardous or undesirable conditions. An advantage of the present invention is that it is configured to monitor the environment of a vehicle and provide an alarm indication to an owner or driver of the vehicle regardless of the location of the owner or driver. Additionally, the present invention is configured to have the ability to take preventative and/or corrective actions with respect to the potentially hazardous or undesirable situation.
Accordingly, in a first aspect of the present invention, a situation controller for a vehicle is provided. The situation controller includes a processing device and an image monitor coupled to the processing device, for monitoring images associated with one or more items within the vehicle. The situation controller also includes a device for communicating a message relating to the one or more monitored items wherein the content of the message is determined by the processing device based at least in part on the one or more monitored items. Additionally, a controller coupled to the processing device, for controlling at least one function of the vehicle in response to the one or more monitored items within the vehicle, is included.
In a second aspect of the present invention, a camera system is combined with an artificial passenger system (also referred to herein as a “vehicle system situation controller” or “situation controller”) to monitor an environment of a vehicle and provide an alarm indication to the owner. The camera system identifies the position of keys, for example, and notifies the driver that he or she has left the keys in a particular spot in the vehicle. Thus, the present invention will warn the driver against accidentally locking the keys in the car.
In accordance with a third aspect of the present invention, the artificial passenger is connected to a temperature indicator to analyze the temperature in the vehicle. Thus, in combination with the camera, the artificial passenger is able to determine that a child or pet has been left in a vehicle that is beginning to get very hot or cold. If the temperature gets too hot or too cool inside the vehicle, the artificial passenger has several options including sending a message to the owner/driver, calling the owner's phone or beeper, calling the police, opening a window or a door, and sounding an alarm to get the attention of people walking by the vehicle (as well as allowing them to open the door to help the occupant). The artificial passenger is able to analyze the situation and execute a corrective action, which includes opening a window or a door to allow the temperature to moderate or to allow the child or pet to leave the vehicle, after the artificial passenger has notified the driver or authorities.
In a fourth aspect of the present invention, the artificial passenger is configured to analyze the situation to determine, for example, whether groceries were left in the vehicle. If the owner did not remove all of the groceries, the artificial passenger will call the owner and tell him or her that the groceries were left in the vehicle. The artificial passenger utilizes an odor detector or sensor as well as the camera to detect whether groceries were left in the vehicle.
In accordance with a fifth aspect of the invention, a communication system that interacts with the owner of the vehicle from a remote location is provided. The communication system utilizes, for example, the Internet and/or a global positioning system (GPS) to locate and communicate with the vehicle owner. Through the communication system, the owner can, for example, open a vehicle door remotely such that a person can enter the locked vehicle.
These and other objects, features and advantages of the present invention will become apparent from the following detailed description of illustrative embodiments, which is to be read in connection with the accompanying drawings.
Examples of systems that use this style of technique
Examples of systems that use this style of technique can be seen in "A Robust Model-Based Approach for 3D Head Tracking in Video Sequences" by Marius Malciu and Francoise Preteux, and "Robust 3D Head Tracking Under Partial Occlusion" by Ye Zhang and Chandra Kambhamettu, both from the Conference on Automatic Face and Gesture Recognition 2000, Grenoble, France.
2. OVERVIEW
The AP is an artificial intelligence–based companion that will be resident in software and chips embedded in the automobile dashboard. The heart of the system is a conversation planner that holds a profile of you, including details of your interests and profession. When activated, the AP uses the profile to cook up provocative questions such as “Who was your first teacher?” via a speech generator and in-car speakers. A microphone picks up your answer and breaks it down into separate words with speech-recognition software. A camera built into the dashboard also tracks your lip movements to improve the accuracy of the speech recognition. A voice analyzer then looks for signs of tiredness by checking to see if the answer matches your profile. Slow responses and a lack of intonation are signs of fatigue. If you reply quickly and clearly, the system judges you to be alert and tells the conversation planner to continue the line of questioning. If your response is slow or doesn’t make sense, the voice analyzer assumes you are dropping off and acts to get your attention.
The system, according to its inventors, does not go through a suite of rote questions demanding rote answers. Rather, it knows your tastes and will even, if you wish, make certain you never miss Paul Harvey again. This is from the patent application: “An even further object of the present invention is to provide a natural dialog car system that understands content of tapes, books, and radio programs and extracts and reproduces appropriate phrases from those materials while it is talking with a driver.”
For example, a system can find out if someone is singing on a channel of a radio station. The system will state, “And now you will hear a wonderful song!” or detect that there is news and state, “Hear the following,” and play some news. The system also includes a recognition system to detect who is speaking over the radio and alert the driver if the person speaking is one the driver wishes to hear. Just because you can express the rules of grammar in software doesn’t mean a driver is going to use them. The AP is ready for that possibility:
It provides for a natural dialog car system directed to human factor engineering: for example, people use different strategies to talk (for instance, short vs. elaborate responses). In this manner, the individual is guided to talk in a certain way so as to make the system work, e.g., “Sorry, I didn’t get it. Could you say it briefly?” Here, the system defines a narrow topic of the user reply (answer or question) via an association of classes of relevant words via decision trees. The system builds a reply sentence by asking what are the most probable word sequences that could follow the user’s reply.
Reply
#14
PRESENTED BY
CHITRA L

[attachment=9785]
Introduction
■ Driver fatigue, according to NHTSA, annually causes
 100,000 crashes
 1,500 fatalities
 71,000 injuries
 The majority of road accidents were preceded by eye closures of 0.5 second, and even as long as 2 to 3 seconds
 A normal human blink lasts 0.2 to 0.3 seconds
Existing System
 Miniature system installed in driver’s hat.
 Use of stimulant drinks (e.g., coffee and tea).
 Tablets to prevent sleeping.
 In order to overcome the disadvantages of the existing methods, IBM introduces a new sleep prevention technology device.
ARTIFICIAL PASSENGER
 Developed By
 IBM (International Business Machines Corporation, NY) has developed software that holds a conversation with the driver to determine whether the driver can respond alertly enough, called the “Artificial Passenger”.
 Designed to make solo journeys safer and more bearable.
Showing the dashboard of the car where the whole
AP system is attached
About Artificial Passenger
 The AP is an Artificial Intelligence based companion that will be resident in software and chips embedded in the automobile dashboard.
 Conversational planner: holds a profile of the driver, including details of his interests and profession.
 A microphone picks up driver’s answer and breaks it down into separate words with speech-recognition software.
 A camera built into the dashboard tracks the driver's head and lip movements to improve the accuracy of speech recognition.
 A voice analyzer then looks for signs of tiredness by checking to see if the answer matches the driver's profile. Slow responses and a lack of intonation are signs of fatigue.
 There is a mobile indicator device, with the help of which the driver can answer calls using the touch-sensitive wheel.
 The touch-sensitive wheel has sensors on it.
 It helps the driver to control the devices of the car without interrupting the dialog between the dashboard and the driver.
Components
The components which support the working of the system:
 Automatic speech recognizer (ASR)
 Natural language processor (NLP)
 Driver analyzer
 Conversational planner (CP)
 Alarm, Microphone
 Camera and Eye tracker
Automatic speech recognition
 Allows the computer to identify the words spoken into a microphone.
 Two ASRs used:
 Speaker-independent ASR, used for decoding the voice signals of the driver.
 A second ASR that operates with voice car media and decodes tapes, audio books, telephone mails etc.
Natural language processor
 NL Processing is a combination of CS and linguistics.
 Major functions of NLP are:
 Co-reference resolution
 Machine translation
 Natural language generation
 Natural language understanding
 Processes decoded signals of textual data provided from ASR.
 Identifies related meanings from the contents of the decoded messages.
 Produces variant of responses.
 This output goes to the driver analyzer as an input.
Driver analyzer
 Receives the textual data and voice data from NLP.
 Measures the time response using a clock.
 Depending on the time responses, conclusions about driver’s alertness will be given to the conversational planner.
 Steps for detecting driver vigilance
Conversational planner
 It is the heart of the system.
 Instructs the language generator to produce the response.
 If the driver is in perfect condition, the CP instructs the language generator to continue the conversation; otherwise it is instructed to change the conversation.
 For each question, it creates a set of possible answers.
Alarm
If the CP (conversational planner) receives information that the driver is about to fall asleep, an alarm (buzzer) system is activated.
Microphone
For picking up the words and separating them by some internally used software for conversation.
Camera
■ Analyze video sequences of the driver to determine whether the driver is paying adequate attention to the road.
■ Track the head and lip movements of the driver.
 Used for the improvement of accuracy of the speech recognition system.
 Calculations vary for day and night conditions
 Night time is determined based on the following conditions:
 If the camera clock is between 18:00 hrs and 07:00 hrs
 If the intensity of image is lower than the threshold value
 If the night-time driving headlight is on
Eye Tracker
 Eye-tracking software is used for this purpose. The camera will track the eyes.
 The system measures the corneal reflection of an infrared light-emitting diode (LED), which illuminates and generates a reflection off the surface of the eye. The glint is used for calibration and tracking.
 Working of eye tracker
Working
 Conversational planner initiates the conversation
 Camera tracks the head and lip movements and also eye movements and detect whether the driver is alert or not.
 It detects the frequency of yawning of the driver and generates an alarm signal if the above frequencies exceed a selected threshold value.
 It allows the driver to open and close the door from a remote location.
 Communicates with drivers of other vehicles if the driver has a heart attack or is sick.
 Using a touch sensitive wheel the driver can open the window, control the volume of the ipod, attend phone calls etc.
 Gives information about the chances of vehicle failure and maintains records for accident investigation cases.
Aspects And Features
There are six aspects of Artificial Passenger:
 Situation Controller
 Camera system
 Temperature Indicator
 Odor Detector
 Communication System
 Workload manager
Situation controller
 The situation controller consists of:
 Processing Device and an image monitor coupled to processing device
 Communication Device for communicating message
 Controller for controlling functions of vehicle
Camera system
 The camera system is used :
 To monitor the environment of a vehicle.
 Provides alarm indication to the owner.
 Identifies position of items & notifies the driver about its position.
Temperature Indicator
 To analyze temperature of vehicle.
 In combination with the camera, the AP determines whether a child or pet is left in a vehicle that is beginning to get very hot or cold.
 It sends a message to the owner/driver and takes the correct actions
 The AP is able to analyze the situation & executes a corrective action
 Opens window or door to allow temperature to moderate
 Allow child or pet to leave the vehicle after informing the driver /authorities.
Odor Sensor/Detector
 AP uses Odor Detector as well as camera to detect whether groceries were left in the vehicle.
 AP informs the driver/owner about this.
 Periodically sprinkles sweet air inside the vehicle.
Communication system
 Interacts with owner of vehicle from remote location.
 It utilizes GPS to locate & communicate with vehicle owner.
 Through the communication system, the owner can open a vehicle door remotely and let out a person who has been locked in.
Workload Manager
 Determine a moment-to-moment analysis of the user's cognitive workload.
 Done by collecting data about user conditions, monitoring local and remote events.
 Read the profiles and mood of drivers in other vehicles.
 It must be designed in such a way that it can integrate sensor information and rules on when and if distracting information is delivered.
 The workload manager is connected to the safety driver manager (SDM).
 The goal of SDM is to evaluate the potential risk of a traffic accident by producing measurements related to stresses on the driver or vehicle.
Features
 The main features of the AP are:
 Conversational Telematics
 Analyzing the data
 Sharing data
 Retrieving the data on demand
Conversational Telematics
 Intelligent enough to anticipate the user needs
 Provides uniform access to devices and network
services inside and outside the car
 Reports the car conditions
 Makes you stay awake
Analyzing the data
 Automated analysis initiative is data management software for identifying failure trends and predicting specific vehicle failures before they happen.
 System consists of capturing, storing, retrieving and analyzing vehicle data.
 Evaluates the data and takes the corrective measures.
 Internet based diagnostics server reads the car data to determine the root cause of vehicle failure
Sharing data
 Collecting dynamic and event-driven data is a problem.
 Ensuring security and integrity while sharing data.
Retrieving data on demand
 Resource manager must manage broad range of data that changes rapidly.
 Server must give service providers the ability to declare what data they need, without knowing location of data.
Applications
 Prevents the driver from falling asleep during long and solo trips.
 If there is any problem, it alerts nearby vehicles so that their drivers become alert.
 Opens and closes the doors and windows of the car automatically.
 It is also used for the entertainment (playing verbal games, listening to music etc)
Advantages
 Makes automobile driving more convenient.
 Checks for signs of sleepiness, if detected alert the driver.
 Maintains a database for “accident investigation use”.
 Sensors in the front and rear of the automobile control the distance between cars and automatically apply brakes if something gets in front of the car.
Disadvantages
 High cost.
 Sensors in the front and rear of the automobile do not help if danger comes from the sides.
 The NLP component should be downsized to run on local computers.
 Remote connections to servers are not available everywhere, can have delays, and are not robust.
 Some users will produce phrases that are represented neither in the collected data nor in the grammars developed from this data.
Future Enhancements
 Provide us with a shortest-time routing based on road conditions changing because of weather and traffic
 Information about the cars on the route
 Provides distributive user interface between cars
Conclusion
Successful implementation of the artificial passenger would allow the use of various services in the car, like reading e-mails, navigation, downloading music files and voice games, without compromising driver safety.
Reply
#15

plz snd me seminar report on artificial passenger............
on bijnor.ruchi[at]gmail.com
tnx........
Reply
#16
Presented By:
O. Govinda Rao
CH. Hari Prasad

[attachment=10700]
Introduction
 IBM (International Business Machines Corporation, NY) has developed software that holds a conversation with the driver to determine whether the driver can respond alertly enough, called the “Artificial Passenger”.
 The name was first suggested in an article in New Scientist magazine.
 This was designed to make solo journeys safer and more bearable.
What is an Artificial Passenger?
 Natural language e-companion.
 Sleep preventive device in cars to overcome drowsiness.
 Life safety system.
Why Such System??
 According to national surveys in the UK and USA, it is observed that driver fatigue annually causes:
• 100,000 crashes
• 1,500 deaths
• 71,000 injuries, which amounts to an annual cost of $12.5 billion.
 The majority of off-road accidents observed were preceded by eye closures of half a second and even 2-3 seconds, whereas a normal human eye blink lasts 0.2-0.3 seconds.
What Does It Do?
 Detects alarm conditions through sensors.
 Broadcasts pre-stored voice messages over the speakers.
 Captures images of the driver
Working Components
These are some of the components which support the working of the system:
 Automatic speech recognizer (ASR)
 Natural language processor
 Driver analyzer
 Conversational planner (CP)
 Alarm
 Microphone
 Camera
Automatic speech recognition:-
There are two ASRs used in the system:
 The first one is “speaker independent” and is used for decoding the voice signals of the driver.
 The second one operates with voice car media and decodes tapes, audio books, telephone mails etc.
Natural language processor:-
 Processes the decoded signals of textual data provided from the ASR.
 Identifies related meanings from the contents of the decoded messages
Driver analyzer:-
 Receives the textual data and voice data.
 Measures the time response using a clock.
 Analysis is both subjective and objective.
Conversational planner:-
 It is the heart of the system.
 Instructs the language generator to produce the response.
Alarm:-
 If the CP (conversational planner) receives information that the driver is about to fall asleep, an alarm system is activated.
Microphone:-
 For picking up the words and separating them by some internally used software for conversation.
Reply
#17
[attachment=10732]
CHAPTER 1 :-
Introduction :-

The AP is an artificial intelligence–based companion that will be
resident in software and chips embedded in the automobile dashboard. The heart of the
system is a conversation planner that holds a profile of you, including details of your
interests and profession.
A microphone picks up your answer and breaks it down into separate words with
speech-recognition software. A camera built into the dashboard also tracks your lip
movements to improve the accuracy of the speech recognition. A voice analyzer then
looks for signs of tiredness by checking to see if the answer matches your profile. Slow
responses and a lack of intonation are signs of fatigue.
This research suggests that we can make predictions about various aspects of driver
performance based on what we glean from the movements of a driver’s eyes and that a
system can eventually be developed to capture this data and use it to alert people when
their driving has become significantly impaired by fatigue.
The natural dialog car system analyzes a driver’s answer and the
contents of the answer together with his voice patterns to determine if he is alert while
driving. The system warns the driver or changes the topic of conversation if the system
determines that the driver is about to fall asleep. The system may also detect whether a
driver is affected by alcohol or drugs.
CHAPTER 2:-
2.1 What is an artificial passenger?

• Natural language e-companion.
• Sleep preventive device in cars to overcome drowsiness.
• Life safety system.
2.2 What does it do?
• Detects alarm conditions through sensors.
• Broadcasts pre-stored voice messages over the speakers.
• Captures images of the driver.
CHAPTER 3 :-
3.1 Field of invention :-

The present invention relates to a system and method for determining three dimensional head pose, eye gaze direction, eye closure amount, blink detection and flexible feature detection on the human face using image analysis from multiple video sources.
Additionally, the invention relates to systems and methods
that make decisions using passive video analysis of a human head and face.
These methods can be used in areas of application such as human-performance
measurement, operator monitoring and interactive multi-media.
3.2 Background of the invention :-
Early techniques for determining head-pose used devices that were fixed to the head of the subject to be tracked. For example, reflective devices were attached to the subject's head and, using a light source to illuminate the reflectors, the reflector locations were determined.
As such reflective devices are more easily tracked than the head itself, the problem of tracking head-pose was simplified greatly.
Virtual-reality headsets are another example of the subject wearing a device for the purpose of head-pose tracking. These devices typically rely on a directional antenna and radio-frequency sources, or directional magnetic
measurement to determine head-pose.
Wearing a device of any sort is clearly a disadvantage, as the
user's competence and acceptance to wearing the device then directly effects the reliability of the system. Devices are generally intrusive and will affect a
user's behaviour, preventing natural motion or operation. Structured light
techniques that project patterns of light onto the face in order to determine head-pose.
The light patterns are structured to facilitate the recovery of 3D
information using simple image processing. However, the technique is prone
to error in conditions of lighting variation and is therefore unsuitable for use
under natural lighting conditions.
3.3 Examples of systems that use this style of technique
Examples of systems that use this style of technique can be seen in "A Robust Model-Based Approach for 3D Head Tracking in Video Sequences" by Marius Malciu and Francoise Preteux, and "Robust 3D Head Tracking Under Partial Occlusion" by Ye Zhang and Chandra Kambhamettu, both from the International Conference on Automatic Face and Gesture Recognition.
CHAPTER 4
4.1 Devices that are used in AP
The main devices that are used in this artificial passenger are:-
1) Eye tracker.
2) Voice recognizer or speech recognizer.
4.2 How does eye tracking work?
Collecting eye-movement data requires hardware and software specifically designed to perform this function. Eye-tracking hardware is either mounted on a user's head or mounted remotely. Both systems measure the corneal reflection of an infrared light-emitting diode (LED), which illuminates and generates a reflection off the surface of the eye. This action causes the pupil to appear as a bright disk in contrast to the surrounding iris and creates a small glint underneath the pupil. It is this glint that head-mounted and remote systems use for calibration and tracking.
4.2.1 Hardware:
Head-mounted and remote systems:-
The difference between head-mounted and remote eye systems is how the eye tracker collects eye-movement data. Head-mounted systems, since they are fixed on a user's head and therefore allow for head movement, use multiple data points to record eye movement. To differentiate eye movement from head movement, these systems measure the pupil glint from multiple angles. Since the unit is attached to the head, a person can move about when operating a car or flying a plane, for example.
For instance, human factors researchers have used head-mounted eye-
tracking systems to study pilots' eye movements as they used cockpit controls and
instruments to land airplanes (Fitts, Jones, and Milton 1950). These findings led to
cockpit redesigns that improved usability and significantly reduced the likelihood of
incidents caused by human error. More recently, head-mounted eye-tracking systems
have been used by technical communicators to study the visual relationship between
personal digital assistant (PDA) screen layout and eye movement.
Remote systems, by contrast, measure the orientation of the eye relative to a fixed unit such as a camera mounted underneath a computer monitor. Because remote units do not measure the pupil glint from multiple angles, a person's head must remain almost motionless during task performance.
Although head restriction may seem like a significant hurdle to overcome,
Jacob and Karn (2003) attribute the popularity of remote systems in usability to their
relatively low cost and high durability compared with head-mounted systems.
Since remote systems are usually fixed to a computer screen, they are
often used for studying onscreen eye motion. For example, cognitive psychologists have
used remote eye-tracking systems to study the relationship between cognitive scanning
styles and search strategies (Crosby and Peterson 1991). Such eye-tracking studies have
been used to develop and test existing visual search cognitive models. More recently,
human-computer interaction (HCI) researchers have used remote systems to study
computer and Web interface usability.
Through recent advances in remote eye-tracking equipment, a range of
head movement can now be accommodated. For instance, eye-tracking hardware
manufacturer Tobii Technology now offers a remote system that uses several smaller
fixed sensors placed in the computer monitor frame so that the glint underneath the pupil
is measured from multiple angles.
This advance will eliminate the need for participants in eye-tracking
studies to remain perfectly still during testing, making it possible for longer studies to be
conducted using remote systems.
Reply
#18
Presented by:
Jaiveer singh shilla

[attachment=10883]
Introduction to Capsule Camera
Imagine a vitamin-pill-sized camera that could travel through your body taking pictures, helping diagnose a problem which doctors previously would have found only through surgery.
Work of artificial passenger
 Many models. Direct from vendor (CompUSA has some)
 General price range $1500-3000
 Fujitsu Stylistic - $2300 including stand with a CD-ROM drive
 Some models are like a laptop where the screen can be flipped over
USES
 Crohn's Disease.
 Malabsorption Disorders.
 Tumors of the small intestine & Vascular Disorders.
 Ulcerative Colitis
 Medication Related To Small Bowel Injury
Advantages
 Painless, with no side effects or complications.
 Miniature size, so it can move easily through the digestive system.
 Accurate, precise and effective.
 Images taken are of very high quality and are sent almost instantaneously to the data recorder for storage.
 Made of bio-compatible material, so it doesn't cause any harm to the body.
Drawbacks and how they are overcome
1. Patients with gastrointestinal strictures or narrowing are not good candidates for this procedure due to the risk of obstruction. This first drawback is overcome using another product, manufactured with the help of nanotechnology: a rice-grain-sized motor.
2. The pill can get stuck if there is a partial obstruction in the small intestine.
3. It is impossible to control the camera's behaviour.
These last two drawbacks can be overcome using a bi-directional telemetry camera.
APPLICATIONS
First introduced in the US, this sensor/software system detects and counteracts sleepiness behind the wheel. Seventies staples John Travolta and the Eagles made successful comebacks, and another is trying: that voice in the automobile dashboard that used to remind drivers to check the headlights and buckle up could return to new cars in just a few years, this time with jokes, a huge vocabulary, and a spray bottle.
Reply
#19
[attachment=12358]
ARTIFICIAL PASSENGER
What is an artificial passenger?

Natural language e-companion.
Sleep preventive device in cars to overcome drowsiness.
Life safety system.
What does it do?
Detects alarm conditions through sensors.
Broadcasts pre-stored voice messages over the speakers.
Captures images of the driver.
Devices that are used in AP
The main devices that are used in this artificial passenger are:-
Eye tracker.
Voice recognizer or speech recognizer.
About AP
The AP is an artificial intelligence–based companion that will be resident in software and chips embedded in the automobile dashboard.
The system has a conversation planner that holds a profile of you, including details of your interests and profession.
A microphone picks up your answer and breaks it down into separate words with speech-recognition software.
A camera built into the dashboard also tracks your lip movements to improve the accuracy of the speech recognition.
A voice analyzer then looks for signs of tiredness by checking to see if the answer matches your profile. Slow responses and a lack of attention are signs of fatigue.
If you reply quickly and clearly, the system judges you to be alert and tells the conversation planner to continue the line of questioning.
If your response is slow or doesn’t make sense, the voice analyzer assumes you are dropping off and acts to get your attention.
Detecting driver vigilance
Aiming a single camera at the head of the driver.
Detecting the frequency of up-and-down nodding and left-to-right rotations of the head within a selected time period.
Determining the frequency of eye blinks and eye closings.
HOW DOES THE TRACKING DEVICE WORK?
Data collection and analysis is handled by eye-tracking software.
Data are stored as a series of x/y coordinates related to specific grid points on the computer screen.
Our head tracker consists of tracking the lip corners, eye centers, and side of the face.
“Occlusion” of the eyes and mouth often occurs when the head rotates or the eyes close, so our system tracks through such occlusion and can automatically reinitialize when it mis-tracks.
[Figures: representative image, eye tracker, and monitoring system]
Tracking includes the following steps
Automatically initialize lips and eyes using color predicates and connected components.
Track lip corners using dark line between lips and color predicate even through large mouth movement like yawning.
Construct a bounding box of the head.
Determine rotation using distances between eye and lip feature points and sides of the face.
Determine eye blinking and eye closing using the number and intensity of pixels in the eye region.
The lip and eye colors (RGB: red, green, blue) are marked in the image offline; marking the lip pixels in the image is the important step. Each pixel has a red (R), green (G), and blue (B) component. For a pixel that is marked as important, the system goes to this location in a 3D RGB array, indexing on the R, G, and B components, and increments that location and its neighbours by equation (1):

    exp(−(i² + j² + k²) / (2σ²))    (1)

where σ is approximately 2 and (i, j, k) is the offset from the indexed location. If a color (pixel value) is marked as important multiple times, its new value is added to the current value. Pixel values that are marked as unimportant decrease the value of the RGB-indexed location via equation (2):

    exp(−(i² + j² + k²) / (2(σ−1)²))    (2)

The values in the array that are above a threshold are marked as being one of the specified colors. Another RGB array is generated for the skin colors, and the largest non-skin components above the lips are marked as the eyes. A minimal sketch of this color-predicate construction follows.
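A minimal NumPy sketch of the color-predicate construction, under the stated assumptions (σ ≈ 2, a hypothetical kernel radius of 2, 8-bit color); the function name and default threshold are illustrative, not from the original system.

import numpy as np

SIGMA = 2.0   # "sigma is approximately 2" (from the report)
RADIUS = 2    # assumed neighbourhood radius around each marked color
BINS = 256    # one cell per 8-bit color level

def build_color_predicate(pixels, important, threshold=1.0):
    """Build the 3D RGB color-predicate array from hand-marked pixels.

    pixels    : (N, 3) array of 8-bit R, G, B values
    important : length-N booleans, True for marked lip/eye pixels
    """
    pred = np.zeros((BINS, BINS, BINS), dtype=np.float32)
    offsets = range(-RADIUS, RADIUS + 1)
    for (r, g, b), is_important in zip(pixels, important):
        r, g, b = int(r), int(g), int(b)
        for i in offsets:
            for j in offsets:
                for k in offsets:
                    ri, gj, bk = r + i, g + j, b + k
                    if not (0 <= ri < BINS and 0 <= gj < BINS and 0 <= bk < BINS):
                        continue
                    d2 = i * i + j * j + k * k
                    if is_important:   # equation (1): accumulate importance
                        pred[ri, gj, bk] += np.exp(-d2 / (2 * SIGMA ** 2))
                    else:              # equation (2): penalise unimportant colors
                        pred[ri, gj, bk] -= np.exp(-d2 / (2 * (SIGMA - 1) ** 2))
    return pred > threshold            # cells above threshold are lip/eye colors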
First, a dark pixel is located. The system goes to the eye centre of the previous frame, finds the centre of mass of the eye-region pixels, and then looks for the darkest pixel, which corresponds to the pupil. This estimate is tested to see whether it is close enough to the previous eye location. The estimate is feasible when the newly computed eye centres are close, in pixel-distance units, to the previous frame's computed eye centres. This makes sense because the video runs at 30 frames per second, so the eye motion between individual frames should be relatively small.
If the new points are too far away, the system searches a window around the eyes, finds all non-skin connected components in approximately a 7×20-pixel window, and finds the slant of the line between the lip corners using equation (5), which is simply the slope of the line through two points:

    slope = (y2 − y1) / (x2 − x1)    (5)

where (x1, y1) and (x2, y2) are the coordinates of the two corresponding features. The system then selects the pair of eye centroids whose slant is closest to the slant between the lip corners. These two stages are called the eye black-hole tracker; a sketch of the selection step follows.
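A small sketch of the centroid-selection step, assuming the candidate centroids have already been extracted from the search window; all names are illustrative.

def slope(p1, p2):
    """Equation (5): slope of the line through two feature points."""
    (x1, y1), (x2, y2) = p1, p2
    return (y2 - y1) / (x2 - x1 + 1e-9)   # small guard against vertical lines

def pick_eye_pair(candidates, lip_left, lip_right):
    """Select the pair of non-skin centroids whose connecting slant is
    closest to the lip-corner slant (the selection step of the
    eye black-hole tracker)."""
    lip_slant = slope(lip_left, lip_right)
    best_pair, best_err = None, float("inf")
    for i in range(len(candidates)):
        for j in range(i + 1, len(candidates)):
            err = abs(slope(candidates[i], candidates[j]) - lip_slant)
            if err < best_err:
                best_pair, best_err = (candidates[i], candidates[j]), err
    return best_pair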
The detection of eye occlusion is done by analysing the bright regions.
As long as there are eye-white pixels in the eye region, the eyes are open; if there are none, a blink or closure is occurring. To determine the eye-white color, in the first frame of each sequence the system finds the brightest pixel in the eye region and uses this as the eye-white color.
If the eyes have been closed for more than approximately 40 of the last approximately 60 frames, the system declares that the driver has had his eyes closed for too long. A sketch of this rule follows.
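A sketch of the closed-eye rule as a sliding window, assuming a per-frame "any eye-white pixel" openness test; the class name and parameters are illustrative.

from collections import deque

class ClosureMonitor:
    """Flag prolonged eye closure: closed in ~40 of the last ~60 frames
    (about two seconds of video at 30 frames per second)."""

    def __init__(self, window=60, closed_limit=40):
        self.closed_history = deque(maxlen=window)
        self.closed_limit = closed_limit

    def update(self, eye_pixels, eye_white):
        """eye_pixels: intensities in the eye region for this frame; the
        frame counts as 'open' if any pixel reaches the eye-white color
        sampled from the first frame."""
        is_open = any(p >= eye_white for p in eye_pixels)
        self.closed_history.append(not is_open)
        return sum(self.closed_history) >= self.closed_limit  # True -> alarm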
Output
Onsite alarms within the vehicle
Remote alarms
Other applications
1) Cabins in airplanes.
2) Water craft such as boats.
3) Trains and subways.
SUMMARY
Method for monitoring driver alertness
Sufficient time to avert an accident.
Monitoring of full facial occlusion of the driver.
Reply
#20
Presented by
PRAJNA KARKAL

[attachment=12987]
ARTIFICIAL PASSENGER
INTRODUCTION
What is Artificial intelligence ?

Artificial intelligence is the science of making intelligent machines.
The branches of AI are:
Search
Pattern recognition
Natural language processing
Perception
BRIEF HISTORY
According to a national survey in the UK and USA, driver fatigue annually causes:
100,000 crashes
15,000 deaths
71,000 injuries
To overcome this sleepiness, a driver could traditionally have taken one of the following measures:
Use of stimulant drinks (e.g., coffee and tea)
Tablets to prevent sleeping
A miniature alertness system installed in the driver's hat
ARTIFICIAL PASSENGER
Artificial passenger was developed by Dimitry Kanevsky and Wlodek Zadrozny.
EXISTING SYSTEM
What is an artificial passenger?
The AP is an artificial intelligence–based companion that will be resident in software and chips embedded in the automobile dashboard.
Sleep preventive device in cars to overcome drowsiness
What does it do?
A microphone picks up the answer and breaks it down into separate words.
A camera tracks your lip movements to improve the accuracy of the speech recognition.
A voice analyzer then looks for signs of tiredness.
Slow responses and a lack of interaction are signs of fatigue.
The artificial passenger may open all the windows, sound a buzzer, increase the background music volume, or even spray the driver with ice water.
ARCHITECTURAL DESIGN
WORKING COMPONENTS

The components that support the working of the system:
Automatic Speech Recognizer (ASR)
Natural Language Processor (NLP)
Driver analyzer
Conversational planner (CP)
Alarm
External service provider
Microphone
Camera
CAMERA
TECHNOLOGY DETAILS
VOICE CONTROL INTERFACE

Voice is used instead of hands to control Telematics devices in the car,
e.g. when playing voice games or issuing commands via voice:
"What is the distance to JFK?", "How far is JFK?", "How long to drive to JFK?", etc.
The commands can be given in natural language, so the difficulty of remembering exact command syntax is reduced by using NLU.
NLU components can be located on a server that cars access remotely, or they can be embedded on a chip.
When the system gets a voice response, it searches through its stored command files
and executes the appropriate command.
Otherwise the system executes other options that are defined by a Dialog Manager (DM).
Examples:
1. Ask questions (via a text-to-speech module) to resolve ambiguities:
- (Driver) Please plot a course to Yorktown
- (DM) Within Massachusetts?
- (Driver) No, in New York
2. Fill in missing information and remove ambiguous references from context:
- (Driver) What is the weather forecast for today?
- (DM) Partly cloudy, 50% chance of rain
- (Driver) What about Ossining?
- (DM) Partly sunny, 10% chance of rain
3. Manage failure and provide contextual, failure-dependent help and actions:
- (Driver) When will we get there?
- (DM) Sorry, what did you say?
- (Driver) I asked when will we get there?
A sketch of the context-carrying behaviour in example 2 follows.
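A toy sketch of the context behaviour in example 2: the DM remembers the last intent and fills in only the slot that changed. The intent name, the extract_location helper, and the tiny place list are all hypothetical.

def extract_location(utterance, known=("yorktown", "ossining")):
    """Toy slot extractor: look for a known place name in the utterance."""
    for place in known:
        if place in utterance:
            return place.title()
    return None

class DialogContext:
    """Remember the last intent so elliptical follow-ups like
    'What about Ossining?' can reuse it (example 2 above)."""

    def __init__(self):
        self.last_intent = None   # e.g. "weather"
        self.slots = {}           # e.g. {"location": "Yorktown"}

    def interpret(self, utterance):
        u = utterance.lower()
        if "weather" in u:
            self.last_intent = "weather"
            self.slots["location"] = extract_location(u) or "current location"
        elif u.startswith("what about"):
            # keep the previous intent, replace only the slot that changed
            self.slots["location"] = extract_location(u) or u[len("what about"):].strip(" ?")
        return self.last_intent, dict(self.slots)

ctx = DialogContext()
ctx.interpret("What is the weather forecast for Yorktown?")  # ('weather', {'location': 'Yorktown'})
ctx.interpret("What about Ossining?")                        # ('weather', {'location': 'Ossining'})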
The instantaneous data collection could be dealt with by creating a learning transformation (LT) system.
Examples
Monitor driver and passenger actions in the car’s internal and external environment across a network.
Extract and record the Driver Safety Manager relevant data in databases.
EMBEDDED SPEECH RECOGNITION
The front end computes standard mel-frequency cepstral coefficients (MFCC).
The mel filters are placed in the frequency range 200 Hz – 5500 Hz.
The labeler then computes the log-likelihood of each feature vector according to the observation densities of a Hidden Markov Model (HMM).
The decoder implements a synchronous search over its active vocabulary.
A minimal sketch of such a front end follows.
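A minimal sketch of the front end, using the open-source librosa library as a stand-in for the embedded implementation. Only the 200–5500 Hz mel range comes from the report; the sample rate and number of coefficients are assumptions.

import librosa

def front_end(wav_path, n_mfcc=13):
    """MFCC front end with mel filters restricted to 200-5500 Hz,
    the range quoted in the report (librosa stands in for the
    embedded implementation; the sample rate is an assumption)."""
    y, sr = librosa.load(wav_path, sr=11025)   # 5500 Hz stays below Nyquist
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc,
                                fmin=200.0, fmax=5500.0)
    return mfcc.T                              # one feature vector per frame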
TOUCH SENSORS
WORKLOAD MANAGER

An objective of the workload manager is to produce a moment-to-moment analysis of the user's cognitive workload.
It collects data about user conditions, monitors local and remote events, and prioritizes message delivery.
Sensors provide information about local events, e.g. heavy rain.
They also provide information about driver characteristics, e.g. speaking speed and eyelid status.
A minimal sketch of such a prioritization rule follows.
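A toy sketch of how such a manager might fold these inputs into one score and gate message delivery; the signal names, weights, and threshold are invented for illustration.

def workload_score(signals, weights=None):
    """Fold normalised sensor readings (0 = calm, 1 = demanding) into a
    single cognitive-workload estimate; signal names and weights are
    invented for illustration."""
    weights = weights or {"rain": 0.3, "speech_rate": 0.3, "eyelid": 0.4}
    return sum(w * signals.get(name, 0.0) for name, w in weights.items())

def should_deliver(priority, signals, limit=0.6):
    """Defer low-priority messages (e.g. incoming e-mail) while the
    estimated workload is high; urgent ones always go through."""
    return priority == "urgent" or workload_score(signals) < limit

# e.g. should_deliver("routine", {"rain": 1.0, "eyelid": 0.8}) -> False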
PRIVACY AND SOCIAL ASPECTS
Privacy aspects:
The speech messages must be encrypted.
Social aspects:
Users must clearly understand what the system is, what the system can and cannot do, and what they need to do to maximize its performance to suit their unique needs.
ADVANTAGES
The Artificial Passenger is broadly used to prevent accidents.
It prevents the driver from falling asleep during long trips.
If the driver has a heart attack or is drunk, it sends signals to nearby vehicles so that the drivers of the other vehicles become alert.
It opens and closes the doors and windows of the car automatically.
It is also used for entertainment.
It provides a natural-dialog car system that understands the content of tapes, books, and radio programs.
The system may also detect whether a driver is affected by alcohol or drugs.
DISADVANTAGES
Car computers are usually not very powerful due to cost considerations.
The Natural Language Processor (NLP) is usually controlled by a remote server.
APPLICATIONS
Cabins in airplanes
Water craft such as boats
Trains
CONCLUSION
Successful implementation of the AP would allow the use of various services in the car (like reading e-mail, navigation, voice games, etc.) without compromising driver safety.
A primary objective of the invention is to provide a system and method for monitoring driver alertness with a single camera focused on the face of the driver to monitor for conditions of driver fatigue and lack of sleep.
Reply
#21
[attachment=15382]
ABSTRACT
An artificial passenger (AP) is a device that would be used in a motor vehicle to make sure that the driver stays awake. IBM has developed a prototype that holds a conversation with a driver, telling jokes and asking questions intended to determine whether the driver can respond alertly enough. Assuming the IBM approach, an artificial passenger would use a microphone for the driver and a speech generator and the vehicle's audio speakers to converse with the driver. The conversation would be based on a personalized profile of the driver. A camera could be used to evaluate the driver's "facial state" and a voice analyzer to evaluate whether the driver was becoming drowsy. If a driver seemed to display too much fatigue, the artificial passenger might be programmed to open all the windows, sound a buzzer, increase background music volume, or even spray the driver with ice water. One of the ways to address driver safety concerns is to develop an efficient system that relies on voice instead of hands to control Telematics devices.
One of the ways to reduce a driver's cognitive workload is to allow the driver to speak naturally when interacting with a car system (e.g. when playing voice games or issuing commands via voice). It is difficult for a driver to remember a syntax, such as "What is the distance to JFK?", "How far is JFK?", or "How long to drive to JFK?". This fact led to the development of Conversational Interactivity for Telematics (CIT) speech systems at IBM Research. CIT speech systems can significantly improve the driver-vehicle relationship and contribute to driving safety. But the development of full-fledged Natural Language Understanding (NLU) for CIT is a difficult problem that typically requires significant computer resources, which are usually not available in the local computer processors that car manufacturers provide for their cars. To address this, NLU components should either be located on a server that cars access remotely, or NLU should be downsized to run on local computing devices (typically based on embedded chips). Some car manufacturers see advantages in using upgraded NLU and speech processing on the client in the car, since remote connections to servers are not available everywhere, can have delays, and are not robust. Our department is developing a "quasi-NLU" component, a "reduced" variant of NLU that can run on CPU systems with relatively limited resources.
INTRODUCTION
Studies of road safety found that human error was the sole cause in more than half of all accidents. One of the reasons why humans commit so many errors lies in the inherent limitations of human information processing. With the increase in popularity of Telematics services in cars (like navigation, cellular telephone, and internet access), there is more information that drivers need to process and more devices that drivers need to control, which might contribute to additional driving errors.
USER TECHNOLOGIES
ARTIFICIAL PASSENGER OVERVIEW

The AP is an artificial intelligence–based companion that will be resident in software and chips embedded in the automobile dashboard. The heart of the system is a conversation planner that holds a profile of you, including details of your interests and profession. When activated, the AP uses the profile to cook up provocative questions such as "Who was the first person you dated?" via a speech generator and in-car speakers.
A microphone picks up your answer and breaks it down into separate words with speech-recognition software. A camera built into the dashboard also tracks your lip movements to improve the accuracy of the speech recognition. A voice analyzer then looks for signs of tiredness by checking to see if the answer matches your profile. Slow responses and a lack of intonation are signs of fatigue. If you reply quickly and clearly, the system judges you to be alert and tells the conversation planner to continue the line of questioning. If your response is slow or doesn’t make sense, the voice analyzer assumes you are dropping off and acts to get your attention.
The system, according to its inventors, does not go through a suite of rote questions demanding rote answers. Rather, it knows your tastes and will even, if you wish, make certain you never miss Paul Harvey again. This is from the patent application: "An even further object of the present invention is to provide a natural dialog car system that understands content of tapes, books, and radio programs and extracts and reproduces appropriate phrases from those materials while it is talking with a driver. For example, a system can find out if someone is singing on a channel of a radio station. The system will state, 'And now you will hear a wonderful song!' or detect that there is news and state, 'Do you know what happened now? Hear the following,' and play some news. The system also includes a recognition system to detect who is speaking over the radio and alert the driver if the person speaking is one the driver wishes to hear."
Just because you can express the rules of grammar in software doesn't mean a driver is going to use them. The AP is ready for that possibility: it provides a natural dialog car system designed with human-factors engineering in mind, for example, accommodating people who use different strategies to talk (for instance, short vs. elaborate responses). In this manner, the individual is guided to talk in a certain way so as to make the system work, e.g., "Sorry, I didn't get it. Could you say it briefly?" Here, the system defines a narrow topic for the user's reply (answer or question) via an association of classes of relevant words using decision trees. The system builds a reply sentence by asking what the most probable word sequences are that could follow the user's reply. A toy sketch of that last step follows.
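That reply-building step can be pictured as a simple n-gram lookup. The bigram table below is a toy illustration of "most probable word sequences", not the patent's actual model.

from collections import Counter, defaultdict

def train_bigrams(sentences):
    """Count which word follows which in a small corpus."""
    table = defaultdict(Counter)
    for sentence in sentences:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            table[prev][nxt] += 1
    return table

def most_probable_continuation(table, last_word, length=3):
    """Greedily extend the driver's last word with the likeliest
    successors -- a crude stand-in for the reply-building step."""
    out, word = [], last_word.lower()
    for _ in range(length):
        if not table[word]:
            break
        word = table[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

table = train_bigrams(["when will we get there", "we get there by noon"])
most_probable_continuation(table, "we")   # -> 'get there by'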
Reply
#22
Hi, please send me the report on this.
Reply
#23
To get more information about the topic "Artificial PASSENGER Download The Seminar Report", please refer to the page links below:

http://studentbank.in/report-artificial-...3#pid56453

http://studentbank.in/report-artificial-...ars-report

http://studentbank.in/report-artificial-...ort?page=2

http://studentbank.in/report-artificial-...ort?page=3

http://studentbank.in/report-artificial-...ort?page=4

Reply
#24
Please send the full seminar report on the artificial passenger.
Reply
#25
To get more information about the topic "Artificial PASSENGER Download The Seminar Report", please refer to the page links below:

http://studentbank.in/report-artificial-...3#pid56453

http://studentbank.in/report-artificial-...ars-report

http://studentbank.in/report-artificial-...ort?page=2

http://studentbank.in/report-artificial-...ort?page=3

http://studentbank.in/report-artificial-...ort?page=4
Reply
