WEARABLE REAL-TIME STEREO VISION FOR THE VISUALLY IMPAIRED
Abstract
Visually impaired people find navigation difficult because they often lack the information needed to bypass obstacles and hazards. Electronic Travel Aids (ETAs) are devices that use sensor technology to assist and improve the blind user's mobility in terms of safety and speed. Most modern ETAs do not provide distance information directly and clearly. This paper proposes a method for determining distance using stereo matching to help blind individuals navigate. The system developed in this work, named Stereo Vision based Electronic Travel Aid (SVETA), consists of a computing device, stereo cameras and stereo earphones, all molded into a helmet. An improved area-based stereo matching is performed over the transformed images to calculate a dense disparity image. A low-texture filter and a left/right consistency check are applied to remove noise and to highlight obstacles. A sonification procedure is proposed to map the disparity image to stereo musical sound, which carries information about the features of the scene in front of the user. The sound is conveyed to the blind user through stereo headphones. Experiments have been conducted, and preliminary results demonstrate the viability of this method for real-time use.
Index Terms— Stereo matching, Electronic Travel Aid, Disparity, Stereo Vision, Sonification.
I. INTRODUCTION
Most of the information that aids navigation and provides cues for active mobility reaches humans through the most complex sensory system, the visual system. This visual information forms the basis for most navigational tasks, so an individual with impaired vision is at a disadvantage because appropriate information about the environment is not available. According to a World Health Organization census, around 180 million people worldwide are visually disabled, of whom 40 to 45 million are totally blind [1]. This population is expected to double by the year 2020.
Two low-technology aids for the blind, the long cane and the guide dog [2], have been used for many years. A number of electronic mobility aids using sonar [3]–[4] have also been developed to detect obstacles, but market acceptance is rather low because the useful information obtainable from them is not significantly more than that from the long cane, and the outputs produced are too complex for users to interpret.
Recent research efforts have been directed at new navigational systems in which a digital video camera is used as the vision sensor. In The vOICe [5], the image is captured using a single video camera mounted on a headgear, and the captured image is scanned from left to right for sound generation. The top portion of the image is converted into high-frequency tones and the bottom portion into low-frequency tones, and the loudness of the sound depends on the brightness of the pixel. Similar work has been carried out in NAVI [6], where the captured image is resized to 32 x 32 and the gray scale is reduced to 4 levels. With the help of image processing techniques, the image is separated into objects and background: the objects are assigned high intensity values and the background is suppressed to low intensity values. The processed image is then converted into stereo sound, where the amplitude of the sound is directly proportional to the intensity of the image pixels and the frequency of the sound is inversely proportional to the vertical position of the pixels. With a single camera, however, distance information cannot be obtained effectively.
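The column-scan sonification used in The vOICe and NAVI can be illustrated with a short sketch. The code below is an illustrative reconstruction only: the frequency range, scan duration and sample rate are assumed values, not parameters taken from either system.

import numpy as np

def sonify_column_scan(image, duration_s=1.0, sample_rate=8000,
                       f_min=200.0, f_max=2000.0):
    # Illustrative vOICe/NAVI-style mapping: the image is scanned column by
    # column from left to right; each pixel contributes a sine tone whose
    # frequency is set by its row (top rows -> high tones, bottom rows -> low
    # tones) and whose amplitude is set by its brightness.
    rows, cols = image.shape
    samples_per_col = int(duration_s * sample_rate / cols)
    t = np.arange(samples_per_col) / sample_rate
    freqs = np.linspace(f_max, f_min, rows)          # top of image -> highest tone
    frames = []
    for c in range(cols):
        amps = image[:, c].astype(float) / 255.0     # brightness -> loudness
        frame = (amps[:, None] * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
        frames.append(frame / max(rows, 1))          # normalise by pixel count
    return np.concatenate(frames)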
Distance is one of the most important requirements for collision-free navigation by the blind. In order to incorporate distance information, stereo cameras have to be used. The manner in which human beings use their two eyes to perceive the three-dimensional world has inspired the use of two cameras to model the world in three dimensions. The different perspectives of the same scene seen by the two cameras lead to a relative displacement of the same object points in the two images, called disparity. The size and direction of these disparities can be used for depth estimation: the depth of a point is inversely proportional to its disparity.
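For a calibrated, rectified stereo pair this inverse relationship is the standard triangulation formula Z = fB/d, where f is the focal length in pixels, B is the camera baseline and d is the disparity in pixels. The small helper below is a generic sketch of that relation; the focal length and baseline in the example are illustrative values, as the calibration of the SVETA cameras is not given here.

def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    # Standard rectified-stereo triangulation: Z = f * B / d.
    # Zero disparity corresponds to a point at infinity, so no depth is returned.
    if disparity_px <= 0:
        return None
    return focal_length_px * baseline_m / disparity_px

# Illustrative example (assumed calibration, not SVETA's actual values):
# f = 700 px, B = 0.12 m, d = 28 px  ->  Z = 700 * 0.12 / 28 = 3.0 m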
The use of stereo vision in blind navigation applications is still in its early stages, and only limited research has been reported. In the Optophone [7], an edge detection routine is applied to the images from two cameras to obtain a depth map; disparity is calculated from the edge features of both images. The depth map is then converted into sound using the method
applied in The vOICe system [5], where the top portion of the image is converted into high-frequency tones and the bottom portion into low-frequency tones, with the loudness of the sound directly proportional to the intensity of the pixel. In the Optophone, the disparity map of all the edge features in the images is obtained. The user will find it difficult to locate an object, since unwanted edge features are also present; with only edge information it is hard to identify objects.
Another pioneering work, by Zelek et al., involves a stereo camera and was designed to provide information about the environment through tactile feedback to the blind [8]. The system comprises a laptop, a stereo head with two cameras and a virtual-touch tactile system. The tactile system is made up of piezoelectric buzzers attached to each finger of a glove worn by the user. The cameras capture images, disparity is calculated from those images, and the depth information is conveyed to the user by stimulating the fingers. In this work, no image processing is undertaken to highlight object information in the output. Moreover, the system suffers from poor stereo matching.
Another important work reported in this area is the visual support system developed by Yoshihiro Kawai and Fumiaki Tomita [9]. The prototype system has a computer, a stereo camera system with three small cameras, a headset with a microphone and headphone, and a sound-space processor. The images captured by the small stereo cameras are analysed to obtain 3D structure, and object recognition is performed. The results are then conveyed to the user via 3D virtual sound. The prototype developed is bulky and not portable, and it is applicable only in indoor environments.
From the literature, it is clear that efforts have been made to use stereo vision in Electronic Travel Aids, but recent research has faced problems in stereo matching, in information transfer and in making the system portable. There are no commercial stereo vision based electronic travel aids so far. In this paper, methods have been developed to overcome the problems encountered in earlier research: an improved area-based stereo matching is employed to calculate distance information, and obstacle information is conveyed to the blind user using musical tones.
II. OVERVIEW OF SVETA SYSTEM
The prototype system is named the Stereo Vision based Electronic Travel Aid (SVETA). The hardware used in this work is small enough to be carried easily. The SVETA system consists of a headgear molded with stereo cameras and stereo earphones. The Compact Computing Device (CCD) is placed in a specially designed pouch, which the user wears whenever the SVETA system is in use. The stereo camera selected is a compact, low-power digital stereo head with an IEEE 1394 digital interface. It consists of two 1.3-megapixel, progressive-scan CMOS imagers mounted in a rigid body, and a 1394 peripheral interface module, joined in an integral unit. The CCD is compact, with a 500 MHz Intel mobile Celeron processor and 256 MB of RAM. The helmet is worn over the head. The SVETA prototype system is shown in Figure 1(a).
The stereo cameras are placed at the front of the headgear, slightly above the position of the eyes, as shown in Figure 1(b). The stereo cameras capture the visual information in front of the blind user. The captured images are then processed in the CCD using the proposed methodology, and the information about obstacles is conveyed to the blind user by musical tones and voice commands through the stereo earphones.
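This capture, matching, sonification and audio-output flow can be summarised in a short sketch. The helper names below (capture_stereo_pair, compute_disparity, sonify, play_stereo) are hypothetical placeholders for the camera driver, the matching step of Section III and the sonification step; they are not part of any published SVETA code.

def sveta_loop(camera, audio_out, compute_disparity, sonify):
    # Hypothetical top-level loop mirroring the flow described above.
    while True:
        left, right = camera.capture_stereo_pair()   # grab a rectified stereo pair
        disparity = compute_disparity(left, right)   # dense disparity (Section III)
        samples = sonify(disparity)                  # map disparity to stereo tones
        audio_out.play_stereo(samples)               # deliver through the earphones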
Figure 1. SVETA: (a) prototype system (headgear with stereo cameras and stereo earphones; CCD enclosed in a pouch worn on a belt); (b) blind user wearing SVETA.
III. STEREO VISION
Stereo vision is a paradigm for calculating the distance of an object by analyzing two images of the object acquired from two different directions or orientations. Image acquisition, camera modeling, feature acquisition, image matching and depth determination are the steps in this paradigm [10]. Image matching (stereo matching) is an important and difficult step. The stereo matching algorithms available in the current literature are broadly classified into two classes: area-based and feature-based algorithms [11]. Feature-based algorithms need preprocessing of the stereo images to find the positions of features such as edges, corner points and line segments. These techniques provide only sparse disparity, i.e. disparity only at the positions of the features. The features in one image may be
occluded in the other image; hence, these techniques are less preferred. Area-based (window-based) algorithms are advantageous because they provide dense disparity. These algorithms perform matching at each pixel using the absolute intensity values of the pixels. In this paper, an improved area-based stereo matching algorithm is proposed for depth determination in SVETA.
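As a reference point for the area-based approach, the sketch below shows a generic sum-of-absolute-differences (SAD) block-matching routine with a simple left/right consistency check. It is not the improved algorithm proposed in this paper; the window size, disparity range and consistency tolerance are illustrative assumptions.

import numpy as np

def sad_disparity(left, right, max_disp=32, win=5):
    # Generic area-based matching: for each left-image pixel, compare a
    # (win x win) window against windows in the right image shifted by
    # 0..max_disp pixels along the same scanline, and keep the shift with
    # the lowest sum of absolute differences (SAD).
    left = left.astype(np.float32)
    right = right.astype(np.float32)
    rows, cols = left.shape
    half = win // 2
    disparity = np.zeros((rows, cols), dtype=np.int32)
    for r in range(half, rows - half):
        for c in range(half + max_disp, cols - half):
            patch = left[r - half:r + half + 1, c - half:c + half + 1]
            best_d, best_sad = 0, np.inf
            for d in range(max_disp + 1):
                cand = right[r - half:r + half + 1,
                             c - d - half:c - d + half + 1]
                sad = np.abs(patch - cand).sum()
                if sad < best_sad:
                    best_sad, best_d = sad, d
            disparity[r, c] = best_d
    return disparity

def lr_consistency_mask(disp_left, disp_right, tol=1):
    # Keep only pixels whose left-to-right and right-to-left disparities agree
    # within tol; disp_right is assumed to come from matching with the images'
    # roles swapped. Inconsistent pixels (typically occlusions or mismatches)
    # are masked out.
    rows, cols = disp_left.shape
    mask = np.zeros((rows, cols), dtype=bool)
    for r in range(rows):
        for c in range(cols):
            d = int(disp_left[r, c])
            if c - d >= 0 and abs(int(disp_right[r, c - d]) - d) <= tol:
                mask[r, c] = True
    return mask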