06-04-2011, 04:59 PM
PRESENTED BY
G.NITHIYA DEVI
S.AUXILIA VINNARASI
INTRODUCTION
Our electronic eye aims at helping millions of blind and visually impaired people lead more independent lives.
An effective navigation system would improve the mobility of millions of blind people all over the world.
Our new “eye” will allow blind people to cross busy roads in total safety for the first time.
TOP II TECH INVENTION OF 2010
THE ELECTRONIC EYE
MIT researchers are developing a microchip that will enable a blind person to recognize faces and navigate a room without assistance. It helps the blind to regain partial eyesight.
Users are required to wear special glasses fitted with a small camera that transmits images to the titanium-encased chip, which fires an electrode array under the retina to stimulate the optic nerve.
The camera would be mounted at eye level and connected to a tiny computer. It relays information using a voice speech system, giving vocal commands and information through a small speaker placed near the ear. The system:
1 – Tells the user whether any crossroad is present.
2 – Tells the user whether the traffic signal is favorable or not.
3 – Tells the user the time taken to cross the road.
IMAGE ANALYSER
The image analyzer receives the bitmap image and processes it to detect the presence of a zebra crossing.
EDGE DETECTION
There are two methods to detect edges:
1. Gradient
2. Laplacian
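As a sketch of the two approaches: the gradient method (here with Sobel kernels) thresholds the first-derivative magnitude, while the Laplacian method thresholds the second derivative. This is a minimal NumPy illustration, not the system's actual code; the function names and thresholds are chosen for clarity:

```python
import numpy as np

def filter2d(img, kernel):
    """2-D cross-correlation, 'valid' mode (no padding)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Gradient method: Sobel kernels approximate the first derivatives.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

# Laplacian method: one kernel approximates the second derivative.
LAPLACIAN = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)

def gradient_edges(img, threshold):
    """Edges = pixels where the gradient magnitude exceeds a threshold."""
    gx = filter2d(img, SOBEL_X)
    gy = filter2d(img, SOBEL_Y)
    return np.hypot(gx, gy) > threshold

def laplacian_edges(img, threshold):
    """Edges = pixels where |Laplacian response| exceeds a threshold."""
    return np.abs(filter2d(img, LAPLACIAN)) > threshold
```

On a vertical step edge, both detectors flag the two pixel columns straddling the intensity jump and leave flat regions empty.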
CROSSROAD PATTERN DETECTION
To detect basic shapes within the images, we use the Hough transform.
The Hough transform can be used to detect straight lines from the detected edges.
In the first figure, lines are constructed from collinear points; in the second figure, a line is formed by joining the points L1, L2, L3 and L4.
The cross ratio of the original four points equals the cross ratio of the constructed lines.
To detect the presence of a zebra crossing, we use this “projective invariant”.
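The invariant in question is the cross ratio: for four collinear points it is unchanged by any projective transformation, which is what makes the regular stripe spacing of a zebra crossing recognizable under camera perspective. A small numeric sketch (the point positions and transform coefficients are arbitrary examples):

```python
def cross_ratio(a, b, c, d):
    """Cross ratio (AC*BD)/(BC*AD) of four collinear points,
    given as scalar positions along their common line."""
    return ((c - a) * (d - b)) / ((c - b) * (d - a))

def projective_map(x, coeffs=(2.0, 1.0, 0.5, 3.0)):
    """An arbitrary 1-D projective transform x -> (ax + b)/(cx + d)."""
    a, b, c, d = coeffs
    return (a * x + b) / (c * x + d)

pts = [0.0, 1.0, 3.0, 6.0]          # four collinear points
r_before = cross_ratio(*pts)
r_after = cross_ratio(*[projective_map(x) for x in pts])
# r_before and r_after agree to floating-point precision
```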
The time required to cross the road is calculated on the assumption that the user covers, on average, a distance of one foot per second.
The time required to cover the calculated distance then follows from a simple relation: the time taken, T, to cross a road of width D at walking speed v is T = D / v.
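The arithmetic behind T is trivial; a minimal sketch, where the function name and the default speed of 1 ft/s are illustrative assumptions rather than the system's actual values:

```python
def crossing_time(road_width_ft, walking_speed_ft_per_s=1.0):
    """T = D / v: seconds needed to cross a road of width D feet
    at a constant walking speed v (the 1 ft/s default is only an
    illustrative assumption)."""
    return road_width_ft / walking_speed_ft_per_s

# e.g. crossing_time(30.0) -> 30.0 seconds for a 30 ft road at 1 ft/s
```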
TRAFFIC LIGHT DETECTOR
The function of the traffic light detector is to recognize whether the pedestrian light is red or green.
If the user can cross the road safely, the voice speech system will instruct him to cross the road.
CURVATURE SCALE SPACE COMPUTATION AND MATCHING
The CSS image is a multi-scale organization of the inflection points of a contour.
Curvature is a local measure of how fast a planar contour is turning.
Contour evolution is achieved by first parametrizing the contour by arc length.
The result is a set of two coordinate functions, which are then convolved with a Gaussian filter of increasing standard deviation.
In the CSS image, the horizontal axis represents the arc-length parameter and the vertical axis represents the standard deviation of the Gaussian filter.
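The computation at a single scale can be sketched in a few lines: smooth the two coordinate functions of a sampled closed contour with a Gaussian, compute curvature, and find its zero crossings (the inflection points plotted at that height of the CSS image). Function and parameter names here are illustrative:

```python
import numpy as np

def gaussian_kernel(sigma):
    radius = max(1, int(3 * sigma))
    t = np.arange(-radius, radius + 1)
    g = np.exp(-t**2 / (2 * sigma**2))
    return g / g.sum()

def smooth_closed(u, sigma):
    """Circular convolution of one coordinate function of a closed
    contour with a Gaussian of the given standard deviation."""
    g = gaussian_kernel(sigma)
    r = len(g) // 2
    padded = np.concatenate([u[-r:], u, u[:r]])   # periodic padding
    return np.convolve(padded, g, mode='valid')

def curvature(x, y):
    """Curvature from periodic central differences: a local measure
    of how fast the planar contour is turning."""
    dx = (np.roll(x, -1) - np.roll(x, 1)) / 2
    dy = (np.roll(y, -1) - np.roll(y, 1)) / 2
    ddx = np.roll(x, -1) - 2 * x + np.roll(x, 1)
    ddy = np.roll(y, -1) - 2 * y + np.roll(y, 1)
    return (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

def inflection_count(x, y, sigma):
    """Number of curvature zero crossings at one scale sigma --
    the points plotted on one row of the CSS image."""
    k = curvature(smooth_closed(x, sigma), smooth_closed(y, sigma))
    s = np.sign(k)
    return int(np.sum(s != np.roll(s, 1)))
```

A convex shape such as a circle has no inflection points at any scale, so its CSS image is empty; concavities produce arcs that shrink and vanish as sigma grows.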
If an image of a pedestrian light in the image database matches the image from the camera, the pedestrian can cross the road.
The time in seconds left to cross the road is also read from the signal by matching the displayed digits against the images of numbers in the database.
TIMING UNIT
The timing unit compares the calculated value T, the time required by the user to cross the road, with the time left to cross the road, T1, as identified from the traffic-signal image.
If T < T1, the system instructs the user to cross the road; otherwise it asks the user to wait until it is safe to cross.
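The comparison reduces to a one-line rule; a hypothetical sketch of the instruction logic (the function name and spoken phrases are illustrative):

```python
def crossing_advice(t_needed, t_remaining):
    """If T < T1 the user is told to cross; otherwise to wait."""
    if t_needed < t_remaining:
        return "Cross the road now."
    return "Please wait until it is safe to cross."
```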
VOICE SPEECH SYSTEM
AUDITORY IMAGE REPRESENTATION
The pixels in each column of the image captured by the camera generate a particular sound pattern, consisting of a combination of frequencies.
The result is an auditory signature, effectively an inverse spectrogram, that characterizes the particular image.
High-level scene interpretation applied to the processed images will produce a symbolic description of the scene.
The symbolic description is then converted into verbal instructions appropriate to the needs of the user.
VOICE VISION
The VOICE VISION technology for the totally blind offers the experience of live camera views through sophisticated image-to-sound renderings.
The VOICE mapping: vertical positions of points in a visual sound are represented by pitch, while horizontal positions are represented by time-after-click.
Brightness is represented by loudness. In this manner, pixels become... voicels
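That mapping can be sketched directly: columns are played left to right over the scan duration, the row index sets the pitch of a sine tone (higher in the image means higher pitch), and brightness scales loudness. All parameter values below (sample rate, frequency range, scan time) are illustrative assumptions, not the actual system's settings:

```python
import numpy as np

def image_to_sound(img, duration_s=1.0, sample_rate=16000,
                   f_lo=500.0, f_hi=5000.0):
    """Render a grayscale image (values 0..1) as audio: columns are
    scanned left to right over duration_s; each row contributes a
    sine whose pitch rises toward the top of the image; pixel
    brightness sets the loudness of its tone."""
    rows, cols = img.shape
    samples_per_col = int(duration_s * sample_rate / cols)
    # row 0 (top) -> highest pitch, geometric frequency spacing
    exponents = np.arange(rows) / max(rows - 1, 1)
    freqs = f_hi * (f_lo / f_hi) ** exponents
    pieces = []
    t0 = 0.0
    for c in range(cols):
        t = t0 + np.arange(samples_per_col) / sample_rate
        col_wave = np.zeros(samples_per_col)
        for r in range(rows):
            if img[r, c] > 0:
                col_wave += img[r, c] * np.sin(2 * np.pi * freqs[r] * t)
        pieces.append(col_wave)
        t0 += samples_per_col / sample_rate
    return np.concatenate(pieces)
```

A single bright pixel in the top-left corner thus produces a high tone at the start of the scan and silence afterward.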
CONCLUSION
The development of mobility aids for the visually impaired is a challenging task that has many potential solutions.
Blind pedestrians in the greatest danger are those who must cross wide, busy roads.
This system, along with the available low-technology aids, can relieve the visually challenged of dependence on others and help them lead normal lives.
This effective navigation system would improve the mobility of millions of blind people all over the world.