08-06-2012, 10:35 AM
Finger Detection for Sign Language Recognition
Finger Detection.pdf (Size: 416.63 KB / Downloads: 11)
Abstract
Computer recognition of sign language is an
important research problem for enabling communication with
hearing-impaired people. This paper introduces a fast and
efficient algorithm for identifying the number of open fingers
in a gesture representing a letter of the American Sign
Language alphabet. Finger detection is accomplished using
boundary tracing and fingertip detection.
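The paper does not reproduce its exact boundary-tracing routine here, but the general idea can be illustrated with a minimal Moore-neighbour trace over a binary hand mask (a common variant; the function name and stopping rule below are illustrative assumptions, not the authors' code):

```python
def trace_boundary(mask):
    """Return the outer boundary of the single foreground blob in
    `mask` (rows of 0/1) as an ordered list of (row, col) pixels.
    Illustrative Moore-neighbour tracing, not the paper's own code."""
    rows, cols = len(mask), len(mask[0])
    # 8-neighbourhood offsets in clockwise order starting from "west".
    nbrs = [(0, -1), (-1, -1), (-1, 0), (-1, 1),
            (0, 1), (1, 1), (1, 0), (1, -1)]
    # Start at the first foreground pixel in raster-scan order.
    start = next((r, c) for r in range(rows)
                 for c in range(cols) if mask[r][c])
    boundary, cur, prev_dir = [start], start, 0
    while True:
        r, c = cur
        for i in range(8):
            d = (prev_dir + i) % 8
            nr, nc = r + nbrs[d][0], c + nbrs[d][1]
            if 0 <= nr < rows and 0 <= nc < cols and mask[nr][nc]:
                cur = (nr, nc)
                # Restart the next clockwise scan just past the
                # direction we arrived from.
                prev_dir = (d + 6) % 8
                break
        else:
            break  # isolated single pixel: nothing more to trace
        if cur == start:
            break  # simple stopping rule: back at the start pixel
        boundary.append(cur)
    return boundary
```

On a small 2x2 blob this yields the four boundary pixels in clockwise order; a real hand silhouette would be traced the same way, pixel by pixel.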
INTRODUCTION
The long-term goal of our research is to enable
communication between visually impaired (i.e., blind) people
on the one hand and hearing- and speech-impaired (i.e., deaf
and mute) people on the other. Since the former cannot see
and the latter use sign language, there is currently no means
of communication between such people, who unfortunately
exist in significantly large numbers in a country such
as India.
RELATED WORK
Many previous works have extracted
certain hand features for finger detection.
Some common features extracted include hand silhouettes
[6], [7], contours [8], key points distributed along hand
(fingertips, joints) [9], [10], [11], [12], [18], [21], and
distance-transformed images [13].
There have also been works where finger detection was
accomplished via color segmentation and contour
extraction [12], [14]. However, this technique requires fine-tuning
every time the system switches to a new user, since skin
complexion varies from person to person.
CONCLUSION AND FUTURE WORK
A boundary-trace-based finger detection technique is
presented, and cusp detection analysis is performed to locate the
fingertips. The algorithm is a simple, efficient and
robust method for locating fingertips, and it enables us to
identify the class of American Sign Language hand gestures
that have open fingers.
The accuracy obtained in this work is sufficient for the
purpose of converting sign language to text and speech, since
a dictionary can be used to correct any spelling errors
resulting from the 5% error rate of our gesture recognition
algorithm.