Posts: 2
Threads: 2
Joined: Jan 2010
Please give me a seminar report on WHEELCHAIR CONTROL USING VOICE SIGNALS FOR DISABLED PATIENTS.
Thank you.
Posts: 247
Threads: 5
Joined: Jan 2010
Relational Interface for a Voice Controlled Wheelchair
Introduction
Traditional joystick interfaces enable analog control of wheelchairs, but many patients find it extremely difficult or impossible to use such a wheelchair for activities of daily living. A voice-controlled wheelchair that understands high-level commands can increase the mobility options for these people and make their lives easier. The field's focus to date seems to have been on getting the control algorithms right rather than on the high-level interface. Existing systems offer modes for high-level control based on user input from a discrete (e.g., voice) or continuous (e.g., joystick) source.
Defining the Problem
Studies reveal that most utterances mapped to "left", "right", and "straight", where the exact goal depended on the situation. There were also commands such as "look left" (turn left without moving) and "follow the wall".
Architecture
The architecture of the system includes a high-level brain module that receives the output of the speech recognition system, decides which routines to apply, and sends appropriate commands to the robot based on applying those routines to the current sensor readings.
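The brain module described above can be sketched as a simple dispatcher. This is a minimal illustration, not the actual system: the command names, sensor fields, and velocity values are all assumptions.

```python
# Hypothetical sketch of the high-level "brain" module: it receives a
# recognized command from the speech system, picks a routine, and emits
# a motor command based on the current sensor readings.

def brain_step(command, sensors):
    """Map a high-level command plus sensor readings to a motor command."""
    routines = {
        "turn_left":  lambda s: {"linear": 0.0, "angular": 0.5},
        "turn_right": lambda s: {"linear": 0.0, "angular": -0.5},
        # Go straight, but stop if an obstacle is closer than 0.5 m.
        "go_straight": lambda s: {
            "linear": 0.0 if s["front_range"] < 0.5 else 0.3,
            "angular": 0.0,
        },
    }
    routine = routines.get(command)
    if routine is None:
        # Unknown or unrecognized command: stop the chair.
        return {"linear": 0.0, "angular": 0.0}
    return routine(sensors)
```

Keeping the routines in a table like this makes it easy to add new behaviors without touching the dispatch logic.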
Robot Simulation:
Take the example of the Player/Stage/Gazebo project, an open source robot simulator and control architecture. The simulator sends sensor output to the Player robot control server and receives commands from it. Because the Player server can talk to a wide variety of robot platforms, moving from the simulator to the real world, or among various real-world platforms, requires little change to the control code.
Language Understanding:
In this project, the Sphinx speech recognizer is used along with Peter Gorniak's speech understanding system to convert the speech signal into user commands. Charniak's parser is used to parse selected transcribed utterances from the study, and Gorniak's utility is then used to generate an initial grammar and dictionary.
Semantic Representation:
The parser converts the user's utterance into a frame-like representation of the motion command. The frame has the following fields:
1) path: Specifies the constraints on the path given in the utterance. Consider "go" vs. "turn".
2) speed: Specifies the speed. Consider "fast" vs. "slow".
3) goal: Specifies the goal given in the utterance. Consider "right" vs. "left" vs. "back".
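The frame above can be sketched as a small Python structure. The field names come from the list; the type, defaults, and example parse are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MotionFrame:
    """Frame-like representation of a parsed motion command."""
    path: Optional[str] = None   # path constraint, e.g. "go" vs. "turn"
    speed: Optional[str] = None  # e.g. "fast" vs. "slow"
    goal: Optional[str] = None   # e.g. "right", "left", "back"

# An utterance like "turn left fast" might parse to:
frame = MotionFrame(path="turn", speed="fast", goal="left")
```

Fields left as None simply mean the utterance did not constrain that aspect of the motion.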
Robot Control:
The robot control system consists of a set of controllers, each performing a different task. Each controller corresponds to a different behavior of the robot. The control algorithm selects the appropriate controller and its arguments based on input from the language understanding system.
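One way to picture the behavior-based control described above is a set of controller classes plus a selection function keyed on the parsed frame. All class names, gains, and thresholds here are illustrative assumptions, not taken from the actual system.

```python
# Sketch: each controller implements step(sensors) -> motor command.

class TurnController:
    """Rotate in place toward the commanded direction."""
    def __init__(self, direction):
        self.direction = direction  # "left" or "right"

    def step(self, sensors):
        sign = 1.0 if self.direction == "left" else -1.0
        return {"linear": 0.0, "angular": sign * 0.4}

class WallFollowController:
    """Drive forward while holding a target distance to the left wall."""
    TARGET = 0.6  # metres; assumed value

    def step(self, sensors):
        error = sensors["left_range"] - self.TARGET
        # Steer back toward the wall proportionally to the error.
        return {"linear": 0.3, "angular": -0.8 * error}

def select_controller(frame):
    """Pick a controller based on the parsed command frame (a dict here)."""
    if frame.get("path") == "follow_wall":
        return WallFollowController()
    if frame.get("goal") in ("left", "right"):
        return TurnController(frame["goal"])
    return None  # no matching behavior
```

The selection function is the bridge between the language understanding output and the low-level behaviors.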
Relational Behavior
When it realizes it is having trouble, the system tries to relate to the user, using beeps for feedback. Confused: it beeps in a confused way when it detects a failure to understand. Happy: it beeps in a happy way the first time it understands after several failures. Sad: it beeps in a sad way after several failures to understand.
[attachment=1486]