Enhancing Input On and Above the Interactive Surface with Muscle Sensing
INTRODUCTION
Interactive surfaces extend traditional desktop computing by allowing direct manipulation of objects, drawing on our experiences with the physical world. However, the limited scope of information provided by current tabletop interfaces falls significantly short of the rich gestural capabilities of the human hand. Most systems are unable to differentiate properties such as which finger or person is touching the surface, the amount of pressure exerted, or gestures that occur when not in contact with the surface. These limitations constrain the design space and interaction bandwidth of tabletop systems.
BACKGROUND AND RELATED WORK
We briefly review relevant work on interactive surfaces and provide background on muscle sensing and its use in human-computer interaction.
Interactive Surface Sensing
While most available multi-touch systems are capable of tracking multiple points of user contact with a surface (e.g., [5]), the problem of identifying particular fingers, hands, or hand postures is less well solved. Existing approaches include camera-based sensing, electrostatic coupling, and instrumented gloves.
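To make concrete what "tracking multiple points of contact" involves, the sketch below illustrates one common strategy: assigning persistent IDs to contacts by matching each new frame's touch points to the nearest contact from the previous frame. This is a minimal illustration only, not the method of any cited system; the function name and the distance threshold are arbitrary placeholders.

```python
import math

# Illustrative sketch of per-frame contact tracking via nearest-neighbour
# matching. The 40-pixel threshold is an assumed example value.
MAX_MATCH_DIST = 40.0

def track_contacts(prev_contacts, new_points, next_id):
    """prev_contacts: dict {id: (x, y)}; new_points: list of (x, y).
    Returns (updated dict {id: (x, y)}, next unused id)."""
    updated = {}
    unmatched = list(new_points)
    for cid, (px, py) in prev_contacts.items():
        if not unmatched:
            break
        # Match this existing contact to the closest new point.
        best = min(unmatched, key=lambda p: math.hypot(p[0] - px, p[1] - py))
        if math.hypot(best[0] - px, best[1] - py) <= MAX_MATCH_DIST:
            updated[cid] = best
            unmatched.remove(best)
    # Remaining points are treated as new touch-down events.
    for p in unmatched:
        updated[next_id] = p
        next_id += 1
    return updated, next_id
```

Note that such tracking alone says nothing about which finger, hand, or user produced each contact, which is exactly the gap the approaches above attempt to close.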
Muscle Sensing
In an independent line of work, researchers have demonstrated the feasibility of using forearm electromyography (EMG) to decode fine finger gestures for human-computer interaction [17,18]. EMG measures the electrical signals used by the central nervous system to communicate motor intentions to muscles, as well as the electrical activity associated directly with muscle contractions. We refer the reader to [14] for a thorough description of EMG.
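As a rough illustration of how EMG signals are typically turned into inputs for gesture recognition, the sketch below computes per-channel root-mean-square (RMS) amplitude over short analysis windows, a common feature in the EMG literature that could then feed a classifier. This is an assumed, simplified pipeline, not the specific processing used in [17,18]; the window and step sizes are example values.

```python
import numpy as np

# Sketch of a common EMG feature: windowed RMS amplitude per channel.
def emg_rms_features(emg, window=64, step=32):
    """emg: array of shape (num_samples, num_channels) of raw EMG samples.
    Returns an array of shape (num_windows, num_channels) of RMS features."""
    feats = []
    for start in range(0, emg.shape[0] - window + 1, step):
        seg = emg[start:start + window, :]
        feats.append(np.sqrt(np.mean(seg ** 2, axis=0)))
    return np.array(feats)

# Example usage: 1 s of simulated 8-channel EMG sampled at 1 kHz.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_emg = rng.normal(scale=0.1, size=(1000, 8))
    print(emg_rms_features(fake_emg).shape)  # (num_windows, 8)
```

Features of this kind capture overall muscle activation per electrode; classifying patterns across electrodes is what allows fine finger gestures to be distinguished.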