Skinput: Appropriating the Body as an Input Surface
Skinput
The Human Arm Touch Screen


Manisha Nair
S8CS
Mohandas College Of Engineering And Technology




Abstract
Devices with significant computational power and capabilities can now be easily carried on our bodies.
However, their small size typically leads to limited interaction space (e.g., diminutive screens, buttons,
and jog wheels) and consequently diminishes their usability and functionality. Since we cannot simply
make buttons and screens larger without losing the primary benefit of small size, we consider alternative
approaches that enhance interactions with small mobile systems. One option is to opportunistically
appropriate surface area from the environment for interactive purposes. One prior technique, for
example, allows a small mobile device to turn the table on which it rests into a gestural finger-input
canvas. However, tables are not always present, and in a mobile context, users are unlikely to want to
carry appropriated surfaces with them (at that point, one might as well just have a larger device).
However, there is one surface that has previously been overlooked as an input canvas and one that
happens to always travel with us: our skin. Appropriating the human body as an input device is appealing
not only because we have roughly two square meters of external surface area, but also because much of it
is easily accessible by our hands (e.g., arms, upper legs, torso). Furthermore, proprioception – our sense
of how our body is configured in three-dimensional space – allows us to accurately interact with our
bodies in an eyes-free manner. For example, we can readily flick each of our fingers, touch the tip of our
nose, and clap our hands together without visual assistance. Few external input devices can claim this
accurate, eyes-free input characteristic and provide such a large interaction area. Skinput, a technology
that appropriates the human body for acoustic transmission, allows the skin to be used as an input
surface. In particular, we resolve the location of finger taps on the arm and hand by analyzing
mechanical vibrations that propagate through the body. We collect these signals using a novel array of
sensors worn as an armband. This approach provides an always-available, naturally portable, on-body
finger input system. We assess the capabilities, accuracy, and limitations of our technique through a
two-part, twenty-participant user study.

Introduction
Touch screens may be popular in both science fiction
and real life as the symbol of next-generation
technology, but an innovation called Skinput suggests
that the true interface of the future might be us. This
technology
was developed by Chris Harrison, a third year Ph.D.
student in Carnegie Mellon University’s Human-
Computer Interaction Institute (HCII), along with
Desney Tan and Dan Morris of Microsoft Research.
A combination of simple bio-acoustic sensors and
some sophisticated machine learning makes it
possible for people to use their fingers or forearms
and potentially any part of their bodies as touch pads
to control smart phones or other mobile devices.
Skinput turns your own body into a touch screen
interface. It uses a different and novel technique: It
“listens” to the vibrations in your body. It could help
people to take better advantage of the tremendous
computing power and various capabilities now
available in compact devices that can be easily worn
or carried. The diminutive size that makes smart
phones, MP3 players and other devices so portable
also severely limits the size, utility and functionality
of the keypads, touch screens and jog wheels
typically used to control them. Thus, we can use our
own skin – the body's largest organ – as an input
canvas, because it always travels with us and makes
an ideal interactive touch surface. It is a
revolutionary input technology which uses the skin as
the tracking surface or the unique input device and
has the potential to change the way humans interact
with electronic gadgets. It is used to control several
mobile devices including a mobile phone and a
portable music player. The Skinput system listens to the
sounds made by tapping on parts of a body and pairs
those sounds with actions that drive tasks on a
computer or cell phone. When coupled with a small
projector, it can simulate a menu interface like the
ones used in other kinds of electronics. Tapping on
different areas of the arm and hand allows users to
scroll through menus and select options. Skinput
could also be used without a visual interface. For
instance, with an MP3 player one doesn’t need a
visual menu to stop, pause, play, advance to the next
track or change the volume. Different areas on the
arm and fingers simulate common commands for
these tasks, and a user could tap them without even
needing to look. Skinput uses an array of sensors to
track where a user taps on the arm, and does so
simply and accurately.
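The pipeline the introduction describes – capture vibrations at the armband, extract features from a tap, and classify its location – can be sketched as follows. This is a toy simplification of our own, not the published implementation (the actual system uses a cantilevered piezo sensor array and a trained machine-learning classifier on much richer features): the band-energy features, nearest-centroid classifier, and synthetic sine-burst "taps" below are all illustrative assumptions.

```python
import numpy as np

def band_energies(segment, n_bands=4):
    """Feature vector: normalized energy in coarse spectral bands
    of a windowed tap segment (normalizing reduces sensitivity to
    how hard the user tapped)."""
    spectrum = np.abs(np.fft.rfft(segment)) ** 2
    feats = np.array([band.sum() for band in np.array_split(spectrum, n_bands)])
    return feats / (feats.sum() + 1e-12)

def classify_tap(features, centroids):
    """Nearest-centroid classifier over per-location training centroids."""
    labels = list(centroids)
    dists = [np.linalg.norm(features - centroids[lbl]) for lbl in labels]
    return labels[int(np.argmin(dists))]

# Hypothetical training data: taps at two skin locations, modeled
# here as windowed bursts with different dominant frequencies.
fs, n = 1000, 256
t = np.arange(n) / fs
make_tap = lambda f: np.sin(2 * np.pi * f * t) * np.hanning(n)
centroids = {
    "wrist":   band_energies(make_tap(60)),
    "forearm": band_energies(make_tap(220)),
}

# A new tap with frequency content near the "forearm" training taps
# is assigned to that location.
print(classify_tap(band_energies(make_tap(210)), centroids))  # -> forearm
```

In the real system the training centroids would be replaced by a classifier fit on many labeled taps per location, but the flow – segment, featurize, match against per-location training data – is the same.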

Primary Goals
Always-Available Input:
The primary goal of Skinput is to provide an always-
available mobile input system – that is, an input
system that does not require a user to carry or pick up
a device. A number of alternative approaches have
been proposed that operate in this space. Techniques
based on computer vision are popular. These,
however, are computationally expensive and error
prone in mobile scenarios (where, e.g., non-input
optical flow is prevalent). Speech input is a logical
choice for always-available input, but is limited in its
precision in unpredictable acoustic environments, and
suffers from privacy and scalability issues in shared
environments. Other approaches have taken the form
of wearable computing. This typically involves a
physical input device built in a form considered to be
part of one’s clothing. For example, glove-based
input systems allow users to retain most of their
natural hand movements, but are cumbersome,
uncomfortable, and disruptive to tactile sensation.
Post and Orth present a “smart fabric” system that
embeds sensors and conductors into fabric, but taking
this approach to always-available input necessitates
embedding technology in all clothing, which would
be prohibitively complex and expensive. The Sixth
Sense project proposes a mobile, always-available
input/output capability by combining projected
information with a color-marker-based vision
tracking system. This approach is feasible, but suffers
from serious occlusion and accuracy limitations.
Determining whether, for example, a finger has
tapped a button or is merely hovering above it is
extraordinarily difficult.

Bio-Sensing:
Skinput leverages the natural acoustic conduction
properties of the human body to provide an input
system, and is thus related to previous work in the
use of biological signals for computer input. Signals
traditionally used for diagnostic medicine, such as
heart rate and skin resistance, have been appropriated
for assessing a user’s emotional state. These features
are generally subconsciously driven and cannot be
controlled with sufficient precision for direct input.
Similarly, brain sensing technologies such as
electroencephalography (EEG) and functional near-
infrared spectroscopy (fNIR) have been used by HCI
researchers to assess cognitive and emotional state;
this work also primarily looked at involuntary
signals. In contrast, brain signals have been harnessed
as a direct input for use by paralyzed patients, but
direct brain-computer interfaces (BCIs) still lack the
bandwidth required for everyday computing tasks,
and require levels of focus, training, and
concentration that are incompatible with typical
computer interaction. Researchers have harnessed the
electrical signals generated by muscle activation
during normal hand movement through
electromyography (EMG). At present, however, this
approach typically requires expensive amplification
systems and the application of conductive gel for
effective signal acquisition, which would limit the
acceptability of this approach for most users. The
input technology most related to our own is that of
Amento et al., who placed contact microphones on a
user's wrist to assess finger movement. However, this
work was never formally evaluated and is constrained
to finger motions on one hand. The Hambone system
employs a similar setup. Moreover, both techniques
required the placement of sensors near the area of
interaction (e.g., the wrist), increasing the degree of
invasiveness and visibility. Finally, bone conduction
microphones and headphones – now common
consumer technologies – represent an additional bio-
sensing technology that is relevant to the present
work. These leverage the fact that sound frequencies
relevant to human speech propagate well through
bone. Bone conduction microphones are typically
worn near the ear, where they can sense vibrations
propagating from the mouth and larynx during
speech. Bone conduction headphones send sound
through the bones of the skull and jaw directly to the
inner ear, bypassing transmission of sound through
the air and outer ear, leaving an unobstructed path for
environmental sounds.

How Skinput Achieves The Goals
Skin:

To expand the range of sensing modalities for always
available input systems, we introduce Skinput, a
novel input technique that allows the skin to be used
as a finger input surface. In our prototype system, we
choose to focus on the arm (although the technique
could be applied elsewhere). This is an attractive area
to appropriate as it provides considerable surface area
for interaction, including a contiguous and flat area
for projection. Appropriating the human body as an
input device is appealing not only because we have
roughly two square meters of external surface area,
but also because much of it is easily accessible by our
hands (e.g., arms, upper legs, torso). Furthermore,
proprioception (our sense of how our body is
configured in three-dimensional space) allows us to
accurately interact with our bodies in an eyes-free
manner. For example, we can readily flick each of
our fingers, touch the tip of our nose, and clap our
hands together without visual assistance. Few
external input devices can claim this accurate, eyes-
free input characteristic and provide such a large
interaction area. Also, the forearm and hand contain
a complex assemblage of bones that increases the
acoustic distinctiveness of different locations. To
capture this acoustic information, we developed a
wearable armband that is non-invasive and easily
removable. In this section, we discuss the mechanical
phenomenon that enables Skinput, with a specific
focus on the mechanical properties of the arm.

Bio-Acoustics:
When a finger taps the skin, several distinct forms of
acoustic energy are produced. Some energy is
radiated into the air as sound waves; this energy is
not captured by the Skinput system. Among the
acoustic energy transmitted through the arm, the most
readily visible are transverse waves, created by the
displacement of the skin from a finger impact. When
filmed with a high-speed camera, these appear as
ripples, which propagate outward from the point of
contact. The amplitude of these ripples is correlated
to both the tapping force and to the volume and
compliance of soft tissues under the impact area. In
general, tapping on soft regions of the arm creates
higher amplitude transverse waves than tapping on
bony areas (e.g., wrist, palm, fingers), which have
negligible compliance. In addition to the energy that
propagates on the surface of the arm, some energy is
transmitted inward, toward the skeleton. These
longitudinal (compressive) waves travel through the
soft tissues of the arm, exciting the bones, which are
much less deformable than the soft tissue but can
respond to mechanical excitation by rotating and
translating as a rigid body. This excitation vibrates
soft tissues surrounding the entire length of the bone,
resulting in new longitudinal waves that propagate
outward to the skin. We highlight these two separate
forms of conduction – transverse waves moving
directly along the arm surface and longitudinal waves
moving into and out of the bone through soft tissues –
because these mechanisms carry energy at different
frequencies and over different distances. Roughly
speaking, higher frequencies propagate more readily
through bone than through soft tissue, and bone
conduction carries energy over larger distances than
soft tissue conduction. While we do not explicitly
model the specific mechanisms of conduction, or
depend on these mechanisms for our analysis, we do
believe the success of our technique depends on the
complex acoustic patterns that result from mixtures
of these modalities. Similarly, we also believe that
joints play an important role in making tapped
locations acoustically distinct. Bones are held
together by ligaments, and joints often include
additional biological structures such as fluid cavities.
This makes joints behave as acoustic filters. In some
cases, these may simply dampen acoustics; in other
cases, these will selectively attenuate specific
frequencies, creating location-specific acoustic
signatures.
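The frequency argument above can be made concrete with a toy model of our own (the frequencies and decay rates below are invented for illustration, not measured tissue parameters): represent each tap response as a damped sinusoid, with soft tissue producing a low-frequency, quickly damped transverse ripple and bone conduction a higher-frequency, longer-ringing component. Even a single coarse feature – the dominant frequency – then separates the two kinds of location.

```python
import numpy as np

def damped_tap(freq_hz, decay, fs=2000, n=512):
    """Model a tap response as an exponentially damped sinusoid.
    freq_hz: dominant vibration frequency; decay: damping rate (1/s)."""
    t = np.arange(n) / fs
    return np.exp(-decay * t) * np.sin(2 * np.pi * freq_hz * t)

def dominant_freq(signal, fs=2000):
    """Return the frequency (Hz) of the strongest spectral component."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    return freqs[int(np.argmax(spectrum))]

# Illustrative parameters only: a fleshy region as a slow, heavily
# damped ripple; a bony region as faster, longer-ringing vibration.
soft = damped_tap(30, decay=60)
bony = damped_tap(250, decay=15)

print(dominant_freq(soft), dominant_freq(bony))
```

A classifier only needs these spectral signatures to differ consistently between locations; it does not need to model the transverse and longitudinal conduction mechanisms themselves, which matches the point made above that Skinput exploits, rather than explicitly models, the mixture of modalities.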
