Skinput: Appropriating the Body as an Input Surface
#1



Abstract

We present Skinput, a technology that appropriates the human body for acoustic transmission, allowing the skin to be used as an input surface. In particular, we resolve the location of finger taps on the arm and hand by analyzing mechanical vibrations that propagate through the body. We collect these signals using a novel array of sensors worn as an armband. This approach provides an always available, naturally portable, and on-body finger input system. We assess the capabilities, accuracy and limitations of our technique through a two-part, twenty-participant user study. To further illustrate the utility of our approach, we conclude with several proof-of-concept applications we developed.


Introduction

Devices with significant computational power and capabilities can now be easily carried on our bodies. However, their small size typically leads to limited interaction space (e.g., diminutive screens, buttons, and jog wheels) and consequently diminishes their usability and functionality. Since we cannot simply make buttons and screens larger without losing the primary benefit of small size, we consider alternative approaches that enhance interactions with small mobile systems.

One option is to opportunistically appropriate surface area from the environment for interactive purposes. For example, [10] describes a technique that allows a small mobile device to turn tables on which it rests into a gestural finger input canvas. However, tables are not always present, and in a mobile context, users are unlikely to want to carry appropriated surfaces with them (at this point, one might as well just have a larger device). Yet there is one surface that has been previously overlooked as an input canvas, and one that happens to always travel with us: our skin.
Appropriating the human body as an input device is appealing not only because we have roughly two square meters of external surface area, but also because much of it is easily accessible by our hands (e.g., arms, upper legs, torso). Furthermore, proprioception – our sense of how our body is configured in three-dimensional space – allows us to accurately interact with our bodies in an eyes-free manner. For example, we can readily flick each of our fingers, touch the tip of our nose, and clap our hands together without visual assistance. Few external input devices can claim this accurate, eyes-free input characteristic and provide such a large interaction area.

For more details, please visit
http://research.microsoft.com/en-us/um/redmond/groups/cue/publications/HarrisonSkinputCHI2010.pdf
Reply
#2
[attachment=5603]
ABSTRACT:
We present Skinput, a technology that appropriates the human body for acoustic transmission, allowing the skin to be used as an input surface. In particular, we resolve the location of finger taps on the arm and hand by analyzing mechanical vibrations that propagate through the body. We collect these signals using a novel array of sensors worn as an armband. This approach provides an always available, naturally portable, and on-body finger input system. We assess the capabilities, accuracy and limitations of our technique through a two-part, twenty-participant user study. To further illustrate the utility of our approach, we conclude
with several proof-of-concept applications we developed.



INTRODUCTION
Devices with significant computational power and capabilities can now be easily carried on our bodies. However, their small size typically leads to limited interaction space (e.g.,
diminutive screens, buttons, and jog wheels) and consequently diminishes their usability and functionality. Since we cannot simply make buttons and screens larger without
losing the primary benefit of small size, we consider alternative approaches that enhance interactions with small mobile systems.
One option is to opportunistically appropriate surface area from the environment for interactive purposes. For example, [10] describes a technique that allows a small mobile
device to turn tables on which it rests into a gestural finger input canvas. However, tables are not always present, and in a mobile context, users are unlikely to want to carry appropriated surfaces with them (at this point, one might as well just have a larger device). Yet there is one surface that has been previously overlooked as an input canvas, and one that happens to always travel with us: our skin. Appropriating the human body as an input device is appealing not only because we have roughly two square meters of external surface area, but also because much of it is easily accessible by our hands (e.g., arms, upper legs, torso). Furthermore, proprioception – our sense of how our body is configured in three-dimensional space – allows us to accurately interact with our bodies in an eyes-free manner. For example, we can readily flick each of our fingers, touch the tip of our nose, and clap our hands together without visual assistance. Few external input devices can claim this accurate, eyes-free input characteristic and provide such a large interaction area. In this paper, we present our work on Skinput – a method that allows the body to be appropriated for finger input using a novel, non-invasive, wearable bio-acoustic sensor.


Reply
#3

Presented By:
AMAN DU

[attachment=6664]


What is it?

A novel input technique that allows the skin to be used as a finger input surface.

To capture this acoustic information, they developed a wearable armband that is non-invasive and easily removable.

How Skinput works


Data was then sent from the client over a local socket to our primary application, written in Java.

Key functions of the application (a rough sketch follows the list) are:

Live visualization.
Segmentation of the data stream.
Classification of input instances.
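
Below is a minimal sketch of what the receiving end of such a pipeline could look like. This is our illustration only: the port number, frame format, threshold, and class names (SkinputClient and so on) are invented for the example and are not the authors' actual code.

[code]
import java.io.DataInputStream;
import java.net.Socket;

// Hypothetical sketch of the receiving end of the pipeline: a sensor
// client streams per-channel samples over a local socket; frames whose
// amplitude crosses a threshold are handed to a (stubbed) classifier.
public class SkinputClient {
    static final int CHANNELS = 10;          // ten sensing channels, per the paper
    static final double TAP_THRESHOLD = 0.2; // assumed value, tuned empirically

    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("localhost", 9000); // port is an assumption
             DataInputStream in = new DataInputStream(socket.getInputStream())) {
            while (true) {
                double[] frame = new double[CHANNELS];
                for (int c = 0; c < CHANNELS; c++) {
                    frame[c] = in.readDouble(); // one sample per channel
                }
                if (maxAbs(frame) > TAP_THRESHOLD) {
                    System.out.println("Tap candidate: " + classify(frame));
                }
            }
        }
    }

    static double maxAbs(double[] frame) {
        double m = 0;
        for (double v : frame) m = Math.max(m, Math.abs(v));
        return m;
    }

    // Stub: a real system would extract features over a whole tap window
    // and query a trained classifier.
    static String classify(double[] frame) {
        return "unknown location";
    }
}
[/code]

A real implementation would buffer a window of samples around each threshold crossing and classify the whole window, rather than looking at single frames.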
Reply
#4
Skinput
The Human Arm Touchscreen


Manisha Nair
S8CS
Mohandas College of Engineering and Technology


[attachment=10149]


Abstract
Devices with significant computational power and capabilities can now be easily carried on our bodies.
However, their small size typically leads to limited interaction space (e.g., diminutive screens, buttons,
and jog wheels) and consequently diminishes their usability and functionality. Since we cannot simply
make buttons and screens larger without losing the primary benefit of small size, we consider alternative
approaches that enhance interactions with small mobile systems. One option is to opportunistically
appropriate surface area from the environment for interactive purposes. For example, it describes a
technique that allows a small mobile device to turn tables on which it rests into a gestural finger input
canvas. However, tables are not always present, and in a mobile context, users are unlikely to want to
carry appropriated surfaces with them (at this point, one might as well just have a larger device).
However, there is one surface that has been previously overlooked as an input canvas and one that
happens to always travel with us: our skin. Appropriating the human body as an input device is appealing
not only because we have roughly two square meters of external surface area, but also because much of it
is easily accessible by our hands (e.g., arms, upper legs, torso). Furthermore, proprioception – our sense
of how our body is configured in three-dimensional space – allows us to accurately interact with our
bodies in an eyes-free manner. For example, we can readily flick each of our fingers, touch the tip of our
nose, and clap our hands together without visual assistance. Few external input devices can claim this
accurate, eyes-free input characteristic and provide such a large interaction area. Skinput, a technology
that appropriates the human body for acoustic transmission, allows the skin to be used as an input
surface. In particular, we resolve the location of finger taps on the arm and hand by analyzing
mechanical vibrations that propagate through the body. We collect these signals using a novel array of
sensors worn as an armband. This approach provides an always available, naturally portable, and on-
body finger input system. We assess the capabilities, accuracy and limitations of our technique through a
two-part, twenty-participant user study.

Introduction
Touch screens may be popular both in science fiction
and real life as the symbol of next-gen technology but
an innovation called Skinput suggests the true
interface of the future might be us. This technology
was developed by Chris Harrison, a third year Ph.D.
student in Carnegie Mellon University’s Human-
Computer Interaction Institute (HCII), along with
Desney Tan and Dan Morris of Microsoft Research.
A combination of simple bio-acoustic sensors and
some sophisticated machine learning makes it
possible for people to use their fingers or forearms
and potentially any part of their bodies as touch pads
to control smart phones or other mobile devices.
Skinput turns your own body into a touch screen
interface. It uses a different and novel technique: It
“listens” to the vibrations in your body. It could help
people to take better advantage of the tremendous
computing power and various capabilities now
available in compact devices that can be easily worn
or carried. The diminutive size that makes smart
phones, MP3 players and other devices so portable
also severely limits the size, utility and functionality
of the keypads, touch screens and jog wheels
typically used to control them. Thus, we can use our
own skin, the body's largest organ, as an input canvas,
because it always travels with us and makes the
ultimate interactive touch surface. It is a
revolutionary input technology which uses the skin as
the tracking surface or the unique input device and
has the potential to change the way humans interact
with electronic gadgets. It is used to control several
mobile devices including a mobile phone and a
portable music player. The Skinput system listens to the
sounds made by tapping on parts of a body and pairs
those sounds with actions that drive tasks on a
computer or cell phone. When coupled with a small
projector, it can simulate a menu interface like the
ones used in other kinds of electronics. Tapping on
different areas of the arm and hand allows users to
scroll through menus and select options. Skinput
could also be used without a visual interface. For
instance, with an MP3 player one doesn’t need a
visual menu to stop, pause, play, advance to the next
track or change the volume. Different areas on the
arm and fingers simulate common commands for
these tasks, and a user could tap them without even
needing to look. Skinput uses a series of sensors to
track where a user taps on his or her arm. The system
is both simple and accurate.

Primary Goals
Always-Available Input:
The primary goal of Skinput is to provide an always
available mobile input system – that is, an input
system that does not require a user to carry or pick up
a device. A number of alternative approaches have
been proposed that operate in this space. Techniques
based on computer vision are popular. These,
however, are computationally expensive and error
prone in mobile scenarios (where, e.g., non-input
optical flow is prevalent). Speech input is a logical
choice for always-available input, but is limited in its
precision in unpredictable acoustic environments, and
suffers from privacy and scalability issues in shared
environments. Other approaches have taken the form
of wearable computing. This typically involves a
physical input device built in a form considered to be
part of one’s clothing. For example, glove-based
input systems allow users to retain most of their
natural hand movements, but are cumbersome,
uncomfortable, and disruptive to tactile sensation.
Post and Orth present a “smart fabric” system that
embeds sensors and conductors into fabric, but taking
this approach to always-available input necessitates
embedding technology in all clothing, which would
be prohibitively complex and expensive. The SixthSense
project proposes a mobile, always available
input/output capability by combining projected
information with a color-marker-based vision
tracking system. This approach is feasible, but suffers
from serious occlusion and accuracy limitations. For
example, determining whether, e.g., a finger has
tapped a button, or is merely hovering above it, is
extraordinarily difficult.

Bio-Sensing:
Skinput leverages the natural acoustic conduction
properties of the human body to provide an input
system, and is thus related to previous work in the
use of biological signals for computer input. Signals
traditionally used for diagnostic medicine, such as
heart rate and skin resistance, have been appropriated
for assessing a user’s emotional state. These features
are generally subconsciously driven and cannot be
controlled with sufficient precision for direct input.
Similarly, brain sensing technologies such as
electroencephalography (EEG) and functional near-
infrared spectroscopy (fNIR) have been used by HCI
researchers to assess cognitive and emotional state;
this work also primarily looked at involuntary
signals. In contrast, brain signals have been harnessed
as a direct input for use by paralyzed patients, but
direct brain-computer interfaces (BCIs) still lack the
bandwidth required for everyday computing tasks,
and require levels of focus, training, and
concentration that are incompatible with typical
computer interaction. Researchers have harnessed the
electrical signals generated by muscle activation
during normal hand movement through
electromyography (EMG). At present, however, this
approach typically requires expensive amplification
systems and the application of conductive gel for
effective signal acquisition, which would limit the
acceptability of this approach for most users. The
input technology most related to our own is that of
Amento et al., who placed contact microphones on a
user’s wrist to assess finger movement. However, this
work was never formally evaluated, and is constrained
to finger motions in one hand. The Hambone system
employs a similar setup. Moreover, both techniques
required the placement of sensors near the area of
interaction (e.g., the wrist), increasing the degree of
invasiveness and visibility. Finally, bone conduction
microphones and headphones – now common
consumer technologies - represent an additional bio-
sensing technology that is relevant to the present
work. These leverage the fact that sound frequencies
relevant to human speech propagate well through
bone. Bone conduction microphones are typically
worn near the ear, where they can sense vibrations
propagating from the mouth and larynx during
speech. Bone conduction headphones send sound
through the bones of the skull and jaw directly to the
inner ear, bypassing transmission of sound through
the air and outer ear, leaving an unobstructed path for
environmental sounds.

How Skinput Achieves The Goals
Skin:

To expand the range of sensing modalities for always
available input systems, we introduce Skinput, a
novel input technique that allows the skin to be used
as a finger input surface. In our prototype system, we
choose to focus on the arm (although the technique
could be applied elsewhere). This is an attractive area
to appropriate as it provides considerable surface area
for interaction, including a contiguous and flat area
for projection. Appropriating the human body as an
input device is appealing not only because we have
roughly two square meters of external surface area,
but also because much of it is easily accessible by our
hands (e.g., arms, upper legs, torso). Furthermore,
proprioception (our sense of how our body is
configured in three-dimensional space) allows us to
accurately interact with our bodies in an eyes-free
manner. For example, we can readily flick each of
our fingers, touch the tip of our nose, and clap our
hands together without visual assistance. Few
external input devices can claim this accurate, eyes-
free input characteristic and provide such a large
interaction area. Also the forearm and hands contain
a complex assemblage of bones that increases
acoustic distinctiveness of different locations. To
capture this acoustic information, we developed a
wearable armband that is non-invasive and easily
removable. In this section, we discuss the mechanical
phenomena that enable Skinput, with a specific
focus on the mechanical properties of the arm.

Bio-Acoustics:
When a finger taps the skin, several distinct forms of
acoustic energy are produced. Some energy is
radiated into the air as sound waves; this energy is
not captured by the Skinput system. Among the
acoustic energy transmitted through the arm, the most
readily visible are transverse waves, created by the
displacement of the skin from a finger impact. When
shot with a high-speed camera, these appear as
ripples, which propagate outward from the point of
contact. The amplitude of these ripples is correlated
to both the tapping force and to the volume and
compliance of soft tissues under the impact area. In
general, tapping on soft regions of the arm creates
higher amplitude transverse waves than tapping on
boney areas (e.g., wrist, palm, fingers), which have
negligible compliance. In addition to the energy that
propagates on the surface of the arm, some energy is
transmitted inward, toward the skeleton. These
longitudinal (compressive) waves travel through the
soft tissues of the arm, exciting the bones, which are
much less deformable than the soft tissue but can
respond to mechanical excitation by rotating and
translating as a rigid body. This excitation vibrates
soft tissues surrounding the entire length of the bone,
resulting in new longitudinal waves that propagate
outward to the skin. We highlight these two separate
forms of conduction – transverse waves moving
directly along the arm surface and longitudinal waves
moving into and out of the bone through soft tissues
because these mechanisms carry energy at different
frequencies and over different distances. Roughly
speaking, higher frequencies propagate more readily
through bone than through soft tissue, and bone
conduction carries energy over larger distances than
soft tissue conduction. While we do not explicitly
model the specific mechanisms of conduction, or
depend on these mechanisms for our analysis, we do
believe the success of our technique depends on the
complex acoustic patterns that result from mixtures
of these modalities. Similarly, we also believe that
joints play an important role in making tapped
locations acoustically distinct. Bones are held
together by ligaments, and joints often include
additional biological structures such as fluid cavities.
This makes joints behave as acoustic filters. In some
cases, these may simply dampen acoustics; in other
cases, these will selectively attenuate specific
frequencies, creating location specific acoustic
signatures.

Reply
#5
[attachment=10345]
INTRODUCTION
Devices with significant computational power and capabilities can now be easily carried on our bodies. However, their small size typically leads to limited interaction space (e.g., diminutive screens, buttons, and jog wheels) and consequently diminishes their usability and functionality. Since we cannot simply make buttons and screens larger without losing the primary benefit of small size, we consider alternative approaches that enhance interactions with small mobile systems. One option is to opportunistically appropriate surface area from the environment for interactive purposes. For example, [10] describes a technique that allows a small mobile device to turn tables on which it rests into a gestural finger input canvas. However, tables are not always present, and in a mobile context, users are unlikely to want to carry appropriated surfaces with them (at this point, one might as well just have a larger device). Yet there is one surface that has been previously overlooked as an input canvas, and one that happens to always travel with us: our skin.
Appropriating the human body as an input device is appealing not only because we have roughly two square meters of external surface area, but also because much of it is easily accessible by our hands (e.g., arms, upper legs, torso). Furthermore, proprioception – our sense of how our body is configured in three-dimensional space – allows us to accurately interact with our bodies in an eyes-free manner. For example, we can readily flick each of our fingers, touch the tip of our nose, and clap our hands together without visual assistance. Few external input devices can claim this accurate, eyes-free input characteristic and provide such a large interaction area.
In this paper, we present our work on Skinput – a method that allows the body to be appropriated for finger input using a novel, non-invasive, wearable bio-acoustic sensor.
The contributions of this paper are:
1) We describe the design of a novel, wearable sensor for bio-acoustic signal acquisition (Figure 1).
2) We describe an analysis approach that enables our system to resolve the location of finger taps on the body.
3) We assess the robustness and limitations of this system through a user study.
4) We explore the broader space of bio-acoustic input through prototype applications and additional experimentation.
RELATED WORK
Always-Available Input

The primary goal of Skinput is to provide an always-available mobile input system – that is, an input system that does not require a user to carry or pick up a device. A number of alternative approaches have been proposed that operate in this space. Techniques based on computer vision are popular (e.g. [3,26,27], see [7] for a recent survey). These, however, are computationally expensive and error prone in mobile scenarios (where, e.g., non-input optical flow is prevalent). Speech input (e.g. [13,15]) is a logical choice for always-available input, but is limited in its precision in unpredictable acoustic environments, and suffers from privacy and scalability issues in shared environments.
Other approaches have taken the form of wearable computing. This typically involves a physical input device built in a form considered to be part of one’s clothing. For example, glove-based input systems (see [25] for a review) allow users to retain most of their natural hand movements, but are cumbersome, uncomfortable, and disruptive to tactile sensation. Post and Orth [22] present a “smart fabric” system that embeds sensors and conductors into fabric, but taking this approach to always-available input necessitates embedding technology in all clothing, which would be prohibitively complex and expensive.
The SixthSense project [19] proposes a mobile, alwaysavailable input/output capability by combining projected information with a color-marker-based vision tracking system. This approach is feasible, but suffers from serious occlusion and accuracy limitations. For example, determining whether, e.g., a finger has tapped a button, or is merely hovering above it, is extraordinarily difficult. In the present work, we briefly explore the combination of on-body sensing with on-body projection.
Bio-Sensing
Skinput leverages the natural acoustic conduction properties of the human body to provide an input system, and is thus related to previous work in the use of biological signals for computer input. Signals traditionally used for diagnostic medicine, such as heart rate and skin resistance, have been appropriated for assessing a user’s emotional state (e.g. [16,17,20]). These features are generally subconsciously driven and cannot be controlled with sufficient precision for direct input. Similarly, brain sensing technologies such as electroencephalography (EEG) and functional near-infrared spectroscopy (fNIR) have been used by HCI researchers to assess cognitive and emotional state (e.g. [9,11,14]); this work also primarily looked at involuntary signals. In contrast, brain signals have been harnessed as a direct input for use by paralyzed patients (e.g. [8,18]), but direct brain-computer interfaces (BCIs) still lack the bandwidth required for everyday computing tasks, and require levels of focus, training, and concentration that are incompatible with typical computer interaction.
There has been less work relating to the intersection of finger input and biological signals. Researchers have harnessed the electrical signals generated by muscle activation during normal hand movement through electromyography (EMG) (e.g. [23,24]). At present, however, this approach typically requires expensive amplification systems and the application of conductive gel for effective signal acquisition, which would limit the acceptability of this approach for most users.
The input technology most related to our own is that of Amento et al. [2], who placed contact microphones on a user’s wrist to assess finger movement. However, this work was never formally evaluated, and is constrained to finger motions in one hand. The Hambone system [6] employs a similar setup, and through an HMM, yields classification accuracies around 90% for four gestures (e.g., raise heels, snap fingers). Performance of false positive rejection remains untested in both systems at present. Moreover, both techniques required the placement of sensors near the area of interaction (e.g., the wrist), increasing the degree of invasiveness and visibility.
Finally, bone conduction microphones and headphones – now common consumer technologies - represent an additional bio-sensing technology that is relevant to the present work. These leverage the fact that sound frequencies relevant to human speech propagate well through bone. Bone conduction microphones are typically worn near the ear, where they can sense vibrations propagating from the mouth and larynx during speech. Bone conduction headphones send sound through the bones of the skull and jaw directly to the inner ear, bypassing transmission of sound
through the air and outer ear, leaving an unobstructed path for environmental sounds.
Acoustic Input
Our approach is also inspired by systems that leverage acoustic transmission through (non-body) input surfaces. Paradiso et al. [21] measured the arrival time of a sound at multiple sensors to locate hand taps on a glass window. Ishii et al. [12] use a similar approach to localize a ball hitting a table, for computer augmentation of a real-world game. Both of these systems use acoustic time-of-flight for localization, which we explored, but found to be insufficiently robust on the human body, leading to the fingerprinting approach described in this paper.
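To make the contrast concrete, here is a minimal fingerprinting sketch in Java. It reduces each tap to a per-channel energy profile and matches it against labeled training taps by nearest neighbor; the real system uses a much richer feature set and a trained classifier, so the feature choice and class names here are our illustrative assumptions, not the paper's implementation.

[code]
import java.util.ArrayList;
import java.util.List;

// Minimal fingerprinting sketch: match a tap's per-channel energy
// profile against labeled training taps by nearest neighbor.
// Features and classifier are illustrative, not the paper's own.
public class TapFingerprinter {
    private final List<double[]> templates = new ArrayList<>();
    private final List<String> labels = new ArrayList<>();

    // Per-channel energy of one tap window: a crude acoustic "fingerprint".
    // tapWindow is indexed as [channel][sample].
    static double[] features(double[][] tapWindow) {
        double[] f = new double[tapWindow.length];
        for (int c = 0; c < tapWindow.length; c++)
            for (double s : tapWindow[c]) f[c] += s * s;
        return f;
    }

    void train(double[][] tapWindow, String location) {
        templates.add(features(tapWindow));
        labels.add(location);
    }

    String classify(double[][] tapWindow) {
        double[] f = features(tapWindow);
        int best = 0;
        double bestDist = Double.MAX_VALUE;
        for (int i = 0; i < templates.size(); i++) {
            double d = 0;
            for (int c = 0; c < f.length; c++) {
                double diff = f[c] - templates.get(i)[c];
                d += diff * diff;
            }
            if (d < bestDist) { bestDist = d; best = i; }
        }
        return labels.get(best);
    }
}
[/code]

The key design point is that fingerprinting only asks "which known tap does this most resemble?", which tolerates the body's messy, location-dependent acoustics far better than solving for position from arrival times.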
SKINPUT
To expand the range of sensing modalities for always available input systems, we introduce Skinput, a novel input technique that allows the skin to be used as a finger input surface. In our prototype system, we choose to focus on the arm (although the technique could be applied elsewhere). This is an attractive area to appropriate as it provides considerable surface area for interaction, including a contiguous and flat area for projection (discussed subsequently).
Furthermore, the forearm and hands contain a complex assemblage of bones that increases acoustic distinctiveness of different locations. To capture this acoustic information, we developed a wearable armband that is non-invasive and easily removable (Figures 1 and 5).
In this section, we discuss the mechanical phenomena that enable Skinput, with a specific focus on the mechanical properties of the arm. Then we will describe the Skinput sensor and the processing techniques we use to segment, analyze, and classify bio-acoustic signals.
Bio-Acoustics
When a finger taps the skin, several distinct forms of acoustic energy are produced. Some energy is radiated into the air as sound waves; this energy is not captured by the Skinput system. Among the acoustic energy transmitted through the arm, the most readily visible are transverse waves, created by the displacement of the skin from a finger impact (Figure 2). When shot with a high-speed camera, these appear as ripples, which propagate outward from the point of contact (see video). The amplitude of these ripples is correlated to both the tapping force and to the volume and compliance of soft tissues under the impact area. In general, tapping on soft regions of the arm creates higher amplitude transverse waves than tapping on boney areas (e.g., wrist, palm, fingers), which have negligible compliance.
In addition to the energy that propagates on the surface of the arm, some energy is transmitted inward, toward the skeleton (Figure 3). These longitudinal (compressive) waves travel through the soft tissues of the arm, exciting the bone, which is much less deformable than the soft tissue but can respond to mechanical excitation by rotating and translating as a rigid body. This excitation vibrates soft tissues surrounding the entire length of the bone, resulting in new longitudinal waves that propagate outward to the skin.
We highlight these two separate forms of conduction – transverse waves moving directly along the arm surface, and longitudinal waves moving into and out of the bone through soft tissues – because these mechanisms carry energy at different frequencies and over different distances. Roughly speaking, higher frequencies propagate more readily through bone than through soft tissue, and bone conduction carries energy over larger distances than soft tissue conduction. While we do not explicitly model the specific mechanisms of conduction, or depend on these mechanisms for our analysis, we do believe the success of our technique depends on the complex acoustic patterns that result from mixtures of these modalities.
Similarly, we also believe that joints play an important role in making tapped locations acoustically distinct. Bones are held together by ligaments, and joints often include additional biological structures such as fluid cavities. This makes joints behave as acoustic filters. In some cases, these may simply dampen acoustics; in other cases, these will selectively attenuate specific frequencies, creating location specific acoustic signatures.
Figure 2. Transverse wave propagation: Finger impacts displace the skin, creating transverse waves (ripples). The sensor is activated as the wave passes underneath it.
Figure 3. Longitudinal wave propagation: Finger impacts create longitudinal (compressive) waves that cause internal skeletal structures to vibrate. This, in turn, creates longitudinal waves that emanate outwards from the bone (along its entire length) toward the skin.
Sensing
To capture the rich variety of acoustic information described in the previous section, we evaluated many sensing technologies, including bone conduction microphones, conventional
microphones coupled with stethoscopes [10], piezo contact microphones [2], and accelerometers. However, these transducers were engineered for very different applications than measuring acoustics transmitted through the human body. As such, we found them to be lacking in several
significant ways. Foremost, most mechanical sensors are engineered to provide relatively flat response curves over the range of frequencies that is relevant to our signal. This is a desirable property for most applications where a faithful representation of an input signal – uncolored by the properties of the transducer – is desired. However, because only a specific set of frequencies is conducted through the arm in response to tap input, a flat response curve leads to the capture of irrelevant frequencies and thus to a low signal-to-noise ratio.
While bone conduction microphones might seem a suitable choice for Skinput, these devices are typically engineered for capturing human voice, and filter out energy below the range of human speech (whose lowest frequency is around 85Hz). Thus most sensors in this category were not especially sensitive to lower-frequency signals (e.g., 25Hz), which we found in our empirical pilot studies to be vital in characterizing finger taps.
To overcome these challenges, we moved away from a single sensing element with a flat response curve, to an array of highly tuned vibration sensors. Specifically, we employ small, cantilevered piezo films (MiniSense100, Measurement Specialties, Inc.). By adding small weights to the end of the cantilever, we are able to alter the resonant frequency, allowing the sensing element to be responsive to a unique, narrow, low-frequency band of the acoustic spectrum. Adding more mass lowers the range of excitation to which a sensor responds; we weighted each element such that it aligned with particular frequencies that pilot studies showed to be useful in characterizing bio-acoustic input.
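As a back-of-the-envelope model (our addition, not a formula from the paper): treating each weighted cantilever as a spring–mass oscillator with stiffness k, effective moving mass m, and added tip weight mt, its resonant frequency is

f0 = (1 / 2π) · √( k / (m + mt) )

so, for example, a weight that quadruples the total moving mass halves f0. This is why progressively heavier weights tune the sensing elements to progressively lower frequency bands.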
Reply
#6
[attachment=12193]

Skinput


Presented By,
Aruna Arali


INTRODUCTION

Skinput uses our skin as a medium for controlling a computer or other electronic gadgets.
Much of the external surface area of the human body is easily accessible by our hands.
Uses the concept of proprioception, our sense of how our body is configured in three-dimensional space, which allows us to accurately interact with our bodies in an eyes-free manner.
WHAT IS SKINPUT?
Giving input through skin.
It was developed by Chris Harrison (Carnegie Mellon University), along with Desney Tan and Dan Morris of Microsoft Research.
It can allow users to simply tap their skin to control audio devices, make phone calls, and navigate browsing systems.
TOUCHSCREEN vs SKINPUT





PRINCIPLES OF SKINPUT

It “listens” to the vibrations in our body.
“Skinput” also responds to various hand gestures.
The arm is an instrument.
CONDUCTIVE BODY PAINT

Our body is painted with a specially formulated ink that acts as a medium to send information from one person to another, or to transmit data from a person to a computer.
Our gestures, movements or touch allow us to communicate with electronic devices directly.
The ink used is non-toxic and water soluble and is safe for skin application.
HOW IT WORKS?

It needs a Bluetooth connection.
It uses a microchip-sized pico projector to display the menu.
Uses an acoustic detector to detect sound vibrations.
SKINPUT INTERFACE INPUT ARMBAND

When we tap our skin with our finger we generate
transverse waves that the sensor arrays can pick up. With high-speed
photography we can actually see these waves as they ripple outward
from the finger tap, like the ripples formed by a stone thrown into a
pond.




You also generate compressive waves that travel through
the arm tissues until they reach the bone. The bone then acts as a
radiator, retransmitting new longitudinal waves that propagate
outward until they reach the sensor array (where they can be
measured, too).


Advanced signal filtering and detection techniques are
then applied to the sensors' ten channels of input to identify
which spot was tapped, turning the tap into a usable,
uniquely identifiable input signal.
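
As a toy illustration of this stage (our sketch, not the actual Skinput filter chain; the smoothing factor and threshold below are invented), each channel can be rectified and smoothed with an exponential moving average, with a tap flagged when the pooled envelope rises through a threshold:

[code]
// Toy onset detector over multi-channel input: rectify each channel,
// smooth it with an exponential moving average, and flag the rising
// edge when the pooled envelope crosses a threshold.
// ALPHA and ONSET are invented for illustration.
public class OnsetDetector {
    static final double ALPHA = 0.05; // smoothing factor, assumed
    static final double ONSET = 0.3;  // detection threshold, assumed
    private final double[] envelope;
    private boolean inTap = false;

    OnsetDetector(int channels) {
        envelope = new double[channels];
    }

    // Feed one sample per channel; returns true at the start of a tap.
    boolean step(double[] sample) {
        double sum = 0;
        for (int c = 0; c < envelope.length; c++) {
            envelope[c] += ALPHA * (Math.abs(sample[c]) - envelope[c]);
            sum += envelope[c];
        }
        boolean above = sum / envelope.length > ONSET;
        boolean onset = above && !inTap; // rising edge only
        inTap = above;
        return onset;
    }
}
[/code]

Deciding which location was tapped would then fall to a classifier over the ten channels, as in the fingerprinting sketch earlier in this thread.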
OTHER APPROACHES

Glove-based input system:

Allows users to retain most of their natural hand movements, but is cumbersome and uncomfortable.

Smart-fabric system:

Embeds sensors and conductors into fabric, but is a complex and expensive approach.
ADVANTAGES

Interact with the gadget directly.
No need to worry about a keypad.
People with larger fingers have trouble navigating the tiny buttons and keyboards on mobile phones; with Skinput, that problem disappears.
APPLICATIONS

Computer interfaces.
Communication devices, such as mobile phones.
Medical devices.
Diagnostic medicine, such as using heart rate and skin
resistance to assess a user’s emotional state.
LIMITATIONS

Though the band seems easy enough to slip on, it’s highly unlikely that most people will want it residing on their arms all day. Plus, unless you already use a Bluetooth device you’ll still have to reach for your cell phone to take that call.
Skinput is not available yet, but could be in the next few years.
CONCLUSION

Skinput technology provides an always-available mobile input system that does not require a user to carry or pick up a device.
Using Skinput technology, the human body can be appropriated as an input surface for a range of devices; the system performs very well for a series of gestures, even when the body is in motion.







Reply
#7
[attachment=12393]
What is Skinput Technology?
Developed by Chris Harrison, a Ph.D. student at Carnegie Mellon University, with Desney Tan and Dan Morris at Microsoft Research in Redmond, Washington.
A new prototype that allows you to use your skin as both a touch screen and an input device.
Marriage of two technologies
The ability to detect ultra-low-frequency sound.
The microchip-sized "pico" projectors.
PRINCIPLE OF SKINPUT
Tapping on different parts of your arm creates distinct vibrations.
It uses a series of sensors to determine where the user taps on the arm.
Responds to various hand gestures.
How it works….
Tapping the “buttons” causes ripples to run through your skin and bones.
An acoustic detector, in the armband, calculates which part of the display is to be activated.
Advantages….
Allows your skin to become your touch screen.
More accuracy than we have ever had with a mouse.
Removes trouble in navigating tiny buttons and keyboards on mobile phones.
ADVANTAGES CONTD….
Complete and portable system.
Projected interface can appear much larger than it ever could on a device’s screen.
If your skin and the projected text are too similar in color in daylight, dimming the lights creates a greater contrast.
DISADVANTAGES….
Skinput may be impractical to use on the move.
Unless you already use a Bluetooth device you’ll still have to reach for your cell phone to take that call.
It is inconvenient to wear the band on the arm all day.
CONCLUSION
Provides an always-available mobile input system.

Reply
#8
Submitted by:-
ANUJ KUMAR

[attachment=12764]
INTRODUCTION
Devices with significant computational power and capabilities can now be easily carried on our bodies. However, their small size typically leads to limited interaction space (e.g., diminutive screens, buttons, and jog wheels) and consequently diminishes their usability and functionality. Since we cannot simply make buttons and screens larger without losing the primary benefit of small size, we consider alternative approaches that enhance interactions with small mobile systems. One option is to opportunistically appropriate surface area from the environment for interactive purposes. For example, [10] describes a technique that allows a small mobile device to turn tables on which it rests into a gestural finger input canvas. However, tables are not always present, and in a mobile context, users are unlikely to want to carry appropriated surfaces with them (at this point, one might as well just have a larger device). Yet there is one surface that has been previously overlooked as an input canvas, and one that happens to always travel with us: our skin.
Appropriating the human body as an input device is appealing not only because we have roughly two square meters of external surface area, but also because much of it is easily accessible by our hands (e.g., arms, upper legs, torso). Furthermore, proprioception – our sense of how our body is configured in three-dimensional space – allows us to accurately interact with our bodies in an eyes-free manner. For example, we can readily flick each of our fingers, touch the tip of our nose, and clap our hands together without visual assistance. Few external input devices can claim this accurate, eyes-free input characteristic and provide such a large interaction area.
In this paper, we present our work on Skinput – a method that allows the body to be appropriated for finger input using a novel, non-invasive, wearable bio-acoustic sensor.
The contributions of this paper are:
1) We describe the design of a novel, wearable sensor for bio-acoustic signal acquisition (Figure 1).
2) We describe an analysis approach that enables our system to resolve the location of finger taps on the body.
3) We assess the robustness and limitations of this system through a user study.
Figure 1. A wearable, bio-acoustic sensing array built into an armband. Sensing elements detect vibrations transmitted through the body. The two sensor packages shown above each contain five, specially weighted, cantilevered piezo films, responsive to a particular frequency range.
4) We explore the broader space of bio-acoustic input through prototype applications and additional experimentation.
RELATED WORK
Always-Available Input
The primary goal of Skinput is to provide an always-available mobile input system – that is, an input system that does not require a user to carry or pick up a device. A number of alternative approaches have been proposed that operate in this space. Techniques based on computer vision are popular (e.g. [3,26,27], see [7] for a recent survey). These, however, are computationally expensive and error prone in mobile scenarios (where, e.g., non-input optical flow is prevalent). Speech input (e.g. [13,15]) is a logical choice for always-available input, but is limited in its precision in unpredictable acoustic environments, and suffers from privacy and scalability issues in shared environments.
Other approaches have taken the form of wearable computing. This typically involves a physical input device built in a form considered to be part of one’s clothing. For example, glove-based input systems (see [25] for a review) allow users to retain most of their natural hand movements, but are cumbersome, uncomfortable, and disruptive to tactile sensation. Post and Orth [22] present a “smart fabric” system that embeds sensors and conductors into fabric, but taking this approach to always-available input necessitates embedding technology in all clothing, which would be prohibitively complex and expensive.
The SixthSense project [19] proposes a mobile, alwaysavailable input/output capability by combining projected information with a color-marker-based vision tracking system. This approach is feasible, but suffers from serious occlusion and accuracy limitations. For example, determining whether, e.g., a finger has tapped a button, or is merely hovering above it, is extraordinarily difficult. In the present work, we briefly explore the combination of on-body sensing with on-body projection.
Bio-Sensing
Skinput leverages the natural acoustic conduction properties of the human body to provide an input system, and is thus related to previous work in the use of biological signals for computer input. Signals traditionally used for diagnostic medicine, such as heart rate and skin resistance, have been appropriated for assessing a user’s emotional state (e.g. [16,17,20]). These features are generally subconsciously driven and cannot be controlled with sufficient precision for direct input. Similarly, brain sensing technologies such as electroencephalography (EEG) and functional near-infrared spectroscopy (fNIR) have been used by HCI researchers to assess cognitive and emotional state (e.g. [9,11,14]); this work also primarily looked at involuntary signals. In contrast, brain signals have been harnessed as a direct input for use by paralyzed patients (e.g. [8,18]), but direct brain-computer interfaces (BCIs) still lack the bandwidth required for everyday computing tasks, and require levels of focus, training, and concentration that are incompatible with typical computer interaction.
There has been less work relating to the intersection of finger input and biological signals. Researchers have harnessed the electrical signals generated by muscle activation during normal hand movement through electromyography (EMG) (e.g. [23,24]). At present, however, this approach typically requires expensive amplification systems and the application of conductive gel for effective signal acquisition, which would limit the acceptability of this approach for most users.
The input technology most related to our own is that of Amento et al. [2], who placed contact microphones on a user’s wrist to assess finger movement. However, this work was never formally evaluated, and is constrained to finger motions in one hand. The Hambone system [6] employs a similar setup, and through an HMM, yields classification accuracies around 90% for four gestures (e.g., raise heels, snap fingers). Performance of false positive rejection remains untested in both systems at present. Moreover, both techniques required the placement of sensors near the area of interaction (e.g., the wrist), increasing the degree of invasiveness and visibility.
Finally, bone conduction microphones and headphones – now common consumer technologies - represent an additional bio-sensing technology that is relevant to the present work. These leverage the fact that sound frequencies relevant to human speech propagate well through bone. Bone conduction microphones are typically worn near the ear, where they can sense vibrations propagating from the mouth and larynx during speech. Bone conduction headphones send sound through the bones of the skull and jaw directly to the inner ear, bypassing transmission of sound through the air and outer ear, leaving an unobstructed path for environmental sounds.
Acoustic Input
Our approach is also inspired by systems that leverage acoustic transmission through (non-body) input surfaces. Paradiso et al. [21] measured the arrival time of a sound at multiple sensors to locate hand taps on a glass window. Ishii et al. [12] use a similar approach to localize a ball hitting a table, for computer augmentation of a real-world game. Both of these systems use acoustic time-of-flight for localization, which we explored, but found to be insufficiently robust on the human body, leading to the fingerprinting approach described in this paper.
SKINPUT
To expand the range of sensing modalities for always available input systems, we introduce Skinput, a novel input technique that allows the skin to be used as a finger input surface. In our prototype system, we choose to focus on the arm (although the technique could be applied elsewhere). This is an attractive area to appropriate as it provides considerable surface area for interaction, including a contiguous and flat area for projection (discussed subsequently).
Furthermore, the forearm and hands contain a complex assemblage of bones that increases acoustic distinctiveness of different locations. To capture this acoustic information, we developed a wearable armband that is non-invasive and easily removable (Figures 1 and 5).
In this section, we discuss the mechanical phenomena that enable Skinput, with a specific focus on the mechanical properties of the arm. Then we will describe the Skinput sensor and the processing techniques we use to segment, analyze, and classify bio-acoustic signals.
Bio-Acoustics
When a finger taps the skin, several distinct forms of acoustic energy are produced. Some energy is radiated into the air as sound waves; this energy is not captured by the Skinput system. Among the acoustic energy transmitted through the arm, the most readily visible are transverse waves, created by the displacement of the skin from a finger impact (Figure 2). When shot with a high-speed camera, these appear as ripples, which propagate outward from the point of contact (see video). The amplitude of these ripples is correlated to both the tapping force and to the volume and compliance of soft tissues under the impact area. In general, tapping on soft regions of the arm creates higher amplitude transverse waves than tapping on boney areas (e.g., wrist, palm, fingers), which have negligible compliance.
In addition to the energy that propagates on the surface of the arm, some energy is transmitted inward, toward the skeleton (Figure 3). These longitudinal (compressive) waves travel through the soft tissues of the arm, exciting the bone, which is much less deformable than the soft tissue but can respond to mechanical excitation by rotating and translating as a rigid body. This excitation vibrates soft tissues surrounding the entire length of the bone, resulting in new longitudinal waves that propagate outward to the skin.
We highlight these two separate forms of conduction – transverse waves moving directly along the arm surface, and longitudinal waves moving into and out of the bone through soft tissues – because these mechanisms carry energy at different frequencies and over different distances. Roughly speaking, higher frequencies propagate more readily through bone than through soft tissue, and bone conduction carries energy over larger distances than soft tissue conduction. While we do not explicitly model the specific mechanisms of conduction, or depend on these mechanisms for our analysis, we do believe the success of our technique depends on the complex acoustic patterns that result from mixtures of these modalities.
Similarly, we also believe that joints play an important role in making tapped locations acoustically distinct. Bones are held together by ligaments, and joints often include additional biological structures such as fluid cavities. This makes joints behave as acoustic filters. In some cases, these may simply dampen acoustics; in other cases, these will selectively attenuate specific frequencies, creating location specific acoustic signatures.
Reply
#9
[attachment=14316]
CHAPTER 1
INTRODUCTION

Devices with significant computational power and capabilities can now be easily carried on our bodies. However, their small size typically leads to limited interaction space (e.g., diminutive screens, buttons, and jog wheels) and consequently diminishes their usability and functionality. Since we cannot simply make buttons and screens larger without losing the primary benefit of small size, we consider alternative approaches that enhance interactions with small mobile systems.
One option is to opportunistically appropriate surface area from the environment for interactive purposes. For example, [10] describes a technique that allows a small mobile device to turn tables on which it rests into a gestural finger input canvas. However, tables are not always present, and in a mobile context, users are unlikely to want to carry appropriated surfaces with them (at this point, one might as well just have a larger device). Yet there is one surface that has been previously overlooked as an input canvas and one that happens to always travel with us: our skin.
Appropriating the human body as an input device is appealing not only because we have roughly two square meters of external surface area, but also because much of it is easily accessible by our hands (e.g., arms, upper legs, torso). Furthermore, proprioception – our sense of how our body is configured in three-dimensional space – allows us to accurately interact with our bodies in an eyes-free manner. For example, we can readily flick each of our fingers, touch the tip of our nose, and clap our hands together without visual assistance. Few external input devices can claim this accurate, eyes-free input characteristic and provide such a large interaction area.
In this paper, we present our work on Skinput – a method that allows the body to be appropriated for finger input using a novel, non-invasive, wearable bio-acoustic sensor.
The contributions of this paper are:
1) We describe the design of a novel, wearable sensor for bio-acoustic
signal acquisition (Figure 1).
2) We describe an analysis approach that enables our system to resolve the location of finger taps on the body.
3) We assess the robustness and limitations of this system through a user study.
4) We explore the broader space of bio-acoustic input through prototype applications and additional experimentation.
Figure 1. A wearable, bio-acoustic sensing array built into an armband. Sensing elements detect vibrations transmitted through the body. The two sensor packages shown above each contain five specially weighted, cantilevered piezo films, responsive to a particular frequency range.
CHAPTER 2
RELATED WORK
Always-Available Input:

The primary goal of Skinput is to provide an always available mobile input system – that is, an input system that does not require a user to carry or pick up a device. A number of alternative approaches have been proposed that operate in this space. Techniques based on computer vision are popular. These, however, are computationally expensive and error prone in mobile scenarios (where, e.g., non-input optical flow is prevalent). Speech input is a logical choice for always-available input, but is limited in its precision in unpredictable acoustic environments, and suffers from privacy and scalability issues in shared environments.
Other approaches have taken the form of wearable computing. This typically involves a physical input device built in a form considered to be part of one’s clothing. For example, glove-based input systems allow users to retain most of their natural hand movements, but are cumbersome, uncomfortable, and disruptive to tactile sensation. Post and Orth present a “smart fabric” system that embeds sensors and conductors into fabric, but taking this approach to always-available input necessitates embedding technology in all clothing, which would be prohibitively complex and expensive.
The SixthSense project proposes a mobile, always-available input/output capability by combining projected information with a color-marker-based vision tracking system. This approach is feasible, but suffers from serious occlusion and accuracy limitations. For example, determining whether, e.g., a finger has tapped a button, or is merely hovering above it, is extraordinarily difficult. In the present work, we briefly explore the combination of on-body sensing with on-body projection.
Bio-Sensing:
Skinput leverages the natural acoustic conduction properties of the human body to provide an input system, and is thus related to previous work in the use of biological signals for computer input. Signals traditionally used for diagnostic medicine, such as heart rate and skin resistance, have been appropriated for assessing a user’s emotional state. These features are generally subconsciously driven and cannot be controlled with sufficient precision for direct input. Similarly, brain sensing technologies such as electroencephalography (EEG) and functional near-infrared spectroscopy (fNIR) have been used by HCI researchers to assess cognitive and emotional state; this work also primarily looked at involuntary signals. In contrast, brain signals have been harnessed as a direct input for use by paralyzed patients, but direct brain-computer interfaces (BCIs) still lack the bandwidth required for everyday computing tasks, and require levels of focus, training, and concentration that are incompatible with typical computer interaction.
There has been less work relating to the intersection of finger input and biological signals. Researchers have harnessed the electrical signals generated by muscle activation during normal hand movement through electromyography (EMG). At present, however, this approach typically requires expensive amplification systems and the application of conductive gel for effective signal acquisition, which would limit the acceptability of this approach for most users.
The input technology most related to our own is that of Amento et al., who placed contact microphones on a user’s wrist to assess finger movement. However, this work was never formally evaluated, and is constrained to finger motions in one hand. The Hambone system employs a similar setup, and through an HMM, yields classification accuracies around 90% for four gestures (e.g., raise heels, snap fingers). Performance of false positive rejection remains untested in both systems at present. Moreover, both techniques required the placement of sensors near the area of interaction (e.g., the wrist), increasing the degree of invasiveness and visibility.
Finally, bone conduction microphones and headphones – now common consumer technologies - represent an additional bio-sensing technology that is relevant to the present work. These leverage the fact that sound frequencies relevant to human speech propagate well through bone. Bone conduction
microphones are typically worn near the ear, where they can sense vibrations propagating from the mouth and larynx during speech. Bone conduction headphones send sound through the bones of the skull and jaw directly to the inner ear, bypassing transmission of sound through the air and outer ear, leaving an unobstructed path for environmental sounds.
Reply
#10
Kindly send ppt for the topic skinput-the human arm touchscreen technology on my email id barkha.patel2010[at]gmail.com.......
waiting for your reply.
Reply
#11

To get more information about the topic "Skinput: Appropriating the Body as an Input Surface", please refer to the page links below

http://studentbank.in/report-skinput-app...0#pid53310

http://studentbank.in/report-skinput-app...ut-surface
Reply
#12
please send skinput ppt to my mail id masanampriya67[at]gmail.com
Reply
#13



To get information about the topic "Skinput" (full report, PPT, and related topics), refer to the page links below

http://studentbank.in/report-skinput-technology--43972

http://studentbank.in/report-skinput-app...ut-surface
Reply
