Brain Computer Interface
ABSTRACT
A Brain-Computer Interface (BCI) is a device that enables people to interact with computer-based systems through conscious control of their thoughts. A BCI is any system that can derive meaningful information directly from the user's brain activity in real time. The current and most important application of a BCI is the restoration of a communication channel for patients with locked-in syndrome. Most current BCIs are non-invasive. The electrodes pick up the brain's electrical activity and carry it into amplifiers. These amplifiers amplify the signal approximately ten thousand times and then pass the signal via an analog-to-digital converter to a computer for processing. The computer processes the EEG signal and uses it to accomplish tasks such as communication and environmental control.

1. INTRODUCTION
What is a Brain-Computer Interface?
A brain-computer interface uses electrophysiological signals to control remote devices. Most current BCIs are non-invasive. They consist of electrodes applied to the scalp of an individual or worn in an electrode cap such as the one shown in Figure 1-1 (left). These electrodes pick up the brain's electrical activity (at the microvolt level) and carry it into amplifiers such as the ones shown in Figure 1-1 (right). These amplifiers amplify the signal approximately ten thousand times and then pass the signal via an analog-to-digital converter to a computer for processing. The computer processes the EEG signal and uses it to accomplish tasks such as communication and environmental control. BCIs are slow in comparison with normal human actions because of the complexity and noisiness of the signals used, as well as the time necessary to complete recognition and signal processing.
The phrase brain-computer interface (BCI), when taken literally, means to interface an individual's electrophysiological signals with a computer. A true BCI only uses signals from the brain and as such must treat eye and muscle movements as artifacts or noise. On the other hand, a system that uses eye, muscle, or other body potentials mixed with EEG signals is a brain-body actuated system.
Figure: Scheme of an EEG-based brain-computer interface with on-line feedback. The EEG is recorded from the head surface, and signal processing techniques are used to extract features. These features are classified, and the output is displayed on a computer screen. This feedback should help the subject to control his or her EEG patterns.
The BCI system uses oscillatory electroencephalogram (EEG) signals, recorded during specific mental activity, as input and provides a control option by its output. The obtained output signals are presently evaluated for different purposes, such as cursor control, selection of letters or words, or control of a prosthesis. People who are paralyzed or have other severe movement disorders need alternative methods for communication and control. Currently available augmentative communication methods require some muscle control, whether they use one muscle group to supply the function normally provided by another (e.g., extraocular muscles driving a speech synthesizer) or not. Thus, they may not be useful for those who are totally paralyzed (e.g., by amyotrophic lateral sclerosis (ALS) or brainstem stroke) or have other severe motor disabilities. These individuals need an alternative communication channel that does not depend on muscle control. The current and most important application of a BCI is the restoration of a communication channel for patients with locked-in syndrome.
2. STRUCTURE OF BRAIN-COMPUTER INTERFACE
The common structure of a Brain-Computer Interface is the following (a minimal code sketch of this pipeline follows the list):
1) Signal Acquisition: the EEG signals are obtained from the brain through invasive or non-invasive methods (for example, electrodes).
2) Signal Pre-Processing: once the signals are acquired, it is necessary to clean them.
3) Signal Classification: once the signals are cleaned, they will be processed and classified to find out which kind of mental task the subject is performing.
4) Computer Interaction: once the signals are classified, they will be used by an appropriate algorithm for the development of a certain application.
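As a concrete illustration of these four stages, the following Python sketch wires them into a single processing loop. It is only a schematic: the acquire(), preprocess(), classify(), and interact() functions are hypothetical names, the data is simulated noise, and a real system would replace each stage with proper amplifier drivers, artifact handling, and a trained classifier.

# Illustrative BCI processing loop (hypothetical stage names; data is simulated).
import numpy as np

def acquire(n_samples=250, n_channels=2):
    """Stand-in for an EEG amplifier read: returns one window of samples."""
    return np.random.randn(n_samples, n_channels)  # simulated microvolt-level data

def preprocess(window):
    """Remove the DC offset per channel (a very crude 'cleaning' step)."""
    return window - window.mean(axis=0)

def classify(window):
    """Toy rule: compare signal power on channel 0 vs. channel 1."""
    power = (window ** 2).mean(axis=0)
    return "left" if power[0] > power[1] else "right"

def interact(command):
    """Application stage: here we just print the decoded command."""
    print("decoded command:", command)

for _ in range(5):                 # one iteration per EEG window
    interact(classify(preprocess(acquire())))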
BRAIN-COMPUTER INTERFACE ARCHITECTURE
The processing unit is subdivided into a preprocessing unit, responsible for artifact detection, and a feature extraction and recognition unit that identifies the command sent by the user to the BCI. The output subsystem generates an action associated with this command. This action constitutes feedback to the user, who can modulate his or her mental activity so as to produce those EEG patterns that make the BCI accomplish his or her intent.

3. APPLICATIONS OF BRAIN-COMPUTER INTERFACE
Brain-Computer Interface (BCI) is a system that acquires and analyzes neural signals with the goal of creating a communication channel directly between the brain and the computer. Such a channel potentially has multiple uses. The current and the most important application of a BCI is the restoration of communication channel for patients with locked-in-syndrome.
1) Patients with conditions causing severe communication disorders:
• Advanced Amyotrophic Lateral Sclerosis (ALS)
• Autism
• Cerebral Palsy
• Head Trauma
• Spinal Injury
The output signals are evaluated for different purposes such as cursor control and selection of letters or words.
2) Military Uses:
The Air Force is interested in using brain-body actuated control to make faster responses possible for fighter pilots. While brain-body actuated control is not a true BCI, it may still provide motivation for why a BCI could prove useful in the future. EEG signals and artifacts (eye movement, body movement, etc.) combine to create a signal that can be used to fly a virtual plane.
3) Bioengineering Applications:
Assist devices for the disabled. Control of prosthetic aids.
4) Control of Brain-operated wheelchair.
5) Multimedia & Virtual Reality Applications:
• Virtual keyboards
• Manipulating devices such as a television set, radio, etc.
• Ability to control video games and to have video games react to actual EEG signals
4. PRINCIPLES OF ELECTROENCEPHALOGRAPHY
4.1 The Nature of the EEG signals.
The electrical nature of the human nervous system has been recognized for more than a century. It is well known that the variation of the surface potential distribution on the scalp reflects functional activities emerging from the underlying brain. This surface potential variation can be recorded by affixing an array of electrodes to the scalp and measuring the voltage between pairs of these electrodes; the signals are then filtered, amplified, and recorded. The resulting data is called the EEG. Configurations of electrodes usually follow the International 10-20 system of placement, which is based on the relationship between the location of an electrode and the underlying area of cerebral cortex (the "10" and "20" refer to 10% or 20% interelectrode distances).
In the extended 10-20 system for electrode placement, even numbers indicate electrodes located on the right side of the head while odd numbers indicate electrodes on the left side. The letter before the number indicates the general area of the cortex the electrode is located above: A stands for auricular, C for central, Fp for prefrontal, F for frontal, P for parietal, O for occipital, and T for temporal. In addition, electrodes for recording vertical and horizontal electrooculographic (EOG) movements are also placed. Vertical EOG electrodes are placed above and below an eye, and horizontal EOG electrodes are placed on the side of both eyes away from the nose.
Nowadays, modern techniques for EEG acquisition collect these underlying electrical patterns from the scalp and digitize them for computer storage. Electrodes conduct voltage potentials as microvolt-level signals and carry them into amplifiers that magnify the signals approximately ten thousand times. The use of this technology depends strongly on electrode positioning and electrode contact. For this reason, electrodes are usually constructed from conductive materials, such as gold or silver chloride, with an approximate diameter of 1 cm, and subjects must also use a conductive gel on the scalp to maintain an acceptable signal-to-noise ratio.
4.2 EEG wave groups.
The analysis of continuous EEG signals or brain waves is complex, due to the large amount of information received from every electrode. As a science in itself, it comes with its own set of perplexing nomenclature. Different waves, like so many radio stations, are categorized by the frequency of their emanations and, in some cases, by the shape of their waveforms. Although none of these waves is ever emitted alone, the state of consciousness of the individual may make one frequency range more pronounced than others. Five types are particularly important:
BETA. The rate of change lies between 13 and 30 Hz, and usually has a low voltage between 5 and 30 µV. Beta is the brain wave usually associated with active thinking, active attention, focus on the outside world, or solving concrete problems. It can reach frequencies near 50 Hz during intense mental activity.
ALPHA. The rate of change lies between 8 and 13 Hz, with 30-50 µV amplitude. Alpha waves have been thought to indicate both a relaxed awareness and also inattention. They are strongest over the occipital (back of the head) cortex and also over the frontal cortex. Alpha is the most prominent wave in the whole realm of brain activity and possibly covers a greater range than has been previously thought. It is frequent to see a peak in the beta range as high as 20 Hz which has the characteristics of an alpha state rather than a beta state, and the setting in which such a response appears also leads to the same conclusion. Alpha alone seems to indicate an empty mind rather than a relaxed one, a mindless state rather than a passive one, and can be reduced or eliminated by opening the eyes, by hearing unfamiliar sounds, or by anxiety or mental concentration.
THETA. Theta waves lie within the range of 4 to 7 Hz, with an amplitude usually greater than 20 µV. Theta arises from emotional stress, especially frustration or disappointment. Theta has also been associated with access to unconscious material, creative inspiration, and deep meditation. The large dominant peak of the theta waves is around 7 Hz.
DELTA. Delta waves lie within the range of 0.5 to 4 Hz, with variable amplitude. Delta waves are primarily associated with deep sleep, and in the waking state, were thought to indicate physical defects in the brain. It is very easy to confuse artifact signals caused by the large muscles of the neck and jaw with the genuine delta responses. This is because the muscles are near the surface of the skin and produce large signals whereas the signal which is of interest originates deep in the brain and is severely attenuated in passing through the skull. Nevertheless, with an instant analysis EEG, it is easy to see when the response is caused by excessive movement.
GAMMA. Gamma waves lie within the range of 35 Hz and up. It is thought that this band reflects the mechanism of consciousness: the binding together of distinct modular brain functions into coherent percepts capable of behaving in a re-entrant fashion (feeding back on themselves over time to create a sense of stream-of-consciousness).
MU. The mu rhythm is an 8-12 Hz spontaneous EEG wave associated with motor activities and maximally recorded over the motor cortex. It diminishes with movement or the intention to move. The mu wave lies in the same frequency band as the alpha wave, but the latter is recorded over the occipital cortex.
Most attempts to control a computer with continuous EEG measurements work by monitoring alpha or mu waves, because people can learn to change the amplitude of these two waves by making the appropriate mental effort. A person might accomplish this result, for instance, by recalling some strongly stimulating image or by raising his or her level of attention.
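Since control here amounts to monitoring the amplitude of the alpha or mu band, a minimal sketch of that measurement is given below. It assumes NumPy and SciPy are available; the sampling rate, the simulated signal, and the threshold are placeholder values that a real BCI would calibrate for each subject.

# Sketch: estimate alpha/mu-band (8-13 Hz) power from one EEG channel and
# turn it into a binary control signal. Values here are assumptions, not
# parameters of any particular system.
import numpy as np
from scipy.signal import welch

fs = 256                                   # assumed sampling rate in Hz
eeg = np.random.randn(fs * 2)              # 2 s of (simulated) scalp EEG

f, pxx = welch(eeg, fs=fs, nperseg=fs)     # power spectral density
band = (f >= 8) & (f <= 13)
alpha_power = np.trapz(pxx[band], f[band]) # integrate PSD over 8-13 Hz

THRESHOLD = 1.0                            # hypothetical, set during training
command = "select" if alpha_power > THRESHOLD else "idle"
print(alpha_power, command)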
5. NEUROPSYCHOLOGICAL SIGNALS USED IN BCI APPLICATIONS
5.1 Generation of Neuropsychological Signals
Interfaces based on brain signals require on-line detection of mental states from spontaneous activity: different cortical areas are activated while thinking of different things (e.g., a mathematical computation, an imagined arm movement, a music composition, etc.). The information about these "mental states" can be recorded with different methods.
Neuropsychological signals can be generated by one or more of the following three:

• implanted methods
• evoked potentials (also known as event-related potentials)
• operant conditioning
Both evoked potential and operant conditioning methods are normally used in externally-based BCIs, as the electrodes are located on the scalp. The table describes the different signals in common use. It may be noted that some of the described signals fit into multiple categories.
Implanted methods use signals from single or small groups of neurons in order to control a BCI. In most cases, the most suitable option for placing the electrodes is the motor cortex region, because of its direct relevance to motor tasks, its relative accessibility compared to motor areas deeper in the brain, and the relative ease of recording from its large pyramidal cells. These methods have the benefit of a much higher signal-to-noise ratio at the cost of being invasive. They require no remaining motor control and may provide either discrete or continuous control.
Evoked potentials (EPs) are brain potentials that are evoked by the occurrence of a sensory stimulus. They are usually obtained by averaging a number of brief EEG segments time-registered to a stimulus in a simple task. In a BCI, EPs may provide control when the BCI application produces the appropriate stimuli. This paradigm has the benefit of requiring little to no training to use the BCI at the cost of having to make users wait for the relevant stimulus presentation. EPs offer discrete control for almost all users.
Exogenous components, or those components influenced primarily by physical stimulus properties, generally take place within the first 200 milliseconds after stimulus onset. These components include a negative waveform around 100 ms (N1) and a positive waveform around 200 ms after stimulus onset (P2). Visual evoked potentials (VEPs) fall into this category. A VEP-based BCI uses short visual stimuli to determine what command an individual is looking at and therefore wants to pick. Using VEPs has the benefit of a quicker response than longer-latency components. The VEP requires the subject to have good visual control in order to look at the appropriate stimulus, and allows for discrete control.
One commonly studied ERP in BCI is a component called the P300. It is a positive peak in the potential that reaches a maximum about 300 ms after the stimulus is presented. The P3 has been shown to be fairly stable in locked-in patients, reappearing even after severe brain injuries.
Figure: (Solid line) The general form of the P3 component of the evoked potential (EP). The P3 is a cognitive EP that appears approximately 300 ms after a task-relevant stimulus. (Dotted line) The general form of a non-task-related response.
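The averaging procedure mentioned above can be sketched as follows. This is an illustrative Python fragment, not a clinical EP pipeline: the EEG, the stimulus onset times, and the 250-400 ms measurement window are assumptions chosen only to show the time-locked averaging idea.

# Sketch of P300/EP detection by epoch averaging: extract EEG epochs
# time-locked to stimulus onsets, average them, and read the amplitude
# near 300 ms. All sizes and data are simulated.
import numpy as np

fs = 256                                    # assumed sampling rate (Hz)
eeg = np.random.randn(fs * 60)              # one minute of single-channel EEG
stim_onsets = np.arange(fs, fs * 55, fs)    # hypothetical stimulus times (samples)

epoch_len = int(0.6 * fs)                   # 600 ms epochs after each stimulus
epochs = np.array([eeg[t:t + epoch_len] for t in stim_onsets])
erp = epochs.mean(axis=0)                   # averaging suppresses background EEG

p300_window = slice(int(0.25 * fs), int(0.40 * fs))  # roughly 250-400 ms
p300_amplitude = erp[p300_window].max()
print("estimated P300 amplitude:", p300_amplitude)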
Operant conditioning is a method for modifying behavior (an operant) which utilizes contingencies between a discriminative stimulus, an operant response, and a reinforcer to change the probability of a response occurring again in a given situation. In the BCI framework, it is used to train the patients to control their EEG. Several methods in the literature use operant conditioning on spontaneous EEG signals for BCI control. The main feature of this kind of signal is that it enables continuous rather than discrete control. This feature may also serve as a drawback: continuous control is fatiguing for subjects, and fatigue may cause changes in performance since control is learned.
5.2 Common Neuropsychological Signals Used In BCIs
6. EEG SIGNAL PRE-PROCESSING
One of the main problems in automated EEG analysis is the detection of the different kinds of interference waveforms (artifacts) added to the EEG signal during the recording sessions. These interference waveforms, the artifacts, are any recorded electrical potentials not originating in the brain. There are four main sources of artifact emission:
1. EEG equipment.
2. Electrical interference external to the subject and recording system.
3. The leads and the electrodes.
4. The subject her/himself: normal electrical activity from the heart, eye blinking, eye movements, and muscles in general.
In case of visual inspection, the artifacts can be quite easily detected by EEG experts. However, during automated analysis these signal patterns often cause serious misclassifications, thus reducing the clinical usability of the automated analyzing systems. Recognition and elimination of the artifacts in real-time EEG recordings is a complex task, but essential to the development of practical systems.
6.1 Classical Methods for removing eyeblink artifacts:
• Rejection methods consist of discarding contaminated EEG, based on either automatic or visual detection. Their success crucially depends on the quality of the detection, and their use also depends on the specific application. Thus, although for epilepsy applications rejection can lead to an unacceptable loss of data, for others, like a brain-computer interface, its use can be adequate.
• Subtraction methods are based on the assumption that the measured EEG is a linear combination of an original EEG and a signal caused by eye movement, called the EOG (electrooculogram). The EOG is a potential produced by movement of the eye or eyelid. The original EEG is hence recovered by subtracting the separately recorded EOG from the measured EEG, using appropriate weights (rejecting the influence of the EOG on particular EEG channels), as sketched below.
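A minimal sketch of such a subtraction, assuming the propagation of the EOG into each EEG channel can be modeled by a single weight estimated by least squares, is given below. The data is simulated; real systems estimate the weights from calibration recordings.

# Minimal sketch of the subtraction method: estimate propagation weights of
# the EOG into each EEG channel by least squares, then subtract the weighted EOG.
import numpy as np

n = 5000
eog = np.random.randn(n)                         # recorded EOG channel
true_weights = np.array([0.4, 0.1])              # unknown propagation factors
brain = np.random.randn(n, 2)                    # "true" EEG (unobservable)
eeg = brain + np.outer(eog, true_weights)        # measured, contaminated EEG

# least-squares estimate of the EOG-to-EEG weights (one weight per channel)
w, *_ = np.linalg.lstsq(eog[:, None], eeg, rcond=None)
eeg_corrected = eeg - np.outer(eog, w.ravel())   # subtract weighted EOG
print("estimated weights:", w.ravel())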
6.2 EEG Feature Extraction
For the analysis of oscillatory EEG components, the following preprocessing methods are used:
1) calculation of band power in predefined, subject-specific frequency bands in intervals of 250 (500) ms;
2) adaptive autoregressive (AAR) parameters estimated for each iteration with the recursive least squares (RLS) algorithm;
3) calculation of common spatial filters (CSP).

Band power at each electrode position is estimated by first digitally bandpass filtering the data, squaring each sample and then averaging over several consecutive samples. Before the band power method is used for classification, first the reactive frequency bands must be selected for each subject. This means that data from an initial experiment without feedback are required. Based on these training data, the most relevant frequency components can be determined by using the distinction sensitive learning vector quantization (DSLVQ) algorithm. This method uses a weighted distance function and adjusts the influence of different input features (e.g., frequency components) through supervised learning. When DSLVQ is applied to spectral components of the EEG signals (e.g., in the range from 5 to 30 Hz), weight values of individual frequency components according to their relevance for the classification task are obtained.
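A minimal sketch of this band power computation (bandpass filter, square, average over 250 ms windows) follows. The sampling rate and the 10-12 Hz band are assumed stand-in values; in practice the reactive band is chosen per subject, e.g., with DSLVQ as described above.

# Sketch of the band power feature: bandpass filter the EEG in a
# subject-specific band, square each sample, then average consecutive
# samples in 250 ms windows. Band edges and fs are assumptions.
import numpy as np
from scipy.signal import butter, lfilter

fs = 128                                        # assumed sampling rate (Hz)
low, high = 10.0, 12.0                          # hypothetical reactive band
b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")

eeg = np.random.randn(fs * 4)                   # 4 s of one bipolar channel
filtered = lfilter(b, a, eeg)
squared = filtered ** 2

win = int(0.25 * fs)                            # 250 ms averaging window
band_power = squared[: len(squared) // win * win].reshape(-1, win).mean(axis=1)
print(band_power)                               # one feature value per 250 ms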
The AAR parameters, in contrast, are estimated from the EEG signals limited only by the cutoff frequencies, providing a description of the whole EEG signal. Thus, an important advantage of the AAR method is that no a priori information about the frequency bands is necessary.
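The recursive least squares update for the AAR parameters can be sketched as below. The model order and forgetting factor are assumed values and the input is simulated; the point is only to show how the AR coefficients are re-estimated sample by sample and can serve directly as features.

# Sketch of adaptive autoregressive (AAR) estimation with recursive least squares.
import numpy as np

def aar_rls(x, order=6, lam=0.99):
    a = np.zeros(order)                  # AR coefficients (the features)
    P = np.eye(order) * 100.0            # inverse correlation matrix
    history = []
    for t in range(order, len(x)):
        phi = x[t - order:t][::-1]       # last 'order' samples, newest first
        k = P @ phi / (lam + phi @ P @ phi)
        err = x[t] - phi @ a             # one-step prediction error
        a = a + k * err                  # update coefficients toward new sample
        P = (P - np.outer(k, phi @ P)) / lam
        history.append(a.copy())
    return np.array(history)             # AAR parameters over time

features = aar_rls(np.random.randn(1000))
print(features.shape)                    # (n_samples - order, order)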
For both approaches, two closely spaced bipolar recordings from the left and right sensorimotor cortex were used. In further studies, spatial information from a dense array of electrodes located over central areas was considered to improve the classification accuracy. For this purpose, the CSP method was used to estimate spatial filters that reflect the specific activation of cortical areas during hand movement imagination. Each electrode is weighted according to its importance for the classification. The method decomposes the EEG data into spatial patterns which are extracted from two populations (EEG data during left and right movement imagination) and is based on the simultaneous diagonalization of two covariance matrices. The patterns maximize the difference between the left and right populations, and the only information contained in these patterns is where the variance of the EEG varies most when comparing the two conditions. During on-line operation the EEG data is filtered with the most important spatial patterns and the variance of the time series is calculated for several consecutive samples.
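A compact sketch of the CSP computation, implementing the simultaneous diagonalization described above as a generalized eigenvalue problem on the two average covariance matrices, is shown below. Trial data and dimensions are simulated.

# Sketch of common spatial patterns (CSP): jointly diagonalize the covariance
# matrices of the two conditions (e.g., left vs. right motor imagery) and keep
# the filters whose output variance differs most between conditions.
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_left, trials_right, n_filters=2):
    # trials_*: arrays of shape (n_trials, n_channels, n_samples)
    def avg_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)

    C1, C2 = avg_cov(trials_left), avg_cov(trials_right)
    # generalized eigenvalue problem: C1 w = lambda (C1 + C2) w
    vals, vecs = eigh(C1, C1 + C2)
    order = np.argsort(vals)                       # smallest and largest eigenvalues
    picks = np.r_[order[:n_filters], order[-n_filters:]]
    return vecs[:, picks].T                        # spatial filters, one per row

W = csp_filters(np.random.randn(20, 8, 256), np.random.randn(20, 8, 256))
print(W.shape)                                     # (2 * n_filters, n_channels)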
7. SIGNAL CLASSIFICATION PROCEDURES
An important step toward real-time processing and feedback presentation is the setup of a subject-specific classifier. For this, two different approaches are followed:
i) neural-network-based classification, e.g., learning vector quantization (LVQ)
ii) linear discriminant analysis (LDA)
Learning Vector Quantization (LVQ) has proven to be an effective classification procedure. LVQ is shown to be comparable with other neural network algorithms for the task of classifying EEG signals, yielding approximately 80% classification accuracy for three out of the four subjects tested when differentiating between two different mental tasks. LVQ was mainly applied to online experiments with delayed feedback presentation. In these experiments, the input features were extracted from a 1-s epoch of EEG recorded during motor imagery. The EEG was filtered in one or two subject-specific frequency bands before calculating four band power estimates, each representing a time interval of 250 ms, per EEG channel and frequency range. Based on these features, the LVQ classifier derived a classification and a measure describing the certainty of this classification, which in turn was provided to the subject as a feedback symbol at the end of each trial.
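For illustration, a generic LVQ1 training loop is sketched below; it is not the exact classifier used in the cited experiments, but it shows the basic prototype-update idea on simulated band power features.

# Minimal LVQ1 sketch: prototypes are pulled toward correctly classified
# training features and pushed away otherwise. Generic illustration only.
import numpy as np

def lvq1_train(X, y, n_protos_per_class=2, lr=0.05, epochs=30, seed=0):
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    protos = np.vstack([X[y == c][rng.choice((y == c).sum(), n_protos_per_class,
                                             replace=False)] for c in classes])
    labels = np.repeat(classes, n_protos_per_class)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            j = np.argmin(np.linalg.norm(protos - xi, axis=1))  # nearest prototype
            step = lr if labels[j] == yi else -lr
            protos[j] += step * (xi - protos[j])
    return protos, labels

def lvq1_predict(protos, labels, X):
    d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=2)
    return labels[np.argmin(d, axis=1)]

# toy example: 4 band power features per trial, two motor imagery classes
X = np.random.randn(100, 4); y = np.repeat([0, 1], 50)
protos, labels = lvq1_train(X, y)
print((lvq1_predict(protos, labels, X) == y).mean())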
In experiments with continuous feedback based on either AAR parameter estimation or CSPs, a linear discriminant classifier has usually been applied for on-line classification. The AAR parameters of two EEG channels or the variance time series of the CSPs are linearly combined and a time-varying signed distance (TSD) function is calculated. With this method it is possible to indicate the result and the certainty of classification, e.g., by a continuously moving feedback bar. The different methods of EEG preprocessing and classification have been compared in extended on-line experiments and data analyses. These experiments were carried out using a newly developed BCI system running in real time under Windows with a 2-, 8-, or 64-channel EEG amplifier. The installation of this system, based on a rapid prototyping environment, includes a software package that supports the real-time implementation and testing of different EEG parameter estimation and classification algorithms.
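A minimal sketch of the linear discriminant step and the signed distance it produces is given below. It uses scikit-learn's LDA as a stand-in for the on-line classifier and simulated feature vectors; the signed decision value plays the role of the TSD that drives the feedback bar.

# Sketch: train an LDA on features from two imagery classes and use its
# signed decision value as a continuous control/feedback signal.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X_train = np.random.randn(200, 6)            # e.g., AAR parameters of 2 channels
y_train = np.repeat([0, 1], 100)             # left vs. right motor imagery
lda = LinearDiscriminantAnalysis().fit(X_train, y_train)

x_t = np.random.randn(1, 6)                  # features at the current time point
tsd = lda.decision_function(x_t)[0]          # signed distance from the boundary
print("class:", lda.predict(x_t)[0], "signed distance:", tsd)
# the sign selects the class; the magnitude can scale the length of a feedback bar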
8. EXISTING BCI SYSTEMS
8.1 The Brain Response Interface
Sutter's Brain Response Interface (BRI) is a system that takes advantage of the fact that large chunks of the visual system are devoted to processing information from the foveal region. The BRI uses visually evoked potentials (VEPs) produced in response to brief visual stimuli. These EPs are then used to give a discrete command to pick a certain part of a computer screen. This system is one of the few that have been tested on severely handicapped individuals. Word processing output approaches 10-12 words/min and accuracy approaches 90% with the use of epidural electrodes. This is the only system mentioned here that uses implanted electrodes to obtain a larger, less contaminated signal.

A BRI user watches a computer screen with a grid of 64 symbols (some of which lead to other pages of symbols) and concentrates on the chosen symbol. A specific subgroup of these symbols undergoes an equiluminant red/green fine check or plain color pattern alternation in a simultaneous stimulator scheme at the monitor's vertical refresh rate (40-70 frames/s). Sutter considered the usability of the system over time, and since color alternation between red and green was almost as effective as having the monitor flicker, he chose the color alternation because it was shown to be much less fatiguing for users. The EEG response to this stimulus is digitized and stored. Each symbol is included in several different subgroups and the subgroups are presented several times. The average EEG response for each subgroup is computed and compared to a previously saved VEP template (obtained in an initial training session), yielding a high-accuracy system.

This system is basically the EEG version of an eye movement recognition system and contains similar problems, because it assumes that the subject is always looking at a command on the computer screen. On the positive side, this system has one of the best recognition rates of current systems and may be used by individuals with sufficient eye control. Performance is much faster than most BCIs, but is very slow when compared to the speed of a good typist (80 words/min).

The system architecture is advanced. The BRI is implemented on a separate processor with a Motorola 68000 CPU. A schematic of the system is shown in the figure. The BRI processor interacts with a special display showing the BRI grid of symbols as well as a speech synthesizer and special keyboard interface. The special keyboard interface enables the subject to control any regular PC programs that may be controlled from the keyboard. In addition, a remote control is interfaced with the BRI in order to enable the subject to control a TV or VCR. Since the BRI processor loads all necessary software from the hard drive of a connected PC, the user may create or change command sequences. The main drawback of the system architecture is that it is based on a special hardware interface. This may be problematic when changes need to be made to the system over time.
Figure: A schematic of the Brain Response Interface (BRI) system
8.2 P3 Character Recognition
In a related approach, Farwell and Donchin use the P3 evoked potential. A 6x6 grid containing letters from the alphabet is displayed on the computer monitor and users are asked to select the letters in a word by counting the number of times that a row or column containing the letter flashes. Flashes occur at about 10 Hz and the desired letter flashes twice in every set of twelve flashes. The average response to each row and column is computed and the P3 amplitude is measured. Response amplitude is reliably larger for the row and column containing the desired letter. After two training sessions, users are able to communicate at a rate of 2.3 characters/min, with accuracy rates of 95%. This system is currently only used in a research setting.

A positive aspect of using a longer-latency component such as the P3 is that it enables differentiating between when the user is looking at the computer screen and when the user is looking someplace else (as the P3 only occurs in certain stimulus conditions). Unfortunately, this system is also agonizingly slow, because of the need to wait for the appropriate stimulus presentation and because the stimuli are averaged over trials. While the experimental setup accomplishes its main goal of showing that the P3 may be used for a BCI interface, the subjective experiences of a subject with this system have yet to be considered. The 10 Hz rate of flashing may fatigue users, as Sutter mentions, and this rate of flashing may trigger seizures in photosensitive subjects.
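The selection logic of this speller reduces to picking the row and column with the largest averaged P300 response, as in the sketch below (the grid layout and amplitude values are made up for illustration).

# Sketch of the P3 character selection step: take the row and column with the
# largest averaged P300 amplitude and return the letter at their intersection.
import numpy as np

letters = np.array(list("ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890")).reshape(6, 6)

# hypothetical averaged P300 amplitudes, one value per row and per column
row_amplitude = np.array([0.8, 0.9, 3.1, 1.0, 0.7, 1.1])   # row 2 was attended
col_amplitude = np.array([1.2, 0.9, 0.8, 2.9, 1.0, 0.6])   # column 3 was attended

selected = letters[np.argmax(row_amplitude), np.argmax(col_amplitude)]
print("selected character:", selected)                      # -> "P"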
8.3 ERS/ERD Cursor Control
Pfurtscheller and his colleagues take a different approach. Using multiple electrodes placed over the sensorimotor cortex, they monitor event-related synchronization/desynchronization (ERS/ERD). In all sessions, epochs with eye and muscle artifact are automatically rejected. This rejection can slow subject performance speeds. As this is a research system, the user application is a simple screen that allows control of a cursor in either the left or right direction. In one experiment, for a single trial the screen first appears blank, then a target box is shown on one side of the screen. A cross hair appears to let the user know that he/she must begin trying to move the cursor towards the box. Feedback may be delayed or immediate, and different experiments have slightly different displays and protocols.

After two training sessions, three out of five student subjects were able to move a cursor right or left with accuracy rates from 89-100%. Unfortunately, the other two students performed at 60% and 51%. When a third category was added for classification, performance dropped to a low of 60% in the best case.

The architecture of this BCI now contains a remote control interface that allows controlling the system over a phone line, LAN, or Internet connection. This allows maintenance to be done from remote locations. The system may be run from a regular PC, a notebook, or an embedded computer and is being tested for opening and closing a hand orthosis in a patient with a C5 lesion. From this information, it appears that the user application must be independent from the BCI, although it is possible that two different BCI programs were constructed.
This BCI system was designed with the following requirements in mind:
1. The system must be able to record, analyze, and classify EEG data in real time.
2. The classification results must have the ability to be used to control a device on-line.
3. The system must support different experimental paradigms and provide multimodal stimulation.
4. The system must display the EEG channels on-line on a monitor.
5. The system must store all data for later off-line analysis.
The system has the ability to record up to 96 channels of EEG simultaneously through the use of multiple A/D boards. Simulink and Matlab are the two software packages used: Simulink to calculate the parameters of the EEG state in real time and Matlab to handle the data acquisition, timing, and experimental presentation. This design has the benefit of separating data processing from acquisition and application concerns, which may lead to greater encapsulation of data and maintainability. It has the drawback of trying to use Matlab for both data acquisition and the BCI application. For simple applications such as the cursor control task, this decision makes sense. When the application becomes more complex, this design decision may lead to problems. Matlab is not an object-oriented language and data encapsulation is not necessarily easy to accomplish, which may lead to poor maintainability. In addition, the system depends on Matlab for all program capabilities. This is fine for simple graphical interfaces, but may break down when the programmer wants to communicate with another program or even over the web. For these cases Matlab may offer several special program extensions, but buying many extensions becomes problematic and expensive. It would be easier to enable the application creator to use a variety of languages for the application.
8.4 A Steady State Visual Evoked Potential BCI
Middendorf and colleagues use operant conditioning methods in order to train volunteers to control the amplitude of the steady-state visual evoked potential (SSVEP) to fluorescent tubes flashing at 13.25 Hz. This method of control may be considered continuous, as the amplitude may change in a continuous fashion. Either a horizontal light bar or audio feedback is provided when electrodes located over the occipital cortex measure changes in signal amplitude. If the VEP amplitude is below or above a specified threshold for a specific time period, discrete control outputs are generated. After around 6 hours of training, users may have an accuracy rate of greater than 80% in commanding a flight simulator to roll left or right. In the flight simulator, the stimulus lamps are located adjacent to the display behind a translucent diffusion panel. As operators increase their SSVEP amplitude above one threshold, the simulator rolls to the right. Rolling to the left is caused by a decrease in the amplitude.

A functional electrical stimulator (FES) has been integrated for use with this BCI. Holding the SSVEP above a specified threshold for one second causes the FES to turn on. The activated FES then starts to activate at the muscle contraction level and begins to increase the current, gradually recruiting additional muscle fibers to cause knee extension. Decreasing the SSVEP for over a second causes the system to deactivate, thus lowering the limb.

Recognizing that the SSVEP may also be used as a natural response, Middendorf and his colleagues have recently concentrated on experiments involving the natural SSVEP. When the SSVEP is used as a natural response, virtually no training is needed in order to use the system. The experimental task for testing this method of control has been to have subjects select virtual buttons on a computer screen. The luminance of the virtual buttons is modulated, each at a different frequency, to produce the SSVEP. The subject selects the button by simply looking at it, as in Sutter's Brain Response Interface. Of the 8 subjects participating in the experiment, the average percent correct was 92%, with an average selection time of 2.1 seconds. Middendorf's group has advocated using visual evoked potentials in this manner, as opposed to their previous work on training control of the SSVEP, for multiple reasons. Using an inherent response means that less time is spent on training. The main drawback of this group's approach appears to be that they flicker light at different frequencies. Sutter solved the problem of flicker-related fatigue by using alternating red/green illumination. The main frequency of stimulus presentation at 13.25 Hz may also trigger seizures in photosensitive subjects.
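The threshold logic described above can be sketched as follows; the thresholds, the hold duration, and the amplitude values are hypothetical, and a real system would derive the SSVEP amplitude from the occipital EEG rather than receive it directly.

# Sketch: if the SSVEP amplitude stays above (or below) a threshold for a
# minimum duration, a discrete command is issued (roll right / roll left,
# or toggle the FES). All values are invented for illustration.
import numpy as np

UPPER, LOWER = 2.0, 0.5      # hypothetical amplitude thresholds
HOLD = 4                     # consecutive updates required (roughly 1 s)

def to_command(amplitudes):
    above = below = 0
    for a in amplitudes:                       # one SSVEP amplitude per update
        above = above + 1 if a > UPPER else 0
        below = below + 1 if a < LOWER else 0
        if above >= HOLD:
            return "roll right"                # or: switch the FES on
        if below >= HOLD:
            return "roll left"                 # or: switch the FES off
    return "no command"

print(to_command(np.r_[np.ones(3), 2.5 * np.ones(5)]))   # -> "roll right"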
8.5 Mu Rhythm Cursor Control
Wolpaw and his colleagues free their subjects from being tied to a flashing fluorescent tube by training subjects to modify their mu rhythm. This method of control is continuous, as the mu rhythm may be altered in a continuous manner. It can be attenuated by movement and tactile stimulation as well as by imagined movement. A subject's main task is to move a cursor up or down on a computer screen. While not all subjects are able to learn this type of biofeedback control, the subjects that do perform with accuracy greater than or equal to 90%. These experiments have also been extended to two-dimensional cursor movement, but the reported accuracy has not reached this level when compared to the one-dimensional control. Since the mu rhythm isn't tied to an external stimulus, it frees the user from dependence on external events for control.

The BCI system consists of a 64-channel EEG amplifier, two 32-channel A/D converter boards, a TMS320C30-based DSP board, and a PC with two monitors. One monitor is used by the subject and one by the operator of the system. Only a subset of the 64 channels is used for control, but the number of channels allows recognition to be adjusted to the unique topographical features of each subject's head. The DSP board is programmable in the C language, enabling testing of all program code prior to running it on the DSP board. Software is also programmed in C in order to create consistency across system modules. The architecture of the system is shown in the figure.

Four processes run between the PC and the DSP board. As signal acquisition occurs, an interrupt request is sent from the A/D board to the DSP at the end of A/D conversion. The DSP then acquires the data from all requested channels sequentially and combines them to derive the one or more EEG channels that control cursor movement. This is the data collection process. A second process then takes care of performing a spectral analysis on the data. When this analysis is completed, the results are moved to dual-ported memory and an interrupt to the PC is generated. A background process on the PC then acquires spectral data from the DSP board and computes cursor movement information as well as records relevant trial information.
Figure : A schematic of the mu rhythm cursor control system architecture.
The system contains four parallel processes.
This process runs at a fixed interval of 125 ms. The fourth process handles the graphical user interfaces for both the operator and the subject and records data to disk. The separation of data collection and analysis enables different algorithms to be inserted for processing the EEG signals. All algorithms are written in C, which is much easier to program in than assembly language, but is not as easy as the commercial Matlab scripting language and environment, which contains many helpful functions for mathematically processing data. The third and fourth processes contain design decisions that may make maintenance and flexibility difficult. The graphical user interface is tied to data storage. Conversion of EEG signals to cursor control numbers happens over the DSP foreground/background processes and in the PC background process. This lack of encapsulation promises to make changing the application and signal processing difficult if such changes are planned.
8.6 The Thought Translation Device
As another application used with severely handicapped individuals, the Thought Translation Device has the distinction of being the first BCI to enable an individual without any form of motor control to communicate with the outside world. Out of six patients with ALS, three were able to use the Thought Translation Device. Of the other three, one lost motivation and later died, and another discontinued use of the Thought Translation Device part way through training and was later unable to regain control. The paper implies that users do not want to use the BCI unless they absolutely must, but does not disambiguate subjective user satisfaction with the system from general user depression.

The training program may use either auditory or visual feedback. The slow cortical potential is extracted from the regular EEG on-line, filtered, corrected for eye movement artifacts, and fed back to the patient. In the case of auditory feedback, the positivity/negativity of a slow cortical potential is represented by pitch. When using visual feedback, the target positivity/negativity is represented by a high and low box on the screen. A ball-shaped light moves toward or away from the target box depending on the subject's performance. The subject is reinforced for good performance with the appearance of a happy face or a melodic sound sequence.

When a subject performs at least 75% correct, he/she is switched to the language support program. At level one, the alphabet is split into two halves (letter-banks) which are presented successively at the bottom of the screen for several seconds. If the subject selects the letter-bank being shown by generating a slow cortical potential shift, that side of the alphabet is split into two halves, and so on, until a single letter is chosen. A return function allows the patient to erase the last written letter. These patients may now write email in order to communicate with other ALS patients world-wide. An Internet version of the Thought Translation Device is under construction. The authors comment that patients refuse to use pre-selected word sequences because they feel less free in presenting their own intentions and thoughts.
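The letter-bank selection scheme is essentially a binary search over the alphabet driven by yes/no decisions, as the following sketch illustrates (the decide() callback stands in for the detection of a slow cortical potential shift).

# Sketch of the language support program's selection scheme: the alphabet is
# repeatedly split in half, and each yes/no decision keeps one half,
# until a single letter remains.
def select_letter(decide):
    """decide(bank) -> True to select the displayed half, False to skip it."""
    bank = list("ABCDEFGHIJKLMNOPQRSTUVWXYZ")
    while len(bank) > 1:
        half = bank[: len(bank) // 2]          # letter-bank currently shown
        bank = half if decide(half) else bank[len(bank) // 2:]
    return bank[0]

# toy 'subject' that wants the letter G: selects any bank containing G
print(select_letter(lambda bank: "G" in bank))   # -> "G"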
8.7 An Implanted BCI
The implanted brain-computer interface system devised by Kennedy and colleagues has been implanted in two patients. These patients are trained to control a cursor with their implant, and the velocity of the cursor is determined by the rate of neural firing. The neural waveshapes are converted to pulses, and three pulses are input to the computer mouse: the first and second pulses control the X and Y position of the cursor, and a third pulse acts as a mouse click or enter signal. The patients are trained using software that contains a row of icons representing common phrases (Talk Assist, developed at Georgia Tech), or a standard 'qwerty' or alphabetical keyboard (Wivik software from Prentke Romich Co.). When using a keyboard, the selected letter appears on a Microsoft Wordpad screen. When the phrase or sentence is complete, it is output as speech using Wivox software from Prentke Romich Co. or as printed text.

There are two paradigms using the Talk Assist program and a third one using the visual keyboard. In the first paradigm, the cursor moves across the screen using one group of neural signals and down the screen using another group of larger-amplitude signals. Starting in the top left corner, the patient enters the leftmost icon. He remains over the icon for two seconds so that the speech synthesizer is activated and phrases are produced. In the second paradigm, the patient is expected to move the cursor across the screen from one icon to the other. The patient is encouraged to be as accurate as possible, and then to speed up the cursor movement while attempting to remain accurate. In the third paradigm, a visual keyboard is shown and the patient is encouraged to spell his name as accurately and quickly as possible and then to spell anything else he wishes.

This system uses commercially available software and thus the BCI implementation does not have to worry about maintenance of the user application. Unfortunately, the maximum communication rate with this BCI has been around 3 characters per minute. This is the same rate as quoted for EMG-based control with patient JR and is comparable with the rates achieved by externally-based BCI systems. Kennedy has founded Neural Signals, Inc. in order to help create hardware and software for locked-in individuals, and the company is continually looking for methods to improve control. JR now has access to email and may be contacted through the email address shown on the company's web site.
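The mapping from firing rates to cursor control described above can be sketched as follows; the gains, the click threshold, and the spike counts are invented values used only to illustrate the idea.

# Sketch of the implanted-BCI mapping: firing rates of three recorded signal
# groups drive X velocity, Y velocity, and a mouse click, respectively.
import numpy as np

GAIN_X, GAIN_Y, CLICK_RATE = 0.5, 0.5, 20.0   # hypothetical calibration values

def update_cursor(pos, spike_counts, window_s=0.1):
    rates = np.asarray(spike_counts) / window_s       # spikes/s per signal group
    vx, vy = GAIN_X * rates[0], GAIN_Y * rates[1]     # firing rate -> velocity
    new_pos = (pos[0] + vx * window_s, pos[1] + vy * window_s)
    click = rates[2] > CLICK_RATE                     # third group acts as "enter"
    return new_pos, click

pos = (0.0, 0.0)
pos, click = update_cursor(pos, spike_counts=[4, 1, 3])
print(pos, click)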
9. Non-Invasive Vs Invasive Signal Detection
Non-Invasive
Pros
• no surgical risks
Cons
• low signal resolution
• greater interference from other signals
• interfaces must be routinely cleaned and changed
Invasive
Pros
• higher-resolution recording
• less interference from other signals
• faster communication possible
Cons
• determining which neurons to record from
• surgical risks

10. CONCLUSION
A BCI is a system that records electrical activity from the brain and classifies these signals into different states. A few applications currently in use have been discussed. Since a BCI enables people to communicate and control appliances with just the use of brain signals, it opens many gates for disabled people. The possible future applications are numerous. Even though this field of science has grown vastly in the last few years, we are still a few steps away from the scene where people drive brain-operated wheelchairs on the streets. New technologies need to be developed, and people in the neuroscience field also need to take into account other brain imaging techniques, such as MEG and fMRI, to develop the future BCI. As time passes, BCI might become a part of our everyday lives. Who knows, in twenty years I'll not have to type this report with my fingers; just the conscious control of my thoughts will be enough.

CONTENTS
1. Introduction
2. The Structure Of BCI
3. Applications Of BCI
4. Principles Of Electroencephalography
4.1 The Nature Of EEG Signals
4.2 EEG Wave Groups
5. Neuropsychological Signals Used In BCI
5.1 Generation Of EEG Signals
5.2 Common Neuropsychological Signals
6. EEG Signal Pre-Processing
6.1 Classical Methods
6.2 EEG Feature Extraction
7. Signal Classification Procedures
8. Existing BCI Systems
9. Pros And Cons
10. Conclusion

ACKNOWLEDGEMENT
I express my sincere gratitude to Dr. Agnisarman Namboodiri, Head of the Department of Information Technology and Computer Science, for his guidance and support in shaping this paper in a systematic way.
I am also greatly indebted to Mr. Saheer and Ms. Deepa, Department of IT, for their valuable suggestions in the preparation of the paper.
In addition, I would like to thank all staff members of the IT department and all my friends of S7 IT for their suggestions and constructive criticism.
Brain-Computer Interfaces, Virtual Reality, and Videogames
Far beyond science-fiction clichés and the image of a person connected to cyberspace via direct cerebral
implants as in The Matrix, brain-computer interfaces (BCIs) can offer a new means of playing videogames
or interacting with 3D virtual environments (VEs).
Only in recent years have research groups been attempting to connect BCIs and virtual worlds.
However, several impressive prototypes already exist that enable users to navigate in virtual scenes or
manipulate virtual objects solely by means of their cerebral activity, recorded on the scalp via
electroencephalography (EEG) electrodes. Meanwhile, virtual reality (VR) technologies provide
motivating, safe, and controlled conditions that enable improvement of BCI learning as well as the
investigation of the brain responses and neural processes involved.
STATE OF THE ART
VR technologies and videogames can be powerful BCI companions. Researchers have shown that BCIs
provide suitable interaction devices for VR applications [10] and videogames [6]. On the other hand, the
community now widely accepts that VR is a promising and efficient medium for studying and improving
BCI systems.
Brain-computer interaction with virtual worlds
Interactions with VEs can be decomposed into elementary tasks [1], such as navigating to change the viewpoint or selecting and manipulating virtual objects.
In virtual worlds, current BCI systems can let users change the camera position in a VE toward the left or right by using two different brain signals, such as left- or right-hand motor imagery (MI) or two steady-state visual-evoked potentials (SSVEP) at different frequencies. MI-based BCIs have also been used to control the steering of a virtual car [2], explore a virtual bar [10], or move along a virtual street [3] or through a virtual flat [7]. These BCIs typically provide the user with one to three commands, each associated with a given task.
Concerning selection and manipulation of virtual objects, developers base most BCIs on P300 or SSVEP signals. In these applications, virtual objects generally provide a stimulus that triggers a specific and recognizable brain signal that draws the user's attention to the associated object to select and manipulate it. Those BCIs let the user turn on and off devices such as a virtual TV or lamp using the P300 [4], or manipulate more complex objects such as virtual avatars using SSVEP.
Virtual reality for studying and improving BCI
Researchers can use VR to study and improve brain-computer interaction. The technology also helps
researchers perform safe and perfectly controlled experiments. For example, it has enabled the simulation
of wheelchair control with a BCI [3] and various BCI groups have used it to study how users would react
while navigating in a complex 3D environment using a BCI in close to real-life conditions [7][9].
Several studies have compared feedback consisting of classical 2D displays with feedback consisting of
entertaining VR applications [2][7]. These studies show that users’ performance ranked higher with VR
feedback than with simple 2D feedback. Moreover, evidence suggests that the more immersive the VR
display, the better users perform [7][10]. Even though some observations await confirmation, VR appears
to shorten BCI learning and increase users’ performance by increasing their motivation.
TYPICAL APPLICATIONS
Several universities and laboratories have pursued the creation of more compelling interaction with
virtual worlds using BCI, including University College Dublin, MediaLabEurope, Graz University of
Technology, University College London, University of Tokyo and INRIA.
MindBalance videogame
Researchers at University College Dublin and MediaLabEurope have created MindBalance [5], a videogame that uses a BCI to interact with virtual worlds. As Figure 1 shows, the game involves moving an animated 3D character within a virtual environment. The objective is to gain one-dimensional control of the character's balance on a tightrope using only the player's EEG. The developed BCI uses the SSVEP generated in response to phase-reversing checkerboard patterns. The SSVEP simplifies the signal-processing methods dramatically so that users require little or no training. The game positions a checkerboard on either side of the character. These checkerboards are phase-reversed at 17 and 20 Hz. Each game begins with a brief calibration period. This requires the subject to attend to the left and right checkerboards, as indicated by arrows, for 15 seconds each. The system uses the recorded data to calibrate the BCI and adapt its parameters to the current player's EEG. This process repeats three times.
When playing the game, the user must control the animated character, which is walking a tightrope
while being subjected to random movements to the left and right. If the user does not accurately attend to
the correct side to control the character after initially losing balance (first degree), the character will move
to a more precarious (second degree) state of instability, then, progressively, to an unrecoverable state
(third degree), at which point the user falls.
For correct user control, the animated character will move up a degree of balance until perfectly upright,
allowing forward progress to resume. Audiovisual feedback streams into the user’s file, providing
information on the character’s stability. The visual feedback shows the degree of inclination in relation to
the tightrope.
The BCI’s performance proved to be robust in resisting distracting visual stimulation in the game’s
visually rich environment and relatively consistent across six subjects, with 41 of 48 games successfully
completed.
The average real-time control accuracy across subjects was 89 percent. Some subjects achieved better
performance in terms of success in completing the game. This suggests that either practice or a more
motivated approach to stimulus fixation results in a more pronounced visual response.
Dual university collaboration
In a first experiment designed by researchers at the Graz University of Technology and the University
College London’s virtual reality laboratory, a tetraplegic subject mastered control of his wheelchair’s
simulated movements along a virtual street populated with 15 virtual characters.

download full report
http://irisa.fr/bunraku/GENS/alecuyer/Le..._draft.pdf
