
Language Memory Stimulation Application (LMSA)

Brain Computer Interface and Language

4/22/2010

Tobey Strauch


Introduction:

The purpose of my project this semester was to present an application idea that would use a brain computer interface to give people who cannot speak, but who can still form words in their minds, an avenue of communication. My first presentation introduced this idea as a language memory stimulation application whose front end is a brain computer interface. This is challenging because most of today's brain computer interfaces use nothing more than attention and meditation potentials to direct an action. This concept involves detecting brain waves in order to interpret speech without the words actually being spoken, or to induce conversation via computer output to enhance language training after stroke. Computer assisted language training can enhance the treatment of aphasia patients after stroke if devices can be made user friendly and functional enough to do so. Meinzer has documented studies showing that intensive language training can encourage cortical reorganization [11]. Using software to interact with the patient outside of the clinical environment during that process could improve results.

The following is the layout of the proposed language memory stimulation application (LMSA) as it would appear on screen:

[LMSA screen mockup.]


In order to accomplish such an application, it must first be determined whether there are brain waves, distinguishable in thought, that can be captured by the Neuro Sky Mindset and are distinct enough to represent a word. The first part of this report discusses analyzing brain waves with the Neuro Sky Mindset. The second part discusses putting those waves into a format that can be used by the application.

Analyzing Neuro Sky Mindset for Creating an Application Interface:

The purpose of this project, for this semester, was to determine whether there are brain waves specific to words, so that the data could be used to create the language memory stimulation application presented in my proposal. Before creating an application that uses brain waves retrieved via the Neuro Sky Mindset, I needed to see if it was possible to get a pattern from specific words.

Most of the research done today focuses on creating sounds based on brain waves generated from speech, related to the pronunciation of vowels and consonants [2],[7]. This is done by monitoring the signals sent to the muscles that move the mouth, the tongue, and the larynx. It is my thought that people with aphasia do not have to reprocess and relearn how to pronounce the vowels and consonants that they learned as children. It has been shown that connectivity is affected when the brain swells and that the transfer of those signals to the muscles is broken. The other train of thought is that the place where words are stored is moved, changed, or inaccessible. Either way, the problem lies in the relay of the information. If the signals that the brain uses to initiate speech can be transferred to a computer, then there is the possibility of relaying information without speaking.

The motivation for such research is inspired by DARPA and the push to create brain controlled prosthetics. Currently, brain controlled limbs are being tested on animals, and the Veterans Administration plans to test them with people this year.

Background about Language Oriented Brain Computer Interfaces:

Aphasia is a different issue in that the level of disability varies from patient to patient. The National Stroke Association and National Aphasia Association are referenced so that people may know the impact that merging computer science and neuroscience could have. Because I am not a neurologist, I was interested in comparing MIT's work on computer generated vowels from brain waves against what the Neuro Sky Mindset (currently used in toys that let brain waves control the movement of an object) can capture, to see whether, when we concentrate on a word or speak it, there is a specific wave pattern for each word. If a specific wave pattern can be documented, then it could be used in a software application such as the Language Memory Stimulation Game presented in my earlier paper. Because I could not build the application in a semester, I concentrated on identifying whether specific brain patterns could be related to specific words.


Using the Neuro Sky Mindset, I monitored the front and the back of the head within the limitations of the dry contact provided. The Mindset is limited by the noise induced by its physical setup and the lack of a secure contact at the node. It is also limited by its signaling process: the signaling is designed for gaming, and the power settings for the alpha and beta waves have been manipulated for that goal, so the device cannot be compared to a medical grade EEG device. Most of the software applications being created for the Mindset focus more on the concentration and meditation signals than on the actual alpha and beta waves. What I failed to explain in the proposal was that this project is not about doing neuroscience, but about capturing brain waves as an input to a computer application. This is possible because brain waves are like any other electrical wave: sinusoidal, with a frequency and a bandwidth.

The knowledge borrowed from neuroscience is used to understand the implications of the application and to learn where the waves need to be captured. The two red circles in Figure 1 show where the Neuro Sky node can monitor on the left side of the brain. Broca's area and Wernicke's area, both associated with aphasia, are of interest. Broca's area is closer to the intended position of the Neuro Sky Mindset node.


Figure 1: Brain Diagram [4]

Brain waves can be monitored from both spots, but data was collected from Broca's area for this experiment. Initially, I wanted to see if I could monitor changes in the concentration level, or whether there were brain wave changes when thinking of a specific picture. This did not work well with the Neuro Sky Mindset. However, saying the word produces a specific wave that can be monitored by Neuro Sky equipment.

Applications of Brain Computer Interfaces Involving Language:

The more we understand about speech and how the brain initiates it, the more we know about how to help those who have lost the ability to talk. This author's interest is in aphasia. The government is pushing for brain controlled prosthetics and has awarded over fifty million dollars to various researchers in the last three years [10]. However, there are many more applications. This year alone, DARPA has budgeted $4 million for a program named Silent Talk, which aims to "allow user-to-user communication on the battlefield without the use of vocalized speech through analysis of neural signals" [3]. The same idea can be used in video games, or to control robots, giving the world of robotics a whole new feedback channel [10].

Like the idea behind "Silent Talk," it was my intention in this experiment to see if there were specific patterns that could be associated with words [3]. If those patterns could be captured in the same manner that we currently capture sound waves, then each word could be assigned a distinct pattern, allowing a different mode of communication for those who cannot speak. If this were successful, I could use it to create a software game that would let aphasia patients practice sending brain waves to a computer interface: seeing the associated word in a picture, converting that to a text word, and then associating that word with sentence choices. This would give an aphasia patient an avenue through which they could possibly hold a conversation. The most positive outcome would be that they could do business with less assistance. The question that came to mind before trying to start a software application was whether or not the Neuro Sky headset could be used with such an application.

Could the Neuro Sky Mindset detect thought patterns associated with words? If thought

patterns could not be distinguished, could speech patterns be distinguished? With these ideas in mind, I

came up with two objectives:

1. Determine whether a pattern for each word could be related across the subjects tested, based on the similarities for each word. For instance, in audio studies, the words tree, dog, car, house, and ball each have consistent characteristics across speakers. Is this true for brain waves?

2. Determine whether there is enough difference between each word's brain wave pattern that it could be detected and used as input into a computer system to assist the disabled or to send commands to a unit.


The following paragraphs describe the process and display the results. The software used to decipher the output from the Neuro Sky Mindset was MATLAB with the open source EEGLAB toolbox.
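As a rough illustration of that toolchain, the sketch below shows how raw Mindset samples might be pulled into EEGLAB for the analyses that follow. The file name, channel count, and exact import settings are assumptions for illustration, not a record of the actual sessions.

% Sketch: import raw Mindset samples into EEGLAB (assumed file and settings).
eeglab;                                     % initialize the EEGLAB environment
raw = load('mindset_tree.txt');             % hypothetical text dump of raw samples
EEG = pop_importdata('data', raw', 'srate', 520, 'nbchan', 1);  % one dry node at 520 Hz
EEG = pop_resample(EEG, 128);               % resample to 128 Hz, as discussed later
figure; spectopo(EEG.data, 0, EEG.srate);   % plot the power spectral density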

Process to Distinguish Words via Brain Wave Analysis:

I picked five words to work with: tree, dog, car, house, and ball. I used generic pictures of each. I then concentrated on each picture, thought the word, tagged the output with a mark, and then spoke the word and tagged it again. I tested three subjects in this manner, for approximately two minutes per word.

I did not separate the signals for speaking and thinking, because the idea was that the subject had to think the word in order to speak it. I approached this from a science standpoint, with the idea of marking the signal before and after each event. The disclaimer is that I am not a neuroscientist and that the Neuro Sky Mindset is not medical grade. The process was to see whether a signal characteristic could be found for the thought, or for the thought preparation to speak. The signaling for speaking is easily picked up by the Neuro Sky Mindset; the thoughts are not.


Figure 2 shows data for thinking tree and then speaking tree. It is evident that there is plenty of noise.

Figure 2: Subject One's data for tree.

Figure 2 displays the type of signaling being observed. Taking a closer look at the signal, I can compare the power spectral density and the statistical characteristics. In doing so, visual inspection shows that there are differences in the brain signals when concentrating on different words and then saying them.

Statistical Results for Tree:

For this paper, I limited the word to tree and had three subjects focus on tree, tell me when they thought it by squeezing my finger, and then say tree. The following graphs, produced with EEGLAB open source code, show the statistical results for the word tree for the three subjects.

[EEG trace plot: data channel vs. time with thought markers "t"; scale ±0.000815; panel title "Emily: Tree".]


Figure 3: Statistical Results for Tree, Subject 1

[Histogram of potential (µV) with fitted normal PDF and QQ plot vs. standard normal, Channel 1. Mean: 2.78e-005; trimmed mean: 2.82e-005; standard dev.: 3.856e-005; trimmed st.d.: 3.303e-005; variance: 1.487e-009; range: 0.0003; data points: 3510; 0.025-quantile: -5.6e-005; median: 2.85e-005; 0.975-quantile: 0.000101; kurtosis: 3.39 (super-Gaussian); skewness: -0.173 (left-skewed); Kolmogorov-Smirnov test: not Gaussian.]


Figure 4: Statistical Results for Tree, Subject 2

[Histogram of potential (µV) with fitted normal PDF and QQ plot vs. standard normal, Channel 1. Mean: -1.18e-011; trimmed mean: -5.97e-007; standard dev.: 0.000209; trimmed st.d.: 0.0001825; variance: 4.369e-008; range: 0.00165; data points: 24172; 0.025-quantile: -0.0004; median: -2.14e-006; 0.975-quantile: 0.000412; kurtosis: 3.19 (super-Gaussian); skewness: 0.0358 (right-skewed); Kolmogorov-Smirnov test: not Gaussian.]


Figure 5: Statistical Results for Tree, Subject 3

[Histogram of potential (µV) with fitted normal PDF and QQ plot vs. standard normal, Channel 1. Mean: 3.01e-005; trimmed mean: 2.95e-005; standard dev.: 0.0002091; trimmed st.d.: 0.0001823; variance: 4.371e-008; range: 0.001731; data points: 103865; 0.025-quantile: -0.000371; median: 2.76e-005; 0.975-quantile: 0.000444; kurtosis: 3.19 (super-Gaussian); skewness: 0.0383 (right-skewed); Kolmogorov-Smirnov test: not Gaussian.]

Summary statistics for the word TREE:

                     Subject 1       Subject 2        Subject 3
Mean                 2.78x10^-5      -1.18x10^-11     3.01x10^-7
Standard deviation   3.856x10^-5     0.00021          0.000209
Variance             1.487x10^-9     4.369x10^-10     4.371x10^-10


The average mean amongst the three subjects is 9x10^-6, and the average standard deviation is 0.001409. The variances are within the same order of magnitude. It is difficult to state a specific percent error between the three brain waves because I do not have a baseline brain wave to compare against; comparisons can only be made between the signals. Comparisons can be made of signal patterns, and the signal patterns for the words are similar between subjects.

However, further testing with a larger subject pool would be required for significant conclusions. Dog, car, house, and ball showed similar results as far as similarities per subject tested. Subject one showed a better response because subject one was tested in a quieter environment.

Frequency Responses for TREE:

The following figures show the frequency responses of the word tree for the three subjects. These involve an average of 400 samples per frame, sampled at 128 Hz.
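The ERSP/ITC panels below were generated with EEGLAB; a comparable time-frequency view can be approximated in plain MATLAB with a short-time Fourier transform. This is a minimal sketch assuming a single-channel vector at 128 Hz; the 400-sample frame matches the description above, while the overlap choice is mine.

% Sketch: approximate time-frequency view of one channel (assumed variables).
x = EEG.data(1,:);                          % single-channel signal (assumed in memory)
fs = 128;                                   % sampling rate (Hz)
win = 400;                                  % samples per frame, as in the report
[S, F, T] = spectrogram(x, hamming(win), win/2, win, fs);
imagesc(T, F, 10*log10(abs(S).^2 + eps));   % spectral power in dB
axis xy; xlabel('Time (s)'); ylabel('Frequency (Hz)');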


Figure 6: Tree Response, Subject 1

Figure 7: Tree Response, Subject 2

[Each response figure shows ERSP (dB), ITC phase, and ERP panels; frequency axis 10-50 Hz; time axis in ms.]


Figure 8: Tree Response, Subject 3

The spectrum analysis of each word also shows similarities, indicating that the brain waves involving tree are repeatable among different people. More tests would be needed to verify the validity of this assumption. It also appears that the software would have to be trained to the individual. There are also differences due to noise and differences in the test environment. However, even with this equipment and the changes in environment, differences can be seen between tree, dog, car, house, and ball.

The following comparisons show the epochs of two subjects for tree. It can be seen that there are similar peaks in the 30 Hz range and between 10 and 20 Hz.



Figure 9: Tree Epoch, Subject 1

Figure 10: Tree Epoch, Subject 2

There were other responses for tree that raised curiosity because of the shapes in the frequency plots, but they were not repeatable.

[Epoch plots: ERSP (dB), ITC, and ERP panels; time axis -500 to 1500 ms; frequency axis 10-50 Hz.]


Figure 11: Tree Frequency Response

Figure 11 shows peaks at 30 and 45 Hz, and at 1.5 ms it appears to show the shadow of a tree. Below, the subject is outlined, but it is inconclusive at this time whether it is repeatable with filtering. If it were repeatable, there would be reason to believe that capturing images for words is possible. However, in this experiment it was not repeatable, and no other "phenomenon" showed up in the tests involving other words.

Figure 12: Frequency Response of Tree

Figure 13: Another perspective of TREE where the filtering appeared like images of trees. In this case the bottom ERP data appears to show tree roots and branches.

[Plot panels: ERSP (dB), ITC phase, and ERP; time axis 1000-3500 ms; log-spaced frequency axis 3-37 Hz.]


Quasi Random Signal (QRS) Data Analysis and Event Related Potential (ERP):

"Brain wave states are defined by wave amplitudes and gamma, beta, alpha, theta, and delta frequencies, as delineated by electroencephalographic (EEG) analysis of the brain wave signals" [8]. QRS analysis allows a quantitative view by averaging the signal states to get a fixed pattern result [5]. The following graphs display different characteristics for each of the words tree, dog, car, house, and ball.
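The averaging behind these QRS-style plots can be illustrated simply. The sketch below assumes the recording is already in memory as a vector (eegdata) along with event marker sample indices (events); both names are illustrative, and the -1000 to 1500 ms window mirrors the time axes in the figures.

% Sketch: average event-locked epochs to expose a fixed pattern.
fs = 128;                                   % sampling rate (Hz)
pre = round(1.0 * fs);                      % samples before each marker (-1000 ms)
post = round(1.5 * fs);                     % samples after each marker (+1500 ms)
epochs = zeros(numel(events), pre + post + 1);
for k = 1:numel(events)                     % assumes markers are not too near the ends
    epochs(k, :) = eegdata(events(k) - pre : events(k) + post);  % one trial per row
end
erp = mean(epochs, 1);                      % average across trials: the fixed pattern
t = (-pre:post) / fs * 1000;                % time axis in milliseconds
plot(t, erp); xlabel('Time (ms)'); ylabel('V');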

Figure 14: TREE QRS

[Trials vs. data channel, scale ±0.0009; averaged trace from -1000 to 1500 ms, amplitude on the order of 10^-4 V.]


Figure 15: DOG QRS

Figure 16: CAR QRS

[Trials vs. data channel; DOG scale ±9.4e-005, CAR scale ±0.00017; averaged traces from -1000 to 1500 ms.]


Figure 17: HOUSE QRS

Figure 18: BALL QRS

[Trials vs. data channel; HOUSE scale ±0.00015, BALL scale ±9e-005; averaged traces from -1000 to 1500 ms.]


The QRS patterns show that specific characteristics can be generated for each word.

Conclusions about Determining Patterns for Words:

Two objectives were proposed to determine whether a real time application could be made that uses brain waves as an input to a system:

1. Determine whether a pattern for each word could be related across the subjects tested, based on the similarities for each word. For instance, in audio studies, the words tree, dog, car, house, and ball each have consistent characteristics across speakers. Is this true for brain waves?

2. Determine whether there is enough difference between each word's brain wave pattern that it could be detected and used as input into a computer system to assist the disabled or to send commands to a unit.

The data gathered from this experiment shows that there are repeatable thought patterns for specific words, in that three people can think "tree" and speak "tree" and get similar output signals from the brain. Secondly, there are patterns associated with different words that can be used to distinguish different subjects.

Further investigation is warranted because the Neuro Sky Mindset has only one dry node. Also, the subjects observed were not used to the equipment. The lack of a quiet place to test and of a significant pool of subjects is also a problem; some people found being approached about monitoring their brain waves an invasion of privacy. However, within my budget and time constraints, the results seem to point in the direction that brain computer interface devices are viable. It also has to be considered that the sensitivity and noise acceptance of the Neuro Sky Mindset are not ideal: during tests, signals are changed by eye blinks, arm movements, head turns, etc. All of these issues need to be considered when examining this data. Overall, though, the future of a handheld brain computer interface with an application that aphasia patients can use for communication, as an alternative to an implant, seems inevitable. "Silent Talk" will probably come first since the military has funding; however, the technology will provide solutions in many arenas.

Language Memory Stimulation Application:

My interest in brain computer interfaces is in creating an application that could be used to help people who cannot talk. This means taking data from the Neuro Sky Mindset, buffering it, and then using the signal characteristics to create a variable that can be pointed to the object under discussion. The idea is the same as processing audio signals to text, with the brain wave substituted for the audio signal. Manipulating brain waves for this task would allow a communication device that people affected by aphasia could use in social environments. Other companies have derived similar devices for the purpose of learning.

Brain Wave Interaction for the Purpose of Learning:

The concept of using brain waves in a learning environment is not new. Paras Kaul argued in 2006 that brain power "actualizes an increased frame of reference, which further develops the capability for non-verbal communications, remote viewing, and self healing" [8]. In 1991, the Interactive Brain Wave Visual Analyzer System (IBVA4) was first marketed, and Apple computers were used to display graphical activity on the left and right hemispheres. Masahiro Kahata originated the system. Today it costs $2600.


Using the Neuro Sky Mindset is much more affordable. The signal ranges are the same, with frequencies from 0 to 50 Hz and voltages of 0-20 microvolts. However, the Mindset has less software capability and only one dry node, as explained earlier. Both systems have Bluetooth functionality. The Neuro Sky Mindset is of interest for the purpose of putting software on a handheld device like an iPod or phone and allowing interaction while mobile. This makes the Neuro Sky ideal for aphasia clients to use in social environments.

The Problem About Deciphering Brain Waves for Word Recognition:

The current problem with creating an LMSA, or any such application, is buffering the signal in real time. The Neuro Sky chip samples at 520 Hz. EEGLAB can resample this at 128 Hz and make it a bit more manageable, but doing so in real time within a cohesive package is still an open problem. Software has to be written to perform this computation and then convert the result into something usable by an application. Most of the games currently using the ThinkGear chip rely on peak variations in the meditation and attention waves produced by the algorithms in the proprietary chip. Monitoring more specific characteristics and converting them for use in a speech application may not be possible with one node. However, it is plausible, if the software can decipher the signal.
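To make the buffering problem concrete, the sketch below performs the 520 Hz to 128 Hz conversion on one-second blocks of raw samples. The block size and variable names are assumptions; a real-time version would work on a stream rather than a stored vector.

% Sketch: block-wise buffering and resampling of raw Mindset samples.
fsRaw = 520;                                % raw ThinkGear sampling rate (Hz)
fsOut = 128;                                % target rate for EEGLAB-style analysis (Hz)
blockLen = fsRaw;                           % one second of raw samples per block (assumed)
buffered = [];
for b = 1:floor(numel(raw) / blockLen)
    block = raw((b-1)*blockLen + 1 : b*blockLen);
    buffered = [buffered; resample(block(:), fsOut, fsRaw)];  % 520 Hz -> 128 Hz
end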

Neuro Sky currently uses proprietary algorithms to decipher the brain waves and distribute the information in a ThinkGear packet [14]. Applications that work in real time have to be able to handle that kind of sampling and buffer the signal in order to manipulate it for output. Deciphering a signal, isolating one cycle, and characterizing it computationally as a word is not something I can do in a semester; thus the data above shows averaging. Still, the question arose: how do I take that averaging and put it to use? One solution is to follow the path of audio waves. However, it is not as simple as taking brain wave data and converting it to an audio type format.

Audio Compared to Brain Waves:

Audio waves look like the following:

Figure 19: Audio Waves


Brain waves look like the following:

Figure 20: Brain waves while mouthing dog.

With this in mind, my thought for the conversion of brain waves to text is to duplicate audio processing. Audio processing uses Hidden Markov Models, as discussed in class. The signals are decomposed by taking a Fourier transform and de-correlating the spectrum to obtain a sequence of n-dimensional real valued vectors over a short window. The Viterbi algorithm is then used to find a "best path" and map an output to a variable [8].
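As a rough sketch of that decomposition step (the window length and the eight-coefficient feature size are my assumptions, not values from the original pipeline), the front end might look like this:

% Sketch: windowed FFT features for an HMM front end (illustrative values).
% x: row vector of samples at 128 Hz (assumed in memory).
fs = 128;                                   % EEG sampling rate (Hz)
win = 64; hop = 32;                         % window and hop sizes in samples (assumed)
nWin = floor((numel(x) - win) / hop) + 1;
feats = zeros(nWin, 8);                     % sequence of 8-dimensional real vectors
for k = 1:nWin
    seg = x((k-1)*hop + 1 : (k-1)*hop + win) .* hamming(win)';  % windowed frame
    logmag = log(abs(fft(seg)) + eps);      % log magnitude spectrum
    c = real(ifft(logmag));                 % cepstrum-style de-correlation
    feats(k, :) = c(1:8);                   % keep the first coefficients
end
% Each row of 'feats' is one observation for the HMM; a Viterbi decoder
% would then map the most likely state path to a word label.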

Based on the above summation, the next step in creating an LMSA would be to take the EEG data, compile it into a text file, convert it to a .wav file, and test whether brain wave patterns generated from thinking and mouthing words will convert to outputs similar to those used in voice to text software.

The Matlab code that can make this possible is the following:


Language Memory Stimulation via Brain Computer Interface 27

xbuff = [];                                  % buffer for the extracted segments
for k = 1:numel(peaks)                       % 'peaks' = QRS peak indices (assumed precomputed)
    x = eegdata(peaks(k)-64 : peaks(k)+63);  % segment around the k-th peak (example window)
    xbuff = [xbuff; x(:)];                   % append as a column vector
end
brainwave(xbuff, fs);                        % playback/plot helper (not shown here)
wavwrite(xbuff, fs, bits, 'audio.wav');      % write the buffer as a .wav file

This is borrowed from audio conversion. Since tone analysis looks at the differences between peaks, and neurologists do the same when getting QRS data from brain waves, it seems possible that if the differences in speech brain waves can be captured, words could be derived in the same manner. The graph below displays the results of research that changed brain waves into musical sound by treating the differences between peaks as tones [17]. These are brain waves that were converted to pitch.
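A toy version of that idea, assuming the peak indices are already detected, maps each inter-peak interval to a pitch and plays the result; the mapping constant is arbitrary and mine, not the one used in [17]:

% Sketch: map inter-peak intervals to pitches and synthesize a tone sequence.
fs = 128;                                   % EEG sampling rate (Hz)
fsA = 8000;                                 % audio playback rate (Hz)
ivals = diff(peaks) / fs;                   % inter-peak intervals in seconds
pitches = 100 ./ ivals;                     % shorter interval -> higher pitch (arbitrary)
tone = [];
for k = 1:numel(pitches)
    t = 0:1/fsA:0.25;                       % quarter-second note per interval
    tone = [tone, sin(2*pi*pitches(k)*t)];
end
soundsc(tone, fsA);                         % audition the resulting "melody"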

Figure 21: EEG to Music


Figure 22: Wavelength and pitch [15].

If I take a .wav file of a subject repeating the word dog three times, I get an output that looks like the following.

Figure 23: Voice file of "dog" (3 repeats)

Figure 24: Brain waves saying "dog" five times continuously.

Figure 24 shows the brain waves that occur when saying the word dog. However, when I converted the brain wave data to a .edf file, using a common file conversion, and then converted the .edf file to a .wav file for analysis, I got a flat line that registered as one second of data and showed nothing on the screen. It is not clear what the error was, except that the period of the features in brain waves is smaller than in audio, and the software needs to be modified to accept that. The three repeats above are seven seconds long. Off the shelf software for converting brain waves would have to be modified to deal with milliseconds instead of seconds.
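One plausible contributing cause of the flat line is amplitude scale: potentials on the order of 10^-4 V quantize to near silence in a 16-bit .wav file. The sketch below is my workaround, not a step from the original experiment; it normalizes and interpolates the EEG onto an audio sample grid before writing.

% Sketch: rescale EEG amplitudes into the [-1, 1] range expected by .wav audio.
% 'eegdata' is an EEG vector whose values are on the order of 1e-4 V (assumed).
x = eegdata - mean(eegdata);                % remove the DC offset
x = x / max(abs(x));                        % scale peaks to the [-1, 1] .wav range
fsOut = 8000;                               % a rate that audio tools will accept
y = resample(x(:), fsOut, 128);             % interpolate 128 Hz data to the audio rate
wavwrite(y, fsOut, 16, 'dog_brain.wav');    % pre-R2012 MATLAB .wav writer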

Creating software, or modifying it in the same respect as reading audio waves, has to be addressed for the use of brain waves in applications such as the LMSA. It is not the same as looking at peak data or using the values in the attention and meditation packets, because more than just the peaks has to be analyzed to identify a word. This means addressing sampling and time response issues, as well as buffering, because brain waves travel faster than sound waves. James Fong at the University of Toronto is converting brain waves to music, but with more high tech equipment than I could use for my experiment. The point is that it is possible to change brain waves into something measurable that a computer can manipulate.

Further Work for Converting Brain Waves to Words:

Machines have been made to convert brain waves to music. Machines have been made to convert brain images to pictures, so that one can redisplay memory images in a client's head. Scientists can monitor which parts of the brain activate when a person chooses an object or talks. Neuro Sky has created a headset that monitors thresholds of attention and meditation. Others are working on finding signals for other emotions. MIT has created systems that allow brain power to move chairs and prosthetic arms [2]. They have also used implants to transmit waves that allow computers to capture vowel sounds from brain wave activity.


With the advent of hardware capable of monitoring waves in milliseconds and faster, as well as in microvolts, and of computers that can calculate millions of data points in hours instead of days, it is only a short time before speech from the brain becomes more than vowel sounds repeated from signals in the head. With technology like Neuro Sky's, it is a short while before there are handheld computers that can help people voice their thoughts. With that in mind, and given the images I can produce using the Neuro Sky Mindset and open source software, I have no doubt that with more expensive equipment, the source of language in the mind, and what restricts it from leaving the lips, will be found and resolved. "Silent Talk" will become real, robots will be driven by commands from brain waves, and in the process aphasia patients will have a different avenue to use for communication.


References:

[1] American Speech and Language Association: http://asha.org/public/speech/disorders/TBI.htm

[2] Brumberg, “Artificial Speech Synthesizer Control by Brain Computer Interface”; Division of Health

Sciences and Technology, Harvard/MIT, 2009.

[3] Drummond, Katie (2009-05-14). "Pentagon Preps Soldier Telepathy Push". Wired Magazine.

http://www.wired.com/dangerroom/2009/05/pentagon-preps-soldier-telepathy-push. Retrieved 2009-

05-06.

[4] Delorme, A, Makeig, S. "EEGLAB: an open source toolbox for analysis of single-trial EEG

dynamics," Journal of Neuroscience Methods 134:9-21 (2004)

[5] Funk, James, Eyetap, http://www.eyetap.org/about_us/people/fungja/regen.html

[6] Husian, Juebin, MD, PhD, Merck, http://www.merck.com/mmhe/sec06/ch082/ch082a.html#MMHE_06_082_01, March 2008.

[7] Hudson, DL, “Inclusion of Signal Analysis in a Hybrid Medical Decision Support System,” University of

California, San Francisco, 2004.

[8] Kaul, Paras, “Brain Wave Interactive Learning: Where Multimedia and Neuroscience Converge”,

George Mason University, Advances in Computer Information, and Systems Science and Engineering,

Springer, 2006.

[9] Klee, Harold, Simulation of Dynamic Systems with Matlab and Simulink, CRC Press, University of

Central Florida, 2007.

[10] Krusienski, "Toward Enhanced P300 Speller Performance", Journal of Neuroscience Methods, Vol 167, Jan 2008.

[11] Meinzer, Marcus, “Functional re-recruitment of dysfunctional brain areas predicts language recovery in chronic aphasia,” University of Konstanz, Germany, May 2007.

[12] National Aphasia Association

[13] National Stroke Association

[14] Neuro Sky: www.neurosky.com

[15] Pope, David, Neurotech, http://www.neurotechreports.com/pages/darpaprosthetics.html, 2009.

[16] Teenager moves video icons just by imagination, press release, Washington University in St Louis, 9

October 2006.

[17] Wu, D., Li, C., & Yao, D. (2009). Scale-Free Music of the Brain. PLoS ONE, 4(6). DOI: 10.1371/journal.pone.0005915