Research Article

A Dense Long Short-Term Memory Model for Enhancing the Imagery-Based Brain-Computer Interface

Xiaofei Zhang,1 Tao Wang,2 Qi Xiong,1 and Yina Guo1

1 School of Electronic Information Engineering, Taiyuan University of Science and Technology, Taiyuan 030024, China
2 State Grid Yangquan Power Supply Company, 333 Desheng East Street, Yangquan City, Shanxi Province, China

Correspondence should be addressed to Yina Guo; [email protected]

Received 30 November 2020; Revised 19 February 2021; Accepted 13 March 2021; Published 24 March 2021

Academic Editor: Qiangqiang Yuan

Copyright © 2021 Xiaofei Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Computational Intelligence and Neuroscience, Volume 2021, Article ID 6614677, 10 pages. https://doi.org/10.1155/2021/6614677

Imagery-based brain-computer interfaces (BCIs) aim to decode different neural activities into control signals by identifying and classifying various natural commands from electroencephalogram (EEG) patterns, and then control the corresponding equipment. However, several traditional BCI recognition algorithms suffer from the "one person, one model" issue, and the convergence of the recognition model's training process is complicated. In this study, a new BCI model with a Dense long short-term memory (Dense-LSTM) algorithm is proposed, which combines the event-related desynchronization (ERD) and event-related synchronization (ERS) of the imagery-based BCI; model training and testing were conducted with its own data set. Furthermore, a new experimental platform was built to decode the neural activity of different subjects in a static state. Experimental evaluation of the proposed recognition algorithm shows an accuracy of 91.56%, which resolves the "one person, one model" issue along with the difficulty of convergence in the training process.

1. Introduction

Brain-computer interface (BCI) [1] technology directly uses the EEG signal [2] of the cerebral cortex to bypass human nerve transmission and build an interactive bridge between the central nervous system of the brain and the external environment. BCI technology plays a significant role in biomedicine and rehabilitation, among other fields [3-7].

The study of BCI was first conducted in the 1920s. The concept of BCI gradually took shape [8] after the German Hans Berger medical team collected electrical signals from the cerebral cortex through surface electrodes for the first time [2]. With the continuous development of science and technology, several BCI studies [9-15] have achieved remarkable results, among which the imagery-based BCI [16] is popular. Its research is based on the development of ERD/ERS physiology [17, 18]. By imagining limb-movement consciousness tasks, users can generate distinctive spatial patterns in the cerebral motor cortex, and pattern recognition algorithms can then translate the different activity patterns into different categories of information to realize brain-computer interaction. Compared to visual/auditory evoked BCIs, it does not require stimulation and assistance from the external environment, belonging instead to the active BCIs [19]. Therefore, it has a broader prospective application.

The imagery-based BCI was first proposed by the team of the American scholar Wolpaw in 1993. The team designed a BCI system that used the μ-wave in the brain's electrical signal to control a mouse; the recognition accuracy was 70% [16]. Subsequently, several other imagery-based BCI studies emerged. In 2015, Yao proposed a BCI based on stimulation assistance. The recognition accuracy was 80%, which alleviated BCI blindness to a certain extent (the recognition accuracy rate was previously lower than 70%); however, the accuracy rate still had significant room for improvement [20]. In 2017, Li et al. introduced deep learning to the application of BCI [21], combining the optimal wavelet packet transform (OWPT) method and the long short-term memory (LSTM) network for feature extraction and classification. The results indicated high recognition accuracy; however, the OWPT method is time-consuming and is not suitable for online



recognition. In November 2018, Lin and Shih embedded the following two deep learning models into the BCI system for MI-EEG signal classification to identify two imaginary movements: the LSTM and the generalized regression neural network (GRNN); the results indicated that the performance of GRNN is better than that of the other strategies [22]. In April 2019, Professor Anumanchipalli, a neurosurgeon at the University of California, San Francisco (UCSF), and his colleagues developed a decoder that converts human brain nerve signals into speech, which is powerful for helping patients who cannot speak achieve vocal communication [15]. In 2019, Jiao et al. proposed a novel sparse group representation model (SGRM) for improving the efficiency of MI-based BCI by exploiting intersubject information, which addresses the long recording time needed to collect sufficient electroencephalogram (EEG) data for robust classifier training [23]. In a recent study, Willett et al. demonstrated an intracortical BCI in which a new recurrent neural network decoding method was used to decode imaginary writing actions from neural activities in the motor cortex and translate them into text in real time. The subject's typing speed was 90 characters per minute and the accuracy rate was 99%; the system also includes a general automatic correction function.

Apparently, the feature extraction, classification, and recognition algorithms for signals in the BCI directly determine the practicability and effectiveness of the BCI. However, most existing experimental BCI paradigms use traditional classification algorithms based on feature values [1, 2, 24-29], which largely limit the development of BCI, including "BCI blindness" (recognition accuracy of less than 70%), "one person, one model" (due to individual differences, recognition models cannot be shared), and model training issues such as difficulty in convergence.

The purpose of this study is to solve the "one person, one model" issue while ensuring recognition accuracy. Thus, a composite network model is considered, for which the contribution is twofold:

(1) Model: aiming to resolve the issues of the traditional BCI model, a new BCI model is proposed, and a new experimental platform is built to verify the feasibility of the model. Furthermore, the data collected during the experiment is used to compile a data set for model training and testing to achieve accurate decoding of the signals.

(2) Dense LSTM algorithm: since the EEG signals in the imagery-based BCI require deeper features, when the LSTM network alone is used for classification, the model training process shows significant jitter in recognition accuracy as well as difficulty in convergence. The machine vision approach is abandoned and deep learning is introduced to effectively improve the poor generalization ability, and a new classification algorithm is proposed. First, the recognition accuracy of the algorithm is ensured through the LSTM network and compared across different groups of people; second, the convergence of the model training process is ensured by grafting on the Dense layer, and the accuracy of gesture recognition reaches 91.56%.

Section 2 introduces the model and experimental platform. Sections 3 and 4 present the Dense LSTM algorithm and the analysis of the experimental results, respectively. Finally, Section 5 summarizes the study.

2. Model and Experimental Platform

In this section, the new model is first proposed, followed by an introduction of the experimental platform and process.

2.1. Model. The new imagery-based BCI model is shown in Figure 1. The model is mainly composed of three parts: signal collection and preprocessing, data recognition and classification, and control of the application equipment. During the experiment, the subject imagined gestures in a static state while the acquisition and preprocessing of the EEG signals were simultaneously completed by the device; the processed data was then transmitted to the classification module. Finally, the control of the application device was completed according to the recognition result, realizing the decoding of the subject's "mental gesture action."

During the experiment, each subject imagined five gestures in turn, namely (i) thumb bending, (ii) index finger bending, (iii) middle finger bending, (iv) ring finger bending, and (v) little finger bending. Each gesture needed to be imagined eight times. During the experiment, the subjects had to remain completely still, and the surrounding environment was kept silent to avoid any noise interference. The experiment recorder started to record and intercept the data for 10 seconds after the EEG had stabilized. The subjects stopped imagining the gesture at the end beep after 10 seconds; they could then rest for half a minute until the next data collection in this cycle.

The time schedule of a single data-collection experiment is shown in Figure 2, which is divided into the following four steps:

Step 1: during the first five seconds of the acquisition, the subject was completely relaxed and still.
Step 2: imaginary movements of the intended gestures were performed for 10 s in the brain after hearing a prompt sound.
Step 3: after 10 seconds, the same sound prompted the subject to stop imagining the gestures.
Step 4: the subject rested for half a minute until the next data collection in this cycle.

A total of 20 subjects participated in this experiment, including 15 males and 5 females. The ages of the subjects ranged between 18 and 40, with an average age of 25. All subjects were in good health and met the requirements for participating in the BCI experiment. Prior to the experiment, all subjects were trained in and studied the imagery-based BCI; they were all informed of the experimental precautions to ensure


that the participants understood the entire process, so as to avoid unnecessary factors affecting the experimental results.

2.2. Experiment Platform. The EEG acquisition equipment required for the experiment was the NCERP EEG and evoked potential instrument developed by Shanghai Nuocheng Electric Co., Ltd. The instrument consists of a computer host, a display, an audio and video stimulation box, an EEG main control box, a physiological amplification box, and electrodes. The equipment has a high sampling rate of up to 8 kHz per channel and a 32-bit resolution, so the collected data is more accurate. It can adapt the collected data to actual experimental needs to obtain more EEG data characteristics. The instrument adopts a noninvasive EEG signal acquisition method and uses 24-channel silver electrodes. As shown in Figure 3, the placement follows the international standard 10-20 system. The electrode cap and the physiological amplification box are connected to complete physiological signal acquisition and amplification. The collected data is then transmitted to the EEG main control box through optical fiber and finally to the computer host through the USB interface. The constructed experimental platform is shown in Figure 4.

2.3. Signal Collection and Preprocessing. During the experiment, the data can be preprocessed by setting the parameters of the EEG signal collection instrument. The EOG artifacts were removed from the collected EEG signal using the artifact correction method [30], and a low-pass filter was set to remove the interference near the 50 Hz power frequency. The valid data, after preprocessing steps such as interception and arrangement, is further normalized and mapped to the interval [−1, 1]. Finally, the processed data is compiled into a data set for the experiment; the parameters of the data set are shown in Table 1.
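The preprocessing steps described above can be sketched as follows. This is a minimal illustration, not the authors' exact pipeline: the filter order and the peak-based normalization are assumptions, and only the low-pass filtering and [−1, 1] mapping from the text are reproduced.

```python
# Sketch of the described preprocessing: 50 Hz low-pass filtering followed by
# normalization to [-1, 1]. Filter order and normalization rule are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 8000  # sampling rate of the acquisition instrument (8 kHz per channel)

def preprocess(eeg, fs=FS, cutoff=50.0):
    """Low-pass filter below the 50 Hz power frequency, then map to [-1, 1]."""
    b, a = butter(4, cutoff, btype="low", fs=fs)   # 4th-order Butterworth design
    filtered = filtfilt(b, a, eeg)                 # zero-phase filtering
    peak = np.max(np.abs(filtered))
    return filtered / peak if peak > 0 else filtered

# Example: 10 s of synthetic two-channel data (standing in for channels F3, F4)
raw = np.random.randn(2, FS * 10)
clean = preprocess(raw)
```

The zero-phase `filtfilt` call avoids shifting the EEG waveform in time, which matters when 10 s windows are cut at fixed offsets.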

Since each partition of the human brain corresponds to different functions, and the prefrontal cortex is responsible for processing imagination and thinking-related activities, the data from channels F3 and F4 (as shown in Figure 3), corresponding to the electrodes over the frontal lobe, are mainly used to make the data set. During the experiment,

Figure 2: EEG signal acquisition process (relax, 0–5 s; beep; gesture imagination, 5–15 s; beep; rest for 30 s).

Figure 1: Imagery-based BCI model (data preprocessing, classification, and control).

Figure 3: Locations of the 24 electrodes.


after the subjects heard the prompts, they imagined each gesture in turn. At the same time, the data recorder started recording after the EEG had stabilized, intercepting 10 seconds of data to make the data set. The data set used in this experiment is compiled from the experimental data of the 20 subjects, from which 80% of the data is randomly selected for training the classification and recognition model and 20% is used for model testing; all 20 subjects share one model.
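The 80/20 split across all subjects can be sketched as below. The array shapes are assumptions for illustration (the true trial length and feature layout are not given in the text); only the 6400-trial count, the five labels, and the shared random split are taken from the source.

```python
import numpy as np

# Hypothetical data set layout: 6400 trials, each a (channels, samples) window,
# with labels 0-4 for the five imagined gestures. Shapes are assumptions.
rng = np.random.default_rng(0)
X = rng.standard_normal((6400, 2, 400))   # 2 channels (F3, F4), assumed window
y = rng.integers(0, 5, size=6400)

idx = rng.permutation(len(X))             # shuffle trials across all 20 subjects
split = int(0.8 * len(X))                 # 80% train / 20% test
train_idx, test_idx = idx[:split], idx[split:]
X_train, y_train = X[train_idx], y[train_idx]
X_test, y_test = X[test_idx], y[test_idx]
```

Because trials from all subjects are pooled before shuffling, a single model is trained for everyone, which is exactly the "one model for 20 subjects" setup the paper evaluates.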

3. Dense LSTM Algorithm

In this section, a new recognition model is established by combining the Dense layer [31] with the LSTM network, and the model parameters are presented to achieve an optimal combination.

3.1. Dense LSTM. The classification algorithm module belongs to the data processing and recognition part of the BCI. Common classifiers such as the support vector machine (SVM), multilayer perceptron (MLP), and Bayes classifier are ubiquitous, but their recognition accuracy is not high, and the "BCI blindness" issue exists, among other problems. Lin et al. used the LSTM algorithm to achieve higher efficiency, but it takes longer to extract feature values using the optimal wavelet packet transform method. Thus, the Dense LSTM model is proposed in this study. As shown in Figure 5, the first half of the model is the LSTM layer, followed by the added Dense layers.

The LSTM network consists of three designed gates. The forget gate determines whether the output information of the previous moment is retained or discarded. Acting on the output of the previous time step and the input of the current time step, it produces a value in the range of 0 to 1 via the sigmoid function. The role of the forget gate is shown in the following equation:

f_t = sigmoid(W_f^T · S_{t−1} + U_f^T · x_t + b_f)  (1)

In the formula, W and U are the weights of the gate, S is the output of the previous moment, x is the input at the current moment, b is the bias term of the gate, the subscript "f" denotes the gate, and t and t−1 indicate different moments.

The role of the input gate is to control the input at the current time, which directly determines how much new information will be fed into the hidden layer of the LSTM. The working principle of the input gate is shown in the following equation:

i_t = sigmoid(W_i^T · S_{t−1} + U_i^T · x_t + b_i)  (2)

The candidate gate calculates the total storage of the input at the current time and the previous input information. The working process is shown in the following equation:

c̃_t = tanh(W_c^T · S_{t−1} + U_c^T · x_t + b_c)  (3)

The update of information while the LSTM is functioning is determined by the forget gate, the input gate, and the candidate gate. The forget gate determines the information to be discarded, equal to f_t × c_{t−1}, and the input gate and the candidate gate determine the new information added at the current moment, equal to i_t × c̃_t. Combining the two, the hidden layer computes the new state as shown in the following equation:

c_t = f_t × c_{t−1} + i_t × c̃_t  (4)

The output gate determines how much information is passed to the next moment, that is, S_t, and the calculation is obtained from c_t. The output gate functions as shown in the following equations, where o_t is the weight of the output gate, in the range of 0 to 1; the size of o_t determines the information passed to the next moment:

o_t = sigmoid(W_o^T · S_{t−1} + U_o^T · x_t + b_o)  (5)

S_t = o_t × tanh(c_t)  (6)
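Equations (1)–(6) can be traced directly in code. The sketch below is a minimal NumPy single-step LSTM cell following the paper's gate formulation (W multiplying the previous output S, U multiplying the input x); the toy dimensions are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, s_prev, c_prev, params):
    """One LSTM step following equations (1)-(6); params holds (W, U, b) per gate."""
    (Wf, Uf, bf), (Wi, Ui, bi), (Wc, Uc, bc), (Wo, Uo, bo) = params
    f_t = sigmoid(Wf.T @ s_prev + Uf.T @ x_t + bf)       # forget gate, eq. (1)
    i_t = sigmoid(Wi.T @ s_prev + Ui.T @ x_t + bi)       # input gate, eq. (2)
    c_tilde = np.tanh(Wc.T @ s_prev + Uc.T @ x_t + bc)   # candidate, eq. (3)
    c_t = f_t * c_prev + i_t * c_tilde                   # cell state update, eq. (4)
    o_t = sigmoid(Wo.T @ s_prev + Uo.T @ x_t + bo)       # output gate, eq. (5)
    s_t = o_t * np.tanh(c_t)                             # hidden output, eq. (6)
    return s_t, c_t

# Toy dimensions: 3 input features, 4 hidden units (assumed for illustration)
rng = np.random.default_rng(1)
params = tuple((rng.standard_normal((4, 4)) * 0.1,   # W: recurrent weights
                rng.standard_normal((3, 4)) * 0.1,   # U: input weights
                np.zeros(4))                         # b: bias
               for _ in range(4))
s, c = lstm_step(rng.standard_normal(3), np.zeros(4), np.zeros(4), params)
```

Because o_t and tanh(c_t) are both bounded, the hidden output s_t always stays in (−1, 1), which matches the gating role described above.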

During the data collection process, the collected EEG signal is transmitted to the LSTM as input data, and the result of the LSTM's analysis and processing is transmitted to the connected Dense network. The Dense network part is composed of two Dense layers with the same network structure. Each Dense

Figure 4: NCERP EEG and evoked potential instrument.

Table 1: Imagination movement data set.

Name: Imaginary data set
Number of subjects: 20
Number of experiments: 40 times/person
Experimental content: Gesture imagination
Data labels: 0, 1, 2, 3, 4
Number of data: 6400
Data preprocessing: (1) remove EOG (electrooculogram); (2) 50 Hz low-pass filtering; (3) remove 50 Hz power frequency noise; (4) normalization


network is composed of an input layer, a hidden layer, and an output layer; the structure is shown in Figure 6. All neurons between adjacent layers are connected to each other. Through the superposition of the composite network, the feature propagation of the data is strengthened so that the network can mine deeper features in the data and promote the convergence of the classification and recognition model during training.

The principle of the Dense network operation is shown in the following equations:

a_{t−1} = W_11 · o_{t−1} + W_12 · o_t + W_13 · o_{t+1} + b_1  (7)

a_t = W_21 · o_{t−1} + W_22 · o_t + W_23 · o_{t+1} + b_2  (8)

a_{t+1} = W_31 · o_{t−1} + W_32 · o_t + W_33 · o_{t+1} + b_3  (9)
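Equations (7)–(9) are the elementwise form of an ordinary fully connected layer: each output a_j is a weighted sum of all inputs o_k plus a bias. A small sketch (toy values, assumed for illustration) showing the matrix form and the elementwise form agree:

```python
import numpy as np

# Equations (7)-(9) in matrix form: a = W @ o + b.
rng = np.random.default_rng(2)
o = rng.standard_normal(3)          # inputs o_{t-1}, o_t, o_{t+1}
W = rng.standard_normal((3, 3))     # weights W_jk
b = rng.standard_normal(3)          # biases b_1, b_2, b_3

a = W @ o + b                       # a_{t-1}, a_t, a_{t+1} computed at once

# Elementwise form of equation (8), for comparison
a_t = W[1, 0] * o[0] + W[1, 1] * o[1] + W[1, 2] * o[2] + b[1]
```

Writing the Dense layer as a single matrix product is what makes stacking two such layers after the LSTM cheap, as the parameter-setting experiments in Section 3.2 exploit.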

Since the output of each network layer is only a linear combination, which limits the recognition capability of the model, an activation function is applied after the Dense network to produce a nonlinear effect on the output, solving problems that a purely linear model cannot and improving the effectiveness and reliability of the classification and recognition model. The Softmax activation function can be expressed by the following equation:

σ(s)_j = e^{s_j} / Σ_{k=1}^{K} e^{s_k},  j = 1, 2, …, K  (10)

The Softmax activation function maps the output of the Dense network to a vector (σ(s)_1, σ(s)_2, σ(s)_3, σ(s)_4, …), where each σ(s)_j is a real number in the range (0, 1) indicating the probability of belonging to the corresponding category in the multiclassification problem, and Σ_j σ(s)_j = 1. The network finally obtains the classification result from the probability of each class in the mapped vector.
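Equation (10) can be sketched in a few lines; the example scores are arbitrary placeholders standing in for Dense outputs over the five gesture classes.

```python
import numpy as np

def softmax(s):
    """Equation (10): map Dense-layer outputs to class probabilities."""
    e = np.exp(s - np.max(s))   # subtract the max for numerical stability
    return e / e.sum()

# Five arbitrary scores standing in for the five gesture classes (0-4)
probs = softmax(np.array([2.0, 1.0, 0.5, 0.1, -1.0]))
predicted = int(np.argmax(probs))   # the class with the highest probability
```

Subtracting the maximum score before exponentiating does not change the result (the factor cancels in the ratio) but prevents overflow for large Dense outputs.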

Figure 5: Flow chart of the Dense LSTM algorithm (LSTM layer followed by two Dense layers).

Figure 6: Structure of the Dense layer (input layer, hidden layer, and output layer).


3.2. Model Parameter Settings. The classification model of the BCI proposed in this study is formed by connecting the LSTM network and the Dense network. Different network-layer connections have varying degrees of impact on the recognition effect; therefore, appropriate network parameters must be set to ensure the optimal recognition effect.

First, the unit size of the LSTM layer is examined by comparing the recognition accuracy and the time required for each training iteration; the result is shown in Figure 7. When the unit size is 128, the recognition accuracy is 91.56% and each iteration takes 0.52 seconds. When the unit size is 256, the recognition accuracy is 91.80%, but each iteration takes 1.62 seconds. The difference in recognition rate is not obvious, but the iteration times differ considerably. The former is therefore optimal, and the unit size of the LSTM layer is set to 128.

We compared several versions of our Dense LSTM algorithm by testing different unit sizes in the Dense networks and observing the training process of the classification model as well as the recognition accuracy; the results are shown in Table 2. When the LSTM layer is grafted to one Dense network, reducing the unit size of the Dense network from 256 to 128 increases the recognition accuracy rapidly from 82.32% to 87.66%; when the unit size decreases from 128 to 32, the recognition accuracy rises slowly to 89.52%. When the LSTM layer is grafted to two Dense networks and the unit size of the first Dense network is fixed at 256, reducing the unit size of the second Dense network from 256 to 128 and from 64 to 32 increases the recognition accuracy from 85.43% to 86.42% and from 88.59% to 89.32%, respectively. When the unit size of the first Dense network is fixed at 128 and the unit size of the second is reduced from 64 to 32, the recognition accuracy rises from 89.45% to 90.22%. When the first Dense network is fixed at 64 units and the second at 32, the recognition accuracy is 90.68% with the MSE loss function; after changing to the Softmax configuration, the recognition accuracy reaches 91.56%.

Based on the overall recognition accuracy and the time spent training the classification model, the combination of LSTM layer and Dense network layers selected for the recognition model is LSTM(128)-Dense(64)-Dense(32).
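The size of the chosen LSTM(128)-Dense(64)-Dense(32) stack can be checked with the standard parameter-count formulas. The 2-feature input (one value per frontal channel F3/F4 per time step) and the 5-class Softmax head are assumptions for illustration; the source does not state the exact input dimensionality.

```python
# Parameter count for an LSTM(128)-Dense(64)-Dense(32)-Dense(5) stack,
# assuming a hypothetical 2-feature input per time step.
def lstm_params(input_dim, units):
    # Four gates (forget, input, candidate, output), each with recurrent
    # weights W, input weights U, and a bias b, per equations (1)-(6).
    return 4 * (units * (input_dim + units) + units)

def dense_params(input_dim, units):
    # A fully connected layer: one weight per input-output pair, plus biases.
    return input_dim * units + units

total = (lstm_params(2, 128)       # LSTM layer with 128 units
         + dense_params(128, 64)   # first Dense layer
         + dense_params(64, 32)    # second Dense layer
         + dense_params(32, 5))    # Softmax head over the five gestures
```

Almost all of the capacity sits in the LSTM's recurrent weights, which is consistent with the paper's finding that shrinking the Dense layers costs little accuracy while aiding convergence.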

4. Results and Analysis

In this section, the data set collected in the experiment is used to train and test the model, verifying the feasibility of the proposed model through targeted verification of the convergence of the model training process and of the recognition accuracy for different groups of people. All experiments achieved control of the uHand2.0 manipulator palm; that is, the subject simply imagines gestures in the brain while at rest, and the manipulator palm performs the same gesture at the same time.

4.1. Verification of Convergence. After experimental comparison, the optimal model parameters were obtained: LSTM(128)-Dense(64)-Dense(32). The training processes of the proposed Dense LSTM and the plain LSTM algorithm are shown in Figure 8. The red line indicates the change in recognition accuracy of the Dense LSTM algorithm during training, and the blue line indicates that of the LSTM model. In the first 100 iterations, the recognition accuracy of both models rapidly increases to approximately 85%. After 100 iterations, the recognition accuracy increases relatively slowly. After 300 iterations, the Dense LSTM algorithm continues to grow slowly, while the LSTM algorithm does not grow. Finally, the recognition accuracy of the Dense LSTM algorithm reaches 91.56%, and that of the LSTM algorithm is slightly lower than 90%. During model training, the recognition accuracy of the Dense LSTM algorithm changes smoothly, while the LSTM shows significant jitter and several glitches, making the model unstable. Therefore, the Dense LSTM algorithm proposed in this study has an optimization effect.

Table 2: Influence of the unit size of network parameters in each layer.

Parameter unit | Accuracy (%)
128-256 (LSTM-Dense) | 82.32
128-128 (LSTM-Dense) | 87.66
128-64 (LSTM-Dense) | 88.93
128-32 (LSTM-Dense) | 89.52
128-256-256 (LSTM-Dense-Dense) | 85.43
128-256-128 (LSTM-Dense-Dense) | 86.42
128-256-64 (LSTM-Dense-Dense) | 88.59
128-256-32 (LSTM-Dense-Dense) | 89.32
128-128-64 (LSTM-Dense-Dense) | 89.45
128-128-32 (LSTM-Dense-Dense) | 90.22
128-64-32 (LSTM-Dense-Dense) (MSE) | 90.68
128-64-32 (LSTM-Dense-Dense) (Softmax) | 91.56

Figure 7: The influence of the LSTM unit parameter on experimental results (accuracy 87.16%, 89.47%, 91.56%, and 91.80% for unit sizes 32, 64, 128, and 256, respectively).


4.2. Verification of Recognition Accuracy. As shown in Table 3, the average recognition rate and running time of the newly proposed Dense LSTM algorithm are compared with several neural networks commonly used in deep-learning BCI, including the recurrent neural network (RNN), the gated recurrent unit (GRU), convolutional neural networks (CNN) [32], and the LSTM, as well as with feature extraction-classifier algorithms in machine learning, including the support vector machine (SVM) and the Bayesian network. The results indicate that, under the same parameter settings, the Dense LSTM algorithm proposed in this study has a higher average recognition rate for all subjects, so it is more suitable for classifying EEG signals in the imagery-based BCI. To a certain extent, it solves the nonconvergence problem of the LSTM network when training the classification and recognition model for the imagery-based BCI and improves the practicality and generalization performance of the imagery-based BCI. All subjects used the same classification and recognition model, and the recognition accuracy rate was 91.56%. This partially resolves the "BCI blindness" and "one person, one model" problems common in BCIs, making the application of imagery-based BCIs more widespread.

The trained classification and recognition model is applied to the classification and recognition module of the imagery-based BCI model, and 5 of the 20 subjects are randomly selected for a gesture-recognition cross-validation experiment in imagination mode. Each gesture is performed 20 times; the experimental results for the five randomly selected subjects are shown in Table 4. Over the experiment, the 5 subjects made 100 attempts for each gesture.

The imagery-based BCI data was randomly selected for five age groups, 15–20, 20–25, 25–30, 30–35, and 35–40 years old, for classification. The test results are shown in Figure 9: the gesture-recognition accuracy for all age groups reached more than 90.5%, with the highest, 92.58%, for ages 25 to 30. The farther the remaining four age groups were from the 25–30 range, the lower the recognition accuracy. The experimental results show that age causes the recognition

Figure 8: LSTM and Dense LSTM model training processes (accuracy versus epochs, 0–800).

Table 3: Comparison of average accuracy and running time of classification algorithms.

Classification algorithm | Classification accuracy (%) | Running time (sec)
Dense LSTM | 91.56 | 3.79
LSTM | 90.01 | 4.23
RNN | 85.34 | 4.56
GRU | 83.56 | 6.15
CNN | 81.49 | 5.32
SVM | 70.86 | 4.69
Bayesian | 69.45 | 6.48


accuracy of the imagery-based BCI to vary, but the impact is not very significant; it is related to the attention-control ability of subjects of different ages.

A total of 100 samples were randomly selected from the data of female and male subjects for classification and recognition. The experimental results are shown in Table 5, which presents the classification and recognition of the gestures of female and male subjects; the overall recognition accuracy of female subjects was 92.4% and that of male subjects was 89.2%. The recognition accuracy of female subjects is slightly higher than that of male subjects, and the recognition accuracy rates of the five gestures are approximately the same.

In order to prove the feasibility of the proposed algorithm, we focused on one case study, taking the middle finger as an example. The number of subjects' samples is increased on the basis of the original data set; however, the experimental process is similar to the previous one. We randomly selected 5 subjects from all the collected samples, the male and female sample data sets, and the different age groups for

Table 4: Cross-validation experiment of gesture recognition (recognition accuracy per actual gesture, %).

Subject      Gesture 0    Gesture 1    Gesture 2    Gesture 3    Gesture 4    Average accuracy
Subject 1    90           80           100          95           95           92
Subject 2    95           85           90           90           90           90
Subject 3    90           90           95           100          85           92
Subject 4    80           90           95           85           90           88
Subject 5    90           85           90           90           95           90
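As a sanity check on Table 4, the per-subject average accuracies can be recomputed from the per-gesture values; a minimal NumPy sketch (values transcribed from Table 4):

```python
import numpy as np

# Per-gesture recognition accuracies (%) for the five subjects, from Table 4.
table4 = np.array([
    [90, 80, 100, 95, 95],   # Subject 1
    [95, 85, 90, 90, 90],    # Subject 2
    [90, 90, 95, 100, 85],   # Subject 3
    [80, 90, 95, 85, 90],    # Subject 4
    [90, 85, 90, 90, 95],    # Subject 5
])

# Average accuracy per subject (mean over the five gestures).
subject_means = table4.mean(axis=1)
print(subject_means)  # [92. 90. 92. 88. 90.]
```

The recomputed means match the "Average accuracy" column of Table 4.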

Figure 9: Recognition accuracy for the five age groups (15~20, 20~25, 25~30, 30~35, and 35~40 years old).

Table 5: Cross-validation experiment of gesture recognition for different genders (recognition accuracy per actual gesture, %).

Gender    Gesture 0    Gesture 1    Gesture 2    Gesture 3    Gesture 4
Female    93           93           92           91           93
Male      90           89           88           89           90

Table 6: The middle-finger case study (recognition accuracy, %).

Samples           Subject 1    Subject 2    Subject 3    Subject 4    Subject 5    Average accuracy
Random samples    91           94           89           95           94           92.6
Male              90           89           91           88           90           89.6
Female            96           91           92           90           93           92.4
15–20 years       89           90           89           91           87           89.2
20–25 years       90           92           92           91           90           91.0
25–30 years       91           93           92           91           94           92.2
30–35 years       91           92           90           93           92           91.6
35–40 years       90           92           90           92           93           91.4

Table 7: Significance test results.

Source     SS        df    MS         F       Prob > F
Columns    720.78    4     180.194    2.38    0.1381
Rows       287.4     2     143.702    1.9     0.2117
Error      606.12    8     75.765
Total      1614.3    14


recognition testing, compared with the previous random test results. The results are shown in Table 6. It can be seen from the results that, when focusing on one case study and appropriately increasing the number of subject samples, the results are basically the same as the previous ones.

Taking the middle finger as an example, the significance test is performed on the random samples and the different-gender sample data. The experimental results are shown in Table 7. From the results, p(1) = 0.1381, keeping the null hypothesis, which means there is no significant difference between different subjects; and p(2) = 0.2117, where the null hypothesis is also kept, so there is no significant difference between subjects of different genders.

Similarly, the significance test is performed on the random samples and the different-age-group sample data; the results are as follows: p(1) = 0.5619, which means there is no significant difference between different subjects in different age groups; p(2) = 0.1161, which means the model is also applicable to subjects of different age groups; and p(3) = 0.5132, which means the interaction between age and different subjects is not obvious.
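The p-values reported in Table 7 follow directly from the F statistics and their degrees of freedom; a small sketch using SciPy's F-distribution survival function (mean squares and degrees of freedom taken from Table 7) illustrates the computation:

```python
from scipy.stats import f

# Two-way ANOVA from Table 7: F = MS_effect / MS_error,
# p = P(F(df_effect, df_error) > F_observed).
df_error = 8
p_columns = f.sf(180.194 / 75.765, 4, df_error)  # between-subject effect
p_rows = f.sf(143.702 / 75.765, 2, df_error)     # gender effect
print(p_columns, p_rows)  # approximately 0.138 and 0.212
```

Both values reproduce the "Prob > F" column of Table 7 to within rounding.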

5. Conclusion

The model and the algorithm for the problem of the imaginary BCI were investigated in this study. The contributions of this study addressing this challenging problem are as follows:

Model: a new imaginary BCI model was proposed. Model training and testing were conducted with its own data set, and a new hardware platform was built to verify the feasibility of the model.

Algorithm: due to the low recognition accuracy of the traditional classification and recognition algorithms and the problem of "BCI blindness," a Dense-LSTM algorithm was proposed. This algorithm combines the LSTM network with the Dense network to improve the recognition accuracy and solve the problem of "one person, one model."

Experimental results showed that the recognition accuracy of the proposed Dense-LSTM algorithm was as high as 91.56%, which is significantly better than that of the other algorithms, with sufficient generalization ability. Future work will focus on improving the recognition accuracy of the BCI under various interference environments to improve its practicability and effectiveness, so that it can be used in the medical and rehabilitation fields.

Data Availability

The data sets generated and analyzed during the current study are available from the corresponding author on reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest with respect to the research, authorship, and/or publication of this article.

Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grant 61301250, the Key Research and Development Project of Shanxi Province under Grant 201803D421035, the Natural Science Foundation for Young Scientists of Shanxi Province under Grant 201901D211313, and a Research Project supported by the Shanxi Scholarship Council of China under Grant HGKY2019080.

References

[1] X. Zhang, L. Yao, X. Wang, J. Monaghan, and D. McAlpine, "A survey on deep learning based brain computer interface: recent advances and new frontiers," vol. 1, no. 1, 2018.

[2] H. Berger, "Über das Elektrenkephalogramm des Menschen," European Archives of Psychiatry and Clinical Neuroscience, vol. 87, pp. 527–570, 1928.

[3] W.-T. Shi, Z.-J. Lyu, S.-T. Tang, T.-L. Chia, and C.-Y. Yang, "A bionic hand controlled by hand gesture recognition based on surface EMG signals: a preliminary study," Biocybernetics and Biomedical Engineering, vol. 38, no. 1, pp. 126–135, 2018.

[4] Y. Bai, L. Chen, H. Xue et al., "Research on interaction methods based on gesture sensing and smart devices," Computer and Digital Engineering, vol. 47, no. 4, pp. 990–996, 2019.

[5] X. Jiang, L.-K. Merhi, Z. G. Xiao, and C. Menon, "Exploration of force myography and surface electromyography in hand gesture classification," Medical Engineering & Physics, vol. 41, pp. 63–73, 2017.

[6] W. Yang, J. Lu, and H. Xie, "Dynamic rehabilitation gesture recognition based on optimal feature combination TNSE-BP model," International Journal of Computational and Engineering, vol. 4, no. 3, 2019.

[7] A. Lee, Y. Cho, S. Jin, and N. Kim, "Enhancement of surgical hand gesture recognition using a capsule network for a contactless interface in the operating room," Computer Methods and Programs in Biomedicine, vol. 190, Article ID 105385, 2020.

[8] C. Li, G. Li, J. Lei et al., "A review of brain-computer interface technology," Acta Electronica Sinica, vol. 33, no. 7, pp. 1234–1241, 2005.

[9] L. A. Farwell and E. Donchin, "Talking off the top of your head: toward a mental prosthesis utilizing event-related brain potentials," Electroencephalography and Clinical Neurophysiology, vol. 70, no. 6, pp. 510–523, 1988.

[10] N. Birbaumer, T. Hinterberger, A. Kübler, and N. Neumann, "The thought-translation device (TTD): neurobehavioral mechanisms and clinical outcome," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 11, no. 2, pp. 120–123, 2003.

[11] N. Birbaumer, N. Ghanayim, T. Hinterberger et al., "A spelling device for the paralysed," Nature, vol. 398, no. 6725, pp. 297–298, 1999.

[12] S. Gao, Y. Wang, X. Gao, and B. Hong, "Visual and auditory brain-computer interfaces," IEEE Transactions on Biomedical Engineering, vol. 61, no. 5, pp. 1436–1447, 2014.

[13] M. Cheng, X. Gao, S. Gao, and D. Xu, "Design and implementation of a brain-computer interface with high transfer rates," IEEE Transactions on Biomedical Engineering, vol. 49, no. 10, pp. 1181–1186, 2002.

[14] F. Guo, B. Hong, X. Gao, and S. Gao, "A brain–computer interface using motion-onset visual evoked potential," Journal of Neural Engineering, vol. 5, no. 4, pp. 477–485, 2008.

[15] G. K. Anumanchipalli, J. Chartier, and E. F. Chang, "Speech synthesis from neural decoding of spoken sentences," Nature, vol. 568, no. 7753, pp. 493–498, 2019.

[16] J. R. Wolpaw, D. J. McFarland, G. W. Neat, and C. A. Forneris, "An EEG-based brain-computer interface for cursor control," Electroencephalography and Clinical Neurophysiology, vol. 78, no. 3, pp. 252–259, 1991.

[17] K. Zuo, "ERD/ERS analysis basis of motor imagery EEG signals," China New Communications, vol. 18, no. 4, p. 122, 2016.

[18] L. Yu, X. Wang, Y. Lyu et al., "Electrophysiological evidences for the rotational uncertainty effect in the hand mental rotation: an ERP and ERS/ERD study," Neuroscience, vol. 432, pp. 205–215, 2020.

[19] J. Meng, M. Xu, K. Wang et al., "Separable EEG features induced by timing prediction for active brain-computer interfaces," vol. 20, no. 12, p. 3588, 2020.

[20] L. Yao, Stimulation-Assisted EEG Signal Feature Enhancement Method and Hybrid Brain-Computer Interface, Shanghai Jiao Tong University, Shanghai, China, 2015.

[21] M. Li, W. Zhu, M. Zhang, Y. Sun, and Z. Wang, "The novel recognition method with optimal wavelet packet and LSTM based recurrent neural network," in Proceedings of the IEEE International Conference on Mechatronics and Automation, pp. 584–589, IEEE, Takamatsu, Japan, August 2017.

[22] J. S. Lin and R. Shihb, "A motor-imagery BCI system based on deep learning networks and its applications," Evolving BCI Therapy-Engaging Brain State Dynamics, vol. 75, no. 5, 2018.

[23] Y. Jiao, Y. Zhang, X. Chen et al., "Sparse group representation model for motor imagery EEG classification," IEEE Journal of Biomedical and Health Informatics, vol. 23, no. 2, pp. 631–641, 2019.

[24] H. Tjandrasa and S. Djanali, "Classification of EEG signals using single channel independent component analysis, power spectrum, and linear discriminant analysis," Lecture Notes in Electrical Engineering, vol. 387, pp. 259–268, 2016.

[25] S. G. Mason, A. Bashashati, M. Fatourechi, K. F. Navarro, and G. E. Birch, "A comprehensive survey of brain interface technology designs," Annals of Biomedical Engineering, vol. 35, no. 2, pp. 137–169, 2007.

[26] H. Ramoser, J. Müller-Gerking, and G. Pfurtscheller, "Optimal spatial filtering of single trial EEG during imagined hand movement," IEEE Transactions on Rehabilitation Engineering, vol. 8, no. 4, pp. 441–446, 2000.

[27] H. Jiayuan and J. Ning, "Biometric from surface electromyogram (sEMG): feasibility of user verification and identification based on gesture recognition," Frontiers in Bioengineering and Biotechnology, vol. 8, p. 58, 2020.

[28] X. Liang, H. Yong, and J. H. Y. Wang, "Human connectome: structural and functional brain networks," Chinese Science Bulletin, vol. 55, no. 16, pp. 1565–1583, 2010.

[29] B. Blankertz, S. Lemm, M. Treder, S. Haufe, and K.-R. Müller, "Single-trial analysis and classification of ERP components – a tutorial," NeuroImage, vol. 56, no. 2, pp. 814–825, 2011.

[30] B. W. McMenamin, A. J. Shackman, J. S. Maxwell et al., "Validation of ICA-based myogenic artifact correction for scalp and source-localized EEG," NeuroImage, vol. 49, no. 3, pp. 2416–2432, 2010.

[31] Y. Fei, P. Lin, and D. Jun, "Review of convolutional neural network," Chinese Journal of Computers, vol. 40, no. 6, pp. 1229–1251, 2017.

[32] D. Eigen, D. Krishnan, and R. Fergus, "Restoring an image taken through a window covered with dirt or rain," in Proceedings of the IEEE International Conference on Computer Vision, pp. 633–640, Sydney, Australia, December 2013.


recognition. In November 2018, Lin and Shihb embedded the following two deep learning models into the BCI system for MI-EEG signal classification to identify two imaginary movements: LSTM and the generalized regression neural network (GRNN); the results indicated that the performance of GRNN is better than that of the other strategies [22]. In April 2019, Professor Anumanchipalli and his colleagues at the University of California, San Francisco (UCSF), developed a decoder that converts human brain nerve signals into speech, which is powerful in assisting patients who cannot speak to achieve vocal communication [15]. In 2019, Jiao et al. proposed a novel sparse group representation model (SGRM) to improve the efficiency of MI-based BCI by exploiting intersubject information, which addresses the problem that it takes a long time to record sufficient electroencephalogram (EEG) data for robust classifier training [23]. In a recent study, Willett et al. demonstrated a BCI in the cerebral cortex in which a new recurrent neural network decoding method was used to decode imaginary writing actions from neural activities in the motor cortex and translate them into text in real time. The subject's typing speed was 90 characters per minute, and the accuracy rate was 99%; the system also includes a general automatic correction function.

Apparently, the feature extraction, classification, and recognition algorithms of signals in the BCI directly determine the practicability and effectiveness of the BCI. However, most existing experimental BCI paradigms use traditional classification algorithms based on feature values [1, 2, 24–29], which largely limit the development of BCI, including "BCI blindness" (recognition accuracy of less than 70%), "one person, one model" (due to individual differences, recognition models cannot be shared), and model training process issues such as difficulty in convergence.

The purpose of this study is to solve the "one person, one model" issue while ensuring recognition accuracy. Thus, a composite network model is considered, for which the contribution is twofold:

(1) Model: aiming to resolve the issues of the traditional BCI model, a new BCI model is proposed, and a new experimental platform is built to verify the feasibility of the model. Furthermore, the data collected during the experiment is used to compile a data set for model training and testing to achieve accurate decoding of signals.

(2) Dense-LSTM algorithm: since the EEG signals in the imagery-based BCI require deeper features, when the LSTM network alone is used for classification, the model training process shows significant jitters in recognition accuracy as well as difficulty in convergence. The method of machine vision is abandoned, and deep learning is introduced to effectively improve the poor generalization ability. A new classification algorithm is proposed: first, the recognition accuracy of the algorithm is ensured through the LSTM network and compared for different groups of people; second, the convergence of the model training process is ensured by grafting the Dense layer, and the accuracy of gesture recognition goes up to 91.56%.

Section 2 introduces the model and experimental platform. Sections 3 and 4 present the Dense-LSTM algorithm and the analysis of the experimental test results, respectively. Finally, Section 5 summarizes the study.

2. Model and Experimental Platform

In this section, the new model is first proposed, followed by an introduction of the experimental platform and process.

2.1. Model. The new imagery-based BCI model is shown in Figure 1. The model is mainly composed of the following three parts: signal collection and preprocessing, data recognition and classification, and control of the application equipment. During the experiment, the subject imagined gestures in a static state while simultaneously completing the acquisition and preprocessing of the EEG signals through the device; the processed data was then transmitted to the classification module. Finally, the control of the application device was completed according to the recognition result, which realized the decoding of the subject's "mental gesture action."

During the experiment, each subject imagined five gestures in turn, namely, (i) thumb bending, (ii) index finger bending, (iii) middle finger bending, (iv) ring finger bending, and (v) little finger bending. Each gesture needs to be imagined eight times. During the experiment, the subjects needed to be completely still, and the surrounding environment was kept silent to avoid any noise interference. The experiment recorder started to record and intercepted the data for 10 seconds after the EEG had stabilized. The subjects stopped imagining the gesture at the end beep sound after 10 seconds; they could then rest for half a minute until the next data collection in this cycle.

The time schedule of a single experiment for data collection is shown in Figure 2, which is divided into the following four steps:

Step 1: during the first five seconds of the acquisition, the subject was completely relaxed and still.
Step 2: imaginary movements of the intended gestures were performed for 10 s in the brain after hearing a prompt sound.
Step 3: after 10 seconds, the same sound was played to prompt the subject to stop imagining the gestures.
Step 4: the subject rested for half a minute until the next data collection in this cycle.

A total of 20 subjects participated in this experiment, including 15 males and 5 females. The age of the subjects ranged between 18 and 40, with an average age of 25. All subjects were in good health and met the requirements for participating in the BCI experiment. Prior to the experiment, all subjects were trained in and studied the imagery-based BCI; they were all informed of the experimental precautions to ensure


that the subjects understood the entire process, to avoid unnecessary factors affecting the experimental results.

2.2. Experiment Platform. The EEG acquisition equipment required for the experiment included the NCERP EEG and evoked potential instrument developed by Shanghai Nuocheng Electric Co., Ltd. The instrument consists of a computer host, a display, an audio and video stimulation box, an EEG main control box, a physiological amplification box, and electrodes. The equipment has a high sampling rate of up to 8 kHz/CH and a 32-bit resolution, so the collected data is more accurate. It can adapt the collected data according to actual experimental needs to obtain more EEG data characteristics. This instrument adopts a noninvasive EEG signal acquisition method and uses 24-channel silver electrodes. As shown in Figure 3, the electrode placement adopts the international unified standard 10-20 system method. The electrode cap and the physiological amplification box are connected to complete the physiological signal acquisition and amplification. The collected data is then transmitted to the EEG main control box through an optical fiber and finally to the computer host through a USB interface. The constructed experimental platform is shown in Figure 4.

2.3. Signal Collection and Preprocessing. During the experiment, the data can be preprocessed by setting the parameters of the EEG signal collection instrument. Using the artifact correction method [30], the EOG artifacts are removed from the collected EEG signal, and a low-pass filter was set to remove the EEG signal components that caused interference near the 50 Hz power frequency. The valid data after preprocessing, such as interception and arrangement, is further processed: it is normalized and mapped to the interval [−1, 1]. Finally, the processed data is compiled into a data set for the experiment, and the parameters of the data set are shown in Table 1.
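Setting aside the EOG artifact correction, the filtering and normalization chain described above can be sketched with SciPy filters and a min-max mapping. The filter orders and the 250 Hz sampling rate here are illustrative assumptions, not the instrument's actual settings:

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

fs = 250.0  # assumed sampling rate (Hz); the instrument supports up to 8 kHz/CH

def preprocess(eeg):
    """Low-pass filter, notch out the 50 Hz mains interference, and map to [-1, 1]."""
    b, a = butter(4, 45.0, btype="low", fs=fs)   # attenuate components near 50 Hz
    x = filtfilt(b, a, eeg)
    bn, an = iirnotch(50.0, Q=30.0, fs=fs)       # 50 Hz power-frequency notch
    x = filtfilt(bn, an, x)
    # Normalize: linearly map the signal to the interval [-1, 1].
    return 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0

rng = np.random.default_rng(0)
out = preprocess(rng.standard_normal(2500))  # 10 s of synthetic data
print(out.min(), out.max())  # -1.0 1.0
```

The min-max step guarantees that the processed segment exactly spans [−1, 1], matching the normalization described in the text.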

Since each partition of the human brain corresponds to different functions, and the prefrontal cortex is responsible for processing imagination and thinking-related activities, the data from channels F3 and F4 (as shown in Figure 3), corresponding to the frontal-lobe electrodes of the electrode cap, are mainly used to make the data set. During the experiment,

Figure 2: EEG signal acquisition process (relax for 0–5 s; gesture imagination between beeps for 5–15 s; rest for 30 s).

Figure 1: Imaginary BCI model (data preprocessing, classification, and control).

Figure 3: Locations of the 24 electrodes (international 10-20 system).


after the subjects heard the prompts, they imagined each gesture in turn. At the same time, the data recorder started the recording operation after the EEG had stabilized, intercepting 10 seconds of data, which was used to make the data set. The data set used in this experiment is compiled from the experimental data of the 20 subjects, from which 80% of the data is randomly selected for classification and recognition model training and 20% is used for model testing; these 20 subjects use one model.
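The 80/20 split described above can be sketched as a random partition over the 6,400 samples listed in Table 1; the seed is an illustrative assumption:

```python
import numpy as np

n_samples = 6400                      # data set size from Table 1
rng = np.random.default_rng(42)       # illustrative seed
indices = rng.permutation(n_samples)  # shuffle before splitting

split = int(0.8 * n_samples)          # 80% for training, 20% for testing
train_idx, test_idx = indices[:split], indices[split:]
print(len(train_idx), len(test_idx))  # 5120 1280
```

Because all 20 subjects' samples are pooled before shuffling, a single shared model is trained, which is what allows the "one person, one model" limitation to be tested.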

3. Dense-LSTM Algorithm

In this section, a new recognition model is established by combining the Dense layer [31] with the LSTM network, and the model parameters are tuned to achieve an optimal combination.

3.1. Dense-LSTM. The classification algorithm module belongs to the data processing and recognition part of the BCI. Common classifiers, such as the support vector machine (SVM), multilayer perceptron (MLP), and Bayes classifier, are ubiquitous, but their recognition accuracy is not high, and the "BCI blindness" issue exists, among other problems. Li et al. [21] used the LSTM algorithm to achieve higher efficiency, but it takes longer to extract feature values using the optimal wavelet packet transform method. Thus, the Dense-LSTM model is proposed in this study. As shown in Figure 5, the first half of the model is the LSTM layer, followed by the added Dense layers.

The LSTM network consists of three designed gates. The forget gate determines whether the output information of the previous moment is retained or discarded. Acting on the output of the previous time and the input of the current time, it outputs a value in the range of 0 to 1 using the sigmoid function. The role of the forget gate is shown in the following equation:

$$f_t = \operatorname{sigmoid}\left(W_f^T \times S_{t-1} + U_f^T \times x_t + b_f\right). \tag{1}$$

In the formula, $W$ and $U$ denote the weights of the gate, $S$ denotes the output data of the previous moment, $x$ denotes the input data at the current moment, and $b$ denotes the bias term of the gate; the subscript "f" is the name of the gate, and $t$ and $t-1$ indicate different moments.

The role of the input gate is to control the input at the current time, which directly determines how much new information will be input into the hidden layer of the LSTM. The working principle of the input gate is shown in the following equation:

$$i_t = \operatorname{sigmoid}\left(W_i^T \times S_{t-1} + U_i^T \times x_t + b_i\right). \tag{2}$$

The candidate gate calculates the total storage of the input at the current time and the previous input information. The working process is shown in the following equation:

$$\tilde{c}_t = \tanh\left(W_c^T \times S_{t-1} + U_c^T \times x_t + b_c\right). \tag{3}$$

The update of information while the LSTM is functioning is determined by the forget gate, the input gate, and the candidate gate. The forget gate determines the retained portion of the old state, $f_t \times c_{t-1}$, while the input gate and the candidate gate determine the new information, $i_t \times \tilde{c}_t$, added at the current moment. Combining these two, the hidden layer computes the new state as shown in the following equation:

$$c_t = f_t \times c_{t-1} + i_t \times \tilde{c}_t. \tag{4}$$

The output gate determines how much information is passed to the next moment, that is, $S_t$, whose calculation is obtained from $c_t$. The output gate functions as shown in the following equations, where $o_t$ is the weight of the output gate, in the range of 0 to 1; the size of $o_t$ determines the information passed to the next moment:

$$o_t = \operatorname{sigmoid}\left(W_o^T \times S_{t-1} + U_o^T \times x_t + b_o\right), \tag{5}$$

$$S_t = o_t \times \tanh\left(c_t\right). \tag{6}$$
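Equations (1)–(6) can be sketched as a single NumPy time step; the layer sizes below (2 input channels for F3/F4, 128 hidden units) and the random weights are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, S_prev, c_prev, W, U, b):
    """One LSTM step implementing equations (1)-(6).

    W, U, and b hold the parameters of the forget (f), input (i),
    candidate (c), and output (o) gates, keyed by gate name.
    """
    f_t = sigmoid(W["f"].T @ S_prev + U["f"].T @ x_t + b["f"])      # eq. (1)
    i_t = sigmoid(W["i"].T @ S_prev + U["i"].T @ x_t + b["i"])      # eq. (2)
    c_tilde = np.tanh(W["c"].T @ S_prev + U["c"].T @ x_t + b["c"])  # eq. (3)
    c_t = f_t * c_prev + i_t * c_tilde                              # eq. (4)
    o_t = sigmoid(W["o"].T @ S_prev + U["o"].T @ x_t + b["o"])      # eq. (5)
    S_t = o_t * np.tanh(c_t)                                        # eq. (6)
    return S_t, c_t

# Illustrative sizes: 2 input channels (F3, F4), 128 hidden units.
d, u = 2, 128
rng = np.random.default_rng(0)
W = {g: rng.standard_normal((u, u)) * 0.1 for g in "fico"}
U = {g: rng.standard_normal((d, u)) * 0.1 for g in "fico"}
b = {g: np.zeros(u) for g in "fico"}
S_t, c_t = lstm_step(rng.standard_normal(d), np.zeros(u), np.zeros(u), W, U, b)
print(S_t.shape)  # (128,)
```

Because $o_t \in (0, 1)$ and $\tanh(c_t) \in (-1, 1)$, every component of the hidden output $S_t$ is strictly bounded in magnitude by 1.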

During the data collection process, the collected EEG signal is transmitted to the LSTM as input data, and the result of the LSTM's analysis and processing is transmitted to the connected Dense network. The Dense network part is composed of two Dense layers, and the two Dense layers have the same network structure. Each Dense

Figure 4: NCERP EEG and evoked potential instrument.

Table 1: Imagination movement data set.

Name                     Imaginary data set
Number of subjects       20
Number of experiments    40 times/person
Experimental content     Gesture imagination
Data labels              0, 1, 2, 3, 4
Number of data           6400
Data preprocessing       (1) remove EOG (electrooculogram) artifacts; (2) 50 Hz low-pass filtering; (3) remove 50 Hz power-frequency noise; (4) normalization


network is composed of an input layer, a hidden layer, and an output layer. The structure is shown in Figure 6; all neurons between adjacent layers are connected to each other. Through the superposition of the composite network, the feature propagation of the data can be strengthened, so that the network can mine deeper features in the data and promote the convergence of the classification and recognition model in the training process.

The principle of the Dense network operation is shown in the following equations:

$$a_{t-1} = W_{11} \times o_{t-1} + W_{12} \times o_t + W_{13} \times o_{t+1} + b_1, \tag{7}$$

$$a_t = W_{21} \times o_{t-1} + W_{22} \times o_t + W_{23} \times o_{t+1} + b_2, \tag{8}$$

$$a_{t+1} = W_{31} \times o_{t-1} + W_{32} \times o_t + W_{33} \times o_{t+1} + b_3. \tag{9}$$

Since the output of each network layer is a linear combination, which limits the recognition capability of the model, an activation function is used after the Dense network to produce a nonlinear effect on the output result, solving problems that a linear model cannot and improving the effectiveness and reliability of the classification and recognition model. The principle of the Softmax activation function can be expressed by the following equation:

$$\sigma(s)_j = \frac{e^{s_j}}{\sum_{k=1}^{K} e^{s_k}}, \quad j = 1, 2, \ldots, K. \tag{10}$$

The Softmax activation function maps the output values of the Dense network to a vector $(\sigma(s)_1, \sigma(s)_2, \ldots, \sigma(s)_K)$, where each $\sigma(s)_j$ is a real number in the range $(0, 1)$ indicating the probability of belonging to the corresponding category in the multiclassification problem, and $\sum_j \sigma(s)_j = 1$. The network can finally obtain the classification result according to the probability of each class in the mapped vector.
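Equations (7)–(10) amount to an affine layer followed by Softmax; a minimal NumPy sketch, where the layer sizes echo the Dense(32) → 5-class output stage and the random weights are placeholders:

```python
import numpy as np

def dense(x, W, b):
    """Fully connected layer: each output is a weighted sum of all
    inputs plus a bias, as in equations (7)-(9)."""
    return W @ x + b

def softmax(s):
    """Equation (10): map scores to probabilities in (0, 1) summing to 1."""
    e = np.exp(s - s.max())  # subtract the max for numerical stability
    return e / e.sum()

rng = np.random.default_rng(1)
x = rng.standard_normal(32)              # output of the Dense(32) layer
W = rng.standard_normal((5, 32)) * 0.1   # 5 classes: gestures 0-4
probs = softmax(dense(x, W, np.zeros(5)))
print(probs.sum())  # 1.0 up to floating point
```

The resulting vector gives the probability of each of the five imagined gestures, and the predicted class is simply its argmax.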

Figure 5: Flow chart of the Dense-LSTM algorithm (an LSTM layer followed by two Dense layers).

Figure 6: Structure of the Dense layer (input layer, hidden layer, and output layer).


3.2. Model Parameter Settings. The classification model of the BCI proposed in this study is formed by connecting the LSTM network and the Dense network. The connection of different network layers has varying degrees of impact on the recognition effect. Therefore, it is necessary to set appropriate network parameters to ensure optimal recognition effects.

First, the unit-size parameter of the LSTM layer is examined; the result is shown in Figure 7, comparing the recognition accuracy rate and the time required for each iteration when training the model. When the unit size is 128, the recognition accuracy rate is 91.56%, and each iteration takes 0.52 seconds. When the unit size is 256, the recognition accuracy rate is 91.80%, but the time required for each iteration is 1.62 seconds. The difference in recognition rate between them is not obvious, but the iteration times differ considerably, so the former is optimal; thus, the unit size of the LSTM layer is set to 128.

We compare several versions of our Dense-LSTM algorithm by testing the unit size of the different Dense networks and observing the training process of the classification and recognition model as well as the recognition accuracy; the results are shown in Table 2. When the LSTM layer is grafted to a single Dense network, as the unit size of the Dense network is reduced from 256 to 128, the recognition accuracy rate increases rapidly from 82.32% to 87.66%; as the unit size decreases from 128 to 32, the recognition accuracy rate slowly rises to 89.52%. When the LSTM layer is grafted with two Dense networks and the unit size of the first Dense network is fixed at 256, reducing the unit size of the second Dense network from 256 to 128 and from 64 to 32 increases the recognition accuracy rate from 85.43% to 86.42% and from 88.59% to 89.32%, respectively. When the unit size of the first Dense network is fixed at 128 and the unit size of the second is reduced from 64 to 32, the recognition accuracy rate increases from 89.45% to 90.22%. When the unit sizes of the first and second Dense networks are fixed at 64 and 32, respectively, with MSE the recognition accuracy rate is 90.68%; with Softmax, the recognition accuracy rate is 91.56%.

Based on the overall recognition accuracy and the time spent training the classification and recognition model, the combination LSTM(128)-DENSE(64)-DENSE(32) is selected for the recognition model's LSTM layer and Dense network layers.

4. Results and Analysis

In this section, the data set collected in the experiment is used to train and test the model, verifying the feasibility of the proposed model through targeted verification of the convergence of the model training process and of the recognition accuracy for different groups of people. All experiments achieved control of the uHand2.0 manipulator palm; that is, the subject simply imagines gestures in the brain while at rest, and the manipulator palm performs the same gesture at the same time.

4.1. Verification of Convergence. After experimental comparison, the optimal parameters of the model are obtained: LSTM(128)-DENSE(64)-DENSE(32). The training processes of the proposed Dense-LSTM and LSTM algorithms are shown in Figure 8. The red broken line indicates the change in recognition accuracy of the Dense-LSTM algorithm during training, and the blue broken line indicates the change in recognition accuracy of the LSTM model. Apparently, over the first 0 to 100 iterations, the recognition accuracy of both models rapidly increased to approximately 85%. After 100 iterations, the recognition accuracy increased relatively slowly. After 300 iterations, the Dense-LSTM algorithm continued to grow slowly, while the LSTM algorithm did not grow. Finally, the recognition accuracy of the Dense-LSTM algorithm reaches 91.56%, and the recognition accuracy of the LSTM algorithm is slightly below 90%. During training, the recognition accuracy of the Dense-LSTM algorithm changes smoothly, while the LSTM curve shows significant jitters and several glitches, making that model unstable. Therefore, the Dense-LSTM algorithm proposed in this study presents an optimization effect.

Table 2: Influence of the unit size of the network parameters in each layer.

Parameter unit                             Accuracy (%)
128-256 (LSTM-DENSE)                       82.32
128-128 (LSTM-DENSE)                       87.66
128-64 (LSTM-DENSE)                        88.93
128-32 (LSTM-DENSE)                        89.52
128-256-256 (LSTM-DENSE-DENSE)             85.43
128-256-128 (LSTM-DENSE-DENSE)             86.42
128-256-64 (LSTM-DENSE-DENSE)              88.59
128-256-32 (LSTM-DENSE-DENSE)              89.32
128-128-64 (LSTM-DENSE-DENSE)              89.45
128-128-32 (LSTM-DENSE-DENSE)              90.22
128-64-32 (LSTM-DENSE-DENSE) (MSE)         90.68
128-64-32 (LSTM-DENSE-DENSE) (Softmax)     91.56

Figure 7: The influence of the LSTM unit parameter on the experimental results (unit sizes 32, 64, 128, and 256 give accuracies of 87.16%, 89.47%, 91.56%, and 91.80%, respectively, with increasing iteration time).


4.2. Verification of Recognition Accuracy. As shown in Table 3, the average recognition rate and running time of the newly proposed Dense-LSTM algorithm are compared with those of several neural networks commonly used in traditional deep learning BCIs, including the recurrent neural network (RNN), convolutional neural network (CNN) [32], and LSTM, as well as feature extraction-classifier algorithms in machine learning, including the support vector machine (SVM) and the Bayesian network. The results indicate that, under the same parameter settings, the Dense-LSTM algorithm proposed in this study has a higher average recognition rate for all subjects, so it is more suitable for the classification of EEG signals in the imagery-based BCI. To a certain extent, it solves the nonconvergence problem of the LSTM network during classification and recognition model training on imagery-based BCI data and improves the practicality and generalization performance of the imagery-based BCI. All subjects used the same classification and recognition model, and the recognition accuracy rate was 91.56%. This somewhat resolves the problems of "BCI blindness" and "one person, one model" that are common in BCIs, making the application of imaginary BCIs more widespread.

The trained classification and recognition model is applied to the classification and recognition module of the imaginary BCI model, and 5 of the 20 subjects are randomly selected for the gesture recognition cross-validation experiment in the imagination mode. Each gesture is performed 20 times; the experimental results for the five randomly selected subjects are shown in Table 4. Over the whole experiment, the 5 subjects made 100 attempts at each gesture.

The imaginary BCI data were randomly selected for five age groups (15~20, 20~25, 25~30, 30~35, and 35~40 years old) for classification. The test results are shown in Figure 9: the accuracy of gesture recognition for all ages reached more than 90.5%, of which the highest recognition accuracy, for ages 25 to 30, reached 92.58%. The farther the remaining four age groups were from the 25-to-30 range, the lower the recognition accuracy. The experimental results show that age causes the recognition

Figure 8: LSTM and Dense LSTM model training process. Recognition accuracy (ACC) is plotted against training epochs (100 to 800) for the Dense LSTM (LSTM-Dense) and LSTM models.

Table 3: Comparison of average accuracy and running time of classification algorithms.

Classification algorithm    Classification accuracy (%)    Running time (sec)
Dense LSTM                  91.56                          379
LSTM                        90.01                          423
RNN                         85.34                          456
GRU                         83.56                          615
CNN                         81.49                          532
SVM                         70.86                          469
Bayesian                    69.45                          648


accuracy of the imaginary BCI to change, but the impact is not very significant; it is related to the attention control ability of subjects of different ages.

A total of 100 samples were randomly selected from the data of female and male subjects for classification and recognition. The experimental results are shown in Table 5, which presents the classification and recognition of the gestures of female and male subjects; the overall recognition accuracy rate of female subjects was 96.8% and that of male subjects was 89.2%. The recognition accuracy of female subjects is slightly higher than that of male subjects, and the recognition accuracy rates of the five gestures are approximately the same.

In order to prove the feasibility of the proposed algorithm, we focused on one case study, taking the middle finger as an example. The number of subjects' samples was increased on the basis of the original data set, but the experimental process was similar to the previous one. We randomly selected 5 subjects from all collected samples, the male and female sample data sets, and different ages for

Table 4: Cross-validation experiment of gesture recognition.

Recognized category    Gesture 0    Gesture 1    Gesture 2    Gesture 3    Gesture 4    Average accuracies
Subject 1              90           80           100          95           95           92
Subject 2              95           85           90           90           90           90
Subject 3              90           90           95           100          85           92
Subject 4              80           90           95           85           90           88
Subject 5              90           85           90           90           95           90

Figure 9: Recognition accuracy of all ages, plotted for the age groups 15~20, 20~25, 25~30, 30~35, and 35~40 (accuracy axis 89.00% to 93.00%).

Table 5: Cross-validation experiment of different-gender gesture recognition.

Recognized category    Gesture 0 (%)    Gesture 1 (%)    Gesture 2 (%)    Gesture 3 (%)    Gesture 4 (%)
Female                 93               93               92               91               93
Male                   90               89               88               89               90

Table 6: The middle finger case study.

Samples           Subject 1 (%)    Subject 2 (%)    Subject 3 (%)    Subject 4 (%)    Subject 5 (%)    Average accuracies (%)
Random samples    91               94               89               95               94               92.6
Male              90               89               91               88               90               89.6
Female            96               91               92               90               93               92.4
15–20 years       89               90               89               91               87               89.2
20–25 years       90               92               92               91               90               91
25–30 years       91               93               92               91               94               92.2
30–35 years       91               92               90               93               92               91.6
35–40 years       90               92               90               92               93               91.4

Table 7: Significance test results.

Source     SS        df    MS        F       Prob > F
Columns    720.78    4     180.194   2.38    0.1381
Rows       287.4     2     143.702   1.9     0.2117
Error      606.12    8     75.765
Total      1614.3    14


recognition testing, compared with the previous random test results. The results are shown in Table 6. It can be seen from this focused case study that, with the appropriate addition of subject samples, the results are basically the same as the previous ones.

Taking the middle finger as an example, the significance test was performed on random samples and the sample data of different genders. The experimental results are shown in Table 7. From the results, p(1) = 0.1381, keeping the null hypothesis, which means there is no significant difference between different subjects; and p(2) = 0.2117, so the null hypothesis is also kept, and there is no significant difference between subjects of different genders.

Similarly, the significance test was performed on random samples and the sample data of different age groups; the results are as follows: p(1) = 0.5619, which means there is no significant difference between different subjects in different age groups; p(2) = 0.1161, which means the model is also applicable to subjects of different age groups; and p(3) = 0.5132, which means the interaction between age and different subjects is not obvious.
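The p-values above follow from the F statistics and degrees of freedom reported in Table 7. As a sanity check, a small sketch using `scipy.stats.f` (our choice of tooling, not one stated in the paper) reproduces them from the gender-test ANOVA table:

```python
from scipy.stats import f

# Two-way ANOVA p-values from the F statistics and degrees of
# freedom in Table 7 (error df = 8).
p_columns = f.sf(2.38, 4, 8)   # subjects effect, F(4, 8)
p_rows = f.sf(1.9, 2, 8)       # gender effect, F(2, 8)

print(round(p_columns, 4))  # approximately 0.1381, null hypothesis kept
print(round(p_rows, 4))     # approximately 0.2117, null hypothesis kept
```

Both p-values exceed 0.05, matching the paper's conclusion that neither subject identity nor gender has a significant effect.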

5. Conclusion

The model and the algorithm for the problem of the imaginary BCI were investigated in this study. The contributions of this study addressing this challenging problem are as follows.

Model: a new imaginary BCI model was proposed. Model training and testing were conducted with its own data set, and a new hardware platform was built to verify the feasibility of the model.

Algorithm: due to the low recognition accuracy of the traditional classification and recognition algorithms and the problem of "BCI blindness," a Dense LSTM algorithm was proposed. This algorithm combines the LSTM network with the Dense network to improve the recognition accuracy and solve the problem of "one person, one model."

Experimental results showed that the recognition accuracy of the proposed Dense LSTM algorithm was as high as 91.56%, significantly better than that of other algorithms, with sufficient generalization ability. Future work will focus on improving the recognition accuracy of the BCI under various interference environments to improve its practicability and effectiveness, so that it can be used in the medical and rehabilitation fields.

Data Availability

The data sets generated and analyzed during the current study are available from the corresponding author on reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest with respect to the research, authorship, and/or publication of this article.

Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grant 61301250, the Key Research and Development Project of Shanxi Province under Grant 201803D421035, the Natural Science Foundation for Young Scientists of Shanxi Province under Grant 201901D211313, and a Research Project Supported by the Shanxi Scholarship Council of China under Grant HGKY2019080.

References

[1] X. Zhang, L. Yao, X. Wang, J. Monaghan, and D. McAlpine, "A survey on deep learning based brain computer interface: recent advances and new frontiers," vol. 1, no. 1, 2018.
[2] H. Berger, "Über das Elektrenkephalogramm des Menschen," European Archives of Psychiatry and Clinical Neuroscience, vol. 87, pp. 527–570, 1928.
[3] W.-T. Shi, Z.-J. Lyu, S.-T. Tang, T.-L. Chia, and C.-Y. Yang, "A bionic hand controlled by hand gesture recognition based on surface EMG signals: a preliminary study," Biocybernetics and Biomedical Engineering, vol. 38, no. 1, pp. 126–135, 2018.
[4] Y. Bai, L. Chen, H. Xue et al., "Research on interaction methods based on gesture sensing and smart devices," Computer and Digital Engineering, vol. 47, no. 4, pp. 990–996, 2019.
[5] X. Jiang, L.-K. Merhi, Z. G. Xiao, and C. Menon, "Exploration of force myography and surface electromyography in hand gesture classification," Medical Engineering & Physics, vol. 41, pp. 63–73, 2017.
[6] W. Yang, J. Lu, and H. Xie, "Dynamic rehabilitation gesture recognition based on optimal feature combination TNSE-BP model," International Journal of Computational and Engineering, vol. 4, no. 3, 2019.
[7] A. Lee, Y. Cho, S. Jin, and N. Kim, "Enhancement of surgical hand gesture recognition using a capsule network for a contactless interface in the operating room," Computer Methods and Programs in Biomedicine, vol. 190, Article ID 105385, 2020.
[8] C. Li, G. Li, J. Lei et al., "A review of brain-computer interface technology," Acta Electronica Sinica, vol. 33, no. 7, pp. 1234–1241, 2005.
[9] L. A. Farwell and E. Donchin, "Talking off the top of your head: toward a mental prosthesis utilizing event-related brain potentials," Electroencephalography and Clinical Neurophysiology, vol. 70, no. 6, pp. 510–523, 1988.
[10] N. Birbaumer, T. Hinterberger, A. Kübler, and N. Neumann, "The thought-translation device (TTD): neurobehavioral mechanisms and clinical outcome," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 11, no. 2, pp. 120–123, 2003.
[11] N. Birbaumer, N. Ghanayim, T. Hinterberger et al., "A spelling device for the paralysed," Nature, vol. 398, no. 6725, pp. 297–298, 1999.
[12] S. Gao, Y. Wang, X. Gao, and B. Hong, "Visual and auditory brain-computer interfaces," IEEE Transactions on Bio-Medical Engineering, vol. 61, no. 5, pp. 1436–1447, 2014.
[13] M. Cheng, X. Gao, S. Gao, and D. Xu, "Design and implementation of a brain-computer interface with high transfer rates," IEEE Transactions on Biomedical Engineering, vol. 49, no. 10, pp. 1181–1186, 2002.


[14] F. Guo, H. Bo, X. Gao, and S. Gao, "A brain-computer interface using motion-onset visual evoked potential," Journal of Neural Engineering, vol. 5, no. 4, pp. 477–485, 2008.
[15] G. K. Anumanchipalli, J. Chartier, and E. F. Chang, "Speech synthesis from neural decoding of spoken sentences," Nature, vol. 568, no. 7753, pp. 493–498, 2019.
[16] J. R. Wolpaw, D. J. McFarland, G. W. Neat, and C. A. Forneris, "An EEG-based brain-computer interface for cursor control," Electroencephalography and Clinical Neurophysiology, vol. 78, no. 3, pp. 252–259, 1991.
[17] K. Zuo, "ERD/ERS analysis basis of motor imagery EEG signals," China New Communications, vol. 18, no. 4, p. 122, 2016.
[18] L. Yu, X. Wang, Y. Lyu et al., "Electrophysiological evidences for the rotational uncertainty effect in the hand mental rotation: an ERP and ERS/ERD study," Neuroscience, vol. 432, pp. 205–215, 2020.
[19] J. Meng, M. Xu, K. Wang et al., "Separable EEG features induced by timing prediction for active brain-computer interfaces," Sensors, vol. 20, no. 12, p. 3588, 2020.
[20] L. Yao, Stimulation-Assisted EEG Signal Feature Enhancement Method and Hybrid Brain-Computer Interface, Shanghai Jiao Tong University, Shanghai, China, 2015.
[21] M. Li, W. Zhu, M. Zhang, Y. Sun, and Z. Wang, "The novel recognition method with optimal wavelet packet and LSTM based recurrent neural network," in Proceedings of the IEEE International Conference on Mechatronics and Automation, pp. 584–589, IEEE, Takamatsu, Japan, August 2017.
[22] J. S. Lin and R. Shih, "A motor-imagery BCI system based on deep learning networks and its applications," Evolving BCI Therapy-Engaging Brain State Dynamics, vol. 75, no. 5, 2018.
[23] Y. Jiao, Y. Zhang, X. Chen et al., "Sparse group representation model for motor imagery EEG classification," IEEE Journal of Biomedical and Health Informatics, vol. 23, no. 2, pp. 631–641, 2019.
[24] H. Tjandrasa and S. Djanali, "Classification of EEG signals using single channel independent component analysis, power spectrum, and linear discriminant analysis," Lecture Notes in Electrical Engineering, vol. 387, pp. 259–268, 2016.
[25] S. G. Mason, A. Bashashati, M. Fatourechi, K. F. Navarro, and G. E. Birch, "A comprehensive survey of brain interface technology designs," Annals of Biomedical Engineering, vol. 35, no. 2, pp. 137–169, 2007.
[26] H. Ramoser, J. Müller-Gerking, and G. Pfurtscheller, "Optimal spatial filtering of single trial EEG during imagined hand movement," IEEE Transactions on Rehabilitation Engineering, vol. 8, no. 4, pp. 441–446, 2000.
[27] H. Jiayuan and J. Ning, "Biometric from surface electromyogram (sEMG): feasibility of user verification and identification based on gesture recognition," Frontiers in Bioengineering and Biotechnology, vol. 14, no. 8, p. 58, 2020.
[28] X. Liang, H. Yong, and J. Wang, "Human connectome: structural and functional brain networks," Chinese Science Bulletin, vol. 55, no. 16, pp. 1565–1583, 2010.
[29] B. Blankertz, S. Lemm, M. Treder, S. Haufe, and K.-R. Müller, "Single-trial analysis and classification of ERP components—a tutorial," NeuroImage, vol. 56, no. 2, pp. 814–825, 2011.
[30] B. W. McMenamin, A. J. Shackman, J. S. Maxwell et al., "Validation of ICA-based myogenic artifact correction for scalp and source-localized EEG," NeuroImage, vol. 49, no. 3, pp. 2416–2432, 2010.
[31] Y. Fei, P. Lin, and D. Jun, "Review of convolutional neural network," Chinese Journal of Computers, vol. 40, no. 6, pp. 1229–1251, 2017.
[32] D. Eigen, D. Krishnan, and R. Fergus, "Restoring an image taken through a window covered with dirt or rain," in Proceedings of the IEEE International Conference on Computer Vision, pp. 633–640, Sydney, Australia, December 2013.



that the experimenter understood the entire process, to avoid unnecessary factors affecting the experimental results.

2.2. Experiment Platform. The EEG acquisition equipment required for the experiment included the NCERP EEG and evoked potential instrument developed by Shanghai Nuocheng Electric Co., Ltd. The instrument consists of a computer host, a display, an audio and video stimulation box, an EEG main control box, a physiological amplification box, and electrodes. The equipment has a high sampling rate of up to 8 kHz/CH and a 32-bit resolution, so the collected data are more accurate; it can collect multichannel data according to actual experimental needs to obtain more EEG data characteristics. The instrument adopts a noninvasive EEG signal acquisition method and uses 24-channel silver electrodes. As shown in Figure 3, the placement positions follow the international unified standard 10/20 system. The electrode cap and the physiological amplification box are connected to complete physiological signal acquisition and amplification. The collected data are then transmitted to the EEG main control box through optical fiber and finally transmitted to the computer host through the USB interface. The constructed experimental platform is shown in Figure 4.

2.3. Signal Collection and Preprocessing. During the experiment, the data can be preprocessed by setting the parameters of the EEG signal collection instrument. Using the artifact correction method [30], the EOG artifacts were removed from the collected EEG signal, and a low-pass filter was set to remove the interference near the 50 Hz power frequency. The valid data after preprocessing, such as interception and arrangement, are further processed, normalized, and mapped to the interval [−1, 1]. Finally, the processed data are compiled into a data set for the experiment; the parameters of the data set are shown in Table 1.
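The mapping onto [−1, 1] can be done with standard min-max rescaling; the following NumPy sketch illustrates one way to do it (the exact normalization formula is our assumption, since the paper only names the target interval):

```python
import numpy as np

def to_unit_interval(x):
    """Min-max map a signal onto [-1, 1]."""
    lo, hi = x.min(), x.max()
    return 2.0 * (x - lo) / (hi - lo) - 1.0

# Toy EEG-like trace in microvolts.
eeg = np.array([12.0, -3.5, 40.2, 7.7, -18.9])
norm = to_unit_interval(eeg)
print(norm.min(), norm.max())  # -1.0 1.0
```

In practice the rescaling would be applied per channel and per trial before the samples are compiled into the data set.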

Since each partition of the human brain corresponds to different functions, and the prefrontal cortex is responsible for processing imagination and thinking-related activities, the data of channels F3 and F4 (as shown in Figure 3), corresponding to the electrodes over the frontal lobe, are mainly used to make the data set. During the experiment,

Figure 2: EEG signal acquisition process. The subject relaxes, then imagines the gesture between two beep cues (timeline 0, 5 s, 15 s), followed by a 30 s rest.

Figure 1: Imaginative BCI model, consisting of data preprocessing, classification, and control.

Figure 3: Location of the 24 guide electrodes (10/20 placement, including F3, F4, C3, C4, Fz, Cz, and reference/ground sites).


after the subjects heard the prompts, they imagined each gesture in turn. At the same time, the data recorder started the recording operation after the EEG stabilized, intercepting 10 seconds of data used to make the data set. The data set used in this experiment is compiled from the experimental data of 20 subjects, from which 80% of the data is randomly selected for training the classification and recognition model and 20% is used for model testing; these 20 subjects share one model.
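The 80/20 split over the 6400 samples can be sketched as a random permutation split (the paper does not specify its splitting routine, so this is only one plausible realization):

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples = 6400                      # total trials in the data set (Table 1)
idx = rng.permutation(n_samples)
n_train = int(0.8 * n_samples)        # 80% for model training
train_idx, test_idx = idx[:n_train], idx[n_train:]

print(len(train_idx), len(test_idx))  # 5120 1280
```

Because all 20 subjects contribute to the same pool before splitting, the single trained model is tested on data from every subject, which is what the "one model for all subjects" claim requires.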

3. Dense LSTM Algorithm

In this section, a new recognition model is established by combining the Dense layer [31] with the LSTM network, and the model parameters are tuned to achieve an optimal combination.

3.1. Dense LSTM. The classification algorithm module belongs to the data processing and recognition part of the BCI. Common classifiers such as the support vector machine (SVM), the multilayer perceptron (MLP), and the Bayes classifier are ubiquitous, but their recognition accuracy is not high, and the "BCI blind" issue exists, among other problems. Li et al. [21] used the LSTM algorithm to achieve higher efficiency, but it takes longer to extract feature values using the optimal wavelet packet transform method. Thus, the Dense LSTM model is proposed in this study. As shown in Figure 5, the first half of the model is the LSTM layer, followed by the added Dense layers.

The LSTM network consists of three designed gates. The forget gate determines whether the output information of the previous moment is retained or discarded. Acting on the output of the previous time and the input of the current time, it produces a value in the range of 0 to 1 using the sigmoid function; the role of the forget gate is shown in the following equation:

f_t = sigmoid(W_f^T × S_{t−1} + U_f^T × x_t + b_f).  (1)

In the formula, W and U denote the weights of the gate, S denotes the output data of the previous moment, x denotes the input data at the current moment, b denotes the bias term of the gate, the subscript "f" is the name of the gate, and t and t − 1 indicate different moments.

The role of the input gate is to control the input at the current time, which directly determines how much new information will be input into the hidden layer of the LSTM. The working principle of the input gate is shown in the following equation:

i_t = sigmoid(W_i^T × S_{t−1} + U_i^T × x_t + b_i).  (2)

The candidate gate calculates the total storage of the input at the current time and the previous input information. The working process is shown in the following equation:

c̃_t = tanh(W_c^T × S_{t−1} + U_c^T × x_t + b_c).  (3)

The update of information while the LSTM is functioning is determined by the forget gate, the input gate, and the candidate gate. The forget gate determines the retained information, equal to f_t × c_{t−1}, and the input gate and the candidate gate determine the new information, equal to i_t × c̃_t, which is added at the current moment. Combining these two, the hidden layer produces the new state as shown in the following equation:

c_t = f_t × c_{t−1} + i_t × c̃_t.  (4)

The output gate determines how much information is passed to the next moment, that is, S_t, and the calculation is obtained from c_t. The output gate functions as shown in the following equations, where o_t is the weight of the output gate, in the range of 0 to 1; the size of o_t determines the information passed to the next moment:

o_t = sigmoid(W_o^T × S_{t−1} + U_o^T × x_t + b_o),  (5)

S_t = o_t × tanh(c_t).  (6)
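One step of the cell described by equations (1)–(6) can be sketched in NumPy as follows; the dimensions, random initialization, and parameter layout are illustrative only, not the paper's actual configuration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, s_prev, c_prev, p):
    """One LSTM step following equations (1)-(6)."""
    f_t = sigmoid(p["Wf"].T @ s_prev + p["Uf"].T @ x_t + p["bf"])    # (1) forget gate
    i_t = sigmoid(p["Wi"].T @ s_prev + p["Ui"].T @ x_t + p["bi"])    # (2) input gate
    c_hat = np.tanh(p["Wc"].T @ s_prev + p["Uc"].T @ x_t + p["bc"])  # (3) candidate
    c_t = f_t * c_prev + i_t * c_hat                                 # (4) cell update
    o_t = sigmoid(p["Wo"].T @ s_prev + p["Uo"].T @ x_t + p["bo"])    # (5) output gate
    s_t = o_t * np.tanh(c_t)                                         # (6) hidden state
    return s_t, c_t

# Toy dimensions: 2 input channels (e.g., F3/F4), 4 hidden units.
rng = np.random.default_rng(0)
n_in, n_hid = 2, 4
p = {}
for g in "fico":  # forget, input, candidate, output gates
    p[f"W{g}"] = rng.standard_normal((n_hid, n_hid)) * 0.1
    p[f"U{g}"] = rng.standard_normal((n_in, n_hid)) * 0.1
    p[f"b{g}"] = np.zeros(n_hid)

s, c = np.zeros(n_hid), np.zeros(n_hid)
for x in rng.standard_normal((10, n_in)):  # a short toy EEG sequence
    s, c = lstm_step(x, s, c, p)
print(s.shape)  # (4,)
```

The final hidden state `s` is what the model passes on to the Dense layers described next.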

During the data collection process, the collected EEG signal is transmitted to the LSTM as input data, and the result of the LSTM's analysis and processing is transmitted to the connected Dense network. The Dense network part is composed of two Dense layers with the same network structure. Each Dense

Figure 4: NCERP EEG and evoked potential instrument.

Table 1: Imagination movement data set.

Name                     Imaginary data set
Number of subjects       20
Number of experiments    40 times/person
Experimental content     Gesture imagination
Data labels              0, 1, 2, 3, 4
Number of data           6400
Data preprocessing       (1) Remove EOG (electrooculogram); (2) 50 Hz low-pass filtering; (3) remove 50 Hz power frequency noise; (4) normalization


network is composed of an input layer, a hidden layer, and an output layer; the structure is shown in Figure 6. All neurons between each pair of adjacent layers are connected to each other. Through the superposition of the composite network, the feature propagation of the data can be strengthened, so that the network can mine deeper features in the data and promote the convergence of the classification and recognition model during training.

The principle of the Dense network operation is shown in the following equations:

a_{t−1} = W_11 × o_{t−1} + W_12 × o_t + W_13 × o_{t+1} + b_1,  (7)

a_t = W_21 × o_{t−1} + W_22 × o_t + W_23 × o_{t+1} + b_2,  (8)

a_{t+1} = W_31 × o_{t−1} + W_32 × o_t + W_33 × o_{t+1} + b_3.  (9)

Since the output of each network layer is a linear combination, which limits the recognition capability of the model, an activation function is applied after the Dense network to produce a nonlinear effect on the output result, solving problems that a linear model cannot and improving the effectiveness and reliability of the classification and recognition model. The principle of the Softmax activation function can be expressed by the following equation:

σ(s)_j = e^{s_j} / Σ_{k=1}^{K} e^{s_k},  j = 1, 2, …, K.  (10)

The Softmax activation function maps the output value of the Dense network to a vector (σ(s)_1, σ(s)_2, σ(s)_3, σ(s)_4), where each σ(s)_j is a real number in the range (0, 1), indicating the probability of belonging to each category in the multiclassification problem, and Σ_j σ(s)_j = 1. The network finally obtains the classification result according to the probability of each class in the mapped vector.
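Equation (10) and the argmax decision it supports can be sketched in a few lines of NumPy; the score values below are toy inputs, not outputs of the actual model:

```python
import numpy as np

def softmax(s):
    """Equation (10): map Dense outputs to class probabilities."""
    e = np.exp(s - np.max(s))  # subtract the max for numerical stability
    return e / e.sum()

# Toy Dense-layer outputs for the five gesture classes (labels 0-4).
scores = np.array([1.2, 0.3, -0.5, 2.0, 0.1])
probs = softmax(scores)

assert abs(probs.sum() - 1.0) < 1e-12  # probabilities sum to one
print(int(np.argmax(probs)))           # predicted gesture label: 3
```

The predicted class is simply the component of the mapped vector with the largest probability, here the gesture with the highest raw score.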

Figure 5: Flow chart of the Dense LSTM algorithm: an LSTM layer followed by two Dense layers.

Figure 6: Structure of the Dense layer (input layer, hidden layer, and output layer).


3.2. Model Parameter Settings. The classification model of the BCI proposed in this study is formed by connecting the LSTM network and the Dense network. Different connections of network layers have varying degrees of impact on the recognition effect; therefore, it is necessary to set appropriate network parameters to ensure the optimal recognition effect.

Here, the parameter unit size of the LSTM layer is examined; the result is shown in Figure 7, comparing the recognition accuracy rate with the time required for each iteration when training the model. When the unit size is 128, the recognition accuracy rate is 91.56% and each iteration takes 0.52 seconds. When the unit size is 256, the recognition accuracy rate is 91.80%, but each iteration takes 1.62 seconds. The difference in recognition rate between them is not obvious, but the iteration times differ considerably. The effect is therefore optimal with the unit of the LSTM layer set to 128.

We compared several versions of our Dense LSTM algorithm by testing the size of the parameter unit in different Dense networks and observing the training process of the classification and recognition model as well as the recognition accuracy; the results are shown in Table 2. When the LSTM layer is grafted to one Dense network, reducing the unit size of the Dense network from 256 to 128 rapidly increases the recognition accuracy rate from 82.32% to 87.66%; when the unit size decreases from 128 to 32, the recognition accuracy rate slowly rises to 89.52%. When the LSTM layer was grafted with two Dense networks and the unit size of the first Dense network was fixed at 256, reducing the unit size of the second Dense network from 256 to 128 and from 64 to 32 increased the recognition accuracy rate from 85.43% to 86.42% and from 88.59% to 89.32%. When the unit size of the first Dense network was fixed at 128 and the unit size of the second Dense network was reduced from 128 to 64 and then to 32, the recognition accuracy rate increased from 89.45% to 90.22%. When the unit size of the first Dense network was fixed at 64 and that of the second at 32, the recognition accuracy rate was 90.68% with the MSE loss function; after changing to Softmax, the recognition accuracy rate was 91.56%.

Based on the comprehensive recognition accuracy and the time spent in training the classification and recognition model, the combination of the LSTM layer and the Dense network layers and their parameter units is selected as LSTM(128)-DENSE(64)-DENSE(32).
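The parameter budget of the selected stack can be sketched as follows. The input dimension of 2 (the two frontal channels F3/F4) is our assumption; the paper does not state the exact input shape fed to the LSTM:

```python
def lstm_params(n_in, n_units):
    # Four gates (forget, input, candidate, output), each with
    # recurrent weights W, input weights U, and a bias b,
    # as in equations (1)-(6).
    return 4 * (n_units * n_units + n_in * n_units + n_units)

def dense_params(n_in, n_out):
    # Fully connected layer: a weight per input-output pair plus biases.
    return n_in * n_out + n_out

# LSTM(128)-DENSE(64)-DENSE(32) with an assumed 2-channel input.
total = (lstm_params(2, 128)
         + dense_params(128, 64)
         + dense_params(64, 32))
print(total)  # 77408
```

Most of the capacity sits in the LSTM's recurrent weights, which is consistent with Figure 7: doubling the unit size to 256 roughly triples the per-iteration time for a negligible accuracy gain.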

4. Results and Analysis

In this section, the data set collected in the experiment is used to train and test the model, verifying the feasibility of the proposed model through targeted verification of the convergence of the model training process and of the recognition accuracy for different groups of people. All experiments achieved control of the uHand2.0 manipulator palm; that is, the subject simply imagines gestures while at rest, and the manipulator palm performs the same gesture at the same time.

41 Verification of Convergence After experimental com-parison the optimal model parameters of the model areobtained LSTM(128)-DENSE (64)-DENSE (32) -etraining process using the proposed Dense LSTM and LSTMalgorithms is shown in Figure 8 -e red broken line in-dicates the change of the recognition accuracy rate of theDense LSTM algorithm during the training process and theblue broken line indicates the accurate recognition of theLSTM model rate of change Apparently in the process of 0to 100 iterations the recognition accuracy of the two modelsrapidly increased to approximately 85 After 100 iterationsthe recognition accuracy increased relatively slowly After300 iterations the Dense LSTM algorithm was constant itwill grow slowly but the LSTM algorithm does not growFinally the recognition accuracy of the Dense LSTM al-gorithm reaches 9156 and the recognition accuracy of theLSTM algorithm is slightly lower than 90 During thetraining of the model the recognition accuracy of the DenseLSTM algorithm changes smoothly while the LSTM hassignificant jitters and several glitches and the model isunstable -erefore the Dense LSTM algorithm proposed inthis study presented an optimization effect

Table 2 Influence of the unit size of network parameters in eachlayer

Parameter unit Accuracy ()128-256 (LSTM-DENSE) 8232128-128 (LSTM-DENSE) 8766128-64 (LSTM-DENSE) 8893128-32 (LSTM-DENSE) 8952128-256-256 (LSTM-DENSE-DENSE) 8543128-256-128 (LSTM-DENSE-DENSE) 8642128-256-64 (LSTM-DENSE-DENSE) 8859128-256-32 (LSTM-DENSE-DENSE) 8932128-128-64 (LSTM-DENSE-DENSE) 8945128-128-32 (LSTM-DENSE-DENSE) 9022128-64-32 (LSTM-DENSE-DENSE) (MSE) 9068128-64-32 (LSTM-DENSE-DENSE) (Softmax) 9156

8716

8947

9156 9180

ndash03

02

07

12

17

22

8400850086008700880089009000910092009300

Tim

e (s)

Accu

racy

()

Unit

AccuracyTime (s)

32 64 128 256

Figure 7 -e influence of unit parameter of LSTM on experi-mental results

6 Computational Intelligence and Neuroscience

42 Verification of Recognition Accuracy As shown in Ta-ble 3 the average recognition rate and running time of thenewly proposed Dense LSTM algorithm is compared withseveral neural networks commonly used in traditional deeplearning BCI including Recurrent Neural Network (RNN)Convolutional Neural Networks (CNN) [32] and LSTM andfeature extraction-classifier algorithms in machine learningincluding support vector machine (SVM) and Bayesiannetwork -e results indicate that under the same parametersettings the Dense LSTM algorithm proposed in this studyhas a higher average recognition rate for all subjects so it ismore suitable for the classification of EEG signals in theimagery-based BCI To a certain extent it solves the LSTMnetwork data in the imagery-based BCI -e nonconvergenceproblem in the classification and recognition model trainingprocess improves the practicality and generalization perfor-mance of the imagery-based BCI All subjects used the sameclassification and recognition model and the recognitionaccuracy rate was 9156 -is somewhat resolves theproblems relative to ldquoBCI blindnessrdquo and ldquoone person onemodelrdquo that are common in BCIs making the application ofimaginary BCIs more widespread

-e trained classification and recognition model is ap-plied to the classification and recognition module of theimaginary BCI model and 5 out of the 20 subjects arerandomly selected for the gesture recognition cross-validationexperiment in the imagination mode Each gesture is done 20

times and the experimental results are shown in Table 4which represents the experimental results of five randomlyselected subjects Considering the experiment 5 subjectsmade 100 gestures for each gesture

-e imaginary BCI data was randomly selected for thefive age groups ranging 15sim20 years old 20sim25 years old25sim30 years old 30sim35 years old and 35sim40 years old forclassification -e test results are shown in Figure 9 theaccuracy of gesture recognition for all ages reached morethan 905 of which the highest recognition accuracy forages 25 to 30 years old reached 9258 -e farther theremaining four age groups were from the age range of 25 to30 years the lower the recognition accuracy is -e ex-perimental results show that age causes the recognition

LSTM-Dense

LSTM

0

02

03

04

05

06

07

08

09

100 200 300 400 500 600 700 800Epochs

ACC

Figure 8 LSTM and Dense LSTM model training process

Table 3 Comparison of average accuracy and running time ofclassification algorithm

Classificationalgorithm

Classification accuracy()

Running time(sec)

Dense LSTM 9156 379LSTM 9001 423RNN 8534 456GRU 8356 615CNN 8149 532SVM 7086 469Bayesian 6945 648

Computational Intelligence and Neuroscience 7

accuracy of the imaginary BCI to change but the impact isnot very significant It is related to the attention controlability of subjects of different ages

A total of 100 samples were randomly selected from thedata of female and male subjects for classification andrecognition -e experimental results are shown in Table 5Table 5 presents the classification and recognition of thegestures of female and male subjects among them theoverall recognition accuracy rate of female subjects is 968and that of male subjects was 892 -e recognition

accuracy of female subjects is slightly higher than that ofmale subjects and the recognition accuracy rates of the fivegestures are approximately the same

In order to prove the feasibility of the proposed algo-rithm we focused on one case study taking the middlefinger as an example -e number of subjectsrsquo samples isincreased on the basis of the original data set however theexperimental process is similar to the previous one Werandomly selected 5 subjects from all collected samples maleand female sample data set and different ages for

Table 4 Cross-validation experiment of gesture recognition

Recognized categoryActual gestures

Average accuraciesGesture 0 Gesture 1 Gesture 2 Gesture 3 Gesture 4

Subject 1 90 80 100 95 95 92Subject 2 95 85 90 90 90 90Subject 3 90 90 95 100 85 92Subject 4 80 90 95 85 90 88Subject 5 90 85 90 90 95 90

8900

9000

9100

9200

9300

15~20 20~25 25~30 30~35 35~40

Figure 9 Recognition accuracy of all ages

Table 5 Cross-validation experiment of different gender gesture recognition

Recognized categoryActual gestures

Gesture 0 () Gesture 1 () Gesture 2 () Gesture 3 () Gesture 4 ()Female 93 93 92 91 93Male 90 89 88 89 90

Table 6 -e middle finger case study

SamplesActual gestures

Average accuracies ()Subject 1 () Subject 2 () Subject 3 () Subject 4 () Subject 5 ()

Random samples 91 94 89 95 94 926Male 90 89 91 88 90 896Female 96 91 92 90 93 92415ndash20 years 89 90 89 91 87 89220ndash25 years 90 92 92 91 90 9125ndash30 years 91 93 92 91 94 92230ndash35 years 91 92 90 93 92 91635ndash40 years 90 92 90 92 93 914

Table 7: Significance test results.

Source      SS       df   MS        F      Prob > F
Columns     72.078    4   18.0194   2.38   0.1381
Rows        28.74     2   14.3702   1.90   0.2117
Error       60.612    8    7.5765
Total      161.43    14

8 Computational Intelligence and Neuroscience

recognition testing, compared with the previous random test results. The results are shown in Table 6. Focusing on one case study and appropriately increasing the number of subject samples, it can be seen that the results are basically the same as the previous ones.

Taking the middle finger as an example, a significance test was performed on the random-sample and gender-group data. The experimental results are shown in Table 7. From the results, p(1) = 0.1381, so the null hypothesis is kept, which means there is no significant difference between different subjects; for p(2) = 0.2117, the null hypothesis is also kept, so there is no significant difference between subjects of different genders.

Similarly, a significance test was performed on the random-sample and age-group data. The results are as follows: p(1) = 0.5619, which means there is no significant difference between different subjects across age groups; p(2) = 0.1161, which means the model is also applicable to subjects of different age groups; and p(3) = 0.5132, which means the interaction between age and subject is not obvious.
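The layout of Table 7 (Columns with df = 4, Rows with df = 2, Error with df = 8) corresponds to a two-way ANOVA without replication: subjects as the column factor, sample groups as the row factor, one observation per cell. As an illustrative sketch, this test can be run on the rounded random/male/female per-subject averages from Table 6; note that the published sums of squares were presumably computed from the underlying trial-level data, so the numbers below will not reproduce Table 7 exactly:

```python
import numpy as np
from scipy import stats

def two_way_anova_no_rep(data):
    """Two-way ANOVA without replication: rows and columns are the two
    factors, one observation per cell. Returns (F, p) for columns and rows."""
    data = np.asarray(data, dtype=float)
    r, c = data.shape
    grand = data.mean()
    ss_rows = c * ((data.mean(axis=1) - grand) ** 2).sum()   # row-factor SS
    ss_cols = r * ((data.mean(axis=0) - grand) ** 2).sum()   # column-factor SS
    ss_err = ((data - grand) ** 2).sum() - ss_rows - ss_cols # residual SS
    df_rows, df_cols = r - 1, c - 1
    df_err = df_rows * df_cols
    ms_err = ss_err / df_err
    f_rows = (ss_rows / df_rows) / ms_err
    f_cols = (ss_cols / df_cols) / ms_err
    p_rows = stats.f.sf(f_rows, df_rows, df_err)             # right-tail p
    p_cols = stats.f.sf(f_cols, df_cols, df_err)
    return (f_cols, p_cols), (f_rows, p_rows)

# Rounded per-subject averages for the middle finger (random samples, male,
# female rows of Table 6); rows = sample groups, columns = subjects.
acc = [[91, 94, 89, 95, 94],
       [90, 89, 91, 88, 90],
       [96, 91, 92, 90, 93]]
(f_cols, p_cols), (f_rows, p_rows) = two_way_anova_no_rep(acc)
```

On these rounded averages both p-values come out well above 0.05, consistent with the paper's conclusion that neither the subject effect nor the group effect is significant.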

5. Conclusion

The model and the algorithm for the imagery-based BCI were investigated in this study. The contributions of this study in addressing this challenging problem are as follows:

Model: a new imagery-based BCI model was proposed. Model training and testing were conducted with its own data set, and a new hardware platform was built to verify the feasibility of the model.

Algorithm: owing to the low recognition accuracy of traditional classification and recognition algorithms and the problem of "BCI blindness," a Dense LSTM algorithm was proposed. This algorithm combines the LSTM network with the Dense network to improve the recognition accuracy and solve the "one person, one model" problem.
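The forward pass of the combined network selected in the experiments, LSTM(128)-Dense(64)-Dense(32) with a 5-way softmax over the gesture labels 0-4, can be sketched in NumPy. Everything here is an illustrative placeholder rather than the trained model: the weights are random, the 8-dimensional input features and 10-step window stand in for preprocessed EEG, and the tanh activations on the Dense layers are an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def init(*shape):
    # Random placeholder weights (NOT trained parameters).
    return rng.normal(scale=0.1, size=shape)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

n_in, n_lstm, n_d1, n_d2, n_cls = 8, 128, 64, 32, 5

# One (W, U, b) triple per gate: forget (f), input (i), candidate (c), output (o).
p = {}
for g in "fico":
    p["W" + g] = init(n_lstm, n_lstm)  # recurrent weights
    p["U" + g] = init(n_lstm, n_in)    # input weights
    p["b" + g] = init(n_lstm)          # bias

def lstm_step(x, s_prev, c_prev):
    f = sigmoid(p["Wf"] @ s_prev + p["Uf"] @ x + p["bf"])        # forget gate
    i = sigmoid(p["Wi"] @ s_prev + p["Ui"] @ x + p["bi"])        # input gate
    c_tilde = np.tanh(p["Wc"] @ s_prev + p["Uc"] @ x + p["bc"])  # candidate
    c = f * c_prev + i * c_tilde                                 # cell update
    o = sigmoid(p["Wo"] @ s_prev + p["Uo"] @ x + p["bo"])        # output gate
    return o * np.tanh(c), c                                     # hidden, cell

# Run a toy 10-step window of placeholder EEG features through the LSTM...
s, c = np.zeros(n_lstm), np.zeros(n_lstm)
for _ in range(10):
    s, c = lstm_step(rng.normal(size=n_in), s, c)

# ...then through the two Dense layers and a softmax over the 5 gestures.
W1, b1 = init(n_d1, n_lstm), init(n_d1)
W2, b2 = init(n_d2, n_d1), init(n_d2)
W3, b3 = init(n_cls, n_d2), init(n_cls)
h = np.tanh(W1 @ s + b1)
h = np.tanh(W2 @ h + b2)
probs = softmax(W3 @ h + b3)
pred = int(np.argmax(probs))  # predicted gesture label, 0-4
```

The softmax output is a probability vector over the five gesture classes, and the predicted label is the class with the largest probability.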

Experimental results showed that the recognition accuracy of the proposed Dense LSTM algorithm was as high as 91.56%, which is significantly better than that of the other algorithms, with sufficient generalization ability. Future work will focus on improving the recognition accuracy of the BCI under various interference environments so as to improve its practicability and effectiveness for use in the medical and rehabilitation fields.

Data Availability

The data sets generated and analyzed during the current study are available from the corresponding author on reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest with respect to the research, authorship, and/or publication of this article.

Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grant 61301250, the Key Research and Development Project of Shanxi Province under Grant 201803D421035, the Natural Science Foundation for Young Scientists of Shanxi Province under Grant 201901D211313, and the Research Project Supported by Shanxi Scholarship Council of China under Grant HGKY2019080.

References

[1] X. Zhang, L. Yao, X. Wang, J. Monaghan, and D. McAlpine, "A survey on deep learning based brain computer interface," Recent Advances and New Frontiers, vol. 1, no. 1, 2018.

[2] H. Berger, "Über das Elektrenkephalogramm des Menschen," European Archives of Psychiatry and Clinical Neuroscience, vol. 87, pp. 527–570, 1928.

[3] W.-T. Shi, Z.-J. Lyu, S.-T. Tang, T.-L. Chia, and C.-Y. Yang, "A bionic hand controlled by hand gesture recognition based on surface EMG signals: a preliminary study," Biocybernetics and Biomedical Engineering, vol. 38, no. 1, pp. 126–135, 2018.

[4] Y. Bai, L. Chen, H. Xue et al., "Research on interaction methods based on gesture sensing and smart devices," Computer and Digital Engineering, vol. 47, no. 4, pp. 990–996, 2019.

[5] X. Jiang, L.-K. Merhi, Z. G. Xiao, and C. Menon, "Exploration of force myography and surface electromyography in hand gesture classification," Medical Engineering & Physics, vol. 41, pp. 63–73, 2017.

[6] W. Yang, J. Lu, and H. Xie, "Dynamic rehabilitation gesture recognition based on optimal feature combination TNSE-BP model," International Journal of Computational and Engineering, vol. 4, no. 3, 2019.

[7] A. Lee, Y. Cho, S. Jin, and N. Kim, "Enhancement of surgical hand gesture recognition using a capsule network for a contactless interface in the operating room," Computer Methods and Programs in Biomedicine, vol. 190, Article ID 105385, 2020.

[8] C. Li, G. Li, J. Lei et al., "A review of brain-computer interface technology," Acta Electronica Sinica, vol. 33, no. 7, pp. 1234–1241, 2005.

[9] L. A. Farwell and E. Donchin, "Talking off the top of your head: toward a mental prosthesis utilizing event-related brain potentials," Electroencephalography and Clinical Neurophysiology, vol. 70, no. 6, pp. 510–523, 1988.

[10] N. Birbaumer, T. Hinterberger, A. Kübler, and N. Neumann, "The thought-translation device (TTD): neurobehavioral mechanisms and clinical outcome," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 11, no. 2, pp. 120–123, 2003.

[11] N. Birbaumer, N. Ghanayim, T. Hinterberger et al., "A spelling device for the paralysed," Nature, vol. 398, no. 6725, pp. 297–298, 1999.

[12] S. Gao, Y. Wang, X. Gao, and B. Hong, "Visual and auditory brain-computer interfaces," IEEE Transactions on Bio-Medical Engineering, vol. 61, no. 5, pp. 1436–1447, 2014.

[13] M. Cheng, X. Gao, S. Gao, and D. Xu, "Design and implementation of a brain-computer interface with high transfer rates," IEEE Transactions on Biomedical Engineering, vol. 49, no. 10, pp. 1181–1186, 2002.


[14] F. Guo, H. Bo, X. Gao, and S. Gao, "A brain-computer interface using motion-onset visual evoked potential," Journal of Neural Engineering, vol. 5, no. 4, pp. 477–485, 2008.

[15] G. K. Anumanchipalli, J. Chartier, and E. F. Chang, "Speech synthesis from neural decoding of spoken sentences," Nature, vol. 568, no. 7753, pp. 493–498, 2019.

[16] J. R. Wolpaw, D. J. McFarland, G. W. Neat, and C. A. Forneris, "An EEG-based brain-computer interface for cursor control," Electroencephalography and Clinical Neurophysiology, vol. 78, no. 3, pp. 252–259, 1991.

[17] K. Zuo, "ERD/ERS analysis basis of motor imagery EEG signals," China New Communications, vol. 18, no. 4, p. 122, 2016.

[18] L. Yu, X. Wang, Y. Lyu et al., "Electrophysiological evidences for the rotational uncertainty effect in the hand mental rotation: an ERP and ERS/ERD study," Neuroscience, vol. 432, pp. 205–215, 2020.

[19] J. Meng, M. Xu, K. Wang et al., "Separable EEG features induced by timing prediction for active brain-computer interfaces," vol. 20, no. 12, p. 3588, 2020.

[20] L. Yao, Stimulation-Assisted EEG Signal Feature Enhancement Method and Hybrid Brain-Computer Interface, Shanghai Jiao Tong University, Shanghai, China, 2015.

[21] M. Li, W. Zhu, M. Zhang, Y. Sun, and Z. Wang, "The novel recognition method with optimal wavelet packet and LSTM based recurrent neural network," in Proceedings of the IEEE International Conference on Mechatronics and Automation, pp. 584–589, IEEE, Takamatsu, Japan, August 2017.

[22] J. S. Lin and R. Shih, "A motor-imagery BCI system based on deep learning networks and its applications," Evolving BCI Therapy-Engaging Brain State Dynamics, vol. 75, no. 5, 2018.

[23] Y. Jiao, Y. Zhang, X. Chen et al., "Sparse group representation model for motor imagery EEG classification," IEEE Journal of Biomedical and Health Informatics, vol. 23, no. 2, pp. 631–641, 2019.

[24] H. Tjandrasa and S. Djanali, "Classification of EEG signals using single channel independent component analysis, power spectrum, and linear discriminant analysis," Lecture Notes in Electrical Engineering, vol. 387, pp. 259–268, 2016.

[25] S. G. Mason, A. Bashashati, M. Fatourechi, K. F. Navarro, and G. E. Birch, "A comprehensive survey of brain interface technology designs," Annals of Biomedical Engineering, vol. 35, no. 2, pp. 137–169, 2007.

[26] H. Ramoser, J. Müller-Gerking, and G. Pfurtscheller, "Optimal spatial filtering of single trial EEG during imagined hand movement," IEEE Transactions on Rehabilitation Engineering, vol. 8, no. 4, pp. 441–446, 2000.

[27] H. Jiayuan and J. Ning, "Biometric from surface electromyogram (sEMG): feasibility of user verification and identification based on gesture recognition," Frontiers in Bioengineering and Biotechnology, vol. 14, no. 8, p. 58, 2020.

[28] X. Liang, H. Yong, and J. H. Y. Wang, "Human connectome: structural and functional brain networks," Chinese Science Bulletin, vol. 55, no. 16, pp. 1565–1583, 2010.

[29] B. Blankertz, S. Lemm, M. Treder, S. Haufe, and K.-R. Müller, "Single-trial analysis and classification of ERP components—a tutorial," NeuroImage, vol. 56, no. 2, pp. 814–825, 2011.

[30] B. W. McMenamin, A. J. Shackman, J. S. Maxwell et al., "Validation of ICA-based myogenic artifact correction for scalp and source-localized EEG," NeuroImage, vol. 49, no. 3, pp. 2416–2432, 2010.

[31] Y. Fei, P. Lin, and D. Jun, "Review of convolutional neural network," Chinese Journal of Computers, vol. 40, no. 6, pp. 1229–1251, 2017.

[32] D. Eigen, D. Krishnan, and R. Fergus, "Restoring an image taken through a window covered with dirt or rain," in Proceedings of the IEEE International Conference on Computer Vision, pp. 633–640, Sydney, Australia, December 2013.



accuracy of the imaginary BCI to change but the impact isnot very significant It is related to the attention controlability of subjects of different ages

A total of 100 samples were randomly selected from thedata of female and male subjects for classification andrecognition -e experimental results are shown in Table 5Table 5 presents the classification and recognition of thegestures of female and male subjects among them theoverall recognition accuracy rate of female subjects is 968and that of male subjects was 892 -e recognition

accuracy of female subjects is slightly higher than that ofmale subjects and the recognition accuracy rates of the fivegestures are approximately the same

In order to prove the feasibility of the proposed algo-rithm we focused on one case study taking the middlefinger as an example -e number of subjectsrsquo samples isincreased on the basis of the original data set however theexperimental process is similar to the previous one Werandomly selected 5 subjects from all collected samples maleand female sample data set and different ages for

Table 4 Cross-validation experiment of gesture recognition

Recognized categoryActual gestures

Average accuraciesGesture 0 Gesture 1 Gesture 2 Gesture 3 Gesture 4

Subject 1 90 80 100 95 95 92Subject 2 95 85 90 90 90 90Subject 3 90 90 95 100 85 92Subject 4 80 90 95 85 90 88Subject 5 90 85 90 90 95 90

8900

9000

9100

9200

9300

15~20 20~25 25~30 30~35 35~40

Figure 9 Recognition accuracy of all ages

Table 5 Cross-validation experiment of different gender gesture recognition

Recognized categoryActual gestures

Gesture 0 () Gesture 1 () Gesture 2 () Gesture 3 () Gesture 4 ()Female 93 93 92 91 93Male 90 89 88 89 90

Table 6 -e middle finger case study

SamplesActual gestures

Average accuracies ()Subject 1 () Subject 2 () Subject 3 () Subject 4 () Subject 5 ()

Random samples 91 94 89 95 94 926Male 90 89 91 88 90 896Female 96 91 92 90 93 92415ndash20 years 89 90 89 91 87 89220ndash25 years 90 92 92 91 90 9125ndash30 years 91 93 92 91 94 92230ndash35 years 91 92 90 93 92 91635ndash40 years 90 92 90 92 93 914

Table 7 Significance test results

Source SS df MS F Frobgt FColumns 72078 4 180194 238 01381Rows 2874 2 143702 19 02117Error 60612 8 75765Total 16143 14

8 Computational Intelligence and Neuroscience

recognition testing compared with the previous random testresults -e results are shown in Table 6 It can be seen fromthe results focusing on one case study analysis and ap-propriate with the addition of subject samples that theresults are basically the same as the previous results

Taking the middle finger as an example the significancetest is performed on random samples and different genderssample data -e experimental results are shown in Table 7From the results p(1) 01381 keeping the null hypothesiswhich means there is no significant difference betweendifferent subjects and p(2) 02117 the null hypothesis isalso kept and there is no significant difference betweensubjects of different genders

Similarly the significance test is performed on randomsamples and different age groups sample data the results areas follows p(1) 05619 which means there is no significantdifference between different subjects in different age groupsp(2) 01161 which means the model is also applicable tosubjects between different age groups and p(3) 05132which means the interaction between age and differentsubjects is not obvious

5 Conclusion

-e model and the algorithm for the problem of theimaginary BCI were investigated in this study -e contri-butions of this study addressing this challenging problem areas follows

Model a new imaginary BCI model was proposedModel training and testing were conducted with itsown data set and a new hardware platform was built toverify the feasibility of the modelAlgorithm due to the low recognition accuracy of thetraditional classification and recognition algorithmsand the problem of ldquoBCI blindnessrdquo a Dense LSTMalgorithm was proposed -is algorithm combines theLSTM network with the Dense network to improve therecognition accuracy and solve the problem of ldquoOneperson one modelrdquo

Experimental results showed that the recognition ac-curacy of the proposed Dense LSTM algorithm was as highas 9156 which is significantly better than other algorithmsand has sufficient generalization ability Future work willfocus on improving the recognition accuracy of the BCIunder various interference environments to improve thepracticability and effectiveness and be used in medical andrehabilitation fields in the future

Data Availability

-e data sets generated and analyzed during this currentstudy are available from the corresponding author on rea-sonable request

Conflicts of Interest

-e authors declare no conflicts of interest with respect tothe research authorship andor publication of this article

Acknowledgments

-is work was supported by the National Natural ScienceFoundation of China under Grant 61301250 Key Researchand Development Project of Shanxi Province under Grant201803D421035 Natural Science Foundation for YoungScientists of Shanxi Province under Grant 201901D211313and Research Project Supported by Shanxi ScholarshipCouncil of China under Grant HGKY2019080

References

[1] X Zhang L Yao XWang J Monaghan and DMcalpine ldquoAsurvey on deep learning based brain computer interfacerdquoRecent Advances and New Frontiers vol 1 no 1 2018

[2] H Berger ldquoUber das elektrenkephalogramm des menschenrdquoEuropean Archives of Psychiatry and Clinical Neurosciencevol 87 pp 527ndash570 1928

[3] W-T Shi Z-J Lyu S-T Tang T-L Chia and C-Y Yang ldquoAbionic hand controlled by hand gesture recognition based onsurface EMG signals a preliminary studyrdquo Biocybernetics andBiomedical Engineering vol 38 no 1 pp 126ndash135 2018

[4] Y Bai L Chen H Xue et al ldquoResearch on interactionmethods based on gesture sensing and smart devicesrdquoComputer and Digital Engineering vol 47 no 4 pp 990ndash9962019

[5] X Jiang L-K Merhi Z G Xiao and C Menon ldquoExplorationof Force myography and surface electromyography in handgesture classificationrdquoMedical Engineering amp Physics vol 41pp 63ndash73 2017

[6] W Yang J Lu and H Xie ldquoDynamic rehabilitation gesturerecognition based on optimal feature combination TNSE-BPmodelrdquo International Journal of Computational and Engi-neering vol 4 no 3 2019

[7] A Lee Y Cho S Jin and N Kim ldquoEnhancement of surgicalhand gesture recognition using a capsule network for acontactless interface in the operating roomrdquo ComputerMethods and Programs in Biomedicine vol 190 Article ID105385 2020

[8] C Li G Li J Lei et al ldquoA review of brain-computer interfacetechnologyrdquo Acta Electrochimica Sinica vol 33 no 7pp 1234ndash1241 2005

[9] L A Farwell and E Donchin ldquoTalking off the top of yourhead toward a mental prosthesis utilizing event-related brainpotentialsrdquo Electroencephalography and Clinical Neurophys-iology vol 70 no 6 pp 510ndash523 1988

[10] N Birbaumer T Hinterberger A Kubler and N Neumannldquo-e thought-translation device (TTD) neurobehavioralmechanisms and clinical outcomerdquo IEEE Transactions onNeural Systems and Rehabilitation Engineering vol 11 no 2pp 120ndash123 2003

[11] N Birbaumer N Ghanayim T Hinterberger et al ldquoAspelling device for the paralysedrdquo Nature vol 398 no 6725pp 297-298 1999

[12] S Gao Y Wang X Gao and B Hong ldquoVisual and auditorybrain-computer interfacesrdquo IEEE Transactions on Bio-MedicalEngineering vol 61 no 5 pp 1436ndash1447 2014

[13] M Cheng X Gao S Gao and D Xu ldquoDesign and imple-mentation of a brain-computer interface with high transferratesrdquo IEEE Transactions on Biomedical Engneering vol 49no 10 pp 1181ndash1186 2002

Computational Intelligence and Neuroscience 9


The Dense network is composed of an input layer, a hidden layer, and an output layer; the structure is shown in Figure 6. All neurons between each pair of adjacent layers are connected to each other. Through the superposition of the composite network, the feature propagation of the data can be strengthened, so that the network can mine deeper features in the data and promote the convergence of the classification and recognition model during training.

The principle of the Dense network operation is shown in the following equations:

a_{t-1} = W_{11} * o_{t-1} + W_{12} * o_t + W_{13} * o_{t+1} + b_1, (7)

a_t = W_{21} * o_{t-1} + W_{22} * o_t + W_{23} * o_{t+1} + b_2, (8)

a_{t+1} = W_{31} * o_{t-1} + W_{32} * o_t + W_{33} * o_{t+1} + b_3. (9)
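Equations (7)-(9) are a 3 × 3 fully connected (Dense) mapping and collapse to a single matrix-vector product. The following is an illustrative sketch only, not the authors' code; the weight matrix W and bias vector b hold hypothetical values chosen purely to show the mapping from the LSTM outputs o_{t-1}, o_t, o_{t+1} to the Dense activations a_{t-1}, a_t, a_{t+1}:

```python
import numpy as np

# LSTM outputs at three adjacent time steps (illustrative values)
o = np.array([0.2, 0.5, 0.3])        # [o_{t-1}, o_t, o_{t+1}]

# Fully connected weights W_{ij} and biases b_i (hypothetical values)
W = np.array([[0.1, 0.4, 0.3],
              [0.2, 0.5, 0.1],
              [0.3, 0.2, 0.4]])
b = np.array([0.01, 0.02, 0.03])

# Equations (7)-(9) written as one matrix-vector product
a = W @ o + b                        # [a_{t-1}, a_t, a_{t+1}]
print(a)
```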

Since the output of each network layer is a linear combination of its inputs, which limits the recognition capability of the model, an activation function is applied after the Dense network to produce a nonlinear effect on the output, solving problems that a purely linear model cannot and improving the effectiveness and reliability of the classification and recognition model. The principle of the Softmax activation function can be expressed by the following equation:

σ(s)_j = e^{s_j} / Σ_{k=1}^{K} e^{s_k}, j = 1, 2, ..., K. (10)

The Softmax activation function maps the output values of the Dense network to a vector (σ(s)_1, σ(s)_2, σ(s)_3, σ(s)_4), where each σ(s)_j is a real number in the range (0, 1), indicating the probability of belonging to the corresponding category in the multi-classification problem, and the sum of the σ(s)_j is 1. The network finally obtains the classification result from the probability assigned to each class in the mapped vector.
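As an illustrative sketch (not the authors' code), the Softmax mapping of equation (10) can be implemented and checked numerically; subtracting the maximum before exponentiating is a standard trick for numerical stability and does not change the result:

```python
import numpy as np

def softmax(s):
    """Map raw Dense-network outputs s to class probabilities, as in eq. (10)."""
    z = s - np.max(s)          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Four raw output values, matching the four-component vector described above
s = np.array([2.0, 1.0, 0.5, -1.0])
p = softmax(s)
print(p.sum())                 # the probabilities sum to 1
print(p.argmax())              # predicted class: index of the largest probability
```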

Figure 5: Flow chart of the Dense-LSTM algorithm.

Figure 6: Structure of the Dense layer (input layer, hidden layer, and output layer).


3.2. Model Parameter Settings. The classification model of the BCI proposed in this study is formed by connecting the LSTM network and the Dense network. The connections between different network layers affect the recognition performance to varying degrees; therefore, appropriate network parameters must be set to ensure optimal recognition.

Here, the unit size of the LSTM layer is examined; the result is shown in Figure 7. Comparing the recognition accuracy and the time required for each iteration when training the model: when the unit size is 128, the recognition accuracy is 91.56% and each iteration takes 0.52 seconds; when the unit size is 256, the recognition accuracy is 91.80%, but each iteration takes 1.62 seconds. The difference in recognition rate between the two is negligible, while the iteration times differ considerably. The former is therefore optimal, and the unit size of the LSTM layer is set to 128.

We compare several versions of our Dense-LSTM algorithm by testing different unit sizes in the Dense networks and observing the training process and recognition accuracy of the classification model; the results are shown in Table 2. When the LSTM layer is followed by a single Dense network, reducing the unit size of the Dense network from 256 to 128 increases the recognition accuracy rapidly from 82.32% to 87.66%; reducing it further from 128 to 32 raises the accuracy slowly to 89.52%. When the LSTM layer is followed by two Dense networks and the unit size of the first Dense network is fixed at 256, reducing the unit size of the second Dense network from 256 to 128 and from 64 to 32 increases the accuracy from 85.43% to 86.42% and from 88.59% to 89.32%, respectively. When the unit size of the first Dense network is fixed at 128 and the second is reduced from 128 through 64 to 32, the accuracy increases from 89.45% to 90.22%. With the first Dense network fixed at 64 and the second fixed at 32, using the MSE loss function gives a recognition accuracy of 90.68%; after changing to the Softmax output, the recognition accuracy is 91.56%.

Considering both the recognition accuracy and the time spent training the classification model, the combination of LSTM layer and Dense network layers selected for the recognition model is LSTM(128)-DENSE(64)-DENSE(32).
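The selected LSTM(128)-DENSE(64)-DENSE(32) stack can be sketched as a randomly initialized forward pass that checks the layer shapes only. This is not the authors' implementation: the tanh activations, the random weights, and the 5-way Softmax head (one unit per gesture, based on the five gestures evaluated in Section 4) are all assumptions, and the 128-dimensional input vector merely stands in for the LSTM output on one EEG trial:

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, n_out):
    """One fully connected layer with random weights (a shape sketch only)."""
    W = rng.standard_normal((x.shape[-1], n_out)) * 0.1
    b = np.zeros(n_out)
    return np.tanh(x @ W + b)

# Stand-in for the 128-dimensional feature vector produced by the LSTM(128)
# layer on one EEG trial (the real features would come from the trained LSTM).
lstm_out = rng.standard_normal(128)

h1 = dense(lstm_out, 64)   # DENSE(64)
h2 = dense(h1, 32)         # DENSE(32)

# Hypothetical 5-way classification head, one unit per gesture class
logits = h2 @ (rng.standard_normal((32, 5)) * 0.1)
probs = np.exp(logits - logits.max())
probs /= probs.sum()

print(h1.shape, h2.shape, probs.shape)
```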

4. Results and Analysis

In this section, the data set collected in the experiment is used to train and test the model, verifying the feasibility of the proposed model through targeted verification of the convergence of the model training process and of the recognition accuracy for different groups of people. All experiments achieved control of the uHand2.0 manipulator palm; that is, the subject simply imagines gestures while at rest, and the manipulator palm performs the same gesture at the same time.

4.1. Verification of Convergence. After experimental comparison, the optimal model parameters were obtained: LSTM(128)-DENSE(64)-DENSE(32). The training processes of the proposed Dense-LSTM algorithm and the plain LSTM algorithm are shown in Figure 8. The red broken line indicates the change in recognition accuracy of the Dense-LSTM algorithm during training, and the blue broken line indicates that of the LSTM model. In the first 100 iterations, the recognition accuracy of both models rapidly increased to approximately 85%. After 100 iterations, the recognition accuracy increased relatively slowly. After 300 iterations, the Dense-LSTM algorithm continued to grow slowly, while the LSTM algorithm stopped improving. Finally, the recognition accuracy of the Dense-LSTM algorithm reached 91.56%, whereas that of the LSTM algorithm remained slightly below 90%. During training, the recognition accuracy of the Dense-LSTM algorithm changed smoothly, while the LSTM curve showed significant jitter and several glitches, indicating an unstable model. Therefore, the Dense-LSTM algorithm proposed in this study has an optimization effect.

Table 2: Influence of the unit size of the network parameters in each layer.

Parameter unit                              Accuracy (%)
128-256 (LSTM-DENSE)                        82.32
128-128 (LSTM-DENSE)                        87.66
128-64 (LSTM-DENSE)                         88.93
128-32 (LSTM-DENSE)                         89.52
128-256-256 (LSTM-DENSE-DENSE)              85.43
128-256-128 (LSTM-DENSE-DENSE)              86.42
128-256-64 (LSTM-DENSE-DENSE)               88.59
128-256-32 (LSTM-DENSE-DENSE)               89.32
128-128-64 (LSTM-DENSE-DENSE)               89.45
128-128-32 (LSTM-DENSE-DENSE)               90.22
128-64-32 (LSTM-DENSE-DENSE) (MSE)          90.68
128-64-32 (LSTM-DENSE-DENSE) (Softmax)      91.56

Figure 7: The influence of the unit parameter of the LSTM layer on experimental results (accuracy of 87.16%, 89.47%, 91.56%, and 91.80% for unit sizes 32, 64, 128, and 256, respectively).


4.2. Verification of Recognition Accuracy. As shown in Table 3, the average recognition rate and running time of the newly proposed Dense-LSTM algorithm are compared with several neural networks commonly used in deep-learning BCIs, including the recurrent neural network (RNN), the gated recurrent unit (GRU), convolutional neural networks (CNN) [32], and LSTM, and with feature-extraction-plus-classifier algorithms from machine learning, including the support vector machine (SVM) and the Bayesian network. The results indicate that, under the same parameter settings, the Dense-LSTM algorithm proposed in this study has a higher average recognition rate for all subjects, so it is more suitable for classifying EEG signals in the imagery-based BCI. To a certain extent, it solves the nonconvergence problem of the LSTM network during classification-model training on imagery-based BCI data, and it improves the practicality and generalization performance of the imagery-based BCI. All subjects used the same classification and recognition model, and the recognition accuracy rate was 91.56%. This partially resolves the "BCI blindness" and "one person, one model" problems common in BCIs, allowing imagery-based BCIs to be applied more widely.

The trained classification and recognition model is applied to the classification and recognition module of the imagery-based BCI, and 5 of the 20 subjects are randomly selected for a gesture recognition cross-validation experiment in imagination mode. Each gesture is performed 20 times; the experimental results for the five randomly selected subjects are shown in Table 4. Across the experiment, the 5 subjects together performed each gesture 100 times.

Imagery-based BCI data were randomly selected for five age groups (15-20, 20-25, 25-30, 30-35, and 35-40 years old) for classification. The test results are shown in Figure 9: the gesture recognition accuracy for every age group exceeded 90.5%, with the highest accuracy of 92.58% achieved by the 25-30 age group. The farther the remaining four age groups were from the 25-30 range, the lower the recognition accuracy. The experimental results show that age changes the recognition accuracy of the imagery-based BCI, but the impact is not very significant; it is related to the attention control ability of subjects of different ages.

Figure 8: LSTM and Dense-LSTM model training process (accuracy versus training epochs, 0 to 800).

Table 3: Comparison of average accuracy and running time of classification algorithms.

Classification algorithm    Classification accuracy (%)    Running time (s)
Dense-LSTM                  91.56                          3.79
LSTM                        90.01                          4.23
RNN                         85.34                          4.56
GRU                         83.56                          6.15
CNN                         81.49                          5.32
SVM                         70.86                          4.69
Bayesian                    69.45                          6.48

A total of 100 samples were randomly selected from the data of the female and male subjects for classification and recognition. The experimental results are shown in Table 5, which presents the classification and recognition of the gestures of the female and male subjects: the overall recognition accuracy of the female subjects is 92.4% and that of the male subjects is 89.2%. The recognition accuracy of the female subjects is slightly higher than that of the male subjects, and the recognition accuracy rates of the five gestures are approximately the same.

To further demonstrate the feasibility of the proposed algorithm, we focused on one case study, taking the middle finger as an example. The number of subject samples was increased relative to the original data set, while the experimental procedure remained similar to the previous one. We randomly selected 5 subjects from all collected samples, from the male and female sample data sets, and from the different age groups for recognition testing, and compared the results with the previous random test results. The results are shown in Table 6. Focusing on this one case study, with the appropriate addition of subject samples, the results are essentially the same as the previous ones.

Table 4: Cross-validation experiment of gesture recognition.

Recognized     Actual gestures (%)                                          Average
category       Gesture 0   Gesture 1   Gesture 2   Gesture 3   Gesture 4   accuracy (%)
Subject 1      90          80          100         95          95          92
Subject 2      95          85          90          90          90          90
Subject 3      90          90          95          100         85          92
Subject 4      80          90          95          85          90          88
Subject 5      90          85          90          90          95          90

Figure 9: Recognition accuracy for each age group (15-20, 20-25, 25-30, 30-35, and 35-40 years).

Table 5: Cross-validation experiment of gesture recognition by gender.

Recognized     Gesture 0 (%)   Gesture 1 (%)   Gesture 2 (%)   Gesture 3 (%)   Gesture 4 (%)
category
Female         93              93              92              91              93
Male           90              89              88              89              90

Table 6: The middle-finger case study.

Samples           Subject 1 (%)   Subject 2 (%)   Subject 3 (%)   Subject 4 (%)   Subject 5 (%)   Average (%)
Random samples    91              94              89              95              94              92.6
Male              90              89              91              88              90              89.6
Female            96              91              92              90              93              92.4
15-20 years       89              90              89              91              87              89.2
20-25 years       90              92              92              91              90              91.0
25-30 years       91              93              92              91              94              92.2
30-35 years       91              92              90              93              92              91.6
35-40 years       90              92              90              92              93              91.4

Table 7: Significance test results.

Source     SS         df    MS        F      Prob > F
Columns    720.78     4     180.194   2.38   0.1381
Rows       287.40     2     143.702   1.90   0.2117
Error      606.12     8     75.765
Total      1614.30    14

Taking the middle finger as an example, a significance test was performed on the random samples and the different-gender sample data. The experimental results are shown in Table 7. From the results, p(1) = 0.1381, so the null hypothesis is retained, meaning there is no significant difference between different subjects; p(2) = 0.2117, so the null hypothesis is also retained, and there is no significant difference between subjects of different genders.

Similarly, a significance test was performed on the random samples and the sample data of the different age groups; the results are as follows: p(1) = 0.5619, meaning there is no significant difference between different subjects across age groups; p(2) = 0.1161, meaning the model is also applicable to subjects of different age groups; and p(3) = 0.5132, meaning the interaction between age and subject is not obvious.
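The significance test above has the structure of a two-way ANOVA without replication (sample groups by subjects; the degrees of freedom 4, 2, and 8 in Table 7 correspond to 5 subjects and 3 groups). The following sketch illustrates the procedure on the random/male/female accuracies from Table 6; because Table 7 was presumably computed on the underlying trial-level data, the F values here will not reproduce Table 7 exactly, and only the degrees of freedom and the sum-of-squares decomposition carry over:

```python
import numpy as np

# Accuracy (%) from Table 6: rows = {random samples, male, female},
# columns = subjects 1-5
data = np.array([[91, 94, 89, 95, 94],
                 [90, 89, 91, 88, 90],
                 [96, 91, 92, 90, 93]], dtype=float)

r, c = data.shape                      # 3 rows, 5 columns
grand = data.mean()

# Sum-of-squares decomposition for a two-way layout without replication
ss_rows = c * ((data.mean(axis=1) - grand) ** 2).sum()
ss_cols = r * ((data.mean(axis=0) - grand) ** 2).sum()
ss_total = ((data - grand) ** 2).sum()
ss_err = ss_total - ss_rows - ss_cols

df_rows, df_cols = r - 1, c - 1
df_err = df_rows * df_cols             # interaction serves as the error term

f_rows = (ss_rows / df_rows) / (ss_err / df_err)
f_cols = (ss_cols / df_cols) / (ss_err / df_err)
print(df_cols, df_rows, df_err)        # 4 2 8, as in Table 7
print(round(f_cols, 2), round(f_rows, 2))
```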

5. Conclusion

The model and the algorithm for the imagery-based BCI problem were investigated in this study. The contributions of this study in addressing this challenging problem are as follows:

Model: a new imagery-based BCI model was proposed. Model training and testing were conducted with its own data set, and a new hardware platform was built to verify the feasibility of the model.

Algorithm: owing to the low recognition accuracy of traditional classification and recognition algorithms and the problem of "BCI blindness," a Dense-LSTM algorithm was proposed. This algorithm combines the LSTM network with the Dense network to improve recognition accuracy and solve the "one person, one model" problem.

Experimental results showed that the recognition accuracy of the proposed Dense-LSTM algorithm was as high as 91.56%, which is significantly better than the other algorithms, with sufficient generalization ability. Future work will focus on improving the recognition accuracy of the BCI in various interference environments, so as to improve its practicability and effectiveness for use in the medical and rehabilitation fields.

Data Availability

The data sets generated and analyzed during the current study are available from the corresponding author on reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest with respect to the research, authorship, and/or publication of this article.

Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grant 61301250, the Key Research and Development Project of Shanxi Province under Grant 201803D421035, the Natural Science Foundation for Young Scientists of Shanxi Province under Grant 201901D211313, and a Research Project Supported by the Shanxi Scholarship Council of China under Grant HGKY2019080.

References

[1] X. Zhang, L. Yao, X. Wang, J. Monaghan, and D. McAlpine, "A survey on deep learning based brain computer interface: recent advances and new frontiers," vol. 1, no. 1, 2018.

[2] H. Berger, "Über das Elektrenkephalogramm des Menschen," European Archives of Psychiatry and Clinical Neuroscience, vol. 87, pp. 527-570, 1928.

[3] W.-T. Shi, Z.-J. Lyu, S.-T. Tang, T.-L. Chia, and C.-Y. Yang, "A bionic hand controlled by hand gesture recognition based on surface EMG signals: a preliminary study," Biocybernetics and Biomedical Engineering, vol. 38, no. 1, pp. 126-135, 2018.

[4] Y. Bai, L. Chen, H. Xue et al., "Research on interaction methods based on gesture sensing and smart devices," Computer and Digital Engineering, vol. 47, no. 4, pp. 990-996, 2019.

[5] X. Jiang, L.-K. Merhi, Z. G. Xiao, and C. Menon, "Exploration of force myography and surface electromyography in hand gesture classification," Medical Engineering & Physics, vol. 41, pp. 63-73, 2017.

[6] W. Yang, J. Lu, and H. Xie, "Dynamic rehabilitation gesture recognition based on optimal feature combination TNSE-BP model," International Journal of Computational and Engineering, vol. 4, no. 3, 2019.

[7] A. Lee, Y. Cho, S. Jin, and N. Kim, "Enhancement of surgical hand gesture recognition using a capsule network for a contactless interface in the operating room," Computer Methods and Programs in Biomedicine, vol. 190, Article ID 105385, 2020.

[8] C. Li, G. Li, J. Lei et al., "A review of brain-computer interface technology," Acta Electronica Sinica, vol. 33, no. 7, pp. 1234-1241, 2005.

[9] L. A. Farwell and E. Donchin, "Talking off the top of your head: toward a mental prosthesis utilizing event-related brain potentials," Electroencephalography and Clinical Neurophysiology, vol. 70, no. 6, pp. 510-523, 1988.

[10] N. Birbaumer, T. Hinterberger, A. Kübler, and N. Neumann, "The thought-translation device (TTD): neurobehavioral mechanisms and clinical outcome," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 11, no. 2, pp. 120-123, 2003.

[11] N. Birbaumer, N. Ghanayim, T. Hinterberger et al., "A spelling device for the paralysed," Nature, vol. 398, no. 6725, pp. 297-298, 1999.

[12] S. Gao, Y. Wang, X. Gao, and B. Hong, "Visual and auditory brain-computer interfaces," IEEE Transactions on Bio-Medical Engineering, vol. 61, no. 5, pp. 1436-1447, 2014.

[13] M. Cheng, X. Gao, S. Gao, and D. Xu, "Design and implementation of a brain-computer interface with high transfer rates," IEEE Transactions on Biomedical Engineering, vol. 49, no. 10, pp. 1181-1186, 2002.

[14] F. Guo, H. Bo, X. Gao, and S. Gao, "A brain-computer interface using motion-onset visual evoked potential," Journal of Neural Engineering, vol. 5, no. 4, pp. 477-485, 2008.

[15] G. K. Anumanchipalli, J. Chartier, and E. F. Chang, "Speech synthesis from neural decoding of spoken sentences," Nature, vol. 568, no. 7753, pp. 493-498, 2019.

[16] J. R. Wolpaw, D. J. McFarland, G. W. Neat, and C. A. Forneris, "An EEG-based brain-computer interface for cursor control," Electroencephalography and Clinical Neurophysiology, vol. 78, no. 3, pp. 252-259, 1991.

[17] K. Zuo, "ERD/ERS analysis basis of motor imagery EEG signals," China New Communications, vol. 18, no. 4, p. 122, 2016.

[18] L. Yu, X. Wang, Y. Lyu et al., "Electrophysiological evidences for the rotational uncertainty effect in the hand mental rotation: an ERP and ERS/ERD study," Neuroscience, vol. 432, pp. 205-215, 2020.

[19] J. Meng, M. Xu, K. Wang et al., "Separable EEG features induced by timing prediction for active brain-computer interfaces," Sensors, vol. 20, no. 12, p. 3588, 2020.

[20] L. Yao, Stimulation-Assisted EEG Signal Feature Enhancement Method and Hybrid Brain-Computer Interface, Shanghai Jiao Tong University, Shanghai, China, 2015.

[21] M. Li, W. Zhu, M. Zhang, Y. Sun, and Z. Wang, "The novel recognition method with optimal wavelet packet and LSTM based recurrent neural network," in Proceedings of the IEEE International Conference on Mechatronics and Automation, pp. 584-589, IEEE, Takamatsu, Japan, August 2017.

[22] J. S. Lin and R. Shihb, "A motor-imagery BCI system based on deep learning networks and its applications," Evolving BCI Therapy: Engaging Brain State Dynamics, vol. 75, no. 5, 2018.

[23] Y. Jiao, Y. Zhang, X. Chen et al., "Sparse group representation model for motor imagery EEG classification," IEEE Journal of Biomedical and Health Informatics, vol. 23, no. 2, pp. 631-641, 2019.

[24] H. Tjandrasa and S. Djanali, "Classification of EEG signals using single channel independent component analysis, power spectrum, and linear discriminant analysis," Lecture Notes in Electrical Engineering, vol. 387, pp. 259-268, 2016.

[25] S. G. Mason, A. Bashashati, M. Fatourechi, K. F. Navarro, and G. E. Birch, "A comprehensive survey of brain interface technology designs," Annals of Biomedical Engineering, vol. 35, no. 2, pp. 137-169, 2007.

[26] H. Ramoser, J. Müller-Gerking, and G. Pfurtscheller, "Optimal spatial filtering of single trial EEG during imagined hand movement," IEEE Transactions on Rehabilitation Engineering, vol. 8, no. 4, pp. 441-446, 2000.

[27] H. Jiayuan and J. Ning, "Biometric from surface electromyogram (sEMG): feasibility of user verification and identification based on gesture recognition," Frontiers in Bioengineering and Biotechnology, vol. 14, no. 8, p. 58, 2020.

[28] X. Liang, H. Yong, and J. H. Y. Wang, "Human connectome: structural and functional brain networks," Chinese Science Bulletin, vol. 55, no. 16, pp. 1565-1583, 2010.

[29] B. Blankertz, S. Lemm, M. Treder, S. Haufe, and K.-R. Müller, "Single-trial analysis and classification of ERP components - a tutorial," NeuroImage, vol. 56, no. 2, pp. 814-825, 2011.

[30] B. W. McMenamin, A. J. Shackman, J. S. Maxwell et al., "Validation of ICA-based myogenic artifact correction for scalp and source-localized EEG," NeuroImage, vol. 49, no. 3, pp. 2416-2432, 2010.

[31] Y. Fei, P. Lin, and D. Jun, "Review of convolutional neural network," Chinese Journal of Computers, vol. 40, no. 6, pp. 1229-1251, 2017.

[32] D. Eigen, D. Krishnan, and R. Fergus, "Restoring an image taken through a window covered with dirt or rain," in Proceedings of the IEEE International Conference on Computer Vision, pp. 633-640, Sydney, Australia, December 2013.


Page 6: A Dense Long Short-Term Memory Model for Enhancing the ...recognition model’s trainingprocess is complicated. In thisstudy, a new BCI model with a Denselong short-term memory (Dense-LSTM)

32 Model Parameter Settings -e classification model ofthe BCI proposed in this study is formed by connecting theLSTM network and the Dense network -e connection ofdifferent network layers has varying degrees of impact on therecognition effect -erefore it is necessary to set appro-priate network parameters to ensure optimal recognitioneffects

Herein the parameter unit size of the LSTM layer ispresented -e result is shown in Figure 7 By comparing therecognition accuracy rate and the time required for eachiteration when training the model when the unit size is 128the recognition accuracy rate is 9156 and each iterationtakes 052 secondsWhen the unit size is 256 the recognitionaccuracy rate is 9180 but the time required for each it-eration is 162 seconds -e recognition rate differencebetween them is not obvious but the iteration time is quitedifferent At this time the effect is optimal thus the unit ofthe LSTM layer is set to 128

We compare some versions of our Dense LSTM algo-rithm By testing the size of the parameter unit in differentDense networks and observing the training process of theclassification recognition model as well as the accuracy ofrecognition the results are shown in Table 2 When theLSTM layer is grafted to a Dense network during the processof reducing the unit size of the Dense network from 256 to128 the recognition accuracy rate increases rapidly from8232 to 8766 when the unit size decreases from 128 to32 the recognition accuracy rate slowly rises to 8952When the LSTM layer was grafted with two Dense networksand the unit size of the first Dense network was fixed at 256the unit size of the second Dense network was reduced from256 to 128 and from 64 to 32 During this process therecognition accuracy rate increased from 8543 to 8642and from 8859 to 8932 When the unit size of the firstDense network was fixed at 128 the unit size of the secondDense network was reduced from 128 to 64 and to 32 andthe recognition accuracy rate sequentially increased from8945 to 8945 and to 9022 When the unit size of thefirst Dense network was fixed at 64 and the unit size of thesecond Dense network was fixed at 32 changing the lossfunction to MSE the recognition accuracy rate was 9068After the loss function is changed to Softmax the recog-nition accuracy rate is 9156

Based on the comprehensive recognition accuracy andthe time spent in training and learning the classificationrecognition model the combination of the recognitionmodel LSTM layer and the Dense network layer and itsparameter unit selects the LSTM parameter combination of(128)-DENSE (64)-DENSE (32)

4 Results and Analysis

In this section the data set collected in the experiment isused to train and test the model to verify the feasibility of theproposed model by targeted verification of the convergencein the model training process and the recognition accuracyfor different groups of people All experiments have achievedthe control of the uHand20 manipulator palm that is thesubject simply imagines gestures in the brain when the

subject is at rest and the manipulator palm performs thesame gesture at the same time

41 Verification of Convergence After experimental com-parison the optimal model parameters of the model areobtained LSTM(128)-DENSE (64)-DENSE (32) -etraining process using the proposed Dense LSTM and LSTMalgorithms is shown in Figure 8 -e red broken line in-dicates the change of the recognition accuracy rate of theDense LSTM algorithm during the training process and theblue broken line indicates the accurate recognition of theLSTM model rate of change Apparently in the process of 0to 100 iterations the recognition accuracy of the two modelsrapidly increased to approximately 85 After 100 iterationsthe recognition accuracy increased relatively slowly After300 iterations the Dense LSTM algorithm was constant itwill grow slowly but the LSTM algorithm does not growFinally the recognition accuracy of the Dense LSTM al-gorithm reaches 9156 and the recognition accuracy of theLSTM algorithm is slightly lower than 90 During thetraining of the model the recognition accuracy of the DenseLSTM algorithm changes smoothly while the LSTM hassignificant jitters and several glitches and the model isunstable -erefore the Dense LSTM algorithm proposed inthis study presented an optimization effect

Table 2 Influence of the unit size of network parameters in eachlayer

Parameter unit Accuracy ()128-256 (LSTM-DENSE) 8232128-128 (LSTM-DENSE) 8766128-64 (LSTM-DENSE) 8893128-32 (LSTM-DENSE) 8952128-256-256 (LSTM-DENSE-DENSE) 8543128-256-128 (LSTM-DENSE-DENSE) 8642128-256-64 (LSTM-DENSE-DENSE) 8859128-256-32 (LSTM-DENSE-DENSE) 8932128-128-64 (LSTM-DENSE-DENSE) 8945128-128-32 (LSTM-DENSE-DENSE) 9022128-64-32 (LSTM-DENSE-DENSE) (MSE) 9068128-64-32 (LSTM-DENSE-DENSE) (Softmax) 9156

8716

8947

9156 9180

ndash03

02

07

12

17

22

8400850086008700880089009000910092009300

Tim

e (s)

Accu

racy

()

Unit

AccuracyTime (s)

32 64 128 256

Figure 7 -e influence of unit parameter of LSTM on experi-mental results

6 Computational Intelligence and Neuroscience

42 Verification of Recognition Accuracy As shown in Ta-ble 3 the average recognition rate and running time of thenewly proposed Dense LSTM algorithm is compared withseveral neural networks commonly used in traditional deeplearning BCI including Recurrent Neural Network (RNN)Convolutional Neural Networks (CNN) [32] and LSTM andfeature extraction-classifier algorithms in machine learningincluding support vector machine (SVM) and Bayesiannetwork -e results indicate that under the same parametersettings the Dense LSTM algorithm proposed in this studyhas a higher average recognition rate for all subjects so it ismore suitable for the classification of EEG signals in theimagery-based BCI To a certain extent it solves the LSTMnetwork data in the imagery-based BCI -e nonconvergenceproblem in the classification and recognition model trainingprocess improves the practicality and generalization perfor-mance of the imagery-based BCI All subjects used the sameclassification and recognition model and the recognitionaccuracy rate was 9156 -is somewhat resolves theproblems relative to ldquoBCI blindnessrdquo and ldquoone person onemodelrdquo that are common in BCIs making the application ofimaginary BCIs more widespread

-e trained classification and recognition model is ap-plied to the classification and recognition module of theimaginary BCI model and 5 out of the 20 subjects arerandomly selected for the gesture recognition cross-validationexperiment in the imagination mode Each gesture is done 20

times and the experimental results are shown in Table 4which represents the experimental results of five randomlyselected subjects Considering the experiment 5 subjectsmade 100 gestures for each gesture

-e imaginary BCI data was randomly selected for thefive age groups ranging 15sim20 years old 20sim25 years old25sim30 years old 30sim35 years old and 35sim40 years old forclassification -e test results are shown in Figure 9 theaccuracy of gesture recognition for all ages reached morethan 905 of which the highest recognition accuracy forages 25 to 30 years old reached 9258 -e farther theremaining four age groups were from the age range of 25 to30 years the lower the recognition accuracy is -e ex-perimental results show that age causes the recognition

Figure 8: Training process of the LSTM and Dense-LSTM models (classification accuracy, ACC, versus training epochs).

Table 3: Comparison of average accuracy and running time of the classification algorithms.

Classification algorithm | Classification accuracy (%) | Running time (sec)
Dense-LSTM | 91.56 | 379
LSTM | 90.01 | 423
RNN | 85.34 | 456
GRU | 83.56 | 615
CNN | 81.49 | 532
SVM | 70.86 | 469
Bayesian | 69.45 | 648

Computational Intelligence and Neuroscience

accuracy of the imaginary BCI to change, but the impact is not very significant; it is related to the attention-control ability of subjects of different ages.

A total of 100 samples were randomly selected from the data of the female and male subjects for classification and recognition. The experimental results are shown in Table 5, which presents the classification and recognition of the gestures of the female and male subjects: the overall recognition accuracy rate of the female subjects is 96.8% and that of the male subjects is 89.2%. The recognition accuracy of the female subjects is slightly higher than that of the male subjects, and the recognition accuracy rates of the five gestures are approximately the same.

In order to prove the feasibility of the proposed algorithm, we focused on one case study, taking the middle finger as an example. The number of subjects' samples is increased on the basis of the original data set; however, the experimental process is similar to the previous one. We randomly selected 5 subjects from all collected samples, from the male and female sample data sets, and from the different age groups for

Table 4: Cross-validation experiment of gesture recognition. Entries are the percentage of trials in which the actual gesture was recognized correctly.

Subject | Gesture 0 | Gesture 1 | Gesture 2 | Gesture 3 | Gesture 4 | Average accuracy
Subject 1 | 90 | 80 | 100 | 95 | 95 | 92
Subject 2 | 95 | 85 | 90 | 90 | 90 | 90
Subject 3 | 90 | 90 | 95 | 100 | 85 | 92
Subject 4 | 80 | 90 | 95 | 85 | 90 | 88
Subject 5 | 90 | 85 | 90 | 90 | 95 | 90
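Each entry in Table 4 is the percentage of 20 imagined trials recognized correctly; the per-subject averages are simply the mean over the five gestures, as this quick check shows:

```python
# Per-gesture recognition rates (%) for each subject, taken from Table 4.
rates = {
    "Subject 1": [90, 80, 100, 95, 95],
    "Subject 2": [95, 85, 90, 90, 90],
    "Subject 3": [90, 90, 95, 100, 85],
    "Subject 4": [80, 90, 95, 85, 90],
    "Subject 5": [90, 85, 90, 90, 95],
}
# Average over the five gestures reproduces the table's last column.
averages = {s: sum(r) / len(r) for s, r in rates.items()}
overall = sum(averages.values()) / len(averages)
print(averages["Subject 1"], overall)  # 92.0 90.4
```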

Figure 9: Recognition accuracy (89.00-93.00%) for the five age groups (15~20, 20~25, 25~30, 30~35, and 35~40 years).

Table 5: Cross-validation experiment of gesture recognition by gender.

Gender | Gesture 0 (%) | Gesture 1 (%) | Gesture 2 (%) | Gesture 3 (%) | Gesture 4 (%)
Female | 93 | 93 | 92 | 91 | 93
Male | 90 | 89 | 88 | 89 | 90

Table 6: The middle finger case study.

Samples | Subject 1 (%) | Subject 2 (%) | Subject 3 (%) | Subject 4 (%) | Subject 5 (%) | Average accuracy (%)
Random samples | 91 | 94 | 89 | 95 | 94 | 92.6
Male | 90 | 89 | 91 | 88 | 90 | 89.6
Female | 96 | 91 | 92 | 90 | 93 | 92.4
15-20 years | 89 | 90 | 89 | 91 | 87 | 89.2
20-25 years | 90 | 92 | 92 | 91 | 90 | 91.0
25-30 years | 91 | 93 | 92 | 91 | 94 | 92.2
30-35 years | 91 | 92 | 90 | 93 | 92 | 91.6
35-40 years | 90 | 92 | 90 | 92 | 93 | 91.4

Table 7: Significance test (two-way ANOVA) results.

Source | SS | df | MS | F | Prob > F
Columns | 720.78 | 4 | 180.194 | 2.38 | 0.1381
Rows | 287.4 | 2 | 143.702 | 1.9 | 0.2117
Error | 606.12 | 8 | 75.765 | |
Total | 1614.3 | 14 | | |


recognition testing and compared the results with the previous random test results. The results are shown in Table 6. It can be seen that, when focusing on one case study and appropriately adding subject samples, the results are basically the same as the previous results.

Taking the middle finger as an example, a significance test is performed on the random samples and the different-gender sample data. The experimental results are shown in Table 7. From the results, p(1) = 0.1381, so the null hypothesis is kept, which means there is no significant difference between different subjects; and p(2) = 0.2117, so the null hypothesis is also kept, and there is no significant difference between subjects of different genders.

Similarly, the significance test is performed on the random samples and the different-age-group sample data. The results are as follows: p(1) = 0.5619, which means there is no significant difference between different subjects in different age groups; p(2) = 0.1161, which means the model is also applicable to subjects of different age groups; and p(3) = 0.5132, which means the interaction between age and different subjects is not obvious.
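Table 7's MS and F columns follow directly from its sums of squares and degrees of freedom (decimal points restored from the extracted digits): MS = SS/df and F = MS_effect/MS_error. A quick arithmetic check:

```python
# Two-way ANOVA bookkeeping for Table 7: "columns" are the five subjects
# (df = 4) and "rows" are the three sample groups (df = 2); error df = 4 * 2.
ss = {"columns": 720.78, "rows": 287.4, "error": 606.12}
df = {"columns": 4, "rows": 2, "error": 8}

ms = {k: ss[k] / df[k] for k in ss}          # mean squares: MS = SS / df
f_columns = ms["columns"] / ms["error"]      # F statistic for subjects
f_rows = ms["rows"] / ms["error"]            # F statistic for sample groups

print(round(f_columns, 2), round(f_rows, 2))  # 2.38 1.9
```

Both F values fall below the 5% critical values of the F(4, 8) and F(2, 8) distributions, which is why the reported p-values (0.1381 and 0.2117) keep the null hypotheses.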

5. Conclusion

The model and the algorithm for the problem of the imaginary BCI were investigated in this study. The contributions of this study addressing this challenging problem are as follows:

Model: a new imaginary BCI model was proposed. Model training and testing were conducted with its own data set, and a new hardware platform was built to verify the feasibility of the model.

Algorithm: because of the low recognition accuracy of traditional classification and recognition algorithms and the problem of "BCI blindness," a Dense-LSTM algorithm was proposed. This algorithm combines the LSTM network with the Dense network to improve recognition accuracy and solve the "one person, one model" problem.
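The paper does not specify the exact layer sizes of the Dense-LSTM, so the following is only a minimal sketch of the combination described above: an LSTM reading the multichannel EEG window, followed by fully connected (Dense) layers producing the five-gesture classification. The channel count, window length, and hidden widths are hypothetical, and PyTorch is used for illustration:

```python
import torch
import torch.nn as nn

class DenseLSTM(nn.Module):
    """Sketch of an LSTM + Dense classifier for imagery-EEG windows."""
    def __init__(self, n_channels=8, hidden=64, n_classes=5):
        super().__init__()
        # The LSTM consumes the EEG window one time step at a time.
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        # Dense head maps the final hidden state to the five gesture classes.
        self.head = nn.Sequential(
            nn.Linear(hidden, 128), nn.ReLU(), nn.Linear(128, n_classes)
        )

    def forward(self, x):                 # x: (batch, time, channels)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # logits over the 5 gestures

x = torch.randn(4, 200, 8)   # 4 windows, 200 time steps, 8 channels (assumed)
logits = DenseLSTM()(x)      # shape: (4, 5)
```

Because all subjects share one set of weights, a model of this shape addresses "one person, one model" only insofar as the shared training set covers inter-subject variability, which is what the cross-subject experiments above evaluate.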

Experimental results showed that the recognition accuracy of the proposed Dense-LSTM algorithm was as high as 91.56%, which is significantly better than the other algorithms, and that it has sufficient generalization ability. Future work will focus on improving the recognition accuracy of the BCI under various interference environments, to improve its practicality and effectiveness for use in the medical and rehabilitation fields.

Data Availability

The data sets generated and analyzed during the current study are available from the corresponding author on reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest with respect to the research, authorship, and/or publication of this article.

Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grant 61301250, the Key Research and Development Project of Shanxi Province under Grant 201803D421035, the Natural Science Foundation for Young Scientists of Shanxi Province under Grant 201901D211313, and a Research Project Supported by the Shanxi Scholarship Council of China under Grant HGKY2019080.

References

[1] X. Zhang, L. Yao, X. Wang, J. Monaghan, and D. McAlpine, "A survey on deep learning based brain computer interface: recent advances and new frontiers," vol. 1, no. 1, 2018.

[2] H. Berger, "Über das Elektrenkephalogramm des Menschen," European Archives of Psychiatry and Clinical Neuroscience, vol. 87, pp. 527-570, 1928.

[3] W.-T. Shi, Z.-J. Lyu, S.-T. Tang, T.-L. Chia, and C.-Y. Yang, "A bionic hand controlled by hand gesture recognition based on surface EMG signals: a preliminary study," Biocybernetics and Biomedical Engineering, vol. 38, no. 1, pp. 126-135, 2018.

[4] Y. Bai, L. Chen, H. Xue et al., "Research on interaction methods based on gesture sensing and smart devices," Computer and Digital Engineering, vol. 47, no. 4, pp. 990-996, 2019.

[5] X. Jiang, L.-K. Merhi, Z. G. Xiao, and C. Menon, "Exploration of force myography and surface electromyography in hand gesture classification," Medical Engineering & Physics, vol. 41, pp. 63-73, 2017.

[6] W. Yang, J. Lu, and H. Xie, "Dynamic rehabilitation gesture recognition based on optimal feature combination TNSE-BP model," International Journal of Computational and Engineering, vol. 4, no. 3, 2019.

[7] A. Lee, Y. Cho, S. Jin, and N. Kim, "Enhancement of surgical hand gesture recognition using a capsule network for a contactless interface in the operating room," Computer Methods and Programs in Biomedicine, vol. 190, Article ID 105385, 2020.

[8] C. Li, G. Li, J. Lei et al., "A review of brain-computer interface technology," Acta Electronica Sinica, vol. 33, no. 7, pp. 1234-1241, 2005.

[9] L. A. Farwell and E. Donchin, "Talking off the top of your head: toward a mental prosthesis utilizing event-related brain potentials," Electroencephalography and Clinical Neurophysiology, vol. 70, no. 6, pp. 510-523, 1988.

[10] N. Birbaumer, T. Hinterberger, A. Kübler, and N. Neumann, "The thought-translation device (TTD): neurobehavioral mechanisms and clinical outcome," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 11, no. 2, pp. 120-123, 2003.

[11] N. Birbaumer, N. Ghanayim, T. Hinterberger et al., "A spelling device for the paralysed," Nature, vol. 398, no. 6725, pp. 297-298, 1999.

[12] S. Gao, Y. Wang, X. Gao, and B. Hong, "Visual and auditory brain-computer interfaces," IEEE Transactions on Biomedical Engineering, vol. 61, no. 5, pp. 1436-1447, 2014.

[13] M. Cheng, X. Gao, S. Gao, and D. Xu, "Design and implementation of a brain-computer interface with high transfer rates," IEEE Transactions on Biomedical Engineering, vol. 49, no. 10, pp. 1181-1186, 2002.


[14] F. Guo, H. Bo, X. Gao, and S. Gao, "A brain-computer interface using motion-onset visual evoked potential," Journal of Neural Engineering, vol. 5, no. 4, pp. 477-485, 2008.

[15] G. K. Anumanchipalli, J. Chartier, and E. F. Chang, "Speech synthesis from neural decoding of spoken sentences," Nature, vol. 568, no. 7753, pp. 493-498, 2019.

[16] J. R. Wolpaw, D. J. McFarland, G. W. Neat, and C. A. Forneris, "An EEG-based brain-computer interface for cursor control," Electroencephalography and Clinical Neurophysiology, vol. 78, no. 3, pp. 252-259, 1991.

[17] K. Zuo, "ERD/ERS analysis basis of motor imagery EEG signals," China New Communications, vol. 18, no. 4, p. 122, 2016.

[18] L. Yu, X. Wang, Y. Lyu et al., "Electrophysiological evidences for the rotational uncertainty effect in the hand mental rotation: an ERP and ERS/ERD study," Neuroscience, vol. 432, pp. 205-215, 2020.

[19] J. Meng, M. Xu, K. Wang et al., "Separable EEG features induced by timing prediction for active," Brain Computer Interfaces, vol. 20, no. 12, p. 3588, 2020.

[20] L. Yao, Stimulation-Assisted EEG Signal Feature Enhancement Method and Hybrid Brain-Computer Interface, Shanghai Jiao Tong University, Shanghai, China, 2015.

[21] M. Li, W. Zhu, M. Zhang, Y. Sun, and Z. Wang, "The novel recognition method with optimal wavelet packet and LSTM based recurrent neural network," in Proceedings of the IEEE International Conference on Mechatronics and Automation, pp. 584-589, IEEE, Takamatsu, Japan, August 2017.

[22] J. S. Lin and R. Shihb, "A motor-imagery BCI system based on deep learning networks and its applications," Evolving BCI Therapy-Engaging Brain State Dynamics, vol. 75, no. 5, 2018.

[23] Y. Jiao, Y. Zhang, X. Chen et al., "Sparse group representation model for motor imagery EEG classification," IEEE Journal of Biomedical and Health Informatics, vol. 23, no. 2, pp. 631-641, 2019.

[24] H. Tjandrasa and S. Djanali, "Classification of EEG signals using single channel independent component analysis, power spectrum, and linear discriminant analysis," Lecture Notes in Electrical Engineering, vol. 387, pp. 259-268, 2016.

[25] S. G. Mason, A. Bashashati, M. Fatourechi, K. F. Navarro, and G. E. Birch, "A comprehensive survey of brain interface technology designs," Annals of Biomedical Engineering, vol. 35, no. 2, pp. 137-169, 2007.

[26] H. Ramoser, J. Müller-Gerking, and G. Pfurtscheller, "Optimal spatial filtering of single trial EEG during imagined hand movement," IEEE Transactions on Rehabilitation Engineering, vol. 8, no. 4, pp. 441-446, 2000.

[27] H. Jiayuan and J. Ning, "Biometric from surface electromyogram (sEMG): feasibility of user verification and identification based on gesture recognition," Frontiers in Bioengineering and Biotechnology, vol. 14, no. 8, p. 58, 2020.

[28] X. Liang, H. Yong, and J. H. Y. Wang, "Human connectome: structural and functional brain networks," Chinese Science Bulletin, vol. 55, no. 16, pp. 1565-1583, 2010.

[29] B. Blankertz, S. Lemm, M. Treder, S. Haufe, and K.-R. Müller, "Single-trial analysis and classification of ERP components: a tutorial," NeuroImage, vol. 56, no. 2, pp. 814-825, 2011.

[30] B. W. McMenamin, A. J. Shackman, J. S. Maxwell et al., "Validation of ICA-based myogenic artifact correction for scalp and source-localized EEG," NeuroImage, vol. 49, no. 3, pp. 2416-2432, 2010.

[31] Y. Fei, P. Lin, and D. Jun, "Review of convolutional neural network," Chinese Journal of Computers, vol. 40, no. 6, pp. 1229-1251, 2017.

[32] D. Eigen, D. Krishnan, and R. Fergus, "Restoring an image taken through a window covered with dirt or rain," in Proceedings of the IEEE International Conference on Computer Vision, pp. 633-640, Sydney, Australia, December 2013.





Computational Intelligence and Neuroscience 9

[14] F Guo H Bo X Gao and S Gao ldquoA brainndashcomputer in-terface using motion-onset visual evoked potentialrdquo Journalof Neural Engineering vol 5 no 4 pp 477ndash485 2008

[15] G K Anumanchipalli J Chartier and E F Chang ldquoSpeechsynthesis from neural decoding of spoken sentencesrdquo Naturevol 568 no 7753 pp 493ndash498 2019

[16] J R Wolpaw D J Mcfarland G W Neat and C A FornerisldquoAn EEG-based brain-computer interface for cursor controlrdquoElectroencephalography and Clinical Neurophysiology vol 78no 3 pp 252ndash259 1991

[17] K Zuo ldquoERDERS analysis basis of motor imagery EEGsignalsrdquo China New Communications vol 18 no 4 p 1222016

[18] L Yu X Wang Y Lyu et al ldquoElectrophysiological evidencesfor the rotational uncertainty effect in the hand mental ro-tation an ERP and ERSERD studyrdquo Neuroscience vol 432pp 205ndash215 2020

[19] J Meng M Xu K Wang et al ldquoSeparable EEG featuresinduced by timing prediction for activerdquo Brain ComputerInterfaces vol 20 no 12 p 3588 2020

[20] L Yao Stimulation-Assisted EEG Signal Feature EnhancementMethod and Hybrid Brain-Computer Interface Shanghai JiaoTong University Shanghai China 2015

[21] M Li W Zhu M Zhang Y Sun and Z Wang ldquo-e novelrecognition method with optimal wavelet packet and LSTMbased recurrent neural networkrdquo in Proceedings of the IEEEInternational Conference on Mechatronics and Automationpp 584ndash589 IEEE Takamastu Japan August 2017

[22] J S Lin and R Shihb ldquoA motor-imagery BCI system based ondeep learning networks and its applicationsrdquo Evolving BCI8erapy-Engaging Brain State Dynamics vol 75 no 5 2018

[23] Y Jiao Y Zhang X Chen et al ldquoSparse group representationmodel for motor imagery EEG classificationrdquo IEEE Journal ofBiomedical and Health Informatics vol 23 no 2 pp 631ndash6412019

[24] H Tjandrasa and S Djanali ldquoClassification of EEG signalsusing single channel independent component analysis powerspectrum and linear discriminant analysisrdquo Lecture Notes inElectrical Engineering vol 387 pp 259ndash268 2016

[25] S G Mason A Bashashati M Fatourechi K F Navarro andG E Birch ldquoA comprehensive survey of brain interfacetechnology designsrdquo Annals of Biomedical Engineeringvol 35 no 2 pp 137ndash169 2007

[26] H Ramoser J Muller-Gerking and G Pfurtscheller ldquoOpti-mal spatial filtering of single trial EEG during imagined handmovementrdquo IEEE Transactions on Rehabilitation Engineeringvol 8 no 4 pp 441ndash446 2000

[27] H Jiayuan and J Ning ldquoBiometric from surface electro-myogram (sEMG) feasibility of user verification and iden-tification based on gesture recognitionrdquo Frontiers inBioengneering and Biotechnology vol 14 no 8 p 58 2020

[28] X Liang H Yong and J H Y Wang ldquoHuman connectomestructural and functional brain networksrdquo Chinese ScienceBulletin vol 55 no 16 pp 1565ndash1583 2010

[29] B Blankertz S Lemm M Treder S Haufe and K-R MullerldquoSingle-trial analysis and classification of ERP componentsmdashatutorialrdquo Neuroimage vol 56 no 2 pp 814ndash825 2011

[30] B W Mcmenamin A J Shackman J S Maxwell et alldquoValidation of ICA-based myogenic artifact correction forscalp and source-localized EEGrdquo Neuroimage vol 49 no 3pp 2416ndash2432 2010

[31] Y Fei P Lin and D Jun ldquoReview of convolutional neuralnetworkrdquo Chinese Journal of Computers vol 40 no 6pp 1229ndash1251 2017

[32] D Eigen D Krishnan and R Fergus ldquoRestoring an imagetaken through a window covered with dirt or rainrdquo in Pro-ceedings of the IEEE International Conference on ComputerVision pp 633ndash640 Sydney Australia December 2013

10 Computational Intelligence and Neuroscience

Page 9: A Dense Long Short-Term Memory Model for Enhancing the ...recognition model’s trainingprocess is complicated. In thisstudy, a new BCI model with a Denselong short-term memory (Dense-LSTM)

recognition testing compared with the previous random test results. The results are shown in Table 6. Focusing on one case study analysis, and with the addition of subject samples as appropriate, the results are essentially the same as the previous results.

Taking the middle finger as an example, the significance test is performed on random samples and sample data from subjects of different genders. The experimental results are shown in Table 7. From the results, p(1) = 0.1381, keeping the null hypothesis, which means there is no significant difference between different subjects; and p(2) = 0.2117, so the null hypothesis is also kept, and there is no significant difference between subjects of different genders.

Similarly, the significance test is performed on random samples and sample data from different age groups; the results are as follows: p(1) = 0.5619, which means there is no significant difference between different subjects in different age groups; p(2) = 0.1161, which means the model is also applicable to subjects of different age groups; and p(3) = 0.5132, which means the interaction between age and different subjects is not obvious.
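The excerpt does not specify which significance test the authors used. As an illustration of how such a between-group check can be run, the sketch below uses a simple two-sided permutation test on per-subject recognition accuracies; the accuracy values and the `permutation_test` helper are hypothetical, not the paper's data or code.

```python
import numpy as np

def permutation_test(a, b, n_perm=2000, seed=0):
    """Two-sided permutation test for a difference in group means.

    Returns an estimated p-value: the fraction of label shuffles whose
    absolute mean difference is at least as large as the observed one.
    """
    rng = np.random.default_rng(seed)
    a, b = np.asarray(a, float), np.asarray(b, float)
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(pooled[:a.size].mean() - pooled[a.size:].mean())
        hits += diff >= observed
    return (hits + 1) / (n_perm + 1)  # add-one smoothing avoids p = 0

# Hypothetical per-subject recognition accuracies for two gender groups.
male = [0.92, 0.90, 0.93, 0.91]
female = [0.91, 0.92, 0.90, 0.92]
p_same = permutation_test(male, female)  # large p: keep the null hypothesis

# A deliberately separated pair of groups, for contrast: small p.
p_diff = permutation_test([0.50, 0.52, 0.51, 0.49],
                          [0.90, 0.92, 0.91, 0.89])
```

A large `p_same` (above the usual 0.05 threshold) keeps the null hypothesis, mirroring the paper's conclusion that the model generalizes across genders, while the clearly separated groups yield a small `p_diff`.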

5. Conclusion

The model and the algorithm for the problem of the imagery-based BCI were investigated in this study. The contributions of this study addressing this challenging problem are as follows:

Model: a new imagery-based BCI model was proposed. Model training and testing were conducted with its own data set, and a new hardware platform was built to verify the feasibility of the model.

Algorithm: due to the low recognition accuracy of traditional classification and recognition algorithms and the problem of "BCI blindness," a Dense-LSTM algorithm was proposed. This algorithm combines the LSTM network with the Dense network to improve the recognition accuracy and solve the problem of "one person, one model."
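The excerpt describes the Dense-LSTM only at a high level (an LSTM network combined with a Dense network), so the following is a minimal NumPy forward-pass sketch of that combination: an LSTM layer whose final hidden state feeds a dense softmax classifier. All dimensions (8 EEG channels, 100 time steps, 16 hidden units, 6 command classes) and the random weights are assumptions for illustration, not the paper's architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_forward(x_seq, W, U, b):
    """Forward pass of a single LSTM layer; gate weights packed as [i, f, o, g]."""
    H = b.size // 4
    h = np.zeros(H)  # hidden state
    c = np.zeros(H)  # cell state
    for x in x_seq:
        z = W @ x + U @ h + b
        i = sigmoid(z[0:H])          # input gate
        f = sigmoid(z[H:2 * H])      # forget gate
        o = sigmoid(z[2 * H:3 * H])  # output gate
        g = np.tanh(z[3 * H:])       # candidate cell update
        c = f * c + i * g
        h = o * np.tanh(c)
    return h  # final hidden state, fed to the dense layer

def dense_softmax(h, Wd, bd):
    """Fully connected (dense) layer with a softmax over command classes."""
    logits = Wd @ h + bd
    e = np.exp(logits - logits.max())  # subtract max for numerical stability
    return e / e.sum()

rng = np.random.default_rng(0)
T, D, H, K = 100, 8, 16, 6  # time steps, EEG channels, hidden units, classes
trial = rng.standard_normal((T, D))  # one hypothetical EEG trial
W = 0.1 * rng.standard_normal((4 * H, D))
U = 0.1 * rng.standard_normal((4 * H, H))
b = np.zeros(4 * H)
Wd = 0.1 * rng.standard_normal((K, H))
bd = np.zeros(K)

probs = dense_softmax(lstm_forward(trial, W, U, b), Wd, bd)
```

The output `probs` is a probability distribution over the command classes; in the trained system, the arg-max class would be decoded as the imagined command.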

Experimental results showed that the recognition accuracy of the proposed Dense-LSTM algorithm was as high as 91.56%, which is significantly better than that of other algorithms, with sufficient generalization ability. Future work will focus on improving the recognition accuracy of the BCI under various interference environments, to improve its practicability and effectiveness for use in medical and rehabilitation fields.

Data Availability

The data sets generated and analyzed during the current study are available from the corresponding author on reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest with respect to the research, authorship, and/or publication of this article.

Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grant 61301250, Key Research and Development Project of Shanxi Province under Grant 201803D421035, Natural Science Foundation for Young Scientists of Shanxi Province under Grant 201901D211313, and Research Project Supported by Shanxi Scholarship Council of China under Grant HGKY2019080.

References

[1] X. Zhang, L. Yao, X. Wang, J. Monaghan, and D. McAlpine, "A survey on deep learning based brain computer interface," Recent Advances and New Frontiers, vol. 1, no. 1, 2018.

[2] H. Berger, "Über das Elektrenkephalogramm des Menschen," European Archives of Psychiatry and Clinical Neuroscience, vol. 87, pp. 527–570, 1928.

[3] W.-T. Shi, Z.-J. Lyu, S.-T. Tang, T.-L. Chia, and C.-Y. Yang, "A bionic hand controlled by hand gesture recognition based on surface EMG signals: a preliminary study," Biocybernetics and Biomedical Engineering, vol. 38, no. 1, pp. 126–135, 2018.

[4] Y. Bai, L. Chen, H. Xue et al., "Research on interaction methods based on gesture sensing and smart devices," Computer and Digital Engineering, vol. 47, no. 4, pp. 990–996, 2019.

[5] X. Jiang, L.-K. Merhi, Z. G. Xiao, and C. Menon, "Exploration of force myography and surface electromyography in hand gesture classification," Medical Engineering & Physics, vol. 41, pp. 63–73, 2017.

[6] W. Yang, J. Lu, and H. Xie, "Dynamic rehabilitation gesture recognition based on optimal feature combination TNSE-BP model," International Journal of Computational and Engineering, vol. 4, no. 3, 2019.

[7] A. Lee, Y. Cho, S. Jin, and N. Kim, "Enhancement of surgical hand gesture recognition using a capsule network for a contactless interface in the operating room," Computer Methods and Programs in Biomedicine, vol. 190, Article ID 105385, 2020.

[8] C. Li, G. Li, J. Lei et al., "A review of brain-computer interface technology," Acta Electronica Sinica, vol. 33, no. 7, pp. 1234–1241, 2005.

[9] L. A. Farwell and E. Donchin, "Talking off the top of your head: toward a mental prosthesis utilizing event-related brain potentials," Electroencephalography and Clinical Neurophysiology, vol. 70, no. 6, pp. 510–523, 1988.

[10] N. Birbaumer, T. Hinterberger, A. Kübler, and N. Neumann, "The thought-translation device (TTD): neurobehavioral mechanisms and clinical outcome," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 11, no. 2, pp. 120–123, 2003.

[11] N. Birbaumer, N. Ghanayim, T. Hinterberger et al., "A spelling device for the paralysed," Nature, vol. 398, no. 6725, pp. 297–298, 1999.

[12] S. Gao, Y. Wang, X. Gao, and B. Hong, "Visual and auditory brain-computer interfaces," IEEE Transactions on Bio-Medical Engineering, vol. 61, no. 5, pp. 1436–1447, 2014.

[13] M. Cheng, X. Gao, S. Gao, and D. Xu, "Design and implementation of a brain-computer interface with high transfer rates," IEEE Transactions on Biomedical Engineering, vol. 49, no. 10, pp. 1181–1186, 2002.

Computational Intelligence and Neuroscience 9

[14] F. Guo, H. Bo, X. Gao, and S. Gao, "A brain–computer interface using motion-onset visual evoked potential," Journal of Neural Engineering, vol. 5, no. 4, pp. 477–485, 2008.

[15] G. K. Anumanchipalli, J. Chartier, and E. F. Chang, "Speech synthesis from neural decoding of spoken sentences," Nature, vol. 568, no. 7753, pp. 493–498, 2019.

[16] J. R. Wolpaw, D. J. McFarland, G. W. Neat, and C. A. Forneris, "An EEG-based brain-computer interface for cursor control," Electroencephalography and Clinical Neurophysiology, vol. 78, no. 3, pp. 252–259, 1991.

[17] K. Zuo, "ERD/ERS analysis basis of motor imagery EEG signals," China New Communications, vol. 18, no. 4, p. 122, 2016.

[18] L. Yu, X. Wang, Y. Lyu et al., "Electrophysiological evidences for the rotational uncertainty effect in the hand mental rotation: an ERP and ERS/ERD study," Neuroscience, vol. 432, pp. 205–215, 2020.

[19] J. Meng, M. Xu, K. Wang et al., "Separable EEG features induced by timing prediction for active brain-computer interfaces," Sensors, vol. 20, no. 12, p. 3588, 2020.

[20] L. Yao, Stimulation-Assisted EEG Signal Feature Enhancement Method and Hybrid Brain-Computer Interface, Shanghai Jiao Tong University, Shanghai, China, 2015.

[21] M. Li, W. Zhu, M. Zhang, Y. Sun, and Z. Wang, "The novel recognition method with optimal wavelet packet and LSTM based recurrent neural network," in Proceedings of the IEEE International Conference on Mechatronics and Automation, pp. 584–589, IEEE, Takamatsu, Japan, August 2017.

[22] J. S. Lin and R. Shihb, "A motor-imagery BCI system based on deep learning networks and its applications," Evolving BCI Therapy-Engaging Brain State Dynamics, vol. 75, no. 5, 2018.

[23] Y. Jiao, Y. Zhang, X. Chen et al., "Sparse group representation model for motor imagery EEG classification," IEEE Journal of Biomedical and Health Informatics, vol. 23, no. 2, pp. 631–641, 2019.

[24] H. Tjandrasa and S. Djanali, "Classification of EEG signals using single channel independent component analysis, power spectrum, and linear discriminant analysis," Lecture Notes in Electrical Engineering, vol. 387, pp. 259–268, 2016.

[25] S. G. Mason, A. Bashashati, M. Fatourechi, K. F. Navarro, and G. E. Birch, "A comprehensive survey of brain interface technology designs," Annals of Biomedical Engineering, vol. 35, no. 2, pp. 137–169, 2007.

[26] H. Ramoser, J. Müller-Gerking, and G. Pfurtscheller, "Optimal spatial filtering of single trial EEG during imagined hand movement," IEEE Transactions on Rehabilitation Engineering, vol. 8, no. 4, pp. 441–446, 2000.

[27] H. Jiayuan and J. Ning, "Biometric from surface electromyogram (sEMG): feasibility of user verification and identification based on gesture recognition," Frontiers in Bioengineering and Biotechnology, vol. 14, no. 8, p. 58, 2020.

[28] X. Liang, H. Yong, and J. H. Y. Wang, "Human connectome: structural and functional brain networks," Chinese Science Bulletin, vol. 55, no. 16, pp. 1565–1583, 2010.

[29] B. Blankertz, S. Lemm, M. Treder, S. Haufe, and K.-R. Müller, "Single-trial analysis and classification of ERP components—a tutorial," NeuroImage, vol. 56, no. 2, pp. 814–825, 2011.

[30] B. W. McMenamin, A. J. Shackman, J. S. Maxwell et al., "Validation of ICA-based myogenic artifact correction for scalp and source-localized EEG," NeuroImage, vol. 49, no. 3, pp. 2416–2432, 2010.

[31] Y. Fei, P. Lin, and D. Jun, "Review of convolutional neural network," Chinese Journal of Computers, vol. 40, no. 6, pp. 1229–1251, 2017.

[32] D. Eigen, D. Krishnan, and R. Fergus, "Restoring an image taken through a window covered with dirt or rain," in Proceedings of the IEEE International Conference on Computer Vision, pp. 633–640, Sydney, Australia, December 2013.
