[IEEE 2005 International Conference on Neural Networks and Brain, Beijing, China, 13-15 Oct. 2005]

(ICNN&B'05 Invited Paper)

Self-organization, Learning and Language

Fukang Fang

Institute of Nonequilibrium Systems, Beijing Normal University, Beijing 100875, China

E-mail: [email protected]

Abstract-Several issues related to the learning process are discussed from the viewpoint of self-organization in this paper. Though the Hebbian rule and the BCM rule are widely used in learning, there may be a more general mechanism behind these rules, and the J structure with specific attractors may be such a mechanism. Chinese character learning is a good example of a learning process. A neural network based on the Hebbian rule is developed to learn Chinese graphemes. The results show that a Chinese character can be learned with appropriate parameters and integration method. Two properties of Chinese learning at the behavioral level are discussed. At the level of neurons and neuron groups, a small network with dynamic synapses was put forward by Berger, and complex cognitive activities such as sound recognition can be achieved by this simple neural network. The supposition that there should be a basic mechanism governing cognitive activities is thus partly validated. Furthermore, information is involved in any learning process. How information changes is the core problem of learning and is still open. Some discussion of this problem is also given in this paper.

I. INTRODUCTION

The learning process in the nervous system is the most fundamental advanced cognitive activity of the brain. At the behavioral level, learning is defined as behavioral change. From the physical viewpoint, however, learning is a dynamic process referring to a special kind of state change in the nervous system. I. Prigogine, Nobel Laureate, proposed that state changes in various cognitive activities should be some kind of self-organization process [1]. A. Babloyantz, a member of Prigogine's group, projected the nervous system onto a low-dimensional subsystem in which some principles of brain cognitive activity could be well represented. H. Haken took the brain and nervous system as a complex dynamic system with the characteristic of emergence [2]. From this standpoint, Haken investigated information transmission, synchronization of neurons' firing, and information integration with self-organization theory. Dynamic models with key variables and parameters were developed to reflect the fundamental features of the nervous system, especially the mechanism of emergence. Low-dimensional subspaces and the dynamics of the nervous system are the core of this approach. Certain kinds of attractors and emergence are supposed to play key roles in cognitive activities such as the learning process.

In this paper, two fundamental learning rules on the level

of synaptic dynamics are discussed, and the J structure is supposed to be a general mechanism of the learning process. Then Chinese characters are chosen as an example of a learning process and relevant issues are investigated. Three aspects of Chinese learning, including pronunciation, grapheme, and semanteme, are discussed from the viewpoint of self-organization. A neural network based on the Hebbian rule is developed to learn Chinese graphemes. A specific Chinese character is learned with the neural network, and the ways to integrate the activities of several neurons into the output of the whole network are analyzed. Then, some specific rules of Chinese learning at the behavioral level are explored. These learning behaviors must be accomplished through self-organization and emergence. Sound recognition is another aspect of learning. An eleven-neuron neural network with two layers was put forward to realize sound recognition [3]. We believe that such a network can be expanded to discuss Chinese learning. The important point of Berger's work is that complex cognitive activities such as sound recognition can be achieved by such a simple neural network. The supposition that there should be a basic mechanism governing cognitive activities is thus partly validated. Furthermore, information is involved in any learning process. Information is transmitted and processed by neuron dynamics as well as synaptic dynamics. How information changes is the core problem of the learning process and is still open. Some discussion of this problem is also given in this paper.

II. MAIN RESULTS

A. Attractors in the Learning Process

According to neuropsychological studies, learning could be

taken as changes in synaptic strength, since long-term changes in the transmission properties of synapses provide a physiological substrate for learning and memory [4]. At the level of synaptic dynamics, there are two fundamental types of learning rule that postulate how synaptic strength should be modified: the Hebbian rule and the BCM rule [5]. Within the Hebbian rule, modifications in synaptic transmission efficacy are driven by correlations in the firing activity of pre- and postsynaptic neurons. While the Hebbian rule is based on correlation, the BCM rule is based on the activity of the postsynaptic neuron: the synaptic strength will increase if the average

0-7803-9422-4/05/$20.00 ©2005 IEEE


activity of the postsynaptic neuron is above a threshold that varies with the history of the postsynaptic neuron's activity; otherwise, the synaptic strength will decrease, as depicted in Fig. 1. The Hebbian rule and the BCM rule are each supported by many experiments.

Fig. 1. Bidirectional learning rule (LTP above the sliding threshold, LTD below it).
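The two rules can be written down in a rate-based form; the specific update functions and the sliding-threshold estimate below are illustrative assumptions, not equations taken from the text:

```python
import numpy as np

def hebbian_update(w, pre, post, eta=0.01):
    """Hebbian rule: the weight change is driven by the correlation
    of pre- and postsynaptic activity."""
    return w + eta * pre * post

def bcm_update(w, pre, post, theta, eta=0.01):
    """BCM rule: the sign of the change depends on whether the
    postsynaptic activity is above the sliding threshold theta
    (LTP above it, LTD below it), as sketched in Fig. 1."""
    return w + eta * pre * post * (post - theta)

def bcm_threshold(theta, post, tau=100.0):
    """Sliding threshold: a running average of the squared
    postsynaptic activity (one common choice, assumed here)."""
    return theta + (post**2 - theta) / tau
```

With identical inputs, the Hebbian update always potentiates an active synapse, while the BCM update depresses it when the postsynaptic activity falls below the threshold, which matches the bidirectional curve of Fig. 1.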

The learning process can be seen as a phase transition from the unlearned state to the learned state, and low-dimensional dynamic systems can be developed to describe the mechanism of state change in learning. From this viewpoint, the Hebbian rule and the BCM rule can be supposed to be two representatives of a certain general mechanism. Such a general mechanism may be the J structure put forward in our earlier works [6]. The J structure, with its J-shaped evolution curve, is a typical development mode in which the system needs to pass a barrier from the initial state to another equilibrium state, and the final state is superior to the initial state. The essential features of the learning process can be explored through the attractors of low-dimensional dynamic systems (singular points in the phase graph). Theoretical analysis showed that the minimal model of the J structure can be a two-dimensional autonomous dynamic system with multiple attractors, including nodes, saddles and limit cycles. The dynamic traits of the J structure can be explored in detail with its variables and parameters. For example, consider the dynamic system:

dx/dt = (x - c)(d - x)
dy/dt = k(y - b)(a - y)(y + x - e)

with d > c, a > b, k > 0, and b + d < e < a + d. The system has six singular points. According to the analysis

of linear stability near the singularities, the unstable node is (c, a), the two stable nodes are (d, a) and (d, b), and the three saddles are (c, e - c), (d, e - d) and (c, b). The phase graph is given in Fig. 2.

Fig. 2. The phase graph of the multi-singularity dynamic system.
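The stability classification above can be checked numerically; the parameter values in this sketch are arbitrary choices satisfying the stated constraints:

```python
import numpy as np

# Arbitrary parameters satisfying d > c, a > b, k > 0, b + d < e < a + d
c, d, a, b, k, e = 0.0, 1.0, 2.0, 0.5, 1.0, 2.2

def f(x, y):
    """Right-hand side of the two-dimensional autonomous system."""
    return (x - c) * (d - x), k * (y - b) * (a - y) * (y + x - e)

def jacobian(x, y, h=1e-6):
    """Finite-difference Jacobian for linear stability analysis."""
    J = np.empty((2, 2))
    f0 = np.array(f(x, y))
    J[:, 0] = (np.array(f(x + h, y)) - f0) / h
    J[:, 1] = (np.array(f(x, y + h)) - f0) / h
    return J

# The six singular points named in the text
singular = {"unstable node": [(c, a)],
            "stable node": [(d, a), (d, b)],
            "saddle": [(c, e - c), (d, e - d), (c, b)]}

for kind, pts in singular.items():
    for p in pts:
        eig = np.linalg.eigvals(jacobian(*p)).real
        print(kind, p, "eigenvalue signs:", np.sign(np.round(eig, 6)))
```

The eigenvalue signs reproduce the classification in the text: both positive at (c, a), both negative at (d, a) and (d, b), and mixed at the three saddles.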

Information increase is a characteristic of the learning process.

In fact, the J structure is a new kind of nonequilibrium phase transition involving information, a notion first pointed out by von Neumann [7]. Furthermore, the J structure is common in life evolution, technology innovation, education development and so on. The complex phenomena in complex systems such as the nervous system may be elucidated by integrating the J process with other processes.

B. Chinese Character Recognition

The learning process of a Chinese character includes three

aspects: pronunciation, grapheme, and semanteme. In order to simplify the research, we only study the process of grapheme acquisition in this subsection. Based on the Hebbian rule, a model of Chinese character grapheme learning is developed with the biological background of neurons and neuron populations. A specific Chinese character is input into the neural network, and the various factors affecting the learning results are analyzed, especially the parameters in the model and the ways of integrating the neurons' activities. The learning process is divided into two processes:

from the unlearned state to a partial-learned state, and from the partial-learned state to the learned state. The model is composed of a two-layer neural network, with the first layer completing partial learning and the second layer completing integration. For the first layer, the input is a 160×160 lattice of a Chinese character, and the output is composed of four 160×160 RFs (receptive fields) with lateral interactions among each other. The input connects with each RF correspondingly, and the weights are modified according to the pseudo-Hebb rule. The lateral interactions between the RFs are such that RFs at small distances restrain each other and those at long distances enhance each other. The second layer integrates the learning results of the first layer into a whole Chinese character: the input is the four RFs, the output is a 160×160 lattice, and each RF connects with the corresponding dots of the output. The learning process is simulated by putting the same

Chinese character into the network repeatedly. The weights between the input and each RF of the first layer are initially set as a Gaussian distribution, indicating that each RF is only sensitive to a certain part of the input. Given an input character x, the output of each RF i is computed as [8]:

y_i = tanh(c_i+),  c_i+ = c_i · sign(c_i)

c_i = x · w_i + Σ_{j≠i} v_ij · y_j − θ

θ(t) = 0.5 · (max{c(t − h), ..., c(t − 1)} − min{c(t − h), ..., c(t − 1)})

where w_i is the weight vector and θ(t) is a history-dependent threshold for the output. The weights corresponding to the dots of the 160×160 lattice are modified according to the pseudo-Hebb

rule as follows:

w_mn(t + 1) = w_mn(t) + η · (y · x_mn(t) · w_mn(0) − y · w_mn(t))

where η is the learning rate, an important parameter for




determining the learning results.

Integration by the second layer is assumed to be a process of

synchronizing all the outputs of the RFs. If and only if the output of each dot in the RFs is above the activation threshold and the sum of all the RF outputs is above the intensity threshold, the final output is 1; otherwise it is 0. The results show that the Chinese character can be

learned with the neural network, as shown in Fig. 3. The RFs can gradually acquire parts of the input after a number of training iterations when the parameters in the pseudo-Hebb learning rule, such as the learning rate, the normalization constant of the RFs, and the initial threshold, are appropriate. The second layer of the network can recover the grapheme, and the more training iterations, the more details are exhibited. When the learning rate is too small the result is poor even after many iterations; conversely, when the learning rate is increased the result is good even after a few iterations. When the initial threshold is set to a large value, each dot in the output is 0 despite increases in the learning rate and the number of training iterations. Characters of different complexity require different thresholds. Based on the analysis above, we can conclude that the learning rule is the core mechanism of emergence, that choosing some important parameters accurately is a necessary condition for emergence, and that a certain number of training iterations ensures an ideal learning effect. All of these results are consistent with the viewpoint of self-organization.

Fig. 3. Chinese character learning results by Hebbian rule.
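A drastically reduced sketch of the two-layer scheme behaves as described: a 16×16 toy lattice replaces the 160×160 one, the lateral interactions and the history-dependent threshold are omitted, and all parameter values are assumptions. The pseudo-Hebb rule keeps the weights on the pattern while decaying those off it, and the thresholded integration then recovers the grapheme:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_rf, eta = 16, 4, 0.5   # toy 16x16 lattice (the paper uses 160x160)

# Toy "grapheme": a cross-shaped binary pattern
x = np.zeros((N, N))
x[N // 2, :] = 1.0
x[:, N // 2] = 1.0

# Initial weights: a broad positive random field per RF
w0 = rng.normal(0.2, 0.05, (n_rf, N, N)).clip(0.05, 0.4)
w = w0.copy()

for t in range(50):
    # RF activity: tanh of the weighted input (lateral terms omitted)
    y = np.tanh((w * x).sum(axis=(1, 2)) / N)
    # pseudo-Hebb rule: w(t+1) = w(t) + eta * (y * x * w(0) - y * w(t))
    w += eta * y[:, None, None] * (x * w0 - w)

# Second layer: a dot fires iff every RF supports it (activation
# threshold) and the summed activity clears the intensity threshold
act_thr, int_thr = 0.01, 0.05
out = ((w > act_thr).all(axis=0) & (w.sum(axis=0) > int_thr)).astype(float)
```

In this reduced setting the output lattice reproduces the input cross exactly, illustrating the generating and integrating steps discussed below; with a too-small learning rate or a too-large threshold the recovery fails, matching the parameter sensitivity reported in the text.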

In the simulations, two emergence processes were found in the process of Chinese character grapheme learning: the generating process, in which the input from the environment activates partial neurons, and the integrating process, in which the activities of the partial neurons integrate into a whole activity in which the input recurs. The fundamental rule governing the first emergence could be the Hebb learning rule. Then what is the rule for the second emergence? The simulation in this paper considered a synchronization mechanism, but various other ways may play an essential role in integrating the activities of neurons into the output of the whole

network. For example, there may be another learning process in the lateral interaction between the four RFs. The difference between the first and the second emergence is that the first occurs in the interaction between the input and individual neurons, while the second occurs in the interaction among neurons. Another case is also possible: the second emergence occurs by a mechanism of reward and punishment. That is, if the character pattern is effectively acquired by an RF, the RF is rewarded and has higher power in the integration. Some special attractors are supposed to exist in learning emergence and are worthy of further analysis.

C. Properties of Chinese Cognition

In recent years, there has been growing interest in the

psycholinguistic study of orthographic acquisition in Chinese. A unique feature of the Chinese orthography is that it uses characters rather than alphabetic letters as the basic writing unit, in square configurations that map mostly onto meaningful morphemes rather than spoken phonemes. Processing or acquisition within this "fractal" organization of characters may differ in important ways from that of English and other alphabetic languages [9]. In this subsection, two specific rules of Chinese learning at the behavioral level are discussed from the viewpoint of self-organization. The first rule concerns the learning curve of

semantic-phonetic compounds [9]. Semantic-phonetic compounds (ideophonetics) are the most interesting and important type of Chinese characters, composed of two major components: the semantic part (often called a radical), which gives information about the character's meaning, and the phonetic part, which gives partial information about the whole character's pronunciation. An interesting phenomenon of Chinese learning is that the corrections of false Chinese characters show quite different learning curves according to the type of error. The stimulus characters can be classified into three types, defined as follows:

Type 1: true characters, where the combination and

components are both right. That is, the semantic part and the phonetic part are real components in their normal locations.

Type 2: false-component characters, where the combination of components is right but the components themselves are wrong. That is, the semantic part and the phonetic part are in the right locations, but there is something wrong with the phonetic part.

Type 3: false-combination characters, where the combination of components is wrong but the components themselves are right. That is, the semantic part and the phonetic part are both real, but the semantic part is in the wrong location.

The mistake curve of children is obtained by behavioral

experiments and is shown in Fig. 4. The results show that the true characters of Type 1 can be learned soon, with an accuracy above 90%. The false-component characters of



Type 2 cannot be learned at first but are corrected soon after. However, the false-combination characters of Type 3 are the most difficult to learn, and the error rate stays at about 60% for a long time.

Fig. 4. The learning curves of semantic-phonetic compounds (Type 1, Type 2 and Type 3).

The second rule concerns phonetic awareness [10]. Increasing amounts of research have shown that there are regularities in the mappings from Chinese phonograms to sound. Regular characters, those in which the pronunciation is the same as that of the phonetic radical (for example, /qing1/), are named faster than irregular characters (for example, /qi2/). Consistent characters, those having the same phonetic radical and pronounced the same, are named faster than inconsistent characters.

Regularity and consistency effects are phonetic-family properties. Children begin to show these effects after 2-3 years of primary school. Visual word frequency is also an important factor that influences the acquisition of Chinese phonograms. Children in their first two years of school can name high-frequency characters correctly and rapidly, but they do not perform well on low-frequency characters until 5-6 years of school study. Family and frequency always interact with each other, whereas regularity and consistency effects can only be found for low-frequency items. Thus, it is difficult to know what different roles family and frequency play in learning to map Chinese phonograms to sounds. Four models, covering different levels of family and frequency, are tested to determine the rules for processing Chinese phonograms. The results show that frequency makes children more familiar with items, but acquisition of regularities requires exposure to more families of characters, and to families of greater size. Thus, frequency and family play fundamentally different roles in the acquisition of Chinese characters.

There are some implications of the behavioral results above. Firstly, it can be concluded that there are special modes of the learning process, such as the learning curve of semantic-phonetic compounds, and key factors of the learning process, such as the frequency and family of characters. Basic mechanisms and attractors such as the J structure may exist in the learning process, but further studies on this issue are needed. Secondly, the behavioral phenomena at the macro level have a physiological basis, so theoretical models such as neural networks should be developed to explain the behavioral features. On the other hand, the

exploration of fundamental features of the learning process at the behavioral level is helpful for testing related neuropsychological results. Finally, these learning tasks must be accomplished through self-organization and emergence. In fact, the behavioral results can be reproduced by simulation models such as the SOM (self-organizing map).

D. Neural Network and Sound

Synaptic dynamics play a great role in complex cognitive

activities such as learning and memory. In this subsection, we discuss a dynamic synapse model proposed by J. S. Liaw and T. W. Berger [3]. The main concept is to incorporate the various synaptic mechanisms into the general scheme of neural information processing [11]. The model is quite simple, since the neural network consists of only eleven neurons in two layers. But it is remarkable that complex cognitive activities such as sound recognition can be achieved by such a simple neural network.

In the dynamic synapse model, four dynamic synapses connect an excitatory presynaptic neuron to an inhibitory neuron, and the inhibitory neuron sends feedback modulation to all four presynaptic terminals, as shown in Fig. 5. Each synapse consists of a presynaptic component, which determines when a quantum of neurotransmitter is released, and a postsynaptic component, which scales the amplitude of the excitatory postsynaptic potential (EPSP) in response to a neurotransmitter release event. The potential of release, PR, is a function of four factors: the action potential (Ap), the first and second components of facilitation (F1 and F2), and the feedback modulation (Mod).

Fig. 5. Four dynamic synapses connecting an excitatory presynaptic neuron to an inhibitory neuron.

The mathematical equations of the dynamic synapse are as follows:

τ_R · dR/dt = −R + k_R · Ap

τ_F1 · dF1/dt = −F1 + k_F1 · Ap

τ_F2 · dF2/dt = −F2 + k_F2 · Ap

τ_Mod · dMod/dt = −Mod + k_Mod · Ap

For details, the reader is referred to the paper by J. S. Liaw and T. W. Berger (1996) [3]. Here we emphasize that the mathematical form of the dynamic synapse model is quite simple, so it can be believed that the dynamic synapse model extracts the key variables and parameters of information processing.
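A minimal Euler-integration sketch of these four first-order dynamics is given below; the time constants and gains are illustrative assumptions (not values from the paper), with the Mod gain made negative so the feedback acts as inhibition, and PR taken here simply as the sum of the four factors:

```python
# Illustrative constants (not taken from the paper)
dt = 0.1
tau = {"R": 5.0, "F1": 2.0, "F2": 20.0, "Mod": 10.0}
k   = {"R": 1.0, "F1": 0.5, "F2": 0.1, "Mod": -0.3}  # Mod: inhibitory feedback

def simulate(spikes):
    """Euler integration of the four synapse variables, each obeying
    tau_X * dX/dt = -X + k_X * Ap, and a release potential PR taken
    as the sum of the four factors (a simplifying assumption)."""
    state = {name: 0.0 for name in tau}
    pr = []
    for ap in spikes:
        for name in state:
            state[name] += dt / tau[name] * (-state[name] + k[name] * ap)
        pr.append(sum(state.values()))
    return pr
```

Because each variable has its own time constant, a spike train drives the release potential up through the fast terms while the slower modulation term pulls it back down, and everything relaxes toward rest once the input stops.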



The dynamic synapse model can perform speech recognition from unprocessed, noisy raw waveforms of words spoken by multiple speakers. To accomplish this task, a neural network is developed consisting of two layers, an input layer and an output layer, with five excitatory neurons in each layer, plus one inhibitory interneuron. Each input neuron is connected to all of the output neurons and to the interneuron, and each connection uses four dynamic synapses. There is also a feedback connection from the interneuron to each presynaptic terminal (Fig. 6). The network is trained to increase the cross-correlation of the output patterns for the same words while reducing it for different words. The learning process of the network followed Hebbian and anti-Hebbian rules.
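The training criterion just described, raising the output cross-correlation for the same word while lowering it across words, can be made concrete with a simple correlation measure; the function names and the peak-correlation choice are assumptions for illustration:

```python
import numpy as np

def max_cross_correlation(p, q):
    """Peak of the normalized cross-correlation between two
    temporal output patterns (1-D arrays of equal length)."""
    p = (p - p.mean()) / (p.std() + 1e-12)
    q = (q - q.mean()) / (q.std() + 1e-12)
    return np.correlate(p, q, mode="full").max() / len(p)

def objective(outputs, labels):
    """Training signal in the spirit of the text: mean same-word
    correlation minus mean different-word correlation."""
    same, diff = [], []
    for i in range(len(outputs)):
        for j in range(i + 1, len(outputs)):
            corr = max_cross_correlation(outputs[i], outputs[j])
            (same if labels[i] == labels[j] else diff).append(corr)
    return np.mean(same) - np.mean(diff)
```

Maximizing this objective drives output patterns for the same word toward near-identical temporal signatures, which is exactly the recognition behavior reported for the trained network.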

Signal transduction through two neurons connected by an excitatory chemical synapse was studied by Eguia et al. [12]. The synaptic stimulus current sequence S is injected into a bursting neuron N1 that is unidirectionally coupled to a second bursting neuron N2 through an excitatory chemical synapse CH. The information transmission is shown in Fig. 7. Eguia et al. put forward the information recovery inequality I(S,R2) > I(S,R1), an apparent violation of the data processing inequality, implying that information can be recovered or even created during transmission.


Fig. 7. Schematic diagram of Eguia's model.


Fig. 6. Speech recognition by a small network with dynamic synapses (input: unprocessed noisy raw speech waveforms, e.g., "Ho" by speaker 1 and by speaker 2).

The results show that the output neurons can generate temporal patterns that are highly correlated with each other after training. That is, though the waveforms of the same word spoken by different speakers are radically different, the temporal patterns are virtually identical, so the speech can be recognized by the small network. The most important thing about the dynamic synapse

model is that cognitive activities can be achieved by a small network. So the supposition that there should be a basic mechanism governing cognitive activities is partly validated. We believe that such a network can be expanded to discuss Chinese learning.

E. Information changes

Information is involved in any learning process. In the brain, information is transmitted and processed by neuron dynamics as well as synaptic dynamics. How information changes is the core problem of the learning process and is still open. Some discussion of this problem is given in this subsection.

A. Borst brought forward the data processing inequality I(S,R2) ≤ I(S,R1), which means that the mutual information between the input signal and the output of the second neuron, which only receives input from the first neuron, will not be higher than the mutual information between the input signal and the output of the first neuron. This also holds from the conventional point of view of information theory: if we regard the process as a Markov chain, the data processing inequality is an exact mathematical identity.
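The data processing inequality can be checked directly on a small discrete Markov chain S → R1 → R2; the channel matrices below are arbitrary illustrative choices, not parameters from any of the cited models:

```python
import numpy as np

def mutual_information(p_xy):
    """I(X;Y) in bits, computed from a joint distribution matrix."""
    px = p_xy.sum(axis=1, keepdims=True)
    py = p_xy.sum(axis=0, keepdims=True)
    mask = p_xy > 0
    return float((p_xy[mask] * np.log2(p_xy[mask] / (px @ py)[mask])).sum())

# Markov chain S -> R1 -> R2: a binary source through two noisy channels
p_s = np.array([0.5, 0.5])
ch1 = np.array([[0.9, 0.1], [0.2, 0.8]])    # P(R1 | S)
ch2 = np.array([[0.85, 0.15], [0.1, 0.9]])  # P(R2 | R1)

p_s_r1 = p_s[:, None] * ch1   # joint P(S, R1)
p_s_r2 = p_s_r1 @ ch2         # joint P(S, R2)

i_sr1 = mutual_information(p_s_r1)
i_sr2 = mutual_information(p_s_r2)
assert i_sr2 <= i_sr1  # data processing inequality: I(S;R2) <= I(S;R1)
```

For any such memoryless chain the inequality holds exactly, which is why the dynamical recovery effect reported by Eguia et al. below appears paradoxical: their system is not a memoryless Markov chain, so the inequality's premise does not apply.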

The information recovery inequality can be illustrated by Fig. 8. A synaptic current J consisting of sixteen pulses is injected into the first neuron (N1). Only four pulses are transformed into hyperpolarizations of N1, whereas in the second neuron (N2) fourteen of the sixteen pulses can be identified as events in the membrane potential. This means that the information is lost in N1 but recovered in N2.

Fig. 8. Time series of the synaptic input J(t), the membrane potential of the first neuron x1(t), and the membrane potential of the second neuron x2(t).

Eguia gives the following explanation of this phenomenon: the model is a dynamical system; the rate of spiking in N1 is slightly modified, and this information appears lost but is preserved in the dynamics of N1, where it can be utilized downstream to induce a hyperpolarization in N2, leading to recovery of the "lost" information. The result is not difficult to understand. If we consider that all the neurons in the system are nonlinear, the dynamical system itself can show a variety of outcomes through nonlinear interaction. Although information may be partly lost in transmission, there is certainly enough space where the hidden information is stored, and it can be recovered by unstable actions of the nonlinear network elements. Perhaps the amount of information will even increase during the process, which implies




that it is an information creation process. We can regard this as self-organization, a characteristic phenomenon of nonlinear physics, in a dynamic neural system.

However, there are still some problems. The model did recover information that had been hidden, but under the conditions of information theory, the information transmitted in the model was not changed. A comparative investigation of this problem was made: if we add a third and a fourth neuron to the model, the information is lost. Furthermore, we simulated information transmission with the dynamic synapse model proposed by J. S. Liaw and T. W. Berger. The results show that the information was lost in transmission when only two neurons were used; when the whole model with eleven neurons was used, the information was not changed. We believe that information in the brain should be

transmitted and processed with information increasing, especially in the processes of cognition and learning. While some theoretical models and simulations show information recovery during transmission, such a recovery effect is not evident in our simulations. It seems that models of neural circuits closer to the actual brain should be developed to explore this problem.

III. CONCLUSIONS

The learning process of the brain is complex and should be taken as some kind of self-organization process with emergence. The essential features of the learning process can be explored through the attractors of low-dimensional dynamic systems. Some more general mechanism should exist behind the various learning rules, and the J structure with specific attractors may be such a mechanism. On the other hand, complex cognitive activities such as sound recognition can be achieved by the simple neural network put forward by Liaw and Berger, partly validating the supposition that there should be a basic mechanism governing cognitive activities.

Chinese character learning may be a good example of a learning process, including three aspects: pronunciation, grapheme, and semanteme. A neural network based on the Hebbian rule is developed to learn Chinese graphemes. The learning process is divided into two processes: from the unlearned state to a partial-learned state, and from the partial-learned state to the learned state. The results show that the Chinese character can be learned with appropriate parameters and integration method. In the simulations, two emergence processes were found in the process of Chinese character grapheme learning: the generating process, in which the input from the environment activates partial neurons, and the integrating process, in which the activities of the partial neurons integrate into a whole activity in which the input recurs. While the fundamental rule governing the first emergence could be the Hebbian learning rule, more studies are needed to explore the rule for the second emergence.

Information is involved in any learning process. In the brain,

information is transmitted and processed by neuron dynamics as well as synaptic dynamics. We believe that information in the brain should be transmitted and processed with information increasing, especially in the processes of cognition and learning. But how information is processed in the brain is still an open question. Though the information recovery proposed by Eguia et al. provides some ideas on the problem, it seems that models of neural circuits closer to the actual brain should be developed.

ACKNOWLEDGMENT

The authors would like to thank Professor Shimon Edelman, Professor Shu Hua, Dr. Chen Jiawei, Dr. Chen Liujun, Dr. Liuyan and Dr. Li Lishu for their help. This work was supported in part by the National Science Foundation under grant no. 60374010, etc.

REFERENCES

[1] I. Prigogine, "Laws of nature and human conduct: specificities and unifying themes," Proceedings of the Conference on Laws of Nature and Human Conduct, Belgium, 1985, pp. 5-8.

[2] H. Haken, Principles of Brain Functioning: A Synergetic Approach to Brain Activity, Behavior and Cognition, Springer, 2000.

[3] J. S. Liaw and T. W. Berger, "Dynamic synapse: a new concept of neural representation and computation," Hippocampus, 1996, vol. 6, pp. 591-600.

[4] L. F. Abbott and W. G. Regehr, "Synaptic computation," Nature, 2004, vol. 431, pp. 796-803.

[5] E. L. Bienenstock, L. N. Cooper, and P. W. Munro, "Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex," Journal of Neuroscience, 1982, vol. 2, pp. 32-48.

[6] Fang Fukang and Chen Qinghua, "The J structure in economic evolving process," Journal of Systems Science and Complexity, 2003, vol. 16(3), pp. 327-338.

[7] J. von Neumann, "The general and logical theory of automata" (1948), in A. H. Taub, Ed., John von Neumann, Collected Works, Vol. 5: Design of Computers, Theory of Automata and Numerical Analysis, Oxford University Press, 1963, pp. 288-328.

[8] B. P. Hiles, N. Intrator, and S. Edelman, "Unsupervised learning of visual structure," Journal of Vision, 2002, 2(7), 74a, http://journalofvision.org/2/7/74/, doi:10.1167/2.7.74.

[9] H. Shu and R. C. Anderson, "Learning to read Chinese: the development of metalinguistic awareness," in J. Wang, A. W. Inhoff, and H.-C. Chen (Eds.), Reading Chinese Script: A Cognitive Analysis, Mahwah, NJ: Lawrence Erlbaum, 1998, pp. 1-18.

[10] H. Shu, R. C. Anderson, and N. Wu, "Phonetic awareness: knowledge of orthography-phonology relationships in the character acquisition of Chinese children," Journal of Educational Psychology, 2000, vol. 92, pp. 56-62.

[11] J. S. Liaw and T. W. Berger, "Dynamic synapse: harnessing the computing power of synaptic dynamics," Neurocomputing, 1999, vol. 26-27, pp. 199-206.

[12] M. C. Eguia, M. I. Rabinovich, and H. D. I. Abarbanel, "Information transmission and recovery in neural communication channels," Phys. Rev. E, 2000, vol. 62, pp. 7111-7122.
