
J Psycholinguist Res
DOI 10.1007/s10936-016-9432-4

Spoken Language Activation Alters Subsequent Sign Language Activation in L2 Learners of American Sign Language

Joshua T. Williams1,2,3,5 · Sharlene D. Newman1,2,4

© Springer Science+Business Media New York 2016

Abstract A large body of literature has characterized unimodal monolingual and bilingual lexicons and how neighborhood density affects lexical access; however, there have been relatively fewer studies that generalize these findings to bimodal (M2) second language (L2) learners of sign languages. The goal of the current study was to investigate parallel language activation in M2L2 learners of sign language and to characterize the influence of spoken language and sign language neighborhood density on the activation of ASL signs. A priming paradigm was used in which the neighbors of the sign target were activated with a spoken English word, and the activation of targets in sparse and dense neighborhoods was compared. Neighborhood density effects in an auditory-primed lexical decision task were then compared to previous reports of native deaf signers who were only processing sign language. Results indicated reversed neighborhood density effects in M2L2 learners relative to those in deaf signers, such that there were inhibitory effects of handshape density and facilitatory effects of location density. Additionally, inhibition for signs in dense handshape neighborhoods was greater for high-proficiency L2 learners. These findings support recent models of the hearing bimodal bilingual lexicon, which posit lateral links between spoken language and sign language lexical representations.

Keywords Sign language · Lexicon · Second language acquisition · Neighborhood density

✉ Joshua T. Williams
[email protected]

1 Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, USA

2 Program in Cognitive Science, Indiana University, Bloomington, IN, USA

3 Speech and Hearing Sciences, Indiana University, Bloomington, IN, USA

4 Program in Neuroscience, Indiana University, Bloomington, IN, USA

5 Cognitive Neuroimaging Laboratory, Indiana University, 1101 E. 10th Street, Bloomington, IN 47405, USA


Introduction

One of the key processes of language communication is the ability to map perceptual-linguistic information onto abstract mental representations and retrieve lexical representations from long-term memory. The mapping of perceptual (acoustic or visual) information onto lexical representations is called lexical access. Many of the models and theories of lexical access have been rooted in psycholinguistic investigations of acoustic speech perception or visual word recognition (Luce and Pisoni 1998; Marslen-Wilson 1987; Norris 1994; McClelland and Elman 1986). Relatively few studies have generalized these models to visual languages such as sign language, despite the fact that a great deal of research has shown that the fundamental aspects of spoken and sign language linguistic structure are similar across language modalities (Sandler and Lillo-Martin 2006; Emmorey and Casey 2002). Only relatively recently have a number of researchers investigated psycholinguistic parallels of spoken and sign language processing (Berent et al. 2013; Sandler and Lillo-Martin 2006). Our knowledge of individuals who store and process both spoken and sign languages from birth, also known as bimodal bilinguals, is even more sparse. Additionally, our knowledge of how hearing adult learners of a sign language (M2L2 learners) process signs is lacking. In the present study, we investigated cross-modal priming in M2L2 learners in order to advance our understanding of how a spoken language influences lexical access of sign language.

American Sign Language (ASL) is the primary language of d/Deaf1 and hard-of-hearing individuals in the United States and parts of North America. Similar to spoken languages, sign languages are composed of arbitrary phonological units. Sign language phonology differs from spoken language phonology insofar as there are at least three sublexical features of sign phonology: handshape, location, and movement (Liddell and Johnson 1989; Sandler 1989; Brentari 1998). Handshape comprises the configuration and the selected fingers and joints of the articulating hands during sign production. Location refers to the place on the body where the sign is articulated. Movement comprises the directionality and path features of the hands during sign production. Depending on the model of sign language phonology, there are differences in the specification of the sublexical features such that handshape has a highly-detailed hierarchical structure and location is unitarily specified (Brentari 1998; Sandler 1989). Furthermore, some models classify handshape and location as inherent features and movement as a prosodic feature (Brentari 1998). The level of representational specificity may be important to models of sign language lexical access. A model of the sign lexicon suggests that morphological restrictions shape the lexicon based on phonological alternation between these parameters (Fernald and Napoli 2000). It is important to test psycholinguistic models against sign language to delineate language-general mechanisms that apply to all linguistic systems across sensorimotor modalities.

Most models of spoken word recognition characterize lexical access as a competitive process in which incoming perceptual-linguistic information is matched to prospective lexical candidates (Luce and Pisoni 1998; Marslen-Wilson 1987; Norris 1994; McClelland and Elman 1986). Perceptual-linguistic information activates several lexical candidates based on degree of similarity to the perceptual information and candidate characteristics (e.g., frequency). Lexical retrieval of a single word is successful once competition between similar lexical candidates has been resolved. Many have posited that the lexicon is organized in groups of similar lexical items called neighborhoods. Neighborhoods are constructed such that lexical items gain membership in a phonological neighborhood as long as there is a one-phoneme difference between each member (Luce and Pisoni 1998). Often, neighbors are related to one another by addition, subtraction, or substitution of a single phoneme in that word (e.g., mat has hat, bat, met, match and math as some of its neighbors). Neighborhood density is computationally derived from the number of similar words related to a target (Luce and Pisoni 1998). Words that have many neighbors are said to be members of a dense neighborhood, whereas words that have few neighbors are said to be members of a sparse neighborhood (Vitevitch 2003).

1 Capitalized Deaf often refers to those individuals who were born deaf and consider themselves part of Deaf culture, including using American Sign Language, whereas the lowercase deaf often refers just to audiological status among those who are late-deafened or do not identify with the Deaf community.

Neighborhood density has been shown to affect speech recognition. Studies have shown that words that reside in sparse neighborhoods are often recognized faster and more accurately than those in dense neighborhoods (Luce and Pisoni 1998; Vitevitch and Luce 1999; Vitevitch 2003). It is assumed that there is greater competition amongst neighbors in a dense neighborhood during word recognition, which slows word recognition latencies (Luce and Pisoni 1998). In other words, competitors within a neighborhood share inhibitory links such that activation of a neighbor spreads inhibition to the target. The differential effects of neighborhood density simply arise out of the fact that there are fewer neighbors in sparse neighborhoods to send inhibition to the target word relative to dense neighborhoods. Other models refine competition in neighborhoods as a variable function such that inhibitory links between competitors can vary as a function of activation strength (Chen and Mirman 2012). Therefore, even competitors in a dense neighborhood can share less inhibition given that they are weakly activated. Given these neighborhood density-modulated models of lexical access in spoken language, predictions can be made regarding their applicability to sign language processing.
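The one-phoneme neighbor relation and neighborhood density count described above can be sketched computationally. The mini-lexicon and its phoneme transcriptions below are illustrative assumptions (not the authors' materials); only the add/delete/substitute logic follows the standard definition (Luce and Pisoni 1998).

```python
def is_neighbor(a, b):
    """True if phoneme sequences a and b differ by exactly one
    addition, deletion, or substitution of a phoneme."""
    if a == b:
        return False
    la, lb = len(a), len(b)
    if abs(la - lb) > 1:
        return False
    if la == lb:
        # substitution: exactly one mismatched position
        return sum(x != y for x, y in zip(a, b)) == 1
    # addition/deletion: removing one phoneme from the longer must yield the shorter
    longer, shorter = (a, b) if la > lb else (b, a)
    return any(longer[:i] + longer[i + 1:] == shorter for i in range(len(longer)))

# Toy lexicon with simplified phoneme transcriptions (an assumption for illustration).
lexicon = {
    "mat":  ["m", "ae", "t"],
    "hat":  ["h", "ae", "t"],
    "bat":  ["b", "ae", "t"],
    "met":  ["m", "eh", "t"],
    "math": ["m", "ae", "th"],
    "at":   ["ae", "t"],
    "sun":  ["s", "ah", "n"],
}

target = lexicon["mat"]
neighbors = [w for w, p in lexicon.items() if is_neighbor(target, p)]
density = len(neighbors)  # neighborhood density of "mat" in this toy lexicon
```

In this toy lexicon "mat" has five neighbors (hat, bat, met, math, at), so it would count as relatively dense; "sun", sharing no single-phoneme edit with "mat", is not a neighbor.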

Previous psycholinguistic studies of sign language recognition have shown that lexical access follows a two-step process involving sublexical and lexical levels (Emmorey and Corina 1990; Corina and Emmorey 1993; Hildebrandt and Corina 2002; Mayberry and Witcher 2005; Carreiras et al. 2008). For native signers, location is identified early during sign perception with little error, while handshape is identified later and has greater variability in its errors. Similar to native signers, bimodal bilinguals (and M2L2 learners of sign language) identify location with few errors in perception (Bochner et al. 2011; Williams and Newman 2016a). Unlike native signers, bimodal bilinguals and M2L2 learners have higher error rates for handshape but perceive it early in sign recognition, at roughly the same time as location (Morford and Carlson 2011; Grosvald et al. 2012; Morford et al. 2008; Williams and Newman 2016b). In summary, previous studies have shown that lexical access in sign language occurs at both sublexical and lexical stages of processing, but may differ based on sublexical features (i.e., handshape or location) and on language experience.

Neighborhood effects are present in sign language lexical access, but differ slightly from those seen in spoken language. Neighborhood density in sign language can be defined in many ways. Neighborhoods can be defined based on minimal pairs that share two of the three sublexical features (Mayberry and Witcher 2005). Since there are few minimal pairs in sign language, other studies have taken different approaches to phonological similarity (Van der Kooij 2002) insofar as phonological similarity (i.e., neighborhood density) is defined as those signs that share only one sublexical feature (Carreiras et al. 2008; Caselli and Cohen-Goldberg 2014). Others suggest that membership can be based on phonological alternations in one or more parameters, where the size of lexical families (or neighborhoods) varies wildly (Fernald and Napoli 2000). Defining neighborhoods by one-parameter overlap bifurcates neighborhood density into (at least) two types based on the shared sublexical feature: handshape or location neighborhood density.


Bifurcation of neighborhood density based on sublexical features has revealed differential effects for handshape and location neighborhood density in sign perception. Carreiras et al. (2008) investigated lexical access of Spanish Sign Language (Lengua de Signos Española; LSE) modulated by neighborhood density in deaf signers. The authors found that signs that reside in dense location neighborhoods (i.e., many neighbors that share the same location) are harder to identify than those in sparse location neighborhoods. Conversely, they found that signs in dense handshape neighborhoods are easier to identify than those in sparse handshape neighborhoods. In effect, neighbors that share the location feature create greater inhibition than those that share the handshape feature. Corina and Emmorey (1993) found similar inhibitory effects when signs were primed with neighbors that shared the location feature. Caselli and Cohen-Goldberg (2014) have explained these differential neighborhood effects using a computational model of the sign lexicon. They found that inhibitory effects of location arise due to its early identification in the time course of sign perception and its richer sublexical frequency. Early identification creates greater inhibition to the lexical sign over a longer period of time. On the other hand, handshape features are identified late in sign processing, which does not allow for increased inhibition through the time course of processing. Additionally, the authors implicated increased resting state activation for location because it has greater representational specificity within the lexicon. Conversely, handshape is less specified (evidenced by greater errors and variation in perception) and therefore has weaker resting state activation. From a more general perspective, Caselli and Cohen-Goldberg (2014) argued that strong neighbors (i.e., location) inhibit lexical access, whereas weak neighbors (i.e., handshape) facilitate lexical access, similar to what has been seen in the spoken language literature (see Chen and Mirman 2012).

The same mechanisms have been observed for lexical access in spoken and sign languages; however, activation varies based on the type of sublexical features. Therefore, the question remains as to how lexical access happens in hearing M2L2 learners who store and process two languages in different language modalities. Specifically, it is interesting to explore how activation in a spoken language influences the activation of sign language given that they use the same mechanisms, but the relationships within neighborhoods differ. Hearing bimodal bilinguals (and M2L2 learners) show parallel language activation of both spoken and sign languages (Emmorey et al. 2008; Giezen and Emmorey 2016; Shook and Marian 2012; Williams and Newman 2016a). The parallel activation of languages in two different modalities influences activation in a unique way. Williams and Newman (2016a) demonstrated in a cross-modal priming task that activation spreads from a spoken lexical item (e.g., keys, /kiz/) to its spoken neighbor (e.g., cheese, /tʃiz/), then to its sign equivalent (e.g., CHEESE) and further to its phonologically-related second language (L2) sign neighbor (e.g., PAPER) in bimodal bilinguals, suggesting that lateral links (e.g., the theoretical connection between cheese and CHEESE) between spoken and sign lexical items allow for cross-modal identity activation. Essentially, there is concomitant activation of semantically equivalent (i.e., identity) lexical items such that the activation of keys also activates its identity sign KEYS.2 As activation spreads to its spoken word neighbors, it will also activate cheese and its identity sign CHEESE. Lateral links seem to be responsible for the identity activation in bimodal bilinguals due to the absence of phonological overlap (Shook and Marian 2012). Lateral links between spoken and sign lexical items have been implicated in unique patterns of lexical activation in bimodal bilinguals. It was posited that since every spoken word co-activates its identity sign (e.g., the lexical representation of the spoken word cat would also activate its identity sign representation CAT), then the neighborhood structure of the spoken language affects the spread of activation to lexical signs. If this hypothesis is correct, then hearing bimodal bilinguals and M2L2 learners should exhibit differential sign activation when primed by spoken language.

2 Per linguistic convention, ASL signs are represented in small capital letters.

The present study specifically sets out to ask whether the activation patterns during sign lexical access can be altered by spoken language input in hearing M2L2 learners of sign language, which will in turn contribute to theories of lateral co-activation in the bimodal bilingual lexicon. The patterns of inhibition and facilitation during lexical access of ASL sign targets when preceded by an auditory English prime were explored. We specifically wanted to examine how priming influences the relationship between sparse and dense neighborhood density effects. That is, Carreiras et al. (2008) used an unprimed lexical decision task to show that dense handshape neighbors are facilitatory, but dense location neighbors are inhibitory. In the present study, sign targets were primed by an auditory English word. The primed reaction times from the sparse neighborhoods were compared against those in the dense neighborhoods in order to examine the differential effects of priming on neighborhood density. The prediction is that the spoken English prime would activate the English lexical item, its competitors, and their sign identity equivalents, with weak activation of the sign neighbors. The activation of the semantically equivalent identity representation would increase the resting state activation of that sign target. When the sign target appears on the screen, it will have greater activation energy than its neighbors. Therefore, since these learners are receiving auditory spoken language input, the facilitatory (or weakly inhibitory) input from handshape neighbors and the strong inhibitory input from location neighbors will have opposite effects on the activation of the sign relative to those seen in deaf signers who only saw visual sign language input (cf. Carreiras et al. 2008). In other words, sign targets that reside in dense handshape neighborhoods will have longer response latencies than those in sparse handshape neighborhoods, and sign targets that reside in dense location neighborhoods will have shorter response latencies than those in sparse location neighborhoods.

Methods

Participants

Students from Indiana University (IU) and Indiana University-Purdue University Indianapolis (IUPUI) participated in this study following Indiana University Institutional Review Board regulations. Twenty-five native English speakers (male = 7) who were currently enrolled in an intermediate to advanced American Sign Language course at IU (n = 12) or IUPUI (n = 13) participated.

ASL proficiency was measured by a self-rating questionnaire and a vocabulary test. Self-ratings have been shown to correlate with measured proficiency (MacIntyre et al. 1997; Bachman and Palmer 1989). The self-rating questionnaire asked the students to rate their proficiency in speaking and understanding on a scale including "very poor," "fair," "functional," "good," "very good," and "near native." ASL students' self-rated proficiency ranged from 2 to 7 (M = 4.728, SD = 1.053).

The vocabulary test was constructed by taking the signs from the current ASL textbooks across all 4 semesters (Smith et al. 1988a, b, 2008). During the vocabulary test, participants viewed video clips of spoken or signed words. Participants were required to type the translation into English. The computer scored their translations for correct answers, including any synonyms (e.g., bathroom or restroom would be accepted for bathroom). A total score correct out of 142 was used as the participant's proficiency score. M2L2 learners' proficiency ranged from 34 to 102 (M = 78.24, SD = 1.312).

A composite score was created in order to capture both self-rating and vocabulary performance in a general score of proficiency. The composite score was calculated by combining the proportion correct on the vocabulary test with the overall self-rating. The proficiency scores ranged from 0 to 1: a composite of 0 indicates a naïve learner, 0.5 indicates an intermediate learner, and 1 indicates a native-like learner. This score has been used previously and shown to be more sensitive in capturing proficiency (Williams and Newman 2016a). Composite proficiency scores for the ASL students ranged from 0.46 to 0.86 (M = 0.612, SD = 0.123).
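The exact combination formula is underspecified above. One plausible reading, averaging the vocabulary proportion with a normalized self-rating, is sketched below as a minimal illustration; the 7-point rating ceiling and the averaging step are assumptions rather than the authors' stated formula.

```python
def composite_proficiency(vocab_correct, vocab_total, self_rating, rating_max=7):
    """One possible composite in [0, 1]: the mean of the vocabulary proportion
    and a normalized self-rating. Both the normalization and the averaging
    are assumptions, not the published formula."""
    vocab_prop = vocab_correct / vocab_total
    rating_prop = self_rating / rating_max
    return (vocab_prop + rating_prop) / 2

# A learner with 102/142 vocabulary items correct and a self-rating of 5.
score = composite_proficiency(102, 142, 5)
```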

Stimuli

An L2 English-ASL corpus was constructed to get neighborhood density counts of the stimuli. Commonly-taught American Sign Language textbooks that have been used at Indiana University for beginner to advanced foreign language courses (1st–4th semester) were annotated (Smith et al. 1988a, b, 2008; Madsen 1972; Humphries and Padden 1992). In total there were 590 ASL signs included, with all duplicates removed. Each sign was coded for its English gloss. To-be-learned unit numbers across semesters were also coded as a proxy measure of order of acquisition. The following ASL phonological features were coded: dominant handshape, nondominant handshape, dominant handshape markedness, initial and final location (if there was displaced movement), and initial location markedness. Phonological information for hand orientation and movement was omitted. Phonological features such as handshape and location were coded based on the Prosodic Model (Brentari 1998), and similar types of coding can be found in larger corpora of sign languages (e.g., Gutierrez-Sigut et al. 2015).

Phonological neighborhood density was derived for both handshape neighborhoods and location neighborhoods. Handshape or location phonological neighborhood density was calculated by taking the number of signs within the corpus that also shared that handshape or location but differed along the other two dimensions (i.e., either handshape or location, and movement; see Luce and Pisoni 1998 for similar neighborhood density measures in spoken language). The number of signs that shared a given handshape constituted the handshape neighborhood, such that if only 3 signs in the corpus shared the same handshape (while differing in location), then this would be a sparse handshape neighborhood. Similarly, if many signs shared the same handshape, then this would be a dense handshape neighborhood. The same was true for location neighborhoods.
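The counting logic above can be illustrated with a toy version of the corpus. The sign entries and feature labels below are invented for illustration (ENJOY's features follow the description later in this section; the rest are hypothetical), and the strict reading of "differed along the other two dimensions" is one interpretation of the text.

```python
# Toy sign corpus: each entry codes (handshape, location, movement).
# Feature labels other than ENJOY's are invented for illustration.
corpus = {
    "CHEESE": ("flat-5", "nondominant-palm", "twist"),
    "PAPER":  ("flat-5", "nondominant-palm", "brush"),
    "ENJOY":  ("flat-5", "chest", "circular"),
    "FINE":   ("open-5", "chest", "contact"),
    "MOTHER": ("open-5", "chin", "contact"),
}

def handshape_density(sign):
    """Count signs sharing the handshape while differing in location and movement."""
    hs, loc, mov = corpus[sign]
    return sum(1 for other, (h, l, m) in corpus.items()
               if other != sign and h == hs and l != loc and m != mov)

def location_density(sign):
    """Count signs sharing the location while differing in handshape and movement."""
    hs, loc, mov = corpus[sign]
    return sum(1 for other, (h, l, m) in corpus.items()
               if other != sign and l == loc and h != hs and m != mov)
```

In this toy corpus, ENJOY has a handshape neighborhood of size 2 (CHEESE and PAPER share flat-5) and a location neighborhood of size 1 (FINE shares the chest location).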

Sixty ASL signs were selected as targets that differed in sublexical features and neighborhood density. Thirty signs differed based on their handshape neighborhood density and 30 differed based on their location neighborhood density, while controlling for the other. That is, 15 signs resided in dense handshape neighborhoods (M = 44.80, SD = 5.49), while 15 resided in sparse handshape neighborhoods [M = 3.06, SD = 1.03; F(1, 29) = 836.321, p < 0.0001], but they did not differ in their location neighborhood density [F < 1]. Similarly, 15 signs resided in dense location neighborhoods (M = 98, SD = 35.13), while 15 were in sparse location neighborhoods [M = 3.60, SD = 2.92; F(1, 29) = 107.553, p < 0.001], but were matched for handshape neighborhood density [F < 1]. All of the targets were matched for English frequency and English phonological neighborhood size across conditions [Fs < 1].

Sixty nonsign targets were created by taking the real sign targets and swapping out the target sublexical features with a randomly selected feature within the same neighborhood. For example, the target ENJOY resides in a dense handshape neighborhood (vis-à-vis its condition). ENJOY contains the flat five handshape with a circular movement on the chest. A randomly selected handshape that typically resides in dense handshape neighborhoods (as determined by the corpus, e.g., claw) replaced it to create a nonsign. This was to ensure that (1) the nonsigns were difficult to tell apart from the real signs and (2) they did not violate the density structure of the language. All nonsigns were judged by a native signer to be non-existent yet natural to produce, but no other phonotactic metrics were used. A native signer video recorded the sign targets at a slow but naturalistic rate in front of a blue-grey backdrop. Videos were edited from one frame before the signer lifted his hands to produce the sign to one frame after his hands came to rest at his sides. Video durations did not differ across conditions [F < 1].

Sixty aural primes were selected by taking an English rhyme of the sign target. No systematic criteria governed the choice of rhyme; this was intended to diminish systematic effects of rhyme type. English rhymes often kept the rime and coda constant across prime-target pairs; however, several pairs nevertheless varied in all syllable positions. For example, if the sign target was ENJOY, its prime would be destroy. Aural primes were matched across conditions for English frequency [F(3, 59) = 2.043, p = 0.077] and English phonological neighborhood density [F < 1]. A native English speaker recorded the aural primes using Audacity software at a 44,100 Hz sampling rate with 16-bit resolution. The primes were edited to a constant duration of 975 ms.

A 2 × 2 × 2 design was used with lexicality (sign vs. nonsign), sublexical feature (handshape vs. location), and neighborhood density (sparse vs. dense) as the within-subjects factors. As stated previously, the current study aims to compare primed activation of a target in a sparse neighborhood relative to a dense neighborhood in order to examine the differential effects of priming on neighborhood density; thus, we excluded an unrelated baseline because the comparison was between sparse and dense neighborhoods and relative to the results found in Carreiras et al. (2008). In that way, the current design diverges from traditional priming procedures, where reaction times for related prime-target pairs are compared with those of unrelated prime-target pairs.

Procedure

The experiment was presented on a 27-in. iMac with a 3.2 GHz Intel Core i5 processor using PsychoPy software (Peirce 2007). The participants heard the English aural prime, followed by an interstimulus interval of 250 ms, and then the ASL target. Prime-target pairs were randomly presented for each subject. Participants made a lexical decision to the target by pressing the '0' key with their right index finger for real signs and the '1' key with their left index finger for nonsigns. The participants were encouraged to respond as quickly and as accurately as possible. Reaction times were recorded relative to the onset of the video target.

Results

Reaction times for only correct trials were used in the following analyses. Reaction times that were two standard deviations above or below the mean were omitted from the analysis (4.89% of trials). An analysis of variance (ANOVA) with the three-factor design was performed to test whether there were density differences in each neighborhood type.
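The two-standard-deviation trimming rule can be sketched as follows. The sample reaction times are fabricated, and since the text does not state whether trimming was done per subject, per condition, or overall, this sketch trims relative to a single mean.

```python
from statistics import mean, stdev

def trim_rts(rts, k=2.0):
    """Keep reaction times within k standard deviations of the mean."""
    m, s = mean(rts), stdev(rts)
    return [rt for rt in rts if abs(rt - m) <= k * s]

# Fabricated RTs in milliseconds; 4800 is an obvious outlier.
rts = [1450, 1600, 1550, 1700, 1620, 4800]
trimmed = trim_rts(rts)
```

Here the 4800 ms trial falls more than two standard deviations above the mean and is dropped, while the remaining five trials survive.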

The analysis revealed (see Fig. 1) a significant effect of lexicality [F(1, 23) = 10.549, p < 0.01, eta-squared = 0.314] such that real signs (M = 1619, SE = 42) were responded to more quickly than nonsigns (M = 1704, SE = 57). There was a trending effect of parameter [F(1, 23) = 3.507, p = 0.074, eta-squared = 0.132], where participants were slower to respond to signs in handshape neighborhoods (M = 1673, SE = 51) than location neighborhoods (M = 1650, SE = 47). There was a significant effect of density [F(1, 23) = 10.492, p < 0.01, eta-squared = 0.313] such that signs in dense neighborhoods (M = 1634, SE = 49) were responded to faster than those in sparse neighborhoods (M = 1689, SE = 50). There was a significant interaction between lexicality and parameter [F(1, 23) = 8.910, p < 0.01, eta-squared = 0.279]: there was a reversal for nonsigns in location neighborhoods (M = 1711, SE = 56), which took longer than signs in location neighborhoods (M = 1588, SE = 40). There was a significant interaction between parameter and density [F(1, 23) = 21.988, p < 0.001, eta-squared = 0.489]: reaction times were the same across densities for handshape neighborhoods, but location neighborhoods showed longer latencies for sparse (M = 1703, SE = 50) than dense (M = 1597, SE = 45) neighborhoods. There were no significant interactions between lexicality and density or among lexicality, density, and parameter.

Fig. 1 Mean reaction times in milliseconds for each condition

Analysis of the error data revealed a significant main effect of lexicality: participants were more accurate for nonsigns (M = 85.1%, SE = 1.8%) than signs [M = 73.8%, SE = 3.4%; F(1, 23) = 7.449, p < 0.05, eta-squared = 0.245], which suggests a relative bias toward responding that a target is a nonsign. The only significant interaction was between parameter and density [F(1, 23) = 14.066, p < 0.001, eta-squared = 0.379]: participants responded more accurately to signs in sparse handshape neighborhoods (M = 81.4%, SE = 2.1%) than dense handshape neighborhoods (M = 78.8%, SE = 2.4%), but more accurately to signs in dense location neighborhoods (M = 82.4%, SE = 2.2%) than sparse location neighborhoods (M = 75.3%, SE = 1.7%).

Fig. 2 Mean density effects are shown. Density effects are calculated by taking the mean reaction time for the lexical decision of signs that reside in a sparse sign neighborhood for a given subject and subtracting it from the mean reaction time for dense signs for that same subject. This measure best illustrates the inhibitory and facilitative roles of parameter density on sign retrieval

Density effects (see Fig. 2) were calculated to get a clearer picture of the trends. Density effects were calculated by taking the reaction times for the targets in sparse neighborhoods and subtracting them from the reaction times of targets in dense neighborhoods (i.e., Density Effect = Dense RT − Sparse RT). In relation to previous literature in spoken language research, typical density effects for perception would be positive, such that words in dense neighborhoods have longer response latencies than those in sparse neighborhoods. A repeated measures ANOVA with the factors of lexicality and parameter found only a significant effect of parameter [F(1, 23) = 21.988, p < 0.001, η² = 0.489] such that density effects for targets in location neighborhoods (M = −106, SE = 22) were more negative than those for targets in handshape neighborhoods (M = −3, SE = 18). There was no main effect of lexicality and no interaction between lexicality and parameter.
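The density-effect computation described above can be sketched as follows. The data are hypothetical, invented for illustration; only the formula (Dense RT − Sparse RT, averaged over subjects) comes from the text.

```python
# Minimal sketch of the per-subject density-effect measure:
# Density Effect = mean RT(dense) - mean RT(sparse), averaged over subjects.
from statistics import mean

# Hypothetical per-subject mean RTs (ms) for sign targets, keyed by
# (parameter, density); subject values are illustrative only.
subjects = [
    {("location", "dense"): 1590, ("location", "sparse"): 1700,
     ("handshape", "dense"): 1675, ("handshape", "sparse"): 1670},
    {("location", "dense"): 1605, ("location", "sparse"): 1705,
     ("handshape", "dense"): 1680, ("handshape", "sparse"): 1685},
]

def density_effect(param):
    """Mean over subjects of (dense RT - sparse RT) for one parameter."""
    return mean(s[(param, "dense")] - s[(param, "sparse")] for s in subjects)

loc_effect = density_effect("location")   # negative -> dense is faster (facilitation)
hs_effect = density_effect("handshape")   # near zero here, as in the reported means
```

With these toy numbers the location effect is strongly negative and the handshape effect near zero, mirroring the direction of the reported means (−106 vs. −3 ms).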

A linear regression analysis (see Fig. 3) was performed with density effects as the dependent variable and composite proficiency as the predictor variable in order to determine whether density effects are modulated by M2L2 learner proficiency. Proficiency significantly accounted for 18.2% of the variance in density effects for targets that shared handshape features in real ASL signs [F(1, 23) = 4.881, p < 0.05, R² = 0.182]. Proficiency did not account for any other variance in the density effects for either signs or nonsigns.

Fig. 3 The regression analysis between composite proficiency and density effects for both handshape and location neighborhoods
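A simple least-squares fit of this kind can be sketched as below. The learner data are hypothetical; the paper's actual finding (R² = 0.182 for handshape targets) is only echoed in the comments.

```python
# Sketch of the proficiency regression: ordinary least squares for
# density_effect ~ proficiency, reporting R^2. All data are hypothetical.
from statistics import mean

def simple_ols(x, y):
    """Return (slope, intercept, R^2) for a simple linear regression."""
    mx, my = mean(x), mean(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1 - ss_res / ss_tot

# Hypothetical learners: composite proficiency vs. handshape density effect (ms).
proficiency = [2.0, 3.5, 4.0, 5.5, 6.0, 7.5]
hs_density_effect = [-20, 5, 10, 25, 40, 55]
slope, intercept, r2 = simple_ols(proficiency, hs_density_effect)
# slope > 0: more proficient learners show a larger (more inhibitory)
# handshape density effect, as in the reported regression.
```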

General Discussion

There are relatively few studies examining neighborhood density effects on sign activation, and even fewer that characterize these effects in M2L2 learners. The goal of the current study was to investigate parallel language activation in M2L2 learners and to characterize the influence of sign language neighborhood density on the activation of ASL signs. We used an unconventional priming paradigm in which we activated the neighbors of the sign target with a spoken English word and compared the activation of the targets in sparse and dense neighborhoods. As predicted, the results showed that the effects of handshape and location density found in native and near-native deaf signers during (only) sign language processing are reversed when primed with a spoken language. The reversal indicates that there were inhibitory effects of handshape density (signs that resided in sparse handshape neighborhoods were recognized faster than those in dense handshape neighborhoods) and facilitatory effects of location density (signs that resided in dense location neighborhoods were responded to faster than those in sparse neighborhoods). Lastly, the results revealed that increased inhibition for signs in dense handshape neighborhoods was greater for high proficiency M2L2 learners. Next, these findings and their implications for lexical activation in M2L2 learners are discussed.

Parallel language activation has been observed for a variety of bilingual groups across multiple tasks. Unimodal bilinguals demonstrate parallel language activation during reading (Van Heuven et al. 1998; Schwartz et al. 2007), speech perception (Marian and Spivey 2003), and speech production (Kroll et al. 2008). Parallel language activation has also been seen in L1 processing and is often modulated by language dominance and proficiency. In the present study, M2L2 learners demonstrated parallel language activation, as English auditory primes affected the activation patterns of ASL signs. Additionally, these findings contribute to the handful of studies that have found parallel language activation in perception (Shook and Marian 2012; Williams and Newman 2016a), production (Emmorey et al. 2008; Giezen and Emmorey 2016; Van Hell et al. 2009), and reading (Morford et al. 2011).

Previous studies of neighborhood density effects in sign activation have found that signs that share location features (i.e., reside in location neighborhoods) are inhibited by the competition of their sign neighbors, whereas signs that share handshape features are facilitated by weakened competition of their neighbors (Carreiras et al. 2008; Caselli and Cohen-Goldberg 2014). These differential effects based on sublexical features are thought to arise out of the timecourse of sign perception and the resting state activation of each sublexical feature (Caselli and Cohen-Goldberg 2014). Location features are identified earlier than handshape features in native sign perception (Emmorey and Corina 1990; Clark and Grosjean 1982). In various models of sign phonology, location is also thought to have greater specification of its phonological representation than handshape (Brentari 1998; Sandler 1989), which may cause increased competition amongst neighbors in a dense location neighborhood (Caselli and Cohen-Goldberg 2014). In the present study, the results indicated effects opposite to those previously reported in the literature (cf. Carreiras et al. 2008; Caselli and Cohen-Goldberg 2014). M2L2 learners responded to the lexicality of a sign target that resided in a dense handshape neighborhood more slowly than to those in sparse handshape neighborhoods. In other words, there was greater inhibition from the neighbors in the dense handshape neighborhood than in the sparse, which is opposite of what has been recently reported. Similarly, in the present study M2L2 learners responded faster to signs in dense location neighborhoods compared to those in sparse neighborhoods; there was greater facilitation from the neighbors in the dense location neighborhood, which again is opposite of previous findings. The reversal effects found for M2L2 learners in this study can likely be attributed to the strong role of auditory, or spoken, representations in the hearing bimodal lexicon when processing both languages, compared to the native sign (bilingual) lexicon in which only one language is strongly activated.

The hearing bimodal bilingual lexicon must accommodate both a spoken language and a sign language. This allows lexical characteristics from the spoken first language to influence activation of the signed second language through lateral links between spoken and signed lexical items (Giezen and Emmorey 2016; Shook and Marian 2012; Williams and Newman 2016a). Due to these lateral links and divergent phonological systems, the only lexical activation that spreads from a spoken language lexical item to a sign language lexical item is that of a lexical equivalent. In other words, when auditory English input (e.g., keys) is introduced into the hearing bimodal bilingual lexicon, both the English lexical item (e.g., keys) and its sign identity translation (e.g., KEYS) are activated, and subsequent spreading of activation is only to neighbors or rhymes of the English target (e.g., cheese) and its sign identity translation (e.g., CHEESE), but not as much to the sign neighbors (e.g., PAPER3; Williams and Newman 2016a). Because of this pattern of activation in the hearing bimodal bilingual lexicon, the input of the sublexical features is different from those in previous studies (cf. Carreiras et al. 2008; Caselli and Cohen-Goldberg 2014).

Fig. 4 Spreading of activation in the hearing bimodal bilingual lexicon. Auditory input from the prime activates the spoken lexical representation bear. The equivalent identity sign BEAR is co-activated (Shook and Marian 2012; Williams and Newman 2016a). Other lexical representations are also activated within the L1 bear neighborhood, such as hair, fair, and chair. Activation of these spoken neighbors also co-activates their identity signs, e.g., chair and CHAIR. For illustrative purposes, assume that CHAIR resides in a dense handshape neighborhood. As such, the neighbors feed each other facilitative activation (cf. BEAR in a sparse neighborhood that receives inhibitory input from neighbors). When CHAIR is co-activated via sequential neighbor activation (i.e., chair) from the original prime bear, the sign neighbors (e.g., TRAIN, SPOON, NAME) of CHAIR are then inhibited, blocking their facilitative effects. As such, effects on sign retrieval are opposite to those in native or experienced deaf signers for whom both primes and targets are signed

More specifically, in hearing bimodal bilinguals (and in this task specifically), the targets were primed with an auditory English rhyme. Priming with an English word brings the sign target to greater resting state activation. For example (see Fig. 4), the sign target CHAIR is preceded by the underlying English word bear. The prime causes the English word bear and its sign identity translation BEAR to be activated. Furthermore, it causes its English rhyme neighbor chair to be activated, which feeds activation to its sign identity translation CHAIR via lateral links. The activation that is fed to CHAIR raises its resting state activation above that of its sign neighbors, which are not robustly activated (see Williams and Newman 2016a). Increased activation of the target sends concomitant inhibition to its sign neighbors. Phonological alternation and morphological restrictions determine the path of sign neighbor co-activation (Fernald and Napoli 2000). Therefore, since handshape neighbors have been shown to send increased facilitation to the target and they are now inhibited, less facilitation reaches the target, which in turn delays lexical retrieval for signs in dense handshape neighborhoods.

3 PAPER and CHEESE are phonological neighbors in sign language because they share the same location and handshape, but differ by movement (see Brentari 1998).

A similar mechanism can explain the facilitatory effects seen in dense location neighborhoods. Through the same mechanism, the sign target SIT receives increased activation from the English prime hit via lateral links, and increased inhibition is sent to the neighbors of SIT. Increased neighbor inhibition reduces strong inhibitory input to the target, which in turn facilitates sign retrieval in a dense neighborhood relative to a sparse neighborhood. Therefore, signs in dense location neighborhoods show increased activation because the auditory prime creates increased activation of the target and suppresses inhibitory feedback from its neighbors.
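The mechanism described in the last two paragraphs can be illustrated with a deliberately loose toy model. All quantities below are hypothetical, and the key simplifying assumption (that cross-modal priming reverses the sign of the neighbors' contribution, rather than merely blocking it) is ours, not the paper's; the sketch only shows that such a reversal reproduces the qualitative pattern of results.

```python
# Toy sketch of the proposed mechanism; all values are hypothetical.
# Unprimed, each neighbor contributes `neighbor_effect` ms to the target's
# retrieval latency (negative = facilitation, positive = inhibition).
# Under cross-modal priming, the boosted target inhibits its neighbors,
# and we loosely assume their contribution reverses in sign.

def latency(base, n_neighbors, neighbor_effect, primed):
    """Retrieval latency (ms) for a sign with n_neighbors neighbors."""
    sign = -1 if primed else 1
    return base + sign * n_neighbors * neighbor_effect

BASE = 1650            # hypothetical baseline RT (ms)
HS, LOC = -10, +10     # native data: handshape facilitates, location inhibits
DENSE, SPARSE = 8, 2   # hypothetical neighborhood sizes

# Density effects (dense minus sparse), unprimed vs. primed:
native_hs = latency(BASE, DENSE, HS, False) - latency(BASE, SPARSE, HS, False)
native_loc = latency(BASE, DENSE, LOC, False) - latency(BASE, SPARSE, LOC, False)
primed_hs = latency(BASE, DENSE, HS, True) - latency(BASE, SPARSE, HS, True)
primed_loc = latency(BASE, DENSE, LOC, True) - latency(BASE, SPARSE, LOC, True)
# Unprimed: dense handshape helps, dense location hurts (native pattern).
# Primed: both effects flip, matching the M2L2 reversal reported here.
```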

Regression analyses indicated that dense handshape neighborhoods create greater inhibition for more proficient M2L2 learners. The monotonic increase in inhibition for only dense handshape neighborhoods in more proficient M2L2 learners can be explained by how signs are processed. In sign perception tasks, M2L2 learners have been shown to perceive handshape quickly (cf. Emmorey and Corina 1990) and to rely more heavily on handshape features during perception (Morford and Carlson 2011). As proposed by Caselli and Cohen-Goldberg (2014), the early activation of sublexical features creates greater inhibition. Earlier identification of location features in native signers is partly responsible for the inhibition of lexical retrieval for signs in dense location neighborhoods. Additionally, greater frequency in the underlying structure of a sublexical feature increases its inhibitory effects on recognition; thus, location has higher resting state activation for native signers. However, overreliance on and earlier detection of handshape in sign perception by M2L2 learners would feed greater inhibitory activation from the neighbors in dense handshape neighborhoods. As proficiency in ASL sign perception increases, the input from handshape features monotonically increases the inhibition from its neighbors.

In summary, the present findings are in line with a number of recent findings suggesting that hearing M2L2 learners co-activate both their spoken and sign languages during sign language processing and that lexical retrieval in sign language is similar to that in spoken language. Lexical and sublexical characteristics from both languages influence the activation of lexical signs. Furthermore, the findings suggest differential effects of various sublexical features (i.e., handshape and location) during online sign perception. It could be argued that the comparison between the learners in the present study and those in previous studies is weak because the two groups are vastly different (i.e., in proficiency, level of spoken language input, language experience, etc.). However, the findings herein still significantly contribute to our understanding of how spoken languages alter lexical retrieval of signs. Furthermore, the present findings are consistent with previous work on neighborhood effects on sign perception when re-interpreted through our model of the hearing bimodal bilingual lexicon. On that note, these findings support previously posited models of the hearing bimodal bilingual lexicon in which lateral lexical links spread activation to identity sign translations, which in turn impacts neighborhood activation in sign language. More studies need to be carried out in order to further specify this bimodal bilingual model.

Acknowledgments Supported by the National Science Foundation Graduate Research Fellowship #1342962 (JTW). A special thanks is dedicated to Edwin Rivera for his work on recording the stimuli and Jeremy Keaton for his diligent work on collecting the data. We also appreciate Dr. Julie White's assistance in data collection and recruitment at IUPUI.


References

Bachman, L. F., & Palmer, A. S. (1989). The construct validation of self-ratings of communicative language ability. Language Testing, 6(1), 14–29.

Berent, I., Dupuis, A., & Brentari, D. (2013). Amodal aspects of linguistic design. PLoS One, 8(4), e60617.

Bochner, J. H., Christie, K., Hauser, P. C., & Searls, J. M. (2011). When is a difference really different? Learners' discrimination of linguistic contrasts in American Sign Language. Language Learning, 61(4), 1302–1327.

Brentari, D. (1998). A prosodic model of sign language phonology. Cambridge, MA: MIT Press.

Carreiras, M., Gutiérrez-Sigut, E., Baquero, S., & Corina, D. (2008). Lexical processing in Spanish Sign Language (LSE). Journal of Memory and Language, 58(1), 100–122.

Caselli, N. K., & Cohen-Goldberg, A. M. (2014). Lexical access in sign language: A computational model. Frontiers in Psychology, 5, 1–11.

Chen, Q., & Mirman, D. (2012). Competition and cooperation among similar representations: Toward a unified account of facilitative and inhibitory effects of lexical neighbors. Psychological Review, 119(2), 417.

Clark, L. E., & Grosjean, F. (1982). Sign recognition processes in American Sign Language: The effect of context. Language and Speech, 25(4), 325–340.

Corina, D. P., & Emmorey, K. (1993, November). Lexical priming in American Sign Language. In 34th annual meeting of the Psychonomics Society.

Emmorey, K., & Casey, S. (2002). Gesture, thought, and spatial language. In K. R. Coventry & P. Olivier (Eds.), Spatial language (pp. 87–101). Dordrecht: Springer Netherlands.

Emmorey, K., & Corina, D. (1990). Lexical recognition in sign language: Effects of phonetic structure and morphology. Perceptual and Motor Skills, 71(3f), 1227–1252.

Emmorey, K., Borinstein, H. B., Thompson, R., & Gollan, T. H. (2008). Bimodal bilingualism. Bilingualism, 11(1), 43–61.

Fernald, T. B., & Napoli, D. J. (2000). Exploitation of morphological possibilities in signed languages. Sign Language & Linguistics, 3(1), 3–58.

Giezen, M. R., & Emmorey, K. (2016). Language co-activation and lexical selection in bimodal bilinguals: Evidence from picture-word interference. Bilingualism: Language and Cognition, 19(2), 264–276.

Grosvald, M., Lachaud, C., & Corina, D. (2012). Handshape monitoring: Evaluation of linguistic and perceptual factors in the processing of American Sign Language. Language and Cognitive Processes, 27(1), 117–141.

Gutierrez-Sigut, E., Costello, B., Baus, C., & Carreiras, M. (2015). LSE-Sign: A lexical database for Spanish Sign Language. Behavior Research Methods, 48, 123–137.

Hildebrandt, U., & Corina, D. (2002). Phonological similarity in American Sign Language. Language and Cognitive Processes, 17(6), 593–612.

Humphries, T., & Padden, C. (1992). Learning American Sign Language: Levels I & II-Beginning & Intermediate. Boston: Pearson Education.

Kroll, J. F., Bobb, S. C., Misra, M., & Guo, T. (2008). Language selection in bilingual speech: Evidence for inhibitory processes. Acta Psychologica, 128(3), 416–430.

Liddell, S. K., & Johnson, R. E. (1989). American Sign Language: The phonological base. Sign Language Studies, 64(1), 195–277.

Luce, P. A., & Pisoni, D. B. (1998). Recognizing spoken words: The neighborhood activation model. Ear and Hearing, 19(1), 1.

MacIntyre, P. D., Noels, K. A., & Clément, R. (1997). Biases in self-ratings of second language proficiency: The role of language anxiety. Language Learning, 47(2), 265–287.

Madsen, W. J. (1972). Conversational Sign Language II: An intermediate-advanced manual. Washington, DC: Gallaudet University Press.

Marian, V., & Spivey, M. (2003). Competing activation in bilingual language processing: Within- and between-language competition. Bilingualism: Language and Cognition, 6(2), 97–115.

Marslen-Wilson, W. D. (1987). Functional parallelism in spoken word-recognition. Cognition, 25, 71–102.

Mayberry, R. I., & Witcher, P. (2005). Age of acquisition effects on lexical access in ASL: Evidence for the psychological reality of phonological processing in sign language. In 30th Boston University Conference on Language Development.

McClelland, J. L., & Elman, J. L. (1986). The TRACE model of speech perception. Cognitive Psychology, 18(1), 1–86.

Morford, J. P., & Carlson, M. L. (2011). Sign perception and recognition in non-native signers of ASL. Language Learning and Development, 7(2), 149–168.

Morford, J. P., Grieve-Smith, A. B., MacFarlane, J., Staley, J., & Waters, G. (2008). Effects of language experience on the perception of American Sign Language. Cognition, 109(1), 41–53.

Morford, J. P., Wilkinson, E., Villwock, A., Piñar, P., & Kroll, J. F. (2011). When deaf signers read English: Do written words activate their sign translations? Cognition, 118(2), 286–292.

Norris, D. (1994). Shortlist: A connectionist model of continuous speech recognition. Cognition, 52(3), 189–234.

Peirce, J. W. (2007). PsychoPy—Psychophysics software in Python. Journal of Neuroscience Methods, 162(1), 8–13.

Sandler, W. (1989). Phonological representation of the sign: Linearity and nonlinearity in American Sign Language (Vol. 32). Berlin: Walter de Gruyter.

Sandler, W., & Lillo-Martin, D. (2006). Sign language and linguistic universals. Cambridge: Cambridge University Press.

Schwartz, A. I., Kroll, J. F., & Diaz, M. (2007). Reading words in Spanish and English: Mapping orthography to phonology in two languages. Language and Cognitive Processes, 22(1), 106–129.

Shook, A., & Marian, V. (2012). Bimodal bilinguals co-activate both languages during spoken comprehension. Cognition, 124(3), 314–324.

Smith, C., Lentz, E., & Mikos, K. (1988). Signing naturally, Level 1. San Diego, CA: DawnSignPress.

Smith, C., Lentz, E., & Mikos, K. (1988). Signing naturally, Level 2. San Diego, CA: DawnSignPress.

Smith, C., Lentz, E., & Mikos, K. (2008). Signing naturally (1st ed.). San Diego, CA: DawnSignPress.

Van der Kooij, E. (2002). Phonological categories in Sign Language of the Netherlands: The role of phonetic implementation and iconicity. Utrecht: LOT.

Van Hell, J. G., Ormel, E., van der Loop, J., & Hermans, D. (2009). Cross-language interaction in unimodal and bimodal bilinguals. Paper presented at the 16th Conference of the European Society for Cognitive Psychology, Kraków, Poland, September 2–5.

Van Heuven, W. J., Dijkstra, T., & Grainger, J. (1998). Orthographic neighborhood effects in bilingual word recognition. Journal of Memory and Language, 39(3), 458–483.

Vitevitch, M. S. (2003). The influence of sublexical and lexical representations on the processing of spoken words in English. Clinical Linguistics & Phonetics, 17(6), 487–499.

Vitevitch, M. S., & Luce, P. A. (1999). Probabilistic phonotactics and neighborhood activation in spoken word recognition. Journal of Memory and Language, 40(3), 374–408.

Williams, J. T., & Newman, S. D. (2016a). Interlanguage dynamics and lexical networks in nonnative L2 signers of ASL: Cross-modal rhyme priming. Bilingualism: Language and Cognition, 19(3), 453–470. doi:10.1017/S136672891500019X.

Williams, J. T., & Newman, S. D. (2016b). Phonological substitution errors in L2 ASL sentence processing by hearing M2L2 learners. Second Language Research. doi:10.1177/0267658315626211.
