



Phonetically Driven Phonology: The Role of Optimality Theory and Inductive Grounding1

Bruce P. Hayes

UCLA

Final version: August 1997

Written for the proceedings volume of the 1996 Milwaukee Conference on Formalism and Functionalism in Linguistics

Abstract

Functionalist phonetic literature has shown how the phonologies of human languages are arranged to facilitate ease of articulation and perception. The explanatory force of phonological theory is greatly increased if it can directly access these research results. There are two formal mechanisms that together can facilitate the link-up of formal to functional work. As others have noted, Optimality Theory, with its emphasis on directly incorporating principles of markedness, can serve as part of the bridge. Another mechanism is proposed here: an algorithm for inductive grounding permits the language learner to access the knowledge gained from experience in articulation and perception, and form from it the appropriate set of formal phonological constraints.

1 I would like to thank Marco Baroni, Patricia Keating, Donca Steriade, and members of talk audiences at MIT, Arizona, Tilburg, Utrecht, Cornell, UC Santa Cruz, UCLA, and the Formalism/Functionalism conference for helpful input in the preparation of this paper.


Phonetically-Driven Phonology p. 2

1. Phonological Functionalism

The difference between formalist and functionalist approaches in linguistics has taken different forms in different areas. For phonology, and particularly for the study of fully-productive sound patterns, the functionalist approach has traditionally been phonetic in character. For some time, work in the phonetic literature, such as Ohala (1974, 1978, 1981, 1983), Ohala and Ohala (1993), Liljencrants and Lindblom (1972), Lindblom (1983, 1990), and Westbury and Keating (1986), has argued that the sound patterns of languages are effectively arranged to facilitate ease of articulation and distinctness of contrasting forms in perception. In this view, much of the patterning of phonology reflects principles of good design.2

In contemporary phonological theorizing, such a view has not been widely adopted. Phonology has been modeled as a formal system, set up to mirror the characteristic phonological behavior of languages. Occasionally, scholars have made a nod towards the phonetic sensibleness of a particular proposal. But on the whole, the divide between formal and functionalist approaches in phonology has been as deep as anywhere else in the study of language.

It would be pointless (albeit fun) to discuss reasons for this based on the sociology of the fields of phonetics and phonology. More pertinently, I will claim that part of the problem has been that phonological theory has not until recently advanced to the point where a serious coming to grips with phonetic functionalism would be workable.

2. Optimality Theory

The novel approach to linguistic theorizing known as Optimality Theory (Prince and Smolensky 1993) appears to offer the prospect of a major change in this situation. Here are some of the basic premises of the theory as I construe it.

First, phonological grammar is not arranged in the manner of Chomsky and Halle (1968), in essence as an assembly line converting underlying to surface representations in a series of

2 The last sentence defines what is meant here by “functionalism”. Unfortunately, this term has come to denote a particular viewpoint on a large number of other issues, for instance, the stability and integrity of the grammar as a cognitive system, the desirability of explicit formal analysis, the validity of intuited data, and so forth. It is quite possible to hold mainstream generativist views on all of these other issues (yes, the grammar is a stable, cohesive cognitive system; yes, formal analysis is crucial to research success; yes, intuitive judgments when carefully gathered are very useful; etc.) but be functionalist in the sense of believing that the formal system of grammar characteristically reflects principles of good design. This is more or less my own position.


steps. Instead, the phonology selects an output form from the set of logical possibilities. It makes its selection using a large set of constraints, which specify what is “good” about an output, in the following two ways:

(1) a. Phonotactics: “The output should have phonological property X.”
    b. Faithfulness: “The output should resemble the input in possessing property Y.”

Phonotactic constraints express properties of phonological markedness, which are typically uncontroversial. For example, they require that syllables be open, or that front vowels be unrounded, and so on. The Faithfulness constraints embody a detailed factorization of what it means for the output to resemble the input; they are fully satisfied when the output is identical to the input.

Constraints can conflict with each other. Often, it is impossible for the output to have the desired phonotactic properties and also be faithful to the input; or for two different phonotactic constraints to be satisfied simultaneously. Therefore, all constraints are prioritized; that is, ranked. Prioritization drives a specific winnowing process (not described here) that ultimately selects the output of the grammar from the set of logical possibilities by ruling out all but a single winner.3
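The winnowing process alluded to here can be made concrete with a small sketch. This is a toy illustration under standard Optimality-theoretic assumptions (candidates compared by their violation counts, from the highest-ranked constraint downward, i.e. lexicographic tuple comparison); the constraints, the input /pata/, and the candidate set below are hypothetical examples of mine, not the paper's.

```python
# Toy sketch of OT candidate selection: the winner is the candidate whose
# tuple of violation counts (ordered by constraint rank) is smallest
# under lexicographic comparison.

def evaluate(candidates, ranked_constraints):
    """Return the optimal candidate under the given constraint ranking."""
    return min(candidates,
               key=lambda cand: tuple(c(cand) for c in ranked_constraints))

# Hypothetical constraints over syllabified strings like "pa.ta":
def no_coda(cand):
    # Phonotactic constraint in the spirit of (1a): one violation
    # per closed syllable ("syllables should be open").
    return sum(1 for syl in cand.split(".") if syl[-1] not in "aeiou")

def faith(cand, _input="pata"):
    # Faithfulness constraint in the spirit of (1b): one violation
    # per segment changed, inserted, or deleted relative to the input.
    segments = cand.replace(".", "")
    return (sum(a != b for a, b in zip(_input, segments))
            + abs(len(_input) - len(segments)))

# With NoCoda ranked above Faith, the faithful open-syllable parse wins:
winner = evaluate(["pa.ta", "pat.a", "pa.da"], [no_coda, faith])
print(winner)  # -> pa.ta
```

The ranking matters exactly when constraints conflict: restricted to the imperfect candidates ["pat.a", "pa.da"], the ranking NoCoda over Faith selects the unfaithful but open-syllabled "pa.da", while the reverse ranking selects the faithful "pat.a".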

I will take the general line that Optimality Theory is a good thing. First, it shares the virtues of other formal theories: when well implemented, such theories provide falsifiability, so that the errors in an analysis can lead to improvement or replacement. Further, formal theories characteristically increase the pattern recognition capacity of the analyst. For example, it was only when the formal theory of moras was introduced (Hyman 1985) that it became clear that compensatory phonological processes always conserve mora count (see Hyman, and for elaboration Hayes 1989).4

Second, Optimality Theory has permitted solutions to problems that simply were not treatable in earlier theories. Examples are the metrical phonology of Guugu Yimidhirr (Kager, to appear), or the long-standing ordering paradoxes involving phonology and reduplication (McCarthy and Prince 1995).

3 For explicit presentation of Optimality Theory the reader is referred to Prince and Smolensky’s original work (1993) or to Archangeli and Langendoen’s (1997) text.

4 Here are two other examples. (a) The formal theory of intonational representation developed by Pierrehumbert (1980) led her to discover English intonational contours not previously noticed. (b) The theory of prosodic domains of Selkirk (1980) led Hayes and Lahiri (1991) to find close links between intonation and segmental phonology in Bengali that would not have otherwise been observed.


Most crucially, Optimality Theory has the advantage of allowing us to incorporate general principles of markedness into language-specific analyses. Previously, a formal phonology consisted of a set of somewhat arbitrary-looking rules. The analyst could only look at the rules “from the outside” and determine how they reflect general principles of markedness (or at best, supplement the rules with additional markedness principles, as in Chomsky and Halle (1968, Ch. 9), Schachter (1969), or Chen (1973)). Under Optimality Theory, the principles of markedness (stated explicitly and ranked) form the sole ingredients of the language-specific analysis. The mechanism of selection by ranked constraints turns out to be such an amazingly powerful device that it can do all the rest. Since rankings are the only arbitrary element in the system, the principled character of language-specific analyses is greatly increased. This is necessarily an argument by assertion, but I believe a fair comparison of the many phonological analyses of the same material in both frameworks would support it.5

3. What is a Principled Constraint?

The question of what makes a constraint “principled” is one that may be debated. The currently most popular answer, I think, relies on typological evidence: a principled constraint is one that “does work” in many languages, and does it in different ways.

But there is another answer to the question of what makes a constraint principled: a constraint can be justified on functional grounds. In the case of phonetic functionalism, a well-motivated phonological constraint would be one that either renders speech easier to articulate or renders contrasting forms easier to distinguish perceptually. From the functionalist point of view, such constraints are a priori plausible, under the reasonable hypothesis that language is a biological system that is designed to perform its job well and efficiently.

Optimality Theory thus presents a new and important opportunity to phonological theorists. Given that the theory thrives on principled constraints, and given that functionally motivated phonetic constraints are inherently principled, the clear route to take is to explore how much of phonology can be constructed on this basis. One might call such an approach “phonetically-driven Optimality-theoretic phonology.” A theory of this kind would help close the long-standing and regrettable gap between phonology and phonetics.

4. Research in Phonetically-Driven Optimality-Theoretic Phonology

The position just taken regarding phonetics and Optimality Theory is not original with me, but is inspired by ongoing research, much of it inspired by Donca Steriade, which attempts to make use of OT to produce phonetically-driven formal accounts of various phonological phenomena.

5 Much of the Optimality Theory analytical literature is currently posted on the World Wide Web at http://ruccs.rutgers.edu/roa.html, and may be downloaded.


For instance, Steriade (1993, 1997) considers the very basic question of segmental phonotactics in phonology: what segments are allowed to occur where? Her perspective is a novel one, taking the line that perception is the dominant factor. Roughly speaking, Steriade suggests that segments preferentially occur where they can best be heard. The crucial part is that many segments (for example, voiceless stops) are rendered audible largely or entirely by the contextual acoustic cues that they engender on neighboring segments through coarticulation. In such a situation, it is clearly to the advantage of particular languages to place strong restrictions on the phonological locations of such segments.

Following this approach, and incorporating a number of results from research in speech perception, Steriade is able to reconstruct the traditional typology of “segment licensing,” including what was previously imagined to be an across-the-board preference for consonants to occur in syllable onset position. She goes on to show that there in fact are areas where this putative preference fails as an account of segmental phonotactics: one example is the preference for retroflexes to occur postvocalically (in either onset or coda); preglottalized sonorants work similarly. As Steriade shows, these otherwise-baffling cases have specific explanations, based on the peculiar acoustics of the segments involved. She then makes use of Optimality Theory to develop explicit formal analyses of the relevant cases.

Phonetically-driven approaches similar to Steriade’s have led to progress in the understanding of various other areas of phonology: place assimilation (Jun 1995a,b; Myers, this volume), vowel harmony (Archangeli and Pulleyblank 1994, Kaun 1995a,b), vowel-consonant interactions (Flemming 1995), syllable weight (Gordon 1997), laryngeal features for vowels (Silverman 1995), non-local assimilation (Gafos 1996), and lenition (Kirchner, in progress).6

5. The Hardest Part

What is crucial here (and recognized in earlier work) is that a research result in phonetics is not the same thing as a phonological constraint. To go from one to the other is to bridge a large gap. Indeed, the situation facing phonetically-driven Optimality-theoretic phonology is a rather odd one. In many cases, the phonetic research that explains the phonological pattern has been done very well and is quite convincing; it is only the question of how to incorporate it into a formal phonology that is difficult. An appropriate motto for the research program described here is: we seek to go beyond mere explanation to achieve actual description.

6 A number of these areas are also addressed in the related “Functional Phonology” of Boersma (1997).


In what follows, I will propose a particular way to attain phonetically-driven phonological description.7 Since I presuppose Optimality Theory, what is crucially needed is a means to obtain phonetically-motivated constraints.

In any functionalist approach to linguistics, an important question to consider is: who is in charge? That is, short of divine intervention, languages cannot become functionally well designed by themselves; there has to be some mechanism responsible. In the view I will adopt, phonology is claimed to be phonetically natural because the constraints it includes are (at least partially) the product of grammar design, carried out intelligently (that is, unconsciously, but with an intelligent algorithm) by language learners.

Before turning to this design process, I will first emphasize its most important aspect: there is a considerable gap between the raw patterns of phonetics and phonological constraints. Once the character of this divergence is clear, then the proposed nature of the design process will make more sense.

6. Why Constraints Do Not “Emerge” From The Phonetics

There are a number of reasons that suggest that phonetic patterns cannot serve as a direct, unmediated basis for phonology. (For more discussion of this issue, see Anderson (1981) and Keating (1985).)

6.1 Variation and Gradience

First, phonetics involves gradient and variable phenomena, whereas phonology is characteristically categorial and far less variable. Here is an example: Hayes and Stivers (in progress) set out to explain phonetically a widespread pattern whereby languages require postnasal obstruents to be voiced. The particular mechanism we propose is reviewed below; for now it suffices that it appears to be verified by quantitative aerodynamic modeling and should be applicable in any language in which obstruents may follow nasals.

Since the mechanism posited is automatic, we might expect to find it operating even in languages like English that do not have postnasal voicing as a phonological process. Testing this prediction, Hayes and Stivers examined the amount of closure voicing (in milliseconds) of English /p/ in the environments / m ___ versus / r ___. Sure enough, for all five subjects in the experiment, there was significantly more /p/ voicing after /m/ than after /r/, as our mechanism predicted. But the effect was purely quantitative: except in the most rapid and casual speech

7 Since this is only one approach among many, the reader is urged to compare it with Steriade’s work, as well as Flemming (1995), Boersma (1997), and Kirchner (in progress).


styles, our speakers fully succeeded in maintaining the phonemic contrast of /p/ with /b/ (which we also examined) in postnasal position. The phonetic mechanism simply produces a quantitative distribution of voicing that is skewed toward voicedness after nasals. Moreover, the distribution of values we observed varied greatly: the amount of voicing we found in /mp/ ranged from 13% up to (in a few cases) over 60% of the closure duration of the /p/.

In contrast, there are other languages in which the postnasal voicing effect is truly phonological. For example, in Ecuadorian Quechua (Orr 1962), at suffix boundaries, it is phonologically illegal for a voiceless stop to follow a nasal, and voiced stops are substituted for voiceless; thus sača-pi ‘jungle-loc.’ but atam-bi ‘frog-loc.’ For suffixes, there is no contrast of

voiced versus voiceless in postnasal position. Clearly, English differs from Quechua in having “merely phonetic” postnasal voicing, as opposed to true phonological postnasal voicing.8 We might say that Ecuadorian Quechua follows a categorial strategy: in the suffix context it simply doesn’t even try to produce the (phonetically difficult) sequence nasal + voiceless obstruent. English follows a “bend but don’t break” strategy, allowing a highly variable increase in degree of voicing after nasals, but nevertheless maintaining a contrast.

I would claim, then, that in English we see postnasal voicing “in the raw,” as a true phonetic effect, whereas in Ecuadorian Quechua the phonology treats it as a categorial phenomenon. The Quechua case is what needs additional treatment: it is a kind of leap from simply allowing a phonetic effect to influence the quantitative outcomes to arranging the phonology so that, in the relevant context, an entire contrast is wiped out.9

6.2 Symmetry

Let us consider a second argument. I claim that phonetics is asymmetrical, whereas phonology is usually symmetrical. Since the phonetic difficulty of articulation and perception follows from the interaction of complex physical and perceptual systems, we cannot in the general case expect the regions of phonetic space characterized by a particular difficulty level to correspond to phonological categories.

To make this clear, consider a particular case, involving the difficulty of producing voiced and voiceless stops. The basic phonetics (here, aerodynamics) has been studied by Ohala

8 The difference is clearly reminiscent of the notion of “phonologization” discussed in Hyman (1976) and earlier, though Hyman’s main focus is on historical contrast redistributions such as those found in tonogenesis.

9 Actually, this paragraph slights the complexity of phonetic implementation. Following Pierrehumbert (1980) and Keating (1985), I assume that there is also a phonetic component in the grammar, which computes physical outcomes from surface phonological representations. It, too, I think, is Optimality-theoretic and makes use of inductive grounding (below). I cannot address these issues here for lack of space.


(1983) and by Westbury and Keating (1986). Roughly, voicing is possible whenever a sufficient drop in air pressure occurs across the glottis. In a stop, this is a delicate matter for the speaker to arrange, since free escape of the oral air is impeded. Stop voicing is influenced by quite a few different factors, of which just a few are reviewed here.

(a) Place of articulation. In a “fronter” place like labial, a large, soft vocal tract wall surface surrounds the trapped air in the mouth. During closure, this surface retracts under increasing air pressure, so that more incoming air is accommodated. This helps maintain the transglottal pressure drop. Since there is more yielding wall surface in labials (and more generally, at fronter places of articulation), we predict that the voiced state should be relatively easier for fronter places. Further, since the yielding-wall effect actually makes it harder to turn off voicing, we predict that voicelessness should be harder for fronter places.

(b) Closure duration. The longer a stop is held, the harder it will be to accommodate the continuing transglottal flow, and thus maintain voicing. Thus, voicelessness should be favored for geminates and for stops in post-obstruent position. (The latter case assumes that, as is usual, the articulation of the stop and the preceding obstruent are temporally overlapped, so no air escape can occur between them.)

(c) Postnasal position. As just noted, there are phonetic reasons why voicing of stops should be considerably favored when a nasal consonant immediately precedes the stop.

(d) Phrasal position. Characteristically, voicing is harder to maintain in utterance-initial and utterance-final position, since the subglottal pressure that drives voicing tends to be lower in these positions.

As Ohala (1983) and others have made clear, these phonetic factors are abundantly reflected in phonological patterning. (a) Gaps in stop inventories that have both voiced and voiceless series typically occur at locations where the size of the oral chamber makes voicing or voicelessness difficult; thus at *[p] or *[g], as documented by Ferguson (1975), Locke (1983), and several sources cited by Ohala (p. 195). (b) Clusters in which a voiced obstruent follows another obstruent are also avoided, for instance in Latin stems (Devine and Stephens 1977), or in German colloquial speech (Mangold 1962: 45). Geminate obstruents are a similar case: they likewise are often required to be voiceless, as in Japanese (Vance 1987: 42), West Greenlandic (Rischel 1974), or !Xõõ (Traill 1981: 165). (c) Languages very frequently ban voiceless stops after nasals, with varying overt phonological effects depending on how the constraints are ranked (Pater 1995, 1996; Hayes and Stivers, in progress). (d) Voicing is favored in medial position, and disfavored in initial and final position, following the subglottal pressure contour (Westbury and Keating 1986).10

10 Interestingly, Westbury and Keating’s (1986) modeling work found no articulatory support for the large typological difference between final devoicing (ubiquitous) and initial devoicing (somewhat unusual; see Westbury and Keating for cases). Recent work by Steriade (in progress)


Plainly, the phonetics can serve here as a rich source of phonological explanation, since the typology matches the phonetic mechanisms so well. However, if we try to do this in a naive, direct way, difficulties immediately set in.

Suppose that we concoct a landscape of stop voicing difficulty (2) which encodes values for difficulty (zero = maximal ease) on an arbitrary scale for a set of phonological configurations. For simplicity, we will consider only a subset of the effects mentioned above.

(2) Landscape of Difficulty for Voiced Stops: Three Places, Four Environments

                           b     d     g
    [-son] ___            43    50    52
    # ___                 23    27    35
    [+son, -nas] ___      10    20    30
    [+nas] ___             0     0     0        (contour line: 25)

The chart in (2) was constructed using a software aerodynamic vocal tract model implemented at UCLA (Keating 1984). The basis of the chart is explained below in section 10; for now, it may be considered simply a listing of “difficulty units” for voicing in various phonological configurations. It can be seen that the model has generated patterns that are qualitatively correct: the further back in the mouth a place of articulation is, the harder it is to maintain voicing. Moreover, the rows of the chart reflect the greater difficulty of maintaining voicing after obstruents and initially, as well as the greater ease after nasals.

What is crucial about the chart is that it reflects the trading relationships that are always found in the physical system for voicing. One cannot say, for example, that velars are always harder to voice, because velars in certain positions are easier to voice than labials in others. Similarly, the environments / # ___ versus / [+son, –nas] ___ do not define a consistent cutoff in voicing difficulty, since [g] in the environment / [+son, –nas] ___ is harder than [b,d] in the environment / # ___.

The dotted line on the chart represents a particular “contour line” for phonetic difficulty, analogous to a contour line for altitude on a physical map. A language that truly “wanted” to

that relates the phonology of voicing to its perceptual cues at consonant releases would appear to fill this explanatory gap.


behave in a phonetically rational way might ban all phonological configurations that exceeded the contour line, as in (3a). Translating this particular contour line into descriptive phonological language, we have the formulation of (3b):

(3) A Hypothetical Phonological Constraint

a. *any voiced stop that characteristically requires more than 25 units of effort
b. *post-obstruent voiced stops,
   *[d,g] in initial position,
   *[g] after oral sonorants

Note that [g] is permitted by (3), but only postnasally.
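The step from the landscape in (2) to the ban in (3) is mechanical, and can be sketched as follows. The difficulty units below are transcribed directly from chart (2); the threshold test is only an illustration of how a 25-unit contour carves the chart into the heterogeneous list of (3b), not a model of any actual learning procedure.

```python
# Difficulty units for voiced stops, transcribed from chart (2):
# keys are environments, values map place of articulation to units.
difficulty = {
    "[-son] ___":      {"b": 43, "d": 50, "g": 52},
    "# ___":           {"b": 23, "d": 27, "g": 35},
    "[+son,-nas] ___": {"b": 10, "d": 20, "g": 30},
    "[+nas] ___":      {"b": 0,  "d": 0,  "g": 0},
}

CONTOUR = 25  # the dotted contour line in (2)

# Constraint (3a): ban every configuration above the contour line.
banned = {(env, stop)
          for env, row in difficulty.items()
          for stop, units in row.items()
          if units > CONTOUR}

for env, stop in sorted(banned):
    print(f"*[{stop}] / {env}")
# The result is exactly the list in (3b): all post-obstruent voiced
# stops, initial [d,g], and [g] after oral sonorants; [g] survives
# only postnasally, as noted in the text.
```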

I would contend that a constraint like (3) (however formulated) is relatively unlikely to occur in a real phonology. What occurs instead are constraints that are likewise phonetically sensible, but which possess formal symmetry. Here are some real-world examples, with the languages they are taken from:

(4) a. *Voiced obstruent word-finally (Polish)
    b. *Voiced obstruent after another obstruent (Latin)
    c. *Voiced obstruent geminate (Japanese)
    d. *Voiced velar obstruents (Dutch)

These constraints ban symmetrical regions of phonological space, not regions bounded by contour lines of phonetic difficulty. Nevertheless, they are phonetically sensible in a certain way: in the aggregate, the configurations that they forbid are more difficult aerodynamically than the configurations that they allow. Thus constraints like (5) would be quite unexpected:

(5) a. *Voiceless obstruent word-finally (compare (4a))
    b. *Voiceless obstruent after another obstruent (compare (4b))
    c. *Voiceless obstruent geminate (compare (4c))
    d. *Voiceless velar obstruents (compare (4d))

To generalize: I believe that constraints are typically natural, in that the set of cases that they ban is phonetically harder than the complement set. But the “boundary lines” that divide the prohibited cases from the legal ones are characteristically statable in rather simple terms, with a small logical conjunction of feature predicates. In other words, phonological constraints tend to ban phonetic difficulty in simple, formally symmetrical ways (cf. Kiparsky 1995: 659). The constraint (3) is very sensible phonetically, but apparently too logically complex to appear in natural languages (or, at least, in more than a very few of them).

A further demonstration makes this point in a different way. Consider first that Egyptian Arabic (Harrell et al. 1963) bans the voiceless bilabial stop [p]. This is both phonetically


sensible and empirically ordinary, as noted above. What is very striking about the ban, however, is that it extends even to geminates: Cairene has words like [yikubb] ‘he spills’, but no analogous words like *[yikupp].11 As noted earlier, voiced obstruent geminates are cross-linguistically rare, for good phonetic reasons.

A near-minimal comparison with Arabic is Japanese, which (some unassimilated borrowings aside) is one of the languages that bans voiced obstruent geminates. Since Japanese has [pp] but not [bb], there is an interesting contradiction: in Arabic [bb] is well formed and [pp] is ill formed, whereas in Japanese it is just the opposite.

The contradiction is resolved in the context of the formal phonological constraints that are responsible. Japanese allows [pp], and forbids [bb], as part of a general ban on voiced obstruent geminates. Such a ban is phonetically sensible, because obstruent voicing is hard to maintain over long closures. Arabic allows [bb], and bans [pp], as part of a phonetically sensible ban on voiceless labial stops. The latter ban is phonetically sensible because of the large expanding oral chamber wall surface in labials. The opposite effects thus result from formally general phonological constraints, each with a phonetically natural core.
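The opposite [pp]/[bb] judgments can be checked with a toy formalization of the two general constraints just described. The function names, the string-based geminate test, and the segment inventories are my own expository devices, not the paper's notation.

```python
# Japanese-style constraint: *voiced obstruent geminate (bans e.g. [bb]).
def no_voiced_geminate(form):
    voiced_obstruents = "bdgz"
    # A "geminate" here is simply two identical adjacent segments.
    return any(a == b and a in voiced_obstruents
               for a, b in zip(form, form[1:]))

# Arabic-style constraint: *voiceless bilabial stop [p], geminate or not.
def no_p(form):
    return "p" in form

def allowed(form, grammar):
    """A form is well formed if it violates no constraint of the grammar."""
    return not any(constraint(form) for constraint in grammar)

japanese, arabic = [no_voiced_geminate], [no_p]

# Arabic: [yikubb] 'he spills' is fine, *[yikupp] is out.
print(allowed("yikubb", arabic), allowed("yikupp", arabic))    # True False
# Japanese: [pp] is fine, *[bb] is out; the mirror image.
print(allowed("kappa", japanese), allowed("kabba", japanese))  # True False
```

Each language's judgments thus follow from a single formally general, phonetically grounded ban; the languages differ only in which ban they enforce.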

The tentative conclusion here is that the influence of phonetics in phonology is not direct, but is mediated by structural constraints that are under some pressure toward formal symmetry. A phonology that was directly driven by phonetic naturalness would, I think, be likely to miss this point.

The gap between the phonetic difficulty patterns and the phonology is thus still there, waiting to be bridged. Clearly, languages are well designed from a phonetic point of view. What is needed, I believe, is a way of accounting for this design that also allows principles of structural symmetry to play a role.

7. A Scheme for Phonological Grammar Design

Grammars could in principle be designed at two levels. Within the species as a whole, it is often held there is a Universal Grammar, invariant among non-pathological individuals, which determines much of the form of possible languages. Another sense in which grammar could be designed, outlined by Kiparsky and Menn (1977: 58), is at the level of the individual, who is engaged from infancy on in the process of constructing a grammar, one that will ultimately generate the ambient language or something close to it. Could the language learner be a designer of grammars? If so, how might she go about it?

11 It is fairly safe to infer that this is not just idealized phonemic transcription on Harrell et al.’s part, since elsewhere they do record allophonic [p] resulting from a process of regressive voicing assimilation in obstruents.


From the discussion above, it would seem plausible that grammatical design within phonology aims at a compromise between formal symmetry and accurate reflection of phonetic difficulty. What follows is a tentative attempt to specify what phonological design could be like. It is very far from being confirmed, but I think it important at least to get started by laying out a concrete proposal.

The task of phonological grammar design, under Optimality Theory, has two parts: gaining access to constraints (here, by inventing them), and forming a grammar by ranking the constraints. The strategy taken is to suppose that constraints are invented in great profusion, but trimmed back by the constraint ranking algorithm.

The particular process whereby constraints are invented I will call inductive grounding. The term “grounded,” which describes constraints that have a phonetic basis, was introduced by Archangeli and Pulleyblank (1994). “Inductive” means that the constraints are learned by processing input data.

8. Inductive Grounding I: Evaluating Constraint Effectiveness

The language learner has, in principle, an excellent vantage point for learning phonetically grounded constraints. Unlike any experimenter observing her externally, the child is actually operating her own production and perception apparatus, and plausibly would have direct access to the degree of difficulty of articulations and to the perceptual confusability of different acoustic signals.

Beyond the capacity to judge phonetic difficulty from experience, a language learner would also require the ability to generalize across tokens, creating a phonetic map of the range of possible articulations and acoustic forms.

Considering for the moment only articulation, I will suppose that the language learner is able to assess the difficulty of particular phonological configurations, using measures such as the maximum articulatory force needed to execute the configuration, or perhaps simple energy expenditure.12 Further, we must suppose that the learner is able to generalize from experience, arriving at a measure of the characteristic difficulty of particular phonological configurations, which would abstract away from the variation found at various speaking rates and degrees of casualness, as well as the variable perceptual clarity that different degrees of articulatory precision will produce. Pursuing such a course, the learner could in principle arrive at a

12 As Katherine Demuth has pointed out to me, one should probably also consider motor-planning difficulty; for example, the difficulty very young children have in employing more than one place of articulation per word. Since such difficulty is at present impossible to estimate, I must stick to physical difficulty for now.


phonetic map of the space of articulatory difficulty.13 A tentative example of a phonetic map is given below under (13).

Given a phonetic map drawn from experience, a language learner could in principle use it to construct phonetically grounded constraints; hence the term “inductive grounding.” The inductive grounding algorithm I will suggest here supposes the following.

First, I assume constraints are constructed profusely, as arbitrary well-formed combinations of the primitive elements of phonological theory; thus, with just the features [nasal] and [voice], we would get *[+nasal][–voice], *[+nasal][+voice], *[–nasal, –voice], *[+nasal], and so on. In principle, this involves some risk, since the number of constraints to be considered grows exponentially with the number of formal elements included in their structural descriptions. However, if, as suggested above, constraints are under some pressure toward formal simplicity, it is likely that the size of the search space can be kept under control.
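To make the combinatorics concrete, the profuse generation step can be sketched in a few lines of Python. The encoding is hypothetical (a constraint is simply a string over valued features, and the parameter `max_segments` caps the length of a structural description), but it illustrates how fast the candidate set grows:

```python
from itertools import combinations, product

def generate_constraints(features, max_segments=2):
    """Enumerate candidate markedness constraints as arbitrary
    well-formed combinations of valued features (hypothetical
    encoding: one bundle of valued features per segment)."""
    # All nonempty feature bundles, e.g. ('+nasal',), ('-nasal', '-voice')
    bundles = []
    for r in range(1, len(features) + 1):
        for subset in combinations(features, r):
            for values in product("+-", repeat=r):
                bundles.append(tuple(v + f for v, f in zip(values, subset)))
    # Constraints are sequences of 1..max_segments bundles:
    # *[+nasal][-voice], *[+nasal], *[-nasal, -voice], etc.
    constraints = []
    for n in range(1, max_segments + 1):
        for seq in product(bundles, repeat=n):
            constraints.append(
                "*" + "".join("[" + ", ".join(b) + "]" for b in seq))
    return constraints

cons = generate_constraints(["nasal", "voice"])
# With two binary features there are 8 one-segment bundles, hence
# 8 single-segment constraints and 8 * 8 = 64 two-segment sequences,
# for 72 candidates in all: the exponential growth noted in the text.
```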

Second, candidate constraints are assessed for their degree of grounding, accessing the phonetic map with a procedure I will now describe.

A grounded constraint is one that is phonetically sensible; that is, it bans things that are phonetically hard, and allows things that are phonetically easy. Taking a given candidate phonological constraint C, and any two entries E1 and E2 in the phonetic map, there are four logical possibilities:

(6)	a.	Both E1 and E2 violate C.
	b.	Both E1 and E2 obey C.
	c.	E1 violates C and E2 obeys C.
	d.	E1 obeys C and E2 violates C.

We will ignore all pairs of types (a) and (b) (same-outcome) as irrelevant to the assessment of C. Among the remaining possibilities, we can distinguish cases where the constraint makes an error from those in which it makes a correct prediction.

(7) a. Correct predictions

		E1 violates C and E2 obeys C; E1 is harder than E2.
		E1 obeys C and E2 violates C; E1 is easier than E2.

13 Obviously, this task itself involves quite non-trivial learning. An encouraging reference from this viewpoint is Kelly and Martin (1994), who provide a fascinating survey of the ability of humans and other species to form statistical generalizations and to estimate relative magnitudes from experience.


b. Errors

		E1 obeys C and E2 violates C; E1 is harder than E2.
		E1 violates C and E2 obeys C; E1 is easier than E2.

Since the goal of a constraint is to exclude hard things and include easy things, we can establish a simple metric of constraint effectiveness simply by examining all possible pairs {E1, E2} drawn from the phonetic map. The definition below presumes a particular phonological structural description defining a constraint, and a phonetic map against which the constraint may be tested:

(8) Constraint effectiveness

Effectiveness = Correct predictions / (Correct predictions + Errors)

On this scale, “perfect” constraints receive a value of 1, since they always ban things that are relatively harder, and never things that are relatively easier. Useless constraints, which ban things in an arbitrary way with no connection to their phonetic difficulty, receive a value of 0.5; and utterly perverse constraints, which ban only relatively easy things, get a value of 0. Clearly, the language learner should seek constraints with high effectiveness values.
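The metric in (8) is easy to state procedurally. The sketch below assumes a toy encoding in which the phonetic map is a dictionary from entries to difficulty scores and a constraint is identified with the set of entries it bans; same-outcome pairs (types (6a) and (6b)) are skipped, and ties in difficulty are counted for neither side:

```python
def effectiveness(banned, difficulty):
    """Effectiveness metric of (8): correct / (correct + errors),
    computed over all pairs of map entries that the constraint
    treats differently.  `banned` is the set of entries the
    constraint excludes; `difficulty` maps entries to scores."""
    correct = errors = 0
    entries = list(difficulty)
    for i, e1 in enumerate(entries):
        for e2 in entries[i + 1:]:
            if (e1 in banned) == (e2 in banned):
                continue  # same-outcome pairs (6a, 6b) are ignored
            bad, good = (e1, e2) if e1 in banned else (e2, e1)
            if difficulty[bad] > difficulty[good]:
                correct += 1   # the constraint bans the harder member
            elif difficulty[bad] < difficulty[good]:
                errors += 1    # the constraint bans the easier member
    return correct / (correct + errors)

# Toy map: banning the two hardest entries is "perfect" (value 1),
# banning the two easiest is perverse (value 0).
toy = {"A": 9, "B": 7, "C": 2, "D": 1}
effectiveness({"A", "B"}, toy)   # -> 1.0
effectiveness({"C", "D"}, toy)   # -> 0.0
```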

It is more complicated to define constraint effectiveness for perceptual distinctness. Flemming (1995) has argued persuasively that perceptual distinctness can only be defined syntagmatically in perceptual space: for instance, [ɨ] is a fine vowel, indeed the preferred high vowel in a vertical vowel system such as Marshallese, where it is the only high vowel (Choi 1992). But where [i] and [u] occur as phonemes, as in most languages, [ɨ] is a poor vowel, due to its acoustic proximity to (thus, confusability with) [i] and [u]. Assuming the correctness of Flemming’s position, we must evaluate not individual entries in the phonetic map, but pairs of entries. And since constraint effectiveness is determined by comparing cases that a constraint treats differently, we must deal with pairs of pairs. In various cases I have explored, this procedure leads to coherent results, but as there are further complications, I will consider only articulation here, with the intent of dealing with perception elsewhere.

9. Inductive Grounding II: Selecting the Grounded Constraints

Merely defining constraint effectiveness does not provide an explicit definition of a grounded constraint. If we only allowed constraints that showed a maximally good fit to the phonetic map (effectiveness value 1), then only a few simple constraints would be possible, and most of the permitted constraints would be very complex, like the “contour line constraint” in (3) above. This would be wrong on both counts. First, my judgment, based on experience in phonological typology, is that there are many constraints, in fact dismayingly many, unless we come up with a reasonable source for them. Thus, we want the inductive grounding algorithm to generate a very rich (but thoroughly principled) constraint set. Second, as already argued, we want to keep constraints from being heavily “tailored” to fit the phonetic pattern. Real constraints seldom achieve such a perfect fit; rather, they deviate in the direction of structural simplicity.

A simple way to accomplish this deviation, as well as to provide a rich constraint set, is to rely on the notion of local maximum; in particular, local maxima of constraint effectiveness. Typically, local maxima are recognized as difficult problems for language learners, preventing the learner from arriving at the correct final state. A complex, multiply dimensioned pattern typically has many local maxima, but (definitionally) only one global one. But for our purposes, a local maximum is an excellent thing, because it permits a large number of constraints to emerge from a given phonetic map.

To make the idea explicit, here are some definitions:

(9)	Constraint space is the complete (infinite) set of possible constraints. It is generated by locating all legal combinations of the primitive formal elements of a particular phonological theory.

(10)	Two constraints are neighbors in constraint space if the structural description of one may be obtained from that of the other by a single primitive formal substitution (switching a feature value; addition or loss of a feature or association line, etc.; the exact set of substitutions will depend on the phonological theory employed).

(11)	Constraint C1 is said to be less complex than constraint C2 iff the structural description of C1 is properly included in the structural description of C2 (cf. Koutsoudas et al. 1974: 8-9).

Using these definitions, we can now state an explicit characterization of phonetic grounding:

(12) Defn.: grounded

	Given a phonological constraint C and a phonetic map M, C is said to be grounded with respect to M if the phonetic effectiveness of C is greater than that of all neighbors of C of equal or lesser complexity.

Definition (12) uses the notion of local maximum, by requiring that C only exceed its neighbors in effectiveness. But (12) also goes beyond local maxima in a crucial sense: the neighbors that one must consider are only neighbors of equal or lesser complexity. It is this bias that permits the system to output relatively simple constraints even when their match to the phonetic map is imperfect.


The definition of phonetic grounding in (12) is obviously quite speculative, but I would claim the following virtues for it: (a) Assuming that a reasonably accurate phonetic map can be constructed, it specifies precisely which constraints are grounded with respect to that map, thus satisfying the requirement of explicitness. (b) The formally simple constraints that a given map yields are not just a few phonetically-perfect ones, but a large number, each a local effectiveness maximum within the domain of equally or less-complex constraints. (c) Constraints are able to sacrifice perfect phonetic accuracy for formal symmetry, since the competitors with which they are compared are only those of equal or lesser complexity.

10. An Application of Inductive Grounding

Here is a worked-out example. To begin, we need a plausible phonetic map, for which I propose (13):

(13) A Phonetic Difficulty Map for Six Stops in Four Environments

	                     p      t      k      b      d      g
	[–son] ___           7      0      0     43     50     52
	# ___               10      0      0     23     27     35
	[+son,–nas] ___     45     28     15     10     20     30
	[+nas] ___         155    135    107      0      0      0

I obtained this map by using a software aerodynamic vocal tract model. This model was developed originally by Rothenberg (1968) as an electrical circuit model, and is currently implemented in a software version in the UCLA Phonetics Laboratory. This version (or its close ancestors) is described in Westbury (1983), Keating (1984), and Westbury and Keating (1986). Roughly, the model takes as input specific quantitative values for a large set of articulations, and outputs the consequences of these articulations for voicing, that is, the particular ranges of milliseconds during which the vocal folds are vibrating. The units in chart (13) represent articulatory deviations from a posited maximally-easy average vocal fold opening of 175 microns; these deviations are in the positive direction for voiceless segments (since glottal abduction inhibits voicing) and negative for voiced (since glottal adduction encourages it).

I used the model in an effort to give plausible quantitative support to the scheme to be followed here. However, it should be emphasized that obtaining reasonable estimates of articulatory difficulty from the model requires one to make a large number of relatively arbitrary assumptions, reviewed in the footnote below.14 What makes the procedure defensible is that the outcomes that it produces are qualitatively reasonable: examining the map, the reader will find that all the relevant phonetic tendencies described above in section 6.2 are reflected quantitatively in the map. Thus, voiced stops are most difficult after an obstruent, somewhat easier in initial position, easier still after sonorants, and easiest postnasally. The reverse pattern holds for voiceless stops. Further, for any given environment, stops are easier to produce as voiced (and harder as voiceless) when they are in fronter places of articulation.

I will now derive a number of phonological constraints from the phonetic map of (13) by means of inductive grounding. The chart in (14) lists some of the work that must be done. The first column gives what I take to be a fairly substantial list of the most plausible constraints (given what the chart is suitable for testing), along with all of their simpler neighbors. I have imposed a relatively arbitrary limit of formal complexity on this candidate set, under the assumption that language learners either cannot or will not posit extremely complex constraints. The second column gives the phonetic effectiveness value for the candidate constraints, calculated by the method laid out in (9)-(12) and exemplified below.15 Finally, the third column lists all the neighbor constraints for each main entry that are equally or more simple, taking the assumption that these neighbors are obtained by either a feature value switch or by deletion of single elements from the structural description.

(14)	Constraint	Effectiveness	Neighbors

	a.	*[+nasal][+voice]	0.000	*[+nasal][–voice], *[–nasal][+voice], *[+voice], *[+nasal]
	b.	*[+nasal][–voice]	1.000	*[+nasal][+voice], *[–nasal][–voice], *[–voice], *[+nasal]
	c.	*[–nasal][+voice]	0.701	*[–nasal][–voice], *[+nasal][+voice], *[+voice], *[–nasal]
	d.	*[–nasal][–voice]	0.357	*[–nasal][+voice], *[+nasal][–voice], *[–voice], *[–nasal]
	e.	*[+son][+voice]	0.500	*[+son][–voice], *[–son][+voice], *[+voice], *[+son]
	f.	*[+son][–voice]	0.861	*[+son][+voice], *[–son][–voice], *[–voice], *[+son]
	g.	*[–son][+voice]	0.841	*[–son][–voice], *[+son][+voice], *[+voice], *[–son]
	h.	*[–son][–voice]	0.094	*[–son][+voice], *[+son][–voice], *[–voice], *[–son]
	i.	*[LAB, +voice]	0.425	*[LAB, –voice], *[COR, +voice], *[DORS, +voice], *[LAB], *[+voice]
	j.	*[LAB, –voice]	0.633	*[LAB, +voice], *[COR, –voice], *[DORS, –voice], *[LAB], *[–voice]
	k.	*[COR, +voice]	0.500	*[COR, –voice], *[LAB, +voice], *[DORS, +voice], *[COR], *[+voice]
	l.	*[COR, –voice]	0.443	*[COR, +voice], *[LAB, –voice], *[DORS, –voice], *[COR], *[–voice]
	m.	*[DORS, +voice]	0.608	*[DORS, –voice], *[LAB, +voice], *[COR, +voice], *[DORS], *[+voice]
	n.	*[DORS, –voice]	0.371	*[DORS, +voice], *[LAB, –voice], *[COR, –voice], *[DORS], *[–voice]
	o.	*[+voice] unless LAB	0.568	*[–voice] unless LAB, *[+voice] unless COR, *[+voice] unless DORS, *[ ] unless LAB, *[+voice]
	p.	*[–voice] unless LAB	0.388	*[+voice] unless LAB, *[–voice] unless COR, *[–voice] unless DORS, *[ ] unless LAB, *[–voice]
	q.	*[+voice] unless COR	0.521	*[–voice] unless COR, *[+voice] unless LAB, *[+voice] unless DORS, *[ ] unless COR, *[+voice]
	r.	*[–voice] unless COR	0.513	*[+voice] unless COR, *[–voice] unless LAB, *[–voice] unless DORS, *[ ] unless COR, *[–voice]
	s.	*[+voice] unless DORS	0.453	*[–voice] unless DORS, *[+voice] unless LAB, *[+voice] unless COR, *[ ] unless DORS, *[+voice]
	t.	*[–voice] unless DORS	0.556	*[+voice] unless DORS, *[–voice] unless LAB, *[–voice] unless COR, *[ ] unless DORS, *[–voice]
	u.	*[LAB]	0.541	*[COR], *[DORS]
	v.	*[COR]	0.466	*[LAB], *[DORS]
	w.	*[DORS]	0.491	*[LAB], *[COR]
	x.	*[ ] unless LAB	0.459	*[ ] unless COR, *[ ] unless DORS
	y.	*[ ] unless COR	0.534	*[ ] unless LAB, *[ ] unless DORS
	z.	*[ ] unless DORS	0.509	*[ ] unless LAB, *[ ] unless COR
	aa.	*[+voice]	0.519	*[–voice]
	bb.	*[–voice]	0.481	*[+voice]
	cc.	*[+nasal]	(undetermined)	*[–nasal]
	dd.	*[–nasal]	(undetermined)	*[+nasal]
	ee.	*[+son]	(undetermined)	*[–son]
	ff.	*[–son]	(undetermined)	*[+son]

14 (a) In real life, numerous articulations other than glottal adduction influence voicing (Westbury 1979, 1983); I have used glottal adduction alone, despite the lack of realism, to reduce phonetic difficulty to a single physical scale. To permit a uniform criterion of perceptual adequacy, the right-side environment for all stops was assumed to be prevocalic, which of course adds another caveat to the results. (b) Inputs to the aerodynamic model were as in Keating (1984), modified for the postnasal environment as in Hayes and Stivers (1996). (c) The criterion for adequate perceptual voicelessness was that the release of the stop should be voiceless and that there should be at least a 50 msec voiceless interval (half of the stop’s assumed 100 msec closure duration). The criterion for perceptual voicing was that the release of the stop should be voiced, and at least half of the stop closure should be voiced. Preceding obstruents and nasals were assumed to overlap with the target stop, so they added only 50 msec to the total consonant closure. (d) Since I had no basis for assessing what the true maximally-easy vocal fold opening is, I was forced (for this one parameter) to “let the theory decide,” picking the value of 175 as the one that best matched observed phonological typology.

15 Note that for some constraints, the effectiveness value cannot be calculated. When a constraint excludes or permits every entry in the map, the formula for effectiveness in (8) will have a zero denominator. The only constraints for which this arose here were constraints included just because they were neighbors of other constraints.

Here is an example of how effectiveness was computed for individual constraints. The constraint *[LAB, –voice] bans [p]; this ban is phonetically natural (for reasons already given) and would thus be expected to have a reasonably high effectiveness value. I repeat the phonetic map below, this time with letters a-x, permitting reference to the entries:

(15)	                     p       t       k       b       d       g

	[–son] ___        a: 7    b: 0    c: 0    d: 43   e: 50   f: 52
	# ___             g: 10   h: 0    i: 0    j: 23   k: 27   l: 35
	[+son,–nas] ___   m: 45   n: 28   o: 15   p: 10   q: 20   r: 30
	[+nas] ___        s: 155  t: 135  u: 107  v: 0    w: 0    x: 0

*[LAB, –voice] bans the [p] column of the map, that is, the entries a, g, m, and s. If it is to be effective, then pairwise comparisons between banned cells and unbanned ones should predominantly come out with the banned cells being more difficult. Here is the outcome; “>” means “is harder than”:

(16)	a. Correct Predictions: 50	b. Incorrect Predictions: 29

	a > b, c, h, i, v-x	a < d-f, j-l, n-r, t, u
	g > b, c, h, i, v-x	g < d-f, j-l, n, o, q, r, t, u
	m > b, c, d, h-l, n-r, v-x	m < e, f, t, u
	s > b-f, h-l, n-r, t-x	s < (none)

The computed effectiveness value is 50/(50 + 29), or .633, which is what was listed in (14j).
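This computation is mechanical enough to check directly. The following Python snippet (a hypothetical encoding: difficulty scores keyed by the letters of (15), the constraint identified with the set of entries it bans) reproduces the tally in (16):

```python
# Difficulty values from map (15); keys are the letters a-x.
# Rows: after obstruent, word-initial, after oral sonorant, postnasal.
difficulty = dict(zip(
    "abcdefghijklmnopqrstuvwx",
    [7, 0, 0, 43, 50, 52,       # [-son] __
     10, 0, 0, 23, 27, 35,      # # __
     45, 28, 15, 10, 20, 30,    # [+son,-nas] __
     155, 135, 107, 0, 0, 0]))  # [+nas] __

banned = set("agms")  # *[LAB, -voice] bans the [p] entries

correct = errors = 0
for bad in banned:
    for good in difficulty:
        if good in banned:
            continue
        if difficulty[bad] > difficulty[good]:
            correct += 1
        elif difficulty[bad] < difficulty[good]:
            errors += 1  # ties (e.g. g vs. p, both 10) count for neither

print(correct, errors, round(correct / (correct + errors), 3))
# 50 correct, 29 errors, effectiveness 0.633, matching (16) and (14j)
```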

The neighbors of *[LAB, –voice] that have equal or lesser complexity are listed below with their effectiveness values:

(17)	Constraint	Effectiveness	Justification for neighbor status

	*[LAB, +voice]	0.425	switch value of [voice]
	*[COR, –voice]	0.443	switch value of PLACE
	*[DORS, –voice]	0.371	switch value of PLACE
	*[LAB]	0.541	delete [–voice]
	*[–voice]	0.481	delete [LAB]

Since *[LAB, –voice] at .633 exceeds all of its neighbors in effectiveness, the definition (12) designates it as phonetically grounded with respect to the phonetic map (13).
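The local-maximum test of (12) then reduces to a one-line comparison. In the sketch below, the effectiveness values of (17) are simply hard-coded, and generating the neighbor set itself is not implemented:

```python
def grounded(constraint, neighbors, eff):
    """Definition (12): a constraint is grounded iff its effectiveness
    exceeds that of every neighbor of equal or lesser complexity."""
    return all(eff[constraint] > eff[n] for n in neighbors[constraint])

# Effectiveness values as computed against map (13), listed in (14)/(17)
eff = {"*[LAB,-voice]": 0.633, "*[LAB,+voice]": 0.425,
       "*[COR,-voice]": 0.443, "*[DORS,-voice]": 0.371,
       "*[LAB]": 0.541, "*[-voice]": 0.481}
neighbors = {"*[LAB,-voice]": ["*[LAB,+voice]", "*[COR,-voice]",
                               "*[DORS,-voice]", "*[LAB]", "*[-voice]"]}

grounded("*[LAB,-voice]", neighbors, eff)   # -> True
```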

Repeating this procedure, we find that the constraints listed in (18) emerge as phonetically grounded. In the chart below, I give some mnemonic labels, often embodying a particular effect that a constraint might have. However, the reader should bear in mind that in Optimality Theory the empirical effects of a constraint can range much more widely than the label indicates; see for example Pater (1995, 1996).

(18)	Constraint	Effectiveness	Characteristic Effect

	a.	*[+nasal][–voice]	1.000	postnasal voicing
	b.	*[+son][–voice]	0.861	postsonorant voicing
	c.	*[–son][+voice]	0.841	postobstruent devoicing
	d.	*[–nasal][+voice]	0.701	postoral devoicing
	e.	*[LAB, –voice]	0.633	*p
	f.	*[DORS, +voice]	0.608	*g
	g.	*[+voice] unless LAB	0.568	/b/ is the only voiced stop
	h.	*[–voice] unless DORS	0.556	/k/ is the only voiceless stop
	i.	*[LAB]	0.541	*labials
	j.	*[ ] unless COR	0.534	COR is the only place
	k.	*[+voice]	0.519	voicing prohibited

The other constraints are designated by the algorithm as not grounded, because they are not local effectiveness maxima:

(19)	Constraint	Effectiveness	Characteristic Effect

	a.	*[+voice] unless COR	0.521	/d/ is the only voiced stop
	b.	*[–voice] unless COR	0.513	/t/ is the only voiceless stop
	c.	*[ ] unless DORS	0.509	DORS is the only place
	d.	*[COR, +voice]	0.500	*d
	e.	*[+son][+voice]	0.500	postsonorant devoicing
	f.	*[DORS]	0.491	*dorsals
	g.	*[–voice]	0.481	voicing obligatory
	h.	*[COR]	0.466	*coronals
	i.	*[ ] unless LAB	0.459	LAB is the only place
	j.	*[+voice] unless DORS	0.453	/g/ is the only voiced stop
	k.	*[COR, –voice]	0.443	*t
	l.	*[LAB, +voice]	0.425	*b
	m.	*[–voice] unless LAB	0.388	/p/ is the only voiceless stop
	n.	*[DORS, –voice]	0.371	*k
	o.	*[–nasal][–voice]	0.357	postoral voicing
	p.	*[–son][–voice]	0.094	postobstruent voicing
	q.	*[+nasal][+voice]	0.000	postnasal devoicing

The neighbor constraint that “defeats” each of (19) may be determined by consulting chart (14).


Lastly, there are four constraints (*[+nasal], *[–nasal], *[+son], and *[–son]) for which the algorithm makes no decision, since the map of (13) does not bear on their status. These constraints were included simply to provide neighbors for the truly relevant constraints. I assume they could be evaluated by a more comprehensive map.

Did the simulation work? If the map in (13) is valid, and if languages adopt only grounded constraints, then the constraints of (18) should be empirically attested, and those of (19) not.

(a) The “finest” grounded constraint, with effectiveness value 1, is (18a), *[+nasal][–voice]. This constraint is indeed widely attested, with noticeable empirical effects in perhaps 7.6% of the world’s languages (estimate from Hayes and Stivers, in progress). Voicing in sonorant-adjacent positions ((18b), *[+son][–voice]) and devoicing in obstruent clusters ((18c), *[–son][+voice]) are also quite common.

(b) The chart also includes all the characteristic place-related voicing patterns: the bans on fronter voiceless stops and on backer voiced ones (18e-h).

(c) Two of the simpler constraints, (18i) *[LAB] and (18j) *[ ] unless COR, do play a role in phonologies (see Rood 1975 and Smolensky 1993), but their appearance in the chart is probably accidental. The phonetic map used here is suitable only for testing constraints on obstruent voicing, not place inventories. A legitimate test of the constraints that target place would require a much larger phonetic map.

(d) Likewise, the blanket ban on voicing ((18k) *[+voice]) makes sense only if one remembers that the map (13) only compares obstruents. Since voicing in sonorants is very easy, it is likely that in a fuller simulation, in which the map included sonorants, the constraint that would actually emerge is *[–sonorant, +voice]. This is well attested: for example, 45 of the 317 languages in Maddieson’s (1984) survey lack voiced obstruents.

(e) The only non-artifactual constraint designated as grounded that probably is not legitimate is (18d), *[–nasal][+voice], which would impose devoicing after oral segments. It has been suggested by Steriade (1995) and others that [nasal] is a privative feature, being employed in phonological representations only to designate overt nasality. If this is so, then *[–nasal][+voice] would not appear in the candidate set.

(f) We can also consider the constraints of (19), which emerge from the simulation designated as not grounded. My impression, based on my own typological experience, is that these constraints are indeed rare or unattested in actual languages. Obviously, careful typological work would be needed to affirm this conclusion.

I would conclude that the inductive grounding procedure, applied in this narrow domain, does indeed single out the phonologically-stated constraints that match typology. It is interesting that some of the constraints (for example (18e) *[LAB, –voice]) do not record extremely high effectiveness scores, but are nevertheless fairly well attested (19 languages of the 317 in Maddieson (1984) show a stop gap at [p]). This suggests, as before, that formal symmetry, and not just phonetic effectiveness, plays a role in constraint creation.

11. The Remainder of the Task of Phonological Acquisition

Above I have outlined a procedure that, equipped with full-scale phonetic maps, could generate large numbers of grounded constraints. What are we to do with them, in order to obtain actual grammars?

In Optimality Theory, the answer is simply: rank them. Tesar and Smolensky (1993, 1995, 1996) have demonstrated an algorithm, called Constraint Demotion, that ranks constraints using input data, with high computational efficiency. I suggest that the promiscuously-generated constraints from inductive grounding could simply be fed into the Constraint Demotion algorithm. The algorithm will rank a few of them high in the grammar, the great majority very low. In Optimality Theory, a constraint that is ranked low enough will typically have no empirical effects at all. Thus, the Constraint Demotion algorithm can weed out the constraints that, while grounded, are inappropriate for the language being learned.
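For concreteness, here is a minimal sketch in the spirit of recursive Constraint Demotion (not Tesar and Smolensky’s own implementation). Each learning datum pairs a winning output with a losing competitor, represented here as hypothetical dictionaries from constraint names to violation counts; constraints that prefer no loser are installed in the top stratum, the data they account for are discarded, and the procedure recurses on the rest:

```python
def constraint_demotion(constraints, pairs):
    """pairs: list of (winner, loser) violation dicts.
    Returns a stratified hierarchy (list of sets), top stratum first."""
    hierarchy, remaining, data = [], set(constraints), list(pairs)
    while remaining:
        # Constraints that never assign fewer violations to a loser
        # than to its winner, i.e. that never prefer a loser:
        stratum = {c for c in remaining
                   if not any(l[c] < w[c] for w, l in data)}
        if not stratum:
            raise ValueError("no ranking fits the data")
        hierarchy.append(stratum)
        remaining -= stratum
        # Drop pairs already decided by a constraint just placed:
        data = [(w, l) for w, l in data
                if not any(w[c] < l[c] for c in stratum)]
    return hierarchy

# /mp/ surfacing as [mb]: *[+nasal][-voice] must outrank Faithfulness
winner = {"*[+nasal][-voice]": 0, "Ident(voice)": 1}  # [mb]
loser = {"*[+nasal][-voice]": 1, "Ident(voice)": 0}   # [mp]
constraint_demotion(["*[+nasal][-voice]", "Ident(voice)"],
                    [(winner, loser)])
# -> [{'*[+nasal][-voice]'}, {'Ident(voice)'}]
```

Constraints that do no work in the language sink to the bottom strata, where, as noted above, they are typically inert.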

The combined effect of inductive grounding and the Constraint Demotion algorithm is in principle the construction of a large chunk of the phonology. The further ingredients needed would be constraints that have non-phonetic origins. These include: (a) the Faithfulness constraints; these perhaps result from their own inductive procedure, applied to the input vocabulary; (b) functionally-based constraints that are not of phonetic origin: for example, rhythmically-based constraints (Hayes 1995), or constraints on paradigm uniformity. Moreover, the child must also learn the phonological representations of the lexicon, a task that becomes non-trivial when these diverge from surface forms (Tesar and Smolensky 1996). Even so, I believe that getting the phonetic constraints right would be a large step towards phonology.

12. Acquisition Evidence

The above discussion was entirely formal in character, attempting to develop an abstract scheme that was at least explicit enough to be confronted with actual data. But what of real children? Is there any evidence that they can generate formally symmetrical constraints from their own phonetic experience?

In considering this question, I will refer to a very substantial research tradition in phonological acquisition. To summarize the results quickly and in inadequate detail, it appears that the following hold:

(a) Children’s perceptions are well ahead of their productions (Smith 1973; Braine 1974: 284; Ingram 1989: 162-8; Eimas 1996: 32). Although in certain cases (Macken 1980a, 1995) a child’s errors can be shown to be the result of misperception, there is strong evidence that children can internalize many adult-like lexical forms that are neutralized only in their own productions.

(b) Children naturally develop procedures to reduce the complexity of adult forms to something they can handle with their limited articulatory abilities. These procedures frequently develop sufficient regularity that it is reasonable to refer to them as the child’s own phonology; that is, a phonology that serves to map adult surface forms (or perhaps something deeper) into child surface forms.

(c) The phonology of children is elaborate beyond what is required to reduce the child’s speech to something easily pronounceable. For example, Amahl, the subject of Smith (1973), developed a remarkable form of “labiality flopping,” whereby the labiality of the /w/ in (for example) /kwiːn/ ‘queen’ migrated rightward, surfacing on the final consonant and converting it to /m/: [giːm]. Another extraordinary migration (‘string’: [ˈtrɪŋs]) is documented by Hamp (1974).

(d) Lastly, children’s phonologies are to a fair degree specific to the individual child (indeed, to a particular child at a particular phase of acquisition). There is no such thing as “English infantile phonology”; only the phonologies created by particular children.

These results, which I take to be relatively uncontroversial, lead to the conclusion (Kiparsky and Menn 1977; Macken 1995) that phonology is not merely learned by children, but is to some extent also created by them.

Let us assume, following Gnanadesikan (1995), Pater (1996), and others, that the child’s personal phonology is Optimality-theoretic, and consider some of the constraints that children have created.

(a) Amahl Smith, at age 2 years, 60 days, rendered all stops (irrespective of underlying form) as voiceless unaspirated lenis initially, voiced in medial position, and voiceless finally; thus [ˈb̥eːbu] ‘table’, [aːt] ‘hard’, [ˈwəːgin] ‘working’. Plainly, such realizations cannot have been an imitation of adult speech; they were Amahl’s own invention. Equally plainly, the constraints Amahl adopted have a real role in the phonology of languages other than English; consider for instance the voicing of intervocalic stops in Korean, or the devoicing of final stops in German. Finally, as noted above, Amahl’s constraints render articulation easier, by imposing the default values predicted on aerodynamic grounds.

(b) Amahl also required every consonant to be either prevocalic or final, so he produced no consonant clusters. The phonetic naturalness of such a pattern has been argued for by Steriade (1997), and it has been observed in adult language in the phonology of Gokana (Hyman 1982, 1985).


(c) Children who impose gaps in their stop inventories at [p] or at [g], contrary to adult input, are described by Ferguson (1975) and Macken (1980b). These gaps are analogous to the gaps of adult languages noted in section 6.2. They are phonetically natural, and indeed are predicted by the phonetically grounded constraints (18e,f) derived in the simulation above.

(d) Both Ferguson (1975: 11) and Locke (1983: 120) report cases of children who (against input evidence) require all postnasal obstruents to be voiced. Again, this is phonetically natural, derived under my simulation (18a), and typologically commonplace.

In all of these cases, the point to observe is that children have the capacity to obtain constraints that are phonetically grounded, formally simple, and not available from the ambient language data. I conclude that a good case can be made that children really do have a means of fabricating constraints that reflect phonetic naturalness, perhaps by something like the method of inductive grounding laid out above.16

To conclude this section, we must complete the explanatory chain by establishing appropriate links between child-innovated constraints and adult phonology. There are two possibilities.

First, there is the link of learnability: it is possible that the child’s search for the adult grammar is aided by the child’s hypothesis that the adult grammar will contain grounded constraints. Thus, in principle, the ability to access the set of grounded constraints could speed acquisition, though I think it would be hard at present to obtain serious evidence on this point.

Second, there is the diachronic link. Suppose that certain constraints fabricated by individual children manage to survive into the adult speech community, perhaps by being adopted in a peer group of young children.17 This would account for the characteristic naturalness and formal symmetry of adult constraints, without positing that naturalness is a criterion for learnability.

Either of these hypotheses would account for the characteristic appearance of grounded constraints in adult grammars.18

16 This is not to say that all children’s constraints are the same as adults’. For example, the slower tempo of child speech (Smith 1978) means that children escape the phonetic difficulties of “antigemination,” which have been explained phonetically by Locke (1983: 174) and Odden (1988: 470). For this reason, children can indulge in widespread consonant harmony, which the antigemination effect rules out for adults. The lesser articulatory skill of children is probably the cause of frequent stop-for-fricative substitutions; adults, who are more skillful but in a bigger hurry, tend instead toward lenition, with intervocalic spirantization. I assume that as children come to speak faster and with greater articulatory control, their phonetic maps change, with an accompanying shift towards adult-like constraints. For further discussion of this issue, see Locke (1983) and Macken (1995).

17 The reader, who almost certainly speaks a normatively-imposed standard language, might find this counterintuitive, unless (s)he remembers that most languages are colloquial, non-standard varieties. As Hock (1986: 466-7) remarks, nonstandard languages change quite a bit more rapidly than standard ones. I would conjecture that this is because they suppress the innovations of children with considerably less force. The abundance of non-standard English dialects that replace [θ, ð] with [f, v] or [t, d] (both normal childhood substitutions) is a good illustration. Given that such dialects are geographically remote from each other, it seems very likely that these substitutions are childhood inheritances (Wells 1982: 96-7).

18 The indebtedness of this whole section to the work of Stampe (1972) and Donegan and Stampe (1979) should be clear. The approach I have taken could be viewed as an attempt to extend Stampe and Donegan’s work, making use of Optimality Theory to establish a more direct connection between phonetics and child phonology. In Optimality Theory, one need merely specify in a constraint what is phonetically hard, with the Faithfulness constraints determining what particular “fix” is adopted to avoid phonotactic violations. In contrast, Natural Phonology requires a massive proliferation of processes, each needed to characterize one particular strategy for avoiding phonetic difficulty.

19 This paragraph presupposes that any innate principles of language did arise by natural selection. For defense of this view, and criticism of the alternative possibilities, see Pinker and Bloom (1990) and Dennett (1995).

13. Innate Knowledge

Lurking in the background of much of this discussion is a belief widely held by formal linguists: that much (most?) of linguistic structure is specified innately, and does not have to be learned by any procedure at all. For Optimality Theory, it is suggested (for example) by Tesar and Smolensky (1993: 1) that all the constraints might be innate, so that the creation of grammar in the child would largely reduce to the task of ranking these already-known constraints. To the contrary, I have been assuming that constraints need not necessarily be innate, but only accessible in some way to the language learner, perhaps by inductive grounding.

On the whole, it is very hard to make this issue an empirical one. I know of two sources of facts that might bear on the question.

First, there are phonetically grounded constraints that govern uncommon sounds. Among these are the constraints discovered by Steriade requiring postvocalic position for retroflexes and for preglottalized sonorants. In Maddieson’s (1984) survey, only 66 of the 317 languages sampled had retroflexes, and only 20 had laryngealized sonorants of any sort. Similarly, implosives and ejectives display place asymmetries much like the place asymmetries for voiced and voiceless stops, respectively (though more robust), and have similar aerodynamic explanations (see Maddieson, Chap. 7, and references cited there). Implosives occur in only 32 languages of the Maddieson sample, ejectives in 52.

If the proto-stages of human language likewise seldom deployed retroflexes, preglottalized sonorants, implosives, and ejectives, then during most of the period of the evolution of language, there can have been little selectional pressure to deploy these sounds in any particular way. There is no selective advantage to possessing an innate constraint on the distribution of retroflexes if the language you are learning doesn’t have any. From the viewpoint of inductive grounding, in contrast, such constraints are unproblematic: children can obtain the phonetic maps necessary for acquiring them from the practice they obtain in imitating an ambient language that happens to have the relevant sounds.19

A very different source of evidence on the innateness question comes from Locke and Pearson (1990). These authors studied a child who was deprived of articulatory practice for part of her infancy because of a temporarily-installed tracheal tube. What they found suggests that learning through phonetic self-exploration may indeed be important to acquisition, as the child they studied was delayed considerably in phonological acquisition once the tube was removed. Locke and Pearson are cautious in interpreting this result, but in principle such research could provide serious empirical data on the question of the innateness of phonetic constraints.

14. Ungrounded Constraints

It has often been emphasized that a language’s phonological structure is not always sensible. A language may have a system that is synchronically odd, as a result of a conspiracy of historical circumstances such as borrowing, or a peculiar sequence of changes, each one natural (Bach and Harms 1972; Hayes 1995: 219-21).

One possible example comes from Northern Italian, which shows the rather odd pattern of voicing /s/ intervocalically but not postnasally. The pattern is productive, as Baroni’s (1996) recent testing indicates. The sequence of events that gave rise to this pattern historically was (a) loss of all nasals before /s/ (early Romance); (b) intervocalic /s/ voicing; (c) reintroduction of /ns/ sequences in learned borrowings from Latin, pronounced faithfully to the Latin original (Maiden 1995: 14, 63, 76, 84). While it is not clear whether purely-intervocalic voicing is grounded (in my simulation, it depends on the feature system used), the Northern Italian phenomenon does seem somewhat peculiar in light of the pattern of phonetic difficulty involved.

A perusal of Maddieson (1984) will show a number of stop systems that have gaps at places other than the expected *[p] and *[g]. Although Ohala (1983) suggests additional factors that may influence voicing-gap patterns, it appears likely that many of these systems are also accidents of history, and must be attributed to ungrounded constraints.

Two points seem worth making about ungrounded constraints. First, if grammars really do permit them, then they must have some source. I would conjecture that the source is induction, in this case not over the learner’s phonetic experience but over the input data: eventually, the child figures out such constraints from negative evidence; that is, from systematic, consistent, long-term absence of a particular structure in the input data. Such constraints would be the rough analogues in the present theory of Stampe’s (1973) ‘rules’, as opposed to the grounded constraints, which correspond roughly to Stampe’s ‘processes’.

Second, if the distinction between inductively grounded constraints (learned from internal experience) and learned constraints (learned from gaps in input data) is real, then it should be detectable. Here are some possible ways to detect it:

(a) Children who innovate constraints in their own speech should never innovate an ungrounded constraint.


(b) In principle, grounding could influence intuitive judgments. For instance, Donca Steriade has pointed out in lectures that in English, hypothetical forms like [rtap], with a gross sonority violation, sound much worse than forms like [ktap], with a lesser violation. This is despite the fact that neither form occurs in the English input data. I would conjecture that the difference in judgment has its origins in the phonetic naturalness of the two configurations. By way of contrast, we might expect purely learned, ungrounded constraints to provide judgments related to the lexicon; that is, to the degree to which the child’s input justifies the inductive conclusion that a particular segment or sequence is absent.

(c) Borrowed words might also provide evidence: new borrowed phonemes and sequences should be more easily pronounced if they merely violate arbitrary learned constraints than if they violate phonetically grounded ones.20

What emerges here is that, while the existence of ungrounded constraints makes it harder to test a theory of phonetic grounding, it does not make it impossible.

15. Consequences of Inductive Grounding for Feature Theory

A major line of evolution in feature theories (traceable, for example, through Jakobson, Fant and Halle 1951, Chomsky and Halle 1968, and Sagey 1986) has been one of increasing phonetic literalism: the features have gradually come closer to depicting what is going on in the mouth during speech. Autosegmental representations, which permit an idealized depiction of the timing of individual articulators, increase the degree of literalism.

In one sense, this has been a positive development: since phonology is mostly phonetically grounded, formal representations that include a more precise depiction of the phonetics will do better in many cases than those that do not. However, I believe that detailed consideration of various cases indicates that the “phonetic literalist” research program for feature theory has not really achieved its goals. Inductive grounding suggests what may be a better direction for feature theory to follow.

The problem is that phonetics is very complicated, and involves physical and perceptual systems that interact in many ways. Ordinary phonological representations, even those designed with an eye on phonetic form, are simply not rich enough to characterize all the things that can happen (Flemming 1995).

Perhaps the plainest example of this is the mechanism of postnasal voicing, investigated by Hayes and Stivers (in progress). Hayes and Stivers suggest that the widespread preference for postnasal voicing follows from a quite elaborate set of contraposed phonetic tendencies. First, obstruents tend to voice in nasal-adjacent position because nasals induce on them a slight, coarticulatory nasal “leak,” at a level that does not render obstruents perceptually nasal, but does encourage voicing by venting oral pressure. Second, a peculiar tendency of the velum to rise while the velar port is closed during a nasal-to-obstruent transition (and correspondingly, to fall while closed in obstruent-to-nasal transitions) produces a kind of “velar pumping,” which yields an anti-voicing effect in obstruent + nasal sequences (thus negating the voicing effect of nasal leak) but a pro-voicing effect in nasal + obstruent sequences (reinforcing the nasal leak effect). Putting these effects together (and modeling them quantitatively), Hayes and Stivers predict specifically postnasal voicing, which agrees with typology.

20 A frustrating interfering factor here is that adult speakers have had massive practice, for years, in pronouncing precisely the sounds of their language. Presumably, this has substantial effects on their phonetic maps.
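The logic of these combined effects can be made concrete with a toy calculation. The following sketch is my own illustration, not Hayes and Stivers’ actual quantitative model: the function name `voicing_pressure` and the effect magnitudes are invented, and only the signs of the effects come from the description above.

```python
# Toy sketch: sum the two signed phonetic effects described above.
# Magnitudes are invented for illustration; only the signs follow the text.

def voicing_pressure(context):
    """Net pro-voicing pressure on an obstruent in the given context."""
    score = 0.0
    if context in ("nasal+obstruent", "obstruent+nasal"):
        score += 1.0   # nasal leak vents oral pressure, encouraging voicing
    if context == "nasal+obstruent":
        score += 1.0   # velar pumping: pro-voicing after a nasal
    elif context == "obstruent+nasal":
        score -= 1.0   # velar pumping: anti-voicing before a nasal
    return score

for ctx in ("nasal+obstruent", "obstruent+nasal", "elsewhere"):
    print(ctx, voicing_pressure(ctx))
```

Only the nasal + obstruent context comes out with a positive net score; in obstruent + nasal sequences the two effects cancel, which is why the voicing preference is specifically postnasal.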

In principle, the highly detailed and specific phonetic effects studied by Hayes and Stivers could be encoded in the phonology: spreading principles would depict the coarticulation, and special new features would depict the resulting aerodynamic effects. With such features, the constraint against postnasal voiceless obstruents would come out as something like (20):

(20)	* [ –sonorant, –voice, +minor nasal leak, +rarefactive velar pumping ]

But the last two features in (20) are hopeless as members of a phonological feature inventory, as they play no role at all elsewhere in phonology: they define no natural classes, do not spread, and are completely redundant.

Inductive grounding covers the ban on postnasal voicelessness by addressing a phonetic map, as shown above. The features it uses in formulating and scanning the map ([voice], [nasal], and [sonorant]) are almost totally uncontroversial, being pervasively relevant to many aspects of phonology. Moreover, inductive grounding accounts for why nasal-adjacent voicing of obstruents is always in postnasal position, never prenasal. As Pater (1996) notes, this is a conspicuous gap in the recent analysis by Ito, Mester, and Padgett (1995), which treats the phenomenon as voicing spread.

Consider another area in which phonetically literalist feature theory fails: phonological assimilations that have more than one trigger. For example, Hyman (1973) notes various tonal rules in which a H(igh) tone becomes L(ow) or rising (LH) after the sequence L-toned vowel + voiced obstruent. In such a process, both the L tone and the voiced obstruent must be considered as factors contributing to the change, as each one often triggers the lowering effect by itself in other languages. To my knowledge, there is no featural account that covers “two-trigger” phenomena, because the autosegmental theory of assimilation only allows a single trigger to spread its feature value onto the target.


Inductive grounding appears to be a more promising approach here, because phonetic effects can be additive. A H tone is harder to produce after a voiced obstruent (for the phonetics, see Ohala 1978 and Hombert 1978); it is also harder when the preceding vowel is L; and it is hardest of all when both environments are present. Thus inductive grounding would plausibly single out the crucial constraint in (21) as grounded:

(21)	*   V             C              V
	    |        [ –son   ]          |
	    L        [ +voice ]          H

Two-trigger processes in phonology are quite common: typical examples are intersonorant voicing and intervocalic lenition (Kirchner, in progress).
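The additivity just described can be rendered as a minimal sketch. This is my own illustration, with invented difficulty values (`h_tone_difficulty` and the magnitudes are not from the phonetic literature; only the direction of each effect comes from the text):

```python
# Hypothetical additive difficulty of realizing a H tone, by left context.

def h_tone_difficulty(after_voiced_obstruent, after_L_tone):
    d = 0.0
    if after_voiced_obstruent:
        d += 1.0   # a voiced obstruent depresses the following pitch
    if after_L_tone:
        d += 1.0   # a preceding L tone leaves the pitch trajectory low
    return d

print(h_tone_difficulty(True, False))   # one trigger
print(h_tone_difficulty(False, True))   # one trigger
print(h_tone_difficulty(True, True))    # both triggers: hardest of all
```

Since the two-trigger environment is the point of greatest difficulty, it is the natural locus for a grounded constraint like (21), even though each single trigger may suffice on its own in other languages.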

The upshot of this discussion is this: it would be a mistake for phonologists to continue to formulate feature theory by attempting to construct simple schematic representations capable of mirroring the extremely complex behaviors of phonetics. This is not really a feasible task, and inductive grounding provides a more realistic alternative.

What is the right direction for feature theory, then? A better approach, I think, is to construe the feature system as describing how the language learner/user categorizes phonological form. The phonetic experience that must be entered into phonetic maps is extremely variegated; so for a map to be at all coherent or useful, the experience must be sorted into salient phonological categories. I believe that features form the basis of these categories. The categories of feature theory are also what serve as the raw material for the constraints that are tested against the phonetic maps. In principle, a feature inventory that is not especially literalist would, because it is small, reduce the hypothesis space that must be considered in the fabrication of constraints by inductive grounding, and thus render the search for effective constraints more feasible.
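The point about the hypothesis space is simple combinatorial arithmetic, which a quick sketch makes vivid (the inventory sizes and the three-way specification scheme are illustrative assumptions of mine, not a claim about any particular feature theory):

```python
# If a constraint may specify each feature as +, -, or leave it
# unspecified, the number of possible single-segment feature bundles
# grows exponentially with the size of the feature inventory.

def bundle_count(n_features):
    return 3 ** n_features   # +, -, or unspecified for each feature

print(bundle_count(10))   # small, non-literalist inventory: 59,049
print(bundle_count(25))   # large, literalist inventory: ~8.5 * 10**11
```

A small inventory thus keeps the constraint search tractable in a way a phonetically literalist one cannot.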

As for what research strategy would confirm particular constraints: the crucial diagnostic would be based on a property of constraints covered above in section 6.2, namely their tendency to be formally symmetrical at the expense of close fit to the phonetics. It is precisely when constraints deviate in minor ways from perfect grounding that we can infer that formal symmetry is playing a role. The features can be justified by whether they capture the necessary formal symmetry. Thus, for example, even though the phonetic mechanisms needed to produce a voiced intervocalic stop in Korean are not exactly the same for all the Korean places of articulation, the fact that all of the places participate in parallel in an intervocalic voicing process suggests that [voice] is an authentic phonological feature of Korean. I would expect that most of the relatively uncontroversial current features, such as [coronal], [round], [nasal], and [back], could be justified in this way.


16. Local Conclusions

To sum up the main content of this paper: I have suggested, following much earlier work, that phonological constraints are often phonetic in character. They are not phonetics itself, but could in principle be “read off” the phonetics. Most of what I have said has been an effort to specify what this “reading off” could consist of.

The hypotheses considered have been, in increasing order of specificity: (a) Learners extract phonological constraints from their own experience; (b) In constructing constraints, learners execute a trade-off between phonetic accuracy and formal simplicity; (c) Learners go through the logical space of possible phonological constraints, seeking local maxima of good phonetic fit, and at each point comparing candidate constraints only with rivals of equal or greater simplicity.
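Hypothesis (c) can be sketched schematically as a search procedure. Everything in the following toy sketch is invented for illustration — the constraint names, fit scores, complexity values, fit threshold, and the “formal neighbor” relation — but it shows the shape of a search that accepts only local maxima of phonetic fit, comparing each candidate only with rivals of equal or greater simplicity:

```python
# name: (fit of the constraint to the phonetic map, formal complexity)
candidates = {
    "*[-son,+voice]":    (0.40, 1),   # ban all voiced obstruents
    "*[-son,-voice]/N_": (0.90, 2),   # ban postnasal voiceless obstruents
    "*[-son,+voice]/_#": (0.85, 2),   # ban final voiced obstruents
}

# formally similar rivals: a stand-in for "neighboring" constraints
neighbors = {
    "*[-son,+voice]":    ["*[-son,-voice]/N_", "*[-son,+voice]/_#"],
    "*[-son,-voice]/N_": ["*[-son,+voice]"],
    "*[-son,+voice]/_#": ["*[-son,+voice]"],
}

FIT_THRESHOLD = 0.75   # minimum acceptable fit to the phonetic map

def grounded(name):
    """Keep a candidate if it fits well and no rival of equal or greater
    simplicity (equal or lower complexity) fits the map better."""
    fit, cx = candidates[name]
    rival_fits = [candidates[n][0] for n in neighbors[name]
                  if candidates[n][1] <= cx]
    return fit >= FIT_THRESHOLD and all(fit >= f for f in rival_fits)

print([n for n in candidates if grounded(n)])
```

On these invented numbers, the context-free ban on voiced obstruents is rejected (poor fit), while the two context-sensitive constraints survive as local maxima, which is the intended behavior of the search.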

I have further suggested that the data of child language support the view that children can and do create constraints by inductive grounding, and made suggestions regarding how feature theory might work under an inductive grounding approach.

17. General Conclusions

In principle, the approach taken here to functional factors in language is applicable elsewhere in linguistics. The basic idea has been that functional factors are represented indirectly: they enter in at the level of language design, leading to the construction of formal grammars that are functionally good, with a bias toward formal symmetry. I have posited that the functional factors make themselves apparent in “maps,” compiled from the experience of the language learner. Inductive grounding creates constraints that reflect the functional principles, in a way that is somewhat indirect, due to their formal character. Finally, constraint ranking molds the raw set of constraints into a full and explicit grammar. If the approach of Optimality Theory is correct, such grammars will do full justice to the amazing intricacy of linguistic phenomena.

If this view of things is right, there are a number of things we should expect to find in the linguistic data.

First, grammar should emerge rather consistently as functionally good. In the area of phonology, I am encouraged in this respect by my reading of the literature cited in section 4: by consistently examining their data with the question “why” in mind, the authors of this work have been able to expand considerably the domain of phonological facts that have plausible phonetic explanations.

Second, we should find that functional goodness appears in grammar not directly, but mediated by grammatical constraints, with a strong bias toward formal symmetry.


Third, we should find a pervasive role for violable grammatical constraints, as Optimality Theory claims, since constraints based on functional principles have no a priori claim to inviolability.

As noted earlier, very little in what is assumed here need be posited as innate knowledge. In principle, only the procedure for inductive grounding and the mechanisms of Optimality Theory itself need be innate, the rest being learned. But I am not at all a priori opposed to positing that parts of grammar and phonology are genetically encoded. This view seems especially cogent in domains of grammar that abound in “projection puzzles” (Baker 1979).

However, I do have a suggestion regarding research strategy: arguments for innate principles can only be made stronger when inductive alternatives are addressed and refuted. By this I mean both induction from internally-generated maps, as discussed here, and also ordinary induction from the input data. When induction has been explicitly shown to be inadequate, innateness is left in a much stronger position as the only non-mystical alternative.

18. Coda: “Good Reductionism” in Linguistics

Dennett (1995) has recently written a book that combines an excellent tutorial in evolutionary theory with interesting discussion of the relationship of evolution to cognitive science. Dennett suggests that an appropriate stance for a cognitive scientist to take is a form of “Good Reductionism,” which may be characterized as follows:

(a) Good Reductionism acknowledges the wonderful richness and complexity of cognitive phenomena, and thus is the opposite of the trivializing “Greedy Reductionism.”

(b) Good Reductionism takes engineering, not physics, as its physical-science model. The reason is that natural selection tends to produce incrementally-engineered solutions, rather than proceeding with bold, fundamental moves.

(c) But on rare occasions, natural selection produces a “crane,” a particular trick that can make the apparently-miraculous phenomena of biology emerge from mundane origins. Examples of cranes include the “Baldwin Effect,” described by Dennett (1991: 184-7; 1995: 77-80); or sexual reproduction (Dennett 1995: 323).

(d) Cranes are opposed by Dennett to “skyhooks,” which explain the apparently miraculous by positing actual miracles. Skyhooks are obviously scientifically inappropriate, but have been proposed by scientists surprisingly often, he claims.

The approach taken here might be construed as an attempt to engage in Good Reductionist phonology, steering between the twin perils of reckless Skyhook Seeking and head-in-the-sand Greedy Reductionism. The two cranes I have posited are Optimality Theory and inductive grounding. These, and only these, must be assumed to be innate. Elsewhere, the approach has been incrementalist: the goal is to reconstruct the miraculous complexities of phonological systems incrementally, using materials that are directly accessible to the language learner.

The approach proposed is formalist in that it seeks to attain utterly explicit and complete phonological description. It is functionalist in that it seeks to obtain much of the content of phonology from external, functional principles, by means of inductive grounding. What emerges, I hope, is somewhat different from what has dominated either traditional formalist or traditional functionalist thinking.


References

Anderson, Stephen R. 1981. “Why phonology isn’t ‘natural’”. Linguistic Inquiry 12: 493-539.

Archangeli, Diana and D. Terence Langendoen. 1997. Optimality Theory: An Overview. Oxford: Blackwell.

Archangeli, Diana and Douglas Pulleyblank. 1994. Grounded Phonology. Cambridge, MA: MIT Press.

Bach, Emmon and Robert Harms. 1972. “How do languages get crazy rules?”. In Robert P. Stockwell and Ronald K. S. Macaulay (eds.), Linguistic Change and Generative Theory. Bloomington: Indiana University Press, 1-21.

Baker, C. L. 1979. “Syntactic theory and the Projection Problem.” Linguistic Inquiry 10: 533-581.

Baroni, Marco. 1996. “Morphological conditions on the distribution of [s] and [z] in Standard Northern Italian.” Ms., Department of Linguistics, UCLA, Los Angeles, CA.

Boersma, Paul. 1997. “The elements of functional phonology.” ROA-173, Rutgers Optimality Archive, http://ruccs.rutgers.edu/roa.html.

Braine, Martin S. 1974. “On what might constitute learnable phonology.” Language 50: 270-299.

Chen, Matthew. 1973. “On the formal expression of natural rules in phonology.” Journal of Linguistics 9: 223-249.

Choi, John-Dongwook. 1992. Phonetic underspecification and target-interpolation: an acoustic study of Marshallese vowel allophony. Ph.D. dissertation, UCLA, Los Angeles, CA.

Chomsky, Noam and Morris Halle. 1968. The Sound Pattern of English. New York: Harper and Row.

Dennett, Daniel C. 1991. Consciousness Explained. Boston: Little, Brown.

Dennett, Daniel C. 1995. Darwin’s Dangerous Idea: Evolution and the Meanings of Life. New York: Simon and Schuster.

Devine, A.M. and Laurence D. Stephens. 1977. Two Studies in Latin Phonology. Saratoga, CA: Anma Libri.

Donegan, Patricia Jane and David Stampe. 1979. “The study of Natural Phonology.” In Daniel A. Dinnsen (ed.), Current Approaches to Phonological Theory. Bloomington: Indiana University Press, 126-73.

Eimas, Peter. 1996. “The perception and representation of speech by infants.” In James L. Morgan and Katherine Demuth (eds.), Signal to Syntax: Bootstrapping from Speech to Grammar in Early Acquisition. Mahwah, NJ: Lawrence Erlbaum Associates, 25-39.

Ferguson, Charles. 1975. “Sound patterns in language acquisition.” In Daniel P. Dato (ed.), Developmental Psycholinguistics: Theory and Applications, Georgetown University Round Table on Languages and Linguistics, 1975. Washington, DC: Georgetown University Press, 1-16.


Flemming, Edward. 1995. Perceptual Features in Phonology. Ph.D. dissertation, UCLA,Los Angeles, CA.

Gnanadesikan, Amalia. 1995. “Markedness and Faithfulness constraints in child phonology.”ROA-67, Rutgers Optimality Archive, http://ruccs.rutgers.edu/roa.html.

Gordon, Matt. 1997. “A phonetically-driven account of syllable weight.” Ms., Dept. ofLinguistics, UCLA, Los Angeles, CA.

Hamp, Eric P. 1974. “Wortphonologie.” Journal of Child Language 1: 287-288.

Harrell, Richard, Laila Y. Tewfik, and George D. Selim. 1963. Lessons in Colloquial Egyptian Arabic. Washington, DC: Georgetown University Press.

Hayes, Bruce. 1989. “Compensatory lengthening in moraic phonology.” Linguistic Inquiry 20: 253-306.

Hayes, Bruce. 1995. Metrical Stress Theory: Principles and Case Studies. Chicago: University of Chicago Press.

Hayes, Bruce and Aditi Lahiri. 1991. “Bengali intonational phonology.” Natural Language and Linguistic Theory 9: 47-96.

Hayes, Bruce and Tanya Stivers. In progress. “The phonetics of postnasal voicing.” Ms., Dept. of Linguistics, UCLA, Los Angeles, CA.

Hock, Hans Henrich. 1986. Principles of Historical Linguistics. Berlin: Mouton de Gruyter.

Hombert, Jean-Marie. 1978. “Consonant types, vowel quality, and tone.” In Victoria Fromkin (ed.), Tone: A Linguistic Survey. New York: Academic Press, 77-111.

Hyman, Larry M. 1973. “The role of consonant types in natural tonal assimilations.” In Larry M. Hyman (ed.), Consonant Types and Tone, Southern California Occasional Papers in Linguistics No. 1, 151-176.

Hyman, Larry M. 1976. “Phonologization.” In Alphonse Juilland, A.M. Devine, and Laurence D. Stephens (eds.), Linguistic Studies Offered to Joseph Greenberg on the Occasion of His Sixtieth Birthday. Saratoga, CA: Anma Libri.

Hyman, Larry M. 1982. “The representation of nasality in Gokana.” In Harry van der Hulst and Norval Smith (eds.), The Structure of Phonological Representations, Part I. Dordrecht: Foris, 111-130.

Hyman, Larry M. 1985. A Theory of Phonological Weight. Dordrecht: Foris.

Ingram, David. 1989. First Language Acquisition: Method, Description, and Explanation. Cambridge: Cambridge University Press.

Ito, Junko, Armin Mester, and Jaye Padgett. 1995. “Licensing and underspecification in Optimality Theory.” Linguistic Inquiry 26: 571-613.

Jakobson, Roman, C. Gunnar M. Fant, and Morris Halle. 1951. Preliminaries to Speech Analysis: The Distinctive Features and their Correlates. Cambridge, MA: MIT Press.

Jun, Jongho. 1995a. “Place assimilation as the result of conflicting perceptual and articulatory constraints.” Proceedings of the West Coast Conference on Formal Linguistics 14.

Jun, Jongho. 1995b. Perceptual and Articulatory Factors in Place Assimilation: An Optimality Theoretic Approach. Ph.D. dissertation, UCLA, Los Angeles, CA.

Kager, René. To appear. “Stem disyllabicity in Guugu Yimidhirr.” To appear in Marina Nespor and Norval Smith (eds.), HIL Phonology Papers II.


Kaun, Abigail. 1995a. An Optimality-Theoretic Typology of Rounding Harmony. Ph.D. dissertation, UCLA, Los Angeles, CA.

Kaun, Abigail. 1995b. “An Optimality-Theoretic account of rounding harmony typology.” Proceedings of the Thirteenth West Coast Conference on Formal Linguistics. Stanford, CA: Center for the Study of Language and Information, 78-92.

Keating, Patricia. 1984. “Aerodynamic modeling at UCLA.” UCLA Working Papers in Phonetics 54: 18-28.

Keating, Patricia A. 1985. “Universal phonetics and the organization of grammars.” In Victoria Fromkin (ed.), Phonetic Linguistics: Essays in Honor of Peter Ladefoged. Orlando, FL: Academic Press, 115-132.

Kelly, Michael H. and Susanne Martin. 1994. “Domain-general abilities applied to domain-specific tasks: sensitivity to probabilities in perception, cognition, and language.” Lingua 92: 105-140.

Kiparsky, Paul. 1995. “The phonological basis of sound change.” In John Goldsmith (ed.), The Handbook of Phonological Theory. Oxford: Blackwell, 640-670.

Kiparsky, Paul and Lisa Menn. 1977. “On the acquisition of phonology.” In J. Macnamara (ed.), Language Learning and Thought. Academic Press.

Kirchner, Robert. In progress. Lenition in Phonetically-Based Optimality-Theoretic Phonology. Ph.D. dissertation, UCLA, Los Angeles, CA.

Koutsoudas, Andreas, Gerald Sanders, and Craig Noll. 1974. “The application of phonological rules.” Language 50: 1-28.

Liljencrants, Johan, and Björn Lindblom. 1972. “Numerical simulation of vowel systems: the role of perceptual contrast.” Language 48: 839-862.

Lindblom, Björn. 1983. “Economy of speech gestures.” In Peter F. MacNeilage (ed.), The Production of Speech. New York: Springer-Verlag, 217-245.

Lindblom, Björn. 1990. “Explaining phonetic variation: a sketch of the H&H theory.” In William J. Hardcastle and Alain Marchal (eds.), Speech Production and Speech Modelling. Dordrecht: Kluwer, 403-439.

Locke, John L. 1983. Phonological Acquisition and Change. New York: Academic Press.

Locke, John L. and Dawn M. Pearson. 1990. “Linguistic significance of babbling: evidence from a tracheostomized infant.” Journal of Child Language 17: 1-16.

Macken, Marlys A. 1980a. “The child’s lexical representation: the ‘puzzle-puddle-pickle’ evidence.” Journal of Linguistics 16: 1-17.

Macken, Marlys A. 1980b. “Aspects of the acquisition of stop systems: a cross-linguistic perspective.” In Grace H. Yeni-Komshian, James F. Kavanagh, and Charles A. Ferguson (eds.), Child Phonology, Volume 2: Production. New York: Academic Press, 143-168.

Macken, Marlys A. 1995. “Phonological acquisition.” In John Goldsmith (ed.), The Handbook of Phonological Theory. Oxford: Blackwell, 671-696.

Maddieson, Ian. 1984. Patterns of Sounds. Cambridge: Cambridge University Press.

Maiden, Martin. 1995. A Linguistic History of Italian. London: Longman.

Mangold, Max. 1962. Duden Aussprachewörterbuch. Mannheim: Bibliographisches Institut.

McCarthy, John J. and Alan S. Prince. 1995. “Faithfulness and reduplicative identity.” In Jill N. Beckman, Laura Walsh Dickey, and Suzanne Urbanczyk (eds.), Papers in Optimality Theory, University of Massachusetts Occasional Papers 18. Dept. of Linguistics, University of Massachusetts, Amherst, 249-384.

Odden, David. 1988. “Anti antigemination and the OCP.” Linguistic Inquiry 19: 451-475.

Ohala, John J. 1974. “Phonetic explanation in phonology.” In Papers from the Parasession on Natural Phonology. Chicago Linguistic Society, 251-74.

Ohala, John J. 1978. “Production of tone.” In Victoria Fromkin (ed.), Tone: A Linguistic Survey. New York: Academic Press, 5-39.

Ohala, John. 1981. “The listener as a source of sound change.” Papers from the Parasession on Language and Behavior, Chicago Linguistic Society, 178-203.

Ohala, John J. 1983. “The origin of sound patterns in vocal tract constraints.” In Peter F. MacNeilage (ed.), The Production of Speech. New York: Springer, 189-216.

Ohala, John J. and Manjari Ohala. 1993. “The phonetics of nasal phonology: theorems and data.” In Marie K. Huffman and Rena A. Krakow (eds.), Nasals, Nasalization, and the Velum. San Diego: Academic Press.

Orr, Carolyn. 1962. “Ecuadorian Quichua phonology.” In Benjamin Elson (ed.), Studies in Ecuadorian Indian Languages I. Norman, OK: Summer Institute of Linguistics.

Pater, Joe. 1995. “Austronesian nasal substitution and other NC effects.” To appear in René Kager, Harry van der Hulst, and Wim Zonneveld (eds.), Proceedings of the Workshop on Prosodic Morphology.

Pater, Joe. 1996. “*NC.” Proceedings of the North East Linguistic Society 26, Graduate Linguistic Student Association, University of Massachusetts, Amherst, 227-239.

Pater, Joe. 1996. “Minimal violation and phonological development.” Ms., University of British Columbia, Vancouver.

Pierrehumbert, Janet B. 1980. The Phonology and Phonetics of English Intonation. Ph.D. dissertation, Massachusetts Institute of Technology. Distributed 1987 by Indiana University Linguistics Club, Bloomington.

Pinker, Steven and Paul Bloom. 1990. “Natural language and natural selection.” Behavioral and Brain Sciences 13: 707-784.

Prince, Alan and Paul Smolensky. 1993. Optimality Theory: Constraint Interaction in Generative Grammar. To appear: Cambridge, MA: MIT Press.

Rischel, Jørgen. 1974. Topics in West Greenlandic Phonology. Copenhagen: Akademisk Forlag.

Rood, David. 1975. “The implications of Wichita phonology.” Language 51: 315-337.

Rothenberg, Martin. 1968. The Breath-Stream Dynamics of Simple-Released-Plosive Production. Bibliotheca Phonetica, no. 6. Basel: Karger.

Sagey, Elisabeth. 1986. The Representation of Features in Non-linear Phonology. Ph.D. dissertation, Massachusetts Institute of Technology, Cambridge, MA. Published 1990: New York: Garland Press.

Schachter, Paul. 1969. “Natural assimilation rules in Akan.” International Journal of American Linguistics 35: 342-355.

Selkirk, Elizabeth. 1980. “Prosodic domains in phonology: Sanskrit revisited.” In Mark Aronoff and Mary-Louise Kean (eds.), Juncture: A Collection of Original Papers. Saratoga, CA: Anma Libri, 107-29.


Silverman, Daniel. 1995. Acoustic Transparency and Opacity. Ph.D. dissertation, UCLA, Los Angeles, CA.

Smith, Neilson. 1973. The Acquisition of Phonology. Cambridge: Cambridge University Press.

Smith, Bruce L. 1978. “Temporal aspects of English speech production: A developmental perspective.” Journal of Phonetics 6: 37-67.

Smolensky, Paul. 1993. “Harmony, markedness, and phonological activity.” ROA-37, Rutgers Optimality Archive, http://ruccs.rutgers.edu/roa.html.

Stampe, David. 1973. A Dissertation on Natural Phonology. Ph.D. dissertation, University of Chicago. Distributed 1979 by Indiana University Linguistics Club, Bloomington.

Steriade, Donca. 1993. “Positional neutralization.” Presentation at the 23rd meeting of the Northeastern Linguistic Society, University of Massachusetts, Amherst.

Steriade, Donca. 1995. “Underspecification and markedness.” In John Goldsmith (ed.), The Handbook of Phonological Theory. Oxford: Blackwell, 114-174.

Steriade, Donca. 1997. “Phonetics in phonology: the case of laryngeal neutralization.” Ms., UCLA, Los Angeles, CA.

Tesar, Bruce and Paul Smolensky. 1993. “The learning of Optimality Theory: An algorithm and some basic complexity results.” ROA-52, Rutgers Optimality Archive, http://ruccs.rutgers.edu/roa.html.

Tesar, Bruce and Paul Smolensky. 1995. “The learnability of Optimality Theory.” Proceedings of the Thirteenth West Coast Conference on Formal Linguistics. Stanford, CA: Center for the Study of Language and Information, 122-137.

Tesar, Bruce B. and Paul Smolensky. 1996. “Learnability in Optimality Theory.” Rutgers Optimality Archive 110, http://ruccs.rutgers.edu/roa.html.

Traill, Anthony. 1981. Phonetic and Phonological Studies of !Xõõ Bushman. Ph.D. dissertation, University of the Witwatersrand, Johannesburg.

Vance, Timothy J. 1987. An Introduction to Japanese Phonology. Albany: State University of New York Press.

Wells, John. 1982. Accents of English I: An Introduction. Cambridge: Cambridge University Press.

Westbury, John R. 1979. Aspects of the Temporal Control of Voicing in Consonant Clusters in English, Texas Linguistic Forum 14, Department of Linguistics, University of Texas at Austin.

Westbury, John. 1983. “Enlargement of the supraglottal cavity and its relation to stop consonant voicing.” Journal of the Acoustical Society of America 74: 1322-36.

Westbury, John and Patricia Keating. 1986. “On the naturalness of stop consonant voicing.” Journal of Linguistics 22: 145-166.

Dept. of Linguistics
UCLA
Los Angeles, CA 90095-1543

[email protected]