

Catherine Browman

From Wikipedia, the free encyclopedia
Catherine P. Browman
Born: 1945
Died: July 18, 2008
Nationality: American
Education: University of Montana (B.A.); University of California, Los Angeles (Ph.D.)
Occupation: Phonologist
Known for: Articulatory Phonology
Spouse: Richard Moore (married unknown–1973)

Catherine Phebe Browman ([ˈkæθrɪn ˈfibi ˈbraʊ̯mən]; 1945–18 July 2008[1]) was an American linguist and speech scientist. She received her Ph.D. in linguistics from the University of California, Los Angeles (UCLA) in 1978.[2] Browman was a research scientist at Bell Laboratories in New Jersey (1967–1972). While at Bell Laboratories, she was known for her work on speech synthesis using demisyllables (a half-syllable unit, divided at the center of the syllable nucleus).[3] She later worked as a researcher at Haskins Laboratories in New Haven, Connecticut (1982–1998). She was best known for developing, with Louis Goldstein, the theory of articulatory phonology, a gesture-based approach to phonological and phonetic structure. The theoretical approach is incorporated in a computational model that generates speech from a gesturally specified lexicon. Browman was made an honorary member of the Association for Laboratory Phonology.[4]

Life and career


Early life and family


Catherine Browman was born in Missoula, Montana, in 1945. Her father, Ludwig Browman, was a zoologist on the faculty of the University of Montana, and her mother, Audra Browman, held a Ph.D. in biochemistry and worked as a historian in the Missoula area. Browman was the youngest of four children: she had two older brothers, Andrew and David Browman, and an older sister, Audra Adelberger.[5]

University of Montana, where Browman worked on her undergraduate degree from 1963 to 1967.

Higher education and career


Browman received a Bachelor of Arts in mathematics from the University of Montana. After graduating in 1967, she moved to New Jersey and worked as a programmer for Bell Telephone Laboratories in Murray Hill. Shortly after, she became an Associate Member of Technical Staff in the Acoustic Research Department, where she contributed to the creation of "the first Bell Laboratories text-to-speech system". The software was demonstrated at the 1972 International Conference on Speech Communication and Processing in Boston.[6] Browman’s work in the Acoustic Research Department motivated her to return to higher education. In 1972, she enrolled at the University of California, Los Angeles, where she studied under Peter Ladefoged and worked in a phonetics lab alongside Victoria Fromkin and others.[5]

Dissertation

Bell Laboratories, where Browman worked from 1967 to 1972 developing text-to-speech software.

Browman’s dissertation, titled "Tip of the Tongue and Slip of the Ear: Implications for Language Processing",[2] analyzed and compared the lexical retrieval errors (the tip-of-the-tongue phenomenon) and the perceptual errors (“slips of the ear”) that occur during casual conversation.[7] The dissertation is divided into four chapters.

The first chapter provides a general description of the tip-of-the-tongue phenomenon; Browman analyzes the role of unit size (full syllable, sub-syllable, consonant cluster, etc.), within-unit position, and stress in this phenomenon. She points out that, whereas the first consonant in a word is recalled mostly on its own, the last consonant in a stressed syllable is usually recalled with the preceding vowel.[8] The second chapter covers a general description of “slip of the ear” data and analyzes perceptual errors. Browman discusses how the majority of perceptual errors occur within a word, and further that there is a tendency to perceive words as shorter than they actually are.[9]

The third chapter carries on to investigate perceptual errors within the word. Here, Browman cites two sources of perceptual errors: low-level acoustic misanalysis and interference from higher lexical levels.[10] The final chapter compares lexical and perceptual errors to each other and to the information in the acoustic signal. Browman notes a common mechanism to both errors, namely, a mechanism that focuses attention on the beginning and end of a word and the initial portion of the stressed syllable.[11]

Career post-Ph.D.


Browman graduated with a Ph.D. in linguistics in 1978 after defending her dissertation on language processing. After graduating, she returned to Bell Telephone Laboratories to work as a postdoctoral researcher with Osamu Fujimura. The two developed LINGUA, a new demisyllable-based speech synthesis system.[12]

Haskins Laboratories, where Browman conducted phonological research from 1982 to 1998.

Browman taught in the Linguistics Department at New York University from 1982 to 1984. Upon leaving NYU, she was replaced by Noriko Umeda, with whom Browman had worked at Bell Laboratories before graduate school. Later that same year, Browman began her career at Haskins Laboratories in New Haven, Connecticut, where she would develop Articulatory Phonology,[5] her most significant contribution to the field of linguistics.

Life outside linguistics

Merrill Hall at the Asilomar Conference Center, where a celebration of Browman's academic work took place in 2019.

Browman enjoyed hiking in her home state of Montana, as well as the Southwest of the United States. In addition to outdoor adventures, she enjoyed dance. Starting in the late 1980s, Browman taught “Dances of Universal Peace” in both New Jersey and Connecticut.[1]

Later life


In 1987, Browman was diagnosed with multiple sclerosis. She gave her final public talk at the 1993 Laboratory Phonology Meeting held in Oxford, England. Two years later she lost the ability to walk but, determined to continue advancing her ideas, kept working from home on grant proposals until her death. Browman died at her home on July 18, 2008.[5] Although no official memorial was held, an unofficial celebration of her work took place during an articulatory phonology conference at the Asilomar Conference Center in Monterey, California, in 2019.

Major accomplishments


Articulatory phonology


Browman's most cited contribution to the field of linguistics is in the subfield of phonology. Along with her research partner, Louis M. Goldstein, she proposed the theory of articulatory phonology early in her research at Haskins Laboratories. Articulatory phonology creates phonological representations by describing utterances as patterns of overlapping gestures of the oral articulators. These gestural units account for both the spatial and temporal properties of speech and reflect the movement of the articulators.[13] For example, the gestures involved in producing [p] include closing the lips and spreading the glottis. This differs from previous phonological theories, which captured linguistically significant aspects of speech as non-overlapping sequences of segmental units built from features. Articulatory phonology allows overlapping gestures and temporal relations between articulators to be included in the phonological representation.[13] Articulatory phonology further posits that gestures are “prelinguistic units of action” that are harnessed for phonological structuring, suggesting a theory of phonological development.[14]

Gestures


Gestures are the most basic units of articulatory phonology, and are defined in terms of Elliot Saltzman’s task dynamics. These dynamics were instantiated in a gestural-computational model[15] at Haskins Laboratories that combines articulatory phonology and task dynamics with the articulatory synthesis system developed by Philip Rubin, Paul Mermelstein, and colleagues. To model an utterance, the system characterizes the articulators’ trajectories with the mathematics of damped mass-spring motion. According to Browman, two important features of gestures are specified using this model.[16] First, gestures are speech tasks that represent the formation and release of oral constrictions, an action that usually involves the motion of multiple articulators. Second, gestures are defined by their characteristic motions through space and over time.
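In the task-dynamics literature Browman drew on, each tract variable is modeled as a damped mass-spring system driven toward a gestural target; the following is a minimal sketch of that standard equation of motion, with the symbol labels chosen here for illustration.

```latex
% Damped mass-spring dynamics for a single tract variable x(t), e.g. lip aperture.
%   m  : effective mass (often normalized to 1)
%   b  : damping coefficient (critical damping lets x reach the target without oscillating)
%   k  : stiffness, which sets how quickly the constriction is formed or released
%   x0 : the gesture's target value for the tract variable
\[
  m\,\ddot{x}(t) + b\,\dot{x}(t) + k\bigl(x(t) - x_{0}\bigr) = 0
\]
```

While a gesture is active, the tract variable is pulled toward its target x0; the gesture's characteristic motion through space and over time falls out of these parameter settings.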

The anatomy of the vocal tract which Browman studied for her theory of articulatory phonology.

Speech tasks are further specified by tract variables. There are eight tract variables in Articulatory Phonology: lip protrusion (LP or PRO), lip aperture (LA), tongue tip constriction location (TTCL), tongue tip constriction degree (TTCD), tongue body constriction location (TBCL), tongue body constriction degree (TBCD), velic aperture (VEL), and glottal aperture (GLO). These tract variables take several values and specify the location and the extent of the constriction of an oral articulator.[16] Constriction degree values include: closed, critical, narrow, mid, and wide; constriction location values include: protruded, labial, dental, alveolar, postalveolar, palatal, velar, uvular, and pharyngeal.[17] For example, [t] consists of the gestures "GLO wide" (indicating voicelessness) and "TT alveolar closed" (indicating the place and extent of the constriction).
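As an illustration only, the sketch below encodes each gesture as a tract-variable shorthand paired with constriction degree and location values, and assembles the two gestures listed above for [t]; the class and field names are hypothetical, chosen to mirror the terminology above rather than any published implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Gesture:
    """A single constriction task, stated over a tract variable."""
    tract_variable: str             # e.g. "GLO", "LA", or the "TT"/"TB" shorthand used in the text
    degree: Optional[str] = None    # closed, critical, narrow, mid, or wide
    location: Optional[str] = None  # protruded, labial, dental, alveolar, postalveolar, ...

# The two gestures given for [t]: a wide glottal gesture (voicelessness)
# and a tongue-tip closure at the alveolar ridge (place and extent of constriction).
T_GESTURES = frozenset({
    Gesture("GLO", degree="wide"),
    Gesture("TT", degree="closed", location="alveolar"),
})

for gesture in sorted(T_GESTURES, key=lambda g: g.tract_variable):
    print(gesture)
```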

Gestures are also units of phonological contrast: two lexical items contrast if they differ by (1) the presence or absence of a gesture, (2) a parameter difference among gestures, or (3) the organization of gestures. Parameters here refer to constriction location, stiffness (which distinguishes vowels and glides), and damping (which distinguishes flaps and stops).[14] Contrasts can be seen in gestural scores. As an example, Browman illustrates that ‘add’ and ‘had’ differ by only a glottal gesture. She also explains that, whereas ‘had’ and ‘add’ previously would have been analyzed as differing by the absence of a segment (/h/) and ‘bad’ and ‘pad’ by a single feature ([voice]), the use of gestures conveys both contrasts by the presence or absence of a gesture, simplifying the analysis.[14]
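The ‘add’/‘had’ contrast can be made concrete with a small, purely illustrative sketch that treats each item's gestural score as a set of (tract variable, degree, location) triples, so that the two items differ only by the presence of a wide glottal gesture; the particular triples are hypothetical simplifications, not Browman's actual gestural scores.

```python
# Hypothetical, heavily simplified gestural scores written as sets of
# (tract variable, constriction degree, constriction location) triples.
ADD = frozenset({
    ("TB", "wide", "pharyngeal"),   # tongue-body gesture for the low vowel
    ("TT", "closed", "alveolar"),   # tongue-tip closure for the final [d]
})
HAD = ADD | {("GLO", "wide", None)}  # 'had' adds a wide glottal gesture for [h]

# Two lexical items contrast if their gestural scores differ in any gesture.
print(ADD != HAD)   # True
print(HAD - ADD)    # frozenset({('GLO', 'wide', None)}) -- the single distinguishing gesture
```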

Syllable structure

A standard diagram of syllable structure, consisting of a consonant in the onset (O) and coda (C) positions and a vowel in the nucleus (N) position. The nucleus and coda together make up the rhyme (R).

Browman takes two approaches in analyzing syllable patterns. In the first, she describes a local organization in which individual gestures are coordinated with other individual gestures. In the second, she describes a global organization in which gestures form larger groupings. Browman analyzes articulatory evidence from American English words containing different kinds of consonants and clusters. Under Browman's Articulatory Phonology analysis, the relation between the syllable-initial consonant and the following vowel gesture is defined by a global measure. In contrast, the relation of the syllable-final consonants to the preceding vowel is based on local organization.[18]

Syllable-initial consonant features

Browman compared English words containing different numbers of consonants in their onsets. She found that as more consonants are added (for example: sat, spat, splat), the timing of the whole onset cluster is adjusted. Browman notes that the timing of the onset can be defined by averaging the centers (the times at which each articulator reaches its place of articulation) of the onset consonants to produce one center for the whole consonant cluster, which she calls the c-center. As an example, in spat, the /s/ is articulated earlier than it would be in sat and the /p/ is articulated later than it would be in pat, but the average of the centers of /s/ and /p/ is equivalent to the center of the /p/ in pat. This interaction between consonants is what Browman calls global organization.[18]
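Stated as a formula (with notation chosen here for illustration rather than drawn from the original paper), the c-center is simply the mean of the individual consonant centers in the onset:

```latex
% c-center of an onset cluster of n consonants, where t_i is the "center" of
% the i-th consonant, i.e. the time at which its articulator reaches its
% place of articulation.
\[
  t_{\text{c-center}} = \frac{1}{n}\sum_{i=1}^{n} t_{i}
\]
```

On this account, adding consonants shifts the individual centers (as with /s/ and /p/ in spat) while leaving their average in roughly the position that a single onset consonant's center would occupy.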

Syllable-final consonant features

Browman also compared English words containing different numbers of consonants in their codas. She found that the first consonant in the coda has a constant timing relationship with the preceding vowel, which is not affected by the addition of more consonants to the coda. For example, the /t/ in spit and the /t/ in spits are timed in the same way and do not shift their centers within the cluster, as would occur in syllable-initial clusters. Moreover, each consonant added to the coda has a constant timing relationship with the first coda consonant. This constant timing of coda consonants relative to one another is what Browman calls local organization.[18]

Paper presentations


"The Natural Mnemopath: or, What You Know About Words You Forget"


Browman’s paper, "The Natural Mnemopath: or, What You Know About Words You Forget", was presented at the 86th meeting of the Acoustical Society of America. In this paper, Browman discusses the possible mechanisms that people use to retrieve words from their memory. In order to study what these mechanisms are, Browman compares “approximation words” produced by the individual with the “target words” during the tip-of-the-tongue phenomenon. A “target word” is the word that an individual is trying to bring to mind and say out loud, whereas an “approximation word” is the word that is produced in place of the target word that could not be recalled fully. For example, in a situation where one wants to produce “disintegration” (the target) but cannot fully recall it, one may produce “degradation” (the approximation). By comparing the qualities of “approximation words” with their corresponding “target word”, Browman investigates what features of words people use in recalling them. Her final analysis reveals that semantic factors, syntactic categories, syllable number, and the initial phoneme and grapheme are known prior to recall.[19]

"Frigidity or feature detectors-slips of the ear"


Another one of her papers, "Frigidity or feature detectors-slips of the ear", was presented at the 90th meeting of the Acoustical Society of America in San Francisco, California in 1975. The paper discusses how the mistakes made in perceptual processing can indicate the mechanisms involved in perception. In this study, over 150 misperceptions were collected by Browman and other researchers. The misperceptions were then categorized in terms of phonemic similarity and location with respect to unit boundaries (word and syllable boundaries). Browman notes four types of changes in lexical structure in the perception of spoken words, namely shifts in final word boundaries, insertions of final word boundaries, deletions of final word boundaries, and insertions of syllables. Respectively, examples of each of these changes include notary public/nota republic, herpes zoister/her peas oyster, popping really slow/prodigal son, and Freudian/accordion.[20]

"Targetless schwa: an articulatory analysis"


Her paper "Targetless schwa: an articulatory analysis" was presented at the Second Conference of Laboratory Phonology in Edinburgh, Scotland which ran from June 28, 1989 to July 3, 1989. In this paper, Browman analyzes the movements of the tongue in utterances involving the production of the schwa. Her purpose in carrying out this experiment was to investigate whether there exists a specific schwa tongue target. The tongue position for schwa is similar to the tongue's resting position (when it is inactive). This lead researchers like Bryan Gick and Ian Wilson to cite schwa-sounds as not having a specified target.[21] Browman's paper looks at data from the Tokyo X-ray archive produced by a speaker of American English. To see what gestures could underlie the schwa, Browman analyzes the production of [pVpəpVp] sequences. The results found that during the gap between the second and third lip closures, the tongue body moves toward a schwa-like position, where a schwa sound is then articulated. With these results, she concludes that target position for schwa is sometimes specified.[22]

Assessments and role in modern controversy


Positive reception


Browman's Articulatory Phonology has been noted by other phonologists, such as Nancy Hall, as being successful in analyzing the way pronunciation changes during casual speech. Hall points out that, when spoken in casual conversation, some sounds in words blend into their surroundings or disappear altogether. This contrasts with carefully spoken, isolated words, whose sounds are all audible. Hall notes that most phonological models analyze these changes with phonological rules that apply at certain rates of speech, while Browman’s theory explains these alterations as resulting from a reduction of gestures or an increase in their overlap.[23]

Critique

A list of suprasegmental features that phonologists believe require further development within the theory of articulatory phonology.

Nancy Hall has also criticized Browman’s Articulatory Phonology for its limited attention to certain phonological phenomena. Hall points out that there are several sounds for which no one has worked out what types of articulatory gestures are involved. Without concrete gestures for these sounds, articulatory phonology cannot represent, and therefore cannot analyze, language phenomena that involve them. Additionally, Hall criticizes Browman’s theory as lacking sufficient suprasegmental structure, since it gives primacy to the movements of the articulators rather than to prosodic features. Along the same lines, Articulatory Phonology is criticized for having an underdeveloped view of tone (including lexical tone and intonation), metrical structure (such as feet and stress), and prosodic morphology. Hall does, however, recognize that these gaps in Articulatory Phonology are due both to the general lack of understanding of stress and tone production and to the small number of researchers working on Articulatory Phonology.[23]

Involvement in modern controversy


Overview of phonological debate


An ongoing debate among phonologists concerns the interface between phonology and phonetics and the extent to which phonological representations should differ from phonetic ones. On one side are those, like Janet Pierrehumbert, who believe that phonological and phonetic representations are essentially different from one another; on the other are those who believe the two ought to be as similar as possible. Browman took the latter position, holding that the articulatory gesture is the single basic unit of both phonological and phonetic representation.

Browman’s perspective


From the point of view of articulatory phonology, the events of the vocal tract (phonetics) should correspond as closely as possible to the language-specific patterning of sounds (phonology). Browman’s basic unit, the articulatory gesture, is an abstract description of the articulatory events occurring in the vocal tract during speech. She argues that, by defining phonological units with these gestures, researchers can provide a set of articulatorily based natural classes, specify core aspects of phonological structure in particular languages, and account for phonological variation (allophonic variation, coarticulation, and speech errors). Thus, on Browman’s view, there is no interface between phonology and phonetics, as their representations are the same.[14]

Browman’s opponents


In contrast, phonologists and phoneticians supporting the former view hold that the kinds of representations needed for phonology and phonetics are fundamentally different. These researchers believe that the categorical alternations of phonology and the imprecise phonetic movements of speech cannot be captured by the same representation. For example, Pierrehumbert argues that phonetic representations must be quantitative and physically based in the articulators, while phonological representations must be qualitative and symbolic of the cognitive perception of sounds.[24]

Pierrehumbert's position, a subtype of what is referred to as the Targets and Interpolation model (and also utilized by Patricia Keating and Susan Hertz), relates a feature to one or more parameters in a domain (for example the feature [nasal] specifies the parameter “velic opening”) and interprets the value of the feature ([+nasal] means some amount of velic opening over some time interval). In this model, features specify the targets toward which the articulators aim, and between which the articulators move. This system gives primary focus to the targets themselves and secondary focus to the movements in between. This contrasts with Browman’s Articulatory Phonology which treats movements towards and away from these targets as equally important.[25]

Weighing the two positions


Browman’s Articulatory Phonology explains phonological observations in a way that reflects the physical reality of the articulators. For example, Browman explains the “disappearance” of [t] in the phrase perfect memory. In fast speech, when this [t] is not heard, the Targets and Interpolation model would describe it as deleted along with all of its features. In Browman’s analysis, however, the [t] is still articulated (the blade of the tongue makes the corresponding gesture) but is “hidden” by temporal overlap with the preceding and following articulations ([k] and [m]). The same mechanism can be used to explain assimilations.[25] In Browman’s analysis, such assimilations and deletions occur “more-or-less” rather than “all-or-none”: whereas articulatory phonology can account for both gradient and categorical information, previous theories are restricted to the categorical.

Selected publications

  • Browman, C. P. (1980). Rules for demisyllable synthesis using LINGUA, a language interpreter. In Proc. IEEE ICASSP '80 (pp. 561–564). New York: IEEE.
  • Browman, C. P., Goldstein, L., Kelso, J. A. S., Rubin, P. E., & Saltzman, E. (1984). Articulatory synthesis from underlying dynamics. Journal of the Acoustical Society of America, 75, S22.
  • Browman, C. P., & Goldstein, L. M. (1986). Towards an articulatory phonology. Phonology Yearbook, 3, 219–252.
  • Browman, C. P. (1986). The Hunting of the Quark: The Particle in English. Language and Speech, Vol. 29, Part 4, 311–334.
  • Browman, C. P., & Goldstein, L. (1990). Gestural specification using dynamically-defined articulatory structures. Journal of Phonetics, 18, 299–320.
  • Browman, C. P.,& Goldstein, L. (1991). Tiers in articulatory phonology, with some implications for casual speech. In J. Kingston and M. E. Beckman (eds), Papers in Laboratory Phonology I: Between the Grammar and the Physics of Speech. Cambridge, U. K.: Cambridge University Press. (pp. 341–376).
  • Browman, C. P., & Goldstein, L. (1992). Articulatory Phonology: An Overview. Phonetica, 49, 155–180.
  • Browman, C.P. & Goldstein, L. (2000). Competing constraints on intergestural coordination and self-organization of phonological structures. Bulletin de la Communication Parlée, no. 5, p. 25–34.
  • Goldstein, L., & Browman, C. P. (1986) Representation of voicing contrasts using articulatory gestures. Journal of Phonetics, 14, 339–342.
  • Saltzman, E., Rubin, P. E., Goldstein, L., & Browman, C. P. (1987). Task-dynamic modeling of interarticulator coordination. Journal of the Acoustical Society of America, 82, S15.

References

  1. ^ a b "Catherine Phebe Browman". www.haskins.yale.edu. Archived from the original on 2021-04-18. Retrieved 2021-04-18.
  2. ^ a b "Ph.D. Recipients". 2011-09-16. Archived from the original on 2011-09-16. Retrieved 2021-04-18.
  3. ^ ""Klatt Record" Audio Examples". festvox.org. Retrieved 2021-04-18.
  4. ^ "About the Association for Laboratory Phonology | labphon". labphon.org. Retrieved 2021-04-18.
  5. ^ a b c d "Catherine P. Browman 1945–2008" (PDF). Archived (PDF) from the original on 2008-10-31.
  6. ^ Klatt, D.H. (1987). "Review of Text-to-Speech Conversion for English". p. 785. Archived from the original on 2013-01-03.
  7. ^ Browman, Catherine P. (1978-08-01). "WPP, No. 42: Tip of the Tongue and Slip of the Ear: Implications for Language Processing".
  8. ^ Browman P., Catherine (1978-08-01). "WPP, No. 42: Tip of the Tongue and Slip of the Ear: Implications for Language Processing". pp. 1–51. Archived from the original on 2016-09-28.
  9. ^ Browman, Catherine P. (1978-08-01). "WPP, No. 42: Tip of the Tongue and Slip of the Ear: Implications for Language Processing". pp. 52–76. Archived from the original on 2016-09-28.
  10. ^ Browman, Catherine P. (1978-08-01). "WPP, No. 42: Tip of the Tongue and Slip of the Ear: Implications for Language Processing". pp. 77–93. Archived from the original on 2016-09-28.
  11. ^ Browman, Catherine P. (1978-08-01). "WPP, No. 42: Tip of the Tongue and Slip of the Ear: Implications for Language Processing". pp. 94–100. Archived from the original on 2016-09-28.
  12. ^ Browman, Catherine P. (1980). "Demisyllabic speech synthesis". The Journal of the Acoustical Society of America. 67, S13 (S1): S13. Bibcode:1980ASAJ...67R..13B. doi:10.1121/1.2018063.
  13. ^ a b Browman, Catherine P.; Goldstein, Louis M. (1986). "Towards an Articulatory Phonology" (PDF). Phonology Yearbook. 3: 219–252. doi:10.1017/S0952675700000658. JSTOR 4615400. S2CID 62153433 – via JSTOR.
  14. ^ a b c d Browman, Catherine P.; Goldstein, Louis M. (1992). "Articulatory Phonology: An Overview" (PDF). Phonetica. 49 (3–4): 155–180. doi:10.1159/000261913. PMID 1488456. S2CID 18762167.
  15. ^ "Haskins Gestural Model". Haskins Laboratories. Retrieved 2022-08-01.
  16. ^ a b Browman, Catherine P.; Goldstein, Louis M. (1990). "Gestural specification using dynamically-defined articulatory structures". Journal of Phonetics. 18 (3): 299–320. doi:10.1016/S0095-4470(19)30376-6.
  17. ^ Browman, Catherine P.; Goldstein, Louis M. (1989). "Articulatory Gestures as Phonological Units" (PDF). Phonology. 6 (2): 209. doi:10.1017/S0952675700001019. JSTOR 4419998. S2CID 4646833 – via JSTOR.
  18. ^ a b c Browman, C.; Goldstein, L. (1988). "Some Notes on Syllable Structure in Articulatory Phonology". Phonetica. 45 (2–4): 140–155. doi:10.1159/000261823. PMID 3255974. S2CID 15241003.
  19. ^ Browman, Catherine P. (1976). "The Natural Mnemopath: or, what You Know About Words You Forget" (PDF). UCLA Working Papers in Phonetics. 31: 62–67 – via escholarship.
  20. ^ Browman, Catherine P. (1976). "Frigidity or Feature Detectors - Slips of the Ear" (PDF). UCLA Working Papers in Phonetics. 31: 68–71 – via escholarship.
  21. ^ Gick, Bryan; Wilson, Ian (2006). "Excrescent schwa and vowel laxing: Cross-linguistic responses to conflicting articulatory targets" (PDF). Laboratory Phonology. 8: 635–659. doi:10.1515/9783110197211.3.635. ISBN 978-3-11-017678-0 – via Google Scholar.
  22. ^ Browman, Catherine P.; Goldstein, Louis M. (1990). ""Targetless" Schwa: An Articulatory Analysis". Haskins Laboratories Status Report on Speech Research. SR-101/102: 194–219. CiteSeerX 10.1.1.454.7839.
  23. ^ a b Hall, Nancy (2010-09-06). "Articulatory Phonology". Language and Linguistics Compass. 4 (9): 818–830. doi:10.1111/j.1749-818X.2010.00236.x – via Google Scholar.
  24. ^ Pierrehumbert, Janet (1990). "Phonological and phonetic representation" (PDF). Journal of Phonetics. 8 (3): 375–394. doi:10.1016/S0095-4470(19)30380-8.
  25. ^ a b Keating, Patricia A. (1996-08-01). "The Phonology-Phonetics Interface" (PDF). UCLA Working Papers in Phonetics. 92: 45–60 – via Google Scholar.