Forum Replies Created
I admit that I’m not an articulatory phonetician, and there seems to be quite a bit of variety in how the parts of the tongue are described. The division of the tongue into parts is arbitrary and conventional, and the choice of terminology seems to depend on your reasons for referring to them. My sense is that “front, mid, back” refer to the body of the tongue, which is considered to be located behind the blade. In reality these articulatory parameters are continuous rather than discrete, so classifying a particular sound as involving a particular part of the tongue is not always straightforward.
Yes, you’re on the right track. A major difference between taps/flaps and stops is that there’s no buildup of air pressure behind the occlusion. A tap makes contact, but not long enough to “stop” the airflow. (At least in theory. I’m not actually sure whether this has been demonstrated instrumentally.)
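If you ever want to poke at this yourself, one crude instrumental check is to see how long the amplitude envelope stays near zero during the consonant: for a tap you’d expect a few tens of milliseconds at most, for a full stop considerably longer. Here is a minimal Python sketch along those lines; the file name, label times, and energy threshold are all placeholder assumptions on my part, not a validated method.

```python
# Minimal sketch (file name and label times are invented): estimate how
# long the amplitude envelope stays near zero inside a hand-labeled
# consonant. A tap's occlusion should be much shorter than a stop's.
import numpy as np
from scipy.io import wavfile

rate, x = wavfile.read("utterance.wav")           # hypothetical mono file
x = x.astype(float)
x /= np.max(np.abs(x))

t0, t1 = 0.50, 0.65                               # hypothetical label times (s)
seg = x[int(t0 * rate):int(t1 * rate)]

frame = int(0.005 * rate)                         # 5 ms analysis frames
rms = np.array([np.sqrt(np.mean(seg[i:i + frame] ** 2))
                for i in range(0, len(seg) - frame, frame)])

# Treat frames below 10% of the local peak energy as the occlusion.
closure_ms = (rms < 0.1 * rms.max()).sum() * frame / rate * 1000
print(f"approximate closure duration: {closure_ms:.0f} ms")
```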
You’re definitely on the right track. There are a couple of ways that consonants and vowels can be defined, phonetically and phonologically. Phonetically, approximants are very vowel-like, though they may have steeper formant transitions than vowels and tend to be rather shorter in duration. Phonologically, approximants often behave more like consonants in the ways you’ve pointed out, i.e. not generally appearing in syllable nuclei. Of course, the definition of syllables and what makes a good nucleus will depend on the specific language you’re dealing with, and how it all fits into the phonological system as a whole.
A question for the ages! The answer is probably that the “primary” cardinal vowels more or less align with the English spoken by Daniel Jones, and the “secondary” cardinal vowels are the rounded/unrounded counterparts of those.
This is a great question. The text of the video is imprecise in that I refer to “English” when I should really be referring to certain varieties, or even more specifically to my own speech. The main point is that [b] in the canonical sense of the IPA might be expected to be produced with pre-voicing, but in actual language data we often hear/interpret sounds as belonging to a certain category even though they lack a particular characteristic. In many varieties of North American English, the cue to stop voicing is the length of the vowel that precedes the stop, rather than voicing during the stop closure itself.
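To make the vowel-length cue concrete, here is a toy Python sketch comparing vowel durations before voiced and voiceless codas; the words and times are invented for illustration, not real measurements.

```python
# Minimal sketch of the duration cue, with invented label times:
# vowels before voiced codas tend to be longer than before voiceless ones.
pairs = {
    # word: (vowel_start_s, vowel_end_s, coda_is_voiced)
    "bat": (0.12, 0.26, False),   # hypothetical measurements
    "bad": (0.12, 0.33, True),
}
for word, (t0, t1, voiced) in pairs.items():
    dur_ms = (t1 - t0) * 1000
    coda = "voiced" if voiced else "voiceless"
    print(f"{word}: vowel lasts {dur_ms:.0f} ms before a {coda} stop")
```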
What I mean is that although we can roughly relate articulatory parameters like tongue position to vowel quality, vowels are really described more according to their auditory impression.
This appears to be true for all languages and refers mainly to the lack of clear acoustic indicators of word boundaries. That is, if you were to look at a recorded utterance of spontaneous connected speech, you would not be able to identify the beginnings and ends of words based on the acoustics unless you already knew what the words were in the first place. This is true regardless of the language — to my knowledge no language places pauses between words in connected speech, for example.
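If you’d like to convince yourself of this, a rough check is to scan a recording of fluent connected speech for silent stretches; the long ones generally line up with breaths or phrase breaks rather than individual word boundaries. Here is a minimal Python sketch, assuming a mono WAV file and crude, uncalibrated thresholds of my own choosing.

```python
# Rough sanity check (file name and thresholds are assumptions): find
# silent stretches in a recording of connected speech. Runs longer than
# 100 ms usually mark breath/phrase breaks, not word boundaries.
import numpy as np
from scipy.io import wavfile

rate, x = wavfile.read("connected_speech.wav")    # hypothetical mono file
x = x.astype(float)
x /= np.max(np.abs(x))

hop = int(0.010 * rate)                           # 10 ms frames
rms = np.array([np.sqrt(np.mean(x[i:i + hop] ** 2))
                for i in range(0, len(x) - hop, hop)])
silent = rms < 0.05 * rms.max()                   # crude silence threshold

# Collect contiguous silent runs as (start_s, end_s) pairs.
runs, start = [], None
for i, s in enumerate(silent):
    if s and start is None:
        start = i
    elif not s and start is not None:
        runs.append((start * hop / rate, i * hop / rate))
        start = None
if start is not None:
    runs.append((start * hop / rate, len(silent) * hop / rate))

pauses = [(round(a, 2), round(b, 2)) for a, b in runs if b - a > 0.1]
print("silent stretches > 100 ms:", pauses)
```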
That being said, languages (and even varieties of a single language) can and do differ in the application of connected speech processes. For example, some varieties of English allow consonant assimilation across word boundaries while others do not. In Mandarin Chinese, consonants appear to be less affected by such processes, but tones are coarticulated across syllables and words in complex ways (Shen, 1990, Tonal coarticulation in Mandarin, https://doi.org/10.1016/S0095-4470(19)30394-8).
For more on cross-language similarities in connected speech processes, you may be interested in this paper (though the sample comparisons are limited to only a few European languages): Barry & Andreeva, 2002, Cross-language similarities and differences in spontaneous speech patterns, https://doi.org/10.1017/S0025100301001050
Jonathan Dowse has also made a resource of extended IPA symbols with accompanying audio: https://jbdowse.com/ipa/
Keith Johnson’s companion page to A Course in Phonetics also provides spectrograms for each phone in the IPA: https://linguistics.berkeley.edu/acip/
Left-click a symbol to listen to it; right-click to see its spectrogram.
1) We could introduce all sorts of further levels of detail as you point out, which may or may not be useful. Here is a paper from Bob Ladd addressing a similar question about phonemic categories.
It’s also possible that phonemes and phones aren’t the “right” way to characterize linguistic structure in speech at all.
2) I don’t think this question has a straightforward answer. The process of hearing and then categorizing a sound has to involve the physical structures of the ear/cochlea/implant as well as the brain. Auditory research may focus on the physical mechanics of hearing, the perceptual result of hearing, or various combinations of the two.
If you’d like to browse some research in these areas you might consult the Speech Communication and Psychological and Physiological Acoustics sections of the Journal of the Acoustical Society of America. The links above take you to the most recent issue but of course back issues are also worth perusing.
The International Phonetic Association also provides audio from a number of speakers in their interactive chart.
“Phone” refers to sounds from an acoustic as well as an articulatory perspective. One way to think about phones is that they are directly related to the physical reality of the sound in some way, either its production or its acoustic properties. Although technically there are no acoustic definitions for IPA symbols, we use them as a shorthand for both acoustic and articulatory descriptions of speech. Importantly, phones are generally things that can be captured, recorded, or measured directly. In contrast, the auditory sensation of a sound is more closely related to the notion of the phoneme: it involves perceptual judgments and categorizations that cannot be observed directly and must instead be probed through perception experiments, behavioral tasks, or listener judgments.
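As a small illustration of the “directly measurable” point, here is a Python sketch that pulls the duration and mean amplitude of a labeled phone straight out of the signal; the file name and boundary times are placeholders. No such measurement, on its own, tells you which phoneme category listeners will assign that stretch to.

```python
# Illustration of "directly measurable" (file name and phone boundaries
# are placeholders): duration and mean amplitude of a labeled phone
# fall straight out of the recorded signal, no perception experiment needed.
import numpy as np
from scipy.io import wavfile

rate, x = wavfile.read("utterance.wav")    # hypothetical mono recording
t0, t1 = 0.42, 0.55                        # hypothetical phone boundaries (s)
seg = x[int(t0 * rate):int(t1 * rate)].astype(float)

print(f"duration: {(t1 - t0) * 1000:.0f} ms")
print(f"RMS amplitude: {np.sqrt(np.mean(seg ** 2)):.1f}")
```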
In the Sound editor window, go to File > Preferences and untick “show IPA chart”.