I is for Intelligibility

28 05 2017

I phoned my Spanish internet provider the other day and tried to explain a problem I was experiencing. Clearly, I was unintelligible because the operator immediately switched me to an English-speaking operator. Even then, I had trouble getting my message across, because I didn’t know how to say ‘tráfico de datos’ in English. Was I again being unintelligible, or simply incomprehensible?

This reminded me that, in a session on my MA TESOL course last summer, during a discussion on the goals of pronunciation teaching, one student mentioned the fact that she’d heard that there was a distinction between intelligibility and comprehensibility, and she asked me to explain the difference.

I volunteered an off-the-cuff explanation (as one does!), suggesting that intelligibility is a function of speakers (and particularly of their pronunciation), while comprehensibility (invoking Krashen) is a function of texts. Or, put another way, output is (to a greater or lesser extent) intelligible, while input is (to a greater or lesser extent) comprehensible.

Even as I said this, I could see there were problems. Communication is by definition reciprocal, so is it possible to gauge either intelligibility or comprehensibility without reference to interlocutors – either listeners or readers? Moreover, whether listening to a speaker or reading a text, your degree of understanding is going to be experienced in a similar way: ‘I understand it a bit, a lot, or not at all.’

Since that awkward day (sorry, Autumn, my bad!), I’ve had a chance to research the difference. For example, Munro, Derwing and Morton (2006, p. 112), referencing earlier work by the first two authors, define intelligibility ‘as the extent to which a speaker’s utterance is actually understood’, whereas comprehensibility ‘refers to the listener’s estimation of difficulty in understanding an utterance’. (So I wasn’t entirely wrong, perhaps.) They further distinguish both from accentedness, i.e. ‘the degree to which the pronunciation of an utterance sounds different from an expected production pattern’. And they add: ‘Although comprehensibility and accentedness are related to intelligibility, they are partially independent dimensions of L2 speech. An utterance that is rated by a listener as “heavily accented,” for instance, might still be understood perfectly by the same listener. Furthermore, two utterances that are fully intelligible might entail perceptibly distinct degrees of processing difficulty, such that they are rated differently for comprehensibility.’

On the other hand, and markedly differently, Nelson (2011), referencing papers by Smith (1992) and Smith and Nelson (1985), defines intelligibility as ‘word and/or utterance recognition, involving the sound system’, and comprehensibility as ‘word/utterance meaning, or locutionary force’. To further complicate matters, he introduces the term interpretability, i.e. ‘the meaning behind the word/utterance, or illocutionary force’.

McKay (2002, p. 52) helpfully (?) unpacks these distinctions with an example:

If a listener recognises that the word salt is an English word rather than a Spanish word, English is then intelligible to him or her. If the listener in addition knows the meaning of the word, it is comprehensible, and if he or she understands that the phrase, ‘Do you have any salt?’, is intended to be a request for salt, then he or she is said to be able to interpret the language.

Put another way, if you’re having trouble understanding someone, it may be a case of not recognizing what they’re saying (likely their fault), or not knowing what they mean (probably your fault), or not knowing what their intention is (could be anyone’s fault). Going back to my exchange on the phone, I can sort of apply these distinctions, but I’m also wondering if accentedness was the reason why I was switched to the English-speaking operator, since the first operator made no attempt even to negotiate some sort of understanding. (Mercifully, in a subsequent conversation with yet another operator, I was actually congratulated on my Spanish – probably because, although heavily accented, I was intelligible. Or do I mean comprehensible?)

This raises another issue related to intelligibility: that it is highly subjective. As Rajagopalan (2010, p. 467) argues, ‘No matter how one tries to define intelligibility from a neutral standpoint, the question that cries out for an answer is: “intelligible for who?”’ Why was I intelligible to one of my interlocutors but not to another? Was it, indeed, nothing to do with accent at all, but more to do with attitude? After all, it is not accents that are intelligible, it is people. I never tire of quoting Bamgbose (1998) on the subject: ‘Preoccupation with intelligibility has often taken an abstract form characterized by decontextualised comparison of varieties. The point is often missed that it is people, not language codes, that understand one another’ (quoted in Jenkins, 2007, p. 84). Thus, intelligibility may have as much to do with our overall impression of a speaker as it has to do with the intrusiveness of their accent (or lack thereof) – not dissimilar to the notion of ‘comfortable intelligibility’ (Kenworthy 1987) or ‘perceived fluency’ (Lennon 2000, cited in Götz 2013).

Either way, this doesn’t provide a lot of solace to those who have to assess a learner’s pronunciation, as in the kinds of oral tests favoured by many public exams nowadays, using descriptors such as these:

  • is easy to understand throughout; L1 accent has minimal effect on intelligibility
  • can generally be understood throughout, though mispronunciation of individual words or sounds reduces clarity at times

What are the chances that any two raters will agree?

References

Götz, S. (2013) Fluency in native and nonnative English speech. Amsterdam: John Benjamins.

Jenkins, J.  (2007) English as a Lingua Franca: Attitude and Identity. Oxford: Oxford University Press.

Kenworthy, J. (1987) Teaching English Pronunciation. Harlow: Longman.

McKay, S. (2002) Teaching English as an International Language. Oxford: Oxford University Press.

Munro, M.J., Derwing, T.M., & Morton, S.L. (2006) ‘The mutual intelligibility of L2 speech.’ Studies in Second Language Acquisition, 28.

Nelson, C. L. (2011). Intelligibility in World Englishes: Theory and Application. New York: Routledge.

Rajagopalan, K. (2010) ‘The soft ideological underbelly of the notion of intelligibility in discussions about “World Englishes”.’ Applied Linguistics, 31/3.

Smith, L. E. (1992). Spread of English and issues of intelligibility. In B. B. Kachru (ed.) The Other Tongue: English across Cultures (Second Edition). Urbana, IL: University of Illinois Press.

Smith, L. E. and Nelson, C. L. (1985). ‘International intelligibility of English: Directions and resources.’ World Englishes, 4(3).

Illustrations by Quentin Blake from Success with English, by Geoffrey Broughton, Penguin Education, 1968.





I is for Intonation

22 02 2015

For someone who has never enjoyed – nor succeeded at – teaching intonation, I was gratified to find that John Wells shares my scepticism. In his latest book, Sounds Interesting: Observations on English and general phonetics (Wells 2014) he writes:

Most learners of English as an additional language… are not taught intonation and do not study intonation. Yet they do not speak English on a monotone. A few may be gifted mimics who succeed in imitating intonation along with everything else in the phonetics of the target language. For most, though, their intonation patterns are presumably those of their first language, transferred to English.

The same applies to English learners of foreign languages.

On the whole, even though this may make the speaker sound strange, typical of their origin, boring or annoying, it seems not to cause much of an actual breakdown in communication. How can this be?

It must be because the principles of intonation in language are sufficiently universal for us to be able to rely on them even in a foreign-language situation.

Wells (who, I hope I don’t have to remind you, is probably Britain’s foremost phonetician) goes on to look at the different functions of intonation in terms of their universality. The three systems in which intonation is implicated are: 1. the tonality system, i.e. the chunking of speech into meaningful units; 2. the tonicity system, i.e. the assigning of nuclear stress within these units; and 3. the tone system, i.e. the use of changes in pitch to convey certain kinds of meaning, such as assertion vs non-assertion, completion vs non-completion, high involvement vs low involvement.

Of the three, he argues that tonality and the meaningful use of tones seem both to be linguistic universals. Tonicity, on the other hand, does not. Whereas in English we would ask

Do you want your coffee WITH milk or withOUT milk?

in Spanish this would more likely be:

¿Quiere el café con LEche o sin LEche?

Given the way that nuclear stress plays an important role in flagging new information in discourse, this would seem to be something worth teaching, if not for production, at least for recognition.

A quick scan of a number of current coursebooks suggests that it is an area that does indeed get fairly regular – if not detailed – treatment. But so too do the other, supposedly universal, features of intonation, such as the use of a wide pitch span, or high key, to signal politeness. Or the different intonation contours of wh- and yes/no questions. Or the use of falling intonation to signal the end of a list. And so on.

Are we wasting our students’ time? If their goal is to be communicatively effective in international contexts, probably yes. In making her case for a lingua franca phonological core, Jennifer Jenkins (2000, p. 153) argues:

Even if it were possible to teach pitch in the classroom, I do not believe that the use of “native speaker” pitch movements matters very much for intelligibility in interactions among [non-native speakers]. This feature of the intonation system seldom leads to communication problems in the [interlanguage talk] data …

But, anticipating Wells, she goes on to argue:

Nuclear stress, however, is a completely different story [and] it is crucial for intelligibility in interlanguage talk (ibid.).

With regard to the redundancy of teaching the rest of the systems, Wells (who happens to be a fluent speaker of Esperanto) nails his case thus:

These points about intonation in EFL applied equally to intonation in Esperanto: somehow speakers manage to understand one another in the language very well despite the lack of any agreed, taught or described intonation system.

References:

Jenkins, J. 2000. The Phonology of English as an International Language. Oxford: Oxford University Press.

Wells, J.C. 2014. Sounds Interesting: Observations on English and general phonetics. Cambridge: Cambridge University Press.

(This post started life as a thread on the Facebook site of the ELT Writers Connected group.)





P is for Phoneme

17 03 2013

Is the phoneme dead?

We’ve been doing a unit on phonology, and my doubts about the phoneme are partly a reflection of my students’ own difficulties with the concept.  Not surprisingly, I’ve been having to tease out the difference between phonemic symbols and phonetic symbols, and even between phonology and phonics.

But all the time I’ve been dreading the day when someone challenges this definition (from An A to Z):

‘A phoneme is one of the distinctive sounds of a particular language. That is to say, it is not any sound, but it is a sound that, to speakers of the language, cannot be replaced with another sound without causing a change in meaning’.

The definition has an authoritative ring to it, not least because it simply re-states what by many is considered a founding principle of all linguistics. Listen to Jakobson (1990: 230) who practically bellows the fact: ’The linguistic value … of any phoneme in any language whatever, is only its power to distinguish the word containing this phoneme from any words which, similar in all other respects, contain some other phoneme’ (emphasis in original).

How is it, then, that we regularly teach that the ‘s’ at the end of cats is a different phoneme than the ‘s’ at the end of dogs? If different phonemes flag different meanings, what change of meaning is represented in the difference between /s/ and /z/? Or, for that matter, between final /t/ and final /d/, as in chased and killed? If there is no difference in meaning (since /s/ and /z/ both index plurality, and /t/ and /d/ both index past tense), aren’t they simply different ways of pronouncing the same phoneme?
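The alternation is, in fact, entirely rule-governed: the suffix simply agrees in voicing with the last sound of the stem. Here is a minimal sketch of that rule (the sound classes are simplified assumptions for illustration, not a complete description of English phonology):

```python
# Sketch of the English plural and past-tense allomorphy rules:
# the suffix's voicing is conditioned by the stem's final sound.
# NB: VOICELESS and SIBILANTS are deliberately simplified sets.

VOICELESS = {"p", "t", "k", "f", "s", "ʃ", "tʃ", "θ"}
SIBILANTS = {"s", "z", "ʃ", "ʒ", "tʃ", "dʒ"}

def plural_suffix(final_sound: str) -> str:
    """Realization of the plural morpheme after a given stem-final sound."""
    if final_sound in SIBILANTS:
        return "ɪz"  # horses
    if final_sound in VOICELESS:
        return "s"   # cats
    return "z"       # dogs (voiced consonants and vowels)

def past_suffix(final_sound: str) -> str:
    """Realization of the regular past-tense morpheme."""
    if final_sound in {"t", "d"}:
        return "ɪd"  # wanted
    if final_sound in VOICELESS:
        return "t"   # chased
    return "d"       # killed

assert plural_suffix("t") == "s" and plural_suffix("g") == "z"
assert past_suffix("s") == "t" and past_suffix("l") == "d"
```

Which is precisely the awkwardness: the choice between /s/ and /z/ is made entirely by the environment, never by the meaning.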

Phonemes, after all, are not phones, i.e. sounds. Acoustically speaking there are many different ways – even for a single speaker – of realizing a specific phoneme. This is why Daniel Jones (1950: 7) defined phonemes as ‘small families of sounds, each family consisting of an important sound of the language together with other related sounds’ (my emphasis). These related sounds are the different allophones of the phoneme.

Hence the analogy with chess pieces: the way individual chess pieces are designed will vary from set to set, but they will always bear certain family resemblances, bishops all having mitres, and knights having horse heads, etc. More important than their form (and one reason that this analogy seems to work so well) is the relationship that they have with one another, including the ‘rules’ that constrain the way that they may behave. Bishops can’t do what knights do, nor go where knights go, and vice versa.

Phonemes – like chess pieces – are defined in relation to one another. As Bloomfield (1935: 81) put it, ‘the phoneme is kept distinct from all other phonemes of its language. Thus, we speak the vowel of a word like pen in a great many ways, but not in any way that belongs to the vowel of pin, and not in any way that belongs to the vowel of pan: the three types are kept rigidly apart.’

In fact, a purely structuralist argument would say it’s not actually about meaning at all, it’s about ‘complementary distribution’, or, as Jones (1950: 132) puts it (also bellowing): ‘NO ONE MEMBER EVER OCCURS IN A WORD IN THE SAME PHONETIC CONTEXT AS ANY OTHER MEMBER’. That is to say, the /s/ at the end of cats and the /z/ at the end of dogs never occur where the other occurs, and vice versa. But is this true? What happens to the /z/ at the end of dogs in the sentence: The dogs seem restless? Hasn’t it become /s/?

Ah, yes, you say – but sounds in connected speech are influenced by their environment, blending with or accommodating to the sounds around them. The true test for a phoneme is whether it distinguishes isolated words, like pin and pen – those infamous minimal pairs. But when are words ever isolated? When does the phonetic environment not have an effect? And aren’t the voiced /z/ at the end of dogs and the unvoiced /s/ at the end of cats also effects of the phonetic environment? That is to say, where does connected speech start becoming connected if not at the juxtaposition of two sounds?
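The minimal-pair test itself is mechanical enough to sketch in code: two words form a minimal pair if their transcriptions are the same length and differ in exactly one segment. (The toy lexicon below is invented for illustration; a real test would draw on a pronunciation dictionary.)

```python
from itertools import combinations

# Toy lexicon: word → phonemic transcription as a list of segments.
LEXICON = {
    "pin": ["p", "ɪ", "n"],
    "pen": ["p", "ɛ", "n"],
    "pan": ["p", "æ", "n"],
    "bin": ["b", "ɪ", "n"],
    "pint": ["p", "aɪ", "n", "t"],
}

def is_minimal_pair(a: list, b: list) -> bool:
    """True if two transcriptions differ in exactly one segment."""
    return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1

pairs = [(w1, w2)
         for (w1, a), (w2, b) in combinations(LEXICON.items(), 2)
         if is_minimal_pair(a, b)]

print(pairs)
# → [('pin', 'pen'), ('pin', 'pan'), ('pin', 'bin'), ('pen', 'pan')]
```

By this test, pin/pen/pan establish /ɪ/, /ɛ/ and /æ/ as distinct phonemes, and pin/bin does the same for /p/ and /b/ – exactly Bloomfield’s point about the vowels being ‘kept rigidly apart’.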

It gets even trickier when we consider weak forms. There are at least two different ways of saying can, as in I can dance: I /kæn/ dance, or I /kən/ dance. Both are possible, even where the stress remains on dance. The latter is simply more reduced. But the meaning is unchanged. [kæn] and [kən] are not minimal pairs. They are different phonetic realizations of the same word (hence the square brackets). Phonetic. Not phonemic. Shouldn’t they both, therefore, be transcribed as /kæn/?

In researching this, I’ve encountered a lot of debate as to whether the concept of the phoneme has any currency at all any more. As one scholar puts it, ‘the phoneme, to all appearances, no longer holds a central place in phonological theory’ (Dresher 2011: 241). The problem seems to boil down to one of identification: is the phoneme a physical thing that can be objectively described, or is it psychological – a mental representation independent of the nature of the acoustic signal?

eThe answer to the first question (is it physical?) seems to be no, there are no ‘distinctive features’ or family resemblances (such as voicing or lip-rounding) that unequivocally categorize sounds as belonging to one phoneme family and not another.

On the other hand, there is some evidence, including neurological, that the phoneme does have a psychological reality, and that speakers of languages that share the same sounds will perceive these sounds differently, according to whether they flag meaning differences or not. (This is analogous to the idea that if your language does not distinguish between blue and green, you will see both blue and green as being shades of the same colour).  This, in turn, is consistent with Jakobson’s claim that ‘if we compare any two particular languages, we will see that from an acoustic and motor point of view their sounds could be identical, while the way they are grouped into phonemes is different’ (p. 223).

It’s not for nothing, therefore, that the concept of the phoneme has given us the very valuable distinction between emic and etic, i.e. the perspective of the insider vs that of the outsider. Phonemes capture something that we, the insiders, intuit about language, even if their objective reality is elusive. We know that pronunciation impacts on meaning, even if we don’t quite know how.

Perhaps Jakobson (op. cit. 230) had good reason to claim, therefore, that ‘the phoneme functions, ergo it exists’.

References:

Bloomfield, L. (1935) Language, London: George Allen & Unwin.

Dresher, E. (2011) ‘The Phoneme’, in van Oostendorp, M., Ewen, C.J., Hume, E., & Rice, K. (eds) The Blackwell Companion to Phonology, Oxford: Blackwell.

Jakobson, R. (1990) On Language, edited by Waugh, L.R. & Monville-Burston, M., Cambridge, Mass: Harvard University Press.

Jones, D. (1950) The Phoneme: Its nature and use, Cambridge: W. Heffer & Sons.

Illustrations from the very clever phonemic chart that comes with English File (Oxenden, C. and Seligson, P., 1996, Oxford University Press).





A is for Accommodation

6 01 2013

You may well have seen this YouTube clip a month or so ago: British footballer Joey Barton is interviewed in France not long after having debuted for the Marseille football club.  Much commented upon – and mocked – was his thick French accent, despite his being a native speaker of English and speaking little or no French. The Daily Mail, for example, described it as ‘an embarrassing display’ and ‘a comedy French accent’. Judge for yourself…

What Barton of course was doing (although neither he nor the Daily Mail named it as such) was accommodating his accent to that of his audience. Accommodation, as Robin Walker (2010: 97) reminds us, is ‘the ability to adjust your speech and aspects of spoken communication so that they become more (or less) like that of your interlocutors’. David Crystal (2003: 6) adds that, ‘among the reasons why people converge towards the speech pattern of their listener are the desires to identify more closely with the listener, to win social approval, or simply to increase the communicative efficiency of the interaction’.

Winning social approval may well have motivated Barton, a newcomer to the region, to assume a French accent. But more important still was the need to be intelligible: in his defence he had said that ‘it is very difficult to do a press conference in Scouse for a room full of French journalists. The alternative is to speak like a ‘Allo Allo!’ character’.

Whatever the reason, Barton’s much-publicized accommodation is a good, if extreme, example of what most of us tend to do naturally and instinctively, and not just at the level of accent.  Jenny Jenkins (2000: 169) identifies a wide range of linguistic and prosodic features that are subject to convergence between speakers, ‘such as speech rate, pauses, utterance length, pronunciation and… non-vocal features such as smiling and gaze’.

And, as Richardson et al. (2008: 75) note, ‘conversational partners do not limit their behavioural coordination to speech. They spontaneously move in synchrony with each other’s speech rhythms’, a finding which is likened to the ‘synchrony, swing, and coordination’ displayed by members of a jazz band. The researchers tracked the posture and gaze position of conversants to show that this coordination is not simply a byproduct of the interaction, but the physical embodiment of the speakers’ cognitive alignment – ‘an intimate temporal coupling between conversants’ (p. 88) or, in T. S. Eliot’s words, ‘the whole consort dancing together’.

Arguably, accommodation occurs not only at the paralinguistic level, but at the linguistic one too. As we speak, for example, we are continuously monitoring our interlocutor’s degree of understanding, and adjusting our message accordingly. This is especially obvious in the way we talk to children and non-native speakers, forms of talk called  ‘caretaker talk’ and ‘foreigner talk’, respectively. Both varieties are characterized by considerable simplification, although there are significant differences. Caretaker talk is often pitched higher and is slower than talk used with adults, but, while simpler, is nearly always grammatically well-formed. Foreigner talk, on the other hand, tolerates greater use of non-grammatical, pidgin-like forms, as in ‘me wait you here’, or ‘you like drink much, no?’

Various theories have been proposed as to how speakers modify their talk like this. One is that they ‘regress’ to an early stage in their own language development. Another is that they negotiate a mutually-intelligible degree of communication. A third (and this is really a form of accommodation) is that they simply match their language to that of their interlocutor, imitating its simplifications, including its lack of grammatical accuracy. Rod Ellis (1994: 265), however, thinks that this explanation is unlikely, as ‘it is probably asking too much of learners’ interlocutors to measure simultaneously the learners’ phonology, lexicon, syntax, and discourse with sufficient accuracy to adjust their own language output’.

However, this was written before the discovery of ‘mirror neurons’, and their key role in enabling imitative behavior. As Iacoboni (2008: 91-92) observes, ‘the fact that the major language area of the human brain is also a critical area for imitation and contains mirror neurons offers a new view of language and cognition in general’. According to Iacoboni, it is because of these mirror neurons that ‘during conversations we imitate each other’s expressions, even each other’s syntactic constructions… If one person engaged in a dialogue uses the word “sofa” rather than the word “couch,” the other person engaged in the dialogue will do the same’ (op. cit. 97-98).

It seems, then, that as humans we are hard-wired to imitate one another.

So, what are the implications for language teaching? In the interests both of intelligibility and establishing ‘comity’, Joey Barton’s adaptive accent strategy may be the way to go. For learners of English, whose interlocutors may not themselves be native speakers, this may mean learning to adapt to other non-native speaker accents. As Jenkins (2007: 238) argues, ‘in international communication, the ability to accommodate to interlocutors with other first languages than one’s own… is a far more important skill than the ability to imitate the English of a native speaker.’

So, in the interests of mutual intelligibility, rather than teaching pronunciation per se, maybe we should be teaching accommodation skills. The question, of course, is how?

References:

Crystal, D. (2003) A Dictionary of Linguistics and Phonetics (5th edition) Oxford: Blackwell.

Ellis, R. (1994) The Study of Second language Acquisition, Oxford: Oxford University Press.

Iacoboni, M. (2008) Mirroring People: The New Science of How We Connect with Others, New York: Farrar, Straus and Giroux,

Jenkins, J. (2000) The Phonology of English as an International Language, Oxford: Oxford University Press.

Jenkins, J. (2007) English as a Lingua Franca: Attitude and Identity, Oxford: Oxford University Press.

Richardson, D.C., Dale, R., & Shockley, K. (2008) ‘Synchrony and swaying in conversation: coordination, temporal dynamics, and communication,’ in Wachsmuth, I., Lenzen, M., & Knoblich, G. (eds) Embodied Communication in Humans and Machines, Oxford: Oxford University Press.

Walker, R. (2010) Teaching the Pronunciation of English as a Lingua Franca, Oxford: Oxford University Press.

Illustrations from Ogden, C.K. (ed.) (n.d.) The Basic Way to English, London: Evans Brothers.





V is for Voice setting

18 12 2011

A correspondent has reminded me of an article I wrote – ages ago – on voice setting (you can read it here):

I have just read your article ‘Having a good jaw: voice setting phonology’, and having noted the year in which it was published, I am interested to find out if you or anyone else, has conducted any studies on the exercises you suggested?

Never mind the mouth, check out the tash!

Just to remind you, voice setting – or ‘bases of articulation’ –  is the general term for those “general differences in tension, in tongue shape, in pressure of the articulators, in lip and cheek and jaw posture and movement, which run through the whole articulatory process” (O’Connor 1973:289).  It’s argued that voice settings vary from language to language, e.g.

“In English the lips and jaw move little, in French they move much more, with vigorous lip-rounding and spreading: the cheeks are relaxed in English but tensed in French: the tongue-tip is tenser in English and more used than in French, where the blade is dominant, and so on.” (O’Connor op.cit.)

Over the years I’ve collected a number of non-specialist descriptions – from novels and poems, principally – that nicely capture voice setting characteristics. Here’s a selection:

“His voice rang like a metal clipper hitting a bucket and he spoke English. Proper English … he sprinkled ers and even errers in his sentences as liberally as he gave out his twisted-mouth smiles. His lips pulled not down… but to the side, and his head lay on one side or the other, but never straight on the end of his neck”. (Maya Angelou, I Know Why the Caged Bird Sings).

When you hear it languishing

and hooing and cooing and sidling through the front teeth,

the oxford voice

or worse still

the would-be oxford voice

you don’t even laugh anymore, you can’t …

(D.H.Lawrence: “The Oxford Voice”)

“Watching him twisting his mouth into that intelligently ironical shape that is necessary for the production of Dutch noises, I was reminded of how much I liked the semi-gargling sound Netherlanders make, brewing each word up at the back of their throats and then having to unpick it with their teeth.”  (Howard Jacobson: In the Land of Oz)

What I was arguing (in the aforementioned article) was that accurate pronunciation at the segmental level (i.e. of individual sounds) is at least partly contingent on adjusting to the specific voice setting for the language you’re trying to speak. That is to say, accent is as much an effect of top-down features as it is of bottom-up ones. Hence, it might repay teachers of pronunciation to start working on these top-down features first, in advance of fine-tuning for phonemic distinctions.

To that end, I suggested an activity sequence that included awareness-raising activities such as watching videos of speakers with the sound off, in order to try and guess what language they are speaking, or role play activities where learners attempt to speak their own language with a marked English (RP or GA) accent, in the way that – for example – Brits or ‘gringos’ are portrayed locally in the movies. This might lead to some discussion as to what is actually happening – physically – when you ‘speak with an English accent’.

Read my lips

But, to answer my correspondent’s question, I don’t know of any follow-up to these suggestions, or, for that matter, of any research into the pedagogical applications of voice setting theory at all. Besides, I’m wondering whether – in this era of English as a Lingua Franca – it is really all that necessary to take such drastic steps to ‘nativise’ learners’ accents.

References:

O’Connor, J.D. 1973. Phonetics. Harmondsworth: Penguin.

Thornbury, S. 1993. Having a good jaw: voice-setting phonology. ELT Journal, 47/2, 126-31.

Illustrations from Jones, D. 1932. An Outline of English Phonetics (3rd edn.) Leipzig: Teubner.





P is for Phonemic Chart

8 08 2010

(That’s phonEMIC, not phonETIC, by the way. There’s a big difference!)

Ever since I’ve been teaching in the US I’ve been challenged by the need to devise a chart of the phonemes of American English (General American or GA) that can be used in the same way as the original British English (RP) chart, both as a training and a teaching tool. (Incidentally, it’s an often overlooked fact that the layout of the original RP chart – along with lots of ways of exploiting it in class – is due to the work of Adrian Underhill).

Adrian Underhill's 'Sound Foundations' Chart (Macmillan)

In fact, the search for a GA equivalent goes back even earlier, to 1995, when I was assessing a CELTA course here in New York and was surprised to find that the language analysis trainer was trying to knock the round peg of GA sounds into the square hole of the RP chart. Fifteen years later I discover that not much has changed: another large training organisation here is using an “Americanized” version of the original RP chart, but one which not only includes five more vowel sounds than GA is normally credited with having, but adds two diphthongs (/ʌɪ/ and /ɔʊ/) that, as far as I know, belong to no known variety of English!

Of course, the problem of devising a GA chart is complicated by the fact that – unlike the case of RP – there is no single, agreed upon, system of transcribing American vowels. (Compare any two American learners’ dictionaries, for instance). This is probably due to the fact that, while there is less accent variation across North America than there is within the British Isles, there is no single variety that can (or is allowed to) claim the prestigious status that RP enjoys.

In 2007, while teaching at SIT in Brattleboro, Vermont, I came up with a chart that was based closely on the description in Celce-Murcia et al. (1996) – see inset below (click to expand).

GA chart (after Celce-Murcia et. al., 1996)

The layout of the chart attempts to reflect the elegance of Adrian’s RP chart, with the consonants ranged from front-of-mouth to back-of-mouth obstruction, and the vowels roughly mapped on to the classic (Daniel Jones?) vowel quadrant. In terms of the symbols, the consonants were not a problem: the only change involved replacing the symbol /j/ with /y/. The vowels were another story.

First of all, the layout had to be reconfigured to accommodate the fewer vowel sounds of GA (16 vs 20 in RP). While the three ‘heterogeneous’ diphthongs are separated out and colour-coded, no attempt was made to distinguish the simple vowels from the vowels with an adjacent glide (/iy/, /ey/, /ow/) since the latter, technically, are not diphthongs. Nor were combinations with /r/ (such as /ır/ and /or/) included, since, technically, these are not individual phonemes but are attempts to represent the way certain vowel sounds are “colored” by the consonants that follow them (which may be /r/, /l/ or /rl/). The only exception I made was the case of /ɜr/ which, as Celce-Murcia et al. point out, is used “to capture a significant difference in quality between the /ʌ/ in bud and the /ɜr/ in bird” (p. 105) and which they include as their “15th phoneme” of North American English (the 16th being the schwa). Finally, an optional superscript /r/ was added to the schwa, because the combination of schwa and post-vocalic /r/ is often distinguished from schwa, phonetically, by being transcribed with a different symbol (ɚ). This represents the (phonemic) difference in GA between the final vowels in cheetah and cheater, for example. Note also that both /ɔ/ and /ɑ/ are represented in the chart, in deference to those varieties of GA that do distinguish between caught and cot.

This chart has served OK over the years, but I’ve not been entirely happy with it – not least because of the use of the consonant symbols /y/ and /w/ to flag lengthening and lip rounding, as well as the clumsy superscript [r]s. So I revisited the literature, and came up with a new one, based on the description in Roca and Johnson (1999). The consonants remain as they were. The main differences in the vowels are that I’ve abandoned the /y/ and /w/ add-ons, substituting symbols that more accurately realise the phonetic qualities of the homogeneous (adjacent glide) and heterogeneous (non-adjacent glide) diphthongs, colour-coding these respectively, as well as substituting the symbol /ɚ/ for the r-coloured schwa alternative, and /ɝ/ for the r-coloured vowel in bird. I’ve also re-positioned /ʌ/ so that its central and back quality is more accurately represented, and turned the division between /ɔ/ and /ɑ/ into a dotted line to flag that, in some varieties, these two sounds are not distinguished. You can view a pdf version of the revised chart here: AmE phonemic chart v.5

All comments will be gratefully received and acknowledged.

References:

Celce-Murcia, M., Brinton, D.M., and Goodwin, J.M. (1996) Teaching Pronunciation. Cambridge University Press.

Roca, I., and Johnson, W. (1999) A Course in Phonology. Oxford: Blackwell.

Appendix:

Click here ( US phonemic chart ) to see a pdf version of Adrian Underhill’s GA Chart – mentioned in his comments below. (Thanks, Adrian!)

P is for Pronunciation

1 08 2010

Read my lips

I’ve just completed a nine-hour block of sessions on phonology on the MA TESOL course that I’m teaching at the New School. Apart from the inevitable (and sometimes intractable) problems involved in reconfiguring my knowledge of phonology so as to accommodate North American accents, the question that simply will not go away is this: Can pronunciation be taught?

As a teacher, I have to confess that I can’t recall any enduring effects for teaching pronunciation in class – but then, I very seldom addressed it in any kind of segregated, pre-emptive fashion. Most of my ‘teaching’ of pronunciation was reactive – a case of responding to learners’ mispronunciations with either real or feigned incomprehension. There are only two pron-focused lessons that I can remember feeling good about: one was where I used an inductive approach to guide a group of fairly advanced learners to work out the rules (or, better, tendencies) of word stress in polysyllabic words (the students seemed generally impressed that the system was not as arbitrary as it had appeared), and another where I used a banal dialogue that happened to be in the students’ workbook to highlight the different spellings of the /ay/ phoneme – a lesson that was more about spelling than pronunciation, really – but, again, one that helped dispel the myth that there are zero sound-spelling relationships in English.

As a second language learner, any attempts to improve my pronunciation have fallen (almost literally) on deaf ears. I remember being told by a well-intentioned Spanish teacher: “Your problem is that you use the English ‘t’ sound instead of the Spanish one”. To which I replied, “No, the ‘t’ sound is the very least of my problems! My problem is that I don’t know the endings of the verbs, that I don’t have an extensive vocabulary, that I can’t produce more than two words at a time. … and so on”. That is to say, in the greater scheme of things, the phonetic rendering of a single consonant sound was not going to help me become a proficient speaker of Spanish. Nor was it something I would be able to focus any attention on, when my attention was so totally absorbed with simply getting the right words out in the right order. And nor, at the end of the day, would I ever be able to rid myself of my wretched English accent, however hard I tried (assuming, of course, I wanted to).

Hence, I’m fairly sceptical about the value of teaching pronunciation, and I suspect that most of the exercises and activities that belong to the canonical pron-teaching repertoire probably have only incidental learning benefits.  A minimal pairs exercise (of the ship vs sheep type) might teach some useful vocabulary; a jazz chant might reinforce a frequently used chunk. But neither is likely to improve a learner’s pronunciation. Certain learners (a small minority, I suspect) with good ears and a real motivation to “sound like a native speaker” might just squeeze some benefit out of a pron lesson, but for the majority it will probably just wash right over them.

In An A-Z of ELT, I hint obliquely at these doubts – doubts which I claim are justified by research studies. What studies?

Well, here’s one for starters. In an early attempt to tease out the factors that predicted good pronunciation, Suter (1976) co-opted a panel of non-specialist informants to assess the pronunciation of 61 English learners from a range of language backgrounds and with different histories of exposure and instruction. Twelve biographical factors were found to correlate with good pronunciation, and, in a subsequent re-analysis of the data (Purcell and Suter 1980), these were reduced to just four. These four predictors of acceptable pronunciation were (in order of importance):

  • the learner’s first language (i.e., all things being equal, a speaker of, say, Swedish is more likely to pronounce English better than a speaker of, say, Vietnamese)
  • aptitude for oral mimicry (i.e. ‘having a good ear’)
  • length of residency in an English-speaking environment
  • strength of concern for pronunciation accuracy

Significantly, none of the above factors is really within the teacher’s control (although the last – the motivational one – could arguably be nurtured by the teacher). What’s more, the learners’ histories of instruction seemed to have had no significant impact on the accuracy of their pronunciation. The researchers commented: “One of the most obvious [implications of the study] relates to the fact that teachers and classrooms seem to have had remarkably little to do with how well our students pronounced English”.

Now, is this bad news (we can’t do much to help our learners achieve acceptable standards of pronunciation)? Or is it good news (we don’t have to teach pronunciation, and can spend the time saved on more important stuff)?

References:

Purcell, E.T., and Suter, R.W. 1980. Predictors of Pronunciation Accuracy: a Re-examination. Language Learning, 30, 271-287.

Suter, R.W. 1976. Predictors of Pronunciation Accuracy in Second Language Learning. Language Learning, 26: 233-253.