
Knish
Why is baseball called be-su-bo-ru in Japanese? Why do most learners say clothiz and not clothes? Why am I called Escott by Spanish speakers and Arabic speakers alike? Why can we say /gz/ when it is in the middle of a word (exam) and at the end of a word (dogs) but not at the beginning? (Check a dictionary if you are in any doubt). Why are clash and crash recognizably English words but cnash is not? Is it because it’s hard to say? Well, not if you can say knish, which – if you live in New York, and like to eat them – you regularly do. It’s not that we can’t say cnash, cfash or cpash – we just don’t.
Why? The answer, of course, is to be found in phonotactics, i.e. the study of the sound combinations that are permissible in any given language. (Important note: we are talking about sound combinations – not letter combinations – this is not about spelling.) In Japanese, syllables are limited to a single consonant plus vowel construction (CV), with strong constraints on whether another consonant can be added (CVC). Hence be-su-bo-ru for baseball. And bat-to for bat, and su-to-rai-ku for strike (Zsiga 2006). As for Escott: Spanish does not allow words to begin with /s/ plus another consonant – hence the insertion of word-initial /ɛ/, which gives Escott (like escuela, estado, etc.) – a process called epenthesis. (Epenthesis also accounts for the extra vowel English speakers insert in certain regular past tense forms: liked and loved take none, but wanted does.)
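The epenthesis rule behind Escott is simple enough to sketch in a few lines of code. The toy function below is my own illustration, not a serious phonological model: it uses letter strings to stand in for phonemes and implements only the one constraint under discussion (no word-initial /s/ + consonant).

```python
# Toy illustration of epenthesis: Spanish bans word-initial
# /s/ + consonant clusters, so a vowel is inserted before them.
# (Letters stand in for phonemes here -- a deliberate simplification.)

VOWELS = set("aeiou")

def epenthesize(word: str) -> str:
    """Insert 'e' before a word-initial s + consonant cluster."""
    if (len(word) >= 2
            and word[0] == "s"
            and word[1] not in VOWELS):
        return "e" + word
    return word

print(epenthesize("scott"))   # escott (cf. escuela, estado)
print(epenthesize("strike"))  # estrike
print(epenthesize("sol"))     # sol -- /s/ + vowel is permissible
```

The point of the sketch is that nothing about the individual sounds changes: the repair is triggered purely by the position and combination of sounds, which is exactly what phonotactics is about.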

Shmuck with knish
English allows for many more consonant clusters than, say, Japanese or Hawaiian (which has only 13 phonemes in all), but nothing like as many as some languages, such as Russian. According to O’Connor (1973, p. 231), ‘there are 289 initial consonant clusters in Russian as compared with 50 in English.’ English almost makes up for this by allowing many more word-final clusters (think, for example, of sixth and glimpsed – CVCCC and CCVCCCC, respectively), but Russian still has the edge (142 to 130). Of course, these figures don’t exhaust the possibilities that are available in each language: there are 24 consonant sounds in English, so, theoretically, there are 24² two-consonant combinations and 24³ three-consonant combinations. But we use only a tiny fraction of them. And some combinations are only found in borrowings from other languages, like knish and shmuck. (Theoretically, as O’Connor points out, ‘it is possible to imagine two different languages with the same inventory of phonemes but whose phonemes combine together in quite different ways’ [p. 229]. In which case, a phonemic chart on the classroom wall would be of much less use than a chart of all the combinations.)
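The arithmetic behind that gap between the possible and the actual is easy to verify. In the sketch below, the sample of attested English onsets is partial and purely illustrative, not an exhaustive inventory:

```python
# The gap between conceivable and attested clusters: 24 consonant
# phonemes yield 24**2 conceivable two-consonant sequences and
# 24**3 three-consonant sequences.
n_consonants = 24
print(n_consonants ** 2)   # 576 possible CC sequences
print(n_consonants ** 3)   # 13824 possible CCC sequences

# A partial, illustrative sample of attested English word-initial
# clusters -- not a complete inventory.
attested_onsets = {"pl", "pr", "bl", "br", "tr", "dr", "kl", "kr",
                   "gl", "gr", "fl", "fr", "sl", "sm", "sn", "sp",
                   "st", "sk", "sw", "tw"}
print(f"{len(attested_onsets) / n_consonants ** 2:.1%}")  # a tiny fraction
```

Even if the sample above were filled out to the full 50 or so English onsets, that would still be well under a tenth of the 576 sequences the phoneme inventory makes available.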
Likewise, there is no theoretical limit as to which consonants can appear at the beginning of a syllable or at the end of it. But, ‘whereas in English all but the consonants /h, ŋ, j and w/ may occur both initially and finally in CVC syllables, i.e. 20 out of the total 24, in Cantonese only 6 out of a total of 20 occur in both positions, since only /p, t, k, m, n, ŋ/ occur in final position, the remainder being confined to initial position’ (O’Connor, p. 232).
It’s this kind of information that is often missing from comparisons of different languages. This was driven home recently as I reviewed a case study assignment that my MA students have been doing, in which they were asked to analyze the pronunciation difficulties of a learner of their choice. What often puzzles them is that the learner might produce a sound correctly in one word, but not in another – in some cases, even leaving it out completely. The answer, of course, is not in phonemics, but in phonotactics: it’s all about where the sound is, and in what combinations. And it is perhaps just as significant a cause of L1 interference as are phonemic differences. Yet, apart from mentions of consonant clusters, there are few, if any, references to phonotactics in the pedagogical literature. (In The New A-Z of ELT, phonotactics gets a mention in the entry on consonant clusters, but – note to self! – phonotactics is not just about consonants: it also deals with vowel sequences, and which vowels habitually follow which consonants.)
Phonotactics is also of interest to researchers into language acquisition, since our sensitivity to what sound sequences are permissible in our first language seems to become entrenched at a very early age. Ellis (2002, p. 149), for example, quotes research that showed ‘that 8-month-old infants exposed for only 2 minutes to unbroken strings of nonsense syllables (e.g., bidakupado) are able to detect the difference between three-syllable sequences that appeared as a unit and sequences that also appeared in their learning set but in random order. These infants achieved this learning on the basis of statistical analysis of phonotactic sequence data, right at the age when their caregivers start to notice systematic evidence of their recognising words.’
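The ‘statistical analysis’ those infants are credited with amounts to tracking how predictably one syllable follows another: transitions inside a recurring ‘word’ are near-certain, while transitions across a word boundary are not. A minimal sketch of that bookkeeping follows; the syllable stream is invented for illustration and is not the stimulus set from the original study.

```python
from collections import Counter

# Minimal sketch of transitional-probability learning over a
# syllable stream, in the spirit of the infant research Ellis cites.
# The stream below is invented for illustration only.
stream = ["bi", "da", "ku", "pa", "do", "ti",
          "bi", "da", "ku", "go", "la", "bu",
          "bi", "da", "ku", "pa", "do", "ti"]

pair_counts = Counter(zip(stream, stream[1:]))
first_counts = Counter(stream[:-1])

def transitional_prob(a: str, b: str) -> float:
    """P(b | a): the proportion of times syllable a is followed by b."""
    if first_counts[a] == 0:
        return 0.0
    return pair_counts[(a, b)] / first_counts[a]

# Within the recurring 'word' bi-da-ku, the transition is certain;
# across its boundary (ku -> pa vs. ku -> go), it is not.
print(transitional_prob("bi", "da"))  # 1.0
print(transitional_prob("ku", "pa"))  # 0.666...
```

A learner (infant or machine) that keeps such counts can segment ‘words’ out of unbroken speech with no explicit instruction, simply by cutting the stream where the transitional probabilities dip.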

Knishery
Such findings lend support to usage-based theories of language acquisition (e.g. Christiansen and Chater 2016), where sequence processing and learning – not just of sounds but also of lexical and grammatical items – may be the mechanism that drives acquisition. It seems we are genetically programmed to recognize and internalize complex sequences: there is neurobiological evidence, for example, that shows considerable overlap of the mechanisms involved in language learning and the learning of other kinds of sequences, such as musical tunes. As Ellis (op.cit.), summarizing the evidence, concludes, ‘much of language learning is the gradual strengthening of associations between co-occurring elements of the language and… fluent language performance is the exploitation of this probabilistic knowledge’ (p.173). What starts as phonotactics ends up as collocation, morphology and syntax.
References
Christiansen, M.H. & Chater, N. (2016) Creating language: integrating evolution, acquisition, and processing. Cambridge, Mass.: MIT Press.
Ellis, N.C. (2002) ‘Frequency effects in language processing: a review with implications for theories of implicit and explicit language acquisition.’ Studies in Second Language Acquisition, 24/2.
O’Connor, J.D. (1973) Phonetics. Harmondsworth: Penguin.
Zsiga, E. (2006) ‘The sounds of language,’ in Fasold, R.W. & Connor-Linton, J. (eds) An introduction to language and linguistics. Cambridge: Cambridge University Press.