The case for humans being innately and uniquely endowed with a ‘language instinct’ rests largely on the ‘poverty of the stimulus’ argument, or what is sometimes called ‘Plato’s problem’: How do we know so much when the evidence available to us is so meagre?
As Harris (1993: 57-8) elaborates:
‘One of the most remarkable facts about human languages – which are highly abstract, very complex, infinite phenomena – is that children acquire them in an astonishingly short period of time, despite haphazard and degenerate data (the “stimulus”). Children hear relatively few examples of most sentence types, they get little or no correction beyond pronunciation (not even that), and they are exposed to a bewildering array of false starts, unlabelled mistakes, half sentences and the like.’
Is this really true? Is the stimulus really so impoverished?
The quantity of the stimulus – i.e. the input available to a child – is certainly not impoverished: it has been estimated (Cameron-Faulkner et al. 2003) that children hear around 7,000 utterances a day, of which 2,000 are questions (cited in Scheffler 2015). At that rate, children are exposed to roughly 12.8 million meaningful utterances in their first five years. At an average of, say, ten words an utterance, that amounts to nearly 130 million words – more than the entire British National Corpus (100m words), from which several hefty grammars and dictionaries have been derived.
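For the sceptical, the arithmetic is easily checked. Here is a throwaway Python sketch – the ten-words-per-utterance average is simply the assumption made above, not a measured figure:

```python
# Back-of-the-envelope check of the input estimates cited above.
utterances_per_day = 7_000     # Cameron-Faulkner et al. (2003) estimate
words_per_utterance = 10       # assumed average, as in the text
years = 5

utterances = utterances_per_day * 365 * years
words = utterances * words_per_utterance

print(f"{utterances:,} utterances")  # 12,775,000 -> roughly 12.8 million
print(f"{words:,} words")            # 127,750,000 -> well over the BNC's 100m
```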
What about the quality? While it’s true that speech between adults often includes ‘disfluencies’ of the type mentioned by Harris above, studies suggest that ‘motherese’ (i.e. the variety that caregivers typically use when interacting with their children) ‘is unswervingly well formed’ (Newport et al. 1977, cited in Sampson 2005). In one study, ‘only one utterance out of 1500 spoken to the children was a disfluency’ (ibid.).
Chomsky and his followers would argue that, even if this is so, the child has little or no exposure to certain rare structures that she will nevertheless, within a short time, know to be grammatical. Ergo, this knowledge must derive from the deep structures of universal grammar.
One much-cited example is the question form of a sentence with two auxiliaries, e.g. The boy who was crying is sleeping now. How does the child know that the question form requires fronting of the second of the two auxiliaries (Is the boy who was crying sleeping now?) and not the first (*Was the boy who crying is sleeping now?) – especially if, as Chomsky insists, the number of naturally occurring examples is ‘vanishingly small’: ‘A person might go through much or all of his life without ever having been exposed to relevant evidence’ (Chomsky 1980: 40). The explanation must be that the child is drawing on her inborn knowledge that grammatical transformations are structure-dependent.
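The difference between the two candidate rules can be made concrete. The following toy Python sketch is purely illustrative – the sentence is hand-segmented into subject and predicate, and no claim is made about how children actually represent it. The ‘linear’ rule fronts whichever auxiliary comes first in the string; the structure-dependent rule treats the subject noun phrase, relative clause and all, as a single chunk:

```python
# Toy illustration of the two hypotheses (hand-segmented; illustrative only).
AUXILIARIES = {"is", "was", "are", "were"}

def front_first_aux(words):
    """Linear hypothesis: front the leftmost auxiliary in the string."""
    i = next(idx for idx, w in enumerate(words) if w in AUXILIARIES)
    return [words[i].capitalize(), *words[:i], *words[i + 1:]]

def front_main_aux(subject_np, predicate):
    """Structure-dependent hypothesis: skip the whole subject NP chunk
    and front the auxiliary that begins the main-clause predicate."""
    aux, *rest = predicate
    return [aux.capitalize(), *subject_np, *rest]

sentence = "the boy who was crying is sleeping now".split()
print(" ".join(front_first_aux(sentence)) + "?")
# -> Was the boy who crying is sleeping now?   (the error children never make)

subject = "the boy who was crying".split()
predicate = "is sleeping now".split()
print(" ".join(front_main_aux(subject, predicate)) + "?")
# -> Is the boy who was crying sleeping now?   (the form they reliably produce)
```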
A quick scroll through a corpus, however, reveals that the stimulus is not as impoverished as Chomsky claims. Pullum & Scholz (2002, cited in Sampson op. cit.), using a corpus of newspaper texts, found that 12% of the yes/no questions in the corpus were of the type that would refute the ‘invert the first auxiliary’ hypothesis. (It is significant that Chomsky impatiently dismisses the need to consult corpus data, on the grounds that, as a native speaker, he intuitively knows what is grammatical and what is not. Unsurprisingly, therefore, generative linguists are constantly, even obsessively, fiddling around with implausible but supposedly grammatically well-formed sentences such as John is too stubborn to expect anyone to talk to and What did you wonder how to do? [cited in Macaulay 2011].)
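To give a flavour of how such corpus counts might be approximated – and this is only a crude regular-expression heuristic, not Pullum & Scholz’s actual method – one can search for yes/no questions that begin with an auxiliary and contain a relative pronoun followed by a second auxiliary:

```python
import re

# Crude heuristic (illustrative only): flag yes/no questions containing a
# relative clause with its own auxiliary -- candidates for the type that
# refutes the 'invert the first auxiliary' rule. A real study would work
# from parsed corpus data, not surface strings.
AUX = r"(?:is|are|was|were|has|have|had|does|did|can|could|will|would)"
PATTERN = re.compile(
    rf"^{AUX}\b.*\b(?:who|which|that)\s+{AUX}\b.*\?$",
    re.IGNORECASE,
)

sample = [
    "Is the boy who was crying sleeping now?",   # candidate refuting type
    "Is the boy sleeping now?",                  # uninformative
    "Was the boy crying?",                       # uninformative
]
hits = [q for q in sample if PATTERN.search(q)]
print(hits)  # -> ['Is the boy who was crying sleeping now?']
```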
But even if the (spoken) input were deficient in certain complex syntactic structures, you do not need to hypothesize ‘deep structure’ to account for the fact that a question of the type *Was the boy who crying is sleeping now? is simply not an option.
Why not? Because language is not, as Chomsky views it, a formal system of abstract symbols whose units (such as its words) are subject to mathematical operations, a perspective that ‘assumes that syntax can be separated from meaning’ (Evans 2014: 172). Rather, language is acquired, stored and used as meaningful constructions (or ‘syntax-semantics mappings’). Children do not process sentences from left to right looking for an available auxiliary to move. (They don’t even think of sentences as having a left and a right.) They process utterances in terms of the meanings they encode. And meaning ‘isn’t just abstract mental symbols; it’s a creative process, in which people construct virtual experiences – embodied simulations – in their mind’s eye’ (Bergen 2012: 16).
Thus, the child who is exposed to noun phrase constructions of the type the little boy who lives down the lane or the house that Jack built understands (from the way they are used in context) that these are coherent semantic units that can’t be spliced and re-joined at will. Is the little boy sleeping? and Is the little boy who lives down the lane sleeping? are composed of analogous chunks and hence obey the same kind of syntactic constraints.
What’s more, experiments on adults using invented syntactic constructions suggest that patterns can be learned on the basis of relatively little input. Boyd et al. (2009: 84) report that ‘even small amounts of exposure were enough (a) to build representations that persisted significantly beyond the exposure event, and (b) to support production.’ A little stimulus goes a long way.
In the end, we may never know if the poverty of the stimulus argument is right or not – not, at least, until computer models of neural networks are demonstrably able to learn a language without being syntactically preprogrammed to do so. As Daniel Everett (2012: 101) writes, ‘No one has proven that the poverty of the stimulus argument, or Plato’s Problem, is wrong. But nor has anyone shown that it is correct either. The task is daunting if anyone ever takes it up. One would have to show that language cannot be learned from available data. No one has done this. But until someone does, talk of a universal grammar or language instinct is no more than speculation.’
References
Bergen, B.K. (2012) Louder than words: The new science of how the mind makes meaning. New York: Basic Books.
Boyd, J.K., Gottschalk, E.A., & Goldberg, A.E. (2009) ‘Linking rule acquisition in novel phrasal constructions.’ In Ellis, N.C. & Larsen-Freeman, D. (eds) Language as a complex adaptive system. Chichester: John Wiley & Sons.
Cameron-Faulkner, T., Lieven, E. & Tomasello, M. (2003) ‘A construction based analysis of child directed speech.’ Cognitive Science, 27/6.
Chomsky, N. (1980) various contributions to the Royaumont Symposium, in Piattelli-Palmarini, M. (ed.) Language and Learning: The debate between Jean Piaget and Noam Chomsky. London: Routledge & Kegan Paul.
Evans, V. (2014) The Language Myth: Why language is not an instinct. Cambridge: Cambridge University Press.
Everett, D. (2012) Language: The cultural tool. London: Profile Books.
Harris, R.A. (1993) The Linguistics Wars. New York: Oxford University Press.
Macaulay, R.K.S. (2011) Seven Ways of Looking at Language. Houndmills: Palgrave Macmillan.
Pullum, G.K. & Scholz, B.C. (2002) ‘Empirical assessment of stimulus poverty arguments.’ Linguistic Review, 19.
Sampson, G. (2005) The Language Instinct Debate (Revised edition). London: Continuum.
Scheffler, P. (2015) ‘Lexical priming and explicit grammar in foreign language instruction.’ ELT Journal, 69/1.
PS: There will be no more new posts until the end of summer, when things calm down again.