P is for Poverty of the stimulus

7 06 2015

The case for humans being innately and uniquely endowed with a ‘language instinct’ rests largely on the ‘poverty of the stimulus’ argument, or what is sometimes called ‘Plato’s problem’: How do we know so much when the evidence available to us is so meagre?

As Harris (1993: 57-8) elaborates:

‘One of the most remarkable facts about human languages – which are highly abstract, very complex, infinite phenomena – is that children acquire them in an astonishingly short period of time, despite haphazard and degenerate data (the “stimulus”). Children hear relatively few examples of most sentence types, they get little or no correction beyond pronunciation (not even that), and they are exposed to a bewildering array of false starts, unlabelled mistakes, half sentences and the like.’

Is this really true? Is the stimulus really so impoverished?

The quantity of the stimulus – i.e. the input available to a child – is certainly not impoverished: it has been estimated (Cameron-Faulkner et al. 2003) that children hear around 7,000 utterances a day, of which 2,000 are questions (cited in Scheffler 2015). This suggests that in their first five years children are exposed to some 12.8 million meaningful utterances. At an average of, say, ten words an utterance, that is nearly 128 million words – more than the entire British National Corpus (100m words), from which several hefty grammars and dictionaries have been derived.
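As a rough sanity check – this is my own back-of-the-envelope arithmetic, using only the figures just cited, nothing more authoritative:

```python
# A back-of-the-envelope check (my arithmetic, not Cameron-Faulkner et al.'s)
# of the input a child hears in five years, against the 100m-word BNC.
utterances_per_day = 7_000
words_per_utterance = 10        # the rough average assumed above
days = 5 * 365                  # five years

utterances = utterances_per_day * days
words = utterances * words_per_utterance
print(f"{utterances:,} utterances")  # 12,775,000 utterances
print(f"{words:,} words")            # 127,750,000 words - well over the BNC's 100m
```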

What about the quality? While it’s true that speech between adults often includes ‘disfluencies’ of the type mentioned by Harris above, studies suggest that ‘motherese’ (i.e. the variety that caregivers typically use when interacting with their children) ‘is unswervingly well formed’ (Newport et al. 1977, cited in Sampson 2005). In one study, ‘only one utterance out of 1500 spoken to the children was a disfluency’ (ibid.).

Chomsky and his followers would argue that, even if this is true, the child will have little or no exposure to certain rare structures that, in a short time, she will nevertheless know are grammatical. Ergo, this knowledge must derive from the deep structures of universal grammar.

One much-cited example is the question form of a sentence with two auxiliaries, e.g. The boy who was crying is sleeping now. How does the child know that the question requires fronting of the second of the two auxiliaries (Is the boy who was crying sleeping now?) and not the first (*Was the boy who crying is sleeping now?), especially if, as Chomsky insists, the number of naturally-occurring examples is ‘vanishingly small’? ‘A person might go through much or all of his life without ever having been exposed to relevant evidence’ (Chomsky 1980: 40). The explanation, the argument goes, must be that the child is drawing on her inborn knowledge that grammatical transformations are structure-dependent.
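To see what ‘structure-dependent’ means here, consider a toy sketch – mine, purely illustrative, not anything from Chomsky’s texts. A structure-blind rule that fronts whichever auxiliary comes first in the string produces exactly the ungrammatical question; a rule that treats the subject noun phrase as a single, unsplittable unit does not:

```python
# A toy contrast (purely illustrative) between a structure-blind and a
# structure-dependent rule for forming yes/no questions.
AUXILIARIES = {"is", "was", "are", "were"}

def front_first_auxiliary(sentence):
    """Structure-blind hypothesis: move the linearly first auxiliary."""
    words = sentence.split()
    for i, w in enumerate(words):
        if w in AUXILIARIES:
            return " ".join([w.capitalize()] + words[:i] + words[i + 1:]) + "?"
    return sentence

def front_main_auxiliary(subject_np, aux, predicate):
    """Structure-dependent hypothesis: the subject noun phrase is one
    chunk; only the main-clause auxiliary moves."""
    return f"{aux.capitalize()} {subject_np} {predicate}?"

print(front_first_auxiliary("the boy who was crying is sleeping now"))
# -> Was the boy who crying is sleeping now?   (ungrammatical)
print(front_main_auxiliary("the boy who was crying", "is", "sleeping now"))
# -> Is the boy who was crying sleeping now?   (grammatical)
```

The dispute, of course, is over where the second rule comes from: an innate endowment, or learning from meaningful chunks – which is where this post is heading.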

A quick scroll through a corpus, however, reveals that the stimulus is not as impoverished as Chomsky claims. Pullum & Scholz (2002, cited in Sampson op. cit.), using a corpus of newspaper texts, found that 12% of the yes/no questions in the corpus were of the type that would refute the ‘invert the first auxiliary’ hypothesis. (It is significant that Chomsky impatiently dismisses the need to consult corpus data, on the grounds that, as a native speaker, he intuitively knows what is grammatical and what is not. Unsurprisingly, therefore, generative linguists are constantly, even obsessively, fiddling around with implausible but supposedly grammatically well-formed sentences such as John is too stubborn to expect anyone to talk to and What did you wonder how to do? [cited in Macaulay 2011].)

But even if the (spoken) input were deficient in certain complex syntactic structures, you do not need to hypothesize ‘deep structure’ to account for the fact that a question of the type *Was the boy who crying is sleeping now? is simply not an option.

Why not? Because language is not, as Chomsky views it, a formal system of abstract symbols whose units (such as words) are subject to mathematical operations, a perspective that ‘assumes that syntax can be separated from meaning’ (Evans 2014: 172). Rather, language is acquired, stored and used as meaningful constructions (or ‘syntax-semantics mappings’). Children do not process sentences from left to right looking for an available auxiliary to move. (They don’t even think of sentences as having a left and a right.) They process utterances in terms of the meanings they encode. And meaning ‘isn’t just abstract mental symbols; it’s a creative process, in which people construct virtual experiences – embodied simulations – in their mind’s eye’ (Bergen 2012: 16).

Thus, the child who is exposed to noun phrase constructions of the type the little boy who lives down the lane or the house that Jack built understands (from the way they are used in context) that these are coherent, semantic units that can’t be spliced and re-joined at will. Is the little boy sleeping? and Is the little boy who lives down the lane sleeping? are composed of analogous chunks and hence obey the same kind of syntactic constraints.

What’s more, experiments on adults using invented syntactic constructions suggest that patterns can be learned on the basis of relatively little input. Boyd et al. (2009: 84) report that ‘even small amounts of exposure were enough (a) to build representations that persisted significantly beyond the exposure event, and (b) to support production.’ A little stimulus goes a long way.

In the end, we may never know if the poverty of the stimulus argument is right or not – not, at least, until computer models of neural networks are demonstrably able to learn a language without being syntactically preprogrammed to do so. As Daniel Everett (2012: 101) writes, ‘No one has proven that the poverty of the stimulus argument, or Plato’s Problem, is wrong. But nor has anyone shown that it is correct either. The task is daunting if anyone ever takes it up. One would have to show that language cannot be learned from available data. No one has done this. But until someone does, talk of a universal grammar or language instinct is no more than speculation.’

References

Bergen, B.K. (2012) Louder than words: The new science of how the mind makes meaning. New York: Basic Books.

Boyd, J.K., Gottschalk, E.A., & Goldberg, A.E. (2009) ‘Linking rule acquisition in novel phrasal constructions.’ In Ellis, N.C. & Larsen-Freeman, D. (eds) Language as a complex adaptive system. Chichester: John Wiley & Sons.

Cameron-Faulkner, T., Lieven, E. & Tomasello, M. (2003) ‘A construction based analysis of child directed speech.’ Cognitive Science 27/6.

Chomsky, N. (1980) Various contributions to the Royaumont Symposium, in Piattelli-Palmarini, M. (ed.) Language and Learning: The debate between Jean Piaget and Noam Chomsky. London: Routledge & Kegan Paul.

Evans, V. (2014) The Language Myth: Why language is not an instinct. Cambridge: Cambridge University Press.

Everett, D. (2012) Language: The cultural tool. London: Profile Books.

Harris, R.A. (1993) The Linguistics Wars. New York: Oxford University Press.

Macaulay, R.K.S. (2011) Seven Ways of Looking at Language. Houndmills: Palgrave Macmillan.

Pullum, G.K. & Scholz, B.C. (2002) ‘Empirical assessment of stimulus poverty arguments.’ The Linguistic Review, 19.

Sampson, G. (2005) The Language Instinct Debate (Revised edition). London: Continuum.

Scheffler, P. (2015) ‘Lexical priming and explicit grammar in foreign language instruction.’ ELT Journal, 69/1.

PS: There will be no more new posts until the end of summer, when things calm down again.

X is for X-bar Theory

6 01 2010

There’s no entry for X in the A-Z. If there were, one of the few candidates would be X-bar theory, which is in fact a (1981) refinement of Chomsky’s theory of language known as Transformational-generative (TG) grammar. (For what it’s worth, X-bar theory argues that all phrases – whether noun phrases, verb phrases, etc. – have the same structure, and that this structure is a linguistic universal, i.e. it’s common to English, Japanese, probably even Klingon.)
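Very roughly – and this is my own gloss, with invented bracketing, not anything from the A-Z or from Chomsky’s texts – the claim is that every phrase projects from a head X (noun, verb, preposition…) through an intermediate X′ (‘X-bar’) level to a full phrase XP, so that a noun phrase and a verb phrase instantiate one and the same template:

```python
# A toy rendering (mine, purely illustrative) of the X-bar claim that all
# phrases share one template: XP -> (Specifier) X'; X' -> X (Complement).
def xp(cat, head, specifier=None, complement=None):
    """Build a bracketed phrase for any head category (N, V, P...)."""
    x_bar = f"[{cat}' {head} {complement}]" if complement else f"[{cat}' {head}]"
    return f"[{cat}P {specifier} {x_bar}]" if specifier else f"[{cat}P {x_bar}]"

# The same schema instantiated with a noun head and a verb head:
print(xp("N", "student", specifier="the", complement="[PP of physics]"))
# [NP the [N' student [PP of physics]]]
print(xp("V", "found", complement=xp("N", "puppy", specifier="a")))
# [VP [V' found [NP a [N' puppy]]]]
```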

As I say, there’s no entry for X-bar theory. There’s no entry for TG grammar in the A-Z, either. Nor for its predecessor, generative grammar. Nor for Government and Binding theory, nor the Principles and Parameters theory, nor the Minimalist program. In fact, there’s no mention of Chomsky or any of his theories in the entry on Grammar at all.

This might strike some readers as odd, even perverse. At best, negligent. After all, the study of TG grammar (or any of its offshoots) is a key component of any self-respecting linguistics course on any MA TESOL program in the US. It is often the only theory of grammar that is studied. In fact, in many of the standard texts, such as Fromkin et al. (2007) An Introduction to Language, or the Ohio State University Language Files (ed. Stewart and Vaillette, 2001), it’s not even called TG grammar, nor ascribed to Chomsky by name. It’s simply the grammar that is. The one and only. And, just in case you don’t know which one I’m talking about, it’s the one that involves the endless “tree-diagramming” of (invariably invented) sentences, like The child found a puppy and Where has Pete put his bone? (both from Fromkin et al.).

So why did I ignore it?

Well, to be perfectly honest, I’m not sure that I really understand it. I can get my head around basic Phrase-structure grammar, even X-bar theory, and just about understand what Theta-theory is on about. But, as much as I want to get to grips with the Minimalist program (not least because it seems to be arguing a central role for lexis in determining syntactic structure), I’m struggling. In the end, all those upside-down trees leave me cross-eyed.

But there are more cogent reasons – both linguistic and pedagogical – for treating TG grammar cautiously, it seems to me. On linguistic grounds the theory seems flawed since it is based entirely on invented sentences in their written form. Try to apply the descriptive framework to spoken data – e.g. authentic utterances like: But the spa, you might want to use it, you know – and it just doesn’t fit. By the same token, TG grammarians will reject forms as being ungrammatical even when they are commonly attested (one of the texts I consulted disallows the sentence John bought what?, for example). Chomsky’s dogged insistence on making the “well-formed” sentence the centrepiece of his theory of language seems to undermine the whole enterprise – this, along with the misguided notion that all sentences are generated from the word up, and are hence all entirely original. (Chomsky’s acolyte, Steven Pinker, woefully betrays his ignorance of developments in corpus linguistics by claiming – in The Language Instinct – that “virtually every sentence that a person utters or understands is a brand-new combination of words, appearing for the first time in the history of the universe” (1994, p. 22). Compare this to the corpus linguist John Sinclair’s (1991) claim that “by far the majority of text is made of the occurrence of common words in common patterns”.)

The blinkered disavowal of the validity of performance data is, of course, a side-effect of their (Chomsky’s, Pinker’s, etc.) mentalist agenda, which is to demonstrate both the universality and innateness of their grammar. This means cherry-picking your examples (and re-configuring your theory fairly regularly so as to accommodate new, potentially disruptive, evidence), while consigning anything that doesn’t fit to the damaged goods bin – the one labelled “performance”.

But more important, for me, is the lack of pedagogical applicability. For a start, the idealised nature of Chomskyan grammar seems to bear only an accidental relationship to the way language is actually stored and generated in the brain. Chomsky (to his credit) was quick to acknowledge this. Way back in 1965, he wrote: “When we say that a sentence has a certain derivation with respect to a particular generative grammar, we say nothing about how the speaker or hearer might proceed, in some practical or efficient way, to construct such a derivation” (Aspects of the Theory of Syntax. Cambridge, Mass.: MIT Press, p. 9). Quite right: the tree diagram describes the structure “after the event”, but it ignores the fact that (as David Brazil put it) “discourse [is] something that is now-happening, bit by bit, in time, with the language being assembled as the speaker goes along” (1995, p. 37). If the TG representation of grammar has little or no psychological foundation, it would seem to be fairly useless for teaching purposes. I take heart, therefore, from a statement made by Bernard Spolsky, after a think-tank on the applications of Chomskyan grammar, held in the late sixties. He concluded:

Linguistics and its hyphenated fields have a great deal to offer to language teachers, but the fullest benefit can only come when their implications are integrated and formed into a sound theory of language pedagogy. Because linguistics is only indirectly applicable to language teaching, changes in linguistic theory or arguments amongst linguists should not disturb language teachers.

(Spolsky, B. (1970) Linguistics and language pedagogy – applications or implications? [emphasis added]) 

So, why does the teaching of TG grammar (including X-bar theory) persist in the US academic context? And was I wrong to ignore it in the A-Z?