If I had to reduce language learning to the bare essentials and then construct a methodology around those essentials, it might look something like this (from Edmund White’s autobiographical novel The Farewell Symphony):
“[Lucrezia’s] teaching method was clever. She invited me to gossip away in Italian as best I could, discussing what I would ordinarily discuss in English; when stumped for the next expression, I’d pause. She’d then provide the missing word. I’d write it down in a notebook I kept week after week. … Day after day I trekked to Lucrezia’s and she tore out the seams of my shoddy, ill-fitting Italian and found ways to tailor it to my needs and interests.”
Whatever theoretical lens you view this through, Lucrezia’s ‘method’ contains the right mix. Those who subscribe to the ‘learning-is-information-processing’ view will approve of the output + feedback cycle and the covert focus on form. Those of a sociocultural bent will applaud Lucrezia’s scaffolding of learning affordances at the point of need. Dynamic systems theorists will invoke ‘the soft-assembly of language resources in a coupled system’. What’s more, my own recent experience of trying to re-animate my moribund Spanish suggests that the single most effective learning strategy was ‘instructional conversation’ with a friend in a bar. That is to say, the same kind of ‘clever method’ that White celebrates above.
But, of course, unless you have a willing partner, such intensive one-to-one treatment is costly and not always available. Could this kind of conversation-based mediation be engineered digitally? Is there an app for it?
Interactive software that replicates human conversation has been a dream of researchers ever since Alan Turing proposed his ‘Turing Test’ in 1950, challenging programmers to design a machine that could fool a jury into thinking they were interacting with a real person.
While no one has yet met Turing’s conditions in any convincing way, programs such as ‘chatterbots’ have certainly managed to fool some of the people some of the time. Could they substitute for a real interlocutor, in the way, say, that a computer can substitute for a chess player?
It’s unlikely. Conversation, unlike chess, is not constrained by a finite number of moves. Even the most sophisticated program based on ‘big data’, i.e. one that could scan a corpus of millions or even billions of conversations, and then select its responses accordingly, would still be a simulation. Crucially, what the program would lack is the capacity to ‘get into the mind’ of its conversational partner and intuit his or her intentions. In a word, it would lack intersubjectivity.
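To make the mechanism concrete, here is a minimal sketch (in Python, with an invented three-turn ‘corpus’) of how such a retrieval-based program selects its responses: it returns the stored reply whose recorded prompt is most similar, in surface form, to the user’s utterance. However many billions of conversations the corpus contained, the selection criterion would remain similarity of form, not a model of what the speaker is trying to mean:

```python
# A toy retrieval-based 'chatterbot' (the corpus is invented for illustration):
# it answers with the stored reply whose recorded prompt looks most like the
# user's utterance. It matches surface forms; it models no one's intentions.
from difflib import SequenceMatcher

CORPUS = [
    ("how are you today", "Fine thanks, and you?"),
    ("what did you do at the weekend", "Not much, I stayed at home."),
    ("do you like living in rome", "I love it, especially the food."),
]

def respond(user_turn: str) -> str:
    """Return the reply paired with the recorded prompt most similar to the input."""
    def score(pair):
        prompt, _ = pair
        return SequenceMatcher(None, user_turn.lower(), prompt).ratio()
    _, reply = max(CORPUS, key=score)
    return reply

print(respond("How are you?"))  # -> 'Fine thanks, and you?'
```

Scaling the corpus up changes the quality of the mimicry, not its nature, which is why the result is a simulation of conversation rather than a replication of it.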
Intersubjectivity is ‘the sharing of experiential content (e.g., feelings, perceptions, thoughts, and linguistic meanings) among a plurality of subjects’ (Zlatev et al. 2008, p. 1). It appears to be a uniquely human faculty. Indeed, some researchers go so far as to claim that ‘the human mind is quintessentially a shared mind and that intersubjectivity is at the heart of what makes us human’ (op. cit., p. 2). Play, collaborative work, conversation and teaching are all dependent on this capacity to ‘know what the other person is thinking’. Lucrezia’s ability to second-guess White’s communicative needs is a consequence of their ‘shared mind’.
It is intersubjectivity that enables effective teachers to pitch their instructional interventions at just the right level, and at just the right moment. Indeed, Vygotsky’s notion of the ‘zone of proximal development’ (ZPD) is premised on intersubjectivity. As van Lier (1996, p. 191) observes:
‘How do we, as caretakers or educators, ensure that our teaching actions are located in the ZPD, especially if we do not really have any precise idea of the innate timetable of every learner? In answer to this question, researchers in the Vygotskian mould propose that social interaction, by virtue of its orientation towards mutual engagement and intersubjectivity, is likely to home in on the ZPD and stay with it.’
Intersubjectivity develops at a very early age – even before the development of language – as a consequence of joint attention on collaborative tasks and routines. Pointing, touching, gaze, and body alignment all contribute to this sharing of attention that is a prerequisite for the emergence of intersubjectivity.
In this sense, intersubjectivity is both situated and embodied: ‘Intersubjectivity is achieved on the basis of how participants orient to one another and to the here-and-now context of an interaction’ (Kramsch 2009, p. 19). Even in adulthood we are acutely sensitive to the ‘body language’ of our conversational partners: ‘A conversation consists of an elaborate sequence of actions – speaking, gesturing, maintaining the correct body language – which conversants must carefully select and time with respect to one another’ (Richardson et al. 2008, p. 77). And teaching, arguably, is more effective when it is supported by gesture, eye contact and physical alignment. Sime (2008, p. 274), for example, has observed how teachers’ ‘nonverbal behaviours’ frame classroom interactions, whereby ‘a developed sense of intersubjectivity seems to exist, where both learners and teacher share a common set of gestural meanings that are regularly deployed during interaction’.
So, could a computer program replicate (as opposed to simulate) the intersubjectivity that underpins Lucrezia’s method? It seems unlikely. For a start, no amount of data can configure a computer to imagine what it would be like to experience the world from my point of view, with my body and my mind.
Moreover, the disembodied nature of computer-mediated instruction would hardly seem conducive to the ‘situatedness’ that is a condition for intersubjectivity. As Kramsch observes, ‘Teaching the multilingual subject means teaching language as a living form, experienced and remembered bodily’ (2009, p. 191). It is not accidental, I would suggest, that White enlists a very physical metaphor to capture the essence of Lucrezia’s method: ‘She tore out the seams of my shoddy, ill-fitting Italian and found ways to tailor it to my needs and interests.’
There is no app for that.
Kramsch, C. 2009. The multilingual subject. Oxford: Oxford University Press.
Richardson, D.C., Dale, R. & Shockley, K. 2008. ‘Synchrony and swing in conversation: coordination, temporal dynamics, and communication’, in Wachsmuth, I., Lenzen, M. & Knoblich, G. (eds) Embodied communication in humans and machines. Oxford: Oxford University Press.
Sime, D. 2008. ‘“Because of her gesture, it’s very easy to understand” – Learners’ perceptions of teachers’ gestures in the foreign language class’, in McCafferty, S.G. & Stam, G. (eds) Gesture: Second language acquisition and classroom research. London: Routledge.
Van Lier, L. 1996. Interaction in the language curriculum: Awareness, autonomy & authenticity. Harlow: Longman.
White, E. 1997. The farewell symphony. London: Chatto & Windus.
Zlatev, J., Racine, T.P., Sinha, C., & Itkonen, E. (eds) 2008. The shared mind: Perspectives on intersubjectivity. Amsterdam: John Benjamins.
Illustrations from Alexander, L.G. 1968. Look, listen, learn! London: Longman.
A version of this post first appeared on the ELTjam blog in November 2014.
To me it is much simpler: focus on listening and reading. Start with simple, short lessons and progress as quickly as possible to meaningful content. Your interest drives you. As your vocabulary and sense of the language grow, your readiness to speak builds. Save conversations for when you really understand what people are saying, and when you have something meaningful to talk about – in other words, real situations. This has always worked very well for me, in all the languages I have learned. It is amazing how speaking ability flows from solid comprehension ability.
Thanks, Steve – I certainly support the idea of providing copious comprehensible input. I’m not sure, though, that ‘readiness to speak’ follows. In my case, reading several million words of comprehensible Spanish text every year for 25 years has not ‘converted’ into spoken fluency. What I sense I lacked were ongoing opportunities to talk about what I was reading.
I agree Scott. We eventually need to speak, and to speak a lot. The greater the quantity of interesting and meaningful input we have already absorbed, the better we are able to defend ourselves on a variety of subjects when we speak. I have found in a variety of languages that with good listening comprehension and a large passive vocabulary, my speaking skills activate quite quickly when the need or opportunity arises.
A program can pass the Turing Test simply by responding to typed text with more text; it doesn’t need to speak. To replicate a one-to-one class in the way that you describe, there is one other huge hurdle to be jumped: the program would need to recognise the speech of a non-native speaker who may have a very low level of English. That’s very difficult because, although speech-recognition software has made leaps and bounds in recent years, I doubt it is good enough to cope with all the different ways a learner might produce a particular sound.
To give one example: like you, I live in Spain. When there’s a power cut, I have to phone the electricity company, where I am answered by a machine that asks me to state the purpose of my call. When I say ‘avería’ (or whatever the Spanish word I need is), the program cannot understand me. It cannot even guess at the correct response to my request, despite the fact that there is a finite set of words that a person might say in that situation. Furthermore, even some native speakers complain that voice-recognition software cannot understand or respond correctly to their accents.
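What such a phone menu is doing under the hood can be sketched in a few lines – the intent keywords and the similarity cutoff below are invented for illustration, not taken from any real system. The recogniser’s transcription is matched against a small, finite list of expected keywords, and any pronunciation that yields an unexpected transcription falls straight through to the ‘please repeat’ branch:

```python
# Sketch of finite-intent routing in a voice menu (keywords invented).
from difflib import get_close_matches

INTENTS = {
    "averia": "report_fault",    # fault / power cut
    "factura": "billing",        # billing enquiry
    "contrato": "new_contract",  # contract changes
}

def route(transcription: str, cutoff: float = 0.8) -> str:
    """Map a (possibly mangled) transcription onto one of a finite set of intents."""
    for word in transcription.lower().split():
        match = get_close_matches(word, INTENTS, n=1, cutoff=cutoff)
        if match:
            return INTENTS[match[0]]
    return "sorry_please_repeat"  # the dead end the caller gets stuck in

print(route("aberia"))     # near-miss transcription -> 'report_fault'
print(route("a berry a"))  # badly mangled -> 'sorry_please_repeat'
```

Lowering the cutoff makes the system guess more wildly; raising it makes it reject more callers. Neither setting gives it what a human operator has: the ability to use context, and the caller’s likely intention, to work out what must have been meant.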
So the idea that an app can perform the task of a teacher in this way is a long, long way off, thankfully.
True, Alastair – I’ve experienced the frustration of non-comprehending machine communication myself. (And the YouTube video of the two Scotsmen in the RP-only voice-operated elevator makes the same point very wittily.) It’s not just a matter of recognizing a greater range of phonetic realizations and then taking a stab (using phonotactics, among other things) at predicting the targeted words; the machine will also have to cope with non-standard forms of words and of syntax (what teachers call errors). We human beings have a hard enough time interacting with language learners, so it’s difficult to imagine a machine doing any better. At least we have the advantage of being able to project ourselves ‘into the learner’s head’.
Dear Scott,
Thank you so much for resuming your blog. What a treat to get another helping of “food for thought” to savour over a lazy Sunday morning’s brunch.
Couldn’t agree more with what you are saying in this post – so far it’s only a human who can anticipate, predict and help with verbalizing another human’s thoughts. I believe that testing this ability should be included in some Teaching Aptitude Test (:-) to screen the best candidates to be trained as teachers. And I’ve always wondered whether it’s something that can be trained and acquired, or whether it’s innate. The shorter the temporal distance between the student’s meaning potential (the thought) and the supplied language, the more successful the development of the student’s linguistic resources. Is that an axiom, or still a theorem, in modern teaching?
Yes, you are absolutely right that there is no such app yet. Yet science fiction predicts that such a device will be invented sooner or later. Just think of Asimo, a robot that is reported to be controlled by the power of thought! Isn’t that amazing? The invention of a device that can read people’s thoughts raises a lot of ethical issues, though – what if it were able to read some deeply hidden secrets? (The science-fiction novel ‘Solaris’ by Stanislaw Lem, and its screen version by Andrey Tarkovsky, raise this very issue.)
And there are certain constraints on this ability, well expressed in Bakhtin’s notion of ‘otherness’. Every person’s life experience is unique, idiosyncratic, culture-bound. The generalisations we are able to make about other people’s thoughts are very approximate. In social psychology there is the notion of “projection” – an individual projects her own intentions and thoughts onto another individual, governed solely by her own life experience. The content of this projection is pre-determined and restricted by the repertoire of the person’s idiosyncratic representations of the world. And this is the main cause of misunderstandings among people, especially at the international level or between genders. We may ascribe our own thoughts to other people when in fact they have never thought them.
Considering all this, how much of the student’s mind can we really read, and isn’t there a danger of unobtrusively editing their thoughts to match our expectations?
Thank you once again for this opportunity to share, and to overcome “otherness”.
Hi Svetlana – nice to have you commenting again! Yes, of course, you are right – there are limits to intersubjectivity – we cannot literally read one another’s minds. Perhaps that is why language was ‘invented’ – to compensate for those limitations. As Günter Kress (I think) put it, talk is motivated by difference, not sameness. Not that it always works to reduce difference:
Krazy Kat: Why is “language” Ignatz?
Ignatz: “Language” is that we may understand one another.
Krazy Kat: Is that so?
Ignatz: Yes that’s so.
Krazy Kat: Can you unda stend a Finn or a Leplender, or a Oshkosher, huh?
Ignatz: No.
Krazy kat: Can a Finn or a Leplender, or a Oshkosher unda stend you?
Ignatz: No.
Krazy Kat: Then I would say lenguage is that we may misunda stend each udda.
🙂 A great joke! Thanks!
But what if language was invented to conceal what we are really thinking about? In some cases it might be too face-threatening to read each other’s mind. That’s what the joke implies, right?
This post brought to mind how the game of Peekaboo – a game played in all known cultures – is a sociocultural tool that fosters intersubjectivity, and can also be seen as a ‘readiness’ activity, preparing children for interaction and turn-taking.
Indeed, Gabriel – the challenge (and the fun) of many children’s games relies on trying to second-guess your opponent’s intentions. (Which was why I chose pictures of children playing hide-and-seek to illustrate the blog). Dogs, on the other hand, never doubt that the stick will be thrown, however often you ‘trick’ them.
Thanks, my best teacher ever!!!!