Z is for Zero Uncertainty

31 07 2011

Mr Grumpy Blogger

Here is a listening sequence that could be from any current coursebook:

Pre-listening: 1. What do you know about garden gnomes? Have you ever had a garden gnome? Have you ever lost a garden gnome? Etc. Scrum down and talk to your mates. [This is the ‘activating schema’ stage]

2. Here are some words you’d better know: toadstool, abducted, postcard, package tour, gnomic… etc. [= Pre-teach some vocab that may or may not be crucial to an understanding of the text]

3. Here is a picture of a man looking at a postcard showing his garden gnome in St Peter’s Square. Here he is again, being interviewed. What questions is he being asked? What answers is he giving? [= Activating predictive skills so as to make listening to the ensuing interview more or less redundant]

Listening

1. Listen to this [pretend] interview with a [pretend] person whose [pretend] garden gnome was nicked, and do this task.

Put the interviewer’s questions in order. (The first one has been done for you).

  1. And how does this story end?
  2. I hear you lost your gnome. Tell me about it. (1)
  3. So what did you do?

[= easy gist listening question – so easy you don’t actually have to listen to the text to do it]

2. Listen again, and say why these words are mentioned: Red Square; front lawn; the Great Pyramid of Cheops. [= deeper level processing – and this is as deep as it gets]

Follow-up

Imagine you are a garden gnome who has been kidnapped and sent abroad. Write a postcard detailing your adventure. [= er, follow-up]

Why do I have problems with this kind of sequence?

Well, apart from the naff content and the scripted nature of the text (why are 90% of all coursebook listenings still scripted?), I really can’t figure out in what way learners are any better off after the process than they were before it.

Can we say, hand on heart, that this very superficial treatment of spoken texts has improved their listening skills one jot? For a start, by activating their top-down processing skills (world knowledge, predictive abilities, etc) and by setting only the easiest of gist checking questions, the learners have been so cushioned against having to engage with the language in the text at anything but the most superficial level that it’s very difficult to see how such a sequence prepares them for real-life listening at all, let alone teaches them anything new about the language.

This is like looking at the target language from 30,000 feet. But that’s where the learners are already. They’re very used to not really understanding texts, so why should they want to not really understand them in the classroom, too?

While it may get students into a text (and compensate for the lack of visual information, in the case of audio-only listening tasks), an over-dependence on top-down processing (i.e. using background knowledge, non-linguistic and contextual clues, etc) may delude both learners and teachers into thinking that linguistic information can safely be ignored, or that having no unanswered questions left about a text (a state that Frank Smith calls ‘zero uncertainty’) is neither a realistic nor even a desirable outcome.

As a second language user, I hate having unanswered questions. I hate being in the cinema at an Almodóvar film surrounded by cackling Spaniards, and not getting the joke. I hate missing the plane because I misheard the announcement and went to the wrong gate. I don’t like 50% uncertainty, or even 5% uncertainty. I crave zero uncertainty.

Students transcribing (photo courtesy of Eltpics)

So, how would I improve the sequence? Simply by the addition of further layers and layers of questions that probe and probe and probe at the learners’ emergent understanding, until not a word has been by-passed, not a discourse marker ignored, not a verb ending overlooked, and not a question left unanswered. And the sequence would culminate in a word-by-word transcription task – not of the whole text, necessarily – but of a decent-sized chunk of it.

But, to withstand the weight of so much probing, I would need a text that was of much more intrinsic interest, educational value, and linguistic capital than one about abducted garden gnomes!

Reference:

Smith, F. (2004) Understanding Reading (6th edition). Lawrence Erlbaum.





P is for Primate language

24 07 2011

I’ve just seen this somewhat dispiriting documentary about Nim, one of a number of primates who have been sequestered, domesticated, scrutinised, feted, and ultimately abandoned in the name of linguistic research.  Even the shots of the Columbia University forecourt that I walk through every day failed to enliven a story of wanton cruelty, institutional pettiness, dodgy science and bad hair.

The film charts a succession of sudden, traumatic abductions, starting when baby Nim was snatched screaming from his mother’s arms. Over a period of several years, with only humans to interact with, the young chimp was taught to sign, using an adapted form of  American Sign Language, and acquiring a working vocabulary of several hundred words. When he outgrew his cute and cuddly stage, and/or when the funding ran out, he was packed off to a sort of primate Guantánamo Bay. The story is only slightly redeemed by the efforts of one of his former minders to track him down. Even in his hoary old age, Nim still retains a trace of his former competence, pathetically signing ‘play’ from within the bars of his prison.

Columbia University

Frustratingly, the film hardly touches on the linguistic controversies that fuelled this research. In the 1970s, when this unhappy story took place, the debate as to whether only humans are innately equipped with a modular language acquisition device (LAD) was still a fairly hot issue. Not for nothing was Nim named Nim Chimpsky, in (cute) recognition of Noam Chomsky’s role as the leading protagonist of the debate.

What was at stake was this: if highly intelligent apes, exposed to a linguistic environment similar to that of human children, could acquire an extensive lexicon but fail to develop even the rudiments of a ‘grammar’, this would go some way towards supporting the view that humans are uniquely hard-wired for language acquisition. On the other hand, if evidence of syntax, however primitive, could be demonstrated, Chomsky’s notion of a ‘Universal Grammar’ (UG) would either need to be extended to nonhuman primates, or it would need to be re-evaluated entirely.

And the findings? Nim’s vocabulary was impressive, but more impressive still was his ability to form two-sign, three-sign, and even longer strings: MORE EAT, HUG NIM, BANANA EAT ME NIM, etc. Moreover, a superficial analysis of the data would suggest that Nim was operating according to some kind of embryonic grammar, producing word order patterns not dissimilar to those of human children’s first utterances. For example, he consistently placed the sign for MORE in front of the sign it modified:  MORE TICKLE, MORE DRINK, etc. But, as Jean Aitchison (1983) notes, “a closer analysis showed that the appearance of order was an illusion. Nim simply had a statistical preference for placing certain words in certain places, while other words showed no such preference” (p.55).
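Aitchison’s distinction – a genuine ordering rule versus a mere statistical preference for where a given sign goes – can be made concrete with a toy calculation. The two-sign data below is invented for illustration, not Nim’s actual corpus:

```python
# Invented two-sign utterances, standing in for a signing corpus.
utterances = [
    ("MORE", "TICKLE"), ("MORE", "DRINK"), ("MORE", "EAT"),
    ("MORE", "BANANA"), ("HUG", "NIM"), ("NIM", "HUG"),
    ("EAT", "NIM"), ("NIM", "EAT"), ("PLAY", "ME"), ("ME", "PLAY"),
]

def position_preference(sign, data):
    """Proportion of a sign's occurrences in which it appears first."""
    first = sum(1 for a, _ in data if a == sign)
    total = sum(1 for pair in data if sign in pair)
    return first / total

# MORE is always utterance-initial: it looks like a word-order rule...
print(position_preference("MORE", utterances))  # 1.0
# ...but other signs show no positional preference at all.
print(position_preference("NIM", utterances))   # 0.5
```

The point of the calculation is that a single sign can be position-faithful without there being any productive syntax behind it – which is exactly what a closer distributional analysis of Nim’s combinations suggested.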

However, as Roger Brown (1973) argued, with regard to similar results for Washoe, an earlier case study of primate signing, “While appropriate order can be used as evidence for the intention to express semantic relations, the lack of such order does not establish the absence of such intentions” (p. 41). This is because the use of appropriate word order, of the verb-object type, for example, as in GIVE BALL, is not strictly necessary, since the context in which the utterances are generated usually resolves any ambiguity. That is to say, the pragmatics of the situation renders syntax redundant. But if that is the case, why do (human) children show evidence of a proto-syntax right from the start?

In the end, we don’t seem to be much the wiser as to whether the higher primates have a rudimentary LAD, despite all the anguish that was inflicted in trying to find one. Nor, for that matter, do we really know whether humans have an LAD either, or whether their faculty for language acquisition isn’t just a spin-off of their vastly more developed cognitive capacities.

What we do know is that the chimpanzees who have been studied do not use their linguistic capacities in the same way as humans, even very young ones, do. Nim, for example, rarely initiated a conversation, and was unable to grasp the basics of turn-taking. As Aitchison (1983, p. 57) concludes, “Nim did not use his signs in the structured, creative, social way that is characteristic of human children”.

In fact, Nim’s ‘language’ was simply a more elaborated version of the way chimpanzees use gestures and vocalizations in the wild: to regulate two-way social interactions such as grooming, feeding, and play. As Tomasello (2003, p. 11) puts it, nonhuman primate communication functions “almost exclusively for imperative motives, to request a behavior of others, not to share attention or information with others in a disinterested manner”.

As someone once said, “Your dog can tell you he is hungry, but not that his father was poor but happy”.

References:

Aitchison, J. (1983). The Articulate Mammal: An Introduction to Psycholinguistics (2nd edn). New York: Universe Books.

Brown, R. (1973). A First Language: The Early Stages. Cambridge, MA.: Harvard University Press.

Tomasello, M. (2003) Constructing a Language: A Usage-based Theory of Language Acquisition. Cambridge, MA.: Harvard University Press.





V is for Variability

17 07 2011

“O, swear not by the moon, the inconstant moon,
That monthly changes in her circled orb,
Lest that thy love prove likewise variable.”

(Romeo & Juliet)

I have Shakespeare on the brain at the moment, having splashed out on tickets for three of the five shows that the Royal Shakespeare Company is putting on in New York this summer.

And, just by chance, I came across a fascinating book on Shakespeare’s grammar, first published in 1870, which details not only the differences between Elizabethan and modern grammar, but also documents – even celebrates – the enormous variability in the former. As the author, E.A. Abbott, notes, in Elizabethan English, at least on a superficial view, “any irregularities whatever, whether in the formation of words or in the combination of words into sentences, are allowable” (Abbott, 1870/1966, p. 5).

He proceeds to itemise some of the inconsistencies of Shakespeare’s grammar: “Every variety of apparent grammatical inaccuracy meets us. He for him, him for he; spoke and took, for spoken and taken; plural nominatives with singular verbs;  relatives omitted where they are now considered necessary; … double negatives; double comparatives (‘more better,’ &c.) and superlatives; … and lastly, some verbs apparently with two nominatives, and others without any nominative at all” (p. 6).

As examples of how this variability manifests itself even within the same sentence, consider the following:

None are so surely caught when they are catch’d (Love’s Labour’s Lost)

Where is thy husband now? Where be thy brothers? Where are thy children? (Richard III)

If thou beest not immortal, look about you (Julius Caesar)

I never loved you much; but I ha’ prais’d ye (Antony and Cleopatra)

Makes both my body pine and soul to languish (Pericles)

Is there not wars? Is there not employment? (2 Henry IV)

Part of the RSC's summer season in NYC

Of course, we can attribute a lot of Shakespeare’s ‘errors’ to the requirements of prosody or to the negligence of typesetters. But many more may be due to what Abbott calls “the unfixed nature of the language”: “It must be remembered that the Elizabethan was a transitional period in the history of the English language” (p. 6). Hence the seemingly free variation between is and be, between thou forms and you forms, and between ye and you. Likewise, do-questions freely alternate with verb inversion:

Countess. Do you love my son?
Helena. Your pardon, noble mistress!
Countess. Love you my son?
Helena. Do not you love him, madam?

(All’s Well That Ends Well)

Elizabethan English was clearly in a state of flux, but is English any less variable now than it was in Shakespeare’s time, I wonder?  Think of the way adjective + -er comparatives are yielding to more + adjective  forms (see this comment on a previous post), or of the way past conditionals are mutating (which I wrote about here), or of the way I’m loving it is just the tip of an iceberg whereby stative verbs are becoming dynamic (mentioned in passing here). Think of the way the present perfect/past simple distinction has  become elided in some registers of American English (Did you have breakfast yet?) or how like has become an all-purpose quotative (He’s like ‘Who, me?’) or how going forward has become a marker of futurity.

That variation is a fact of linguistic life has long been recognised by sociolinguists. As William Labov wrote, as long ago as 1969:

“One of the fundamental principles of sociolinguistic investigation might simply be stated as There are no single-style speakers. By this we mean that every speaker will show some variation in phonological and syntactic rules according to the immediate context in which he is speaking” (1969, 2003, p. 234).

More recently, as seen through the lens of complex systems theory, all language use – whether the language of a social group or the language of an individual – is subject to constant variation. “A language is not a fixed system. It varies in usage over speakers, places, and time” (Ellis, 2009, p. 139). Shakespeare’s language was probably no more or less variable than that of an English speaker today. As Diane Larsen-Freeman (2010, p. 53) puts it: “From a Complexity Theory perspective, flux is an integral part of any system. It is not as though there was some uniform norm from which individuals deviate. Variability stems from the ongoing self-organization of systems of activity”. In other words, variability, both at the level of the social group and at the level of the individual, is not ‘noise’ or ‘error’, but an integral part of the system as it evolves and adapts.

If language is in a constant state of flux, and if there is no such thing as ‘deviation from the norm’ – that is to say, if there is no error, as traditionally conceived – where does that leave us,  as course designers, language teachers, and language testers? Put another way, how do we align the inherent variability of the learner’s emergent system with the inherent variability of the way that the language is being used by its speakers? If language is like “the inconstant moon/that monthly changes in her circled orb”, how do we get the measure of it?

In attempting to provide a direction, Larsen-Freeman (2010, p. 53) is instructive:

“We need to take into account learners’ histories, orientations and intentions, thoughts and feelings. We need to consider the tasks that learners perform and to consider each performance anew — stable and predictable in part, but at the same time, variable, flexible, and dynamically adapted to fit the changing situation. Learners actively transform their linguistic world; they do not just conform to it”.

References:

Abbott, E.A. 1870. A Shakespearean Grammar: An attempt to illustrate some of the differences between Elizabethan and Modern English. London: Macmillan; republished in 1966 by Dover Publications, New York.

Ellis, N. 2009. ‘Optimizing the input: frequency and sampling in usage-based and form-focused learning.’ In Long, M. & Doughty, C. (eds.) The Handbook of Language Teaching. Oxford: Wiley-Blackwell.

Labov, W. 1969. ‘Some sociolinguistic principles’. Reprinted in Paulston, C.B., & Tucker, G.R. (eds.) (2003) Sociolinguistics: The Essential Readings. Oxford: Blackwell.

Larsen-Freeman, D. 2010. ‘The dynamic co-adaptation of cognitive and social views: A Complexity Theory perspective’. In Batstone, R. (ed.) Sociocognitive Perspectives on Language Use and Language Learning. Oxford: Oxford University Press.





P is for Practicum

10 07 2011

Teaching practice, MA TESOL at The New School

As part of a Methods course I am teaching at the moment, I am observing teachers-in-training working with specially constituted classes of ‘guinea pig’ students.

Trainers who work on CELTA or DELTA courses, or on other pre- or in-service schemes, will be familiar with the teaching practice (or practicum) set-up. The trainee teachers plan their classes collaboratively, and then take turns to teach a segment of the overall lesson. The trainer (me, in this case) takes a corner seat, mutely observes the succession of ‘teaching slots’, and then conducts a joint feedback session with the trainee teachers either immediately afterwards, or on a subsequent day.

The more I do this, the more uncomfortable I feel with the process on at least two counts. One I’ll call logistical, and the other—for want of a better term—I’ll call existential.

First: the logistics. The trainer’s role, as silent, impassive observer, noting every move,  and delivering the feedback retrospectively, seems to run counter to what we now understand about skill acquisition. Cognitive learning theory has long recognised that feedback in ‘real operating conditions’—i.e. while you’re actually engaged in a task —is generally more powerful and more durable than feedback delivered after the event. More recently, a sociocultural perspective argues that skills are best learned through ‘assisted performance’, where the expert and the novice work collaboratively on a task, the former modelling and scaffolding the necessary sub-skills, and mediating the activity by means of well-placed interventions, such as commands, gestures, or gaze. In this way, and assuming an optimal state of readiness (aka the zone of proximal development) novices begin to appropriate the necessary skills, until they are capable of regulating them independently.

All this would seem to argue against the traditional practicum structure, with the trainer detached from the activity, and the feedback delivered ‘cold’. In fact, I’m finding that, on my present course, the sessions in which we ‘workshop’ lessons as a group in a micro-teaching format, with the trainees teaching their colleagues and me intervening as they do so, are both less stressful for the trainees and (I think) more productive in terms of their developmental outcomes. Here is an example of what I mean: a group has prepared a presentation of used to, and one of the team has volunteered to demonstrate it to the class.

The milling activity

Of course, micro-teaching lacks the authenticity of real classrooms, so the next step might involve taking a more interventionist role during the actual teaching practice, in the form, for example, of team-teaching, or of ‘coaching from the sidelines’, i.e. intervening more actively during the teaching practice lessons. In fact, I did this last week, gesticulating like a football coach in order to prompt the trainee who was teaching at the time to stop what he was doing and to pre-teach a question form, in advance of the milling activity that he was about to launch into. He got the hint, took the necessary steps, and the activity—I think—was all the better for it.

And now for the ‘existential’ problem, which goes much deeper. Sitting at the back of the room, or even intervening from the sidelines, I can’t help wondering what my role really is here. All these teachers I’m watching are so different, in terms of style, personality, experience, professional needs and aspirations, teaching contexts, and so on. And yet I get the sense I am trying to shoehorn them into a way of teaching that is very much ‘one-size-fits-all’.

Thinking back, I realise, uncomfortably, that, over the years that I have been working with teachers-in-training, my intentions as a trainer have always been more prescriptive than I would have admitted at the time. Initially, as a fairly inexperienced Director of Studies, these intentions took the form of wanting to turn my newly-trained teachers into clones of myself: “Do it like this (because this is the way I do it)”. Then, as a CELTA trainer, it was all about getting the trainees to teach in the way that the ‘method’ dictated. Of course, we used to deny that there was a ‘CELTA method’. It was all about eclecticism, surely. Looking back, I now realise that, if the CELTA course offered a range of methodological choices, this range was in fact fairly limited – very limited, even – given the way that a small set of global coursebooks determined (and still determines) the prevailing approach.

When I became an in-service trainer, working on DELTA courses, I paid lip-service to the notion that it was professional teacher development that should drive the agenda, and hence encouraged my trainees to look beyond the narrow confines of their CELTA ‘method’, to experiment, to reflect, and to adapt their teaching to their specific contexts. This, of course, ignored the fact that DELTA is an externally examined course, with a very clearly specified syllabus and success criteria – and, moreover, that the teachers are still using (and therefore are still constrained by) the same coursebooks.

Now, as I sit and watch and take notes I realise at least two things:

1. Whatever I say and do, these teachers will change only to the extent that their own beliefs, values, self-image, personality, previous experience etc will allow them; and

2. Whatever changes they do make, they will likely revert to their ‘default’ settings as soon as my back is turned. The teacher who is the entertainer, or the lecturer, or the football coach, or the social worker, will always be the entertainer, lecturer, football coach, etc.

Hence, all I can hope to do is help them become the best (= most effective, but also the most fulfilled) teacher that they themselves can possibly be – irrespective of how I myself teach, or whatever method is the flavour of the month, or whatever materials they happen to be using, or whatever context they happen to be teaching in.

And how do I do this?  Probably not by sitting at the back of the room and taking notes.





Q is for Queer

3 07 2011

As the sign suggests, with the passing of the same-sex marriage bill, it’s been a good time to be gay in New York. It’s also interesting – from a linguistic point of view – to track the effect that these social changes are having on language. The Merriam-Webster Online Dictionary, for example, has only just amended its definition of marriage in order to make it more inclusive.

I’m also intrigued by the (still relatively slow)  increase in the use of the collocations his husband and her wife – wordings that, as teachers, we might instinctively ‘correct’. The COCA corpus records 13 instances of the former, and 27 of the latter, but none before 1990 (see chart). Here are a couple of examples:

Occurrences of 'his husband' in the COCA Corpus (click to enlarge)

…whole day was unbelievable,” said Mr. Adams, who now lives with his husband, Fred Davie, 51, in Brooklyn. (NY Times, 2007)

Jules has aspirations toward starting a landscaping business while her wife, Nic (Annette Bening), works long hours and drinks too much wine (Esquire Magazine, 2010)

Interestingly, the much bigger (155 billion word) Google Books corpus documents examples of his husband from as early as it has records. E.g.

Sometimes he travelled the country with goods in the character of a married woman, having changed his maiden name for that of his husband who carried the pack. (1813)

The Google Books corpus, in conjunction with the handy ngram viewer, also allows us to plot the relative frequency of the terms gay and queer (see chart below) and to track the way that both terms have been reclaimed – resuscitated, even (although it would require a more fine-grained analysis to discriminate between the neutral and pejorative uses of both these words).
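For anyone curious about what the ngram viewer is actually plotting: a word’s ‘relative frequency’ is just its count for a given year divided by the total number of words the corpus contains for that year. A minimal sketch in Python, using invented counts (real Google Books yearly totals run to billions of tokens):

```python
# Hypothetical yearly counts for one word, and total tokens per year.
word_counts = {1900: 120, 1950: 300, 2000: 900}
year_totals = {1900: 2_000_000, 1950: 5_000_000, 2000: 9_000_000}

def relative_frequency(counts, totals):
    """Per-million-words frequency for each year, as an ngram plot shows it."""
    return {year: counts[year] / totals[year] * 1_000_000 for year in counts}

freqs = relative_frequency(word_counts, year_totals)
print(freqs)  # {1900: 60.0, 1950: 60.0, 2000: 100.0}
```

Normalising per year matters because the corpus itself grows over time; raw counts alone would make almost every word look as if it were on the rise.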

Gay? Queer? What’s the difference? While gay describes both a sexual preference and a life style – and therefore collocates mostly with men, marriage, rights, people, community (according to the COCA Corpus) –  queer connotes an attitude or stance. Its most frequent collocates are eye, nation, theory and studies.  Here’s how the Urban Dictionary defines queer: “Originaly [sic] meant strange or odd. Now stands for anyone who is sexualy [sic] different but may or may not mean gay. Queer covers any type of gender or sexual attitudes that are outside of the mainstream of one man one woman monogamy”.  In other words, queer is a reaction against what is called – in the literature – heteronormativity.
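Collocate figures like COCA’s come from a simple distributional procedure: count the words that turn up within a small window (typically four words either side) of the node word. A bare-bones Python sketch, run over an invented mini-corpus rather than COCA itself:

```python
from collections import Counter

def collocates(tokens, node, window=4):
    """Count words occurring within `window` tokens of each occurrence of `node`."""
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == node:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            counts.update(t for t in tokens[lo:hi] if t != node)
    return counts

# Invented mini-corpus, not COCA data.
text = ("queer theory queer studies queer eye for the straight guy "
        "queer theory gay rights gay marriage")
print(collocates(text.split(), "queer").most_common(3))
```

A real concordancer would also filter out function words like for and the, and would rank candidates by an association measure (mutual information, t-score) rather than raw frequency.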

Heteronormativity is defined as “the organisation of all patterns of thought, awareness and belief around the presumption of universal heterosexual desire, behaviour and identity” (Baker, 2008, p. 209). A good example of this came my way the other day, when I picked up a second-hand translation of André Gide’s somewhat bashful defence of homosexuality, Corydon, first published in 1911. As an aside, Gide cites, with some derision, the French translator of Walt Whitman’s poems, who re-works “the friend whose embracing awakes me…” as “l’amie [feminine] qui...” Gide adds that the translator’s “desire to draw his hero [i.e. Whitman] onto the side of heterosexuality is so great, that when he translates ‘the heaving sea’ he finds it necessary to add ‘like a woman’s bosom'” (p. 195). This is heteronormativity taken to ludicrous extremes.

(I’m wondering – very quietly – if the trend to describe one’s same-sex married partner as my husband/wife is not also a teeny-weeny bit heteronormative. But hey).

Unsurprisingly, given their global remit, heteronormativity is rife in ELT coursebooks too.  But I’ve discussed this before so I’m not going to wade in again. Besides, I suspect it’s a lost cause. Instead, I want to take a quick look at another queer collocation: queer pedagogy.

Relative frequency of gay & queer over two centuries (click to enlarge)

Queer pedagogy is a development from feminist pedagogy, in itself heavily influenced by critical pedagogy. On feminist pedagogy in ESL, Crookes (2009, p. 193) quotes Vandrick (1994) and her call for a pedagogy in which the classroom ideally functions as a “liberatory environment, in which students also teach, and are subjects not objects; and in which consciousness could be changed, and the old weaknesses (racism, classism, homophobia, etc.) expelled”. Crookes comments that the practical implications of these goals would require teachers to foreground group process skills, cooperation, networking and being inclusive.

By extension, a queer pedagogy (according to the entry in Wikipedia) also “explores and interrogates the student/teacher relationship, the role of identities in the classroom, the role of eroticism in the teaching process, the nature of disciplines and curriculum, and the connection between the classroom and the broader community with a goal of being both a set of theoretical tools for pedagogical critique … and/or a set of practical tools for those doing pedagogical work”.

Is it too fanciful to suggest that Dogme ELT, in aligning with these goals, and in the way that it attempts to position itself in contradistinction to the ‘normal’ in language teaching, might also be a little bit queer?

References:

Baker, P. (2008) Sexed Texts: Language, Gender and Sexuality. London: Equinox.

Crookes, G. (2009) Values, Philosophies, and Beliefs in TESOL: Making a Statement. Cambridge: Cambridge University Press.

Gide, A. (1950) Corydon. New York: Farrar, Straus & Co.

Vandrick, S. (1994) Feminist pedagogy and ESL. College ESL, 4/2, 69-92.





E is for Eliciting

26 06 2011

"Guess what I'm thinking"

Why do I have an allergic reaction to eliciting? Why do teacher-led question-and-answer sequences that go like this bring me out in a rash?

T:  Look at this picture. How many people can you see?

St 1: Two

T: Good. They are a man and a ….?

St 2: Woman.

T:  Good. What might their relationship be?

St 2:  Friends?

T:  No.

St 3: Husband and wife?

T: No.

St 4: Brother and sister?

T: No.

St 5: Co-authors of a field guide to Bulgarian mushrooms?

T: Yes.  And what might they be saying to each other?… etc , etc, ad nauseam.

I seldom see students really engaged by this kind of routine. On the contrary, they are often either wary or truculent, trying to second-guess where this relentless line of questioning is taking them. Worse, it’s often at the beginning of an activity, such as the preamble to a listening or reading task, that you find these eliciting sequences, and there’s nothing more calculated to put the learners in a bad mood than being asked to guess in public. I always advise my trainee teachers to avoid, at all costs, starting an observed lesson with an eliciting sequence: it’s the kiss of death. Instead, ask the learners a few real questions (How was your day?). Or tell them something interesting about yourself, and then see how they respond. Maybe they will tell you something interesting back.

Curiously, in the literature on classroom talk, eliciting-type questions, like the ones above, are often wrongly categorised as display questions.  In contrast to real questions (i.e. questions, like What did you do at the weekend?, which are motivated by a genuine desire to plug a gap in the asker’s knowledge), display questions are questions that the teacher knows the answer to, but which invite students to display their knowledge, as in What’s the capital of Peru? Eliciting-type questions, on the other hand, typically require the learners, not to display what they know, but to guess what they don’t.  Eliciting sequences, at their worst, resemble a surreal game-show where contestants speculate as to what the conjuror is hiding up his sleeve. Or a game of charades with ill-defined rules.

"One word, two syllables..."

Of course, the intention behind eliciting is a worthy one: it serves not only to maximise speaking opportunities, but to involve the learners actively in the construction of knowledge, building from the known to the unknown. In the case of genuine display questions (What is the past of go?), eliciting helps diagnose the present state of the learners’ knowledge.  And, in a sense, it models the cut-and-thrust of real interaction, where conversational turns are contingent upon one another. Not for nothing were these eliciting sequences called ‘conversations’ in early Direct Method textbooks. Eliciting is now (wrongly, in my opinion) re-branded as either dialogic teaching or scaffolding.

On pre-service training courses, it makes a certain sense that trainee teachers are encouraged to elicit in preference to what is often the default, delivery mode of presentation, where the teacher simply lectures. To be fair, eliciting is not quite as mind-numbing as prolonged sequences of chalk-and-talk (or what, in this age of interactive whiteboards, might better be called tap-and-rap). But, like many good things, eliciting is horribly over-used.

A friend, who, like most Spanish-speakers,  has spent many years in English language classrooms, had this to say about it:

“It’s that task at the beginning of the unit that I really hate, when  the teacher comes and shows you a photo and asks you Who are these people and what do you think are they going to do?  And the answer is that these people are models and they have been posing for this photo — that is the real answer — but the teacher — what they want us to invent is a certain story that only the teacher knows the answer to, so it ends up being more a game than an English class”.

Does eliciting carry over into real life, I sometimes wonder? Do such teachers go home to their loved ones and say “Hello, darling. Where might I have been? What sort of day might I have had? What might I be feeling like?…”





A is for Aspect (2)

19 06 2011

In this second short video on the English tense and aspect system, I take a look at perfect aspect.