An A-Z of ELT

8 12 2009

In 2006 I wrote An A-Z of ELT – an encyclopedia-dictionary of terminology relating to the English language and to English language teaching. As soon as it was published (by Macmillan) I was already planning an update. Hence this blog, which I used as a means both of revising and updating existing entries and of creating new ones.

I’m pleased to say that the new edition of An A-Z of ELT has now been published – called The New A-Z of ELT – informed in no small part by this blog. In anticipation of yet another edition, let the blogging continue!
Some of the most popular posts on this blog have been re-worked in the form of an e-book, called Big Questions in ELT, which is published by The Round.





E is for Emergence

23 07 2017

“Out of the slimy mud of words … there spring[s] the perfect order of speech” (T.S. Eliot).

Eliot’s use of the verb ‘spring’ suggests that language emerges instantly and fully-formed, like a rabbit out of a hat. Historical linguists, sociolinguists and researchers into language acquisition (both first and second) suggest that the processes of language evolution and development are slower – and messier. To capture this messy, evolving quality, many scholars enlist the term emergence.

In what sense (or senses), then, does language emerge? There are at least three dimensions along which language, and specifically grammar, can be said to be emergent: over historical time; in the course of an individual’s lifetime; and in the moment-to-moment interactions in the language classroom.

Languages emerge over time. Pidgins, for example, emerge out of the contact between people with mutually unintelligible mother tongues. Creoles emerge when these pidgins are acquired as a first language by children in pidgin-speaking communities. English itself is the product of creolizing processes, as speakers of different local dialects came into contact with each other and with successive waves of invaders. Some argue that ELF – English as a lingua franca – is yet another instance of an emergent variety.

Because, of course, English continues to evolve. The emergence of the future marker ‘going to’ is a case in point: in Shakespeare’s day, if you were ‘going to meet someone’ you were literally moving in the direction of the projected meeting place. Over the course of a century or so, ‘going to’ became a metaphorical way of expressing a future intention. By the twentieth century it had further metamorphosed into the contracted form ‘gonna’. Such changes do not happen overnight, nor are they ordained by some higher authority or by some genetic disposition. Arguably, everything we call grammar has emerged through similar processes, whereby lexical words become ‘grammaticalized’ to perform certain needed functions, and then, through repeated use, become established in a speech community. According to this view, ‘grammar is seen as … the set of sedimented conventions that have been routinized out of the more frequently occurring ways of saying things’ (Hopper 1998: 163).

Language emerges, too, in the course of an individual’s lifetime, primarily their infancy, as argued by proponents of usage-based theories of language acquisition – those theories that propose that linguistic competence is the product of an individual’s innumerable experiences of language in use.  As Nick Ellis (1998, p. 657) puts it:

Emergentists believe that simple learning mechanisms, operating in and across the human systems for perception, motor-action and cognition as they are exposed to language data as part of a communicatively-rich human social environment by an organism eager to exploit the functionality of language, suffice to drive the emergence of complex language representations.

These ‘rule abstraction’ processes have been modelled using connectionist networks, i.e. computerized simulations of the way neural pathways are sensitive to frequency information and are strengthened accordingly, to the point that they display rule-like learning behaviours – even when they have no prior grammatical knowledge (Ellis et al. 2016).
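By way of illustration only, here is a toy sketch of my own (not the actual simulations reported in Ellis et al. 2016): a tiny network with no built-in grammar, whose connection weights are nudged a little on every exposure to an invented verb form, ends up treating a novel verb ‘rule-fully’, purely as a result of frequency-sensitive strengthening.

    import numpy as np

    rng = np.random.default_rng(0)

    # Invented toy data: each 'verb stem' is a bundle of sound features
    # [voiceless-final?, voiced-final?, ends in t/d?], paired with the past-tense
    # allomorph it takes (0 = /t/, 1 = /d/, 2 = /id/), as in walked / played / needed.
    usage_events = [
        (np.array([1., 0., 0.]), 0),   # 'walk'-type stems -> /t/
        (np.array([0., 1., 0.]), 1),   # 'play'-type stems -> /d/
        (np.array([0., 1., 1.]), 2),   # 'need'-type stems -> /id/
    ]

    W = np.zeros((3, 3))               # connection strengths all start at zero: no prior grammar

    def respond(stem):
        scores = W @ stem
        exp = np.exp(scores - scores.max())
        return exp / exp.sum()         # how strongly the network favours each allomorph

    # Thousands of usage events; each one strengthens the connections between the
    # features just heard and the outcome that followed (an error-driven delta rule).
    for _ in range(5000):
        stem, allomorph = usage_events[rng.integers(len(usage_events))]
        W += 0.1 * np.outer(np.eye(3)[allomorph] - respond(stem), stem)

    # A 'new' verb never met as a word, but whose final sound shares features with
    # the 'walk' type: it gets the regular /t/ treatment, as if by rule.
    print(respond(np.array([1., 0., 0.])).round(2))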

In other words, the system continuously upgrades itself using general  (rather than language-specific) learning faculties, a view that challenges ‘innatist’ theories of language acquisition, as argued by – among others – Steven Pinker in The language instinct (1994).

From a complex systems perspective, the emergent nature of language learning is consistent with the view that, as John Holland (1998, p. 3) puts it: ‘a small number of rules or laws can generate systems of surprising complexity,’ a capacity that is ‘compounded when the elements of the system include some capacity, however elementary, for adaptation or learning’ (p. 5). While humans have this capacity, they are also constrained in terms of how information (in the form of language) can be processed in real time, and these constraints explain why languages share common features (so-called language universals) which, as Christiansen and Chater (2016) argue, are simply tendencies, ‘rather than the rigid categories of [Universal Grammar]’ (p.87).

Finally, language emerges in second language learning situations, especially when learners are engaged in communicative interaction. The learner talks; others respond. It is the scaffolding and recasting, along with the subsequent review, of these learner-initiated episodes that drives acquisition, argue proponents of task-based instruction, with which Dogme ELT is, of course, aligned. ‘In other words, the emphasis shifts from the traditional interventionist, proactive, modelling behaviour of synthetic approaches to a more reactive mode for teachers – students lead, the teacher follows’ (Long, 2015, p. 70). Or, as Michael Breen (1985) so memorably put it: ‘The language I learn in the classroom is a communal product derived through a jointly constructed process.’

A recent book that attempts to unify the different dimensions of emergence – the historical, the biographical and the moment-by-moment – enlists a felicitous metaphor:

‘The quasi-regular structure of language arises in rather the same way that a partially regular pattern of tracks comes to be laid down through a forest, through the overlaid traces of endless animals finding the path of local least resistance; and where each language processing episode tends to facilitate future, similar, processing episodes, just as an animal’s choice of a path facilitates the use of that path for animals that follow’ (Christiansen & Chater, 2016, p. 132).
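The metaphor can even be animated in a few lines of code. The toy simulation below is my own gloss on the passage, not anything from the book: ‘animals’ choose among routes in proportion to how worn each one already is, and every crossing wears its route a little more, so that a quasi-regular pattern of well-trodden tracks emerges from nothing but accumulated use.

    import random

    random.seed(1)
    wear = [0.1] * 5                      # five possible routes, all equally faint at first

    for _ in range(1000):                 # a thousand crossings of the forest
        r = random.uniform(0, sum(wear))  # routes are chosen in proportion to their wear
        cumulative = 0.0
        for i, w in enumerate(wear):
            cumulative += w
            if r <= cumulative:
                wear[i] += 1.0            # each crossing makes that route easier to follow
                break

    # Typically one or two routes end up carrying nearly all the traffic,
    # while the rest stay faint: regularity laid down by use alone.
    print([round(w) for w in wear])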

Is teaching, then, simply a matter of guiding the learners to find the tracks laid down by their predecessors?

References

Breen, M. (1985). The social context for language learning – a neglected situation? Studies in Second Language Acquisition, 7.

Christiansen, M.H. & Chater, N. (2016) Creating language: integrating evolution, acquisition and processing. Cambridge, Mass: MIT Press.

Ellis, N. (1998) Emergentism, connectionism and language learning. Language Learning, 48/4.

Ellis, N., Römer, U. & O’Donnell, M.B. (2016) Usage-based approaches to language acquisition and processing: Cognitive and corpus investigations of construction grammar. Oxford: Wiley.

Holland, J. H. (1998) Emergence: From chaos to order. Oxford: Oxford University Press.

Hopper, P.J. (1998) ‘Emergent language’ in M. Tomasello, (ed.) The New Psychology of Language: Cognitive and Functional Approaches to Language Structure. Mahwah, NJ.: Lawrence Erlbaum.

Long, M. (2015) Second language acquisition and task-based language teaching. Oxford: Wiley-Blackwell.





C is for Commodification

16 07 2017

(Or P is for Profit)

Let’s say you identify a large and untapped market for a product that you manufacture. Unfortunately the market is in one of the world’s most economically depressed areas. In order to capture and monopolize the market you need to be able to deliver your product at the lowest possible cost to the maximum number of consumers. The smaller the unit of sale, but the more of them, the better. Aggressive marketing will be needed, of course, to persuade a sceptical and precarious client-base to sign up – and stay signed up. And those who are delivering the product should be paid as little as you can get away with: if they are relatively untrained, so much the better.

Soft drinks and cigarettes have been marketed to developing countries like this for decades. Now it is the turn of education. A recent report in The New York Times describes how a chain of low-cost private schools called Bridge International Academies has co-opted the practices of commodification to profit from the dire state of education in many parts of the developing world. As The Times reports:

It was founded in 2007 by [Shannon] May and her husband, Jay Kimmelman, along with a friend, Phil Frei. From early on, the founders’ plans for the world’s poor were audacious. ‘‘An aggressive start-up company that could figure out how to profitably deliver education at a high quality for less than $5 a month could radically disrupt the status quo in education for these 700 million children and ultimately create what could be a billion-dollar new global education company,’’ Kimmelman said in 2014.

Notice the key collocation that captures the essence of the business model: ‘to profitably deliver…’

The way they do this is to employ untrained teachers, give them a crash course, pay them less than public school teachers to work longer hours (which include recruitment drives among the local population), and then ‘deliver’ them their lesson plans by means of e-readers – lesson plans which are written by teams of content writers in the US who have never been near the local context. As the NYT describes it:

The e-reader all but guarantees that every instructor, despite his or her education or preparation level, has a lesson script ready for every class — an important tool in regions where teachers have few resources. But scripts can be confining, some teachers told me. And in some of the 20 or so Bridge classrooms I observed, pupils occasionally asked questions, but Bridge instructors ignored them. Teachers say that they are required to read the day’s script as written or risk a reprimand or eventual termination, and they do not have time to entertain questions. Bridge says that ‘‘teachers are required to reference the day’s teachers’ guide and to diligently work to ensure all material is covered in each lesson.’’

The Times correspondent was lucky enough to witness a lesson (reporters are discouraged from entering Bridge schools):

Inside the Bridge school in Kiserian, an hour’s drive from central Nairobi, students wore the same green uniforms and sat at attention behind the same rough wooden desks I saw in Kawangware. In front of a blackboard, a preschool teacher, Gladys Ngugi Nyambara, a thin woman also dressed in bright green, held a Bridge ‘‘teacher computer’’ that contained a recently downloaded lesson script on recognizing the ‘‘F’’ sound in common English words. Nyambara held up a picture of a fish and saw these words on the e-reader’s screen: What is this? (signal) Fish.

She gestured toward the class with the picture and delivered the line as precisely as she could. ‘‘What is this?’’ She snapped her fingers. ‘‘FEESSH.’’ She surveyed the 26 expectant faces in front of her. Her eyes went back to the script on the gray rectangular tablet. Listen. Say it the slow way. FISH. She followed the prompt. ‘‘Listen, class. This is a FEESSH.’’

There was a pause, and the teacher leaned over the e-reader. Our turn. Pupils say it the slow way. (signal) Fish. ‘‘Class, your turn.’’ She snapped her fingers again. ‘‘What is this?’’

After some uncertainty over whether to use ‘‘this’’ or ‘‘that,’’ the children began to dutifully respond. ‘‘This is a FEEEESH.’’

Nyambara pressed on, repeating the call-and-response five more times. ‘‘This is a FEESH. Now class?’’ Snap. ‘‘This is a FEESH,’’ responded the children, their voices moving from uncertainty to singsong, pleased to be catching on.

Needless to say, the delivery model has attracted some major corporate players who are already heavily invested in the economics of digitally-mediated commodification – Bill Gates, the Chan Zuckerberg Initiative and Pearson, to name just a few. As The Times notes, ‘the company’s pitch [is] tailor-made for the new generation of tech-industry philanthropists, who are impatient to solve the world’s problems and who see unleashing the free market as the best way to create enduring social change.’

Hands Up

Contrast this with a project that is the polar opposite of Bridge in spirit, intent and educational philosophy, but which also addresses the needs of children (without disempowering their teachers) in very difficult circumstances. Nick Bilbrough’s initiative, the Hands-Up Project, uses simple technology (Skype, Zoom) not to deliver commodified lesson ‘McNuggets’ at a price, but to freely create opportunities for learners to interact and be creative in English, supporting them and their teachers in such deprived situations as the Gaza Strip and Syrian refugee camps in Jordan.

Because it is not designed to make a profit, it has not attracted the attention of Bill Gates or Pearson, needless to say. But watch any of the videos that Nick has posted on his blog and you cannot help but be moved by the level of engagement – not to say the level of English – of these children.

How can we enlist more support for this project, without ‘unleashing the free market’ and the forces of commodification, I wonder?





I is for Interdisciplinarity

9 07 2017

It’s probably not surprising that two shows I went to in New York this month were serendipitously connected. One was an outdoor performance of a piece by John Cage for prepared piano. The other was the current Robert Rauschenberg exhibit at the Museum of Modern Art (see link here). I say not surprising, because both artists lived and worked in New York at some point in their trajectories. (In fact, Cage taught at The New School where I am currently based). More significantly, both taught and collaborated at Black Mountain College in North Carolina in the early fifties, a collaboration which is celebrated and documented in the MoMA exhibition. The famous but unrecorded Theater Piece No. 1 that they both mounted in 1952, in collaboration with other Black Mountain stalwarts, such as the dancer and choreographer Merce Cunningham, the poet Charles Olson and the pianist David Tudor (playing on a prepared piano), is generally credited as being the precursor of the ‘happening’.

 


a prepared piano

 

Black Mountain College was an independent residential school set up in 1933, staffed by, among others, a number of artists and intellectuals fleeing fascism in Europe. It offered an experimental liberal arts education that was inspired in part by John Dewey’s notion of experiential learning (Dewey himself served as an advisor for a time). There was no predetermined curriculum – students were encouraged to design their own courses – and equal weight was given to both the sciences and the arts.

As Lehmann (2015, p. 102) describes it, ‘Experimentation served not only as the dominant method of learning and teaching, but also as a means of developing artistic skills, which were explicitly held to be learnable by everyone’.

One of its most influential teachers was Josef Albers, its professor of art, who had previously taught at the Bauhaus in Berlin: his pedagogical approach is what we might now call task- or activity-based, i.e. an approach that begins with experimentation and where the teacher intercedes only at the point of need. Asked what kind of teachers he envisaged, he replied, ‘I would like to have professors of carpentry but I would say “Let the freshmen make all the mistakes and then let the professor of carpentry show him how to do it!” … Give them freedom first.’ (quoted in Blume et al. 2015, p. 140).

 


Rauschenberg’s goat

Fundamental to the Black Mountain experience was its cross-curricular philosophy, i.e. its interdisciplinarity, a tradition inherited from the Bauhaus, whose mission was ‘to abolish the institutionally calcified separation between creative disciplines’ (Eggelhöfer 2015, p. 111). One way that the distinctions between subject areas were elided was through collaborative projects which drew on a multiplicity of skills. Theater Piece No. 1 was a case in point. (A recent exhibition on Black Mountain College at the Nationalgalerie in Berlin was called Black Mountain: An interdisciplinary experiment.)

 

The interdisciplinary and task-based approach to education pioneered at Black Mountain survives – or has been revived – in two very different contexts, as reported recently in the press and social media.

In Finland, a major reform of an already highly-rated educational system involves a transversal approach to curriculum design, whereby interdisciplinary projects require students to draw on a range of subject areas in what is called ‘phenomenon-based’ education. Contrary to some press reports, this does not mean dismantling the subject-based curriculum entirely. As one Finnish educator describes it:

What will change in 2016 is that all basic schools for seven to 16-year-olds must have at least one extended period of multi-disciplinary, phenomenon-based teaching and learning in their curricula. The length of this period is to be decided by schools themselves.

The rationale is spelled out thus:

What Finnish youth need more than before are more integrated knowledge and skills about real world issues, many argue. An integrated approach, based on lessons from some schools with longer experience of that, enhances teacher collaboration in schools and makes learning more meaningful to students.

Presumably, the learning of foreign languages, such as English, is a candidate for such integration.

Also in Europe, but in a less privileged context, a public primary school in Barcelona attracted media attention recently after winning a prestigious prize for its pedagogical approach. The Joaquim Ruyra School, in a predominantly working-class suburb where nine in every ten students are the children of immigrants, has been outscoring local schools, including some upmarket private schools, on tests in a range of skills, English language being just one. Its approach is essentially activity-based: groups of students work through a cycle of tasks over one lesson, each group working on a different task for twenty minutes before moving on to the next. The teacher, working with volunteers – mostly family members – supervises the tasks and elicits an evaluation of each task’s outcomes. Tasks typically involve collaborative problem-solving and guided discovery, and, while the traditional division between subjects hasn’t been collapsed, the tasks (I imagine) involve deploying a far wider range of cognitive and linguistic skills than do the more mechanical exercises associated with test-driven, delivery-style modes of teaching.


The Joaquim Ruyra school (from El Mundo)

 

Both the Finnish and Catalan experiments are consistent with the Black Mountain College principles that challenge traditional curricular structures – specifically the tight division into subjects, and lockstep, transmissive teaching.

As far as I know, there was no language teaching at Black Mountain. Had there been, I wonder what it would have been like?

 

References

Blume, E., Felix, M., Knapstein, G., and Nichols, C. (eds) (2015) Black Mountain: An interdisciplinary experiment 1933-1957. Berlin: Spector Books.

Eggelhöfer, F. (2015) ‘Processes instead of results: what was taught at the Bauhaus and Black Mountain College,’ in Blume, et al (eds).

Lehmann, A. J. (2015) ‘Pedagogical practices and models of creativity at Black Mountain College’, in Blume, et al. (eds).

 





M is for Machine translation

2 07 2017

(Or: How soon will translation apps make us all redundant?)

Arrival Movie

An applied linguist collecting data

In a book published on the first day of the new millennium, futurologist Ray Kurzweil (2000) predicted that spoken language translation would be common by the year 2019 and that computers would reach human levels of translation by 2029. It would seem that we are well on track. Maybe even a few years ahead.

Google Translate, for example, was launched in 2006, and now supports over 100 languages, although, since it draws on an enormous corpus of already translated texts, it is more reliable with ‘big’ languages, such as English, Spanish, and French.

A fair amount of scorn has been heaped on Google Translate but, in the languages I mostly deal with, I have always found it fairly accurate. Here for example is the first paragraph of this blog translated into Spanish and then back again:

En un libro publicado el primer día del nuevo milenio, el futurólogo Ray Kurzweil (2000) predijo que la traducción hablada sería común para el año 2019 y que las computadoras llegarían a niveles humanos de traducción para 2029. Parecería que estamos bien en el camino. Tal vez incluso unos años por delante.

In a book published on the first day of the new millennium, futurist Ray Kurzweil (2000) predicted that the spoken translation would be common for 2019 and that computers would reach human translation levels by 2029. It would seem we are well on the road. Maybe even a few years ahead.
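For anyone who wants to run the same round-trip test on their own texts, here is a minimal sketch. The translate() callable is a stand-in of my own: plug in whichever machine translation service you have access to (Google’s, DeepL’s, or an unofficial wrapper), whose actual client APIs are not reproduced here.

    from typing import Callable

    # Type of the stand-in translator: (text, source_language, target_language) -> text
    Translator = Callable[[str, str, str], str]

    def round_trip(text: str, translate: Translator, src: str = "en", pivot: str = "es") -> str:
        """Translate into the pivot language and back again, to see what survives."""
        there = translate(text, src, pivot)
        return translate(there, pivot, src)

    if __name__ == "__main__":
        # Dummy identity 'translator' so the sketch runs on its own; swap in a real service.
        dummy: Translator = lambda text, source, target: text
        original = "Maybe even a few years ahead."
        print(round_trip(original, dummy))   # a real service will usually return a paraphrase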

Initially text-to-text based, Google Translate has more recently been experimenting with a ‘conversation mode’, i.e. speech-to-speech translation, the ultimate goal of machine translation – and memorably foreshadowed by the ‘Babel fish’ of Douglas Adams (1995): ‘If you stick a Babel fish in your ear you can instantly understand anything said to you in any form of language.’

The boffins at Microsoft and Skype have been beavering away towards the same goal: to produce a reliable speech-to-speech translator in a wide range of languages. For a road test of Skype’s English-Mandarin product, see here: https://qz.com/526019/how-good-is-skypes-instant-translation-we-put-it-to-the-chinese-stress-test/

The verdict (two years ago) was less than impressive, but the reviewers concede that Skype Translator will ‘only get better’ – a view echoed by The Economist last month:

Translation software will go on getting better. Not only will engineers keep tweaking their statistical models and neural networks, but users themselves will make improvements to their own systems.

Mention of statistical models and neural networks reminds us that machine translation has evolved through at least three key stages since its inception in the 1950s. First was the ‘slot-filling’ stage, whereby individual words were translated and plugged into syntactic structures selected from a built-in grammar. This less-than-successful model was eventually supplanted by statistical models, dependent on enormous databases of already translated text, which were rapidly scanned using algorithms that sought out the best possible phrase-length fit for a given word. Statistical Machine Translation (SMT) was the model on which Google Translate was initially based. It has been successful up to a point, but – since it handles only short sequences of words at a time – it tends to be less reliable dealing with longer stretches of text.
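To make the ‘phrase-length fit’ idea concrete, here is a drastically simplified sketch of phrase-based lookup, with an invented four-entry phrase table. It is not how SMT systems are actually implemented (they weigh many candidate segmentations statistically), but it shows why short, frequent word sequences are handled well while longer stretches of text are not.

    # An invented, four-entry 'phrase table' mapping English chunks to Spanish ones.
    phrase_table = {
        ("going", "to"): "voy a",
        ("going",): "yendo",
        ("meet",): "encontrar a",
        ("someone",): "alguien",
    }

    def translate(sentence: str, max_phrase_len: int = 3) -> str:
        words, i, output = sentence.lower().split(), 0, []
        while i < len(words):
            # Prefer the longest stored phrase starting at this word, then shorter ones.
            for length in range(min(max_phrase_len, len(words) - i), 0, -1):
                chunk = tuple(words[i:i + length])
                if chunk in phrase_table:
                    output.append(phrase_table[chunk])
                    i += length
                    break
            else:
                output.append(words[i])   # unknown word: passed through untranslated
                i += 1
        return " ".join(output)

    print(translate("going to meet someone"))   # -> 'voy a encontrar a alguien'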

Star Trek translator

An early translation app

More recently still, so-called neural machine translation (NMT), modelled on neural networks, attempts to replicate mental processes of text interpretation and production. As Microsoft describes it, NMT works in two stages:

  • A first stage models the word that needs to be translated based on the context of this word (and its possible translations) within the full sentence, whether the sentence is 5 words or 20 words long.
  • A second stage then translates this word model (not the word itself but the model the neural network has built of it), within the context of the sentence, into the other language.
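The two stages described above correspond roughly to the ‘encoder-decoder’ design on which most NMT systems are built: one network builds a representation of the source sentence in context, and a second generates the target sentence from that representation. The sketch below (in PyTorch, with invented vocabulary sizes and word indices) is only an illustration of that general design, not Microsoft’s or Google’s actual system.

    import torch
    import torch.nn as nn

    SRC_VOCAB = TGT_VOCAB = 12          # toy vocabulary sizes (invented for the sketch)
    EMB, HID = 16, 32

    class Encoder(nn.Module):           # stage 1: model the source words in sentence context
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(SRC_VOCAB, EMB)
            self.rnn = nn.GRU(EMB, HID, batch_first=True)
        def forward(self, src):                      # src: (batch, source_length)
            _, hidden = self.rnn(self.embed(src))    # hidden state summarises the sentence
            return hidden

    class Decoder(nn.Module):           # stage 2: generate target words from that model
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(TGT_VOCAB, EMB)
            self.rnn = nn.GRU(EMB, HID, batch_first=True)
            self.out = nn.Linear(HID, TGT_VOCAB)
        def forward(self, tgt, hidden):              # tgt: (batch, target_length)
            output, hidden = self.rnn(self.embed(tgt), hidden)
            return self.out(output), hidden          # a score for every target word, at each step

    # One training step on a single invented sentence pair (sequences of word indices).
    enc, dec = Encoder(), Decoder()
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=0.01)
    src = torch.tensor([[1, 4, 7, 2]])               # e.g. 'the ship sank <eos>' as indices
    tgt = torch.tensor([[1, 5, 8, 2]])               # its (invented) translation as indices
    opt.zero_grad()
    scores, _ = dec(tgt[:, :-1], enc(src))           # predict each target word from the previous ones
    loss = nn.functional.cross_entropy(scores.reshape(-1, TGT_VOCAB), tgt[:, 1:].reshape(-1))
    loss.backward()
    opt.step()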

Because NMT systems learn on the job and have a predictive capability, they are able to make good guesses as to when to start translating and when to wait for more input, and thereby reduce the lag between input and output.  Combined with developments in voice recognition software, NMT provides the nearest thing so far to simultaneous speech-to-speech translation, and has generated a flurry of new apps. See for example:

https://www.youtube.com/watch?v=ZAHfevDUMK4

https://www.cnet.com/news/this-in-ear-translator-will-hit-eardrums-next-month-lingmo/

One caveat to balance against the often rapturous claims made by their promoters is that many of these apps are trialled using fairly routine exchanges of the type Do you know a good sushi restaurant near here? They need to be able to prove their worth in a much wider variety of registers, both formal and informal. Nevertheless, Kurzweil’s prediction that speech-to-speech translation will be commonplace in two years’ time looks closer to being realized. What, I wonder, will it do to the language teaching industry?

As a footnote, is it not significant that developments in machine translation seem to have mirrored developments in language acquisition theory in general, and specifically the shift from a  focus primarily on syntactic processing to one that favours exemplar-based learning? Viewed from this perspective, acquisition – and translation – is less the activation of a pre-specified grammar, and more the cumulative effect of exposure to masses of data and the probabilistic abstraction of the regularities therein. Perhaps the reason that a child – or a good translator – never produces sentences of the order of Is the man who tall is in the room? or John seems to the men to like each other (Chomsky 2007) is not because these sentences violate structure-dependent rules, but because the child/translator has never encountered instances of anything like them.

References

Adams, D. (1995) The hitchhiker’s guide to the galaxy. London: Heinemann.
Chomsky, N. (2007) On Language. New York: The New Press.
Kurzweil, R. (2000) The Age of Spiritual Machines: When Computers Exceed Human Intelligence.  Penguin.

 





M is for Manifesto

25 06 2017

If you get a chance to see Julian Rosefeldt’s movie Manifesto, starring Cate Blanchett, do – if for no other reason than to see Blanchett at the top of her form, playing 13 different roles and as many accents, to often hilarious effect. (You can see the trailer here).

Originally conceived as an art gallery video installation, it has now been spliced together as an art-house movie. Each of its thirteen segments has Blanchett reciting and/or enacting a manifesto, or a cluster of related manifestos, that launched various 20th century art movements: Dadaism, Futurism, the Situationists, Surrealism, etc. The Pop Art manifesto, for example, takes the form of Blanchett, with a broad Southern accent, saying grace in advance of a turkey dinner, while her long-suffering family roll their eyes at each successively outrageous pronouncement, taken verbatim from Claes Oldenburg’s 1961 text ‘I am for an art…’: “I am for an art that is political-erotical-mystical, that does something other than sit on its ass in a museum….I am for the art of punching and skinned knees and sat-on bananas. I am for the art of kids’ smells. I am for the art of mama-babble…” and so on. And on.

But my favorite sequence has to be the one near the end, about film, in which Blanchett plays a primary school teacher with a pitch-perfect ‘teacherly’ voice, talking her class through the Dogme 1995 manifesto. Hovering over the kids as they complete an assignment, she gently corrects one of them: “Shooting must be done on location.” And another: “The camera must be handheld.”

The Dogme 1995 film manifesto, apparently drafted over a bottle of red wine by Lars von Trier and a handful of his Scandinavian film-making buddies, was, of course, the stimulus for the Dogme ELT manifesto. The scene in Manifesto prompted me to revisit both. Here, for the record, are four of the 10 ‘vows’ that adherents to the Dogme film movement were expected to comply with:

1. Shooting must be done on location. Props and sets must not be brought in (if a particular prop is necessary for the story, a location must be chosen where this prop is to be found).

2. The sound must never be produced apart from the images or vice versa. (Music must not be used unless it occurs where the scene is being shot). […]

7. Temporal and geographical alienation are forbidden. (That is to say that the film takes place here and now.) […]

10. The director must not be credited.

Motivated by a similar desire to ‘rescue’ teaching from the clutches of the grammar syllabus, as enshrined in coursebooks, and all the associated pedagogical paraphernalia that goes with it, I drafted an (intentionally provocative) Dogme ELT manifesto which clearly echoes both the style and spirit of the von Trier one, and takes the form of ten ‘vows’ (Thornbury 2001):

  1. Teaching should be done using only the resources that teachers and students bring to the classroom – i.e. themselves – and whatever happens to be in the classroom. If a particular piece of material is necessary for the lesson, a location must be chosen where that material is to be found (e.g. library, resource centre, bar, students’ club?)

  2. No recorded listening material should be introduced into the classroom: the source of all “listening” activities should be the students and teacher themselves. The only recorded material that is used should be that made in the classroom itself, e.g. recording students in pair or group work for later re-play and analysis.

  3. The teacher must sit down at all times that the students are seated, except when monitoring group or pair work (and even then it may be best to pull up a chair). In small classes, teaching should take place around a single table.

  4. All the teacher’s questions must be “real” questions (such as “Do you like oysters?” Or “What did you do on Saturday?”), not “display” questions (such as “What’s the past of the verb to go?” or “Is there a clock on the wall?”)

  5. Slavish adherence to a method (such as audiolingualism, Silent Way, TPR, task-based learning, suggestopedia) is unacceptable.

  6. A pre-planned syllabus of pre-selected and graded grammar items is forbidden. Any grammar that is the focus of instruction should emerge from the lesson content, not dictate it.

  7. Topics that are generated by the students themselves must be given priority over any other input.

  8. Grading of students into different levels is disallowed: students should be free to join the class that they feel most comfortable in, whether for social reasons, or for reasons of mutual intelligibility, or both. As in other forms of human social interaction, diversity should be accommodated, even welcomed, but not proscribed.

  9. The criteria and administration of any testing procedures must be negotiated with the learners.

  10. Teachers themselves will be evaluated according to only one criterion: that they are not boring.

Re-reading it now, I realise how it was influenced (a) by the specific training context in which I was working, where elicitation sequences and the playing of barely audible cassette recordings were the order of the day, and (b) by my reading of Postman and Weingartner’s radical treatise, Teaching as a Subversive Activity (1967), which similarly called for a moratorium on mandated curricula and formal testing. I still hold by that, but the final vow (about not being boring) is just plain silly.

The key vow is, of course, the first one, and its proscription on ‘imported’ materials. While the idea of taking students to the bar or library is clearly impractical, technology now allows us to bring the bar or library into the classroom, thereby realising Peter Strevens’ (1956) injunction that:

“Language is not a sterile subject to be confined to the classroom. One of two things must be done: either life must be brought to the classroom or the class must be taken to life.”

Does anything else in the Dogme ELT manifesto strike you as worth retaining?

References

Postman, N. & Weingartner, C. (1967) Teaching as a subversive activity. Harmondsworth: Penguin.

Strevens, P. (1956) Spoken language: an introduction for teachers and students in Africa. London: Longmans, Green and Co.

Thornbury, S. (2001) ‘Teaching Unplugged (Or That’s Dogme with an E)’. IT’s for Teachers, Issue 1 (February), 10-14.

 

 





S is for Sylvia (Ashton-Warner)

18 06 2017

‘I harness the communication, since I can’t control it, and base my method on it’ (Ashton-Warner, 1966, p. 85).

Sylvia Ashton-Warner (1908 – 1984) was a primary school teacher in rural New Zealand, where she was entrusted with teaching reading and writing, using textbooks that were imported from Britain. The content of these ‘primers’ bore little resemblance to the world of her pupils (most of whom were of Māori origin). Their inability to identify with the textbooks and their consequent failure to develop good literacy skills was a constant source of frustration for Ashton-Warner. She wrote (cited in Hood, 1990, p. 91):

There’s no communication … you see they’re not thinking about what they’re writing about or what I’m teaching. I’m teaching about ‘bed’ and ‘can’ but they were thinking about canoes and grandfathers and drowned men and eels.

This frustration led to her abandoning the use of the imported textbooks altogether and, instead, developing an approach – and the materials to go with it – that ‘emerged’ out of the lives and experiences of the children themselves.

In her successful novel Spinster (1958, p. 67), she describes the germination of this idea:

A rainy, rainy Thursday and I talk to them all day. They ask ten thousand questions in the morning and eleven thousand in the afternoon. And more and more as I talk with them I sense hidden in the converse some kind of key. A kind of high-above nebulous meaning that I cannot identify. And the more I withdraw as a teacher and sit and talk as a person, the more I join in with the stream of their energy, the direction of their inclinations, the rhythms of their emotions, and the forces of their communications, the more I feel my thinking travelling towards this; this something that is the answer to it all; this . . . key.

Conscious that each child had a unique inner imagery, she reasoned that if she could just capture and label these ‘pictures of the inner vision’, she had all the material she needed to provide the foundations of literacy – what she called the ‘key vocabulary’. These were the words that, once written down and recognized, would unlock the ability to read and write texts that included them. These first words, she believed, ‘must have an intense meaning’ and ‘must be made of the stuff of the child himself’ (1966, p. 28).

Because the words that emerged from the children provided the basis for their initial writing and reading tasks, she called the approach to literacy ‘organic’ – it grew naturally out of the ‘stuff of the child’: ‘I reach a hand into the mind of the child, bring out a handful of the stuff I find there, and use that as our first working material’ (1966, p. 28).

How did it work? The first stage in Ashton-Warner’s ‘key vocabulary’ process is the eliciting from each child of a ‘key’ word, i.e. one that has strong associations for them, and writing it on a card which the child takes ownership of.

After play … we turn our attention to the new words themselves. The children pick up their books and run to the blackboard and write them up: the words asked for during the writing of the morning. They’re not too long ago to be forgotten.  Some of them are, when a child has asked a lot, but they ask you what they are.

Since they are all on the wall blackboard, I can see them from one position. They write them, revise them, the older children spell them and the younger merely say them… Of course, there’s a lot of noise, but there’s a lot of work too. (p.63)

These words then become the basis of sentences that the children individually write on the blackboards that ring the room. These sentences in turn form the basis of mini-narratives, usually autobiographical, that the pupils write into their notebooks and share, the teacher supplying correction at the point of need. Ashton-Warner used these texts as the basis for writing her ‘infant readers’ which she herself illustrated. Out of this ‘raw material’ – and with no explicit teaching as such – the ability to read and write develops.

In her lifetime and beyond, Sylvia achieved a considerable degree of fame, not only as an educational innovator but as a novelist and counter-cultural icon. For a while she was revered by the progressive schools movement and her methods were adopted beyond her native New Zealand (where her capacity to irritate even her supporters, along with her tendency to stereotype the Māori, badly dented her reputation). As with many visionary educators, her fame may have owed a lot to her own charisma, but those who were taught by her attest to the success of her approach.

One way her legacy has survived is the Language Experience Approach (LEA), a literacy program used with success in the US and based on the principle that the best way of teaching children to read is through their own words. Essentially, the teacher transcribes the telling of a shared experience (e.g. a field trip) or an individual’s narrative, recasting it into more target-like language where necessary. The class then read the story aloud, either in chorus or individually, and any further revisions and corrections are made. These stories can then be saved as part of the class reading library and even shared with other groups of learners.

And, of course, Ashton-Warner’s organic, materials-light approach is a direct precursor of dogme ELT/teaching-unplugged. In both her teaching journal and her novel she describes the day she burnt all her classroom materials: ‘It’s impressive to see it go up in smoke. … But teaching will be much simpler now, and there’ll be more time for conversation. And whatever the past has or has not taught me, I’m satisfied that communication on any level, giving birth as it does to the new body, the new idea or the new heart, is the most that life can be’ (Spinster, pp. 86–87).

kids at desks

References

Ashton-Warner, S. (1966) Teacher. Harmondsworth: Penguin.
Ashton-Warner, S. (1958) Spinster. London: Secker & Warburg.
Hood, L. (1990) Sylvia! The biography of Sylvia Ashton-Warner. Auckland: Penguin.

Photos from Teacher.





P is for Problematizing (2)

11 06 2017

Neil Forrest, teacher trainer at IH Barcelona for over 30 years, retired this week. I worked with Neil for at least 10 of those years, mainly on the DTEFLA, now DELTA, courses. Working so closely with someone for so long, not to mention sharing a house in the country, had a profound effect on my ‘practical theory’ of language teaching. We were also lucky in that we were pretty much free to design and administer our courses the way we wanted.

One insight I gained from Neil was his comment that, if he observed a lesson in which there were no problems – where everything went smoothly and according to plan – then there was probably no learning. By problems, he meant those moments when the unexpected happens – when, for example, a teacher’s question elicits a response that is not the intended one, or when a student asks a random grammar question, or when a student utterance contains an inexplicable error, or when a student misinterprets a sentence in a text. Arguably, it’s by engaging with – and attempting to resolve – these unforeseen problems that opportunities for learning are optimized. By contrast, a lesson that runs along its tracks smoothly and effortlessly, with the punctuality of a Swiss train, is probably a lesson in which the learners are under-challenged. And without challenge – or ‘push’, to use Merrill Swain’s term (see P is for Push) – there is no momentum, no learning. Just stasis.

The notion of ‘problematizing’ learning has antecedents in the ‘down the garden path’ treatment which is designed to purposefully induce – and then correct – errors of overgeneralization. For example, Tomasello and Herron (1988) conducted an experiment in which learners were taught – among other things – past tense verb endings for a set of regular verbs, and then were given an exercise that asked them to make sentences about the past with a new set of verbs, some of which were irregular. Having been led ‘down the garden path’, the learners inevitably made overgeneralization errors (e.g. she taked…I runned…) and were then corrected. Compared to a control group, where errors were not forced in this way, learning was found to be more effective.

 

Neil and me cropped

Problematizing at International House, Barcelona – late 80s?

I adapted this principle to produce what VanPatten (2015) calls ‘sentence interpretation tasks’, designed to induce learners to make subtle choices and thereby notice grammar features that might otherwise fly below their radar. An example might be having to choose – without any prior instruction – the pictures that match each sentence of such pairs as The ship sank/The ship was sunk; The door opened/The door was opened, etc.

 

It is the feedback that learners get on their errors – whether forced or not – that drives learning, argues John Hattie, summarizing the results of literally thousands of research studies, and concluding: ‘We need classes that develop the courage to err’ (Hattie 2009, p. 178).

It may also be the case that the most effective type of feedback on error is the feedback that learners get when their message is not understood or when it is misinterpreted. Thus, the learner who says I am leaving here, meaning I am living here, and gets the response Bye, then! may pay greater attention to avoiding this pronunciation error when it next comes up. This is a case for sometimes ‘acting dumb’ when learners make errors, in order to demonstrate the potential effect of such errors outside the classroom.

If not being understood acts as an incentive to pay closer attention to form, so too might not understanding. In contradistinction to Krashen’s argument that comprehension is a necessary, and even sufficient, condition for learning, Lydia White (1987) has argued that it may be the failure to understand that leads to learning, in that it may force the learner to pay closer attention to grammatical form. As she puts it, ‘the driving force for grammar change is that input is incomprehensible, rather than comprehensible’ (p. 95, emphasis added). Similarly, Lynch (1996, p. 86) argues:

From the longer term perspective, comprehension problems are vital opportunities for learning. If learners encountered no difficulties of understanding, they would not need to go beyond their current level. It is by having to cope with the problem – either in understanding someone else or in expressing themselves – that they may notice the gap and may learn the missing item.

Coping with problems is basic to John Hattie’s view of good teaching as being cycles of trial, error and feedback. But, in a follow-up to his 2009 book, he makes the point that ‘if there is no challenge, the feedback is probably of little or any value: if students already know the material or find it too easy, then seeking or providing feedback will have little effect’ (Hattie 2012, p.131). Of course, providing challenge is not without its risks: ‘When we experience challenge, we often encounter dissonance, disequilibrium, and doubt’ (op. cit. p. 58). But Hattie argues that these tensions can be productive: ‘This positive creation of tension underlines the importance of teachers in encouraging and welcoming error, and then helping the students to see the value of this error to move forward; this is the essence of great teaching’ (ibid.).


Can Ferran, Sant Cebrià

 

My initial training as a language teacher encouraged me to pre-empt errors at all costs, and to ensure that any texts that learners were exposed to were well within their level of comprehension. It wasn’t until I started working with Neil that I realized the value of forced errors and of only partly comprehensible texts – the value, in other words, of problems.

References

Hattie, J. (2009) Visible learning: A synthesis of over 800 meta-analyses relating to achievement. London: Routledge.

Hattie, J. (2012) Visible learning for teachers: maximizing impact on learning. London: Routledge.

Lynch, T. (1996) Communication in the language classroom. Oxford: Oxford University Press.

Tomasello, M., & Herron, C. (1989). ‘Feedback for language transfer errors: The garden path technique’. Studies in Second Language Acquisition, 11, 385-395.

VanPatten, B. (2015) ‘Input processing in adult SLA’ in VanPatten, B. & Williams, J. (eds) Theories in second language acquisition: An introduction (2nd edition). London: Routledge.

White, L. (1987) ‘Against comprehensible input: the input hypothesis and the development of second language competence’. Applied Linguistics, 8, 95-110.