S is for SLA

1 03 2015

The criteria for evaluating the worth of any aid to language learning (whether print or digital, and, in the case of the latter, whether app, program, game, or the software that supports these) must include some assessment of its fitness for purpose. That is to say, does it facilitate learning?

But how do you measure this? Short of testing the item on a representative cross-section of learners, we need a rubric according to which its learning potential might be predicted. And this rubric should, ideally, be informed by our current understandings of how second languages are best learned, understandings which are in turn derived — in part at least — from the findings of researchers of second language acquisition (SLA).

This is easier said than done, of course, as there is (still) little real consensus on how the burgeoning research into SLA should be interpreted. This is partly because of the invisibility of most cognitive processes, but also because of the huge range of variables that SLA embraces: different languages, different aspects of language, different learners, different learning contexts, different learning needs, different learning outcomes, different instructional materials, and so on. Generalizing from research context A to learning context B is fraught with risks. It is for this reason that, in a recent article, Nina Spada (2015) urges caution in extrapolating classroom applications from the findings of SLA researchers.

Cautiously, then, and following VanPatten and Williams’ (2007) example, I’ve compiled a list of ‘observations’ about SLA that have been culled from the literature (albeit inflected by my own particular preoccupations). On the basis of these, and inspired by Long (2011), I will then attempt to frame some questions that can be asked of any teaching aid (tool, device, program, or whatever) in order to calculate its potential for facilitating learning.

Exposure to input is necessary

Here, then, are 12 observations:

  1. The acquisition of an L2 grammar follows a ‘natural order’ that is roughly the same for all learners, independent of age, L1, instructional approach, etc., although there is considerable variability in terms of the rate of acquisition and of ultimate achievement (Ellis 2008), and, moreover, ‘a good deal of SLA happens incidentally’ (VanPatten and Williams 2007).
  2. ‘The learner’s task is enormous because language is enormously complex’ (Lightbown 2000).
  3. ‘Exposure to input is necessary’ (VanPatten and Williams 2007).
  4. ‘Language learners can benefit from noticing salient features of the input’ (Tomlinson 2011).
  5. Learners benefit when their linguistic resources are stretched to meet their communicative needs (Swain 1995).
  6. Learning is a mediated, jointly-constructed process, enhanced when interventions are sensitive to, and aligned with, the learner’s current stage of development (Lantolf and Thorne 2006).
  7. ‘There is clear evidence that corrective feedback contributes to learning’ (Ellis 2008).
  8. Learners can learn from each other during communicative interaction (Swain et al. 2003).
  9. Automaticity in language processing is a function of ‘massive repetition experiences and consistent practice’ in ‘real operating conditions’ (Segalowitz 2003; Johnson 1996).
  10. A precondition of fluency is having rapid access to a large store of memorized sequences or chunks (Nattinger & DeCarrico 1992; Segalowitz 2010).
  11. Learning, particularly of words, is aided when the learner makes strong associations with the new material (Sökmen 1997).
  12. The more time (and the more intensive the time) spent on learning tasks, the better (Muñoz 2012). Moreover, ‘learners will invest effort in any task if they perceive benefit from it’ (Breen 1987); and task motivation is optimal when challenge and skill are harmonized (Csikszentmihalyi 1990).

On the basis of these observations, and confronted by a novel language learning tool (app, game, device, blah blah), the following questions might be asked:

  1. ADAPTIVITY: Does the tool accommodate the non-linear, often recursive, stochastic, incidental, and idiosyncratic nature of learning, e.g. by allowing the users to negotiate their own learning paths and goals?
  2. COMPLEXITY: Does the tool address the complexity of language, including its multiple interrelated sub-systems (e.g. grammar, lexis, phonology, discourse, pragmatics)?
  3. INPUT: Does it provide access to rich, comprehensible, and engaging reading and/or listening input? Are there means by which the input can be made more comprehensible? And is there a lot of input (so as to optimize the chances of repeated encounters with language items, and of incidental learning)?
  4. NOTICING: Are there mechanisms whereby the user’s attention is directed to features of the input and/or mechanisms that the user can enlist to make features of the input salient?
  5. OUTPUT: Are there opportunities for language production? Are there means whereby the user is pushed to produce language at or even beyond his/her current level of competence?
  6. SCAFFOLDING: Are learning tasks modelled and mediated? Are interventions timely and supportive, and calibrated to take account of the learner’s emerging capacities?
  7. FEEDBACK: Do users get focused and informative feedback on their comprehension and production, including feedback on error?
  8. INTERACTION: Is there provision for the user to collaborate and interact with other users (whether other learners or proficient speakers) in the target language?
  9. AUTOMATICITY: Does the tool provide opportunities for massed practice, and in conditions that replicate conditions of use? Are practice opportunities optimally spaced?
  10. CHUNKS: Does the tool encourage/facilitate the acquisition and use of formulaic language?
  11. PERSONALIZATION: Does the tool encourage the user to form strong personal associations with the material?
  12. FLOW: Is the tool sufficiently engaging and challenging to increase the likelihood of sustained and repeated use? Are its benefits obvious to the user?
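
For anyone who wants to operationalize the checklist, here is a minimal sketch of how the twelve questions might be turned into a rough scoring rubric. It is an illustration only, not part of the original framework: the 0–2 scale, the criterion labels used as dictionary keys, and the example tool and its scores are all assumptions made for the sake of the example.

CRITERIA = [
    "adaptivity", "complexity", "input", "noticing", "output", "scaffolding",
    "feedback", "interaction", "automaticity", "chunks", "personalization", "flow",
]

def evaluate(tool_name, scores):
    """Summarize how a tool fares on the checklist.

    `scores` maps each criterion to 0 (absent), 1 (partial) or 2 (well supported).
    Criteria left unscored are flagged rather than silently counted as zero.
    """
    rated = {c: scores[c] for c in CRITERIA if c in scores}
    missing = [c for c in CRITERIA if c not in scores]
    total, maximum = sum(rated.values()), 2 * len(rated)
    # Pick out up to three of the lowest-scoring criteria as "weakest areas".
    weakest = [c for c in sorted(rated, key=rated.get) if rated[c] < 2][:3]
    lines = [f"{tool_name}: {total}/{maximum} across {len(rated)} criteria rated"]
    if weakest:
        lines.append("Weakest areas: " + ", ".join(weakest))
    if missing:
        lines.append("Not yet rated: " + ", ".join(missing))
    return "\n".join(lines)

# Hypothetical example: a flashcard-style app, strong on chunks and automaticity,
# weak on output, interaction and scaffolding.
print(evaluate("SomeVocabApp", {
    "adaptivity": 1, "complexity": 0, "input": 1, "noticing": 2, "output": 0,
    "scaffolding": 0, "feedback": 1, "interaction": 0, "automaticity": 2,
    "chunks": 2, "personalization": 1, "flow": 1,
}))

No numerical score can settle the question of learning potential, of course; at best a summary like this flags which of the twelve areas a given tool neglects.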

Is it better than a teacher?

This list is very provisional: consider it work in progress. But it does replicate a number of the criteria that have been used to evaluate educational materials generally (e.g. Tomlinson 2011) and educational technologies specifically (e.g. Kervin and Derewianka 2011). At the same time, the questions might also provide a framework for comparing and contrasting the learning power of self-access technology with that of more traditional, teacher-mediated classroom instruction. Of course, the bottom line is: does the tool (app, program, learning platform etc) do the job any better than a trained teacher on their own might do?

Any suggestions for amendments and improvements would be very welcome!

References:

Breen, M. P. (1987) ‘Learner contributions to task design’, republished in van den Branden, K., Bygate, M. & Norris, J. (eds) (2009) Task-based Language Teaching: A reader. Amsterdam: John Benjamins.

Csikszentmihalyi, M. (1990) Flow: The psychology of optimal experience. New York: Harper & Row.

Ellis, R. (2008) The Study of Second Language Acquisition (2nd edn). Oxford: Oxford University Press.

Kervin, L. & Derewianka, B. (2011) ‘New technologies to support language learning’, in Tomlinson, B. (ed.) Materials Development in Language Teaching (2nd edn). Cambridge: Cambridge University Press.

Lightbown, P. M. (2000) ‘Classroom SLA research and second language teaching’. Applied Linguistics, 21/4, 431-462.

Long, M. H. (2011) ‘Methodological principles for language teaching’, in Long, M. H. & Doughty, C. (eds) The Handbook of Language Teaching. Oxford: Blackwell.

Muñoz, C. (ed.) (2012) Intensive Exposure Experiences in Second Language Learning. Bristol: Multilingual Matters.

Nattinger, J. R. & DeCarrico, J. S. (1992) Lexical Phrases and Language Teaching. Oxford: Oxford University Press.

Segalowitz, N. (2003) ‘Automaticity and second languages’, in Doughty, C. J. & Long, M. H. (eds) The Handbook of Second Language Acquisition. Oxford: Blackwell.

Segalowitz, N. (2010) Cognitive Bases of Second Language Fluency. London: Routledge.

Sökmen, A. J. (1997) ‘Current trends in teaching second language vocabulary’, in Schmitt, N. & McCarthy, M. (eds) Vocabulary: Description, Acquisition and Pedagogy. Cambridge: Cambridge University Press.

Spada, N. (2015) ‘SLA research and L2 pedagogy: misapplications and questions of relevance’. Language Teaching, 48/1.

Swain, M. (1995) ‘Three functions of output in second language learning’, in Cook, G. & Seidlhofer, B. (eds) Principle and Practice in Applied Linguistics: Studies in Honour of H. G. Widdowson. Oxford: Oxford University Press.

Swain, M., Brooks, L. & Tocalli-Beller, A. (2003) ‘Peer-peer dialogue as a means of second language learning’. Annual Review of Applied Linguistics, 23: 171-185.

Tomlinson, B. (2011) ‘Introduction: principles and procedures of materials development’, in Tomlinson, B. (ed.) Materials Development in Language Teaching (2nd edn). Cambridge: Cambridge University Press.

VanPatten, B. & Williams, J. (eds) (2007) Theories in Second Language Acquisition: An Introduction. Mahwah, NJ: Lawrence Erlbaum.

This is a revised version of a post that first appeared on the eltjam site:  http://eltjam.com/how-could-sla-research-inform-edtech/

 



32 responses

1 03 2015
Old School

Perhaps one should ask “Is it as good as a teacher?” before one asks “Is it better than a teacher?”

1 03 2015
Scott Thornbury

Yes, absolutely. To quote Pit Corder, as long ago as 1966: ‘The use of mechanical aids in the classroom is justified only if they can do something which the teacher unaided cannot do, or can do less effectively’.

1 03 2015
Carol Read

Great to have you back on Sunday mornings, Scott! This is a really useful and concise checklist of observations and questions, but what leaps out at me to add is AFFECT, as this is so often a clincher – never mind tipping the balance in favour of a teacher!

1 03 2015
Scott Thornbury

Thanks for the ‘welcome back’ message, Carol. On the subject of AFFECT, a similar point to yours was made on the ELTjam site, when this post first appeared. Here’s how I responded then:

Glad you mentioned ‘affect’. Tomlinson (2011), in his short list of research findings that he thinks ought to inform the design of teaching materials, claims that ‘learners who achieve positive affect are much more likely to achieve communicative competence’ (p. 7), but nowhere does he provide any references to research that might confirm this, apart from quoting Dulay, Burt and Krashen (1982) to the effect that ‘the less anxious the learner, the better language acquisition proceeds’. But when you check the original source, the evidence they cite is unconvincing, and one study even showed a positive correlation between test anxiety and achievement, i.e. the more anxious the subjects were before the test, the better they performed. (Of course, Krashen went on to make the affective filter a key determiner in his ‘Monitor model’ of SLA.)

Ellis (2008) summarises the more recent research thus: ‘There is clear evidence to show that anxiety is an important factor in L2 acquisition. However, anxiety (its presence or absence) is best seen not as a necessary condition of successful L2 learning, but rather as a factor that contributes in differing degrees in different learners, depending in part on other individual difference factors such as their motivational orientation and personality’ (p. 697).

This echoes an earlier observation of Long’s (1990): ‘The role of affective factors appears to be indirect and subordinate to more powerful developmental and maturational factors, perhaps influencing such matters as the amount of contact with the L2, or time on task’ (‘The least a theory of second language acquisition needs to explain’, TESOL Quarterly, 24/4, p. 657). This is why I subsumed affect and motivation into my tenth ‘question’ [now twelfth], i.e. the better disposed the learner is to the product, the more time they may be prepared to invest – time being the key factor, not affect in itself. In short, just because an app is ‘fun’ doesn’t guarantee its learning power, but it may contribute to it. On the other hand, the fact that it’s not fun should not discredit it.

***
In preparing this revised version of the post, I revisited the literature on affect and on emotion, convinced that I had missed something important – that there surely must be research studies that have proven beyond doubt that ‘when learners are feeling positive about the materials/tasks they are engaged in, the learning uptake is greater’. But, while there is some evidence that NEGATIVE affect, typically anxiety, is not conducive to learning, the literature on POSITIVE affect, e.g. ‘having fun’, is less conclusive. One of the problems is that it is difficult to separate out positive affect as being a cause of learning, as opposed to its being the result. Likewise, ‘engagement’ is a slippery concept. As Hattie (2009: 49) points out, ‘We should not make the mistake … of thinking that because students look engaged and appear to be putting in effort they are necessarily achieving; this is one of the myths that is held in too many classrooms – busy work alone does not make the difference’.

This is why, in the end, I elected for Csikszentmihalyi’s concept of ‘flow’ as a better indicator of learning potential, in that it attempts to relate engagement to both skills and challenge (and the optimal balance of both) rather than to such an elusive concept as ‘fun’.

14 04 2019
Phil Brown

This made for a really interesting read. Thank you.

Related to affect is stress, and there has been some interesting research on different types of stress since Karl Albrecht (1979) distinguished time stress, anticipatory stress, situational stress, and encounter stress.

Adding to Scott’s point about the range of results due to anxiety, it seems that the negative effects that impair learning are generally due to long-term, sustained and overwhelming (or unmanaged?) stress/anxiety. In contrast, potentially positive outcomes include improved focus and motivation, which in general seem to arise if the stress is short-term and the person knows they can deal with it – perhaps also related to self-efficacy.

1 03 2015
Dan learnercoachingelt

Hi Scott,
It would be encouraging to hope that everyone involved in language learning and teaching used a checklist of evaluative criteria such as this, from the digital designers this article was presumably aimed at originally on ELTJam, to teachers evaluating their own methodologies, to the learners themselves, who could use it to navigate the daunting choices at their disposal.

It is this last group that stand to benefit most, I believe, from a greater awareness of the learning activities that might be effective and those that are wasting their energies. I can imagine a simplified list of questions that learners can ask about the tools, resources, classes and books they are employing to identify the most promising activities and where the gaps in their individual learning programmes lie. I’ll be suggesting how teachers can equip their students with evaluative criteria at the Innovate conference in Barcelona in May, and I’m certainly going to incorporate some of the ideas in your post, so many thanks!

1 03 2015
Scott Thornbury

Thanks, Dan – looking forward to hearing you talk on the subject at the Innovate Conference.

What the heck, why not give it a plug:

http://innovateelt.com/

14 04 2019
Phil Brown

I realise some time has passed since this post, but just today, in a course on Evaluating Digital Materials with Pete Sharma, I found myself thinking and talking about using such criteria to help students evaluate learning aids.

I recall how I would set EFL uni students the task of evaluating their strategies or the tools I’d introduced them to (e.g. list learning, word cards and spaced repetition, English Central, extensive reading and graded readers, etc.). They would track their progress for a week to a month and share their feedback in class, first in pairs or small groups, and later in presentations.

Using simple questions related to learning strategies research, I asked them to evaluate what worked well, what challenges they faced or what didn’t work well, and how they might overcome them, or to consider whether it was better to pursue or abandon the learning strategy or aid.

However, having more specific guiding questions like these, grounded in SLA, might have been more revealing and objective, not to mention thorough. One other question occurs to me, though:

Would thoroughness and objectivity with more specific criteria be more appealing, useful, and/or understandable for students than having them use simpler guiding questions and/or establish their own criteria?

1 03 2015
ariascarm

Hi Scott! Great reading for a Sunday morning while having coffee and getting ready to mark students’ papers on writing! While thinking about all these points, I wonder if learners’ targets could find a place in your list. I reckon there are so many different ones… and passing exams in a pre-uni environment is so strong an aim that students don’t even have the acquisition of a second language as a real outcome at all. I can’t say for a tool, platform or the like, but for a teacher I can tell you that students’ motivation (in my case, teens) gets first place on a list for SLA. Have a great week!

1 03 2015
Scott Thornbury

Thanks for your comment! Yes, I tried hard to fit motivation into my ‘twelve commandments’, but the literature on motivation is so diffuse that it’s very hard to extrapolate any single principle on which to base classroom practice apart from the rather anodyne and glaringly obvious: “If your students are motivated they will learn quicker than if they are not”. But is there a universal factor that ‘causes’ motivation across all age-groups and independent of all learner needs? I doubt it – unless it’s the teacher him/herself.

14 04 2019
Phil Brown

I think it’s good for learners to ask themselves how motivated they felt to use the learning strategy/aid, and then explore and share why, or in what way. This can be revealing to them, their classmates, and us of course.

In addition, I’ve found it beneficial to have students better understand what makes them motivated/demotivated and what helps them to maintain motivation as well as remotivate themselves. They can then more easily apply those motivation strategies to what they are doing, and learning in general.

1 03 2015
Marisa Constantinides

A welcome post, though most of your criteria would be good in any case, whether for digital or non-digital material.

1 03 2015
Scott Thornbury

Agreed, Marisa. When I first wrote this post, it was directed at digital tools, but, on reflection, it would seem that the criteria apply equally to print materials and, even (as Bill suggests below) to ‘methods’ themselves.

3 03 2015
Marisa Constantinides

Indeed! Activity sequences or lessons, too, though that is one of the first obvious thoughts – see flow, feedback, etc.

1 03 2015
William Acton

Great piece, Scott! I’d only add that the same set of criteria should be applied to one’s “local” method. Since we are officially in the “Post method” era, meaning no general method will work in any two classrooms, any new app or procedure or technique or conceptual tool must also be judged on its systemic contribution to the practitioner’s one, unique method. As any good capitalist knows, separate an instructor from her method and you have a sales opportunity, impulse buying at its best. That concept is far and away the most challenging for the graduate students and teachers in training that I work with. It is also one that is rarely addressed well in the research and pedagogical literature today. I’m going to use your framework on my blog shortly to assess the efficacy and coherence of my system and a couple of others. Will be interesting to see how it works! Appears to be a good “method” for that . . .


2 03 2015
Scott Thornbury

Thanks, Bill, for your comment. As for applying these criteria to one’s ‘method’, I would want to add the proviso that pedagogy is probably 10% ‘method’ (in the sense of being the realization of a prescribed set of procedures and techniques) and 90% the teacher, i.e. his or her skill set + interpersonal skills + beliefs and values etc etc, such that any attempt to rate his/her effectiveness against criteria derived from research into learning (with or without teachers) is going to be messy, to say the least. This is not to say that there aren’t objective criteria for assessing effective teaching, but only that they are more likely to be derived from classroom observation than from ‘laboratory’ studies of language acquisition. (And it is probably in studies of actual teaching where the issues of affect, motivation and engagement – discussed elsewhere on this thread – really kick in). I guess that the criteria I have extrapolated from the research are most valid in situations where there are NO teachers, e.g. the autonomous learner using Duolingo.

1 03 2015
huwjarvis

Thanks for this thought-provoking post, Scott. I wonder, however, if “a reflective step back and forward” is needed here. We know, don’t we, that much in language education is “picked up”, or in Krashen’s sense acquired rather than learned… don’t we? And Cross (2006) has identified that most learning takes place in informal contexts outside the classroom. As you may know, I have argued elsewhere that in a digital era this presents new opportunities and challenges for TESOL, because technology is no longer the means to an end (“language learning”). It’s the other way round: students are learning English to communicate with others as digitally connected global citizens.

2 03 2015
Scott Thornbury

Thanks, Huw. Isn’t it the case – not so much that ‘students are learning English to communicate with others as digitally connected global citizens’ (as you put it) – but that they ‘are learning English BY communicating with others as digitally connected global citizens’? That is to say, the old communicative mantra of ‘learning through using’ (as opposed to ‘learning in order to use’) does seem to be even more viable in a digitally interconnected world, hence part of the strength of any tool, appliance, etc. is the extent to which it taps into this potential – something that perhaps I should have mentioned.

2 03 2015
Paola Cossu

A warm “welcome back” from Argentina, Scott! I’ll definitely discuss these with my trainees.

2 03 2015
Michael Roberts

Reblogged this on Apprendre l'anglais à Marrakech and commented:
What makes a teaching tool useful? How do you objectively rate a handout or an app? This article suggests 12 criteria. Which do you think are important?

adaptive, complex, input, noticing, output, scaffolding, feedback, interaction, automaticity (scalable?), chunks, personalization, flow

The definitions of each criterion are in the full article, linked below.

Feel free to comment on what makes a teaching tool useful!

Full article:

S is for SLA

3 03 2015
Sara

Thanks, Scott, for another very useful post! It was really inspiring, especially as I am currently taking an SLA course in my MA TESOL program. I was wondering if you have any recommendations on apps that might help students (young adults) with their language acquisition.

3 03 2015
Scott Thornbury

Hi Sara – thanks for the enthusiastic comment.

Regarding apps: I suppose I should have said – in the original post – that it’s very unlikely that a single app, program, device or whatever is going to be able to address all 12 of the criteria I listed. Instead, the motivated, wired-up, self-directed learner will probably need to choose a battery (or suite?) of tools, such as a vocab learning tool, a listening practice tool, an interaction device and so on. (And this raises the interesting question as to whether anyone is actually designing just such a suite, where – for example – the data that one application gathers is compatible with, and shareable with, the others – so that the learner’s disparate needs and competences can be harmonized?).

I did mention a few aids in the post E is for eCoursebook – but I suspect that some of these have fallen by the wayside or have been superseded. Any suggestions, anyone out there?

14 04 2019
Phil Brown

I’ve found English Central to be quite comprehensive (covering listening, speaking, reading to a degree and typing words) and increasingly so over the years, although it’s now 5 years since I taught at an institution that rolled it out to select students:

https://www.englishcentral.com/videos

14 03 2015
Elizabeth Anne

Wow (sorry), the way you unravel complex notions is such a pleasure to read. But why “the bottom line is: does the tool (app, program, learning platform etc) do the job any better than a trained teacher on their own might do?”
Surely the whole point of the online availability of “these things” (from a teaching rather than a commercial point of view) is to extend the potential for learning outside of class time (aka blended learning).

14 03 2015
Scott Thornbury

Yes, agreed Elizabeth. Rightly or wrongly, the debate over the usefulness or not of educational technologies tends to focus on their classroom use, whereas, as you suggest, the real value of most technological tools is in their extra-curricular applications.

26 03 2015
syedajavaid

Fitness for purpose is a good phrase to use for the evaluation of material on the basis of our understanding of how a second language is acquired.

But I think that in the suggestions, under adaptivity, culture will also play a major role. It’s not only the learning strategies but also the culture which influences the overall evaluation of digital or print material.

Nevertheless, I quite agree with your observation that the acquisition of grammar in L2 follows the ‘natural order’ as discussed by Stephen Krashen.

29 03 2015
Scott Thornbury

Thanks for the comment. Yes, regarding culture, a question might be asked of any innovation along the lines of the following:

How will the innovation affect the local (e.g. institutional) culture, and will it be compatible with the wider (e.g. social, political, etc) culture?

Or

How will the tool need to be adapted in order to fit into the (local) culture?

26 03 2015
samah1401

Thanks for these thoughts, Scott. It is very nice to read about this this morning. I think the most important thing is to motivate our students to use technology to learn.

27 03 2015
hana724

Since I agree with you about the huge range of variables that SLA embraces, would it not be more beneficial if we went further and adopted technology to serve our intended lesson aims, rather than just using what is already available or ready-made programs? In the end, one size does not fit all!

4 04 2015
Salma

Very interesting topic… I would advise learners and teachers to look at such criteria when approaching SLA learning through the use of such devices. However, it is quite difficult to predict whether all the mentioned variables will effectively fit within the criteria.

30 04 2015
Nouf Alhejaily

In my comment, I try to connect the important points you mentioned, Scott, and what Huw commented here, to the theoretical framework:
From a teaching perspective, Krashen & Terrell (1983) stressed that speaking ‘emerges’ as a consequence of developing the learner’s competence through comprehensible input in three stages: pre-production, early production and speech emergent. According to Krashen’s input hypothesis (1985), receiving comprehensible input through reading and listening during the silent period activates the acquisition process, as you mentioned in number 5, quoting Lantolf and Thorne (2006): ‘aligned with the learner’s current stage of development’. Thus, it can be said that these principles can be identified in a PPP lesson in which comprehension precedes production. I believe that this also applies to CbMs with a tutorial function, which, according to Jarvis (2013), ‘form the practical realisation’ of CALL. The reason that I make such a connection between the PPP method and CbMs is that they have the same behaviouristic psychological orientation that focuses on repeated drilling.
A useful explanation of how input is processed in SLA is made by VanPatten (1996), who differentiated between “input” and “intake”, referring to the fact that only part of the “input” is converted to “intake”. This conversion occurs through what you mentioned in numbers 6 and 7, since modifications help “the potential intake for acquisition”, according to Larsen-Freeman & Long (1991).
These modifications allow for a kind of interaction that is highly encouraged by Long’s interaction hypothesis (1996), which is in line with the Vygotskyan constructivist view of learning that can be seen as the core of MALU in the informal social context outside the class, as mentioned earlier by Huw.

Thanks for provoking this important topic

22 01 2019
Steve

Hello, I am studying in school to become an elementary school EFL teacher and would like your opinion on Krashen’s method. I know he believes in acquired language, but are you aware of any studies that show the rate of English (L2) acquisition for someone living in an English-speaking environment versus learning English as an EFL while living in a non-English-speaking country? I would welcome your thoughts on that. Thank you, and by the way, your blog is great!
