
How listening works


If you want to improve your teaching of listening, it’s a good idea to have a sense of what we know about what goes on between the moment that a message made by a human voice hits your ear drum and the moment you form meaning from it. Anderson, writing in 1990, came up with a ‘three phase model’ consisting of ‘perceptual processing’, ‘parsing’ and ‘utilisation’: in layman’s terms, meaning that firstly you recognise the sound as familiar language, then you break the sound into words and chunks, and finally you give meaning to them and / or act on them in some way. Later, Cutler and Clifton split the ‘parsing’ phase into two: firstly ‘segmenting’ – breaking up the continuous stream of what you hear into individual words – and secondly ‘recognising’, which is the process of using your mental dictionary to look up those words.

More recently, John Field has become a leading authority on second language listening. His book Listening in the Language Classroom (2008) gives a thorough grounding in second language listening and is highly recommended. Field’s model is the one I used to underpin my PhD work, and it is the one that almost all researchers refer to when working on listening these days. I will explain it in a little more detail here. All of what’s described in the next paragraph takes place in a split second within the mind of the listener.

Once the sound comes in (Field calls it the ‘speech signal’), the listener instantly starts to use the knowledge they have about how sounds go together in the language – what is permissible and what isn’t. For example, when listening to English, an experienced or native listener knows instinctively that every word has only one emphasised syllable, and will use that information automatically to get clues about where one word finishes and the next one starts. The next step is the run to the mental dictionary – trying to identify words based on the sounds that have been heard, plus other clues like how common a word might be and the words around it. The clues that have been picked up then allow the listener to narrow down the possibilities of what’s been heard, and grammatical clues like endings for tenses and other little chunks of words come into play to give the listener a better and better sense of what’s going on. Overlaid onto these ‘linguistic’ processes are contextual elements: whether the listener knows what the speaker might have intended, the context of what’s being heard, common sense and general knowledge.
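The staged flow just described can be sketched in code. This is only a toy illustration: the stage names follow Field’s model as summarised above, but the function bodies, the example sentence and the tiny ‘mental dictionary’ are placeholders of my own, not real speech processing.

```python
# A toy pipeline illustrating the order of the listening processes described
# above. Each stage is a stand-in: real listening runs these stages in
# parallel and iteratively, not as a neat one-way chain.

def decode_sounds(speech_signal):
    """Use knowledge of the sound system (e.g. stress patterns) to find
    likely word boundaries in the raw signal. Placeholder: here the
    boundaries come for free from whitespace."""
    return speech_signal.lower().split()

def lexical_search(sound_chunks):
    """Look each chunk up in the 'mental dictionary'. Placeholder lexicon:
    a real listener also weighs word frequency and neighbouring words."""
    mental_dictionary = {"the", "cat", "sat", "on", "mat"}
    return [chunk for chunk in sound_chunks if chunk in mental_dictionary]

def apply_grammar_and_context(words, context):
    """Narrow the candidates using grammatical cues and world knowledge,
    arriving at an interpretation."""
    return {"words": words,
            "interpretation": f"{' '.join(words)} ({context})"}

signal = "The cat sat on the mat"
meaning = apply_grammar_and_context(lexical_search(decode_sounds(signal)),
                                    context="talking about pets")
print(meaning["interpretation"])
```

The point of the sketch is the ordering of the stages, not the code itself: sound knowledge first, then the dictionary, then grammar and context layered on top.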

This process isn’t as linear as I’ve set it out there, though: it’s more iterative. That means it goes round and round, and as a listener you might grasp one sense of what you’ve heard, but then the next bit of input makes you reassess. Consider, for example, saying this to someone in English (it doesn’t work so well if you read it): ‘I stooped to pick a buttercup. What a buttock was doing there, I don’t know.’ If you say it in your mind’s ear, it really illustrates how you change your understanding from buttercup to buttock up after you hear the second sentence – it changes the context, makes you split the ‘buttercup’ sound in two, and forces you to alter your interpretation of ‘pick’ to ‘pick up’.
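The ‘segmenting’ ambiguity behind the buttercup joke can be made concrete with a toy segmenter. The pseudo-phonemic spellings below are my own invention for illustration (the joke only works in speech, so ordinary spelling won’t do); the code simply finds every way of carving one continuous sound stream into known words.

```python
# A toy model of 'segmenting': carving a continuous stream of sound into
# individual words. The keys are made-up pseudo-phonemic spellings mapping
# to ordinary words.
LEXICON = {
    "ay": "I", "stoopt": "stooped", "tuh": "to", "pik": "pick",
    "uh": "a", "butuhkup": "buttercup", "butuhk": "buttock", "up": "up",
}

def segmentations(stream):
    """Return every way of carving the stream into lexicon entries."""
    if not stream:
        return [[]]
    parses = []
    for i in range(1, len(stream) + 1):
        chunk = stream[:i]
        if chunk in LEXICON:
            for rest in segmentations(stream[i:]):
                parses.append([LEXICON[chunk]] + rest)
    return parses

for parse in segmentations("aystoopttuhpikuhbutuhkup"):
    print(" ".join(parse))
# Two parses come out: '...pick a buttercup' and '...pick a buttock up'.
# Both coexist until the second sentence arrives and forces the listener
# to settle on one of them.
```

The sketch shows why segmentation has to be iterative: the sound stream alone genuinely supports both readings, and only later context resolves the choice.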

What this shows, then, is that once some speech hits the listener’s ear, their mind is working ever so hard to form meaning, and is performing extraordinary feats. Every teacher knows that you can ask a child ‘what did I just say?’ and they will parrot it back, but this is no indication that they are actually processing anything that’s being said: that’s the ‘echoic memory’. During listening it is crucial to keep some parts of the input ticking over in the echoic memory while actually processing other parts, and then putting the two together to make meaning. In other words, there’s a whole lot going on – and it’s all in the mind of the listener.

What I’ve set out above is not just about second language – it’s the process that everyone experiences whenever they are decoding a language. The difference when it’s a second language is how automatic each part of the process is. Every level I explained above might be hindered for a learner of a second language: does the listener know which sounds exist in the new language? How big is their vocabulary, and how quickly can they retrieve it from their mental dictionary while more and more words come pouring into their ears? Can they automatically recognise little grammatical cues like tense endings, or markers of gender or plurality? Does their general knowledge or cultural knowledge of the target language extend to being able to apply contextual sense to what they hear? All of these elements are far harder in a second language – even for an advanced listener – and they slow down the process. And when the process is slowed down, it’s inevitable that many more hitches will arise. Understanding these hitches is the key to beginning to resolve them, and therefore to making better listeners of your students.
