If you want to improve your teaching of listening, it’s a
good idea to have a sense of what we know about what goes on between the moment
that a message made by a human voice hits your eardrum and the moment in
which you form meaning from it. Anderson, writing in 1990, came up with a
‘three-phase model’ consisting of ‘perceptual processing’, ‘parsing’ and
‘utilisation’: in layman’s terms, first you recognise the sound
as familiar language, then you break the sound into words and chunks, and
finally you give meaning to them and/or act on them in some way. Later,
Cutler and Clifton split the ‘parsing’ phase into two: first
‘segmenting’ – breaking up the continuous stream of what you hear into
individual words – and second ‘recognising’, the process of using
your mental dictionary to look the words up.
More recently, John Field has become the leading expert in
second language listening. His book Listening In The Language Classroom (2008)
gives a thorough grounding in second language listening and is highly
recommended. Field’s model is the one I used to underpin my PhD work and is the
one that almost all researchers refer to when working on listening these days. I
will explain it in a little more detail here. All of what’s explained in the
next paragraph is taking place in a split second within the mind of the
listener.
Once the sound comes in (Field calls it the ‘speech
signal’), the listener instantly starts to use the knowledge they have about how
sounds go together in the language – what is permissible and what isn’t. For
example, when listening to English, an experienced or native listener knows
instinctively that each word carries only one main stressed syllable, and will use
that information automatically to get clues about where one word finishes
and the next one starts. The next step is to consult the mental
dictionary – to try to identify words based on the sounds that have been
heard, plus other clues such as how common a candidate word is and the words around it.
Then the clues that have been picked up allow the listener to narrow the
possibilities of what’s been heard, and grammatical clues like endings for
tenses and other little chunks of words come into play to give the listener a
better and better sense of what’s going on. Overlaid onto these ‘linguistic’
processes are contextual elements like whether the listener knows what the
speaker might have intended, the context of what’s being heard, common sense
and general knowledge.
This process isn’t as linear as I set out there, though:
it’s more iterative. That means it goes round and round, and as a listener you
might grasp one sense of what you’ve heard, but then the next bit of input
makes you reassess. Consider, for example, saying this to someone in English
(it doesn’t work so well if you read it): ‘I stooped to pick a buttercup.
What a buttock was doing there, I don’t know.’ If you say it in your mind’s
ear, it really illustrates how you change your understanding from buttercup
to buttock up after you hear the second sentence – it changes the
context, makes you split the ‘buttercup’ sound in two, and forces you to alter
your interpretation of ‘pick’ to ‘pick up’.
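That resegmentation can be sketched as a toy program (my own illustration, not part of Field’s model): given an unbroken stream of letters standing in for the speech signal, and a tiny made-up lexicon, a recursive search finds every way of carving the stream into known words – which is exactly where the ‘buttercup’ ambiguity lives.

```python
# A toy sketch of the 'segmenting' and 'recognising' phases.
# The lexicon is a made-up stand-in for the listener's mental dictionary.
LEXICON = {"a", "but", "butter", "cup", "buttercup", "pick", "up"}

def segmentations(stream, lexicon=LEXICON):
    """Return every way of splitting `stream` into words from `lexicon`."""
    if not stream:
        return [[]]  # one valid parse of the empty stream: no words left
    results = []
    for i in range(1, len(stream) + 1):
        word = stream[:i]
        if word in lexicon:
            # This prefix is a known word: try to parse the remainder too.
            for rest in segmentations(stream[i:], lexicon):
                results.append([word] + rest)
    return results

# The same stretch of 'sound' supports more than one carving-up:
print(segmentations("abuttercup"))
# → [['a', 'butter', 'cup'], ['a', 'buttercup']]
```

The listener’s extra clues – word frequency, grammar, context – are what let the mind pick one of the competing parses, and a later clue (the second sentence of the joke) can force it to switch.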
What this shows, then, is that once some speech hits the
listener’s ear, their mind is working ever so hard to form meaning, and is
performing extraordinary feats. Every teacher knows that you can ask a child
‘what did I just say?’ and they will parrot it back, but this is no indication
that they are actually processing anything that’s being said: that’s just
‘echoic memory’ at work. During listening it is crucial to keep some parts of the input
ticking over in echoic memory while actually processing other parts, and
then to put the two together to make meaning. In other words, there’s a whole
lot going on – and it’s all in the mind of the listener.
What I’ve set out above is not just about second language –
it’s the process that everyone experiences whenever they are decoding a
language. The difference when it’s a second language is how automatic each bit
of the process is. Every level I explained above might be hindered
if you are a learner of a second language: does the listener know which sounds
exist in the new language? How big is their vocabulary, and how quickly
can they retrieve words from their mental dictionary while more and more words
come pouring into their ears? Can they automatically recognise little grammatical
cues like tense endings, or markers of gender or plurality? Does their general
knowledge or cultural knowledge of the target language extend to being able to
apply contextual sense to what they hear? All of these elements are way harder
in a second language – even for an advanced listener – and they slow down the
process. And when the process is slowed down, it’s inevitable that many more
hitches might arise. Understanding these hitches is the key to beginning to
resolve them and therefore to make better listeners of your students.