Students vs iPhone

Decoding Sound

Since attending a talk on developing listening skills by Philida Schellekens, I have really been thinking about the way I teach listening. The talk is detailed in this post and is based on this report. The key part of Schellekens’ findings is that too much effort in language teaching (especially within the UK Skills for Life curriculum… another story) is spent on comprehension of spoken words, rather than on decoding the stream of language into those words.

Since listening to the talk and reading the report, I’ve been trying to think how I could bring more of this into my practice – this is clearly an important step in the listening process that I had been rather neglecting up to this point. In her report Schellekens gives some references for reading about this process – the suggested texts are:

  • Field, J. (2003). ‘Promoting perception: lexical segmentation in L2 listening.’ English Language Teaching Journal 57/3.
  • Field, J. (2003). Psycholinguistics. London: Routledge.
  • Field, J. (2008). Listening in the Language Classroom. Cambridge: Cambridge University Press.

I am going to be honest – these look fascinating, but I haven’t been able to get to a library recently; as soon as I can, I want to read these texts (and may have to come back and change this post… or at least write a follow-up). If you are familiar with Field’s work and it appears to you that I am labouring under a misapprehension, please comment below; I think this is a crucial area for equipping learners to deal with life in the UK.

Awareness Raising

While I haven’t been able to read these texts yet, I felt that I could follow some of the advice that Philida gave in her talk. One of the approaches suggested was dictation – an activity I feel I don’t use enough with my groups. I’m not sure why; I have simply never got used to it, and so I haven’t thought about it enough to use it regularly (perhaps like a learner who has just been taught a new language point and is starting to understand it, without yet being able to use it in their own language practice). The point that Philida makes is that if we look at the difference between what is said by the teacher in a dictation and what is heard by the learners, we raise awareness of the way that English sounds work, particularly in terms of connected speech and word stress.

Authentic Listening

My lesson was with a Level 1 ESOL group, who are looking at the general topic of ‘News’. You can see the unit from the DfES-produced ‘Skills for Life’ materials here. I use some things from the materials, but I find them a bit chaotic and uninspiring. Luckily for this topic there are plenty of authentic materials floating around; the problem is that much of it is at a high level and challenging for learners to access. I particularly wanted to develop learners’ listening skills, but was concerned that authentic news clips could be tricky for learners at this level.

Being a ‘multimodal’ news platform, the BBC is a very good source of topical resources as many of the stories are accompanied by video or audio – I chose to look at this story: Scotland storm blackout hitting thousands. I wasn’t interested in the text, just the video.

Word Cloud of Transcript (created at http://www.wordle.net)

The first step I wanted to take to make sure that the recording would be accessible for learners was to create an introductory activity to ‘activate schemata’ and also to work on some of the vocabulary in the text. I transcribed the recording and created a word cloud in Wordle. I gave this to learners and displayed some images taken from the Guardian’s coverage of the story. This allowed learners to predict what the story would be about and also ensured that they could identify some of the vocabulary that they would encounter in the text.
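Wordle itself is point-and-click, but the same step can be scripted. Below is a minimal sketch in Python using the third-party wordcloud package – an assumption of mine, not part of the original preparation – with the transcript assumed to be saved as transcript.txt:

    # A sketch of recreating the Wordle step programmatically; the
    # "wordcloud" package and the file names are assumptions.
    from wordcloud import WordCloud

    # Read the transcript of the news clip (hypothetical file name).
    with open("transcript.txt") as f:
        transcript = f.read()

    # Word size reflects frequency, as in Wordle; common English
    # stopwords are removed by default, so topic vocabulary stands out.
    cloud = WordCloud(width=800, height=400, background_color="white")
    cloud.generate(transcript)
    cloud.to_file("wordcloud.png")  # print or project for the prediction task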

Taking on the machine

Following this, I decided to introduce the text by dictating the first section of it to the learners. I had recently seen a learner use the Dragon Dictation app on their iPhone to help them in a lesson, and had been wondering if there was a way I could harness this in class. I mention Dragon on iOS because I had the hardware and the app was free… but it is far from being the only app of its kind, and I am sure there are similar things available for other smartphones, tablets, PCs or whatever. Google in particular seem to be developing tools in this area. Having studied a tiny bit of language engineering on my Masters (click here for details), I remembered that when I did Machine Translation Error Analysis it was not unlike marking learners’ English – the patterns in the kind of mistakes made by the machine can sometimes give you a clue about the way language works. I supposed that something similar might happen with dictation and wanted to try something out. I would pit the learners against my iPhone in a human vs machine contest!
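As an aside, the machine side of the contest can be reproduced without a phone at all. Here is a rough sketch in Python using the third-party SpeechRecognition package – my assumption, since Dragon on iOS offers no scripting hook – run over a hypothetical recording of the dictation:

    import speech_recognition as sr  # pip install SpeechRecognition

    recognizer = sr.Recognizer()

    # "dictation.wav" is a hypothetical recording of the teacher
    # reading the news text aloud.
    with sr.AudioFile("dictation.wav") as source:
        audio = recognizer.record(source)

    try:
        # Google's free web recogniser is one of several back ends the
        # package wraps; like Dragon, it will make mistakes that can be
        # compared with the learners' versions.
        print(recognizer.recognize_google(audio))
    except sr.UnknownValueError:
        print("The recogniser could not make sense of the audio.")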

Human Tape Recorder; Electronic Ear

I used the ‘human tape recorder’ process (described here) to dictate the first few sentences to the learners and, as they were checking what they had written in groups, I dictated the text one final time into my phone headset, emailed myself the results and displayed the resulting text through the data projector.

That’s the far for a bridge behind me nothing is cross that bridge when of Scotland’s major bridges nothing has crossed it since 1030 this morning I recorded a gust of 84 miles an hour and that was enough of the people in charge because it down it had been close the high sided vehicles up to that point

A lot of the differences between the text Dragon created and the one I read out were similar to those in learners’ own texts, and they highlighted certain features of spoken English:

  • weak pronunciation of has
  • differentiating vowel sounds in one and when
  • difficulty in identifying final -ed
  • weak pronunciation of for in for the people
  • differentiating between close and because

The text from Dragon was useful because it had been through the same process as the learners’ texts, giving us a clear point of comparison. Hopefully it was also reassuring that the machine was no better than the learners at this task.
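The comparison step itself can be automated. As a sketch, Python’s standard difflib module aligns the dictated words against the machine’s version so the substitutions stand out; the two fragments below are my reconstruction based on the features listed above, not the exact lesson script:

    import difflib

    # Reconstructed fragments: what was (plausibly) read aloud vs what
    # Dragon produced. These are illustrative, not the actual script.
    dictated = "that was enough for the people in charge to close it down"
    machine = "that was enough of the people in charge because it down"

    # ndiff prefixes words only in the dictated version with "-" and
    # words only in the machine's version with "+".
    for token in difflib.ndiff(dictated.split(), machine.split()):
        if token.startswith(("-", "+")):
            print(token)

In miniature, this is the same Machine Translation Error Analysis idea from above: the pattern of substitutions points straight at weak forms and lost endings.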

At this point I don’t feel I did enough in the lesson to practise identifying these sounds, but it raised the learners’ awareness – they were then able to find answers to the listening comprehension questions from the original text. I also now have a better idea of what I should look at in terms of developing learners’ listening skills.

What’s next?

The question this raises is whether there are any other applications of computational linguistics that can be simply and usefully employed in the classroom… Here we tried to understand why the computer had made the mistakes it did – could this be done with Google Translate? Would it work better with one of the more rule-based systems (usually powered by Systran)? Could learners use voice recognition to hone their pronunciation? Or is it just a piece of pedagogically irrelevant techno-bling? I’d love to see your ideas…

2 thoughts on “Students vs iPhone”

  1. There are voice recognition programs out there and some are pretty good (Rosetta Stone, for example), but this is in no way a replacement for a real teacher.

    Listening is often one of the poorest aspects of language learning among students and it’s always interesting to hear new ideas on how to effectively teach listening.

    Thanks

    1. You’re right – a program like Rosetta Stone is no replacement for a real teacher, and yet I think that’s what it is trying to be – it’s trying to offer the whole package. I’m more interested in how tools that are specifically for listening (such as Dragon Naturally Speaking) could be used to support teachers and learners in the classroom.
      I suppose I see it a bit like the difference between Machine Translation (where everything is automated and the quality of the output is limited) and Computer Aided Translation where translators use tools such as Trados or Deja Vu to aid their own human translations.
      For example, if there were a program that could give learners a reliable transcript of what they had said in a classroom exercise, it could be very useful for accuracy work (followed up by the teacher). I don’t think that the technology is there yet – but I suspect it’s not that far off… For what it’s worth, the iPhone apps I’ve tested are nowhere near reliable enough for that approach to work yet – but I’m interested to see whether they make similar mistakes to learners and whether there could be something to learn from this.
