Live Radio Captioning for the Deaf

By Claude Almansi
Staff Writer

Thanks to:

  • Sylvia Monnat, director of captioning at Télévision Suisse Romande (French-speaking Swiss television www.tsr.ch) for the explanations she gave me by phone on live captioning through re-speaking.
  • Neal Stein, of Harris Corporation (www.harris.com), for the authorization to publish on YouTube the video excerpt shown below, and for his explanations on the US live radio captioning project.

Why Caption Radio?

Making radio accessible for deaf and hard of hearing persons is not commonly perceived as a priority. For instance, the new version of the Swiss law and ordinance on Radio and Television that came into force in 2007 does add several provisions about accessibility for people with sight and hearing disabilities but does not mention captioning radio. See art. 7 [1] of the law and art. 7 [2] and 8 [3] of the ordinance (in French). According to most non-deaf people’s “common sense,” deaf persons don’t use radio – just as many non-blind people still believe that blind people can’t use computers.

Yet deaf persons are interested in accessing radio content through captioning, as Cheryl Heppner, Executive Director of NVRC [4], explains in this video:

The video is from the January 8, 2008, I-CART introductory press conference at CES 2008. The full video can be downloaded from www.i-cart.net. Transcript of the above excerpt:

I’m one of 31 million people in the United States who are deaf or hard of hearing. A number that continues to grow. NPR Labs and its partners are on the verge of making many of my dreams come true. Beyond having that really crucial emergency information, captioned radio could also open up a world I’ve never had, because I lost my hearing before my seventh birthday.

When I am stuck in Washington’s legendary Beltway gridlock, I could check the traffic report and find out why, what my best route would be. I could check the sports scores and follow the games for all my favorite teams. I could know why my husband is always laughing so uproariously when he listens to “Car Talk.” And I could annoy him by singing along badly to the lyrics of his favorite songs.

I can’t wait. Thank you.

NPR’s Live Captioned Broadcast of Presidential Election

The work by NPR Labs and its partners, mentioned by Cheryl Heppner in this January 2008 conference, led to the broadcasting of live captioned debates on NPR during the US election campaign a few months later. The assessment by deaf and hard of hearing people of this experiment was extremely positive. According to the press release “Deaf and Hard of Hearing Vote Yes on New Radio Technology During NPR’s Live Captioned Broadcast of Presidential Election” (Nov. 13, 2008) [5]:

  • 95% were happy with the level of captioning accuracy, a crucial aspect for readability and comprehension
  • 77% said they would be interested in purchasing a captioned radio display unit when it becomes available
  • 86% indicated they would be interested in purchasing a “dual-view” screen display for a car (which would enable a deaf passenger to see the captioned radio text while the driver listens to the radio).

How Are Radio Captions Transmitted?

A digital radio signal can be divided to transmit audio and text, and the text can be read on the radio display. In fact, text messages are already being sent to car radio displays through Radio Data System (RDS). For instance, this is how the Swiss traffic information service Inforoutes updated drivers in real time – or almost – about the state of traffic jams due to work in the Glion tunnel in 2004. (See “Service,” in French, on page 4, in the May 2004 newsletter of Les Radios Francophones Publiques [6].)

The radio devices used in the experiment conducted by NPR Labs and its partners that Cheryl Heppner mentions have a bigger display. For the exact technical explanation of how the captions work, see the presentations section of www.i-cart.net.

Stenocaptioning vs. Respeaking

The NPR experiment mentioned above used “stenocaptioning,” i.e., the captions were written with a stenotype [7] whose output gets translated into normal-English captions by computer software. Live stenocaptioning – whether for news broadcasts or for in-presence events in specially equipped venues – seems to be the preferred solution in countries such as the US and Italy that have a tradition of stenotyping court proceedings or parliamentary debates.

In most other European countries, according to Ms. Sylvia Monnat, director of captioning at Télévision Suisse Romande (French-speaking Swiss TV – www.tsr.ch), broadcasters tend to prefer “respeaking,” which works with speech-to-text technology: the software gets trained to recognize the voice of respeakers, and then converts what they repeat into captions.

Ms. Monnat further explained that, on the one hand, the main advantage of respeaking involves training. Countries without a stenotyping tradition do not offer courses for it, whereas existing interpretation schools can easily arrange respeaking courses, since respeaking is a normal exercise in the training of conference interpreters. Moreover, respeaking is easier to learn than stenotyping.

On the other hand, it takes time, first, to train the speech-to-text software to recognize each respeaker’s voice and, second, to add words not present in its basic vocabulary for each respeaker’s voice profile. Moreover, enough respeakers have to be trained so that one whose voice is recognized by the software will be available when needed. By contrast, once a new word has been added to the vocabulary of the stenocaptioning software, it can be used by any stenocaptioner.

Outlook

The fast evolution of technology makes it difficult to foresee how live captioning will develop, even in the near future. Radio and television are merging into “multimedia broadcasting.” And, in turn, the line between broadcasting and the internet is gradually fading (see the HDTV offers from internet providers). Speech-to-text technology will probably continue to improve. Multimedia devices are also evolving rapidly.

However, the response of the deaf and hard of hearing people who participated in the NPR live captioning experiment seems to allow one safe surmise: live radio captioning is here to stay, whatever means it may use tomorrow.

Resources

Further information on live captioning can be found in the online version of the “Proceedings of the First International Seminar on Real-time Intralingual Subtitling” held in Forlì, Italy, on Nov. 17, 2006 [8].

This and other online resources mentioned here have been tagged “captioning” in Diigo and can therefore be found, together with resources added by other Diigo users, in www.diigo.com/tag/captioning.

Three Video Captioning Tools

By Claude Almansi
Staff Writer

First of all, thanks to:

  • Jim Shimabukuro for having encouraged me to further examine captioning tools after my previous Making Web Multimedia Accessible Needn’t Be Boring post – this has been a great learning experience for me, Jim
  • Michael Smolens, founder and CEO of DotSUB.com and Max Rozenoer, administrator of Overstream.net, for their permission to use screenshots of Overstream and DotSUB captioning windows, and for their answers to my questions.
  • Roberto Ellero and Alessio Cartocci of the Webmultimediale.org project for their long patience in explaining multimedia accessibility issues and solutions to me.
  • Gabriele Ghirlanda of UNITAS.ch for having tried the tools with a screen reader.

However, these persons are in no way responsible for possible mistakes in what follows.

Common Features

Video captioning tools are similar in many respects: see the screenshot of a captioning window at DotSUB:

[Screenshot: DotSUB captioning window]

and at Overstream:

[Screenshot: Overstream captioning window]

In both cases, there is a video player, a list of captions, and a box for writing new captions, with fields for the start and end time of each caption. The MAGpie desktop captioning tool (downloadable from http://ncam.wgbh.org/webaccess/magpie) is similar: see the first screenshot in David Klein and K. “Fritz” Thompson, Captioning with MAGpie, 2007 [1].

Moreover, in all three cases, captions can be either written directly in the tool or created by importing a file in which they are separated by blank lines – and they can be exported as a file too.
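For readers who have never opened one, such a caption file is plain text: each cue has an index line, a time range, and one or more lines of text, with a blank line between cues. As a minimal sketch (the function name and parsing logic are mine, not part of any of these tools), a SubRip (.srt) file can be read like this:

```python
# Minimal sketch of parsing a SubRip (.srt) caption file.
# Cue format: index line, "HH:MM:SS,mmm --> HH:MM:SS,mmm", text lines,
# blank line between cues.

def parse_srt(text):
    """Return a list of (start, end, caption_text) tuples."""
    cues = []
    for block in text.strip().split("\n\n"):
        lines = block.strip().splitlines()
        if len(lines) < 2:
            continue  # skip malformed blocks
        start, _, end = lines[1].partition(" --> ")
        cues.append((start.strip(), end.strip(), "\n".join(lines[2:])))
    return cues

sample = """1
00:00:01,000 --> 00:00:04,000
I'm one of 31 million people in the United States
who are deaf or hard of hearing.

2
00:00:04,500 --> 00:00:07,000
A number that continues to grow."""

cues = parse_srt(sample)
print(len(cues))   # 2
print(cues[0][0])  # 00:00:01,000
```

This is also why a stray blank line or a mistyped timecode in a hand-edited file can make a tool reject it, as discussed below.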

What follows is just a list of some differences that could influence your choice of a captioning tool.

Overstream and DotSUB vs MAGpie

  • DotSUB and Overstream are online tools (only a browser is needed to use them, whatever the OS of the computer), whereas MAGpie is a desktop application that works with Windows and Mac OS, but not with Linux.
  • DotSUB and Overstream use SubRip (SRT) captioning [2], while MAGpie uses Synchronized Multimedia Integration Language (SMIL) captioning [3].
  • Overstream and DotSUB host the captioned result online; MAGpie does not.
  • The preparation for captioning is less intuitive with MAGpie than with Overstream or DotSUB, but on the other hand MAGpie offers more options and produces simpler files.
  • MAGpie can be used by disabled people, in particular by blind and low-vision people using a screen reader [4], whereas DotSUB and Overstream don’t work with a screen reader.
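To give an idea of what the SMIL approach looks like, here is an illustrative sketch of a SMIL “hub” file; the file names and region ids are invented for illustration, not taken from MAGpie’s actual output:

```xml
<!-- Illustrative SMIL sketch (file names and ids invented): the hub file
     plays a video and a separate caption text stream in parallel, each
     in its own screen region, with the captions below the video. -->
<smil>
  <head>
    <layout>
      <root-layout width="320" height="290"/>
      <region id="videoregion" top="0" height="240"/>
      <region id="captionregion" top="240" height="50"/>
    </layout>
  </head>
  <body>
    <par>
      <video src="missing_in_pakistan.mp4" region="videoregion"/>
      <textstream src="captions_en.rt" region="captionregion"/>
    </par>
  </body>
</smil>
```

Note how the captions live in a separate file referenced by the hub: that separation is what makes it easy to swap in other languages or add further parallel streams.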

Overstream vs DotSUB

  • The original video can be hosted at DotSUB; with Overstream, it must be hosted elsewhere.
  • DotSUB can also be used with a video hosted elsewhere, but you must link to the streaming flash .flv file, whereas with Overstream, you can link to the page of the video – but Overstream does not support all video hosting platforms.
  • If the captions are first written elsewhere and then imported as an .srt file, Overstream is more tolerant of coding mistakes than DotSUB – but this cuts both ways: some people might prefer to have their file rejected rather than to have gaps in the captions.
  • Overstream allows more precise time-coding than DotSUB, and it also has a “zooming feature” (very useful for longish videos), which DotSUB doesn’t have.
  • DotSUB can be used as a collaborative tool, whereas Overstream cannot yet – though the Overstream administrators are planning to make that possible in the future.
  • With DotSUB, you can have switchable captions in different languages on one player. With Overstream, there can only be one series of captions in a given player.

How to Choose a Tool . . .

So how to choose a tool? As with knitting, first make a sample with a short video using different tools: the short descriptive lists above cannot replace experience. Then choose the most appropriate one according to your aims in captioning a given video and to your possible collaborators’ availability, IT resources, and abilities.

. . . Or Combine Tools

The great thing with these tools is that you can combine them:

As mentioned in my former Making Web Multimedia Accessible Needn’t Be Boring post, I had started captioning “Missing in Pakistan” a year ago on DotSUB, but had gone on using MAGpie for SMIL captioning (see the result at [5]). But when Jim Shimabukuro suggested this presentation of captioning tools, I found my aborted attempt at DotSUB. As you can also do the captioning there by importing a .srt file, I tried to transform my “.txt for SMIL” file of the English captions into a .srt file. I bungled part of the code, so DotSUB refused the file. Overstream accepted it, and I corrected the mistakes using both. Results at [6] (DotSUB) and [7] (Overstream). And now that I have a decent .srt file for the English transcript, I could also use it to caption the video at YouTube or Google Video: see YouTube’s “Video Captions: Help with Captions” [8]. (Actually, there is a freeware program called Subtitle Workshop [9] that could apparently do this conversion cleanly, but it is Windows-only and I have a Mac.)
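For anyone attempting a similar conversion, the fiddly part is mostly reformatting the timecodes into SubRip’s HH:MM:SS,mmm form. A hedged sketch in Python (the input structure here is a simplified stand-in of my own, not MAGpie’s actual export format):

```python
# Sketch: turn caption timings given in seconds into SubRip (.srt) cues.
# The (start, end, text) input structure is a simplified stand-in,
# not the actual format exported by MAGpie.

def srt_time(seconds):
    """Format a time in seconds as the SRT timecode HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def to_srt(cues):
    """cues: list of (start_seconds, end_seconds, text) tuples."""
    blocks = []
    for i, (start, end, text) in enumerate(cues, 1):
        blocks.append(f"{i}\n{srt_time(start)} --> {srt_time(end)}\n{text}")
    return "\n\n".join(blocks) + "\n"

print(srt_time(61.5))  # 00:01:01,500
```

A mistake as small as using a dot instead of a comma in the milliseconds is exactly the kind of “bungled code” that gets a file rejected by the stricter tools.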

This combining of tools could be useful even for less blundering people. Say one person in a project has better listening comprehension of the original language than the others and prefers Overstream: s/he could make the first transcript there and export the .srt file, which could then be imported into DotSUB to produce a transcript that all the others could use to make switchable captions in other languages. If that person with better listening comprehension were blind, s/he might use MAGpie to do the transcript, and s/he or someone else could convert it to a .srt file that could then be uploaded either to DotSUB or Overstream. And so on.

Watch Out for New Developments

I have only tried to give an idea of three captioning tools I happen to be acquainted with, as correctly as I could. The complexity of making videos accessible and in particular of the numerous captioning solutions is illustrated in the Accessibility/Video Accessibility section [10] of the Mozilla wiki – and my understanding of tech issues remains very limited.

Moreover, these tools are continuously progressing. Some have disappeared – Mojiti, for instance – and other ones will probably appear. So watch out for new developments.

For instance, maybe Google will make available the speech-to-text tool that underlies its search engine for the YouTube videos of the candidates in the US presidential election (see “‘In their own words’: political videos meet Google speech-to-text technology” [11]): transcribing remains the heavy part of captioning, and an efficient, preferably online speech-to-text tool would be an enormous help.

And hopefully, there will soon be an online, browser-based, and accessible SMIL-generating tool. SubRip is great, but with SMIL, captions stay put under the video instead of invading it, and thus you can make longer captions, which simplifies the transcription work. Moreover, SMIL is more than just a captioning solution: the SMIL “hub” file can also coordinate a second video for sign language translation, and audio descriptions. Finally, SMIL is a W3C standard, and this means that when the standard gets upgraded, it still “degrades gracefully” and the full information is available to all developers using it: see “Synchronized Multimedia Integration Language (SMIL 3.0) – W3C Recommendation 01 December 2008” [12].
