Accessibility and Literacy: Two Sides of the Same Coin

Accessibility 4 All by Claude Almansi

Treaty for Improved Access for Blind, Visually Impaired and other Reading Disabled Persons

On July 13, 2009, WIPO (World Intellectual Property Organization) organized a discussion entitled Meeting the Needs of the Visually Impaired Persons: What Challenges for IP? One of its focuses was the draft Treaty for Improved Access for Blind, Visually Impaired and other Reading Disabled Persons, written by the WBU (World Blind Union), which had been proposed by Brazil, Ecuador and Paraguay at the 18th session of WIPO’s Standing Committee on Copyright and Related Rights in May [1].

A pile of books in chains about to be cut with pliers. Text: Help us cut the chains. Please support a WIPO treaty for print disabled

From the DAISY Consortium August 2009 Newsletter

Are illiterate people “reading disabled”?

At the end of the July 13 discussion, the Ambassador of Yemen to the UN in Geneva remarked that people who cannot read because they had no opportunity to go to school should be included among “Reading Disabled Persons” and thus benefit from the same copyright restrictions in the WBU‘s draft treaty – in particular, access to digital texts that can be read with Text-to-Speech (TTS) software.

The Ambassador of Yemen hit a crucial point.

TTS was first conceived as an important accessibility tool to grant blind people access to texts in digital form, which are cheaper to produce and distribute than heavy braille versions. Moreover, people who become blind after a certain age may have difficulty learning braille. Now its usefulness is being recognized for others who cannot read print because of severe dyslexia or motor disabilities.

Indeed, why not for people who cannot read print because they could not go to school?

What does “literacy” mean?

No one compos mentis who has seen/heard blind people use TTS to access texts and do things with these texts would question the fact that they are reading. Same if TTS is used by someone paralyzed from the neck down. What about a dyslexic person who knows the phonetic value of the signs of the alphabet, but has a neurological problem dealing with their combination in words? And what about someone who does not know the phonetic value of the signs of the alphabet?

Writing literacy

Sure, blind and dyslexic people can also write notes about what they read. People paralyzed from the neck down and people who don’t know how the alphabet works can’t, unless they can use Speech-to-Text (STT) technology.

Traditional desktop STT technology is too expensive for people in poor countries with a high “illiteracy” rate – one of the most widely used solutions, Dragon NaturallySpeaking, starts at $99. Besides, it has to be trained to recognize the speaker’s voice, which might not be an obvious task for someone illiterate.

Free Speech-to-Text for all, soon?

In Unhide That Hidden Text, Please, back in January 2009, I wrote about Google’s search engine for the US presidential campaign videos, complaining that the text file powering it – produced by Google’s speech-to-text technology – was kept hidden.

However, on November 19, 2009, Google announced a new feature, Automatic captions in YouTube:

To help address this challenge, we’ve combined Google’s automatic speech recognition (ASR) technology with the YouTube caption system to offer automatic captions, or auto-caps for short. Auto-caps use the same voice recognition algorithms in Google Voice to automatically generate captions for video.

(Automatic Captions in YouTube Demo)

So far, in the initial launch phase, only some institutions are able to test this automatic captioning feature:

UC Berkeley, Stanford, MIT, Yale, UCLA, Duke, UCTV, Columbia, PBS, National Geographic, Demand Media, UNSW and most Google & YouTube channels


As the video above says, the automatic captions are sometimes good, sometimes not so good – but better than nothing if you are deaf or don’t know the language. Therefore, when you switch on automatic captions in a video of one of the channels participating in the project, you get a warning:

warning that the captions are produced by automatic speech recognition

Short words are the rub

English – the language for which Google presently offers automatic captioning – has a high proportion of one-syllable words, and this proportion is particularly high when the speaker is attempting to use simple English: OK for natives, but at times baffling for foreigners.

When I started studying English literature at university, we 1st-year students had to follow a course on John Donne’s poems. The professor had magnanimously announced that if we didn’t understand something, we could interrupt him and ask. But doing so in a big lecture hall with hundreds of listeners was rather intimidating. Still, once, when I noticed that the other students around me had stopped taking notes and looked as nonplussed as I was, I summoned my courage and blurted out: “Excuse me, but what do you mean exactly by ‘metaphysical pan’?” When the laughter subsided, the professor said he meant “pun,” not “pan,” and explained what a pun was.

Google’s STT apparently has the same problem with short words. Take the Don’t get sucked in by the rip… video in the UNSW YouTube channel:

If you switch on the automatic captions [2], there are over 10 different transcriptions – all wrong – for the 30+ occurrences of the word “rip.” The word is in the title (“Don’t get sucked in by the rip…”), it is explained in the video description (“Rip currents are the greatest hazards on our beaches.”), but STT software just attempts to recognize the audio. It can’t look around for other clues when the audio is ambiguous.

That’s what beta versions are for

Google deserves compliments for having chosen to semi-publicly beta test the software in spite of – but warning about – its glitches. Feedback both from the partners hosting the automatically captionable videos and from users should help them fine-tune the software.

A particularly precious contribution towards this fine-tuning comes from partners who also provide human-made captions, as in the Official MIT OpenCourseWare 1800 Event Video in the MIT YouTube channel:

Once this short-word issue is solved for English, it should then be easier to apply the knowledge gained to other languages, where short words are less frequent.


Moreover, as the above-embedded Automatic Captions in YouTube Demo video explains, you now:

can also download your time-coded caption file to modify or use somewhere else

I have done so with the Lessig at Educause: Creative Commons video, for which I had used another feature of Google’s STT software: feeding it a plain transcript and letting it add the time codes to create the captions. The resulting caption .txt file I then downloaded says:

and think about what else we could
be doing.

So, the second thing we could be doing is
thinking about how to change norms, our norms,

our practices.
And that, of course, was the objective of

a project a bunch of us launched about 7 years
ago, the Creative Commons project. Creative


Back to the literacy issue

People who are “reading disabled” because they couldn’t go to school could already access texts with TTS technology, as the UN Ambassador of Yemen pointed out at the above-mentioned WIPO discussion on Meeting the Needs of the Visually Impaired Persons: What Challenges for IP? last July.

And soon, when Google opens this automated captioning to everyone, they will be able to say what they want to write in a YouTube video – which can be made directly with any webcam, or even a cell-phone cam – auto-caption it, then retrieve the caption text file.

True, to get a normal text, the time codes should be deleted and the line-breaks removed. But learning to do that should be way easier than learning to fully master the use of the alphabet.
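To illustrate that clean-up step, here is a minimal Python sketch. It assumes the downloaded caption file uses SubViewer-style time-code lines such as “0:00:01.839,0:00:05.350” (or SRT-style cue numbers); the exact export format may differ, so the pattern below is an assumption, not a description of Google’s format.

```python
import re

# Matches lines that are only an SRT-style cue number, or that start with a
# time code like "0:00:01.839,0:00:05.350" -- the exact format YouTube
# exports is an assumption here.
TIME_CODE = re.compile(
    r"^\s*\d+\s*$"                        # bare cue numbers (SRT)
    r"|^\s*\d{1,2}:\d{2}:\d{2}[.,]\d{3}"  # lines beginning with a time code
)

def captions_to_text(caption_file: str) -> str:
    """Strip time codes and cue numbers, then join caption lines into prose."""
    kept = []
    for line in caption_file.splitlines():
        if not line.strip() or TIME_CODE.match(line):
            continue  # skip blank lines, cue numbers and time codes
        kept.append(line.strip())
    return " ".join(kept)

sample = """0:00:01.839,0:00:05.350
and think about what else we could
be doing.

0:00:05.350,0:00:09.800
So, the second thing we could be doing is
thinking about how to change norms, our norms,
"""

print(captions_to_text(sample))
```

Run on the excerpt above, this joins the caption fragments back into running text, which can then be pasted into any document.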


  • Text-to-Speech, a tool first conceived to grant blind people access to written content, can also be used by other reading-disabled people, including people who can’t use the alphabet convention because they were unable to go to school and are thus labeled “illiterate.”
  • Speech-to-Text, a tool first conceived to grant deaf people access to audio content, is about to become far more widely available and far easier to use than it was until recently, thus potentially giving people who can’t use the alphabet convention because they were unable to go to school, and were thus labeled “illiterate,” the possibility to write.

This means that we should reflect on the meanings of the words “literate” and “illiterate.”

Now that technologies first meant to enable people with medically recognized disabilities to use and produce texts can also do the same for those who are “reading disabled” by lack of education, industries and nations presently opposed to the Treaty for Improved Access for Blind, Visually Impaired and other Reading Disabled Persons should start thinking beyond “strict copyright” and consider the new markets that this treaty would open up.

5 Responses

  1. HR 3101, the 21st Century Communications and Video Accessibility Act of 2009, would extend closed captioning requirements from regular television to the Internet. You already know that captioning helps people learning to read, so you can see the huge literacy potential of this bill.

    • Hi Jamie,

      Thank you for this very useful information, and for having linked your user name to Caption Action 2: Internet Closed Captioning on Facebook. Though I am not American, I joined, because accessibility laws passed in any country prod legislators of other countries to do the same – but especially so when the country is as influential as the USA.

      I also liked the witty way in which you at Caption Action 2: Internet Closed Captioning cleared up a possible misunderstanding:

      …we have also become aware that some deaf and hearing people think mistakenly that the 21st Century Communications and Video Accessibility Act of 2009 will require everyone to caption on YouTube. That is not true. Only commercial and government broadcasters will likely be required to caption on YouTube. Grandma won’t have to caption her video of her grandbaby learning how to walk.

      What about schools and universities, though? Do they count as government broadcasters when they post videos online? Or are they already under obligation to caption them under Section 508 of the Rehabilitation Act? (That was my impression, but I have seen many non-captioned educational videos online.)

  2. Excellent article, Claude, and I thought you might be interested in this anecdote.

    When we first started our online education program in Jefferson County Schools, we were deluged by last minute applications from students who had left education and were looking for a path back in. Many were seniors, and we did not have a senior English class. Even though I was the program administrator, I quickly devised a class and taught it myself, trying to stay a couple of weeks ahead of the students.

    Eventually the special education teacher assigned to our program asked me about one of my very best students. She had just received his IEP, which indicated that he both read and wrote at a 2nd grade level.

    I brought him in and asked him to read aloud the last essay he had written. He struggled, as one might expect when a 2nd-grade reader is reading an essay written at the 12th-grade level. But then he asked me if he could just talk about it. He described the entire contents accurately, and even answered probing questions with great insight.

    He then explained that he was using screen-reading and screen-writing software to read the literature and write his essays. He expected to be punished, but I told him the only thing he had done wrong was not telling me about it in the first place.

    As the course went on, he was the star of the class. His insights into the literature and his ability to communicate his ideas were consistent with the top third of the Advanced Placement students I had taught over the years.

    A year later I happened to meet with the special education team that had taught him in his previous high school. Using his test results as a guide, they had never given him anything to read or write beyond the 2nd grade level. They were stunned by what I told them.

    How many other potential geniuses are being suppressed this way?

    • Thanks, John.

      Your student reminded me of several I taught too. Some dyslexic people are very good at compensating, so they don’t get detected as dyslexic until an age when therapy becomes more difficult and less efficient. And in Switzerland, until fairly recently, this also meant that they were not entitled to refunds for therapy or for assistive tech (fortunately, this has changed).

      That was the case of a girl a friend told me about. She was in her final secondary school year, and getting stumped by André Gide’s “Les Faux-monnayeurs”, which was on the reading list for her final exams – a sprawling novel with so many characters that even Gide forgot whom he had made die in an earlier part. The publisher said sorry, they did not have an electronic version they could sell her. Anyway, she couldn’t have afforded proper screen-reading software. So in the end I sent her the link to the French Wikipedia article on Les Faux-monnayeurs, which has – under Une construction complexe (you bet) – a graphic representation of all the family relations in the novel.

      She wrote that it helped. But holy cow, an electronic version would have helped her much more, even without a proper screen reader. Back in the mid-1990s, I read Laclos’ “Les liaisons dangereuses” with a class of non-native French speakers, because adolescents usually enjoy it. But if you’re not a native speaker, it is also rather complicated. So we found the electronic text online and downloaded one copy, which we then reproduced off-line for all the students. Back then, in ISDN times, that meant hogging the school’s connection for 48 hours, and I got duly told off for it. But it was worth it: now that they could look up who was writing to whom about bedding whom in order to spite whom when they forgot, they started enjoying the novel.

  3. Now autocaptioning is available for all YouTube videos: see The Future Will Be Captioned: Improving Accessibility on YouTube (March 4, 2010):

    Today, we are opening up auto-captions to all YouTube users. There will even be a “request processing” button for un-captioned videos that any video owner can click on if they want to speed up the availability of auto-captions. (…)

    But read the rest too, especially if you intend to “request processing” for your existing videos. And it would be interesting to see if/how auto-captioning works with languages other than English.
