Accessibility and Common Sense

By Claude Almansi
Editor, Accessibility Issues

Technology and technology guidelines are very important in implementing accessibility. Yet accessibility is not a technology issue: it is a common sense issue, both because it is logical and because making things as accessible as possible for as many people as possible becomes an obvious necessity once you "sense in common" with the other person, that is, put yourself in his or her place.

Accessibility in 3D life

(I am not sure if what follows makes sense to readers in America, as accessibility in real life seems to be part of the American culture.)

People without a motor disability usually don't notice steps at the entrance of public buildings or toilet doors too narrow for a wheelchair. If you use a wheelchair, or often accompany a person who does, you do. Builders' decisions can at times lead to strange absurdities, even though builders know the accessibility rules and the architectural technology to implement them. For instance, in 2000, a grand accessible toilet was added to the Museo d'Arte in Lugano (CH), while at the same time access to the museum in a wheelchair was made well-nigh impossible by the addition of a visitor-counting turnstile at the main entrance: people in wheelchairs had to be carried up a spiral staircase to a back door.

True, the building decisions were made by the town administration, which, though it had a public works department whose staff should know the rules and the technology to implement them, was not known for its common sense, in either meaning of the term. However, in 2001, after a protest by a disabled people's association was taken up in the local media and prompted questions in the local parliament, the administration finally provided a lift to the level of the back entrance for people in wheelchairs.

Computer accessibility: non-text objects

Guidelines for computerized and web content accessibility say that equivalent alternatives to auditory and visual content must be provided for deaf and blind people (see the first WCAG 1.0 guideline, for instance). If a video is used, this means captioning the audio for deaf people and giving an audio description of nonverbal actions for blind people. Or, if this is not feasible, at least offering an alternative text transcript that can be read both by blind people (through text-to-speech) and by deaf people.

Alt attribute

Static images that convey information should be provided with an alternative content description: when a short description is enough, this can be done in the alt attribute (alt="description") of the img tag that displays the image. This should be fairly simple: nowadays, authoring tools, be they desktop or online like the one for this blog, prompt you to add such a description when you insert an image through the "rich text" editor (see the Authoring Tool Accessibility Guidelines (ATAG) Overview and links therein), and they add the alt attribute for you.
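
For instance, here is a minimal sketch of what such markup can look like (the file name and the wording of the description are invented for illustration):

```html
<!-- Informative image: the alt text conveys the information the picture carries -->
<img src="museum-entrance.jpg"
     alt="Main entrance of the museum, reached by a flight of steps">
```

A screen reader reads the alt text aloud in place of the image, so the description should say what the image means in context, not what file it is.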

Nevertheless, while the above-mentioned Museo d'Arte of Lugano gave in to public pressure about wheelchair accessibility, its website remains blithely callous in ignoring basic accessibility precepts, in spite of directives to make all public administration sites accessible. It still has a "no right click" script that disables the contextual menu, thus hampering people with motor disabilities, even though such scripts have long been known not to prevent users from saving images (they can save the whole webpage, or look up the URL of a given image in the source code). And it uses images of text, without any alt attribute, instead of normal text for its navigation. Therefore, if you view the homepage in "replace images by alt attributes" mode in order to get the same content a blind person using a screen reader would, the result is:

[Screenshot: as all the text on this page is presented as GIF images of text WITHOUT alt attributes, the only word displayed is HOMEPAGE.]

Empty alt attribute

If the image is purely decorative, you still provide an alt attribute, but you leave it empty (alt=""): this way, text-to-speech software simply ignores it. Nevertheless, there are websites that use the empty alt attribute (no description) for images that convey information, and that, vice versa, add a useless description to decorative images, which means that the screen reader reads a lot of bunk.
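
In markup terms, the two cases look roughly like this (file names invented for illustration):

```html
<!-- Purely decorative image: empty alt, so screen readers skip it silently -->
<img src="ornamental-border.gif" alt="">

<!-- Informative image wrongly given an empty alt:
     a screen reader user never learns the opening hours -->
<img src="opening-hours.gif" alt="">
```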

Limits of automated accessibility checkers

Automated accessibility checkers are very useful for spotting accessibility problems. But as they only check the source of the page, they won't fail a page for inappropriate uses of the empty alt attribute; they will just suggest that you check whether the image really conveys no information. Maybe at times the empty alt attribute is deliberately used to pass the automated check, for instance when laws or regulations state that a given type of computerized content (educational content in particular) must apply accessibility guidelines, and compliance is only verified with an automated program.

Embedding an inaccessible page in a frame is another way to bypass automated accessibility checks. www.mantecausd.net (mentioned in Microsoft Case Studies: Manteca Unified School District) does pass the Priority 1 level of accessibility with the CynthiaSays checker, in spite of an evident lack of alt attributes (and misuses of the empty alt attribute in some cases). But it does so thanks to the use of frames. What the checker reads is the source page, which only says: "Welcome to the Manteca Unified School District. Our site uses frames, but your browser doesn't support them." The real content is in http://manteca.schoolspan.com, which is embedded in a frame of www.mantecausd.net. CynthiaSays does fail http://manteca.schoolspan.com for its lack of alternative descriptions, but a hasty check on just www.mantecausd.net might misleadingly give the impression that the page conforms to the Priority 1 level of accessibility.
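
A rough sketch of the situation (the markup below is reconstructed for illustration, not copied from the actual site): an automated checker pointed at the outer page parses only the frameset, while the real, inaccessible content is pulled in from another server.

```html
<!-- Outer page: this is all the checker sees -->
<html>
  <head><title>Manteca Unified School District</title></head>
  <frameset rows="100%">
    <!-- The real content lives on another server -->
    <frame src="http://manteca.schoolspan.com" title="District site content">
    <noframes>
      <body>
        Welcome to the Manteca Unified School District.
        Our site uses frames, but your browser doesn't support them.
      </body>
    </noframes>
  </frameset>
</html>
```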

Be it through the inappropriate use of the empty alt attribute or of frames, though, the result is that blind people don't get the information conveyed by images. This is why it is so essential to apply common sense and to put oneself in the other person's place.

Accessibility in education

Fortunately, most educational web sites are designed for real accessibility to the greatest possible number of students, not just to pass automated accessibility tests. And while this can be time-consuming, it also offers great advantages to all students:


Designing for accessibility leads to greater educational usability

In the 3D world, removing — or better, avoiding from the beginning — architectural barriers to facilitate access for people in wheelchairs also improves usability for other people: mothers with a child in a pram, aged persons for whom the staircase access is too tiring, etc.

This is also true of designing computerized content with accessibility to the greatest possible number of users in mind. If you structure a text correctly, using hierarchical heading styles for subtitles (instead of just playing around with bold and font size) to make navigation with a screen reader easier for blind people, you can also automatically extract an interactive table of contents. This is handy for everyone. And adding explanatory graphics to help people who have other, non-visual, text reading impairments (dyslexia, for instance) will also help people who are more visually inclined.
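
As a sketch, the difference looks like this (the headings themselves are invented):

```html
<!-- Visual formatting only: it looks like a structure on screen, but screen
     readers and table-of-contents generators see ordinary paragraphs -->
<p><b><font size="5">Course overview</font></b></p>
<p><b>Week 1: Introduction</b></p>

<!-- Semantic headings: a screen reader can jump from heading to heading,
     and an interactive table of contents can be generated automatically -->
<h1>Course overview</h1>
<h2>Week 1: Introduction</h2>
```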

The point is that accessibility leads to redundancy, in order to cover as many kinds of disability as possible. Hence it also covers different learning styles.

Teachers’ and students’ content

While main educational web sites nowadays tend to apply accessibility guidelines, course materials uploaded to a course management system or platform can at times remain an issue. It is therefore necessary to educate teachers about what accessibility does and does not entail, and about simple tools to implement it (captioning, etc.).

Web 2.0 and accessibility in education

Some education authorities are very wary of public Web 2.0 tools being used in schools, usually because they fear they would have to answer for students being exposed to inappropriate contacts and content. However, even when there is no such veto from the powers above, Web 2.0 tools can present accessibility issues, especially for authoring. Jennison Asuncion has created the LinkedIn Web 2.0 Accessibility Forum, where questions about these issues are discussed (you have to join, but anyone can).

Universal accessibility?

Some education authorities require that links to the Nth level be checked for appropriate content in course materials. This is not feasible, even in the limited "non-pornographic" sense of "appropriate" they usually have in mind, let alone for accessibility. Each person is different, and so it has been claimed that there is no such thing as universal accessibility, because persons with a disability will each have different requirements. However, they will also each have their own ways of addressing barriers.

Faced with a reading requirement presented as an image PDF, for instance, blind students are more likely than non-blind ones to think of putting it through optical character recognition software to get a text version their text-to-speech can read, and to have such software on their computer. Yet why not provide the reading requirement as text in the first place? It would be far more usable for everybody. One problem is that accessibility is often perceived as something very complicated and technological, "for geeks." This is discouraging. So are some myths like "accessibility and usability are not compatible," whose propagators at times allege to prove it by saying that "a black text on a black background," like the one below1

This is an example of "black on black" text that might pass automated accessibility tests. But who – except kids wanting to write "secret messages" – would do that?

would pass accessibility checks. Automated checks, maybe. But as explained above, automated checks are useful tools, but just tools.
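
As a sketch, such a passage could be produced with something like the markup below; it is syntactically valid, so a checker that only parses markup has nothing to object to, even though sighted readers see only a black box:

```html
<!-- Black text on a black background: valid markup, unreadable by sight -->
<p style="color: #000000; background-color: #000000;">
  This is a "secret message" that only selecting the text will reveal.
</p>
```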

So even if universal electronic accessibility is not concretely reachable, accessibility to the greatest number of people, according to their various capacities and impairments, must be the goal. To this end, there are some basic “common sense” design principles that are useful to all, and there are free, easy-to-use tools to implement them. And for fine-tuning, there are experts ready to answer questions. It is necessary to make people — and teachers in particular — who produce electronic content aware of this.

Pet peeve

One of the accessibility design principles is the already mentioned use of heading styles for titles and subtitles in a text, rather than messing about with character size and shape and bold and what-not directly on the text. See Using Headings Correctly in WebAIM’s Creating a Semantic Structure page.

Indeed, heading styles are semantic because they identify for others, not only for the screen readers used by the blind, what you consider main and subsidiary content, and they allow you to generate an interactive table of contents2. Yet, somehow, it is at times difficult to convey the usefulness of headings, even to teachers and to people otherwise endowed with strong logical capacities. So why do blog platforms, this one included, almost never offer the possibility of choosing heading styles in their visual editor, while wiki platforms do?

Sure, authors can switch to the html version and add the necessary tags, as I have done here. But I can still remember the not-so-distant time when I had sworn I would never learn a single html tag, because I thought it was “geek stuff”. . .

______________________

1 To view the text, just highlight the black box by dragging the mouse over it.

Three Video Captioning Tools

By Claude Almansi
Staff Writer

First of all, thanks to:

  • Jim Shimabukuro for having encouraged me to further examine captioning tools after my previous Making Web Multimedia Accessible Needn’t Be Boring post – this has been a great learning experience for me, Jim
  • Michael Smolens, founder and CEO of DotSUB.com, and Max Rozenoer, administrator of Overstream.net, for their permission to use screenshots of Overstream and DotSUB captioning windows, and for their answers to my questions.
  • Roberto Ellero and Alessio Cartocci of the Webmultimediale.org project for their long patience in explaining multimedia accessibility issues and solutions to me.
  • Gabriele Ghirlanda of UNITAS.ch for having tried the tools with a screen reader.

However, these persons are in no way responsible for possible mistakes in what follows.

Common Features

Video captioning tools are similar in many respects: see the screenshot of a captioning window at DotSUB:

[Screenshot: the DotSUB captioning window]

and at Overstream:

[Screenshot: the Overstream captioning window]

In both cases, there is a video player, a list of captions, and a box for writing new captions, with boxes for the start and end times of each caption. The MAGpie desktop captioning tool (downloadable from http://ncam.wgbh.org/webaccess/magpie) is similar: see the first screenshot in David Klein and K. "Fritz" Thompson, Captioning with MAGpie, 2007 [1].

Moreover, in all three cases, captions can either be written directly in the tool or created by importing a file in which they are separated by blank lines, and they can be exported as a file too.
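
With DotSUB and Overstream, for instance, that file is in SubRip (.srt) format (more on this in the next section): each caption is numbered, carries its start and end times, and is separated from the next by a blank line. A minimal sketch with invented timings and wording:

```
1
00:00:01,000 --> 00:00:04,500
First caption: a short stretch of transcribed speech.

2
00:00:04,500 --> 00:00:08,200
Second caption, starting where the first one ends.
```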

What follows is just a list of some differences that could influence your choice of a captioning tool.

Overstream and DotSUB vs MAGpie

  • DotSUB and Overstream are online tools (only a browser is needed to use them, whatever the OS of the computer), whereas MAGpie is a desktop application that works with Windows and Mac OS, but not with Linux.
  • DotSUB and Overstream use SubRip (SRT) captioning [2], while MAGpie uses Synchronized Multimedia Integration Language (SMIL) captioning [3].
  • Overstream and DotSUB host the captioned result online; MAGpie does not.
  • The preparation for captioning is less intuitive with MAGpie than with Overstream or DotSUB, but on the other hand MAGpie offers more options and produces simpler files.
  • MAGpie can be used by disabled people, in particular by blind and low-sighted people using a screen reader [4], whereas DotSUB and Overstream don’t work with a screen reader.

Overstream vs DotSUB

  • The original video can be hosted at DotSUB; with Overstream, it must be hosted elsewhere.
  • DotSUB can also be used with a video hosted elsewhere, but you must link to the streaming flash .flv file, whereas with Overstream, you can link to the page of the video – but Overstream does not support all video hosting platforms.
  • If the captions are first written elsewhere and then imported as an .srt file, Overstream is more tolerant of coding mistakes than DotSUB – but this cuts both ways: some people might prefer to have their file rejected rather than end up with gaps in the captions.
  • Overstream allows more precise time-coding than DotSUB, and it also has a “zooming feature” (very useful for longish videos), which DotSUB doesn’t have.
  • DotSUB can be used as a collaborative tool, whereas Overstream cannot yet, though the Overstream administrators are planning to make this possible in the future.
  • With DotSUB, you can have switchable captions in different languages on one player. With Overstream, there can only be one series of captions in a given player.

How to Choose a Tool . . .

So how to choose a tool? As with knitting, first make a sample: try a short video with the different tools, because the short descriptive lists above cannot replace experience. Then choose the most appropriate tool according to your aims in captioning a given video and to your possible collaborators' availability, IT resources, and abilities.

. . . Or Combine Tools

The great thing with these tools is that you can combine them:

As mentioned in my former Making Web Multimedia Accessible Needn't Be Boring post, I had started captioning "Missing in Pakistan" a year ago on DotSUB, but went on to use MAGpie for SMIL captioning (see the result at [5]). But when Jim Shimabukuro suggested this presentation of captioning tools, I found my aborted attempt at DotSUB. As you can also do the captioning there by importing a .srt file, I tried to transform my ".txt for SMIL" file of the English captions into a .srt file. I bungled part of the code, so DotSUB refused the file. Overstream accepted it, and I corrected the mistakes using both. Results are at [6] (DotSUB) and [7] (Overstream). And now that I have a decent .srt file of the English transcript, I could also use it to caption the video at YouTube or Google Video: see YouTube's "Video Captions: Help with Captions" [8]. (Actually, there is a freeware program called Subtitle Workshop [9] that could apparently do this conversion cleanly, but it is Windows-only and I have a Mac.)

This combining of tools could be useful even for less blundering people. Say one person in a project has better listening comprehension of the original language than the others and prefers Overstream: s/he could make the first transcript there and export the .srt file, which could then be imported into DotSUB to produce a transcript that all the others could use to make switchable captions in other languages. If that person with better listening comprehension were blind, s/he might use MAGpie to do the transcript, and s/he or someone else could convert it to a .srt file that could then be uploaded to either DotSUB or Overstream. And so on.

Watch Out for New Developments

I have only tried to give an idea, as correctly as I could, of three captioning tools I happen to be acquainted with. The complexity of making videos accessible, and in particular of the numerous captioning solutions, is illustrated in the Accessibility/Video Accessibility section [10] of the Mozilla wiki – and my understanding of tech issues remains very limited.

Moreover, these tools are continuously progressing. Some have disappeared – Mojiti, for instance – and others will probably appear. So watch out for new developments.

For instance, maybe Google will make generally available the speech-to-text tool that underlies its search engine for the YouTube videos of the candidates in the US presidential elections (see "'In their own words': political videos meet Google speech-to-text technology" [11]): transcribing remains the heaviest part of captioning, and an efficient, preferably online, speech-to-text tool would be an enormous help.

And hopefully, there will soon be an online, browser-based and accessible SMIL-generating tool. SubRip is great, but with SMIL, captions stay put under the video instead of invading it, so you can make longer captions, which simplifies the transcription work. Moreover, SMIL is more than just a captioning solution: the SMIL "hub" file can also coordinate a second video for sign language translation, and audio descriptions. Finally, SMIL is a W3C standard, which means that when the standard gets upgraded, it still "degrades gracefully" and the full information remains available to all developers using it: see "Synchronized Multimedia Integration Language (SMIL 3.0) – W3C Recommendation 01 December 2008" [12].
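
As a rough sketch of what such a SMIL "hub" file can look like (file names, region sizes and attribute details are approximate, for illustration only; MAGpie generates the real thing):

```xml
<!-- The video plays in one region and the captions are displayed in a
     separate region below it, instead of being overlaid on the image -->
<smil>
  <head>
    <layout>
      <root-layout width="320" height="300"/>
      <region id="videoregion" top="0" left="0" width="320" height="240"/>
      <region id="captionregion" top="240" left="0" width="320" height="60"/>
    </layout>
  </head>
  <body>
    <par>
      <video src="missing_in_pakistan.rm" region="videoregion"/>
      <!-- Caption track; a test attribute such as systemCaptions makes it
           appear only when the user has captions switched on -->
      <textstream src="captions_en.rt" region="captionregion" systemCaptions="on"/>
    </par>
  </body>
</smil>
```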