Info Literacy: Julian Assange’s Statement for the Feb. 4, 2011 Melbourne Rally

By Claude Almansi
Editor, Accessibility Issues
ETCJ Associate Administrator

Several mainstream media outlets have mentioned, and at times quoted from, the video statement Julian Assange recorded in the UK for a rally in support of WikiLeaks in Melbourne on February 4, 2011. These reports are easily retrievable with a search engine, and here is the video, captioned in English:

Vodpod videos no longer available.

The video, “Julian Assange Challenges The Internet Generati…,” was posted with Vodpod; here are its subtitles file (.srt, 12 Kb, with timecodes) and transcript (.txt, 8 Kb, without timecodes).
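Incidentally, the transcript is essentially the subtitles file with the cue numbers and timecodes stripped out. Here is a minimal Python sketch of that conversion – the file names are placeholders, and I assume the standard SubRip layout of numbered cues separated by blank lines:

```python
# A minimal sketch, not from the original post: derive the plain-text
# transcript from the .srt subtitles by dropping cue numbers and timecode
# lines. File names are invented placeholders.
import re

with open("assange_statement.srt", encoding="utf-8") as f:
    blocks = f.read().strip().split("\n\n")  # one block per numbered cue

text_lines = []
for block in blocks:
    for line in block.splitlines():
        # Skip the cue-number line and the "HH:MM:SS,mmm --> HH:MM:SS,mmm" line.
        if line.strip().isdigit() or re.search(r"\d{2}:\d{2}:\d{2},\d{3} -->", line):
            continue
        text_lines.append(line)

with open("transcript.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(text_lines))
```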

Wouldn’t it, together with the reports about it, make a nice object for media literacy activities? Please propose yours in the comments to this post.
And if you wish to subtitle it in other languages, you can do so at:

Of Cows, Captions and Copyright: Users Need the Right to Caption and Subtitle Videos for Access and Learning

By Claude Almansi
Editor, Accessibility Issues

Disclaimer | Digesting grass | Digesting videos | Video and text | Read-Write culture and tools | Universal Subtitles | Copyright hits the fan | Lessig’s plea | Other obstacles | Solution?

Disclaimer

Non-scientists should refrain from using scientific concepts as metaphors. I am fully aware of this, and in fact, when a sociologist or other humanities scholar hijacks terms or phrases like “black hole,” “big bang,” “DNA,” etc., I skip his/her text if possible.

Nevertheless, what little I understand of how the cellulase enzyme works for ruminants has been very instrumental in my first perception of how captioning videos helps all users digest their content, and it underlies what I have written here so far about captioning. Hence the decision to come out explicitly with this subjective and uninformed perception of it.

Digesting grass

Cows can digest and assimilate grass cellulose because they ruminate it – but not only: humans could chew and re-chew grass for hours and hours, yet they would still excrete its cellulose whole, without assimilating any, because we lack something cows have: the cellulase enzyme that chops up the molecules of cellulose into types of sugar that can be assimilated.

YouTube, Geoblocks and Proxies

Accessibility 4 All by Claude Almansi

Geoblocking as a censorship measure

Computers in the Classroom Can Be Boring

By Lynn Zimmerman
Editor, Teacher Education

The headline of an article in the Chronicle of Higher Education this week caught my attention: “‘Teach Naked’ Effort Strips Computers from Classrooms.” The article, posted on July 20, 2009, was written by Jeffrey Young and is actually titled “When Computers Leave the Classroom, So Does Boredom.”

Young writes that, according to studies, students find lectures and labs that depend on computer technology less interesting than those that rely on discussion and interaction. PowerPoint presentations (one of the main areas of complaint), for example, are often used merely as a replacement for transparencies on an overhead projector and make no substantive difference in lesson delivery. An effective use of video technology, by contrast, is to spark discussion, not to replace a lecture.

Young says students also complain that these interactive classes require more effort than lectures. He says that students who are used to the lecture model are often resistant to this type of participatory learning. I can attest to this from my own experience. I teach my face-to-face classes seminar-style with small group and large group activities and discussion. I will never forget one student telling me, “Instead of all this group stuff, why don’t you just tell us what you want us to know.” (Unfortunately, that student is now a teacher who probably lectures to his students.)

Despite its title, the article is not insisting that all technology and all computers should be thrown out of the classroom. It is making the point that the way technology is used in the classroom needs to be reassessed and changed so that it is not just being used to replicate the traditional modes of delivery.

Many of the authors in this journal have advocated just such changes (most recently, Judith Sotir in “Two Steps Forward . . . Several Back” and Judith McDaniel in “What Students Want and How to Design for It: A Reflection on Online Teaching”). As McDaniel pointed out, we need to “design for a structure that challenges and rewards.”

I agree that this attention to design is important not only in the online environment McDaniel was referring to but also in the face-to-face classroom with or without technology. As Young says, with stiff competition from online courses, face-to-face courses need to engage students so that they see a reason for being in the classroom.

Adventures in Hybrid Teaching: The First Day Is the Hardest

By Carrie Heeter
Guest Author

Monday was the first day of the semester, and Monday night, 6:30 to 7:20, is the live component of hybrid TC841, my graduate design research class. Hybrid means a third of class time happens in person, and two-thirds online at the students’ convenience.

This is the first year my department (Department of Telecommunication, Information Studies, and Media at Michigan State University) actually scheduled a class meeting time (yay!), meaning I did not have to begin by finding a time when every enrolled student was available to come to class. In prior years, after I found a day and time every student could attend, we would squeeze into the GEL (Games for Entertainment and Learning) Lab conference room.

In spring 2009, we had an actual scheduled time AND place. Room 161 Comm Arts. The room has a projector. What luxury.

My department very generously lets me telecommute, but they do not consider it their responsibility to support my lack of physical presence in Michigan. So, as of Monday morning, I did not yet know how I was going to get to class from my office in San Francisco.

I saw that two students enrolled in TC841 had been my students in a class I taught in the fall. Both had been gone over break, so I waited to contact them until they returned. At 12:32 Sunday night, I emailed them to ask, “Do either of you have a laptop you would be willing to bring to class tomorrow night, to Skype me in?”

There was no answer when I got to the office at 8am California time. By 9am, I had received a “sure!” email from YoungKim. I proposed we start trying to connect at 6, before the 6:30 class.

At 6:08pm Michigan time, I received an incoming Skype call. (Yay!) With some fumbling, my audio worked. He figured out how to connect to the classroom projector, and logged in to and opened Breeze, the TC841 blog, and ANGEL in separate browser windows. I got video of the class via YoungKim’s Skype.

My tablet PC was running Breeze for video (not audio). My desktop PC was running Skype for audio but no video (using a handheld mic) and a second Breeze connection as well as the blog and ANGEL.

Five minutes before class started, Breeze failed on the tablet PC, meaning they lost my video. Reconnecting never worked. My only connected camera was the laptop. But the Skype connection was to my desktop. Video of me was not going to happen.

I had forgotten that the last time I used Skype was showing it to Sheldon on his new laptop, and that while playing around I had turned my image upside down. So most of the class only saw me as a small upside down still image in the Skype window. I’m afraid to go check what I might have been wearing.

Students were still arriving, so some never saw me on video at all. I joked that I hadn’t had time to brush my hair but would be ready for video next week. It is unusual to be able to see the class when they can’t see me. Much better than not seeing them, that’s for sure. When one student walked into the classroom 10 minutes late, he entered a room with 13 students sitting at tables, looking at a projection screen. A disembodied voice (me) said, “Welcome to TC841! The students here are pretending there is a professor.”

Half an hour into class, one of my cats pried the office door open (which I had closed to keep them out). After meowing disruptively for a bit, she jumped onto my keyboard, switching the Breeze window to a mode I’ve never seen before, one where I could not control Breeze or change to any other windows on my computer. (Why would there be a “switch to larger than full screen and freeze all controls” special keystroke command? Just to give cats disruptive power, I think.) At that same moment a student who had logged in to Breeze (as I had proposed they do) took over Breeze and was playing around, resizing his video window, eliminating the class’ and my view of the PowerPoint.

After fumbling for a minute, I quit Breeze (command Q), went to the blog, and opened the PDF handout I had posted of the PowerPoint so I could know what else to talk about. Class moved into a lively discussion about “sampling” methods used in research about media design, and ended on time.

A good time was had by all.

A Model for Integrating New Technology into Teaching

By Anita Pincas
Guest Author

I have been an internet watcher ever since I first got involved with online communications in the late 1980s, when it was called computer conferencing. And through having to constantly update my Online Education & Training course since 1992, I’ve had the opportunity to see how educational approaches to the use of the internet and, after it, the world wide web have evolved. Although history doesn’t give us the full answers to anything, it suggests frameworks for looking at events, so I’d like to propose a couple of models for understanding the latest developments in technology and how they relate to learning and teaching.

First, there seem to be three broad areas in which to observe the new technology. This is a highly compressed sketch of some key points:

1. Computing as Such

Here we have an on-going series of improvements which have made it ever easier for the user to do things without technical knowledge. There is a long line of changes from the early days before the mouse, when we had to remember commands (Control + X for delete, Control + B for bold, etc.), to the clicks we can use now, and the automation of many functions such as bullet points, paragraphing, and so on. The most recent and most powerful of these developments is, of course, cloud computing, which roughly means computer users being able to do what they need on the internet without understanding what lies behind it (in the clouds). Publishing in a blog, indeed on the web in general, is one of the most talked about examples of this at the moment. The other is the ability to handle video materials. Both are having an enormous impact on the world in general in terms of information flow, as well as, more slowly, on educational issues. Artificial intelligence, robotics, and “smart” applications are on the way too.

2. Access to and Management of Knowledge

This has been vastly enlarged through a simple increase in quantity, itself made possible by the computing advances that allow users to generate content, by relatively easy searches, and by open access publishing that cuts costs. Library systems are steadily renewing themselves, and information that was previously unobtainable in practice has become commonplace on the web (e.g., commercial and governmental matters, the tacit knowledge of everyday life, etc.). As the semantic web comes into being, we can see further advances in our ability to connect items and areas of knowledge.

3. Communications and Social Networking

We can now use the internet – whether on a desktop, laptop, or small mobile device – to communicate one to one, one to many, or many to many, by voice, text, and multimedia. And this can be either synchronous or asynchronous, across the globe. The result has been an explosion of opportunities to network individually, socially, and commercially. Even in education, we can already see that the VLE (virtual learning environment) is giving way to the PLE (personal learning environment), where learners network with others and construct and share their own knowledge spaces.

For teachers there is pressure not to be seen as out of date, but with too little time or help, they need a simple, structured way of approaching the new technological opportunities on their own. The bridge between the three areas of development should be a practical model of teaching and learning. I use one which the teachers who participate in my courses regularly respond to and validate. It sees learning and teaching in terms of three processes:

  1. acquiring knowledge or skills or attitudes,
  2. activating these, and
  3. obtaining feedback on the acquisition and activation.

I start off by viewing any learning/teaching event as a basic chronological sequence of 3Ps.

But this basic template is open to infinite variation, through horizontal and vertical changes. The horizontal variations are: the order in which the three elements occur; the repetition of any one of them in any order; and the embedding of any sequence within any other. The vertical changes are in how each of the three elements is realised. So the model can generate many different styles of teaching and ways of learning, e.g., problem-based, discovery-based, and so on.

Finally, this is where the bridge to technology comes in. If a teacher starts from the perceived needs in the teaching and learning of the subject, and then systematically uses the 3Ps to ask:

  • What technology might help me make the content available to the learners? [P1]
  • What technology might help me activate their understanding/use of the new content? [P2]
  • What technology might help me evaluate and give the learners feedback on their understanding or use? [P3]

then we have needs driving the use of the technology, and not the other way around.
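To make the template concrete, here is a minimal sketch – my own illustration, not part of the model’s published materials – representing a lesson as a sequence of P1/P2/P3 steps (the labels from the three questions above) and showing the horizontal variations of reordering, repetition, and embedding, for instance a problem-based ordering like the one in the table below:

```python
# A minimal sketch (my illustration, assuming the P1/P2/P3 labels above):
# a lesson is a sequence of steps, possibly nested to model "embedding".
from typing import List, Union

Step = Union[str, List["Step"]]  # "P1" | "P2" | "P3", or an embedded sequence

# The basic chronological template:
basic: List[Step] = ["P1", "P2", "P3"]

# Horizontal variations: reorder, repeat, embed.
problem_based: List[Step] = [
    "P2",          # start by activating: pose a problem
    ["P1", "P2"],  # embedded sequence: present material, then re-activate
    "P3",          # feedback on the whole cycle
]

def flatten(seq: List[Step]) -> List[str]:
    """Linearise a (possibly nested) 3P sequence into the order experienced."""
    out: List[str] = []
    for step in seq:
        out.extend(flatten(step) if isinstance(step, list) else [step])
    return out

print(flatten(problem_based))  # ['P2', 'P1', 'P2', 'P3']
```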

Here is a simple example of one way of organising problem-based learning:

[Table: one way of organising problem-based learning]

I have developed the model with its many variations in some detail for my courses. Things get quite complex when you try to cover lots of different teaching and learning needs under the three slots. And linking what the learners do, or want to do, or fail to do, etc., with what the teacher does is particularly important. Nevertheless, I find that my three areas of new development plus the 3P scaffolding make things rational rather than a let’s-just-try-this approach. Perhaps equally important, the model serves as a template for examining reports of teaching methods, and is therefore a very useful tool for evaluation. I have never yet found a teaching/learning event that could not be understood and analysed quickly this way.

Three Video Captioning Tools

By Claude Almansi
Staff Writer

First of all, thanks to:

  • Jim Shimabukuro for having encouraged me to further examine captioning tools after my previous “Making Web Multimedia Accessible Needn’t Be Boring” post – this has been a great learning experience for me, Jim.
  • Michael Smolens, founder and CEO of DotSUB.com, and Max Rozenoer, administrator of Overstream.net, for their permission to use screenshots of the Overstream and DotSUB captioning windows, and for their answers to my questions.
  • Roberto Ellero and Alessio Cartocci of the Webmultimediale.org project for their long patience in explaining multimedia accessibility issues and solutions to me.
  • Gabriele Ghirlanda of UNITAS.ch for having tried the tools with a screen reader.

However, these persons are in no way responsible for possible mistakes in what follows.

Common Features

Video captioning tools are similar in many respects. Here is a screenshot of the captioning window at DotSUB:

[Screenshot: the DotSUB captioning window]

and at Overstream:

[Screenshot: the Overstream captioning window]

In both cases, there is a video player, a list of captions, and a box for writing new captions, with boxes for the start and end times of each caption. The MAGpie desktop captioning tool (downloadable from http://ncam.wgbh.org/webaccess/magpie) is similar: see the first screenshot in David Klein and K. “Fritz” Thompson, Captioning with MAGpie, 2007 [1].

Moreover, in all three cases, captions can either be written directly in the tool or created by importing a file in which captions are separated by blank lines – and they can be exported as a file too.
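As an illustration of that import/export format, here is a minimal Python sketch that writes captions in the SubRip layout these tools exchange – the cue texts and file name are invented placeholders:

```python
# A minimal sketch of writing captions in the SubRip (.srt) layout:
# numbered cues separated by blank lines. Captions are invented examples.

def srt_timestamp(seconds: float) -> str:
    """Format seconds as the SubRip HH:MM:SS,mmm timestamp."""
    ms = round(seconds * 1000)
    h, rest = divmod(ms, 3_600_000)
    m, rest = divmod(rest, 60_000)
    s, ms = divmod(rest, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

captions = [
    (0.0, 2.5, "Hello and welcome."),
    (2.5, 6.0, "Today we look at video captioning."),
]

with open("captions.srt", "w", encoding="utf-8") as f:
    for i, (start, end, text) in enumerate(captions, 1):
        f.write(f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n\n")
```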

What follows is just a list of some differences that could influence your choice of a captioning tool.

Overstream and DotSUB vs MAGpie

  • DotSUB and Overstream are online tools (only a browser is needed to use them, whatever the OS of the computer), whereas MAGpie is a desktop application that works with Windows and Mac OS, but not with Linux.
  • DotSUB and Overstream use SubRip (SRT) captioning [2], while MAGpie uses Synchronized Multimedia Integration Language (SMIL) captioning [3].
  • Overstream and DotSUB host the captioned result online; MAGpie does not.
  • The preparation for captioning is less intuitive with MAGpie than with Overstream or DotSUB, but on the other hand MAGpie offers more options and produces simpler files.
  • MAGpie can be used by disabled people, in particular by blind and partially sighted people using a screen reader [4], whereas DotSUB and Overstream don’t work with a screen reader.

Overstream vs DotSUB

  • The original video can be hosted at DotSUB; with Overstream, it must be hosted elsewhere.
  • DotSUB can also be used with a video hosted elsewhere, but you must link to the streaming Flash .flv file, whereas with Overstream you can link to the page of the video – though Overstream does not support all video hosting platforms.
  • If the captions are first written elsewhere and then imported as an .srt file, Overstream is more tolerant of coding mistakes than DotSUB – but this cuts both ways: some people might prefer to have the file rejected rather than end up with gaps in the captions.
  • Overstream allows more precise time-coding than DotSUB, and it has a “zooming” feature (very useful for longish videos) that DotSUB lacks.
  • DotSUB can be used as a collaborative tool, whereas Overstream cannot yet, though its administrators plan to make collaboration possible in the future.
  • With DotSUB, you can have switchable captions in different languages on one player. With Overstream, there can only be one series of captions in a given player.

How to Choose a Tool . . .

So how to choose a tool? As with knitting, first make a sample: try a short video with different tools, since the short descriptive lists above cannot replace experience. Then choose the most appropriate tool according to your aims in captioning a given video and to your possible collaborators’ availability, IT resources, and abilities.

. . . Or Combine Tools

The great thing with these tools is that you can combine them:

As mentioned in my earlier post, “Making Web Multimedia Accessible Needn’t Be Boring,” I had started captioning “Missing in Pakistan” a year ago on DotSUB but had gone on to use MAGpie for SMIL captioning (see the result at [5]). But when Jim Shimabukuro suggested this presentation of captioning tools, I found my aborted attempt at DotSUB. As you can also do the captioning there by importing a .srt file, I tried to transform my “.txt for SMIL” file of the English captions into a .srt file. I bungled part of the code, so DotSUB refused the file. Overstream accepted it, and I corrected the mistakes using both. Results at [6] (DotSUB) and [7] (Overstream). And now that I have a decent .srt file for the English transcript, I could also use it to caption the video at YouTube or Google Video: see YouTube’s “Video Captions: Help with Captions” [8]. (Actually, there is a freeware program called Subtitle Workshop [9] that could apparently do this conversion cleanly, but it is Windows-only and I have a Mac.)
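A few lines of code could have caught my bungled timecodes before DotSUB refused the file. Here is a rough sketch of such a check – the format rules are my own reading of well-formed SubRip, not DotSUB’s actual validator, and the file name is invented:

```python
# A rough sketch of a sanity check for .srt files: each cue should be a
# number, a timecode line, and at least one line of caption text.
import re

CUE_TIME = re.compile(r"^\d{2}:\d{2}:\d{2},\d{3} --> \d{2}:\d{2}:\d{2},\d{3}$")

def check_srt(path: str) -> None:
    """Report cues that don't follow the number / timecode / text layout."""
    with open(path, encoding="utf-8") as f:
        blocks = f.read().strip().split("\n\n")
    for n, block in enumerate(blocks, 1):
        lines = block.splitlines()
        if len(lines) < 3 or not lines[0].strip().isdigit():
            print(f"cue {n}: missing cue number or caption text")
        elif not CUE_TIME.match(lines[1].strip()):
            print(f"cue {n}: malformed timecode line: {lines[1]!r}")

check_srt("missing_in_pakistan_en.srt")  # invented file name
```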

This combining of tools could be useful even for less blundering people. Say one person in a project has better listening comprehension of the original language than the others and prefers Overstream: s/he could make the first transcript there and export the .srt file, which could then be imported into DotSUB to produce a transcript that all the others could use to make switchable captions in other languages. If the person with the best listening comprehension were blind, s/he might use MAGpie to do the transcript, and s/he or someone else could convert it to a .srt file that could then be uploaded to either DotSUB or Overstream. And so on.

Watch Out for New Developments

I have only tried to give an idea, as correctly as I could, of three captioning tools I happen to be acquainted with. The complexity of making videos accessible, and in particular of the numerous captioning solutions, is illustrated in the Accessibility/Video Accessibility section [10] of the Mozilla wiki – and my understanding of tech issues remains very limited.

Moreover, these tools are continuously progressing. Some have disappeared – Mojiti, for instance – and others will probably appear. So watch out for new developments.

For instance, maybe Google will make available the speech-to-text tool that underlies its search engine for YouTube videos of the candidates in the US presidential election (see “‘In their own words’: political videos meet Google speech-to-text technology” [11]): transcribing remains the heaviest part of captioning, and an efficient, preferably online, speech-to-text tool would be an enormous help.

And hopefully, there will soon be an online, browser-based and accessible SMIL-generating tool. SubRip is great, but with SMIL, captions stay put under the video instead of invading it, so you can make longer captions, which simplifies the transcription work. Moreover, SMIL is more than just a captioning solution: the SMIL “hub” file can also coordinate a second video for sign language translation, and audio descriptions. Finally, SMIL is a W3C standard, which means that when the standard is upgraded, it still “degrades gracefully” and the full information is available to all developers using it: see “Synchronized Multimedia Integration Language (SMIL 3.0) – W3C Recommendation 01 December 2008” [12].
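To give an idea of what such a “hub” file looks like, here is a minimal sketch of a SMIL presentation that plays a video in parallel with a caption text stream – the file names are invented, and the exact elements and attributes vary with the target player; a production file would also carry language and accessibility test attributes:

```python
# A minimal sketch of generating a SMIL "hub" file: a <par> block plays the
# video and a caption text stream in parallel, each in its own region.
# File names are invented placeholders.
smil = """<smil>
  <head>
    <layout>
      <root-layout width="320" height="290"/>
      <region id="videoregion" top="0" width="320" height="240"/>
      <region id="textregion" top="240" width="320" height="50"/>
    </layout>
  </head>
  <body>
    <par>
      <video src="missing_in_pakistan.rm" region="videoregion"/>
      <textstream src="captions_en.rt" region="textregion"/>
    </par>
  </body>
</smil>
"""

with open("presentation.smil", "w", encoding="utf-8") as f:
    f.write(smil)
```

The point of the “hub” design is visible even in this toy example: the captions live in their own file and their own region under the video, so they can be lengthened, swapped, or supplemented (say, with a sign language video in a third region) without touching the video itself.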