‘YouTube Copyright School’ – Remixed and Mixed Up

By Claude Almansi
Editor, Accessibility Issues
ETCJ Associate Administrator

In his lecture, “The Architecture of Access to Scientific Knowledge: Just How Badly We Have Messed This Up” (CERN, Geneva, April 18, 2011), Lawrence Lessig discussed YouTube’s new copyright school. (See 35:42 – 39:46 in the subtitled and transcribed video of his lecture.) The YouTube Copyright School video he showed and commented on was uploaded by YouTube on March 24, 2011, then integrated into what looks like an interactive tutorial, also entitled YouTube Copyright School, with a quiz on the side.

More information about this “school” was given on the YouTube Official Blog in “YouTube Copyright Education (Remixed)” (April 14, 2011):

If we receive a copyright notification for one of your videos, you’ll now be required to attend YouTube Copyright School, which involves watching a copyright tutorial and passing a quiz to show that you’ve paid attention and understood the content before uploading more content to YouTube.

YouTube has always had a policy to suspend users who have received three uncontested copyright notifications. This policy serves as a strong deterrent to copyright offenders. However, we’ve found that in some cases, a one-size-fits-all suspension rule doesn’t always lead to the right result. Consider, for example, a long-time YouTube user who received two copyright notifications four years ago but who’s uploaded thousands of legitimate videos since then without a further copyright notification. Until now, the four-year-old notifications would have stayed with the user forever despite a solid track record of good behavior, creating the risk that one new notification – possibly even a fraudulent notification – would result in the suspension of the account. We don’t think that’s reasonable. So, today we’ll begin removing copyright strikes from users’ accounts in certain limited circumstances, contingent upon the successful completion of YouTube Copyright School, as well as a solid demonstrated record of good behavior over time. Expiration of strikes is not guaranteed, and as always, YouTube may terminate an account at any time for violating our Terms of Service.

Continue reading

Computer Science – A Field of Dreams

By Robert Plants

[Editor’s Note: This article was written in response to Bonnie Bracey Sutton’s call for submissions from selected writers. Bonnie is ETCJ’s editor of policy issues, and the focus of her call was Erik W. Robelen’s “Schools Fall Behind in Offering Computer Science” (Education Week, 7.14.10); WebCite version. -js]

You can’t build it and expect people to come. We cite statistics on what is and what isn’t but fail to dig into the symptoms. We point out initiatives that may influence supply and demand but don’t go on to ask what it is about K-12 education that produces this dearth of interest in computer science. In most states, the emphasis lies in producing enough teachers to staff the education system we already have. We have an educational system focused on a standardized curriculum, rote memorization, nationalized testing, and curriculum standards. Dig a little deeper and you will find that the structure of schooling is still about the little red brick building we have always known: grades, classrooms, curriculum, teaching strategies – one size fits all. In many ways, our system of schooling has not changed in 100 years. Continue reading

We Need an Eco-Smart Model for Online Learning

By Jim Shimabukuro
Editor

Two articles that appeared in my Google alerts today (7.17.10) grabbed my attention. Both were out of California. One was a San Francisco Chronicle editorial blasting the University of California’s vision of an internet-delivered bachelor’s degree program.

The other was an op-ed by James Fay and Jane Sjogren, sharing their vision of a hypothetical Golden State Online, or GSO, a “stand-alone online community college campus.”

On the surface, the visions seem to be quite different, and the viewpoints are obviously different. However, below the surface, both visions share a common flaw — they’re based on models of online learning that are, in my opinion, simply not sustainable.

This got me thinking about an alternative model that would be infinitely sustainable. After a few starts and stops, I came up with an eco-smart model for online learning, or E-SMOL. Continue reading

JRTE Spring 2010 Issue – A Sacrilegious Review

By Jim Shimabukuro
Editor

Three of the four articles that make up the spring 2010 issue (v42n3) of Journal of Research on Technology in Education caught my attention more for their assumptions than their stated purposes. These assumptions highlight, for me, some of the weaknesses inherent in efforts to introduce technology into schools and colleges.

In “Technology’s Achilles Heel: Achieving High-Quality Implementation,” the “heel” for Gene E. Hall is school and college administrators. According to Hall, “Education technology scholars and practitioners are engaged with some of the most promising and interesting innovations.” However, these innovations don’t find their way into classrooms because of the failure of administrators to implement them. Thus, our enlightened ed tech guiding lights are “confronted first hand with the challenges associated with disappointing implementation efforts and failures to go to scale.” Continue reading

UNESCO, World Anti-Piracy Observatory and YouTube

Accessibility 4 All by Claude Almansi

Continue reading

YouTube, Geoblocks and Proxies

Accessibility 4 All by Claude Almansi

Geoblocking as a censorship measure Continue reading

Learning Styles and the Online Student: Moving Beyond Reading

By Lynn Zimmerman
Editor, Teacher Education

In his January 30, 2010, article, Reading Ability As a ‘New’ Challenge for Online Students, Jim Shimabukuro focused on the connection between reading skills and the online environment. As a teacher educator, I share this concern about online education. In today’s online environment, those who communicate and process well by reading and writing are at a definite advantage, while students who learn and process in other ways may not adapt as easily. As Jim pointed out, reading is more than being able to decode and comprehend words. Therefore, if we want to meet the learning needs of all students, we have to take different ways of learning and processing into account and use a variety of strategies and techniques to promote learning (see Howard Gardner’s website on Multiple Intelligences at http://www.howardgardner.com/MI/mi.html or the Illinois Online Network’s page, Learning Styles and the Online Environment, at http://www.ion.uillinois.edu/resources/tutorials/id/learningStyles.asp).

Part of the answer is having technology that will handle audio and video, which can be a challenge. For example, this semester I am teaching a class online that I usually teach as a hybrid. There is a video clip that I usually show my students and after determining that I would not be infringing copyright, I enlisted the aid of our AV people to put the clip into a format that my online students could view. It works great if you are using one of the computers in their computer lab. However, for some reason that no one can pinpoint, the link will not work properly everywhere. On the computer in my office on campus, I get audio only. At home, I get nothing. My students are supposed to watch this clip next week and I have no idea how many of them will actually be able to view it, despite the best efforts of our AV people to make it available in a variety of formats.

On a more positive note, I did have success using Adobe Presenter to record audio onto the PowerPoint presentations that the students will view. In this way, those who prefer to listen can do that and those who prefer to read can read the notes that are part of the presentation. I also located some YouTube videos that I assigned instead of readings on a couple of topics.

However, I have not yet come up with a plan for the students’ being able to produce audio or video clips instead of writing. There are options, of course, but again access to technology can be an issue. I considered asking students to upload an audio or video file as one assignment, but rejected that idea because of the possible problems with technology. I want the students to spend time on the content, not on learning new technology. The best scenario, as far as I’m concerned, would be to have one or two synchronous online discussions using Skype, or similar technology so that students could talk to one another. Maybe next, I can develop something along that line.

While I agree with Jim that “the reading tasks online are therefore a significant departure from the traditional, and they require a whole new set of skills,” I think we need to look at the issue from another direction, too. To be most effective as a learning tool, online technology has to evolve to the point that students can readily use the skills they already have in addition to (perhaps, while learning) these new skills. Otherwise, rather than being an educational equalizer, the online environment will be just another way that we sift and sort students. We will lose those who can’t adapt easily, and we will be educating only those who can.

Accessibility and Literacy: Two Sides of the Same Coin

Accessibility 4 All by Claude Almansi

Treaty for Improved Access for Blind, Visually Impaired and other Reading Disabled Persons

On July 13, 2009, WIPO (World Intellectual Property Organization) organized a discussion entitled Meeting the Needs of the Visually Impaired Persons: What Challenges for IP? One of its focuses was the draft Treaty for Improved Access for Blind, Visually Impaired and other Reading Disabled Persons, written by WBU (World Blind Union), that had been proposed by Brazil, Ecuador and Paraguay at the 18th session of WIPO’s Standing Committee on Copyright and Related Rights in May [1].

A pile of books in chains about to be cut with pliers. Text: Help us cut the chains. Please support a WIPO treaty for print disabled

From the DAISY Consortium August 2009 Newsletter

Are illiterate people “reading disabled”?

At the end of the July 13 discussion, the Ambassador of Yemen to the UN in Geneva remarked that people who could not read because they had had no opportunities to go to school should be included among “Reading Disabled Persons” and thus benefit from the same copyright restrictions in WBU‘s draft treaty, in particular, digital texts that can be read with Text-to-Speech (TTS) software.

The Ambassador of Yemen hit a crucial point.

TTS was first conceived as an important accessibility tool to grant blind people access to texts in digital form, cheaper to produce and distribute than heavy braille versions. Moreover, people who become blind after a certain age may have difficulties learning braille. Now its usefulness is being recognized for others who cannot read print because of severe dyslexia or motor disabilities.

Indeed, why not for people who cannot read print because they could not go to school?

What does “literacy” mean?

No one compos mentis who has seen/heard blind people use TTS to access texts and do things with these texts would question the fact that they are reading. The same goes if TTS is used by someone paralyzed from the neck down. What about a dyslexic person who knows the phonetic value of the signs of the alphabet but has a neurological problem dealing with their combination in words? And what about someone who does not know the phonetic value of the signs of the alphabet?

Writing literacy

Sure, blind and dyslexic people can also write notes about what they read. People paralyzed from the neck down and people who don’t know how the alphabet works can’t, unless they can use Speech-to-Text (STT) technology.

Traditional desktop STT technology is too expensive for people in poor countries with a high “illiteracy” rate – one of the most widely used solutions, Dragon NaturallySpeaking, starts at $99. Besides, it has to be trained to recognize the speaker’s voice, which might not be an obvious thing to do for someone illiterate.

Free Speech-to-Text for all, soon?

In Unhide That Hidden Text, Please, back in January 2009, I wrote about Google’s search engine for the US presidential campaign videos, complaining that the text file powering it – produced by Google’s speech-to-text technology – was kept hidden.

However, on November 19, 2009, Google announced a new feature, Automatic captions in YouTube:

To help address this challenge, we’ve combined Google’s automatic speech recognition (ASR) technology with the YouTube caption system to offer automatic captions, or auto-caps for short. Auto-caps use the same voice recognition algorithms in Google Voice to automatically generate captions for video.

(Automatic Captions in YouTube Demo)

So far, in the initial launch phase, only some institutions are able to test this automatic captioning feature:

UC Berkeley, Stanford, MIT, Yale, UCLA, Duke, UCTV, Columbia, PBS, National Geographic, Demand Media, UNSW and most Google & YouTube channels

Accuracy?

As the video above says, the automatic captions are sometimes good, sometimes not so good – but better than nothing if you are deaf or don’t know the language. Therefore, when you switch on automatic captions in a video of one of the channels participating in the project, you get a warning:

warning that the captions are produced by automatic speech recognition

Short words are the rub

English – the language for which Google presently offers automatic captioning – has a high proportion of one-syllable words, and this proportion is particularly high when the speaker is attempting to use simple English: OK for natives, but at times baffling for foreigners.

When I started studying English literature at university, we 1st-year students had to follow a course on John Donne’s poems. The professor had magnanimously announced that if we didn’t understand something, we could interrupt him and ask. But doing so in a big lecture hall with hundreds of listeners was rather intimidating. Still, once, when I noticed that the other students around me had stopped taking notes and looked as nonplussed as I was, I summoned my courage and blurted out: “Excuse me, but what do you mean exactly by ‘metaphysical pan’?” When the laughter subsided, the professor said he meant “pun,” not “pan,” and explained what a pun was.

Google’s STT apparently has the same problem with short words. Take the Don’t get sucked in by the rip… video in the UNSW YouTube channel:

If you switch on the automatic captions [2], there are over 10 different transcriptions – all wrong – for the 30+ occurrences of the word “rip.” The word is in the title (“Don’t get sucked in by the rip…”), it is explained in the video description (“Rip currents are the greatest hazards on our beaches.”), but STT software just attempts to recognize the audio. It can’t look around for other clues when the audio is ambiguous.

That’s what beta versions are for

Google deserves compliments for having chosen to semi-publicly beta test the software in spite of – but warning about – its glitches. Feedback both from the partners hosting the automatically captionable videos and from users should help them fine-tune the software.

A particularly precious contribution towards this fine-tuning comes from partners who also provide human-made captions, as in the Official MIT OpenCourseWare 1800 Event Video in the MIT YouTube channel:

Once this short-word issue is solved for English, it should be easier to apply the knowledge gained to other languages, where short words are less frequent.

Moreover…

…as the above-embedded Automatic Captions in YouTube Demo video explains, now you:

can also download your time-coded caption file to modify or use somewhere else

I have done so with the Lessig at Educause: Creative Commons video, for which I had used another feature of the Google STT software: feeding it a plain transcript and letting it add the time codes to create the captions. The caption .txt file I then downloaded says:

0:00:06.009,0:00:07.359
and think about what else we could
be doing.

0:00:07.359,0:00:11.500
So, the second thing we could be doing is
thinking about how to change norms, our norms,

0:00:11.500,0:00:15.670
our practices.
And that, of course, was the objective of

0:00:15.670,0:00:21.090
a project a bunch of us launched about 7 years
ago,the Creative Commons project. Creative

etc.

Back to the literacy issue

People who are “reading disabled” because they couldn’t go to school could already access texts with TTS technology, as the UN Ambassador of Yemen pointed out at the above-mentioned WIPO discussion on Meeting the Needs of the Visually Impaired Persons: What Challenges for IP? last July.

And soon, when Google opens this automated captioning to everyone, they will be able to say what they want to write in a YouTube video – which can be directly made with any web cam, or even cell phone cam – auto-caption it, then retrieve the caption text file.

True, to get a normal text, the time codes should be deleted and the line breaks removed. But learning to do that should be far easier than learning to fully master the use of the alphabet.
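For readers curious about how mechanical that cleanup really is, here is a minimal Python sketch. It assumes a caption file in the format shown above (time-code lines like “0:00:06.009,0:00:07.359” separating short text blocks); the function name captions_to_text is my own, not part of any Google tool:

```python
import re

# Matches a time-code line such as "0:00:06.009,0:00:07.359"
TIME_CODE = re.compile(r"^\d+:\d{2}:\d{2}\.\d+,\d+:\d{2}:\d{2}\.\d+$")

def captions_to_text(caption_text: str) -> str:
    """Drop time-code and blank lines, then rejoin the caption text."""
    kept = []
    for line in caption_text.splitlines():
        line = line.strip()
        if not line or TIME_CODE.match(line):
            continue  # skip blanks and time codes
        kept.append(line)
    return " ".join(kept)

sample = """0:00:06.009,0:00:07.359
and think about what else we could
be doing.

0:00:07.359,0:00:11.500
So, the second thing we could be doing is
thinking about how to change norms, our norms,"""

print(captions_to_text(sample))
```

Run on the excerpt quoted above, this yields a single running line of prose with the time codes and line breaks gone – exactly the manual cleanup described, done in a dozen lines.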

Recapitulating:

  • Text-to-Speech, a tool first conceived to grant blind people access to written content, can also be used by other reading-disabled people, including people who can’t use the alphabet convention because they were unable to go to school and, thus, labeled “illiterate.”
  • Speech-to-Text, a tool first conceived to grant deaf people access to audio content, is about to become far more widely available and far easier to use than it was until recently, thus potentially giving people who can’t use the alphabet convention because they were unable to go to school, and are therefore labeled “illiterate,” the ability to write.

This means that we should reflect on the meanings of the words “literate” and “illiterate.”

Now that technologies first meant to enable people with medically recognized disabilities to use and produce texts can also do the same for those who are “reading disabled” by lack of education, industries and nations presently opposed to the Treaty for Improved Access for Blind, Visually Impaired and other Reading Disabled Persons should start thinking beyond “strict copyright” and consider the new markets that this treaty would open up.

Twitter Could Drive You Cuckoo

By Lynn Zimmerman
Editor, Teacher Education

In “Study: How Twitter Is Hurting Students” on the Higher Ed Morning web site, Carin Ford gives an overview of several kinds of social networking sites and how they affect our brains.

The article includes a link to another article, “Facebook ‘Enhances Intelligence’ but Twitter ‘Diminishes It’, Claims Psychologist,” by Lucy Cockcroft, which examines the research conducted by Scottish psychologist Dr. Tracy Alloway.

Alloway contends that using Facebook helps improve working memory, while Twitter, text messaging, and watching YouTube actually weaken it.

The Ford article invites readers to comment and asks: Do you interact with students on Facebook? The reader responses are as interesting as the two articles. One response questions the validity of Alloway’s report. Others discuss the pros and cons of using Twitter and Facebook in the educational setting.

These links complement the various discussions that have appeared on ETC recently regarding the use of social networking sites in the educational setting, including Interview: Steve Cooper of TechUofA. TechUofA uses Facebook as a learning management system.

Interview: Steve Cooper of TechUofA

By Jim Shimabukuro
Editor

Steve Cooper is founder of Tech University of America (TechUofA) and a former Army education trainer. The following interview was conducted via email from September 8 to 12.

JS: How did you come up with the idea of offering college courses for a flat monthly fee (e.g., $99 for all the classes a student wants to take) and how long have you been doing this?

SC: All of the courses that we are developing will be free and open to everyone. However, only when students want to begin a transcript and earn an academic certificate or degree is there a $99/month fee, which allows them to take ten courses per year. We only use free etextbooks/resources, so there aren’t any other major fees associated with earning a degree.

While building several online university programs I watched as they artificially raised tuition to the student loan cap. I was one of the few for-profit CEOs who didn’t have an MBA or wasn’t a banker so I looked at things much differently. In 2007 when I took over as CEO of a for-profit university, I decided to lower tuition in order to make higher education accessible to more people. We immediately began to enroll students from Africa and several other countries. I found that if you have the same quality of faculty as other well established schools and run a transparent program then people will attend your school if you lower your tuition. At the same time I started to see the popularity of social networking sites explode while the economy started to weaken. I then realized that three things were hot: social networking sites, online learning, and lower or zero-tuition.


Early in 2008, I used to drive over to University of Phoenix Online and sit in the parking lot in search of inspiration. I would sit there for hours watching the sunset, hoping to soak up some of their creative energies, while asking myself, “What would Dr. John Sperling, the founder of University of Phoenix, do today if he were to do it all over again?” I concluded that the first thing he would do is take education to the masses as he did years ago by bringing education from the ivory tower to the community in office buildings then eventually via distance learning. I think one of his greatest keys to success was leveraging existing resources rather than trying to force people to change. For example, he didn’t try to make the corporate offices where they held classes look “academic” nor did he develop some goofy learning management system to deliver their distance learning courses. Rather, they used the existing business offices and Outlook Express. People were familiar with regular office buildings (not intimidating like a college campus) and it was convenient. Also, most adults have used Outlook or Outlook Express so they lessened the learning curve by using systems students would be familiar with — and if they weren’t, chances were that someone they knew could help them — as opposed to building some esoteric and irrelevant elearning system that wasn’t intuitive to adult learners.

So, I eventually thought that if Dr. Sperling were to start over he would bring higher education to the masses. However, today the masses are in social networking sites. At this point I still had not seen a social networking site but realized that if they were generating that much buzz there had to be a reason. I logged into one and instantly said to myself that this is the ideal online classroom! A week later I directed one of my staff members to teach a course in a social networking site, PerfSpot.com, since I knew their leadership and found them to be dedicated to a global reach — and it was absolutely amazing! Social networking sites allow the faculty and students to control their online learning environment (end-user innovation) and can do all the things that conventional learning management systems can’t or won’t allow such as video, audio, showing photos of the users, widgets, etc.

Moreover, using social networking sites to deliver college courses greatly reduces our cost of delivering education since we pay neither learning management fees, which can be as high as $120 per student per course, nor technical staff for support. In essence, it’s a win-win-win because the social networking sites benefit from having more users (our students), we as a college gain by not having any learning management fees, and our faculty and students win because they get to control their learning environment.

JS: Are TechUofA courses accredited? If yes, by whom? If not, is lack of accreditation a problem?

SC: No. Tech University of America is not accredited. We must be operating for two years before we are eligible to apply for accreditation, and we intend to apply as soon as we are eligible in 2011. Since we are a start-up school we have a lot of R&D, yet, at the same time, we are a business so we have to actively seek ways to grow our student body. In order to better serve new schools as well as their prospective students, I believe that accrediting bodies should have a provision that allows for new schools to be conditionally accredited before they start offering courses and then heavily monitor them until they are accredited. In the meantime, we have to operate for two years prior to seeking accreditation, which does offer us time to improve our academic processes while fine tuning the operations of our university.

JS: Are TechUofA classes completely online? Or are students required to participate in F2F (face-to-face) activities at some point during a course? If not, is this lack of F2F contact a problem?

SC: All of our courses and programs are delivered 100% online, and we do not plan on any residency requirements. Recent studies have shown that online learners can attain the same, if not higher, learning outcomes than their F2F counterparts. Having said that, I do think that F2F interaction is obviously valuable. To this end we are exploring several ways that we can integrate various optional study programs that will bring some of our students who live around the world together for meaningful experiential and F2F learning.

______________________________

Steve Cooper: While I fully agree with Chris Anderson, author of Free: The Future of a Radical Price . . . that anything that becomes digital inevitably becomes free, I do think that we will see a hierarchy emerge within online learning: we will have free or very low priced schools, then more expensive programs, and finally exclusive online programs for the very wealthy . . . .

______________________________

JS: Is it possible for a student to complete a degree or certificate at TechUofA? Would this be a TechUofA degree or a degree offered by a college that relies on TechUofA courses?

SC: We will be offering certificate programs as well as associate, bachelor, and master degree programs in business management with several concentrations in fields such as criminal justice, sustainability, construction management, computer science, and sports management. However, we are beginning to partner with several colleges that are interested in using our courses — in these cases students who are enrolled in a partner school and complete our courses would receive credit/degrees from their school, not Tech University of America. In this case we will serve as a blackboard, if you will, with our courses hosted in Facebook, and partner schools will use them at their discretion.

JS: What is the instructor-to-student ratio for your classes? If the ratio is far greater than for F2F classes, how do TechUofA instructors manage the large number of students?

SC: Our student-to-faculty ratio is 20:1 for most courses and 25:1 for the rest, which is about average for most online schools and considerably lower than at large research universities, where ratios of 500:1 are not uncommon.

JS: If students enter a course at any time and exit at any time, I’d imagine that record keeping may be a major problem. Does the instructor monitor all of her/his students? Or is this managed by someone else?

SC: Non-degree seeking students, those who are just using the course materials, may come and go as they please. For our degree-seeking students we have definite start and end dates for each course, and each course is eight weeks in length. Since our courses have less than 25 students, our faculty are able to manage each course.

JS: Are TechUofA instructor salaries comparable to that of F2F institutions? Do you have full- and part-time instructors?

SC: We engage adjunct faculty members to teach our courses. They must have a graduate degree from an accredited school, with practical work experience in their field of study. We also require that our faculty have teaching experience at a regionally accredited school. This allows us to demonstrate that the quality of our faculty is comparable to that of accredited schools.

We use a variable pay model, with each faculty member earning $50-$75 per student. This incentivizes faculty to teach more students per course and is fair because the more they work, the more they earn. At the same time it helps us contain our costs since we are not paying faculty $2,000 when there are only four students in a course. The fact that we do not cap the amount faculty can earn means that they can do quite well. Also, our model encourages faculty to use their own videos in YouTube, social networking sites, etc., which can increase the likelihood that they will be able to secure a textbook contract because faculty who can demonstrate a substantial following these days are highly sought after by publishers. Finally, given that we charge $99 per month, you can see that 50-75% of our revenues go to faculty pay as they are the most critical part of our team.

JS: Does TechUofA rely on staff from countries where salaries and wages are much lower? If yes, is there a problem in quality?

SC: No. However, as we grow our international student body, we will explore hiring staff who reside in countries where we have a large student base so that our staff can relate well to our students, thus serving them better than we can here in Phoenix, Arizona. I must add that I personally am not convinced that outsourcing labor to other countries always saves a considerable amount of money, especially when you consider the inevitable travel, loss of business from language barriers, rising costs associated with outsourcing, etc.

JS: Is TechUofA international? In other words, do students come from many different nations? In U.S. TechUofA classes, are international students charged a higher fee?

SC: We are proud that we have had many inquiries from international students, and in our model everyone pays the same fees. However, we are working on raising money so that we can offer scholarships to people in developing countries so they do not have to pay anything to earn a degree from Tech University of America. Also, we are working on building a networking system that allows more fortunate students to sponsor (pay tuition for) students who cannot afford the $99 a month to earn a degree. We believe this will lead to several meaningful relationships between our students.

JS: Are services such as TechUofA growing in numbers and popularity? Do you foresee a time when the TechUofA way of providing classes will be the dominant means of earning a diploma, degree, or certificate? Will this be at the K-12 or college level? Or both?

SC: Absolutely. Click here for the best overview of this movement, which refers to us as EduPunks. Yes, I do see a day when the Tech University of America model will be the prevailing way of providing online courses – by which I mean using social networking sites as the learning management system rather than Blackboard, using free etextbooks rather than traditional textbooks, etc. – but I do not think all schools will offer all their courses for free and charge only $99 a month for degree-seeking students. While I fully agree with Chris Anderson, author of Free: The Future of a Radical Price, and his assertion that anything that becomes digital inevitably becomes free, I do think that we will see a hierarchy emerge within online learning: we will have free or very low-priced schools, then more expensive programs, and finally exclusive online programs for the very wealthy that are as expensive as, if not more so than, Harvard. At the same time, I predict that we will have only 50 state schools – one for each state – with football teams, fraternities, etc., and the rest of the students will attend private, for-profit schools, either on campus or online, especially given the rise of online high schools. I have been told there are more than one million online high school students in America.

JS: Is student cheating a problem in TechUofA classes? If not, how is it handled?

SC: Student cheating, plagiarism, program integrity and student authentication are all serious challenges for all schools. The Higher Education Opportunity Act (HEOA) of 2008 requires that a school that offers online courses have procedures in place to ensure that the students who enroll in a course are the same students who take the course and ultimately receive credit. While the HEOA doesn’t apply to us since we will not utilize Title IV funds (federal student loans), we are fully committed to ensuring program integrity. In addition to having a required assignment on personal accountability and plagiarism in our introductory course, we have engaged CSIdentity’s Voice Verified product to ensure that the student who enrolls in Tech University of America is the same student who is in a particular course, completes course assignments, and ultimately receives academic credit. This is done by randomly verifying the biometrics of their voice throughout their entire course of studies, and the Voice Verified solution is more accurate than a fingerprint.

JS: Is there anything else you’d like to add?

SC: Sure, I think it is important to point out that Facebook, which is central to our model at Tech University of America, was created by students, for students, and it is fitting that Facebook is finally becoming the leading learning management system. I predict that within 2-5 years, Facebook will buy Blackboard and move all of its users into Facebook.

Meet the Endless Summer – A Review of ED-MEDIA 2009

Stefanie_Panke80By Stefanie Panke
Editor, Social Software in Education

The 21st annual World Conference on Educational Multimedia, Hypermedia & Telecommunications (ED-MEDIA) attracted 1200 participants from 65 countries. A diverse crowd, including K-12 teachers, university faculty members, researchers, software developers, instructional designers, administrators and multimedia authors, came together at the Sheraton Waikiki Hotel from the 22nd to 26th of June with a common goal: to share the latest ideas on e-learning and e-teaching in various educational settings and at the same time enjoy the aloha spirit of tropical Oahu, Hawaii.

Organized by the Association for the Advancement of Computing in Education (AACE), the annual conference takes place at varying locations in the US, Europe and Canada. Thanks to funding by the German Academic Exchange Service (DAAD), I was able to join my colleagues in Hawaii to present two current research projects on social tagging and blended learning and en passant absorb the international flair and information overflow that go together with a packed conference program.

ed_media09

The attendees experienced a full program. In addition to various invited lectures, 210 full papers and 235 brief papers were presented, complemented by numerous symposiums, round tables, workshops and an extensive poster session. The conference proved to be exceedingly competitive, with an acceptance ratio of 37% for full paper submissions and 56% for brief papers. Eleven submissions were honored with an outstanding paper award. My favorite was the work of Grace Lin and Curt Bonk on the community Wikibooks, which can be downloaded from their project page.

Beginning with Hawaiian chants to welcome the participants at the official conference opening and the local adage that “the voice is the highest gift we can give to other people,” audio learning and sonic media formed a recurring topic. The keynote of Tara Brabazon challenged the widely held perception that “more media are always better media” and argued for carefully developed sonic material as a motivating learning format. She illustrated her point with examples and evaluation results from a course on methods of media research (see YouTube excerpt below). Case study reports from George Washington University and Chicago’s DePaul University on iTunesU raised questions about the integration into learning management systems, single-sign-on-procedures and access management.

Among the invited lectures, I was particularly interested in the contribution of New York Times reporter Alex Wright, who reflected upon the history of hypertext. The author’s web site offers further information on The Web that Wasn’t. Alan Levine, vice-president of the Austin-based New Media Consortium, clearly was the darling of the audience. Unfortunately, his talk took place in parallel with my own presentation on social tagging, but Alan has created a web site with his slides and hyperlink collection that gives a vivid overview of “50+ Web 2.0 ways to tell a story.”

A leitmotif of several keynotes was the conflict between open constructivist learning environments on one side versus instructional design models and design principles derived from cognitive psychology on the other. Stephen Downes advocated the learning paradigm of connectivism and praised self-organized learning networks that provide, share, re-use and re-arrange content. For those interested in further information on connectivism, an open content class starts in August 2009. This radical turn to free flowing, egalitarian knowledge networks was not a palatable idea for everyone. As an antagonist to Downes, David Merrill presented his “Pebble in the Pond” instructional design model that — similar to “ADDIE” (analysis, design, development, implementation, evaluation) — foresees clear steps and predictable learning outcomes. Tom Reeves, in turn, dedicated his keynote to a comprehensive criticism of multimedia principles derived from the cognitive load theory, picking up on an article by Kirschner, Sweller & Clark (2006), “Why Minimal Guidance Does Not Work . . . .” The audience, in particular the practitioners, reacted to this debate true to the Goethe verse “Prophet left, prophet right, the world child in the middle.” As Steve Swithenby, director of the Centre for Open Learning of Mathematics at Open University (UK) posted in the ED-MEDIA blog: “Well, actually, I want to do both and everything in between. I can’t see that either is the pattern for future learning – both are part of the ways in which learning will occur.”

With blog, twitter feed, flickr group and ning community, the conference was ringing with a many-voiced orchestra of social software tools. Gary Marks, member of the AACE international headquarters and initiator of the new ED-MEDIA community site, announced that he has planned several activities to foster interaction. So far, however, the few contributions are dedicated to potential leisure activities on Hawaii. The presentation “Who We Are” by Xavier Ochoa, Gonzalo Méndez, and Erik Duval offered a review on existing community ties of ED-MEDIA through a content analysis of paper submissions from the last 10 years. An interactive representation of the results is available online.

Twitter seems to have developed into a ubiquitous companion of conference talks. Whether the short messages add to the academic discourse and democratize ex cathedra lectures or divert the attention from the presenter, replacing substance with senseless character strings, is a controversial discussion. Accordingly, twitter received mixed responses among the conference attendees and presenters. In the end, 180 users joined the collective micro-blogging and produced approximately 2500 postings — an overview may be found at Twapper. As a follow-up to this year’s ED-MEDIA, participants were invited to take part in an online survey, designed by the Austrian/German twitter research duo Martin Ebner and Wolfgang Reinhardt. The results will hopefully further the understanding of the pros and cons of integrating microblogging in e-learning conference events.

The AACE used ED-MEDIA as an occasion to announce plans for future growth. Already responsible for three of the largest world-wide conferences on teaching and learning (ED-MEDIA, E-LEARN and SITE), the organization extends its catalog with two new formats. A virtual conference called GlobalTime will make its debut in February 2011. Additionally, the new face-to-face conference GlobalLearn targets the Asian and Pacific regions.

Is ED-MEDIA worth a visit? The sheer size of the event leads to a great breadth of topics, which often obstructs an in-depth discussion of specific issues. At the same time, there is no better way to gain an overview of multiple current trends in compact form. Another plus: all AACE conference contributions are accessible online through the Education and Information Technology Library. The next ED-MEDIA will take place in Toronto, Canada, from June 28 to July 2, 2010.

The New Social Networking Frontier

judith_sotir_80By Judith Sotir

The idea of using social networks in the classroom is still outside the comfort zone of many classroom instructors. Sites such as Facebook, MySpace and  Twitter have connotations that many instructors instinctively avoid. They see the pitfalls, but not the value. There are warning flags all over the place. I’ve heard educators say, “If you allow students to use a site like Twitter in the classroom, students will abuse it and just network with friends.” Sure, always a possibility. But if you allow students Internet access on computers, they can always access sites you don’t want them accessing. It all comes down to the control an instructor has in the classroom. An ineffective instructor with no classroom discipline doesn’t need Internet access to fail. Those are the teachers who would not notice handwritten notes being passed around the classroom in the pre-tech days.

We’ve (reluctantly) moved to acceptance of using academic websites in the classroom. Instructors see the value, and students know and like using them. We’ve found the value in YouTube, but have developed TeacherTube to combat many of the content concerns. Social network sites are still a new frontier. First, instructors are not all that familiar with them. I think every instructor (and parent) should get on the computer and sign up for one or more of the social network sites, if only to know what it is that the kids are doing. One thing is certain: the KIDS are on them daily, even hourly. They can access them from classroom computers or cell phone browsers. I have Facebook and Twitter buttons on my iPhone so access takes less than a second. Of course they also let me know via email when someone has added something new to my page. It’s all about accessibility, and for kids, accessibility is like breathing. They just do it. My nephew once said that if he had to go more than a few hours without Facebook he would implode. I honestly believe him.

(Video source: “Twitter for Teachers” by Thomas Daccord, added to TeacherTube on 20 March 2009)

So how do educators use these tools? Tom Preskett in his article Blogs for Education, Blogs for Yourself referenced the Write4 website, which allows one to publish articles, photos, videos, etc. without set-ups or logins. Your work is published to your Twitter account. What’s the value? Easy and fast access. You give your students one site (such as your classroom Twitter account), and give them the ability to access these sites wherever and whenever they wish. You simply tell them to follow you on Twitter. It’s simple and effective because students are there anyway. Will all students actually read your Tweets? No, but not all students will read the homework you assign or even participate in class discussions. But the point is that students are familiar with social networking and use it regularly. And as educators, we have to believe that most students want to learn and want to succeed.

(Video source: “How Do You Use Twitter” by David Di Franco, added to YouTube on 8 April 2009)

I’ve never been able to understand instructors who believe students want to fail. They may not hang on your every word, but they do listen and know the correlation between work and success. Give them something they can use, and they will pay attention. Will they push the envelope? Of course. But that happens with any age group. Case in point: professional development programs. Put a group of instructors into a professional development class and watch them as they stare out the window, play with anything but the prescribed websites on the computer, and even talk and laugh with each other. In a training setting, most professional educators mirror the behavior of their students. The key to success is the same as the key needed to succeed with students: give them something they find useful and they will pay attention.

A Digital Educator in Poland

lynnz80By Lynn Zimmerman
Editor, Teacher Education

During this Spring 2009 semester, I am teaching at a major university in a large city in Poland. My students are 3rd, 4th, and 5th year students, most of whom plan to be English teachers. Technology is playing a role in this experience in some expected and unexpected ways.

First of all, I have easy access to the folks back home. I served as a Peace Corps volunteer in Poland from 1992 to 1994 and, during that time, the communications infrastructure was rudimentary. Many people did not have telephones, myself included. The couple of times I called my mother in the U.S. I had to go to the post office and order the call. Then I had to wait until the overseas operator was able to connect to me. When I returned to Poland in 2000 the cell phone boom had occurred, and Internet service was on its heels. Now with Skype and IM and all the other communication devices at our fingertips, it is almost as though I never left home. This easy accessibility is actually a mixed blessing. The chair of my department has been able to give me tasks to do, even though I am several thousands of miles away.

Although I travel quite a bit and try to journal, I am rarely successful keeping up the journaling process. This time, I decided to set up a “private” social network on Ning (www.ning.com) for my friends. I recorded a video about my impending trip. I put up links to my Polish university and other interesting places. I have been posting pictures of my adventures and have written blogs to keep my friends informed. I think that having an audience other than myself is helping me keep up the process.

On the downside has been the lack of technology available to my students here. The building where I teach has one lecture room, reserved only for large lecture classes, that has a computer and projector, but no Internet access. The technology guy here did show me how to download some clips that I was planning to use from YouTube (using mediaconverter.org), so I was able to work around the no-Internet-access issue. I have one class of about 40 students and that is the only one allowed to use that room. Unfortunately this week when I was planning to show a DVD and a YouTube clip, the system was not functioning. For my other classes, I have had to re-think how I teach them, taking into account that I would not be able to use the videos and PowerPoints that I usually use with my classes.

Another issue that arose is that none of my students have ever done an online discussion. I use online discussions once or twice a semester when I have to go to a conference. The university here does not have a built-in classroom management system like WebCT or Blackboard, so I set up a discussion on Ning. Because I did not have Internet access in the classroom, I had to take “snapshots” of the screens to show the students what to do. (The computer system was functioning that day.) Then I had to deal with the students’ anxiety about doing this activity. Most of the students participated, and I must say that the ones who did participate did a really good job, better than many of my students in the US. However, another professor has referred several times to the week I “missed” class. She obviously has no idea of how time-intensive setting up and conducting an online discussion is for the teacher or the students.

On the other hand, I was recently at a symposium in another part of Poland and technology, including Internet access, was available in many classrooms. This particular university also specializes in providing services for students with vision and hearing disabilities. They have special adaptive equipment in several classrooms to aid these students’ learning.

So far I have experienced the advantages of technology for staying in touch as well as the challenges it poses when there is little or no access in the classroom. I feel a little bit like I have fallen into Dean McLaughlin’s short novel, Hawk Among the Sparrows (http://www.fantasticfiction.co.uk/m/dean-mclaughlin/hawk-among-sparrows.htm), which is about a pilot in a modern fighter jet with nuclear missiles and technological guidance systems who goes through a time warp to World War I. None of his highly sophisticated weaponry will work in this low-tech time period so that, in the end, the only way he can be effective is to use his jet as a projectile and crash into the enemy’s installation. I certainly hope that is not my fate!

Unhide That Hidden Text, Please

claude80By Claude Almansi
Staff Writer

Thanks to:

  • Marie-Jeanne Escure, of Le Temps, for having kindly answered questions about copyright and accessibility issues in the archives of the Journal de Genève.
  • Gabriele Ghirlanda, of Unitas, for having tested the archives of the Journal de Genève with a screen reader.

What Hidden Text?

Here, “hidden text” refers to a text file combined by an application with another object (image, video etc.) in order to add functionality to that object: several web applications offer this text to the reader together with the object it enhances – DotSUB offers the transcript of video captions, for instance:

dotsub_trscr

Screenshot from “Phishing Scams in Plain English” by Lee LeFever [1].

In other applications, unfortunately, you get only the enhanced object: the text enhancing it remains hidden, even though it would grant access to content for people with disabilities that prevent them from using the object, and would enormously simplify research and quotation for everybody.

Following are three examples of object-enhancing applications using text but keeping it hidden:

Multilingual Captioning of YouTube and Google Videos

Google offers the possibility to caption a video by uploading one or several text files with their timed transcriptions. See the YouTube example below.

yt_subtYouTube video captioning.
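The timed transcription files mentioned above are plain text. YouTube accepted, for instance, the SubRip (.srt) format, which pairs numbered cues with start and end times; the cue text below is invented for illustration:

```
1
00:00:01,000 --> 00:00:04,200
Phishing scams try to trick you

2
00:00:04,300 --> 00:00:08,000
into revealing passwords or bank details.
```

It is exactly this kind of file – simple, searchable, resizable text – that remains hidden once the captions are burned into the player.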

Google even automatically translates the produced captions into other languages, at the user’s discretion. See the example below.

yt_subt_trslOption to automatically translate the captions of a YouTube video.

(See “How to Automatically Translate Foreign-Language YouTube Videos” by Terrence O’Brien, Switch, Nov. 3, 2008 [2], from which the above two screenshots were taken.) But the text files of the original captions and their automatic translations remain hidden.

Google’s Search Engine for the US Presidential Campaign Videos

During the 2008 US presidential campaign, Google beta-tested a search engine for videos on the candidates’ speeches. This search engine works on a text file produced by speech-to-text technology. See the example below.

google_election_searchGoogle search engine for the US presidential election videos.

(See “Google Elections Video Search,” Google for Educators 2008 – where you can try the search engine in the above screenshot – [3] and “‘In Their Own Words’: Political Videos Meet Google Speech-to-text Technology” by Arnaud Sahuguet and Ari Bezman. Official Google blog, July 14, 2008 [4].) But here, too, the text files on which the search engine works remain hidden.
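The elections search engine rests on a simple pairing: a speech-to-text transcript whose segments carry timestamps, so a hit can be mapped back to a point in the video. A minimal sketch of that idea (the segment structure and function name are mine, not Google's):

```python
# Sketch: keyword search over a timed transcript, the way a
# speech-to-text video search engine might index a speech.
# Each segment pairs a start time (in seconds) with recognized text.

def find_keyword(segments, keyword):
    """Return the start times of all segments whose text contains keyword."""
    keyword = keyword.lower()
    return [start for start, text in segments if keyword in text.lower()]

transcript = [
    (0.0,  "thank you all for coming today"),
    (4.2,  "we need to talk about health care"),
    (9.8,  "health care reform cannot wait"),
    (15.1, "and now a word about the economy"),
]

print(find_keyword(transcript, "health care"))  # → [4.2, 9.8]
```

A player can then seek to each returned time, which is the jump-to-mention behavior the elections search engine offered; the transcript itself, however, is never shown to the user.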

Enhanced Text Images in Online Archives

Maybe the oddest use of hidden text is when people go to the trouble of scanning printed texts, produce both images of text and real text files from the scan, then use the text file to make the image version searchable – but hide it. It happens with Google Books [5] and with The European Library [6]: thanks to the hidden text version, you can browse and search the online texts that appear as images, but you can’t print them or digitally copy-paste a given passage – except if the original is in the public domain, in which case both make a real textual version available.

Therefore, using a plain text file to enhance an image of the same content, but hiding the plain text, is apparently just a way to protect copyrighted material. And this can lead to really bizarre solutions.

Olive Software ActivePaper and the Archives of Journal de Genève

On December 12, 2008, the Swiss daily Le Temps announced that for the first time in Switzerland, they were offering online “free access” to the full archives – www.letempsarchives.ch (English version at [7]) – of Le Journal de Genève (JdG), which, together with two other dailies, got merged into Le Temps in 1998. In English, see Ellen Wallace’s “Journal de Geneve Is First Free Online Newspaper (but It’s Dead),” GenevaLunch, Dec. 12, 2008 [8].

A Vademecum to the archives, available at [9] (7.7 Mb PDF), explains that “articles in the public domain can be saved as images. Other articles will only be partially copied on the hard disk.”

jdg_vm_drm

Nicolas Dufour’s description of the archiving process in the same Vademecum gives a first clue about the reason for this oddity: “For the optical character recognition that enables searching by keywords within the text, the American company Olive Software adapted its software which had already been used by the Financial Times, the Scotsman and the Indian Times.” (These and other translations in this article are mine.)

The description of this software – ActivePaper Archive – states that it will enable publishers to “Preserve, Web-enable, and Monetize [their] Archive Content Assets” [10]. So even if Le Temps does not actually intend to “monetize” their predecessor’s assets, the operation is still influenced by the monetizing purpose of the software they chose. Hence the hiding of the text versions on which the search engine works and the digital restriction on saving articles still under copyright.

Accessibility Issues

This ActivePaper Archive solution clearly poses great problems for blind people who have to use a screen reader to access content: screen readers read text, not images.

Le Temps is aware of this: in an e-mail answer (Jan. 8, 2009) to questions about copyright and accessibility problems in the archives of JdG, Ms Marie-Jeanne Escure, in charge of reproduction authorizations at Le Temps, wrote, “Nous avons un partenariat avec la Fédération suisse des aveugles pour la consultation des archives du Temps par les aveugles. Nous sommes très sensibilisés par cette cause et la mise à disposition des archives du Journal de Genève aux aveugles fait partie de nos projets.” Translation: “We have a partnership with the Swiss federation of blind people (see [11]) for the consultation of the archives of Le Temps by blind people. We are strongly committed/sensitive to this cause, and the offer of the archives of Journal de Genève to blind people is part of our projects.”

What Digital Copyright Protection, Anyway?

Gabriele Ghirlanda, member of Unitas [12], the Swiss Italian section of the Federation of Blind people, tried the Archives of JdG. He says (e-mail, Jan. 15, 2009):

With a screenshot, the image definition was too low for ABBYY FineReader 8.0 Professional Edition [optical character recognition software] to extract a meaningful text.

But by chance, I noticed that the article presented is made of several blocks of images, one for the title and one for each column.

Right-click, copy image, paste in OpenOffice; export as PDF; then I put the PDF through ABBYY FineReader. […]

For a sighted person, it is no problem to create a document of good quality for each article, keeping it in image format, without having to go through OpenOffice and/or PDF. [my emphasis]

<DIV style="position:relative;display:block;top:0; left:0; height:521; width:1052" xmlns:OliveXLib="http://www.olive-soft.com/Schemes/XSLLibs" xmlns:OlvScript="http://www.olivesoftware.com/XSLTScript" xmlns:msxsl="urn:schemas-microsoft-com:xslt"><div id="primImg" style="position:absolute;top:30;left:10;" z-index="2"><img id="articlePicture" src="/Repository/getimage.dll?path=JDG/1990/03/15/13/Img/Ar0130200.png" border="0"></img></div><div id="primImg" style="position:absolute;top:86;left:5;" z-index="2"><img id="articlePicture" src="/Repository/getimage.dll?path=JDG/1990/03/15/13/Img/Ar0130201.png" border="0"></img></div><div id="primImg" style="position:absolute;top:83;left:365;" z-index="2"><img id="articlePicture" src="/Repository/getimage.dll?path=JDG/1990/03/15/13/Img/Ar0130202.png" border="0"></img></div><div id="primImg" style="position:absolute;top:521;left:369;" z-index="2"><img id="articlePicture" src="/Repository/getimage.dll?path=JDG/1990/03/15/13/Img/Ar0130203.png" border="0"></img></div><div id="primImg" style="position:absolute;top:81;left:719;" z-index="2"><img id="articlePicture" src="/Repository/getimage.dll?path=JDG/1990/03/15/13/Img/Ar0130204.png" border="0"></img></div>

From the source code of the article used by Gabriele Ghirlanda: in red, the image files he mentions.
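As the source code shows, the "protection" amounts to positioning several ordinary img elements absolutely; anything that can read the HTML can collect them. A hedged sketch of the manual procedure Gabriele Ghirlanda describes (the sample markup is abridged from the excerpt above; the parsing approach is mine, not Olive Software's):

```python
# Sketch: collecting the image-block URLs from an ActivePaper-style
# article page. Each column of the article is just an <img> element,
# so a trivial HTML parse recovers every piece of the article.
from html.parser import HTMLParser

class ImgCollector(HTMLParser):
    """Record the src attribute of every <img> tag encountered."""
    def __init__(self):
        super().__init__()
        self.srcs = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                self.srcs.append(src)

page = '''
<div><img id="articlePicture" src="/Repository/getimage.dll?path=JDG/1990/03/15/13/Img/Ar0130200.png"></div>
<div><img id="articlePicture" src="/Repository/getimage.dll?path=JDG/1990/03/15/13/Img/Ar0130201.png"></div>
'''

collector = ImgCollector()
collector.feed(page)
print(collector.srcs)  # the two .png paths, in document order
```

The point is not that scraping is clever but that the barrier is nominal: the same server that withholds the text version happily serves every image block to anyone who asks.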

Unhide That Hidden Text, Please

Le Temps‘ commitment to the cause of accessibility for all and, in particular, to find a way to make the JdG archives accessible to blind people (see “Accessibility Issues” above) is laudable. But in this case, why first go through the complex process of splitting the text into several images, and theoretically prevent the download of some of these images for copyrighted texts, when this “digital copyright protection” can easily be by-passed with right-click and copy-paste?

As there already is a hidden text version of the JdG articles for powering the search engine, why not just unhide it? www.letempsarchives.ch already states that these archives are “© 2008 Le Temps SA.” This should be sufficient copyright protection.

Let’s hope that Olive ActivePaper Archive software offers this option to unhide hidden text. Not just for the archives of the JdG, but for all archives working with this software. And let’s hope, in general, that all web applications using text to enhance a non-text object will publish it. All published works are automatically protected by copyright laws anyway.

Adding an alternative accessible version just for blind people is discriminatory. According to accessibility guidelines – and common sense – alternative access for people with disabilities should only be used when there is no other way to make web content accessible. Besides, access to the text version would also simplify life for scholars – and for people using portable devices with a small screen: text can be resized far better than a puzzle of images with fixed width and height (see the source code excerpt above).

Links
The pages linked to in this article and a few more resources are bookmarked under http://www.diigo.com/user/calmansi/hiddentext

Live Radio Captioning for the Deaf

claude80By Claude Almansi
Staff Writer

Thanks to:

  • Sylvia Monnat, director of captioning at Télévision Suisse Romande (French-speaking Swiss television www.tsr.ch) for the explanations she gave me by phone on live captioning through re-speaking.
  • Neal Stein, of Harris Corporation (www.harris.com), for the authorization to publish on YouTube the video excerpt shown below, and for his explanations on the US live radio captioning project.

Why Caption Radio?

Making radio accessible for deaf and hard of hearing persons is not commonly perceived as a priority. For instance, the new version of the Swiss law and ordinance on Radio and Television that came into force in 2007 does add several provisions about accessibility for people with sight and hearing disabilities but does not mention captioning radio. See art. 7 [1] of the law and art. 7 [2] and 8 [3] of the ordinance (in French). According to most non-deaf people’s “common sense,” deaf persons don’t use radio – just as many non-blind people still believe that blind people can’t use computers.

Yet deaf persons are interested in accessing radio content through captioning, as Cheryl Heppner, Executive Director of NVRC [4], explains in this video:

The video is from the January 8, 2008, I-CART introductory press conference at CES 2008. The full video can be downloaded from www.i-cart.net. Transcript of the above excerpt:

I’m one of 31 million people in the United States who are deaf or hard of hearing. A number that continues to grow. NPR Labs and its partners are on the verge of making many of my dreams come true. Beyond having that really crucial emergency information, captioned radio could also open up a world I’ve never had, because I lost my hearing before my seventh birthday.

When I am stuck in Washington’s legendary Beltway gridlock, I could check the traffic report and find out why, what my best route would be. I could check the sports scores and follow the games for all my favorite teams. I could know why my husband is always laughing so uproariously when he listens to “Car Talk.” And I could annoy him by singing along badly to the lyrics of his favorite songs.

I can’t wait. Thank you.

NPR’s Live Captioned Broadcast of Presidential Election

The work by NPR Labs and its partners, mentioned by Cheryl Heppner in this January 2008 conference, led to the broadcasting of live captioned debates on NPR during the US election campaign a few months later. The assessment by deaf and hard of hearing people of this experiment was extremely positive. According to the press release “Deaf and Hard of Hearing Vote Yes on New Radio Technology During NPR’s Live Captioned Broadcast of Presidential Election” (Nov. 13, 2008) [5]:

  • 95% were happy with the level of captioning accuracy, a crucial aspect for readability and comprehension
  • 77% said they would be interested in purchasing a captioned radio display unit when it becomes available
  • 86% indicated they would be interested in purchasing a “dual-view” screen display for a car (which would enable a deaf passenger to see the captioned radio text while the driver listens to the radio).

How Are Radio Captions Transmitted?

A digital radio signal can be divided to transmit audio and text, and the text can be read on the radio display. In fact, text messages are already being sent to car radio displays through the Radio Data System (RDS). For instance, this is how the Swiss traffic information service Inforoutes updated drivers in real time – or almost – about the state of traffic jams due to work in the Glion tunnel in 2004. (See “Service,” in French, on page 4, in the May 2004 newsletter of Les Radios Francophones Publiques [6].)
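To give a feel for how little room such in-band text has: the RDS RadioText feature carries a message of up to 64 characters, sent four characters at a time, each chunk tagged with a segment address so the receiver can reassemble the message. A minimal, illustrative sketch of that chunking (real RDS group encoding involves more fields and error correction; the function name is mine):

```python
# Sketch: splitting a RadioText message into the 4-character segments
# that RDS type-2A groups carry (segment addresses 0..15, 64 chars max).

RT_LENGTH = 64
SEGMENT_SIZE = 4

def radiotext_segments(message):
    """Pad/truncate message to 64 chars; return (segment_address, chunk) pairs."""
    padded = message[:RT_LENGTH].ljust(RT_LENGTH)
    return [(i // SEGMENT_SIZE, padded[i:i + SEGMENT_SIZE])
            for i in range(0, RT_LENGTH, SEGMENT_SIZE)]

segments = radiotext_segments("Traffic jam: Glion tunnel, expect 20 min delay")
print(len(segments))   # 16 segments of 4 characters each
print(segments[0])     # (0, 'Traf')
```

Sixty-four characters suffice for a traffic flash, but not for captioning continuous speech, which is why the captioned-radio project needed the larger data capacity and displays of digital radio.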

The radio devices used in the experience conducted by NPR Labs and its partners that Cheryl Heppner mentions have a bigger display. For the exact technical explanation of how the captions work, see the presentations section of www.i-cart.net.

Stenocaptioning vs. Respeaking

The NPR experiment mentioned above used “stenocaptioning,” i.e., the captions were written with a stenotype [7] whose output gets translated into normal English captions by computer software. Live stenocaptioning – whether for news broadcasts or for in-presence events in specially equipped venues – seems to be the preferred solution in countries such as the US and Italy that have a tradition of stenotyping court proceedings or parliamentary debates.

In most other European countries, according to Ms. Sylvia Monnat, director of captioning at Télévision Suisse Romande (French-speaking Swiss TV – www.tsr.ch), broadcasters tend to prefer “respeaking,” which works with speech-to-text technology: the software gets trained to recognize the voice of respeakers, and then converts what they repeat into captions.

Ms. Monnat further explained that, on the one hand, the advantages of respeaking involve training. Countries without a stenotyping tradition do not offer stenotyping courses, whereas existing interpretation schools can arrange respeaking courses, since respeaking is a normal exercise in the training of conference interpreters. Moreover, respeaking is easier to learn than stenotyping.

On the other hand, it takes time, first, to train the speech-to-text software to recognize the respeakers’ voices and, second, to add words not present in its basic thesaurus for each respeaker’s voice. Moreover, enough respeakers have to be trained so that one whose voice is recognized by the software will be available when needed. By contrast, once a new word has been added to the thesaurus of the stenocaptioning software, it can be used by any stenocaptioner.

Outlook

The fast evolution of technology makes it difficult to foresee the issues of live captioning, even in the near future. Radio and television are merging into “multimedia broadcasting,” and, in turn, the line between broadcasting and the internet is gradually fading (see the HDTV offers by internet providers). Speech-to-text technology will probably continue to improve, and multimedia devices are also evolving rapidly.

However, the response of the deaf and hard-of-hearing people who participated in the NPR live captioning experiment seems to allow one safe surmise: live radio captioning is here to stay, whatever means it uses tomorrow.

Resources

Further information on live captioning can be found in the online version of the “Proceedings of the First International Seminar on Real-time Intralingual Subtitling” held in Forlì, Italy, on Nov. 17, 2006 [8].

This and other online resources mentioned here have been tagged “captioning” in Diigo and can therefore be found, together with resources added by other Diigo users, in www.diigo.com/tag/captioning.

Three Video Captioning Tools

By Claude Almansi
Staff Writer

First of all, thanks to:

  • Jim Shimabukuro, for having encouraged me to further examine captioning tools after my previous Making Web Multimedia Accessible Needn’t Be Boring post – this has been a great learning experience for me, Jim.
  • Michael Smolens, founder and CEO of DotSUB.com, and Max Rozenoer, administrator of Overstream.net, for their permission to use screenshots of Overstream and DotSUB captioning windows, and for their answers to my questions.
  • Roberto Ellero and Alessio Cartocci of the Webmultimediale.org project for their long patience in explaining multimedia accessibility issues and solutions to me.
  • Gabriele Ghirlanda of UNITAS.ch for having tried the tools with a screen reader.

However, these persons are in no way responsible for possible mistakes in what follows.

Common Features

Video captioning tools are similar in many respects: see the screenshot of a captioning window at DotSUB:

[Screenshot: DotSUB captioning window]

and at Overstream:

[Screenshot: Overstream captioning window]

In both cases, there is a video player, a list of captions, and a box for writing new captions, with boxes for the start and end time of each caption. The MAGpie desktop captioning tool (downloadable from http://ncam.wgbh.org/webaccess/magpie) is similar: see the first screenshot in David Klein and K. “Fritz” Thompson, Captioning with MAGpie, 2007 [1].

Moreover, in all three cases, captions can either be written directly in the tool or created by importing a file in which they are separated by a blank line – and they can be exported as a file, too.
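To illustrate that blank-line-separated structure, here is a minimal Python sketch (the sample captions are invented for the example) that parses SRT-style text into a list of cues:

```python
# Minimal illustrative sketch: parse SRT-style caption text, in which
# each numbered cue block is separated from the next by a blank line.

def parse_srt(text):
    """Return a list of (start, end, caption) tuples from SRT text."""
    cues = []
    for block in text.strip().split("\n\n"):  # blank line separates cues
        lines = block.splitlines()
        if len(lines) < 3:
            continue  # skip malformed blocks instead of failing outright
        start, _, end = lines[1].partition(" --> ")
        cues.append((start.strip(), end.strip(), "\n".join(lines[2:])))
    return cues

sample = """1
00:00:01,000 --> 00:00:04,000
Welcome to the video.

2
00:00:04,500 --> 00:00:08,000
Captions are separated by a blank line."""

print(parse_srt(sample))
```

Exporting is the same operation in reverse: each cue is written out as a numbered block, and a blank line is appended between blocks.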

What follows is just a list of some differences that could influence your choice of a captioning tool.

Overstream and DotSUB vs MAGpie

  • DotSUB and Overstream are online tools (only a browser is needed to use them, whatever the OS of the computer), whereas MAGpie is a desktop application that works with Windows and Mac OS, but not with Linux.
  • DotSUB and Overstream use SubRip (SRT) captioning [2], while MAGpie uses Synchronized Multimedia Integration Language (SMIL) captioning [3].
  • Overstream and Dotsub host the captioned result online, MAGpie does not.
  • The preparation for captioning is less intuitive with MAGpie than with Overstream or DotSUB, but on the other hand MAGpie offers more options and produces simpler files.
  • MAGpie can be used by disabled people, in particular by blind and low-sighted people using a screen reader [4], whereas DotSUB and Overstream don’t work with a screen reader.

Overstream vs DotSUB

  • The original video can be hosted at DotSUB; with Overstream, it must be hosted elsewhere.
  • DotSUB can also be used with a video hosted elsewhere, but you must link to the streaming flash .flv file, whereas with Overstream, you can link to the page of the video – but Overstream does not support all video hosting platforms.
  • If the captions are first written elsewhere then imported as an .srt file, Overstream is more tolerant of coding mistakes than DotSUB – but this cuts both ways: some people might prefer to have your file rejected rather than having gaps in the captions.
  • Overstream allows more precise time-coding than DotSUB, and it also has a “zooming feature” (very useful for longish videos), which DotSUB doesn’t have.
  • DotSUB can be used as a collaborative tool, whereas Overstream cannot yet, though Overstream’s administrators are planning to make that possible in the future.
  • With DotSUB, you can have switchable captions in different languages on one player. With Overstream, there can only be one series of captions in a given player.

How to Choose a Tool . . .

So how do you choose a tool? As with knitting, first make a sample: caption a short video with the different tools, since the short descriptive lists above cannot replace experience. Then choose the most appropriate one according to your aims for captioning a given video and according to your possible collaborators’ availability, IT resources, and abilities.

. . . Or Combine Tools

The great thing with these tools is that you can combine them:

As mentioned in my former Making Web Multimedia Accessible Needn’t Be Boring post, I had started captioning “Missing in Pakistan” a year ago on DotSUB, but had gone on to use MAGpie for SMIL captioning (see the result at [5]). But when Jim Shimabukuro suggested this presentation of captioning tools, I found my aborted attempt at DotSUB. As you can also do the captioning there by importing a .srt file, I tried to transform my “.txt for SMIL” file of the English captions into a .srt file. I bungled part of the code, so DotSUB refused the file. Overstream accepted it, and I corrected the mistakes using both. Results are at [6] (DotSUB) and [7] (Overstream). And now that I have a decent .srt file for the English transcript, I could also use it to caption the video at YouTube or Google Video: see YouTube’s “Video Captions: Help with Captions” [8]. (Actually, there is a freeware program called Subtitle Workshop [9] that could apparently do this conversion cleanly, but it is Windows-only and I have a Mac.)

This combining of tools could be useful even for less blundering people. Say one person in a project has better listening comprehension of the original language than the others and prefers Overstream: s/he could make the first transcript there and export the .srt file, which could then be imported into DotSUB to produce a transcript that all the others could use to make switchable captions in other languages. If that person with better listening comprehension were blind, s/he might use MAGpie to do the transcript, and s/he or someone else could convert it to a .srt file that could then be uploaded to either DotSUB or Overstream. And so on.
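For readers who want to attempt such a conversion by script, here is a hedged Python sketch. The plain-seconds input is an assumption for illustration only – it is not the actual export format of MAGpie or the other tools – and the sketch only shows how the HH:MM:SS,mmm stamps that .srt files require can be produced:

```python
# Illustrative sketch (assumed input format): turn a caption timing
# expressed in plain seconds into the HH:MM:SS,mmm stamp used by .srt.

def srt_timestamp(seconds):
    """Format a time in seconds as an SRT timestamp, e.g. 00:00:02,500."""
    ms = round(seconds * 1000)          # work in whole milliseconds
    h, rem = divmod(ms, 3_600_000)      # hours
    m, rem = divmod(rem, 60_000)        # minutes
    s, ms = divmod(rem, 1_000)          # seconds and leftover milliseconds
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

print(srt_timestamp(2.5))      # → 00:00:02,500
print(srt_timestamp(3725.04))
```

A real converter would wrap this in a loop over the source file’s cues, pairing each start/end time with its caption text, but the timestamp formatting is the part that is easiest to bungle by hand.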

Watch Out for New Developments

I have only tried to give an idea of three captioning tools I happen to be acquainted with, as correctly as I could. The complexity of making videos accessible and in particular of the numerous captioning solutions is illustrated in the Accessibility/Video Accessibility section [10] of the Mozilla wiki – and my understanding of tech issues remains very limited.

Moreover, these tools are continuously progressing. Some have disappeared – Mojiti, for instance – and other ones will probably appear. So watch out for new developments.

For instance, maybe Google will make available the speech-to-text tool that underlies its search engine for the YouTube videos of the candidates in the US presidential election (see “‘In their own words’: political videos meet Google speech-to-text technology” [11]): transcribing remains the heavy part of captioning, and an efficient, preferably online speech-to-text tool would be an enormous help.

And hopefully, there will soon be an online, browser-based, accessible SMIL-generating tool. SubRip is great, but with SMIL, captions stay put under the video instead of invading it, so you can make longer captions, which simplifies the transcription work. Moreover, SMIL is more than just a captioning solution: the SMIL “hub” file can also coordinate a second video for sign language translation, and audio descriptions. Finally, SMIL is a W3C standard, and this means that when the standard gets upgraded, it still “degrades gracefully” and the full information is available to all developers using it: see “Synchronized Multimedia Integration Language (SMIL 3.0) – W3C Recommendation 01 December 2008” [12].
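As a rough, hypothetical sketch of such a “hub” file – the file names and region names are invented, and real players may require different attributes – a SMIL document can lay out the main video, a parallel sign-language video, and a caption text stream playing together:

```xml
<smil>
  <head>
    <layout>
      <!-- main video on the left, sign-language video on the right,
           captions in their own region below instead of over the video -->
      <region id="video" left="0" top="0" width="320" height="240"/>
      <region id="signing" left="320" top="0" width="160" height="240"/>
      <region id="captions" left="0" top="240" width="480" height="60"/>
    </layout>
  </head>
  <body>
    <par>  <!-- "par" plays all three streams in parallel -->
      <video src="main_video.mp4" region="video"/>
      <video src="sign_language_track.mp4" region="signing"/>
      <textstream src="captions_en.rt" region="captions"/>
    </par>
  </body>
</smil>
```

The point of the hub is that each stream stays a separate file: swapping in another caption language or dropping the sign-language track means editing this one small XML file, not re-encoding the video.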

Michelle Rhee – What’s Really at Stake?

Jim ShimabukuroBy Jim Shimabukuro
Editor

She’s on the cover of Time (week of December 8), in a classroom, unsmiling, dressed in black, holding a broom, with the cover title, “How to Fix America’s Schools,” set to look as though it’s the lesson for the day written on the blackboard. Framing her head is the huge “TIME” trademark. She is Michelle Rhee, Chancellor of Education, District of Columbia Public Schools. And the question for the “class” is, Does she have the answer to America’s failing public school systems? Is it, finally, time to make the kinds of sweeping changes that she represents?

Her goal’s clear, “To make Washington the highest-performing urban school district in the nation” [1]. The yardstick is a simple one: reading and math scores on standardized achievement tests. And her formula’s just as simple: reward teachers who can help her reach her goal and get rid of the ones who can’t.

This unflinching focus, she says, places the student’s best interest at the forefront of schools. Higher scores will eventually translate to college degrees and better jobs, which are the tickets out of poverty, discrimination, and all the other social ills.

The underlying assumption is that all students can significantly improve their scores IF they have teachers [1] who are willing to set that as the primary goal and do everything it takes to reach it. In this picture, there is absolutely no room for failure. Little or no gain in scores is a sign of failure, and failure means a quick exit from the teaching profession. When student success is weighed against teacher security, there is no issue. Tenure is a dead horse. For teachers, the decision is a simple one, too: Deliver higher scores or get out.

“She is angry at a system of education that puts ‘the interests of adults’ over the ‘interests of children,’ i.e., a system that values job protection for teachers over their effectiveness in the classroom. Rhee is trying to change that system” [2].

What about the gray area, the affective dimensions that defy objective measurement? Rhee says, “The thing that kills me about education is that it’s so touchy-feely. . . . People say, ‘Well, you know, test scores don’t take into account creativity and the love of learning.’ . . . I’m like, ‘You know what? I don’t give a crap.’ Don’t get me wrong. Creativity is good and whatever. But if the children don’t know how to read, I don’t care how creative you are. You’re not doing your job” [1].

In pursuit of her goal, Rhee has the complete backing of D.C. Mayor Adrian M. Fenty, who appointed her chancellor in June 2007. “In her first 17 months on the job, Rhee closed 23 schools with low enrollment and overhauled 27 schools with poor academic achievement. She also fired more than 250 teachers and about one-third of the principals at the system’s 128 schools” [3].

Rhee scares the daylights out of me because she may very well be the wish that we’re warned to watch out for, the one that we might actually get. Now that we have someone with the power to really change the system, I suddenly have cold feet. Yes, she seems to make sense. Student achievement should take precedence over the needs of teachers. But are there other issues waiting below the surface that might just jump out and bite us if we follow Rhee?

For example, despite the radical nature of her approach, the bundle that we think of as “school” remains pretty much the same. The burden of accountability has shifted to the teacher, but the roles, resources, goals, and environment remain constant. Even pedagogy seems to be the same–more homework, more demanding tasks, more discipline, more testing. In other words, the same, but more of it.

One could argue that Rhee’s changes don’t go far enough and need to include innovations in information technology. There’s the possibility that these innovations could enhance learning by dramatically altering schools as we know them without some of the harsher consequences that seem to be a part of Rhee’s strategy.

Another issue is the effectiveness of strategies that Rhee lumps into the category of “touchy-feely.” Are these affective, student-centered, holistic, indirect methods proven ineffective? Or are they, perhaps, just as if not more effective than Rhee’s hard-nosed direct approach? Are we ready to toss these out as useless?

Yet another issue is the similarity of Rhee’s model to test-oriented systems in Asia. Is Rhee simply transporting a traditional model from China, India, Japan, and South Korea to the U.S.? If yes, then are there consequences that we need to be aware of?

Finally, are we beginning to draw a line between schools in general and poor urban schools in particular? A line that requires a radically different approach for the latter? Are we bending to the notion that schools not only can be but should be different for resource-poor inner-city schools? If this is the case, then could we be developing a system that channels or tracks children into careers at an early age, forever excluding college for many in favor of technical training? This could result in a form of economic and racial discrimination with far-reaching consequences.

In conclusion, my initial reaction is that Rhee’s ideas sound good, but I’m not quite ready to dump what we have now for an approach that we haven’t fully discussed or studied. At this juncture, an open discussion about the implications of Rhee’s tactics may be in order. I’m sure there are many other issues at stake. Thus, please share your thoughts with us. Either post them as comments to this article or email them to me at jamess@hawaii.edu.

(Note: For a quick background, see Amanda Ripley’s “Rhee Tackles Classroom Challenge” [26 Nov. 2008] at Time.com and Thomas, Conant, and Wingert’s “An Unlikely Gambler” [23 Aug. 2008, from the magazine issue dated 1 Sep. 2008] at Newsweek.com. Finally, go to YouTube and do a search on “michelle rhee” for lists of videos.)