Belgian Newspapers v. Google: Text of the Court of Appeal’s Decision

By Claude Almansi
Editor, Accessibility Issues
ETCJ Associate Administrator

In 2006, Copiepresse, the rights-managing society of Belgian publishers of French- and German-language daily newspapers, sued Google over the snippets shown in Google News and over the cached versions displayed in Google Search. On May 5, 2011, a decision of the Brussels appeal court slightly reworded but essentially confirmed the 2007 judgment of the court of first instance:

La cour … Condamne Google à retirer des sites Google.be et Google.com, plus particulièrement des liens «en cache» visibles sur “Google Web” et du service “Google News”, tous les articles, photographies et représentations graphiques des éditeurs belges de presse quotidienne francophone et germanophone, représentés par Copiepresse …,  sous peine d’une astreinte de 25.000,00 € par jour de retard ….

The syntax is contorted, and the part between commas starting with “plus particulièrement” is ambiguous. Moreover, I’m not a lawyer. So here is a very informal attempt at a translation:

The court … orders Google to withdraw from the Google.be and Google.com sites, more particularly from the “cached” links visible on “Google Web” and from the “Google News” service, all articles, photographs and graphical representations of the Belgian publishers of French- and German-language daily newspapers represented by Copiepresse …, or pay a penalty of €25,000.00 for each day of noncompliance ….

Beware of Privacy and Other Issues When Signing Up for Free Courses

By Claude Almansi
Editor, Accessibility Issues
ETCJ Associate Administrator

Note: This post arises from my personal experience with one “free” online course for teachers provided by an Italian nonprofit association. Hopefully, other similar offers are managed with more care. However, in case not all of them are, here goes, as a cautionary tale.

Didasca’s course about “Google Apps Education”

Last year, the Italian Didasca association launched its first free online course for teachers, about Google Apps for Education. If you know how to use an office suite to produce content, doing so with Google Docs or with Google Apps, its version for schools, is a no-brainer. However, using such collaborative online tools with students who are minors raises specific issues, in particular privacy issues, which I assumed the Didasca course covered.


Accessibility and Literacy: Two Sides of the Same Coin

Accessibility 4 All by Claude Almansi

Treaty for Improved Access for Blind, Visually Impaired and other Reading Disabled Persons

On July 13, 2009, WIPO (World Intellectual Property Organization) organized a discussion entitled  Meeting the Needs of the Visually Impaired Persons: What Challenges for IP? One of its focuses was the draft Treaty for Improved Access for Blind, Visually Impaired and other Reading Disabled Persons, written by WBU (World Blind Union), that had been proposed by Brazil, Ecuador and Paraguay at the 18th session of  WIPO’s Standing Committee on Copyright and Related Rights in May [1].

A pile of books in chains, about to be cut with pliers. Text: Help us cut the chains. Please support a WIPO treaty for print disabled.

From the DAISY Consortium August 2009 Newsletter

Are illiterate people “reading disabled”?

At the end of the July 13 discussion, the Ambassador of Yemen to the UN in Geneva remarked that people who could not read because they had had no opportunity to go to school should be included among “Reading Disabled Persons” and thus benefit from the same copyright exceptions in WBU’s draft treaty – in particular, access to digital texts that can be read with Text-to-Speech (TTS) software.

The Ambassador of Yemen hit a crucial point.

TTS was first conceived as an accessibility tool to grant blind people access to texts in digital form, which are cheaper to produce and distribute than heavy braille versions. Moreover, people who become blind after a certain age may have difficulty learning braille. Now its usefulness is being recognized for others who cannot read print because of severe dyslexia or motor disabilities.

Indeed, why not for people who cannot read print because they could not go to school?

What does “literacy” mean?

No one compos mentis who has seen/heard blind people use TTS to access texts and do things with these texts would question the fact that they are reading. Same if TTS is used by someone paralyzed from the neck down. What about a dyslexic person who knows the phonetic value of the signs of the alphabet, but has a neurological problem dealing with their combination in words? And what about someone who does not know the phonetic value of the signs of the alphabet?

Writing literacy

Sure, blind and dyslexic people can also write notes about what they read. People paralyzed from the neck down and people who don’t know how the alphabet works can’t, unless they can use Speech-to-Text (STT) technology.

Traditional desktop STT technology is too expensive for people in poor countries with high “illiteracy” rates – one of the most widely used solutions, Dragon NaturallySpeaking, starts at $99. Besides, it has to be trained to recognize the speaker’s voice, which might not be an obvious thing to do for someone illiterate.

Free Speech-to-Text for all, soon?

In Unhide That Hidden Text, Please, back in January 2009, I wrote about Google’s search engine for the US presidential campaign videos, complaining that the  text file powering it – produced by Google’s speech-to-text technology – was kept hidden.

However, on November 19, 2009, Google announced a new feature, Automatic captions in YouTube:

To help address this challenge, we’ve combined Google’s automatic speech recognition (ASR) technology with the YouTube caption system to offer automatic captions, or auto-caps for short. Auto-caps use the same voice recognition algorithms in Google Voice to automatically generate captions for video.

(Automatic Captions in YouTube Demo)

So far, in the initial launch phase, only some institutions are able to test this automatic captioning feature:

UC Berkeley, Stanford, MIT, Yale, UCLA, Duke, UCTV, Columbia, PBS, National Geographic, Demand Media, UNSW and most Google & YouTube channels

Accuracy?

As the video above says, the automatic captions are sometimes good, sometimes not so good – but better than nothing if you are deaf or don’t know the language. Therefore, when you switch on automatic captions in a video of one of the channels participating in the project, you get a warning:

warning that the captions are produced by automatic speech recognition

Short words are the rub

English – the language for which Google presently offers automatic captioning – has a high proportion of one-syllable words, and this proportion is particularly high when the speaker is attempting to use simple English: OK for natives, but at times baffling for foreigners.

When I started studying English literature at university, we 1st-year students had to follow a course on John Donne’s poems. The professor had magnanimously announced that if we didn’t understand something, we could interrupt him and ask. But doing so in a big lecture hall with hundreds of listeners was rather intimidating. Still, once, when I noticed that the other students around me had stopped taking notes and looked as nonplussed as I was, I summoned my courage and blurted out: “Excuse me, but what do you mean exactly by ‘metaphysical pan’?” When the laughter  subsided, the professor said he meant “pun,” not “pan,” and explained what a pun was.

Google’s STT apparently has the same problem with short words. Take the Don’t get sucked in by the rip… video in the UNSW YouTube channel:

If you switch on the automatic captions [2], there are over 10 different transcriptions – all wrong – for the 30+ occurrences of the word “rip.” The word is in the title (“Don’t get sucked in by the rip…”), it is explained in the video description (“Rip currents are the greatest hazards on our beaches.”), but STT software just attempts to recognize the audio. It can’t look around for other clues when the audio is ambiguous.

That’s what beta versions are for

Google deserves compliments for having chosen to semi-publicly beta test the software in spite of – but warning about – its glitches. Feedback both from the partners hosting the automatically captionable videos and from users should help them fine-tune the software.

A particularly precious contribution towards this fine-tuning comes from partners who also provide human-made captions, as in the Official MIT OpenCourseWare 1800 Event Video in the MIT YouTube channel:

Once this short-word issue is solved for English, it should be easier to apply the knowledge gained to other languages, in which short words are less frequent.

Moreover…

…as the above-embedded Automatic Captions in YouTube Demo video explains, now you:

can also download your time-coded caption file to modify or use somewhere else

I have done so with the Lessig at Educause: Creative Commons video, for which I had used another feature of Google’s STT software: feeding it a plain transcript and letting it add the time codes to create the captions. The caption .txt file I then downloaded reads:

0:00:06.009,0:00:07.359
and think about what else we could
be doing.

0:00:07.359,0:00:11.500
So, the second thing we could be doing is
thinking about how to change norms, our norms,

0:00:11.500,0:00:15.670
our practices.
And that, of course, was the objective of

0:00:15.670,0:00:21.090
a project a bunch of us launched about 7 years
ago,the Creative Commons project. Creative

etc.

Back to the literacy issue

People who are “reading disabled” because they couldn’t go to school could already access texts with TTS technology, as the UN Ambassador of Yemen pointed out at the above-mentioned WIPO discussion on Meeting the Needs of the Visually Impaired Persons: What Challenges for IP? last July.

And soon, when Google opens this automated captioning to everyone, they will be able to say what they want to write in a YouTube video – which can be directly made with any web cam, or even cell phone cam – auto-caption it, then retrieve the caption text file.

True, to get normal text, the time codes should be deleted and the line breaks removed. But learning to do that should be far easier than learning to fully master the use of the alphabet.
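Deleting the time codes and rejoining the line breaks is mechanical enough to script. Here is a minimal sketch, assuming caption files in the two-timestamps-per-line format quoted above (the function name and the sample file name are my own, not part of any Google tool):

```python
import re

# Matches time-code lines such as "0:00:06.009,0:00:07.359"
TIMECODE = re.compile(r"^\d+:\d{2}:\d{2}\.\d{3},\d+:\d{2}:\d{2}\.\d{3}$")

def captions_to_text(caption_file_contents: str) -> str:
    """Strip time codes and line breaks from a downloaded caption file,
    returning the transcript as one plain run of text."""
    pieces = []
    for line in caption_file_contents.splitlines():
        line = line.strip()
        # Skip blank separator lines and time-code lines; keep caption text.
        if not line or TIMECODE.match(line):
            continue
        pieces.append(line)
    return " ".join(pieces)

# Usage: print(captions_to_text(open("captions.txt").read()))
```

Run on the excerpt above, this would yield “and think about what else we could be doing. So, the second thing we could be doing is …” as a single paragraph.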

Recapitulating:

  • Text-to-Speech, a tool first conceived to grant blind people access to written content, can also be used by other reading-disabled people, including people who can’t use the alphabet convention because they were unable to go to school and are thus labeled “illiterate.”
  • Speech-to-Text, a tool first conceived to grant deaf people access to audio content, is about to become far more widely available and far easier to use than it was until recently, thus potentially enabling people who can’t use the alphabet convention because they were unable to go to school, and are thus labeled “illiterate,” to write.

This means that we should reflect on the meanings of the words “literate” and “illiterate.”

Now that technologies first meant to enable people with medically recognized disabilities to use and produce texts can also do the same for those who are “reading disabled” by lack of education, industries and nations presently opposed to the Treaty for Improved Access for Blind, Visually Impaired and other Reading Disabled Persons should start thinking beyond “strict copyright” and consider the new markets that this treaty would open up.

Encounter: Sidewiki – Handy Tool or Destructive Weapon?

Introduction: This encounter begins with an idea, a “bump,” from Harry Keller, which first appeared in an etcnews-l listserv post on 8 Oct. 2009. To participate in this encounter, post a comment. I’ll append most or all of the comments to this post as they’re published. -js


Harry Keller, editor, science education, on 8 Oct. 2009, 9:26AM: Has anyone seen Google’s Sidewiki feature?  It’s pretty scary if you begin to think about it as this blogger [Paul Myers, TalkBiz News: The Blog] has in “Google Steals the Web.” Can any of you calm my trepidation regarding this potentially serious problem?

Claude Almansi, editor, accessibility issues, and ETC site accessibility facilitator, on 8 Oct. 2009, 11:28 pm: Errh, Jim and Harry, am I missing something in this very long and detailed article? In what way is Sidewiki new? The same commenting feature is offered by Diigo bookmarking, and Diigo has been around since 2006.

I used Diigo comments on Myers’ article: they can all be viewed by clicking here, even by people who don’t have a Diigo account, as I made them public, whereas only people who have installed Sidewiki on their Google toolbar can see the Sidewiki comments, according to that article. And people who install a browsing feature on a toolbar should be aware of what it does. I have twittered my bookmark, and I could have my Diigo bookmarks – and thus my public comments – showing on my Facebook page. And I’m not sure Diigo has the filtering-through-algorithms capacity Google Sidewiki has, however imperfect that might be.

So in what way is Google Sidewiki worse than Diigo comments? When I mentioned having added Diigo comments to the e-codices electronic version of Historia destructionis Troiae, the person in charge of the e-codices project did not get his knickers in a twist over my “stealing his content” or “hijacking his server.” He wrote that he had actually been trying to implement something like that for users to comment, just that the Diigo commenting solution was not user-friendly on pictures (true, it works better on text). Maybe he took it serenely because, being a tech person, he really understood how these commenting web apps work. Which apparently the author of the article only does partially.

These social commenting features can be fabulous for learning projects involving several schools, for instance. With Diigo, at least, you can choose to share your bookmarks or individual comments with a group, and Diigo has keyboard shortcuts that make it accessible to the blind. Teachers wishing to do the same with Sidewiki should check if there are the same shortcuts. I’m not going to install the needed toolbar as I already have the Diigo and Webdeveloper ones (not that I am a web developer, but it is very useful to get concrete evidence of why a site makes you queasy).

Harry Keller (10.9.09, 4:17AM): I have never seen Diigo before and hadn’t heard of Sidewiki before reading the article and have never used it. Given those caveats, here’s my take.

For the crowd that has Google toolbar, a huge number of people, they will be asked to add Sidewiki. The pitch will be seductive: social commenting. Sounds great. Most will do so. I’d expect millions of Sidewiki users in short order.

If I don’t have Sidewiki, then comments about my site will be invisible to me. However, any member who arrives at my site will see them all right there along with my intended site. In other words, they don’t have to go to Sidewiki.com to view the comments. They only have to go to smartscience.net, possibly as the result of a Google search. The right portion of my site will be cut off to make room for the Sidewiki, which will not be cut off at all. Those comments will be more prominent than my site. Whenever I send out http://www.smartscience.net to anyone, the possibility exists that they will see any comments. Those comments can be from anyone, including competitors.

It would be very easy for a competitor to use a newly created, fake Gmail account to leave false comments about my service on the site. Then, every Sidewiki user who uses my site would see the negative comment and would probably believe it. Sidewiki users might see the comment and repeat it on various other social networks. I would have no recourse except to complain to Google, who could take their time reviewing the complaint and even decide to leave it alone. However, in a very sinister twist, I wouldn’t even know about the sabotage unless I joined Sidewiki.

So, the problem has two parts. The first part is that your own URL would deliver Sidewiki, not some other Google-owned URL. The second part is the ease with which those who would do you grief can sabotage your Sidewiki “enhanced” site without your knowledge.

John Adsit, editor, curriculum & instruction, K-12, on 9 Oct. 2009, 4:42AM: I share Harry’s concern, with a sense of real despair.

Let’s say Google comes to its senses and decides not to do it.

So what? If it can be done, someone will do it. Perhaps the solution is to get Google and other organizations with decent reputations to shun that technology and leave it to some organization so unscrupulous that anything appearing in it will have no credibility to the average viewer, who will see it as a nuisance rather than a valuable asset.

Claude Almansi (10.9.09, 5:45AM):

[Harry Keller:] I have never seen Diigo before and hadn’t heard of Sidewiki before reading the article and have never used it. Given those caveats, here’s my take.

For the crowd that has Google toolbar, a huge number of people, they will be asked to add Sidewiki. The pitch will be seductive: social commenting. Sounds great. Most will do so. I’d expect millions of Sidewiki users in short order.

If I don’t have Sidewiki, then comments about my site will be invisible to me. However, any member who arrives at my site will see them all right there along with my intended site. In other words, they don’t have to go to Sidewiki.com to view the comments. They only have to go to smartscience.net, possibly as the result of a Google search. The right portion of my site will be cut off to make room for the Sidewiki, which will not be cut off at all. Those comments will be more prominent than my site. Whenever I send out http://www.smartscience.net to anyone, the possibility exists that they will see any comments. Those comments can be from anyone, including competitors.

It would be very easy for a competitor to use a newly created, fake Gmail account to leave false comments about my service on the site.

Same with Diigo.

Then, every  Sidewiki user who uses my site would see the negative comment and would probably believe it.

If they are Sidewiki users themselves – and they have to be in order to view the comments – they’ll be able to tell the difference between web page and comments.

Sidewiki users might see the comment and repeat it on various other social networks.

From what I understood, you can only share your own Sidewiki comments to other social networks.

I would have no recourse except to complain to Google, who could take their time reviewing the complaint and even decide to leave it alone. However, in a very sinister twist, I wouldn’t even know about the sabotage unless I join Sidewiki.

Same with Diigo sticky notes: you don’t see them unless you are signed into your Diigo account.

So, the problem has two parts. The first part is that your own URL would deliver Sidewiki, not some other Google-owned URL. The second part is the ease with which those who would do you grief can sabotage your Sidewiki “enhanced” site without your knowledge.

Sorry if I was not clear: you can view Diigo sticky notes on the Diigo bookmarking page, AND you can also view them on the site where they were made, just like Sidewiki comments. For example, click here (from <http://lenovosocial.com/discover/social-site-reviews/diigo/>). They are actually far more invasive than Sidewiki, which stays put in a column on the left of the commented page.

So the Diigo sticky notes present exactly the same potential risks as the Google Sidewiki comments – with the added risk that they are more invasive and can be collected, on the bookmarking page, even by people who don’t have a Diigo account. I made 37 frigging Diigo sticky notes on the article you sent, btw.

In a nutshell: the only way to prevent folks from commenting on what you write is not to publish it. Whether the authors liked it or not, people wrote comments directly on medieval manuscripts and on printed books; now they do so on websites too.

BTW, before Diigo and Sidewiki, there was – and still is – Gabbly, which allows you to add a chat to any web page by adding “gabbly.com/” in front of any URL – for instance <http://gabbly.com/etcjournal.wordpress.com/>. Click here for an example.

That could be annoying, too, couldn’t it? I remember joking with a friend about creating such a Gabbly chat to promote The Pirate Bay on the site of the MPAA or RIAA. And I did that without registering at Gabbly. If you register, you can make money from ads as well. Gabbly has been around for years, and as far as I know there have not been any protests about it.

Claude Almansi (10.9.09, 6:16AM): John, from a search about Gabbly, I got to Gooey, which is apparently the mother of all these applications that allow you to write comments on a web page. So the tech has been around since 1999. As to your suggestion, people who have been using Diigo sticky notes for years for entirely legitimate research and teaching purposes would not take kindly to Diigo removing this great feature just because some people have only now realized that it has been possible for 10 years to add comments to a website unbeknownst to the site’s author, and don’t like it.

Moreover, this is the same tech that enables online captioning of videos e.g. at Overstream.net. And if you can add captions, you can also add comments. So should that captioning possibility be scrapped too?

Why don’t you and Harry trust users’ intelligence a bit more? If a user can view Sidewiki, it means s/he has it on her/his toolbar and most likely uses it too, so s/he knows how it works. Therefore it is unlikely s/he’ll be so daft as to confuse the Sidewiki content with the content of the page.


Bonnie Bracey Sutton, editor, policy issues, on 9 Oct. 2009, 6:22AM: You guys must have a lot more time than I have, but also you are not in DC where I have too many meetings to go to. I will check Sidewiki out.

Harry Keller (10.9.09, 6:53AM): I see the problem as simply that your web site URL, without any added characters — in my case, http://www.smartscience.net — would be delivered with the added comment to all of those with Google toolbars who, upon urging by Google, accepted the invitation to add Sidewiki. That fact might be just dandy for people publishing research papers and those making Internet noise with their blogs (as I have). However, it invites disaster for all organizations, for-profit and non-profit, who publicize their missions, products, and services on the Internet.

Don’t like the Red Cross? Just Sidewiki-swipe it. Upset that your competitor landed that big contract that you should have? Sidewiki-swipe their site with allegations about sexual misconduct or misappropriation of funds or any other negative stuff you can imagine. Assuming that lots of people have Sidewiki, the allegation could quickly go viral and be unstoppable. Such allegations in blogs are shrugged off these days. You have to find your way to the particular blog, after all. A competitor’s blog would have to be found and would not contain really nasty claims about competition because it would reflect poorly on the company.

On the other hand, the ugly comments would appear to every Sidewiki user who happened upon your site in its native, unamended, and true location in web space: its URL.

I really do not like the idea of my own URL being directly contaminated with stuff over which I have no control. Control of your own web site is a central concept of the Internet.

Furthermore, I have devoted my technical resources for ten years to creating a Web 2.0 resource (although it wasn’t called that when I began). Every customer starts at a specific URL in order to use our service. I may be able to take down my marketing URL, smartscience.net, and avoid Sidewiki problems there. However, I cannot take down my production site. Someone could put pornographic references there for any 6th-grade student using my service to see, and it would reflect on me personally. My business would disappear in a heartbeat. A decade of outrageously hard work and self-denial would be snuffed out before I even knew what was happening.

As I said before, I don’t know about Diigo or the others, but I suspect that they have not tied into the unvarnished URLs of web sites as has Google so that when your browser hits that site, whether you care or not, the Sidewiki panel appears. You should have to use another URL or specifically ask for comments. My web site is my web site and my URL should display it and it alone.

David Lebow, on 9 Oct. 2009, 8:48AM: I remember people speculating about third-party annotation in the early ‘90s with the advent of the Mosaic browser. Even then, people were concerned about “trespassing” as a problem.

An alternative to Sidewiki is to provide the “owners” of websites and web pages with JavaScript libraries and APIs that add third-party annotation. Owners could then choose to add social annotation to their web pages while maintaining control over postings.

Claude Almansi (10.9.09, 9:51AM): Hi Harry. Diigo also works on the website URL, without any added characters. Click here for an example: a screenshot of the Google page for Sidewiki as viewed by Diigo.com subscribers with the Diigo side navigation bar activated and one set of sticky notes opened inside the website. I have circled in red the points where Diigo.com subscribers have added sticky notes. I took the screenshot with the set opened to the sticky note by “Magnolia South” for you, because it is the most critical of Sidewiki.

It is true that Diigo is partly safeguarded from people who would use its sticky notes for trolling, because its primary purpose is social bookmarking, and trolls are not much into that. But if the Google anti-troll filter proves efficient enough, trolls may well fall back on Diigo.

[Harry Keller:] Don’t like the Red Cross? Just Sidewiki-swipe it. Upset that your  competitor landed that big contract that you should have? Sidewiki-swipe their site with allegations about sexual misconduct or misappropriation of funds or any other negative stuff you can imagine. Assuming that lots of people have Sidewiki, the allegation could quickly go viral and be unstoppable. Such allegations in blogs are shrugged off these days. You have to find your way to the particular blog, after all. A competitor’s blog would have to be found and would not contain really nasty claims about competition because it would reflect poorly on the company.

On the other hand, the ugly comments would appear to every Sidewiki user who  happened upon your site in its native, unamended, and true location in web space: its URL.

Sidewiki users would immediately know that the comments were done by another Sidewiki user, and would take them with the same skepticism as they would if they stumbled upon an external blog entry. (…)

As I said before, I don’t know about Diigo or the others, but I suspect that they have not tied into the unvarnished URLs of web sites as has Google so  that when your browser hits that site, whether you care or not, the Sidewiki panel appears. You should have to use another URL or specifically ask for comments. My web site is my web site and my URL should display it and it alone.

Granted, Gabbly does not tie into the “unvarnished URLs of websites,” as you have to add gabbly.com after http:// to create the Gabbly chat. But – see above – Diigo does, and has done so for 3 years now, whether you like it or not. So far no Diigo user has added sticky notes to your web site, but it might happen, just as with Sidewiki. If it does, don’t take it too hard. If the comments are legit and constructive, let them be. If they are not, ask the administrators of whichever platform is involved to remove them.

Again, please trust people’s intelligence a bit more: even if the comments appear on the left of your website at its unvarnished URL, they are quite clearly external comments that have nothing to do with the site – both with Diigo and Sidewiki.

PS re your Red Cross example: I confess I have been sorely tempted to add a sticky note to their job application page as opened with Firefox, which tells you that you have to use Internet Explorer – quoting the staff officer who told me to go to a cybercafe and use IE there if I don’t have it, as there is no other way to apply for a Red Cross job. I didn’t, because folks using IE would not see that message for non-IE users, and because, well, I want to use Diigo for constructive things.

Harry Keller (10.9.09, 12:20PM): Hi Claude, I just have to assume, without any other information, that Diigo is potentially evil too. Given that this all is true, then the major difference is that Google is ubiquitous and so, much more dangerous. My issue is with people searching for “virtual labs” and clicking on the link to smartscience.net, seeing the stuff on the side, and assuming that it’s all true or that I’m somehow responsible.

Among the means to thwart this problem are some clever JavaScript code (defeated if the user has JavaScript off) and simply swarming your pages with your own positive comments, drowning out any others.

Although the potential for this problem has existed since the first search engine, two big things had to happen since the advent of the Internet. First, commercial sites had to become possible; in the early days, they would be flamed, as purists believed that the Internet should not carry commercial messages.

The second was the virtual monopoly of one search engine. I used to think it would be Alta Vista. Was I ever wrong. When a company name becomes a verb, you know trouble is just around the corner.

Although tagging of web sites sounds great in theory, it’s really a dangerous way to run things.

Green Computing – Clippings from the Web

By John Thompson
Editor, Green Computing

Green computing. Green IT. Whatever you call it, it still means the same thing – doing what you can to reduce the carbon footprint associated with technology use, whether at home, on the office desk, or in the IT department’s lair.

Here are a few snippets from recent Web sites, blogs, etc. Click on the associated link to finish reading “the rest of the story” (as Paul Harvey would say).

Green Computing – Laptop Only Offices

There are ways to go green in IT that might not be obvious. Some businesses may have already made the change to laptops for reasons other than portability and a traveling workforce. Laptops are power savers, and saving power is a green goal. Let’s look at how laptops can help you go green.

http://superbatteryy.blogspot.com/2009/07/green-computing-laptop-only-offices.html

MIS 1 Assignment4: Green Campus Computing

The growing use of computers on campus has caused a dramatic increase in energy consumption, putting negative pressure on CU’s budget and the environment. Each year more and more computers are purchased and put to use, but it’s not just the number of computers that is driving energy consumption upward. The way that we use computers also adds to the increasing energy burden.
http://emilios-blog-emilio.blogspot.com/2009/07/mis-1-assignment4-green-campus.html

Seven Design Considerations for a Green Data Centre

IT departments are under increasing scrutiny and pressure to deliver environmentally sound solutions. Large data centres are one of the most significant energy consumers in an organisation’s IT infrastructure, so any measures that can be taken to reduce this consumption (and therefore also carbon dioxide emissions) will have a positive impact on an organisation’s environmental footprint.

http://expressiongreen.com/2009/07/19/seven-design-considerations-for-a-green-data-centre/

Green Campus Computing

Green computing is the study and practice of using computing resources efficiently. The primary objective of such a program is to account for the triple bottom line, an expanded spectrum of values and criteria for measuring organizational (and societal) success. The goals are similar to green chemistry: reduce the use of hazardous materials, maximize energy efficiency during the product’s lifetime, and promote recyclability or biodegradability of defunct products and factory waste.

http://juvz14.blogspot.com/2009/07/what-is-green-computing-green-computing.html

Google banks on data centre with no chillers

Google has taken a radical new approach when it comes to cooling data centres. The search giant has opened a unique data centre in Belgium that has no backup chillers installed but, instead, relies totally upon free air cooling to keep its servers cool.

http://www.computerworld.com.au/article/311616/google_banks_data_centre_no_chillers

The Sustainability Potential of Cloud Computing: Smarter Design

If you listen to venture capitalists and tech gurus, cloud computing is “the new dot-com,” the “biggest shift in computing in two decades” or even the “Cambrian explosion” of the technology era. Among its other heavenly attributes, the cloud is being touted for its ability to address the enormous need for energy efficiency of IT’s own footprint.

http://ow.ly/15IfwE

Greening the Internet: How Much CO2 Does This Article Produce?

Twenty milligrams – that’s the average amount of carbon emissions generated from the time it took you to read the first two words of this article. Now, depending on how quickly you read, around 80, perhaps even 100 milligrams of CO2 have been released. And in the several minutes it will take you to get to the end of this story, the number of milligrams of greenhouse gas emitted could be several thousand, if not more.

http://www.cnn.com/2009/TECH/science/07/10/green.internet.CO2/index.html

Sustainable Desktop Computing

To achieve a sustained reduction in the energy consumption associated with desktop computers, we recommend that groups across the collegiate university work through these five steps:

Step 1: Estimate. First estimate how much electricity your desktop computing infrastructure will consume if computers are (a) left on all the time or (b) switched off at the end of the day.
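Step 1 is simple arithmetic. As a hedged sketch, the figures below (an 80 W desktop-plus-monitor draw, a 100-machine group, a 9-hour working day) are my own illustrative assumptions, not OUCS numbers:

```python
# Rough estimate of a desktop fleet's annual electricity use.
# Wattage, machine count, and working hours are assumed, not measured.
def annual_kwh(n_machines, watts=80, hours_on_per_day=24, days=365):
    """Total annual consumption in kWh for a group of desktops."""
    return n_machines * watts * hours_on_per_day * days / 1000

always_on = annual_kwh(100)                                   # (a) never switched off
switched_off = annual_kwh(100, hours_on_per_day=9, days=250)  # (b) off nights and weekends

print(f"always on:    {always_on:.0f} kWh/year")
print(f"switched off: {switched_off:.0f} kWh/year")
print(f"saving:       {always_on - switched_off:.0f} kWh/year")
```

With these assumptions, switching machines off at the end of the day cuts the estimate from about 70,000 to about 18,000 kWh a year; plug in your own measured wattages to get a figure worth acting on.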

Step 2: Research. Many groups within the university and around the world have implemented projects to reduce IT-related greenhouse gas emissions and costs. OUCS is working with these groups to write up a variety of approaches in the form of case studies.

Step 3: Implement. There are many tools you can implement to reduce IT-related electricity consumption. How you achieve this within your group will depend on the needs and skills of your users, and the hardware and software infrastructure you own.

Step 4: Communicate. You will need to encourage as many people as possible to “do their bit.” Behavioural change is likely to be a significant and critical part of any initiative that aims to improve environmental performance.

Step 5: Share. In step two we suggest you read about the work of other groups. In this last step we encourage you to share your experiences by documenting your approach in the form of a case study.

http://www.oucs.ox.ac.uk/greenit/desktop.xml


Happy reading "the rest of the story." Where and how did I find this material? Using TweetDeck, I set up Twitter searches on "green computing" and "green IT," although almost all the URLs were found in the "green computing" (without quotation marks) search. Using "green it" (with and without quotation marks) yielded mostly junk results. There was redundancy in the resulting tweets, as people retweet the same information, plus there were soft and hard sells for related products. But you also find information such as that cited above. You also might want to view my archived webinar, "Blueprint for Green Computing," found at the inaugural Virtual FOSE show's site, http://virtual.fose.com/. Registration is free.

Besides all this material, I hope that the comments on this post will point to more green computing sites chock-full of good information.

Google Book Search Settlement Unfair to Non-US Authors

By Claude Almansi
Editor, Accessibility Issues

Of Books and Vegetables

I first thought of calling this post "Of Books and Vegetables" because, when I half woke up the morning after I sent a letter of objection to the Google Book Search Settlement, I remembered Ms B. and the building site for a middle school in Cortona. The building activity had stopped just after the ground had been cleared, due to blocked funds. So for two years, Ms B., who lived on the other side of the street, used it to grow very tasty tomatoes and zucchini. No one objected to this private exploitation of the site: it would have been silly to waste its potential, and Ms B. generously shared her vegetables with friends and neighbours. When the funding issue was solved, the building started again and her vegetable patch was bulldozed.

I chose a more conservative title because the analogy with Google scanning out-of-print works in libraries is imperfect: if a big canning industry, instead of Ms B., had started to grow vegetables on the building site,  the borough of Cortona would probably have tried to levy a rental for this use. But the principle remains: it is silly, even immoral, to waste potential revenue – especially if its exploitation will serve the public.

Challenging or objecting?

So I did not object to the Google Book Search Settlement for the same proprietary reasons as the eminent cultural personalities who signed the Heidelberg Appeal (English text; German text with signatures):

Comic where someone says: Well, I'll be cross-eyed, Billy Goat! Cattle rustlers! This explains th' strange noises in th' ghost town above --- No wonder it was called Whispering Walls

Actually, I did not mean to object: at first I only challenged the Settlement Registry’s classifying as  “not commercially available” the Google scan of  Theatre of Sleep, an anthology my late husband Guido Almansi and I had edited and published with Pan Books in 1986 – and for which, after his death in 2001, I was the remaining mentioned copyright holder.

The physical book has indeed been out of print for years, but it contains many excerpts from in-copyright and commercially available works, which we had obtained permission to use in – and only in – the Pan Book version. Even if the Settlement foresees the possibility for right holders on such excerpts to claim them and forbid Google to display them, some right holders might not know about the Settlement, or not remember exclusive permissions granted decades ago; besides, the search engine of the Settlement registry often does not find the authors of such excerpts. Under our initial transactions for Theatre of Sleep, I am answerable to these right holders – no pact between parties who had nothing to do with these transactions can change this.

Another reason not to allow Google to display even the rest of the anthology under the Settlement's conditions was the absolutely unacceptable digital restriction on what paying users would be able to print or copy-paste from Google books. Such digital restriction measures just don't work: in Copying from a Google Book, I show how easy it is to do so even with theoretically restricted works. And if users pay for an e-book, they should be able to do what they want with it for personal use. So I made an unprotected e-version of what was legally offerable in Theatre of Sleep and uploaded it to archive.org/details/TheatreOfSleep – an in-progress version, because I will re-add the in-copyright texts when I get permission again.

Foreign authors and the Settlement

I could have left things at that, without objecting to the Settlement. But Peter Brantley of the Internet Archive pointed out in an e-mail that many people who are hit by the Settlement and utterly dislike it do not object because it is too complex and they have no legal training. This is my situation too, so I included the excessive complexity of the Settlement in my objections.

Then there was another reason for objecting. Guido and I also did an adaptation of Theatre of Sleep for the Italian readership – Teatro del Sonno – which was published by Garzanti in 1988, is out of print, and has been scanned by Google. For that one we had ceded the copyright to Garzanti, mainly because we did not want to send the permission requests all over again and Garzanti could do that more easily.

But Garzanti has not yet claimed Teatro del Sonno under the Settlement. Its editorial director explained to me that Italian publishers have chosen to wait for the result of the Final Fairness Hearing, in case it invalidates the Settlement: due to the imprecision of the Registry's search engine, checking what Google has and has not scanned is very time-consuming. Though they are very displeased with the Settlement, Italian publishers are apparently not objecting either. Above all, they are not systematically informing their authors about the Settlement.

Considering what little info non-US media gave about the Settlement, we are left with the impression that it was a US-only affair. However, this lack of information puts non-US authors at risk. As Mary Minow explained in Google Book Settlement, orphan works, and foreign works (LibraryLaw Blog, April 21, 2009):

The largest group of non-active rights holders are likely to be foreign authors. In spite of Google’s efforts to publicize the settlement abroad, I suspect that most foreign rights owners of out-of-print books will fail to register with the Registry.  There are a couple of reasons for this.  For one, they may not know that their book is still protected by copyright in the US.  In addition, they may assume that international network of reproduction rights organizations would manage their royalties, and not understand the need to register separately. . . .

If there is an injustice being done in the settlement, it is with foreign authors.

Also, if foreign right-holders do not object to the Settlement, how is the US Court to know that they disapprove of it?

Letter of objections

Hence my letter of objections, below. Not because I think my objections are representative of non-US objections, but because I believe it is important that non-US right-holders object to the Settlement if they disapprove of it, even if their reasons are very different. The deadline for doing so is Sept. 4, 2009; for the procedure, see "24. How can I object to the Settlement?" in the Settlement's FAQs.

Direct download links: PDF, ODT

Links

I have gathered / am gathering some bookmarks about the Settlement in diigo.com/user/calmansi/googlesettlement. Several of those, in particular about its repercussions outside the US, come from the very useful Google Settlement Information, Documents, News & Links page in Michael W. Perry's Inkling Books.

Unhide That Hidden Text, Please

By Claude Almansi
Staff Writer

Thanks to:

  • Marie-Jeanne Escure, of Le Temps, for having kindly answered questions about copyright and accessibility issues in the archives of the Journal de Genève.
  • Gabriele Ghirlanda, of Unitas, for having tested the archives of the Journal de Genève with a screen reader.

What Hidden Text?

Here, “hidden text” refers to a text file combined by an application with another object (image, video etc.) in order to add functionality to that object: several web applications offer this text to the reader together with the object it enhances – DotSUB offers the transcript of video captions, for instance:


Screenshot from “Phishing Scams in Plain English” by Lee LeFever [1].

In other applications, unfortunately, you get only the enhanced object: the text enhancing it remains hidden, even though it would grant access to content for people whose disabilities prevent them from using the object, and would enormously simplify research and quotation for everybody.

Following are three examples of object-enhancing applications using text but keeping it hidden:

Multilingual Captioning of YouTube and Google Videos

Google offers the possibility of captioning a video by uploading one or several text files with timed transcriptions. See the YouTube example below.

YouTube video captioning.

Google even automatically translates the produced captions into other languages, at the user's discretion. See the example below. (See "How to Automatically Translate Foreign-Language YouTube Videos" by Terrence O'Brien, Switch, Nov. 3, 2008 [2], from which the above two screenshots were taken.)

Option to automatically translate the captions of a YouTube video.

But the text files of the original captions and their automatic translations remain hidden.

Google’s Search Engine for the US Presidential Campaign Videos

During the 2008 US presidential campaign, Google beta-tested a search engine for videos on the candidates’ speeches. This search engine works on a text file produced by speech-to-text technology. See the example below.

Google search engine for the US presidential election videos.

(See “Google Elections Video Search,” Google for Educators 2008 – where you can try the search engine in the above screenshot – [3] and “‘In Their Own Words’: Political Videos Meet Google Speech-to-text Technology” by Arnaud Sahuguet and Ari Bezman. Official Google blog, July 14, 2008 [4].) But here, too, the text files on which the search engine works remain hidden.

Enhanced Text Images in Online Archives

Maybe the oddest use of hidden text is when people go to the trouble of scanning printed texts, producing both images of the text and a real text file from the scan, then using the text file to make the image version searchable – but hiding it. This happens with Google books [5] and with The European Library [6]: you can browse and search the online texts that appear as images, thanks to the hidden text version, but you can't print them or digitally copy-paste a given passage – unless the original is in the public domain, in which case both make a real textual version available.

Therefore, using a plain text file to enhance an image of the same content, but hiding the plain text, is apparently just a way to protect copyrighted material. And this can lead to really bizarre solutions.

Olive Software ActivePaper and the Archives of Journal de Genève

On December 12, 2008, the Swiss daily Le Temps announced that for the first time in Switzerland, they were offering online “free access” to the full archives – www.letempsarchives.ch (English version at [7]) – of Le Journal de Genève (JdG), which, together with two other dailies, got merged into Le Temps in 1998. In English, see Ellen Wallace’s “Journal de Geneve Is First Free Online Newspaper (but It’s Dead),” GenevaLunch, Dec. 12, 2008 [8].

A Vademecum to the archives, available at [9] (7.7 Mb PDF), explains that "articles in the public domain can be saved as images. Other articles will only be partially copied on the hard disk," and Nicolas Dufour's description of the archiving process in the same Vademecum gives a first clue about the reason for this oddity: "For the optical character recognition that enables searching by keywords within the text, the American company Olive Software adapted its software, which had already been used by the Financial Times, the Scotsman and the Indian Times." (These and other translations in this article are mine.)

The description of this software – ActivePaper Archive – states that it will enable publishers to “Preserve, Web-enable, and Monetize [their] Archive Content Assets” [10]. So even if Le Temps does not actually intend to “monetize” their predecessor’s assets, the operation is still influenced by the monetizing purpose of the software they chose. Hence the hiding of the text versions on which the search engine works and the digital restriction on saving articles still under copyright.

Accessibility Issues

This ActivePaper Archive solution clearly poses great problems for blind people who have to use a screen reader to access content: screen readers read text, not images.

Le Temps is aware of this: in an e-mail answer (Jan. 8, 2009) to questions about copyright and accessibility problems in the archives of JdG, Ms Marie-Jeanne Escure, in charge of reproduction authorizations at Le Temps, wrote, “Nous avons un partenariat avec la Fédération suisse des aveugles pour la consultation des archives du Temps par les aveugles. Nous sommes très sensibilisés par cette cause et la mise à disposition des archives du Journal de Genève aux aveugles fait partie de nos projets.” Translation: “We have a partnership with the Swiss federation of blind people (see [11]) for the consultation of the archives of Le Temps by blind people. We are strongly committed/sensitive to this cause, and the offer of the archives of Journal de Genève to blind people is part of our projects.”

What Digital Copyright Protection, Anyway?

Gabriele Ghirlanda, member of Unitas [12], the Swiss Italian section of the Federation of Blind people, tried the Archives of JdG. He says (e-mail, Jan. 15, 2009):

With a screenshot, the image definition was too low for ABBYY FineReader 8.0 Professional Edition [optical character recognition software] to extract a meaningful text.

But by chance, I noticed that the article presented is made of several blocks of images, one for the title and one for each column.

Right-click, copy image, paste into OpenOffice; export as PDF; then I put the PDF through ABBYY FineReader. […]

For a sighted person, it is no problem to create a document of good quality for each article, keeping it in image format, without having to go through OpenOffice and/or pdf. [my emphasis]

<DIV style="position:relative;display:block;top:0; left:0; height:521; width:1052" xmlns:OliveXLib="http://www.olive-soft.com/Schemes/XSLLibs" xmlns:OlvScript="http://www.olivesoftware.com/XSLTScript" xmlns:msxsl="urn:schemas-microsoft-com:xslt">
<div id="primImg" style="position:absolute;top:30;left:10;" z-index="2"><img id="articlePicture" src="/Repository/getimage.dll?path=JDG/1990/03/15/13/Img/Ar0130200.png" border="0"></img></div>
<div id="primImg" style="position:absolute;top:86;left:5;" z-index="2"><img id="articlePicture" src="/Repository/getimage.dll?path=JDG/1990/03/15/13/Img/Ar0130201.png" border="0"></img></div>
<div id="primImg" style="position:absolute;top:83;left:365;" z-index="2"><img id="articlePicture" src="/Repository/getimage.dll?path=JDG/1990/03/15/13/Img/Ar0130202.png" border="0"></img></div>
<div id="primImg" style="position:absolute;top:521;left:369;" z-index="2"><img id="articlePicture" src="/Repository/getimage.dll?path=JDG/1990/03/15/13/Img/Ar0130203.png" border="0"></img></div>
<div id="primImg" style="position:absolute;top:81;left:719;" z-index="2"><img id="articlePicture" src="/Repository/getimage.dll?path=JDG/1990/03/15/13/Img/Ar0130204.png" border="0"></img></div>

From the source code of the article used by Gabriele Ghirlanda: note the five image files he mentions.
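What Ghirlanda did by hand could in principle be scripted. The sketch below, assuming markup like the excerpt above, merely pulls the per-block image URLs out with Python's standard HTML parser, so that each block could be saved and run through OCR; it is an illustration, not a description of any existing tool:

```python
# Extract the per-column image URLs from Olive ActivePaper-style markup,
# using only the standard library's HTML parser.
from html.parser import HTMLParser

class ArticleImages(HTMLParser):
    def __init__(self):
        super().__init__()
        self.sources = []

    def handle_starttag(self, tag, attrs):
        # Every article block is an <img> with a src attribute.
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                self.sources.append(src)

snippet = ('<div id="primImg"><img id="articlePicture" '
           'src="/Repository/getimage.dll?path=JDG/1990/03/15/13/Img/Ar0130200.png" '
           'border="0"></img></div>')
parser = ArticleImages()
parser.feed(snippet)
print(parser.sources)
```

Run over a full article page, this would list all five image blocks, ready to be fetched and OCRed in one pass.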

Unhide That Hidden Text, Please

Le Temps‘ commitment to the cause of accessibility for all and, in particular, to find a way to make the JdG archives accessible to blind people (see “Accessibility Issues” above) is laudable. But in this case, why first go through the complex process of splitting the text into several images, and theoretically prevent the download of some of these images for copyrighted texts, when this “digital copyright protection” can easily be by-passed with right-click and copy-paste?

As there already is a hidden text version of the JdG articles for powering the search engine, why not just unhide it? www.letempsarchives.ch already states that these archives are “© 2008 Le Temps SA.” This should be sufficient copyright protection.

Let’s hope that Olive ActivePaper Archive software offers this option to unhide hidden text. Not just for the archives of the JdG, but for all archives working with this software. And let’s hope, in general, that all web applications using text to enhance a non-text object will publish it. All published works are automatically protected by copyright laws anyway.

Adding an alternative accessible version just for blind people is discriminatory. According to accessibility guidelines – and common sense – alternative access for people with disabilities should only be used when there is no other way to make web content accessible. Besides, access to the text version would also simplify life for scholars – and for people using portable devices with a small screen: text can be resized far better than a puzzle of images with fixed width and height (see the source code excerpt above).

Links
The pages linked to in this article and a few more resources are bookmarked under http://www.diigo.com/user/calmansi/hiddentext

Three Video Captioning Tools

By Claude Almansi
Staff Writer

First of all, thanks to:

  • Jim Shimabukuro for having encouraged me to further examine captioning tools after my previous Making Web Multimedia Accessible Needn’t Be Boring post – this has been a great learning experience for me, Jim
  • Michael Smolens, founder and CEO of DotSUB.com and Max Rozenoer, administrator of Overstream.net, for their permission to use screenshots of Overstream and DotSUB captioning windows, and for their answers to my questions.
  • Roberto Ellero and Alessio Cartocci of the Webmultimediale.org project for their long patience in explaining multimedia accessibility issues and solutions to me.
  • Gabriele Ghirlanda of UNITAS.ch for having tried the tools with a screen reader.

However, these persons are in no way responsible for possible mistakes in what follows.

Common Features

Video captioning tools are similar in many aspects: see the screenshot of a captioning window at DotSUB:

[Screenshot: DotSUB captioning window]

and at Overstream:

[Screenshot: Overstream captioning window]

In both cases, there is a video player, a list of captions, and a box for writing new captions, with boxes for the start and end times of each caption. The MAGpie desktop captioning tool (downloadable from http://ncam.wgbh.org/webaccess/magpie) is similar: see the first screenshot in David Klein and K. "Fritz" Thompson, Captioning with MAGpie, 2007 [1].

Moreover, in all three cases, captions can either be written directly in the tool or created by importing a file in which they are separated by a blank line – and they can be exported as a file too.
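That blank-line-separated block structure can be sketched as a tiny round trip in Python. This is a minimal illustration of the format (here in SubRip form), not any tool's actual importer:

```python
# Minimal SubRip (.srt) round trip: each caption is a block of
# index / "start --> end" / text, and blocks are separated by a blank line.

def to_srt(captions):
    """captions: list of (start, end, text) with 'HH:MM:SS,mmm' timestamps."""
    blocks = []
    for i, (start, end, text) in enumerate(captions, 1):
        blocks.append(f"{i}\n{start} --> {end}\n{text}")
    return "\n\n".join(blocks) + "\n"

def parse_srt(srt):
    """Inverse: split on blank lines back into (start, end, text) tuples."""
    captions = []
    for block in srt.strip().split("\n\n"):
        lines = block.splitlines()
        start, end = lines[1].split(" --> ")
        captions.append((start, end, "\n".join(lines[2:])))
    return captions

demo = [("00:00:01,000", "00:00:04,000", "Phishing is a common scam.")]
assert parse_srt(to_srt(demo)) == demo
```

A file that survives this round trip is exactly the kind of export you can carry from one captioning tool to another.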

What follows is just a list of some differences that could influence your choice of a captioning tool.

Overstream and DotSUB vs MAGpie

  • DotSUB and Overstream are online tools (only a browser is needed to use them, whatever the OS of the computer), whereas MAGpie is a desktop application that works with Windows and Mac OS, but not with Linux.
  • DotSUB and Overstream use SubRip (SRT) captioning [2], while MAGpie uses Synchronized Multimedia Integration Language (SMIL) captioning [3].
  • Overstream and Dotsub host the captioned result online, MAGpie does not.
  • The preparation for captioning is less intuitive with MAGpie than with Overstream or DotSUB, but on the other hand MAGpie offers more options and produces simpler files.
  • MAGpie can be used by disabled people, in particular by blind and low-sighted people using a screen reader [4], whereas DotSUB and Overstream don’t work with a screen reader.

Overstream vs DotSUB

  • The original video can be hosted at DotSUB; with Overstream, it must be hosted elsewhere.
  • DotSUB can also be used with a video hosted elsewhere, but you must link to the streaming flash .flv file, whereas with Overstream, you can link to the page of the video – but Overstream does not support all video hosting platforms.
  • If the captions are first written elsewhere and then imported as an .srt file, Overstream is more tolerant of coding mistakes than DotSUB – but this cuts both ways: some people might prefer to have a file rejected rather than end up with gaps in the captions.
  • Overstream allows more precise time-coding than DotSUB, and it also has a “zooming feature” (very useful for longish videos), which DotSUB doesn’t have.
  • DotSUB can be used as a collaborative tool, whereas Overstream cannot yet – though Overstream's administrators plan to make this possible in the future.
  • With DotSUB, you can have switchable captions in different languages on one player. With Overstream, there can only be one series of captions in a given player.

How to Choose a Tool . . .

So how do you choose a tool? As with knitting, first make a sample: caption a short video with each of the different tools, since the short descriptive lists above cannot replace experience. Then choose the most appropriate one according to your aims for captioning a given video and your collaborators' availability, IT resources, and abilities.

. . . Or Combine Tools

The great thing with these tools is that you can combine them:

As mentioned in my former Making Web Multimedia Accessible Needn't Be Boring post, I had started captioning "Missing in Pakistan" a year ago on DotSUB, but had gone on using MAGpie for SMIL captioning (see the result at [5]). When Jim Shimabukuro suggested this presentation of captioning tools, I found my aborted attempt at DotSUB. As you can also do the captioning there by importing a .srt file, I tried to transform my ".txt for SMIL" file of the English captions into a .srt file. I bungled part of the code, so DotSUB refused the file. Overstream accepted it, and I corrected the mistakes using both. Results at [6] (DotSUB) and [7] (Overstream). And now that I have a decent .srt file for the English transcript, I could also use it to caption the video at YouTube or Google Video: see YouTube's "Video Captions: Help with Captions" [8]. (Actually, a freeware program called Subtitle Workshop [9] could apparently do this conversion cleanly, but it is Windows-only and I have a Mac.)
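For the curious, the kind of timestamp conversion I bungled can be sketched in a few lines of Python. The input format here ("H:MM:SS.hh", hundredths of a second with a dot, as MAGpie-style text files often use) is an assumption; check your own file before trusting it:

```python
# Convert a "H:MM:SS.hh" timestamp (hundredths, dot) into SubRip's
# "HH:MM:SS,mmm" form (milliseconds, comma).

def smil_time_to_srt(ts):
    h, m, s = ts.split(":")
    sec, _, frac = s.partition(".")
    # Pad hundredths (or tenths) out to milliseconds: "5" -> 500, "50" -> 500.
    ms = int((frac or "0").ljust(3, "0")[:3])
    return f"{int(h):02d}:{int(m):02d}:{int(sec):02d},{ms:03d}"

print(smil_time_to_srt("0:00:07.50"))   # 00:00:07,500
print(smil_time_to_srt("0:01:02.5"))    # 00:01:02,500
```

Getting the comma and the three-digit milliseconds right is exactly what stricter importers like DotSUB's reject a file over.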

This combining of tools could be useful even for less blundering people. Say one person in a project has better listening comprehension of the original language than the others, and prefers Overstream: s/he could make the first transcript there and export the .srt file, which could then be imported into DotSUB to produce a transcript that all the others could use to make switchable captions in other languages. If that person with better listening comprehension were blind, s/he might use MAGpie to do the transcript, and s/he or someone else could convert it to a .srt file that could then be uploaded either to DotSUB or Overstream. And so on.

Watch Out for New Developments

I have only tried to give an idea of three captioning tools I happen to be acquainted with, as correctly as I could. The complexity of making videos accessible and in particular of the numerous captioning solutions is illustrated in the Accessibility/Video Accessibility section [10] of the Mozilla wiki – and my understanding of tech issues remains very limited.

Moreover, these tools are continuously progressing. Some have disappeared – Mojiti, for instance – and other ones will probably appear. So watch out for new developments.

For instance, maybe Google will make available the speech-to-text tool that underlies its search engine for the YouTube videos of the candidates in the US presidential election (see "'In Their Own Words': Political Videos Meet Google Speech-to-text Technology" [11]): transcribing remains the heavy part of captioning, and an efficient, preferably online, speech-to-text tool would be an enormous help.

And hopefully, there will soon be an online, browser-based and accessible SMIL-generating tool. SubRip is great, but with SMIL, captions stay put under the video instead of invading it, so you can make longer captions, which simplifies the transcription work. Moreover, SMIL is more than just a captioning solution: the SMIL "hub" file can also coordinate a second video for sign language translation, and audio descriptions. Finally, SMIL is a W3C standard, which means that when the standard gets upgraded, it still "degrades gracefully" and the full information is available to all developers using it: see "Synchronized Multimedia Integration Language (SMIL 3.0) – W3C Recommendation 01 December 2008" [12].

Green Computing: How to Reduce Our Personal Carbon Footprints

By John Thompson
Staff Writer
22 November 2008

"I'd put my money on the sun and solar energy. What a source of power! I hope we don't have to wait until oil and coal run out before we tackle that." That quote, widely attributed to Thomas Edison, sounds quite timely, as President-elect Obama made "green energy" part of his vision for America's future, including using clean energy as an engine to create millions of new "green collar" jobs. Over the course of the 2008 presidential campaign, the general public heard about his vision for clean energy and should be primed for that issue to be addressed in his new administration. But apart from what government and business can and should do to address the energy situation, what can and should individuals do to support this initiative? Specifically, what can individual computer users do to reduce their personal carbon footprints?

However, it seems somewhat self-defeating to embark on new, costly initiatives to reduce energy costs without also first examining ways in which we can make cost saving adjustments on the personal level. With over 300 million people in the USA, if each person, or even each office or household, made a conscious effort to examine his or her own use of energy, it would seem that the multiplier effect of millions of small daily changes would yield significant results on a national scale. What are some changes that individuals can make to support green computing and reduce their technology carbon footprints? Let’s look at some ways to start making a difference by picking just a few low-hanging fruits.

Power management. Keep computers and printers turned off unless you're using them. Or at least set computer and monitor power management controls to enter low-power "sleep" mode when your system is not actively in use. While a PC does use some power in sleep mode, it's very small – maybe 10% of what's needed when it's running at full power. Also, cut down on the time a computer operates unattended before it goes into sleep mode. The US Department of Energy estimates that a PC wastes up to 400 kilowatt-hours of electricity a year just by functioning at full power even though it's not being used. Dell reportedly has saved almost $2 million and avoided 11,000 tons of CO2 emissions in one year through a global power-management initiative that calls for its employees to say "nighty-night" daily to their PCs by changing the power management setup so their PCs enter sleep mode each night.
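A back-of-envelope check on those figures, using the DOE's 400 kWh waste estimate and the roughly 10% sleep-mode draw mentioned above; the electricity price is my own assumption:

```python
# Per-PC savings from enabling sleep mode, given the figures in the text.
wasted_full_power_kwh = 400    # kWh/year an idle PC wastes at full power (DOE estimate)
sleep_fraction = 0.10          # sleep mode draws roughly 10% of full power
saved_kwh = wasted_full_power_kwh * (1 - sleep_fraction)

price_per_kwh = 0.12           # assumed US$ per kWh; adjust for your utility
print(f"saved per PC: {saved_kwh:.0f} kWh, about ${saved_kwh * price_per_kwh:.2f} a year")
```

That is roughly 360 kWh per machine per year, which multiplied across an office or a campus is exactly how Dell's numbers get into the millions.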

E-mail. Look at our use of e-mail, which continues to explode. Personally, a quick count shows that I have sent close to 400 personal and business-related e-mails this month, and there's still a week left in the month. And that number is a small fraction of the hundreds I receive each day and of the estimated several hundred billion sent daily worldwide. Use e-mail to minimize paper use, but don't routinely print messages. Add a note at the bottom of your e-mails asking recipients to save paper by thinking twice before printing them off their screens. I've seen administrators who have their administrative assistants print out all e-mails so they can read and maybe reply to them. Suggest outsourcing your organization's e-mail to Gmail, as Google probably runs its data centers much more economically, and with a smaller footprint, than you do. And switching can generate cost savings and maybe added e-mail features for users.

Online learning. By clicking to enter your course instead of driving to campus, you do away with commuting and parking hassles while also eliminating your car's exhaust emissions. A 2005 report on the environmental impact of providing higher education courses found that, "on average, the production and provision of the distance learning courses consumed nearly 90% less energy and produced 85% fewer CO2 emissions" (p. 4). Online courses also typically reduce paper use, since traditional classroom courses still consume large amounts of paper (e.g., handouts). Unless your instructor assigns a textbook (many of the online courses I teach have not used a print text in years), everything is digital, delivered through e-mail or the Internet. So if you have a choice between taking a college course in a traditional campus setting and accessing your course from work or home, consider the online option. No campus presence equates to less energy use, but be sure to use the power management settings on your computer system and resist the temptation to print out all your online reading assignments.
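To get a feel for what those percentages mean per course, here is a minimal sketch. The 90% energy and 85% CO2 reduction factors are from the 2005 report quoted above; the campus-course baseline figures are hypothetical placeholders of my own, chosen only for illustration.

```python
# Apply the report's reduction factors to a hypothetical campus baseline.
# Baselines below are illustrative assumptions, NOT figures from the report.
campus_energy_kwh = 1000.0   # hypothetical energy cost of one campus course
campus_co2_kg = 300.0        # hypothetical CO2 emissions of one campus course

ENERGY_REDUCTION = 0.90      # distance learning: ~90% less energy (report)
CO2_REDUCTION = 0.85         # distance learning: ~85% fewer emissions (report)

online_energy_kwh = campus_energy_kwh * (1 - ENERGY_REDUCTION)
online_co2_kg = campus_co2_kg * (1 - CO2_REDUCTION)

print(f"Online course energy: {online_energy_kwh:.0f} kWh")
print(f"Online course CO2: {online_co2_kg:.0f} kg")
```

Whatever the true baseline is for a given institution, scaling it by those reduction factors shows why the per-student difference adds up quickly across an entire enrollment.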

All these suggestions sound doable to most folks. In addition, there are many other simple ways to reduce your personal energy use. But we aren’t talking about going totally “green” and parking your car and walking everywhere. We’re simply looking at ways you—the person reading this blog online right now—can start making a small but significant difference.

Then why are most of these simple strategies not being implemented? Why are computer users not seeking to achieve the TBL—triple bottom line (economic, environmental and social)—and save money, help protect the environment, and do what's right for society? Is it strictly an "I didn't know" reason, or are there other obvious and not so obvious reasons that individuals are not taking personal responsibility for reducing their own carbon footprints? Is this a nation (world?) of people with little awareness of these small yet effective changes, or just plain lazy folks waiting for government and business to light the way and lead us to reduced energy consumption? What do you think?

Oh, that opening quotation? That's from Thomas Edison—in 1931. One would hope for more progress on sustainable energy in the near future than in the past 77 years. Don't leave it up to government or your boss. Little things YOU can do can make a big difference. Making small, seemingly insignificant changes can yield huge cumulative results. Green computing is just a change of habit.