Meet the Endless Summer – A Review of ED-MEDIA 2009

By Stefanie Panke
Editor, Social Software in Education

The 21st annual World Conference on Educational Multimedia, Hypermedia & Telecommunications (ED-MEDIA) attracted 1200 participants from 65 countries. A diverse crowd, including K-12 teachers, university faculty members, researchers, software developers, instructional designers, administrators and multimedia authors, came together at the Sheraton Waikiki Hotel from the 22nd to 26th of June with a common goal: to share the latest ideas on e-learning and e-teaching in various educational settings and at the same time enjoy the aloha spirit of tropical Oahu, Hawaii.

Organized by the Association for the Advancement of Computing in Education (AACE), the annual conference takes place at varying locations in the US, Europe and Canada. Thanks to funding by the German Academic Exchange Service (DAAD), I was able to join my colleagues in Hawaii to present two current research projects on social tagging and blended learning and en passant absorb the international flair and information overflow that go together with a packed conference program.


The attendees experienced a full program. In addition to various invited lectures, 210 full papers and 235 brief papers were presented, complemented by numerous symposia, round tables, workshops and an extensive poster session. The conference proved to be highly competitive, with an acceptance rate of 37% for full paper submissions and 56% for brief papers. Eleven submissions were honored with an outstanding paper award. My favorite was the work of Grace Lin and Curt Bonk on the Wikibooks community, which can be downloaded from their project page.

Beginning with Hawaiian chants to welcome the participants at the official conference opening and the local adage that “the voice is the highest gift we can give to other people,” audio learning and sonic media formed a recurring topic. Tara Brabazon’s keynote challenged the widely held perception that “more media are always better media” and argued for carefully developed sonic material as a motivating learning format. She illustrated her point with examples and evaluation results from a course on methods of media research (see YouTube excerpt below). Case study reports from George Washington University and Chicago’s DePaul University on iTunes U raised questions about integration with learning management systems, single sign-on procedures and access management.

Among the invited lectures, I was particularly interested in the contribution of New York Times reporter Alex Wright, who reflected upon the history of hypertext. The author’s web site offers further information on The Web that Wasn’t. Alan Levine, vice-president of the Austin-based New Media Consortium, was clearly the darling of the audience. Unfortunately, his talk took place in parallel with my own presentation on social tagging, but Alan has created a web site with his slides and hyperlink collection that gives a vivid overview of “50+ Web 2.0 ways to tell a story.”

A leitmotif of several keynotes was the conflict between open constructivist learning environments on the one side and instructional design models and design principles derived from cognitive psychology on the other. Stephen Downes advocated the learning paradigm of connectivism and praised self-organized learning networks that provide, share, re-use and re-arrange content. For those interested in further information on connectivism, an open content class starts in August 2009. This radical turn to free-flowing, egalitarian knowledge networks was not a palatable idea for everyone. As an antagonist to Downes, David Merrill presented his “Pebble in the Pond” instructional design model that, similar to “ADDIE” (analysis, design, development, implementation, evaluation), prescribes clear steps and predictable learning outcomes. Tom Reeves, in turn, dedicated his keynote to a comprehensive criticism of multimedia principles derived from cognitive load theory, picking up on an article by Kirschner, Sweller & Clark (2006), “Why Minimal Guidance Does Not Work . . . .” The audience, in particular the practitioners, reacted to this debate true to the Goethe verse “Prophet left, prophet right, the world child in the middle.” As Steve Swithenby, director of the Centre for Open Learning of Mathematics at the Open University (UK), posted in the ED-MEDIA blog: “Well, actually, I want to do both and everything in between. I can’t see that either is the pattern for future learning – both are part of the ways in which learning will occur.”

With a blog, Twitter feed, Flickr group and Ning community, the conference was ringing with a many-voiced orchestra of social software tools. Gary Marks, member of the AACE international headquarters and initiator of the new ED-MEDIA community site, announced several planned activities to foster interaction. So far, however, the few contributions are dedicated to potential leisure activities in Hawaii. The presentation “Who We Are” by Xavier Ochoa, Gonzalo Méndez, and Erik Duval offered a review of ED-MEDIA’s existing community ties through a content analysis of paper submissions from the last 10 years. An interactive representation of the results is available online.

Twitter seems to have developed into a ubiquitous companion of conference talks. Whether the short messages add to the academic discourse and democratize ex cathedra lectures, or divert attention from the presenter and replace substance with senseless character strings, remains controversial. Accordingly, Twitter received mixed responses among the conference attendees and presenters. In the end, 180 users joined the collective micro-blogging and produced approximately 2500 postings — an overview may be found at Twapper. As a follow-up to this year’s ED-MEDIA, participants were invited to take part in an online survey designed by the Austrian/German Twitter research duo Martin Ebner and Wolfgang Reinhardt. The results will hopefully further the understanding of the pros and cons of integrating microblogging into e-learning conference events.

The AACE used ED-MEDIA as an occasion to announce plans for future growth. Already responsible for three of the largest worldwide conferences on teaching and learning (ED-MEDIA, E-LEARN and SITE), the organization is extending its catalog with two new formats. A virtual conference called GlobalTime will make its debut in February 2011. Additionally, the new face-to-face conference GlobalLearn targets the Asian and Pacific regions.

Is ED-MEDIA worth a visit? The sheer size of the event leads to a great breadth of topics, which often obstructs in-depth discussion of specific issues. At the same time, there is no better way to gain an overview of multiple current trends in compact form. Another plus: all AACE conference contributions are accessible online through the Education and Information Technology Library. The next ED-MEDIA will take place in Toronto, Canada, from June 28 to July 2, 2010.

Interview with Bert Kimura: TCC 2009 April 14-16

By Jim Shimabukuro
Editor

The following ETC interview with Bert Kimura, coordinator of the annual TCC (Technology, Colleges and Community) Worldwide Online Conference, the longest-running virtual conference, was conducted via email on April 7-8, 2009. Dr. Kimura, a professor at Osaka Gakuin University, orchestrates the completely online event from Japan. The theme of the 14th annual conference is “The New Internet: Collaborative Learning, Social Networking, Technology Tools, and Best Practices.” It will be held on April 14-16, 2009. TCC is a conference designed for university and college practitioners, including faculty, academic support staff, counselors, student services personnel, students, and administrators.

Question: What’s the theme of this year’s conference and, more specifically, why did you choose it?

The Internet world is abuzz with social networking and Web 2.0 technologies and, recently, their impact on teaching and learning. We thought that this focus would be appropriate for faculty, along with a look at what their colleagues (i.e., the early adopters) have been doing with these technologies in their classrooms.

TCC coordinators pay attention to the Horizon Report, published annually by the New Media Consortium and EDUCAUSE. Two years ago, the report cited social media as a technology expected to have a short-term impact on teaching and learning.

Question: What are the primary advantages of online vs. F2F conferences?

1. Ability to “attend” all conference sessions, including the ability to review sessions and content material.
2. No travel expenses or time lost from the workplace.
3. No need to obtain travel approval and submit complex documents to meet administration and/or business office requirements.

Question: What are some innovative or new features that you’ve added to TCC?

1. Live sessions have made the conference alive, i.e., people seem to like knowing that others are doing the same thing at the same time. Through these sessions they can interact with each other through the “back door,” a background chat that goes on simultaneously; this is the same as speaking to your neighbor while sitting in a large plenary session at a conference. Additionally, all sessions are recorded and made available exclusively to registered participants for review for six months.
2. Collaboration with LearningTimes. The LearningTimes CEO and president are very savvy technically and hands-on, and they understand how educators work, how tech support should be provided, and they provide an excellent online help desk to conference participants, especially presenters. Their staff support responds quickly and accurately to participant queries. They also respond graciously and encouragingly to those with much less technical savvy.
3. Paper proceedings (peer reviewed papers). We believe that this is one way to raise the credibility of this event and make it accessible to a broader higher education audience. Research institutions still require traditional (and peer reviewed) publications for tenure and promotion. However, by publishing entirely online, we also promote a newer genre. Proceedings can be found at: http://etec.hawaii.edu/proceedings/
4. Inclusion of graduate student presentations. We feel that we need to invest in the future and that TCC can also become a learning laboratory for graduate students. Grad students, especially if they are at the University of Hawai`i, may have much greater difficulty in getting to F2F conferences than faculty.

Question: What’s the secret to TCC’s success?

1. Great collaboration among faculty, worldwide, to bring this event together. We have over 50 individuals that assist in one way or another — advisory panel, proposal reviews (general presentations, e.g., poster sessions), paper proceedings editorial board, editors (writing faculty that review and edit descriptions), session facilitators, and a few others.
2. Quality of presentations — they are interesting, timely, and presented by peers, for and about peers.
3. Continuity and satisfaction among participants. Our surveys (see Additional Sources below) consistently show very high rates of satisfaction. We have managed to persist, and TCC is recognized as the longest running online (virtual) conference.
4. Group rates for participation — i.e., a single charge for an entire campus or system.
5. TCC provides a viable professional development venue for those that encounter difficulty with travel funding.

Question: What are the highlight keynotes, presentations, workshops, etc. for this year’s conference?

See tcc2009.wikispaces.com for the current conference program, presentation descriptions, etc. For keynote sessions, see http://tcc2009.wikispaces.com/Keynote+sessions

“Sakura in early morning. Taking out the trash was pleasant this morning.”
iPhone2 photo (8 April 2009) and caption by Bert Kimura. A view of cherry
blossoms from his apartment in Tsurukabuto, Nada-ku, Kobe, Japan.
See his Kimubert photo gallery.

Question: What’s the outlook for online conferences in general? Are they growing in popularity? Will they eventually surpass F2F conferences? If they’re not growing or are developing slowly, what are some of the obstacles?

At the moment, I’m not sure about the outlook — there are more individual virtual events and hybrid conferences, but not many more, if any, that are entirely online. One thing that is clear is that many established F2F conferences are adding or considering live streaming of sessions. Some openly indicate that a virtual presentation is an option.

The biggest challenge is the view that online events should be “free,” i.e., they should use funding models that do not charge participants directly. For an event that is associated with a public institution such as the University of Hawai`i (Kapi`olani Community College), it is impossible to use “micro revenue” funding models because institutional business procedures do not accommodate them easily.

Likewise, there is no rush among potential vendors to sponsor single online events. I have been talking with LearningTimes, our partners, to see if a sponsor “package” might be possible, where, for a single fee, a vendor might be able to sponsor multiple online conferences.

Even with 50+ volunteers, a revenue stream is vital to assure continuity. We operate on a budget that is one-twentieth or less of that for a traditional three-day F2F conference. Without volunteers, we could not do this.

Question: What are the prospects for presentations in different languages in future TCC conferences? If this is already a feature, has it been successful? Do you see it growing?

At the moment and with our current audience, there has not been an expressed need for this. However, if we were to target an event for a particular audience (e.g., Japan or China), then we would need to provide a support infrastructure, i.e., captioning and/or simultaneous interpretation.

On the other hand, the Elluminate Live interface that we use for live sessions does allow users to view the interface and menus in their native language. Elluminate is gradually widening its support of other languages. Having experienced the use of another language interface, Japanese, I find that it makes a big difference to see menu items and dialogue boxes in your native language.

Question: Tell us about your international participants. Has language been a barrier for their participation?

– So far language has not been a challenge. It might be that those who suspect that it will be don’t register. Some, I think, see this as an opportunity to practice their English skills.
– International participants are much fewer in number (less than 10 percent). We’ve had presenters from Saudi Arabia, the UK, Scandinavia, Brazil (this year’s keynoter), Australia, Japan, Sri Lanka, Canada, Israel, Abu Dhabi, Greece and India, as well as other countries.
– In some regions such as Asia (Japan is the example that I’m most knowledgeable about), personal relationships make the difference in terms of participation. On the other hand, it is difficult for a foreigner, even one who lives in the target country, to establish personal networks. I have been able to do this gradually over the past seven years — but it is still, by far, not enough to draw a significant number (even with complimentary passes) to the event. In Japan, the conference also coincides with the start of the first semester (second week of classes) and, consequently, faculty are busy with regular duties. If we were to hold this event in the first week of September, the effect would be the same for the US. We would have difficulty attracting good quality presentations and papers that, in turn, would draw audiences to the event.

Question: What’s in the works in terms of new features for future conferences?

– Greater involvement with graduate students as presenters and conference staff. It provides TCC with manpower and, at the same time, TCC serves as a valuable learning laboratory for students.
– Events, either regional or global, on occasion, to keep the community interacting with one another throughout the year.
– Some sort of ongoing social communications medium to keep the community informed or to share expertise among members on a regular basis (e.g., a blog, twitter, etc.)

[End of interview.]
_________________________
The official registration period for TCC 2009 is closed, but you can still register online at https://skellig.kcc.hawaii.edu/tccreg
The homepage for the event can be found at http://tcc.kcc.hawaii.edu

Additional Sources: For additional information about the annual TCC conference, see the following papers presented at the 2006 and 2008 Association of Pacific Rim Universities (APRU) Distance Learning and the Internet (DLI) conferences at Toudai and Waseda: Online Conferences and Workshops: Affordable & Ubiquitous Learning Opportunities for Faculty Development, by Bert Y. Kimura and Curtis P. Ho; Evolution of a Virtual Worldwide Conference on Online Teaching, by Curtis P. Ho, Bert Kimura, and Shigeru Narita.

A Model for Integrating New Technology into Teaching

By Anita Pincas
Guest Author

I have been an internet watcher ever since I first got involved with online communications in the late 1980s, when it was called computer conferencing. And through having to constantly update my Online Education & Training course since 1992, I’ve had the opportunity to see how educational approaches to the use of the internet, and after it the world wide web, have evolved. Although history doesn’t give us the full answers to anything, it suggests frameworks for looking at events, so I’d like to propose a couple of models for understanding the latest developments in technology and how they relate to learning and teaching.

First, there seem to be three broad areas in which to observe the new technology. This is a highly compressed sketch of some key points:

1. Computing as Such

Here we have an ongoing series of improvements which have made it ever easier for the user to do things without technical knowledge. There is a long line of changes from the early days before the mouse, when we had to remember commands (Control + X for delete, Control + B for bold, etc.), to the clicks we can use now, and the automation of many functions such as bullet points, paragraphing, and so on. The most recent and most powerful of these developments is, of course, cloud computing, which roughly means computer users being able to do what they need on the internet without understanding what lies behind it (in the clouds). Publishing in a blog, indeed on the web in general, is one of the most talked-about examples of this at the moment. The other is the ability to handle video materials. Both are having an enormous impact on the world in general in terms of information flow, as well as, more slowly, on educational issues. Artificial intelligence, robotics, and “smart” applications are on the way too.

2. Access to and Management of Knowledge

This has been vastly enlarged through a simple increase in quantity, which itself has been made possible by the computing advances that allow users to generate content, by relatively easy searches, and by open access publishing that cuts costs. Library systems are steadily renewing themselves, and information that was previously unobtainable in practice has become commonplace on the web (e.g., commercial and governmental matters, the tacit knowledge of everyday life, etc.). As the semantic web comes into being, we can see further advances in our ability to connect items and areas of knowledge.

3. Communications and Social Networking

We can now use the internet – whether on a desktop, laptop or small mobile device – to communicate 1 to 1, or 1 to many, or many to many, by voice, text and multimedia. And this can be either synchronous or asynchronous across the globe. The result has been an explosion of opportunities to network individually, socially and commercially. Even in education, we can already see that the VLE (virtual learning environment) is giving way to the PLE (personal learning environment), where learners network with others and construct and share their own knowledge spaces.

For teachers there is pressure not to be seen as out of date, but with too little time or help, they need a simple, structured way of approaching the new technological opportunities on their own. The bridge between the three areas of development should be a practical model of teaching and learning. I use one which the teachers who participate in my courses regularly respond to and validate. It sees learning and teaching in terms of three processes:

  1. acquiring knowledge or skills or attitudes,
  2. activating these, and
  3. obtaining feedback on the acquisition and activation.

I start off by viewing any learning/teaching event as a basic chronological sequence of 3Ps, one P for each of the three processes above (P1, P2 and P3).

But this basic template is open to infinite variation. This occurs by horizontal and vertical changes. The horizontal variations are: the order in which the three elements occur; the repetition of any one of them in any order; the embedding of any sequence within any other sequence. The vertical changes are in how each of the three elements is realised. So the model can generate many different styles of teaching and ways of learning, e.g., problem based, discovery based, and so on.

Finally, this is where the bridge to technology comes in. If a teacher starts from the perceived needs in the teaching and learning of the subject, and then systematically uses the 3Ps to ask:

  • What technology might help me make the content available to the learners? [P1]
  • What technology might help me activate their understanding/use of the new content? [P2]
  • What technology might help me evaluate and give the learners feedback on their understanding or use? [P3]

then we have needs driving the use of the technology, and not the other way around.

Here is a simple example of one way of organising problem based learning:


I have developed the model with its many variations in some detail for my courses. Things get quite complex when you try to cover lots of different teaching and learning needs under the three slots. And linking what the learners do, or want to do, or fail to do, etc., with what the teacher does is particularly important. Nevertheless, I find that my three areas of new development plus the 3P scaffolding make things rational rather than a let’s-just-try-this approach. Perhaps equally important, it serves as a template for observing reports of teaching methods and is therefore a very useful tool for evaluation. I have never yet found a teaching/learning event that could not be understood and analysed quickly this way.

An Interview with Terry Anderson: Open Education Resources – Part I

By Judith V. Boettcher

This is my first experience with doing a formal blog posting, although it has been on my list for a while. Jim Morrison suggested that this format, the new blog area for Innovate, might be a good way to more quickly share a recent interview on open education resources with Terry Anderson, director of the Canadian Institute for Distance Education Research and one of the keynoters at the 14th Annual Sloan-C International Conference on Online Learning in Orlando, Florida, on 4 November 2008. Terry’s keynote title was “Social Software and Open Education Resources: Can the crowd learn to build great educational content?”

One of my goals in going to the conference was to interview Terry about his perspectives on open education resources, and I was not disappointed! Terry was very gracious in meeting me over lunch the day of his keynote in the Caribe Royal restaurant. We had a broad-ranging conversation that included his personal experiences with making the book, The Theory and Practice of Online Learning, now in its second edition, freely available on the web. But more on that later.

So here goes. I hope you enjoy it!

JB: Terry, the abstract for your keynote emphasizes the promise of open education resources (OERs) to radically reduce the cost of educational content production and availability. Yet you seem to indicate that educators are not making much use of these resources. Why not?

TA: I don’t think that the availability of OERs has had much impact as yet. Lots of people download content, but how many people use it for serious academic work?

JB: Why do you think that is? What do you think has to happen before the adoption of OERs becomes more widespread and thus more of a force in keeping educational content costs down?

TA: I think there are two issues in the adoption and use of OERs: credentialing and the social support of learning experiences.

People work hard when they are motivated, and most people are motivated by credentialing or earning some kind of a certificate. What is needed for broadening the impact of open educational resources is to provide a pathway to credentialing. For example, MIT provides its open courseware resources, but no credentialing; it is up to other institutions to provide the pathway to credentialing. At Athabasca University, for instance, a significant number of our courses have what we call a “challenge alternative.” This means students can elect to write an equivalent final exam or complete the final requirements of a course without actually taking the course.

The second issue is that of social support. Many students find it difficult to learn on their own independent of a social environment. They like to struggle and engage with other learners as they learn. So one of our future tasks is likely to focus on developing educational experiences that include interaction with other students. For example, a learning experience that says, “Go to this site and do this with others who have started at about the same time.”

JB: What about the financial model for OER? How is this going to work? How do we ensure that people with expertise and talent get some compensation for their time and resources?

TA: What we have here, I think, is the same issue that exists with television, music and other creative industries. I think that micropayments are one approach that will work. We see this in the model from Apple with iTunes. Rather than buying a whole album, people select and pay 99 cents for one track of a CD. We need to experiment with additional models that reach out with micropayments to the long tail of the net, where there are millions of people online today. We need to begin looking beyond the 200 or so million people in the U.S.

JB: What about faculty members? Is the micropayments model going to be important for them?

TA: For many faculty it is not an issue. Even today, writing educational materials generally does not mean a lot of money coming back to faculty. And it does not matter as faculty are paid by the state or by the institution! Faculty may dream about writing a textbook that becomes a nationwide top seller, but it doesn’t happen very often.

I think we should move away from a production model where textbooks are written by one or two superstars to a production model with a much larger group of folks. Or move to a co-production model such as we do for research journals.

[Note: Terry’s thoughts on content production models made me think about the Wikipedia model. Maybe we should consider a Chemistry or Physics Resources Wikipedia? -JB]

JB: Terry, what about the current costs of textbooks and educational materials? Are the costs for educational materials really a big deal?

TA: It really depends a great deal on where you are. When I am working on my campus I have access through our institution’s library database agreements to almost any resource I am interested in using. And this is the same for most of my colleagues in the academic community. So, we start to forget that materials may not be similarly “available” to others. If you go to Africa where the tuition is $45, and the libraries do not have access to content and the textbook is $90 to $100, it’s a very big deal!

JB: Let’s return to the question in the subtitle of your keynote presentation. Terry, do you think the “crowd” can learn to build great educational content?

TA: Oh, I think “yes!” A colleague and I have been working on a book that is in a long gestation period. The book focuses on the “three aggregations of the many.”

The third “aggregation” is the collective, which is the “crowd.” A lot of people are using the net for many purposes. As they are doing this, they are all leaving traces of their activity: explicitly, by voting or buying or doing something; or implicitly, by which sites they visit and how long they stay on a site. Data mining and data capture techniques include tools that match what some people are doing with what other people are doing, with some automatic filtering going on. We are at the early stages of that. Collectives are being used as learning resources without enrolling in a class. This means that if you use the net fairly frequently, it will reward you.

[Continued in Part II]

Making Web Multimedia Accessible Needn’t Be Boring

By Claude Almansi
Guest Author
7 November 2008

Some people see the legal obligation to follow Web content accessibility guidelines – whether of the W3C or, in the US, of Section 508 – as leading to boring text-only pages. Actually, these guidelines do not exclude the use of multimedia on the web. They say that multimedia should be made accessible by “Providing equivalent alternatives to auditory and visual content” and in particular: “For any time-based multimedia presentation (e.g., a movie or animation), synchronize equivalent alternatives (e.g., captions or auditory descriptions of the visual track) with the presentation.”[1]

This is not as bad a chore as it seems, and it can be shared between several people, even if they are not particularly tech-savvy or endowed with sophisticated tools.

Captioning with DotSUB.com

Phishing Scams in Plain English, by Lee LeFever[2], was uploaded to DotSub.com, and several volunteers created the captions in different languages. The result can be embedded in a blog, a wiki or a web page. The captions also appear as copyable text under “Video Transcription,” which is handy if people discussing the video want to quote from it. Besides, a text transcription of a video also tends to raise its ranking in search engines, which still mainly scan text.

The only problem is that the subtitles cover a substantial part of the video.

Captioning with SMIL

This problem can be avoided by captioning with SMIL, which stands for Synchronized Multimedia Integration Language. A SMIL file, written in XML, works as a “cogwheel” between the original video and the other files (including captioning files) it links to and synchronizes.[3]

The advantage, compared to DotSUB, is that captions stay put in a separate field under the video and don’t interfere.
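To give a concrete idea of how this works, here is a minimal sketch of such a SMIL “cogwheel” file, assuming a SMIL 2.0 compatible player; the file names, region sizes and language codes are invented for illustration, and an actual player may expect a particular profile or timed-text format.

<smil xmlns="http://www.w3.org/2001/SMIL20/Language">
  <head>
    <layout>
      <!-- two stacked regions: the video on top, the captions below it -->
      <root-layout width="320" height="290"/>
      <region id="videoarea" top="0" width="320" height="240"/>
      <region id="captionarea" top="240" width="320" height="50"/>
    </layout>
  </head>
  <body>
    <par>
      <!-- the video and the timed-text caption track play in parallel -->
      <video src="missing_in_pakistan.mp4" region="videoarea"/>
      <switch>
        <!-- the player picks the caption track matching the user's language setting -->
        <textstream src="captions_it.rt" region="captionarea" systemLanguage="it"/>
        <textstream src="captions_en.rt" region="captionarea" systemLanguage="en"/>
      </switch>
    </par>
  </body>
</smil>

The par element keeps video and captions synchronized, while the two regions defined in the layout are what keep the captions in their own field under the video rather than on top of it.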

This is why, after having tried DotSUB, I chose the SMIL solution for “Missing in Pakistan – Sottotitolazione Multilingue” (multilingual subtitling).[4]

So far, the simple time-coded text files used for SMIL captioning still have to be made offline, though Alessio Cartocci – who conceived the player in the example above – has already made a beta version of an online SMIL captioning tool.
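For illustration, this is roughly what such a time-coded caption file can look like in RealText (.rt) format, the format the hypothetical SMIL sketch above points to; the timings and wording are invented, and whether RealText or another timed-text format is appropriate depends on the player the SMIL file targets.

<window type="generic" duration="00:01:00" width="320" height="50" wordwrap="true">
  <time begin="00:00:02.0"/><clear/>A first caption line appears two seconds in.
  <time begin="00:00:06.5"/><clear/>A second line replaces it at 6.5 seconds.
</window>

Each time tag states when the following text should appear, and the clear tag wipes the previous line from the caption region.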

Captioning with SMIL Made Easy on Webmultimediale.it

The Missing in Pakistan example is on Webmultimediale.org, the site where the WebMultimediale project team experiments with the creative potential of applying accessibility guidelines to online multimedia – for instance, in collaboration with theatrical companies.

However, the project also has a public video sharing and captioning platform, Webmultimediale.it, where everyone can upload a video and its captioning file to produce a captioned video for free. The site is largely bilingual, Italian-English. By default, you can only upload one captioning file, but you can contact Roberto Ellero, the founder of the project, through http://www.webmultimediale.org/contatti.php if you wish to add more captions.

Webmultimediale.it also has a video tutorial in Italian on how to produce a time-coded captioning file using MAGpie; the tutorial is only accessible when you are signed in, and as it is in Italian, English-speaking users might prefer to use the MAGpie Documentation[5,6] directly.

Other Creative Potentialities of SMIL

As can be seen in the MAGpie Documentation and in the W3C Synchronized Multimedia page[3], SMIL also enables the synchronization of an audio description file and even of a second video file, usually meant for sign language translation. While these features are primarily meant to facilitate access for deaf and blind people, they can also be used creatively to enhance all users’ experience of a video.