[This is a joint post by Jen Guiliano, assistant director of the Maryland Institute for Technology in the Humanities, and George H. Williams, associate professor of English at the University of South Carolina Upstate.
This post was originally published on September 20, 2012 at ProfHacker.]
Consider this a call to digital humanists generally and more specifically to the project directors (from 34 different projects) who attended today’s Project Directors meeting at the National Endowment for the Humanities’ Office of Digital Humanities:
What is your project doing to address accessibility for people with disabilities?
Today’s meeting is a gathering of project directors from the Digital Humanities Start-up Grants, Digital Humanities Implementation Grants, and the Institutes for Advanced Topics in the Digital Humanities competitions. Each project gets just three minutes and three PowerPoint slides to introduce its work and its concerns, so we’re taking the liberty of publishing a blog post as a follow-up.
Over the last several decades, scholars have developed standards for how best to create, organize, present, and preserve digital information so that future generations of teachers, students, scholars, and librarians may still use it. For the most part, however, the needs of people with disabilities have been neglected. As a result, many of the otherwise most valuable digital resources are useless for people who are, for example, deaf or hard of hearing, as well as for people who are blind, have low vision, or have difficulty distinguishing particular colors.
Our work combines digital humanities expertise with the important insights of disability studies in the humanities, an interdisciplinary field that considers disability “a way of interpreting human differences,” in the words of Rosemarie Garland-Thomson. Digital knowledge tools that assume all end-users approach information with the same abilities risk excluding a large population of people.
Below you’ll see one of our videos from the BrailleSC project. This particular video features a teacher working with a young student who is visually impaired. The video, like all of our videos, needs subtitles. If BrailleSC is to fulfill its goal of creating fully accessible content, then we need to make sure that people with hearing impairments will be able to benefit from our videos. After experimenting with a paid service for transcribing our videos, I began to think about what it would take to create a tool that would allow people to volunteer their transcription efforts. Such a tool would benefit not only BrailleSC, but also other projects that feature video or audio.
One tool whose development I’ve been very interested in is Scripto, “a light-weight, open source, tool that will allow users to contribute transcriptions to online documentary projects.” Scripto, being developed by the Center for History and New Media, is designed for projects where images of written or printed documents need to be transcribed. The potential exists, I believe, for adapting a tool like this for projects involving video or audio rather than the written word.
Until that potential is realized, however, there are some other options available. I recently learned about a great project called Universal Subtitles, an open-source tool that brings together volunteers who want to subtitle videos and videos that need subtitles. The idea is pretty simple; I’ll just quote from their site:
You add our widget to your videos. Then you and your viewers can add subtitles, which anyone can watch. We save the subtitles on our site (but you can download them). And each video has its own collaboration space on our site (like a wikipedia article) where people can make improvements, track changes, and give feedback.
The project is being undertaken by the Participatory Culture Foundation, “a non-profit organization building free and open tools for a more democratic and decentralized media.” Universal Subtitles is a featured Mozilla Drumbeat project, and they’re currently raising money to get the tool out of beta. Note: from now until January 1, Mozilla will match your donation to the project.
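Whatever tool ends up handling the volunteer transcription, its output ultimately has to be timed text. The widely supported SubRip (.srt) format is simple enough to generate directly; here is a minimal Python sketch (the cue text is invented for illustration):

```python
# Minimal sketch: emit SubRip (.srt) subtitles from a list of timed cues.
# The cue data below is invented for illustration.

def srt_timestamp(seconds):
    """Format a time in seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def to_srt(cues):
    """cues: list of (start_seconds, end_seconds, text) tuples."""
    blocks = []
    for i, (start, end, text) in enumerate(cues, start=1):
        blocks.append(
            f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n"
        )
    return "\n".join(blocks)

print(to_srt([(0.0, 2.5, "Hello, and welcome."),
              (2.5, 6.0, "Today we'll practice reading braille.")]))
```

Each cue in the output is just an index, a time range, and the text, which is why volunteer-produced transcripts are so easy to share and improve collaboratively.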
Universal Subtitles is composed of three main parts:
- The subtitling widget (quoted above) is embedded alongside a video and lets viewers add and edit subtitles directly.
- The collaborative website will develop as a space for collaboratively subtitling and translating videos. The site will encourage dynamics such as forming teams to subtitle a program or a topic; tracking which subtitling or translation tasks are most requested, and mobilizing volunteers; and volunteers recruiting their friends to help transcribe or translate a video.
- The protocol/open spec (still in the early stages) will allow clients such as Firefox extensions, desktop video players, websites, or browsers to look up and download matching subtitles from subtitle database(s).
Everything … will be available under the open source AGPL license.
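To make the protocol idea concrete, here is a purely hypothetical sketch of what a client-side lookup might look like. Since the spec is still in its early stages, everything here is my assumption: the endpoint, the parameter names, and the idea of keying lookups on a hash of the video’s URL are all invented for illustration.

```python
import hashlib
from urllib.parse import urlencode

# Hypothetical subtitle-lookup client. The endpoint and parameter names
# are invented for illustration; the real protocol is still being drafted.
SUBTITLE_DB = "https://subtitles.example.org/api/lookup"

def video_key(video_url):
    """Derive a stable key for a video from its URL (one plausible scheme)."""
    return hashlib.sha1(video_url.encode("utf-8")).hexdigest()

def lookup_url(video_url, language="en"):
    """Build the URL a client (browser extension, desktop video player,
    or website) would fetch to find matching subtitles for a video."""
    query = urlencode({"video": video_key(video_url), "lang": language})
    return f"{SUBTITLE_DB}?{query}"

print(lookup_url("http://braillesc.org/videos/lesson1.mp4"))
```

The appeal of an open spec like this is that any player, not just the project’s own widget, could fetch community-made subtitles for a video it is about to play.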
It’s easy to get involved with Universal Subtitles, and more information is available on the project’s site.
I have a fairly simple question about Omeka, and I hope there’s a fairly simple answer (or that someone can point me to the right place in the documentation or discussion forum). If an item has several media files associated with it, how would I go about creating a template for a dynamically generated page that displays one of those media files rather than all of them at once? Here’s the context for the question: with BrailleSC, we’re creating an Omeka archive of oral histories. Each oral history item will be presented as a transcription (in HTML), a video file (MP4), and an audio file (MP3). As I understand it, the default page for an Omeka item automatically displays all of the files associated with that particular item, which means that in the case of a 30-minute video the user must wait for a very large file to load, even if all they’re interested in is the transcription or the MP3.
Now, we could use some kind of Flash-based player that wouldn’t load the video on the page but would stream it only when the user specifically requests it; unfortunately, however, Flash is not compatible with the screen reader software used by many of our intended audience.
What we’d like to have is a page that automatically displays the transcription (and maybe a screenshot from the video) but just provides links to the pages that contain the video and the audio. If I understand the backend correctly, such pages would need to be passed the “id” of the Omeka item so that they could then grab the appropriate video or audio file and embed it on the page. Is that correct? Could anyone give me a nudge in the right direction so that I could hack something together?
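In the meantime, the behavior I’m after can at least be sketched in the abstract. The snippet below is not Omeka code (Omeka themes are PHP); it just models the decision the item template would make, with MIME types, URLs, and field names invented for illustration: embed the HTML transcription inline, and only link to the pages that would serve the heavy media files, keyed by the item’s id.

```python
# Abstract sketch of the desired item-page behavior (not Omeka/PHP code):
# render the transcription inline, and merely link to the large media files.
# The file records, MIME types, and URL scheme are invented for illustration.

INLINE_TYPES = {"text/html"}                 # lightweight; safe to embed
LINKED_TYPES = {"video/mp4", "audio/mpeg"}   # heavy; link instead of embedding

def render_item(item_id, files):
    """files: list of (mime_type, url) pairs attached to the Omeka item."""
    parts = []
    for mime, url in files:
        if mime in INLINE_TYPES:
            parts.append(f'<div class="transcription" data-src="{url}"></div>')
        elif mime in LINKED_TYPES:
            # A separate media page would receive the item id and
            # embed just this one file.
            parts.append(f'<a href="/items/media/{item_id}?file={url}">{mime}</a>')
    return "\n".join(parts)

print(render_item(42, [("text/html", "oralhistory1.html"),
                       ("video/mp4", "oralhistory1.mp4"),
                       ("audio/mpeg", "oralhistory1.mp3")]))
```

The point of the sketch is simply that the template would branch on each file’s type rather than looping over everything and embedding it all, which is what I understand the default item page to do.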
Thanks in advance for any and all advice!