Baltimore-Washington Airport. WiFi for 25 cents a minute. Composing offline.
I’ve been thinking about Matt’s recent entry on the future of human-computer interaction and the inadequacy of the current dominant model for using computers (keyboard, monitor, mouse). Back when I worked on the DISC project at MITH, I was forced to re-evaluate my assumptions about using computers and designing webpages. DISC is a resource site for disability studies, and we wanted to design the website for maximum accessibility, so we had to think about the needs of users who were vision impaired or couldn’t see at all, or who had difficulty navigating because of all the clutter that often accumulates on webpages. At the same time, we aimed for an attractive-looking site. We kept the design simple and added features designed for specific kinds of users.
For example, because tech-savvy blind users often have their computers read websites aloud, it’s tedious for them to listen to the identical detailed navigation menu on every page within a given site. To solve this problem, we inserted an invisible gif at the beginning of each page with an alt attribute that read “skip to main content.” The image would be invisible to sighted users, but those listening to the page could use the link to jump over what they didn’t need to hear. We also made sure that every image tag had an “alt” attribute where necessary, although in general we kept images to a minimum. For advice on accessibility issues, we worked with a blind consultant who listened to the web rather than reading it off a screen. She demonstrated her screen-reading software for us, but because we could not understand the webpage when it was read so quickly (especially the navigational elements, which sound like nonsense when read aloud), she had to slow the program’s reading speed down a great deal for our benefit. Exactly who is disabled in this scenario?
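For readers who don’t hand-code HTML, the skip-link technique works roughly like this. (This is a sketch, not the actual DISC markup; the anchor name and gif filename here are my own invention.)

```html
<!-- Top of the page: a transparent 1x1 gif wrapped in a link. Sighted
     users see nothing, but a screen reader announces the alt text, and
     following the link jumps past the navigation menu entirely. -->
<a href="#maincontent"><img src="transparent.gif" width="1" height="1"
    border="0" alt="skip to main content"></a>

<!-- ...detailed navigation menu repeated on every page... -->

<!-- Named anchor marking where the main content begins. -->
<a name="maincontent"></a>
<!-- ...main content of this particular page... -->
```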
I also suggested that the “title” attribute of the <a href=…> tag would be useful for users with cognitive disabilities who might be confused about where a link was taking them. However, when our consultant checked on this with others she knew who were also knowledgeable about accessibility, the reaction was strongly negative. “Just because the code allows you to do something doesn’t mean it’s a good idea!” was one vehement response. What I soon learned, however, was that most screen-reading software did not know what to do with the “title” attribute of a link. But rather than chalk this up to an inadequacy of the available software, these users had decided that webpages should be coded to conform to the software they were using.
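For illustration, a link using the “title” attribute might look like the following. (The filename and wording are hypothetical, not taken from the DISC site.)

```html
<!-- The title attribute supplies a hint about the link's destination,
     which in theory helps users unsure of where a link leads. Most
     screen-reading software of the time, however, simply ignored it. -->
<a href="readings.html" title="Weekly reading list for the course">
  this week's readings</a>
```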
Doesn’t this sound familiar? To those of us who are more or less comfortable with the existing dominant model of interacting with computers, anything different, like a fast screen reader, seems alien, and the substantial shortcomings of our familiar model remain invisible to us. Still, I think that many of the developments taking place in accessibility software and hardware will prove very useful in bringing the future Matt imagines into being. Because my briefcase usually holds at least one and often two very large anthologies for class (e.g., the Norton Anthology of English Literature, the Complete Works of Shakespeare, the Complete Works of John Milton), the last thing I need is a big laptop adding to the curvature of my spine. So I work on a very small Dell, and it could be smaller still if I were not tied to a keyboard-and-screen model of input and output. These two elements remain an obstacle to further reductions in size and weight.
My fantasy computing device would be something like the new Palm Tungsten C. It’s small enough to fit in my hand, but it has enough memory to store the files I usually need and enough power to run the programs I use most of the time. Do away with the built-in miniature keyboard and develop some good voice-recognition software, and the display could take up even more of that real estate. Or do away with the display altogether, and the device could get smaller still. Because my laptop has no removable media drive built in, I recently bought a Sony thumbdrive with 128MB of memory so that I can back things up without having to find an Internet connection. It’s the size and weight of a very small kazoo. The thumbdrive cost me about $100, not cheap, but remarkable when you consider that ten years ago L and I bought a desktop with a 200MB hard drive for $1300.
You can now get 30 gigabytes of storage in the newest iPods; combine that kind of storage with the processing power and connectivity of some of the newest handhelds, and you don’t even need a laptop anymore, provided you can get over the input/output hurdle. And that hurdle is not on the hardware end; it’s on the user end. If users like me could move from a primarily visual model of understanding information to an oral/aural model, then the future of computing looks very different from the present. Of course, this is not taking into account image-oriented tasks like video editing or digital photography, or reading texts rendered in typographically interesting ways.
In this fantastic future, my kazoo-sized storage and processing device could have a docking station attached to a large monitor, if I need one, and some sort of ergonomically correct keyboard, plus the usual connections to devices like printers and digital cameras. On the other hand, if a standard technology like Bluetooth ever takes off, the wired docking station won’t even be necessary; just set the device down near your desktop setup and go. Perhaps commercial and public spaces like coffee shops, airports, and libraries could offer stations you rent by the minute when you need to plug your device into a larger input/output format.
Some university libraries have an office for “Adaptive Technologies” to assist those users with “special needs.” But in the end, isn’t all technology adaptive? There is no “natural” way to interact with the ones and zeros that make up the data we are interested in creating, transmitting, receiving, and using. There is only the model we have chosen to think of as natural, and as Matt suggests, as it stands now, this model has many shortcomings.