When Technology Just Works, It Makes All the Difference
By Aser Tolentino, Assistive Technology Instructor
When you work in the accessibility field, there’s nothing quite as fulfilling as seeing something just work the way it’s supposed to for people who never thought they’d again be able to do something once fundamental to their identity. Sometimes all the tools to make that happen already exist but haven’t been put together in the right combination; sometimes just one component has to be modified to make everything come together and work.
This is one of those stories.
The focus of this year’s Big Day of Giving was our music support group. This terrific group of staff, students, former clients and members of the blind and low-vision community was brought together by a love of music and an interest in creating something cool to share with each other. Among the things on display at their concert and jam session last week – for those who knew what to look for – was a collection of tools that help blind and low-vision musicians perform and record independently. Here’s a description of some of the tools our staff used to make music.
One of the unique aspects of working with audio that makes it friendly to people who are blind or have low vision is that it can be engaged entirely through senses other than sight. You might be thinking, “Well, of course that’s true,” but then you’re probably just thinking about the passive act of listening to music. When you record sound and shape its acoustic signature, you control things like the sensitivity of the microphones capturing the sound, or the power that goes into amplifying the signal that’s routed through a series of filters and then translated into digital code stored on a computer. Each of these processes can be controlled by analog mixers that still use dials, switches and faders, just like in the days of reel-to-reel tape. The mixer board we use has a basic equalizer that can emphasize the bass, middle or treble frequencies in a channel; a panning control that sets that channel’s left-to-right position in the stereo mix; and a gain control that sets the channel’s strength in the final mix sent to the computer. Some of the funds raised during Big Day of Giving will go toward a new mixer with more inputs, which will allow more musicians to participate in a recording or jam session.
Digital Audio Workstations
Once the audio has been sent to the computer, it wouldn’t be of much use unless the user could play, edit and save it. The audio from the mixer travels over analog cables to a device called an audio interface, which digitizes it before passing it to the computer, where it’s handled by a piece of software known as a digital audio workstation, or DAW. There are many different DAWs, and a commendable number of their developers are actively taking steps to make their products accessible to the blind. The software most commonly used by our support group is Logic, developed by Apple, which has a well-deserved reputation for breaking ground in the realm of accessibility. Using Apple’s screen reader, our assistive technology instructor Randy Owen or one of his assistants can adjust the volume of recorded tracks, cut unnecessary sections, split the recording into songs and, most importantly, apply effects like reverb, distortion and compression that transform a simple jam session into a professional-sounding mix suitable for upload to sites like SoundCloud or even the iTunes music store.
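Once the audio is digitized, editing operations like these are just arithmetic on arrays of samples. Here is a minimal Python sketch of two of them – a volume change expressed in decibels and cutting a section out of a recording. This is an illustration of the underlying math, not Logic’s actual interface, and the function names are hypothetical:

```python
def change_volume(samples: list[float], gain_db: float) -> list[float]:
    """Scale every sample by a gain expressed in decibels.

    A change of +6 dB roughly doubles the amplitude; -6 dB roughly halves it.
    """
    factor = 10 ** (gain_db / 20)
    return [s * factor for s in samples]

def cut_section(samples: list[float], start: int, end: int) -> list[float]:
    """Remove samples[start:end] and splice the remaining audio together."""
    return samples[:start] + samples[end:]
```

Splitting a recording into songs is the same idea in reverse: slicing the sample array at chosen points instead of removing a span.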
As with the mixer, working with digital audio can benefit from physical, analog-style controls that provide a tactile experience rather than on-screen controls or a screen reader’s audible readouts. A DAW can interface with a device called a control surface, which is something like a keyboard built specifically for editing audio. The one we use features motorized faders that physically move to match the parameters of the different tracks when those tracks are selected in the DAW. You can then move a fader by hand to adjust that parameter and have the change reflected in the song you’re working on in real time. This means you can move instruments around in the sound field, turn down different instruments to make sure everyone gets heard, and filter out things like room noise and unwanted distortion from the recording equipment – all by touch and sound.
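The two-way link between the surface and the DAW can be sketched as a toy model: selecting a track makes the motor drive the fader to that track’s stored value, and moving the fader by hand pushes the change back into the DAW. This is purely illustrative – real control surfaces speak a protocol such as MIDI over USB, which is not modeled here:

```python
class MotorizedFader:
    """Toy model of one control-surface fader synced to a DAW track parameter.

    Positions are normalized to the range 0.0 (bottom) to 1.0 (top).
    """

    def __init__(self) -> None:
        self.position = 0.0

    def recall(self, daw_value: float) -> None:
        """DAW selects a track: the motor drives the fader to the stored value."""
        self.position = daw_value

    def move(self, new_position: float) -> float:
        """User moves the fader by hand: clamp it to the legal travel range
        and return the value the DAW should apply in real time."""
        self.position = max(0.0, min(1.0, new_position))
        return self.position
```

The useful property for a blind user is in `recall`: because the fader physically travels to the stored value, its position can be read by touch alone.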
In a far different context, it was once said that the important things are always simple, but the simple things are always hard. Well, the hard things are always simple, and the simple things are always important. You’ll notice that in our description, the centerpiece of the whole process was the audio software. I may have made it seem like the easiest thing in the world to make that software usable by someone who can’t see. But if you looked at the screen while a sighted user worked with a DAW, you would see multiple windows full of multi-colored spectral waveforms, arcane symbols and a lot of actions that involve using a mouse pointer to draw lines and curves. It can seem far from simple – and all the harder when someone asks whether a blind person could use something so complicated. On the back end, though, in the code, it’s all just math. And so people figured it out: companies like Avid, Apple and Cockos, and open-source projects like Audacity, working with blind users such as the developers of the NVDA screen reader, made the effort to make the process accessible. The result is that something that seemed beyond the realm of possibility just a few years ago now simply works. For a few dozen people in our lobby last week, and a few million people around the world every day, efforts like these make all the difference.