AudioAid

Once again, I am advocating the use of mobile phones as an alternative assistive technology for people with disabilities, in this case people who are hard of hearing or deaf. AudioAid attempts to translate audio stimulation into vibration on the mobile, enabling deaf users to “feel” the sounds around them: a doorbell, a phone ringing, a fire alarm, someone shouting. Basically, when the phone ‘listens’ to audio peaks around it, it vibrates accordingly.

When we started brainstorming on accessibility solutions with mobile phones, we used a generative model: select a human sensory disability and link it to a sensor on the phone. For example, if someone is colorblind, we can use the camera on the phone to detect and name the colors in the viewfinder. You find an impaired human sense and see if the phone has a sensor that can stand in for it.
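To make the model concrete, here is a minimal sketch of it as a lookup table. The pairings are illustrative, not an exhaustive inventory of phone sensors:

```python
# A sketch of the generative model: impaired human sense -> candidate
# phone sensor. Entries are illustrative examples, not a real framework.
SENSE_TO_SENSOR = {
    "sight/color": "camera",      # e.g. name colors in the viewfinder
    "hearing": "microphone",      # e.g. listen on the user's behalf
    "touch": "accelerometer",
}

def suggest_sensor(impaired_sense: str) -> str:
    """Return a candidate phone sensor for a given impaired human sense."""
    return SENSE_TO_SENSOR.get(impaired_sense, "no obvious phone sensor")

print(suggest_sensor("sight/color"))  # -> camera
print(suggest_sensor("hearing"))      # -> microphone
```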

So there we were in that room, brainstorming on it, and we were clearly avoiding the deafness area. Why?

I always get the feeling that technologists will only throw out “brainstormed” ideas they already know they can implement. It seems that in order to come up with ideas for a given problem, they stare at the familiar tools and procedures they have and wait for the idea to emerge from there. It is like someone looking at a hammer and thinking “what can I craft with this?”. We tried to do things differently with AudioAid.

The generative model would lead us to the following idea:

impaired human sense: ear/hearing >> phone sensor: microphone

The concept was to make the phone listen on the deaf person’s behalf. We had absolutely no idea how to do that, and I think that is why we were avoiding deafness, even though the model was clearly handing us that mission. Actually, we knew it was not possible with the familiar tools and procedures we had. But we decided to stick with it: given the problem scenario and the concept, study the technical feasibility in detail and narrow the concept down until we came up with an alternative.

What I wanted was to make the deaf person hear through the microphone of the phone. I thought of building an app that would work like a cochlear implant: it would process the audio captured in real time and turn the volume up. But it seems the AMR-NB codec keeps the output at an ear-safe level and will not let you do that. So I thought of an alternative: translate the audio stimulation into some other output from the phone, like lights or vibration. I decided not to use lights, because I wanted a calm technology and a solution that did not require looking at the screen.
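For what it is worth, the amplification step itself is trivial; it was the platform, not the signal processing, that blocked the idea. A toy sketch of what “turning the volume up” means on raw samples, assuming 16-bit PCM (this is the concept, not the app’s actual code):

```python
def amplify(samples, gain=4.0, limit=32767):
    """Multiply each sample by `gain`, clipping to the 16-bit range."""
    out = []
    for s in samples:
        boosted = int(s * gain)
        out.append(max(-limit - 1, min(limit, boosted)))
    return out

print(amplify([1000, -2000, 12000]))  # -> [4000, -8000, 32767]
```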

Vibration it is.

AudioAid now vibrates the device according to the audio peaks around the user, translating audio stimulation into vibration. A deaf person sitting at home has a hard time knowing, for example, that someone is ringing the doorbell. With this app, the phone will hopefully vibrate accordingly and the user will be able to go through their personal checklist: doorbell, fire alarm, and so on.
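Here is a minimal sketch of that core loop, assuming 16-bit PCM frames coming off the microphone; vibrate() is a hypothetical placeholder for whatever vibration API the platform actually exposes:

```python
import math

THRESHOLD_RMS = 8000  # illustrative loudness threshold, tune per device

def rms(frame):
    """Root-mean-square loudness of one frame of audio samples."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def vibrate(duration_ms):
    """Placeholder for the platform vibration API (hypothetical)."""
    print(f"vibrate for {duration_ms} ms")

def on_audio_frame(frame):
    """Vibrate when the frame is loud, scaling duration with loudness."""
    loudness = rms(frame)
    if loudness > THRESHOLD_RMS:
        # Louder peak -> longer vibration, capped at one second.
        vibrate(min(1000, int(loudness / 32767 * 1000)))

# Simulated frames: a quiet room, then a doorbell-like burst.
on_audio_frame([100, -120, 80, -90])            # stays silent
on_audio_frame([30000, -29000, 31000, -28000])  # triggers vibration
```

Scaling the vibration duration with the measured loudness is one way to let the user tell a shout from a doorbell when working through that checklist.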

I submitted this to the Calling All Innovators event at Nokia World 2009 in Stuttgart and got an honorable mention. Evidence below.

Wearing my dad’s blazer.


AudioAid – answering questions from bloggers.

I think concept should come before technical feasibility. Impossible solutions before familiar procedures.

I believe chairs should come before hammers.

