PROJECTS

March 15th, 2016 – I am updating here TODAY! Come back soon! 🙂

 

Humans, Mobiles and their Sensors – This is the proposal of a model that relates human sensory disabilities to specific sensors on mobile computers, in such a way that these sensors can be used by software applications to improve the interaction between the user and the environment. As a generative theory, the model was used to analyse accessibility problem scenarios (sensory related, and not only for the disabled) and to link these problems to specific mobile features and their software APIs. A proof of concept of this model is demonstrated by the implementation of a mobile application (Color Detector) whose concept was generated entirely from the model.

Color Detector – This was a result of Humans, Mobiles and their Sensors. I was trying to make a point about mobile sensors helping limited human senses. It is basically an app that detects colors. Essential and often critical information is communicated through the meaning of colors, yet the color impaired can only rely on accessibility recommendations and best-practice guidelines being applied by the authors of documents, software, signs, etc. before their work is finished, in order to access it fairly. This project implements a mobile color detection system that attempts to support color blind people in identifying and distinguishing colors. It is based on image capturing, application of adaptive filters, color detection and color remapping.
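To give a flavour of the detection step, here is a minimal sketch (my illustration here, not the app's actual adaptive-filter pipeline) that names the colour of a single pixel by its hue, using Android's Color utilities:

```java
// Illustrative sketch only: naive hue-based colour naming.
import android.graphics.Color;

public class ColorNamer {

    /** Maps a single ARGB pixel to a rough colour name by looking at its hue. */
    public static String nameOf(int pixel) {
        float[] hsv = new float[3];
        Color.RGBToHSV(Color.red(pixel), Color.green(pixel), Color.blue(pixel), hsv);
        float hue = hsv[0];   // 0..360
        float sat = hsv[1];   // 0..1
        float val = hsv[2];   // 0..1

        if (val < 0.15f) return "black";
        if (sat < 0.15f) return (val > 0.85f) ? "white" : "grey";

        if (hue < 30)  return "red";
        if (hue < 90)  return "yellow";
        if (hue < 150) return "green";
        if (hue < 210) return "cyan";
        if (hue < 270) return "blue";
        if (hue < 330) return "magenta";
        return "red";
    }
}
```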

Magnifier – Once more, mobile sensors helping impaired human senses. The Magnifier uses the camera on your phone to improve readability in everyday situations. Anyone who has difficulty seeing “fine print” or small objects can benefit from using the app. It has a built-in stabilizer, plus contrast and negative filters. Users can also snap pictures for later reference. It is a way to turn the mobile phone into a tool for accessing the environment.
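The negative filter idea can be sketched roughly like this, assuming the preview frame is shown in an Android ImageView (the real app also handles zoom and stabilization):

```java
// Minimal sketch of a "negative" filter using a colour matrix.
import android.graphics.ColorMatrix;
import android.graphics.ColorMatrixColorFilter;
import android.widget.ImageView;

public class Filters {

    /** Inverts colours; white text on black often reads better for low vision. */
    public static void applyNegative(ImageView view) {
        ColorMatrix negative = new ColorMatrix(new float[] {
            -1,  0,  0, 0, 255,   // R' = 255 - R
             0, -1,  0, 0, 255,   // G' = 255 - G
             0,  0, -1, 0, 255,   // B' = 255 - B
             0,  0,  0, 1,   0    // alpha unchanged
        });
        view.setColorFilter(new ColorMatrixColorFilter(negative));
    }
}
```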

 

 


AudioAid – Again, advocating the use of mobile phones as an alternative assistive technology for people with disabilities, in this case people who are hard of hearing or deaf. It attempts to translate audio stimuli into vibration on the mobile. This enables deaf users to “feel” sounds around them: a door bell, the phone ringing, a fire alarm, someone shouting. Basically, when the phone ‘hears’ an audio peak around it, it vibrates accordingly.
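A rough sketch of that listening loop, with made-up thresholds (the app's actual filtering is certainly more refined), could look like this; it needs the RECORD_AUDIO and VIBRATE permissions:

```java
// Read microphone samples and vibrate when the peak level crosses a threshold.
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;
import android.os.Vibrator;

public class AudioAidLoop implements Runnable {
    private static final int RATE = 8000;
    private static final int THRESHOLD = 12000;   // arbitrary peak amplitude

    private final Vibrator vibrator;
    private volatile boolean running = true;

    public AudioAidLoop(Vibrator vibrator) { this.vibrator = vibrator; }

    @Override public void run() {
        int bufSize = AudioRecord.getMinBufferSize(RATE,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        AudioRecord rec = new AudioRecord(MediaRecorder.AudioSource.MIC, RATE,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufSize);
        short[] buf = new short[bufSize];
        rec.startRecording();
        while (running) {
            int read = rec.read(buf, 0, buf.length);
            int peak = 0;
            for (int i = 0; i < read; i++) peak = Math.max(peak, Math.abs(buf[i]));
            if (peak > THRESHOLD) vibrator.vibrate(200);   // "feel" the peak
        }
        rec.stop();
        rec.release();
    }
}
```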

 

 

Breathe Mobile – This project proposes the use of a hands-free breathing interface for mobile phones as an alternative interaction technology for people with disabilities. It explores processing the audio from the phone’s microphone to trigger and launch software events. A proof of concept of this work is demonstrated by a mobile application prototype that enables users to perform basic operations on the phone, such as placing a call, through “puffing” interaction.
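As an illustration only, a “puff” could be treated as a short burst of loud microphone frames; the frame levels would come from an audio loop like the AudioAid one above, and the thresholds and durations here are assumptions:

```java
// Counts consecutive loud frames and fires an action once per "puff".
import android.content.Context;
import android.content.Intent;
import android.net.Uri;

public class PuffDetector {
    private static final int LOUD = 10000;      // peak amplitude treated as "blowing"
    private static final int FRAMES_NEEDED = 5; // roughly a few hundred ms of puff

    private int loudFrames = 0;

    /** Feed one peak level per audio frame. */
    public void onFrame(int peak, Context context) {
        if (peak > LOUD) {
            loudFrames++;
            if (loudFrames == FRAMES_NEEDED) {
                // Example action: open the dialer (ACTION_CALL would dial directly,
                // but needs the CALL_PHONE permission).
                Intent dial = new Intent(Intent.ACTION_DIAL, Uri.parse("tel:"));
                dial.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
                context.startActivity(dial);
            }
        } else {
            loudFrames = 0;
        }
    }
}
```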

 

SpeakStatus (CalmStatus) – This app makes the phone speak its status. The phone will say out loud the battery and signal levels, the date and time, missed calls, new messages, new Facebook notifications, new Twitter mentions, new mail, etc. This is an attempt to implement an alternative calm technology. “A calm technology will move easily from the periphery of our attention, to the center, and back.”
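One of the spoken statuses, the battery level, can be sketched like this with Android’s TextToSpeech and the sticky battery broadcast (a simplified illustration, not the app’s exact code):

```java
// Read the battery level from ACTION_BATTERY_CHANGED and speak it aloud.
import android.content.Context;
import android.content.Intent;
import android.content.IntentFilter;
import android.os.BatteryManager;
import android.speech.tts.TextToSpeech;
import java.util.Locale;

public class SpeakBattery {

    public static void speakBatteryLevel(Context context) {
        Intent battery = context.registerReceiver(null,
                new IntentFilter(Intent.ACTION_BATTERY_CHANGED));
        int level = battery.getIntExtra(BatteryManager.EXTRA_LEVEL, -1);
        int scale = battery.getIntExtra(BatteryManager.EXTRA_SCALE, 100);
        final String phrase = "Battery at " + (100 * level / scale) + " percent";

        final TextToSpeech[] tts = new TextToSpeech[1];
        tts[0] = new TextToSpeech(context, new TextToSpeech.OnInitListener() {
            @Override public void onInit(int status) {
                if (status == TextToSpeech.SUCCESS) {
                    tts[0].setLanguage(Locale.US);
                    tts[0].speak(phrase, TextToSpeech.QUEUE_FLUSH, null);
                }
            }
        });
    }
}
```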

 

 

Desguiator – Even though this is an ‘entertainment’ app, it is certainly a product of observing human interactions, and of how the use of mobile phones can abruptly cut off a live conversation. This app is a tool for escaping from unpleasant situations and boring moments. When the app is running, the user can tap twice on the phone and, a few seconds later, the phone will fake an incoming call. It explores a bit of tapping interaction, a solution that requires no looking, no listening and no touching of the screen.
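The fake-ring part can be sketched as follows, assuming the double tap has already been detected (for example by watching for accelerometer spikes); this is just an illustration of the trick:

```java
// After a delay, play the default ringtone so the phone appears to receive a call.
import android.content.Context;
import android.media.Ringtone;
import android.media.RingtoneManager;
import android.os.Handler;

public class FakeRing {

    public static void scheduleFakeCall(final Context context, long delayMs) {
        new Handler().postDelayed(new Runnable() {
            @Override public void run() {
                Ringtone tone = RingtoneManager.getRingtone(context,
                        RingtoneManager.getDefaultUri(RingtoneManager.TYPE_RINGTONE));
                if (tone != null) tone.play();   // sounds like an incoming call
            }
        }, delayMs);
    }
}
```

Called as scheduleFakeCall(context, 5000), it would make the phone “ring” five seconds after the taps.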

 

Facelock – This is an app that is supposed to make the phone recognize its owner’s face. You show your face to the phone and it learns who you ‘are’. You lock it. To unlock, just show your face again. The concept here is very human and natural, because we do face recognition all the time, don’t we? The moment we match the face we are looking at against the ‘face database’ we keep in our minds, we instantly load a dense set of data to the surface.
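Very loosely, the enrol/verify flow could look like the sketch below. Android’s built-in FaceDetector only finds faces; recognizing who a face belongs to needs a proper model, so compareFaces() is a placeholder, and the class and method names here are mine, made up for illustration:

```java
// Loose sketch of an enrol/verify flow around android.media.FaceDetector.
import android.graphics.Bitmap;
import android.media.FaceDetector;

public class FaceLock {
    private Bitmap enrolledFace;   // owner's reference photo

    /** Step 1: remember the owner's face. */
    public boolean enrol(Bitmap photo) {
        if (detectFace(photo) == null) return false;
        enrolledFace = photo;
        return true;
    }

    /** Step 2: unlock only if the new photo contains a face matching the owner's. */
    public boolean verify(Bitmap photo) {
        FaceDetector.Face face = detectFace(photo);
        return face != null && enrolledFace != null && compareFaces(enrolledFace, photo);
    }

    private FaceDetector.Face detectFace(Bitmap bmp) {
        // FaceDetector needs an RGB_565 bitmap (and an even width).
        Bitmap rgb = bmp.copy(Bitmap.Config.RGB_565, false);
        FaceDetector.Face[] faces = new FaceDetector.Face[1];
        int found = new FaceDetector(rgb.getWidth(), rgb.getHeight(), 1)
                .findFaces(rgb, faces);
        return found > 0 ? faces[0] : null;
    }

    private boolean compareFaces(Bitmap reference, Bitmap candidate) {
        // Placeholder: a real implementation would extract and compare facial features.
        return true;
    }
}
```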

 

There are more embryonic experiments I could share later on. Talk to me if you have any questions.
