Tago

Tago is a concept for an application that allows visually impaired users to tag their photos with keywords for easy sorting and searchability. It uses a combination of computer image recognition and human computation (via CAPTCHA) to generate rich descriptions of photos, which can then be searched for by voice.

The aim is to give visually impaired users greater access to their photos and memories.


Tago lets visually impaired users search their photo archives by voice. A simple user flow gives users an easy way to recognize the photos they've taken, all through audio cues.
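The core idea, combining machine-generated tags with human CAPTCHA tags and matching a spoken query against them, could be sketched roughly like this. All names and the matching logic here are illustrative assumptions, not the project's actual implementation:

```typescript
// Illustrative sketch of Tago's tagging model: tags from image
// recognition and from human CAPTCHA answers are merged into one
// description, which a transcribed voice query is matched against.

interface Photo {
  id: string;
  machineTags: string[]; // from computer image recognition
  humanTags: string[];   // from CAPTCHA-style human computation
}

// Combine both tag sources, lower-cased and deduplicated.
function allTags(photo: Photo): string[] {
  const merged = [...photo.machineTags, ...photo.humanTags].map(t => t.toLowerCase());
  return [...new Set(merged)];
}

// A photo matches if any of its tags appears in the spoken query.
function searchByVoice(query: string, photos: Photo[]): Photo[] {
  const spoken = query.toLowerCase();
  return photos.filter(p => allTags(p).some(tag => spoken.includes(tag)));
}
```

A query like "show me the beach" would then surface every photo tagged "beach" by either the recognition system or a human tagger.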

user journeys and personas

Tago's functionality involved finding, organizing, and tagging photos, features that different personas would engage with in different ways. I planned user journeys for three distinct personas to explore in detail the steps each would experience.

the general user experience

Through further research, I created a user experience map to empathize with visually impaired users attempting to find photos in their galleries.

research, testing and mvp

A large part of Tago was the human computation aspect, in which sighted (non-VI) users tagged photos. We created a test using Google Surveys to mimic the CAPTCHA system. When it proved successful, I took the data and built a functional prototype for second-round testing with real VI users. Following accessibility guidelines, tags and other relevant information were read aloud as the user tapped each thumbnail.
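The read-aloud behaviour in the prototype might look something like the sketch below: each thumbnail's tags are composed into a single spoken label that a screen reader announces on tap. The function name and label format are hypothetical, not taken from the actual prototype:

```typescript
// Illustrative sketch: build a spoken description for a photo
// thumbnail from its tags, for a screen reader to announce.
function thumbnailLabel(tags: string[], dateTaken: string): string {
  // Join the photo's tags into one spoken description; fall back
  // to a generic phrase if no tags exist yet.
  const description = tags.length > 0 ? tags.join(", ") : "untagged photo";
  return `Photo: ${description}. Taken ${dateTaken}.`;
}

// In a web gallery this string would be set as the thumbnail's
// aria-label (or as an accessibilityLabel on iOS), so assistive
// technology such as VoiceOver reads it aloud when the user taps
// or focuses the thumbnail.
```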

final design and results

With data and feedback from our user tests, we were able to refine the Tago flow of interaction and finalize a design.

This conceptual project was then presented to The Hong Kong Society for the Blind and received distinction for highlighting an overlooked problem in the daily lives of the visually impaired.