Visual impairment affects approximately 285 million people worldwide, 39 million of whom are blind and 246 million of whom have moderate to severe visual impairment. This is a huge community of people, and yet little attention is paid to developing new tools to assist them in navigating the world. Most rely on a combination of their other senses—hearing, touch, even smell—and simple tools like walking sticks to alert them to obstacles. In a world where computers help us with everything from navigating space travel to counting the steps we take in a day, we aim to improve the mobility and access of people with visual impairment beyond the walking stick. Working with miniaturized cameras and computers—so small they can be embedded in clothing—our researchers are challenging technology to create an experience closer to sight than anything we have previously imagined. In its simplest form, this means assisting the visually impaired as they try to navigate the world safely. But the possibilities are so much bigger. While the primary objective of our project is to help someone get where they need to go, we believe that these systems with embedded computation and sensors could also help them experience the world around them. In the future, wearable systems will make it easier for people with visual impairment to manage every task throughout a busy day, from getting to work in the morning, to making their lunch in the break room, to finding the exact file they need for their next meeting.
In 2012, the late professor Seth Teller, working with the ABF, started the MIT Fifth Sense project. The premise of this project is that computer vision can be used to enable safe navigation and object identification. The challenge is to develop a wearable system that has a small form factor yet is capable of robustly processing a stream of images, mapping them to directions and semantic descriptions of the environment that meet the needs of a blind and visually impaired (BVI) person. Such computation usually requires very large computer systems.
During the last year, working with our students and postdocs, we developed an end-to-end wearable solution for safe navigation that uses a portable camera to extract information about the local state of the world, such as the range and direction of obstacles, the direction of free space, and the identity and location of target objects within a space. This system provides haptic feedback to the user through a belt fitted with vibration motors. Additionally, the system performs depth-based object detection and recognition for tasks such as assisting the user in navigating to an empty chair. Specifically, this project contributed:
- a wearable vision-based system with haptic feedback for safe navigation with a small form factor;
- a miniaturized computer chip that is customized for the processing required by safe navigation and enables a small form factor for the wearable system;
- real-time algorithms for segmenting the free space and mapping it to safe moving instructions, as well as for identifying objects in the space and enabling conceptual tasks such as “finding an empty chair in a room”.
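To make the free-space pipeline concrete, here is a minimal illustrative sketch of the kind of mapping described above: a depth image is thresholded to segment free space, the widest clear gap is located, and its direction is mapped to one of several vibration motors on the belt. All names, parameters, and the specific thresholding scheme here are hypothetical; the actual system uses custom real-time algorithms running on specialized hardware.

```python
import numpy as np

def free_space_direction(depth, safe_range=1.5, num_motors=5):
    """Map a depth image to a belt-motor index pointing toward free space.

    depth: 2D array of per-pixel range in meters (rows x cols).
    safe_range: minimum clearance (m) for an image column to count as free.
    num_motors: vibration motors spread across the belt, left to right.
    Returns the motor index to activate, or None if no heading is safe.
    """
    # A column of the image is "free" if every pixel in it is farther
    # away than the safety threshold.
    free_cols = np.all(depth > safe_range, axis=0)
    if not free_cols.any():
        return None  # no safe heading; a "stop" cue would be issued instead

    # Find the widest contiguous run of free columns (the largest gap).
    best_start, best_len, start = 0, 0, None
    for i, free in enumerate(free_cols):
        if free and start is None:
            start = i
        elif not free and start is not None:
            if i - start > best_len:
                best_start, best_len = start, i - start
            start = None
    if start is not None and len(free_cols) - start > best_len:
        best_start, best_len = start, len(free_cols) - start

    # Map the center of the widest gap to one of the belt motors.
    center = best_start + best_len / 2
    return int(center / depth.shape[1] * num_motors)
```

For example, a depth map that is blocked everywhere except a gap toward the right would activate a motor on the right side of the belt, steering the user toward the opening.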
Daniela Rus and Anantha Chandrakasan