Eyes in Your Pocket: “BlindTool” App Represents the New Frontier of Assistive Technology

A screen capture of the BlindTool app identifying a banana, with less likely predictions listed below

By Brian Klotz

FastCompany calls it “a peek at an inevitable future of accessibility.” BlindTool, a new app that allows users to identify objects in real time using only their phone, was created right here in Boston. Developed by Joseph Paul Cohen, a Ph.D. candidate at the University of Massachusetts Boston and a Bay State native, it aims to increase the independence of individuals who are blind or visually impaired by putting an extra set of eyes in their pocket.

“I’ve had a desire to do this for a while,” says Cohen, whose initial interest in assistive technology came from working with a colleague who was blind during an internship at the U.S. Naval Research Laboratory in Washington, D.C. The experience inspired him to think about the ways modern technology could improve the lives of those with vision impairment.

The app runs on Android devices and identifies objects it is pointed at in real time, using a “convolutional neural network” that can recognize 1,000 “classes” of objects. While the technology behind it may be complex, using it is simple: wave your phone around, and it will vibrate as the app focuses on an object it recognizes – the more it vibrates, the more confident the app is. Once it’s fairly certain, it will speak the object’s name aloud.

“It always has a prediction,” Cohen explains, no matter where the phone is pointed, so the vibration function allows the user to zero in on objects the app has more confidence in identifying.
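
In rough terms, that feedback loop might look like the sketch below: take the network’s top prediction for each camera frame, map its confidence to vibration strength, and speak the label once the confidence clears a threshold. This is only an illustration of the behavior described above – the function name and the threshold value are hypothetical, not BlindTool’s actual code.

```python
# Illustrative sketch of BlindTool-style feedback (hypothetical, not the app's code).
# Assumes a classifier that returns (label, confidence) pairs for each frame.

SPEAK_THRESHOLD = 0.3  # made-up cutoff for speaking a label aloud

def feedback_for_frame(predictions):
    """predictions: list of (label, confidence) pairs, best guess first.
    Returns a vibration strength in [0, 1] and a label to speak (or None)."""
    label, confidence = predictions[0]  # the network always has a top guess
    vibration = confidence              # stronger vibration = more confidence
    spoken = label if confidence >= SPEAK_THRESHOLD else None
    return vibration, spoken

# A confident "banana" frame triggers strong vibration and speech:
print(feedback_for_frame([("banana", 0.82), ("lemon", 0.07)]))  # (0.82, 'banana')
```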

Similar object-identifying apps exist, but because they rely on sending each image to an external server to be identified, they take longer to work – and the server costs mean they can’t be offered for free.

“I felt it was important that BlindTool be available for free,” Cohen says, “and for that I needed it to run locally.”

Relying on external servers also introduces several seconds of lag – too slow to realize Cohen’s goal of virtually real-time identification.

At first, when porting over more complex “neural networks” to a mobile device, Cohen found that BlindTool took five seconds to identify objects.

“This was unbearably slow to me,” Cohen says. Now, BlindTool can identify objects in less than one second.

The trade-off, then, comes in accuracy. BlindTool operates entirely on the phone, meaning it doesn’t even need to be connected to the Internet to be used (and therefore doesn’t count against the user’s data plan!). But this also means the number of objects the app is “trained” to recognize is smaller, and because it relies on existing open-source software, the images it was trained on are, as Cohen puts it, “kind of random.”

“It’s trained on sharks, hens, ostriches,” he says, naming a few examples.

The app may therefore have difficulty identifying even common household objects – something the vibration and speech functions were designed to alleviate, since they give the user a sense of how certain the app is of its own identification.

Such issues are to be expected, however, when one considers how new the app is – Cohen developed it in mid-December 2015 – and how remarkably quickly he was able to produce a working prototype.

“It was about a day and a half,” he says. Cohen, demonstrating the drive and ingenuity that earned him a three-year fellowship from the National Science Foundation, likes to give himself what he calls “7-Hour Projects,” working on a single idea for the better part of a day. If it works after seven hours, he continues; if not, he puts it away to potentially return to later.

Currently Cohen is looking to enhance and expand BlindTool, and has a number of ideas on how to do so, including ways to train the app on the types of objects users who are blind or visually impaired would most need identified.

One of his ideas going forward is to crowdsource the dataset; users would send images with their own labels to Cohen, who would then use that information to re-train the app’s network to better identify the most popular objects.
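
The mechanics of that re-training step aren’t described in detail, but the selection part might work something like the sketch below, which simply tallies user-submitted labels and keeps the most popular ones as candidate classes. Everything here – the function name and the data format – is a hypothetical illustration, not Cohen’s pipeline.

```python
# Hypothetical sketch of crowdsourced class selection (not Cohen's actual pipeline).
from collections import Counter

def most_requested_classes(submissions, k):
    """submissions: iterable of (image_path, label) pairs sent in by users.
    Returns the k most frequently submitted labels to re-train on."""
    counts = Counter(label for _, label in submissions)
    return [label for label, _ in counts.most_common(k)]

submissions = [
    ("user1/photo.jpg", "coffee mug"),
    ("user2/photo.jpg", "house keys"),
    ("user3/photo.jpg", "coffee mug"),
]
print(most_requested_classes(submissions, k=2))  # ['coffee mug', 'house keys']
```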

The BlindTool app in action, correctly identifying a frying pan with less likely predictions listed below it

Cohen is also thinking of ways to allow users to customize the labels, including eliminating terms the user doesn’t need (or that the app misidentifies), or simply renaming them.

“If it calls something a ‘coffee cup,’ but you prefer ‘coffee mug,’ you could change that,” Cohen gives as an example.
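
One simple way such customization could work is a small, user-editable label map consulted before anything is spoken. The sketch below is purely illustrative – the article doesn’t say how BlindTool would actually store these preferences.

```python
# Hypothetical label customization (illustrative, not BlindTool's actual code).
user_renames = {"coffee cup": "coffee mug"}  # labels the user has renamed
hidden_labels = {"ostrich"}                  # labels the user has eliminated

def label_to_speak(predicted):
    """Apply the user's preferences to a raw prediction before speaking it."""
    if predicted in hidden_labels:
        return None                          # say nothing for removed labels
    return user_renames.get(predicted, predicted)

print(label_to_speak("coffee cup"))  # coffee mug
print(label_to_speak("ostrich"))     # None
```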

In his efforts to continue to develop BlindTool, Cohen has launched a Kickstarter campaign with a goal of raising $5,000 to help fund the next version.

Ultimately, Cohen hopes that BlindTool will help individuals who are blind or visually impaired remain independent.

“I want it to offer an extra amount of perception in the lives of those who cannot see,” he says.

The increasing prevalence of apps like these represents a new frontier in assistive technology, as ever more powerful devices allow for more advanced functions, even in the palm of your hand.

“[BlindTool] definitely would not have been possible 5 years ago,” says Cohen. In fact, he notes, the software needed to create it only came about a few months ago.

“It was the perfect storm,” he says of BlindTool’s creation. “Phones had become fast enough and the software was now available.”

And the future for assistive technology looks bright. Cohen points to advances in self-driving car technology, including Tesla’s new “Autopilot” feature, as one of the next revolutions in the field. Object recognition will expand in the coming years as well.

“Scene recognition might be the next thing people conquer,” Cohen says when asked what the evolution of this technology may be. In simple terms, “scene recognition” would entail a device being able to identify multiple objects within one image, something that is often a daunting task for current technology. Progress is already being made, however, including work by Facebook’s Artificial Intelligence team.

As for Cohen’s future, he will defend his dissertation at UMass Boston in the spring of 2016, and afterward intends to pursue a teaching career alongside his research while continuing to develop apps and technology, including BlindTool.

After all, the goal of assistive technology is a noble one, and Cohen expresses it well on BlindTool’s download page, where he explains why he is releasing it for free: “I hope this app will help people live a better life.”

Download BlindTool here. If you would like to support the further development of BlindTool, visit its Kickstarter page. To learn more about Joseph Paul Cohen or to contact him, visit his website.
