
This App Helps Visually Impaired Users Navigate The World With AI — Here’s How It Works

by JR Thorpe

Technology has a unique ability to make the world more accessible, and Lookout, a new app from Google, hopes to do exactly that. Google has announced that the AI-driven app describes the world to visually impaired users in real time, analyzing and recognizing everything from people to landmarks. The app is currently only available in the U.S. in English on Pixel devices, but Google "hope[s] to bring Lookout to more devices, countries and platforms soon," according to a blog post about the launch.

The AI behind Lookout is trained to work a bit like Google Lens, in that it attempts to categorize and then describe what appears around you, including words and people. "By holding or wearing your device (we recommend hanging your Pixel phone from a lanyard around your neck or placing it in a shirt front pocket), Lookout tells you about people, text, objects and much more as you move through a space," the developers wrote in the blog post.

It's designed for real-world use, and has three modes: Explore, Shopping and Quick Read. Shopping is focused on reading bar codes and currency; Quick Read reads text aloud, including signs and menus; and Explore, the app's default mode, highlights what's in view of the camera and describes it.

This is a big deal for multiple reasons. The number of people in the United States with visual impairments is expected to hit 8 million by 2050, according to a 2016 study funded by the National Institutes of Health, with another 16.4 million estimated to have difficulty seeing because of correctable conditions such as nearsightedness. Technology might help that growing population navigate the world more easily.

AI has also been at the forefront of new technologies to assist people with visual impairments; Microsoft, for example, has an app called Seeing AI that's designed to do the same job as Lookout, and the company recently added a feature that lets users touch a photo on their screen and hear a description of what it contains. Users can touch various parts of the picture to be told what they are, particularly details that might have been left out of an overall description. What color are the flowers a bride is holding? The touch function is designed to help.


AI functionality is still developing, and Google admits Lookout isn't perfect, explaining that the AI "detects items in the scene and takes a best guess at what they are." Best guesses may not always be accurate; AI systems need huge amounts of data to make their estimations, and they sometimes get things wrong.

And what goes into the data in the first place matters. Forbes reported in early 2019 that a user of Microsoft's Seeing AI was surprised when it read a child's expression as "contemptuous," which felt like a uniquely human judgment call to make. It's very early days for Google's Lookout, and it remains to be seen whether it can be both user-friendly and accurate in describing what's happening around its users. For people who rely on technology to help them access the world around them, it could be a game-changer.