As soon as Robin Spinks picks up the phone, he tells me to stop speaking. In the silence, he connects his wireless AirPods to his iPhone for the first time. So far, nothing unusual. But Spinks is visually impaired.
For years, technology for blind and visually impaired people has been a mess of clunky software and chunky hardware. Screen readers, which read out everything on a webpage, including URLs and date stamps (“Tech is helping the blind to see twenty eight september two thousand and seventeen w w w dot w i r e d dot c o m forward slash…”), have become a necessary evil. But, finally, assistive technology is catching up. Artificial intelligence is powering rapid developments in computer vision and voice recognition, taking much-needed accessibility features mainstream. As a result, people with disabilities can do more. For Spinks, that means “tremendous liberation”.
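The spelled-out URL above is what a basic screen reader produces when it hits text it cannot treat as ordinary words: punctuation becomes a spoken word, and unpronounceable strings are read letter by letter. A minimal sketch of that behaviour (illustrative only, not any real screen reader's code):

```python
# Illustrative sketch: how a naive screen reader might verbalise a URL.
# Punctuation maps to spoken words; everything else is read character by character.
SPOKEN = {".": "dot", "/": "forward slash", ":": "colon", "-": "dash"}

def speak_url(url: str) -> str:
    """Return the word-by-word speech a basic screen reader produces for a URL."""
    words = []
    for ch in url:
        words.append(SPOKEN.get(ch, ch))  # letters fall through unchanged
    return " ".join(words)

print(speak_url("www.wired.com/"))
# → w w w dot w i r e d dot c o m forward slash
```

Listening to whole pages rendered this way, character by character, is why screen readers earned their “necessary evil” reputation.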
Accessibility technology broadly comes in two forms: it is either incorporated into mainstream devices or available as a specialised product. There has been a boom in the latter category in recent years, and it's hoped new products will help the 285 million people worldwide who have visual impairments. “It's dreadful to see what is available to what is being rolled out,” says Kevin Phelan from Aira, a US-based firm developing a headset with an inbuilt camera for the visually impaired.
Aira's headset works by connecting to a mobile app and beaming the images it captures to one of the company's remote call-centre agents. The human on the other end of the line provides voice instructions to the wearer. I tested it in a London office block, with the help of a blindfold, and was guided around the building's lobby by the remote agent to eventually reach a coffee shop. At the counter, the agent read out the options and helped me find a member of staff.
“We've tried to build it with the community,” says Phelan, who has a child with a visual impairment. During this year's Boston Marathon, Erich Manser, who is legally blind, was helped through the 26.2-mile course using Aira's system, as well as by a human guide for safety.
More sophisticated products for assisting blind and visually impaired people are also coming to market. OrCam's glasses attachment uses machine vision to automatically detect what objects are in front of it. For instance, it can read out the text from road signs as you walk down the street. Once trained through its app, it can also recognise the faces of people you interact with and the products you pick up. Similarly, Microsoft's Seeing AI app uses a mobile phone's camera to identify the objects in front of it, before reading out what they are. Startup GiveVision, which won the WIRED Health startup competition this year, also promises to use augmented reality and virtual reality to help people see.
Much of the tech that can assist those with visual impairments is wearable. Increased mobile connectivity through 4G and Wi-Fi, cheaper cameras and sensors, and rapid advances in AI have accelerated many of the recent developments. “The advent of the head mounted displays have opened up a plethora of new opportunity for assistive devices,” says Jeffrey Fenton, the director of outreach at eSight. The startup has created a headset with an inbuilt HD camera that beams what's in front of you onto two OLED screens in front of your eyes. The company claims it can help the legally blind to see again.
The voice technology in Google's Home and Amazon's Echo is also helping those with sight issues to easily access information: voice control removes the need for complex interactions with screens and keyboards. “Because many of those technologies are principally voice-driven, that's actually starting out with a level playing field, where effectively everyone's blind because there is no screen,” Spinks says. He adds that there are still plenty of use cases for specialised software and hardware, but mainstream technology is helping to spread accessibility features further.
“We're seeing the mainstream players increasing their embedded accessibility,” he says. Back in April 2016, Facebook applied its machine learning to photos uploaded through its iOS app. The AI system analyses photos and suggests a caption describing what may be in the image. For instance, a holiday selfie taken on the beach may prompt a description of “two people, smiling, sunglasses, sky, outdoor, water”. This description can be accessed by screen-reading technology, which reads it aloud to someone with a visual impairment. Previously, it would simply have said “image”.
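The mechanism is simple once a vision model has produced its labels: they just need to be joined into a description that a screen reader announces in place of the bare word “image”. A minimal sketch (not Facebook's actual code; the labels and the “Image may contain” phrasing are illustrative):

```python
# Sketch of turning machine-vision labels into screen-reader alt text.
# Assumes some vision model has already produced a list of concept labels.
def alt_text(labels):
    """Join predicted labels into a description a screen reader can announce."""
    if not labels:
        return "image"  # the old fallback: no description at all
    return "Image may contain: " + ", ".join(labels)

# Hypothetical model output for a beach selfie.
print(alt_text(["two people", "smiling", "sunglasses", "sky", "outdoor", "water"]))
# → Image may contain: two people, smiling, sunglasses, sky, outdoor, water
```

The generated string is attached to the photo as its alternative text, which is what screen readers speak when they encounter an image.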
It's this wide-scale adoption of assistive features and techniques that excites Spinks. “Our vision of accessibility is that it is discreetly available to everyone,” he says. “Leverage it when you need it and it's clearly in the background when you don't need it. It's not a clunky, disruptive, memory-hogging piece of software that causes problems.”