

How Artificial Intelligence Can Help Accessibility

Written by Ben Leach

In this blog we look at how AI and machine learning make disabled people’s lives easier and discuss whether AI can help accessibility become easier to implement.

Technology seems to get better every single day. New and exciting apps and technologies are being introduced, all in the hope that they’ll make our lives easier and more efficient.

A lot of people in the technology industry are aware of digital accessibility, or at least they should be: making the web a better place for people with disabilities.

Both accessibility and AI make people’s lives easier and more efficient. But what if they could help each other get better? Can accessibility make AI better, and can AI make accessibility easier to implement?

In this blog, we’ll discuss how AI can help with accessibility.

What is AI? 

Most of the time, technology runs on complex algorithms or code that has been developed by a human. Usually, these computers do not deviate from this code because they are not instructed to.

Artificial intelligence is the ability of a computer to demonstrate traits more often associated with the human mind, namely “learning” and “problem solving”, by deviating from its original algorithm or code. The problem with the term artificial intelligence is that it’s evolving all of the time: what was considered artificial intelligence 10 years ago is considered normal now.

What is digital accessibility? 

If you aren’t already aware, digital accessibility means that websites, apps and digital tech are built so that people with disabilities or access issues can use them. This may mean ensuring they can be used with screen readers, or that alternative provisions are available for disabled people.

How can AI help accessibility?

There are many ways that AI can assist with accessibility, and in many ways, it already has: from reducing the technology barriers that exist for disabled people, to helping websites and content become more accessible with less effort.

As AI gets integrated into everyday digital products, businesses are beginning to understand its importance. AI integration in software and web infrastructure has empowered a greater number of people with different abilities.

Image Recognition 

Google launched the Lookout app last year to help blind people learn about their environment. Using machine learning and image processing, the app is able to identify objects surrounding the user and narrate the environment.

There is also Microsoft’s Seeing AI, which can do a number of things, including:

  • read short text,
  • read documents,
  • scan barcodes and give the user information about that product,
  • recognise and describe people in front of the user’s phone,
  • describe the scene around the user,
  • read currency on bills and receipts.

The only way these products are developed is through complex machine learning – by going through many thousands of iterations and learning from the last. Seeing AI also has the ability to recognise and read handwriting – something which would’ve seemed so out-of-reach for computer technology even 10 years ago. 
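To give a flavour of what sits underneath apps like Lookout and Seeing AI, here is a minimal sketch of object recognition with a pre-trained image classifier. It is not the actual code behind either app; the model, image file name and spoken-style output are assumptions for illustration only.

```python
# A minimal sketch of object recognition, loosely in the spirit of Lookout /
# Seeing AI. Uses a general-purpose pre-trained classifier; the image path is
# a made-up example.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights)
model.eval()

preprocess = weights.transforms()  # resize, crop and normalise as the model expects

image = Image.open("kitchen_counter.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    probabilities = model(batch).softmax(dim=1)[0]

# Report the three most likely objects, much as an assistive app might narrate them.
top = probabilities.topk(3)
for score, idx in zip(top.values, top.indices):
    print(f"{weights.meta['categories'][idx.item()]}: {score.item():.0%}")
```

A real assistive app would run continuously on a camera feed and speak the results aloud, but the underlying idea is the same: a model trained on many thousands of labelled images making its best guess about a new one.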

Automatic Alt Text

In a similar vein to image recognition, Facebook was the first social network to introduce an automatic alt text feature. Granted, it still needs some work for complex images with numerous items, but it helps.

Alt text provides a text description of an image that a screen reader can read aloud to a blind user, ensuring they understand the context of what is being shown. Whilst Facebook’s automated alt text is not fully polished, it helps blind users understand the web a whole lot more.

It’s no surprise to learn that there are technology groups aiming to eradicate the need for manually written alt text altogether, so that screen readers will have built-in image recognition that generates alternative text for them. For now, however, we need to stick to entering it manually.
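As a rough illustration of where this is heading, automated alt text can already be prototyped with publicly available image-captioning models. The sketch below is not Facebook’s system; the model name and file name are assumptions, and any generated caption should still be checked by a human before it is published.

```python
# A rough sketch of automatically generating alt text with an off-the-shelf
# image-captioning model. Model choice and file names are illustrative only.
from transformers import pipeline

captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

result = captioner("team_photo.jpg")
alt_text = result[0]["generated_text"]

# The caption can then be used as a fallback alt attribute on the page.
print(f'<img src="team_photo.jpg" alt="{alt_text}">')
```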

Facial Recognition

Facial recognition has always been debated in the world of accessibility: it makes some things easier, but usually at the expense of a user’s privacy. Artificial intelligence can now work out who is in front of a camera by analysing data, usually numerous photos of a person’s face from different angles.

Since 2017, Apple have been using facial recognition technology to allow people to unlock iPhones. And Microsoft has its Windows Hello software, which operates in a similar way to Apple’s Face ID.

Unlocking devices by facial recognition is useful in itself, but what if website CAPTCHA tests could use facial recognition to confirm that a real person is present, removing the barrier that CAPTCHA codes pose for disabled users? Of course, this has privacy implications, but done in the right way it could be a game-changer.

Lip Reading to Subtitles

Back in 2018, researchers came up with a way of making video editing much more streamlined: Deep Video Portrait. Using artificial intelligence, editors can now edit the facial expressions of actors or singers in films and music videos and accurately match dubbed audio.

The AI technology allows editors to alter eye movement, facial movement, facial expressions and many minute face animations without having to re-record footage. It has already been used successfully to sync dubbed recordings with an actor’s lips, and is at the proof-of-concept stage for re-positioning actors’ faces in films to better tell a story in context.

This could help make video more accessible natively. If an AI like Deep Video Portrait can accurately match audio to an actor’s lips, and that audio has been transcribed, it could easily be turned into subtitles. Integrating this into a mobile device could potentially allow hearing-impaired people to interpret what others are saying.

In fact, Google’s DeepMind has researched lip reading using more than 100,000 natural sentences from BBC videos. It ran these videos (on mute) past its own neural networks, and again past some professional lip-readers, with interesting results: the lip-reading professionals transcribed what was being said in a video without error about 12.4% of the time, whereas DeepMind’s AI managed it 46.8% of the time.

If something like DeepMind’s lip-reading research were integrated into the web, automated captions could become significantly more accurate.
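Whatever produces the transcript, whether a speech-recognition model, a lip-reading system or a human captioner, the final step of turning timed text into captions a browser can display is straightforward. The sketch below writes cues out as WebVTT, the caption format used by the HTML video element; the timings and wording are invented for illustration.

```python
# A small sketch of turning timed transcript text into WebVTT captions that a
# <video> element can display. The cue data below is made up.
def to_timestamp(seconds: float) -> str:
    hours, rem = divmod(seconds, 3600)
    minutes, secs = divmod(rem, 60)
    return f"{int(hours):02d}:{int(minutes):02d}:{secs:06.3f}"

def to_webvtt(cues):
    lines = ["WEBVTT", ""]
    for start, end, text in cues:
        lines.append(f"{to_timestamp(start)} --> {to_timestamp(end)}")
        lines.append(text)
        lines.append("")
    return "\n".join(lines)

# Hypothetical output from a captioning model: (start, end, text) tuples.
cues = [
    (0.0, 2.5, "Welcome to the programme."),
    (2.5, 5.0, "Tonight we look at accessibility and AI."),
]
print(to_webvtt(cues))
```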

Automated Summaries

There’s something close to 24,000 gigabytes’ worth of information uploaded to the internet every single second. In fact, on the content management system and blogging platform WordPress, more than 42.6 million posts are published every single month. That works out at roughly 16 posts every second (42.6 million posts divided by the roughly 2.6 million seconds in a month). Which means, even if we’re focusing on a single subject, that’s a lot of information, far too much for any human mind to comprehend.

For a person with processing difficulties, reading even one of those 42.6 million posts can pose a significant challenge. Which is why Google has had a hand in developing a TensorFlow-based tool that can generate a single-line summary of any news article uploaded to the web. This will no doubt help those with processing difficulties digest the latest news without having to extract information from lengthy news reports.
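As an illustration of the idea only, a short summary of an article can already be produced with publicly available tools. The sketch below is not Google’s TensorFlow research model; it stands in with an off-the-shelf summarisation pipeline, and the model name and file path are assumptions.

```python
# A hedged sketch of condensing a long article into a short summary using a
# public summarisation model. This is a stand-in, not Google's own tool.
from transformers import pipeline

summariser = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

with open("news_article.txt") as f:
    article = f.read()

# truncation=True keeps very long articles within the model's input limit.
summary = summariser(article, max_length=30, min_length=10,
                     do_sample=False, truncation=True)
print(summary[0]["summary_text"])  # a short, headline-style summary
```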

Increasing independence

One of the most advanced systems out there for incorporating AI into its design is Amazon’s smart assistant, Alexa. With thousands of downloadable skills, the opportunities for what can be done with the software are truly endless.

Now being developed (and currently only available in the US) is Amazon’s Show and Tell feature. This allows visually impaired users to hold up products to an Alexa device with a camera, and the system will identify the product it is being shown.

This is a huge leap forward for technology, allowing blind users to cook independently and understand exactly what they are holding. Alexa can also pair with other home devices, allowing blind users to control their heating, kettle, radio, alarms and calendars, all by talking to Alexa.

How far away are we from accessibility being able to rely on AI?

Whilst there are some positive signs of progress in the world of AI-supported accessibility, most of the products and programs mentioned here are in the very earliest phases of testing or have a less-than-impressive success rate.

It’s imperative that developers and engineers continue to build tools that will support those with disabilities and make their lives online easier.