Technologies with the potential to make the web more inclusive are constantly being created. Many of these developments can be considered ‘assistive technologies’. These are tools – from machines to pieces of software – that help people with a wide range of impairments and access needs overcome barriers in their lives.
Of course, not all technology has to be designed to solve one specific problem, nor do they have to help everyone in the same way. Think about automatic doors, for example: they make life easier for wheelchair users and Sign Language speakers (who have to pause their conversations to turn a door handle) - but they also help anybody with their hands full. Here, the same technology helps different people for different reasons. This is an example of universal design at work.
This concept aims to ensure that everything we make can be “accessed, understood and used to the greatest extent possible by all people regardless of their age, size, ability or disability” – The Centre for Excellence in Universal Design. 1
With this in mind, let’s take a look at a few of the many recent advancements in new technology and artificial intelligence, and how they’re making the world better for those with access needs.
The introduction of voice assistant software – like Amazon Alexa, Apple’s Siri, and Google Assistant – has meant that assistive technologies have effectively gone mainstream. According to industry tracker Voicebot.ai, voice assistant users rose to 66.4 million in the U.S. alone in 2018, a 40% increase from 2017. 2 In fact, research firm Juniper predicts that voice assistants will be used by 275 million people worldwide by 2023. 3 They are already built into every major mobile device, and also live in homes as standalone devices, ready to provide anything from music to weather and news information. But why are they so useful for those with disabilities and access needs?
They don’t require sight
This is one of the most obvious benefits – issuing commands and receiving information through voice alone removes a lot of barriers for those with blindness and low vision. The Royal National Institute for the Blind (RNIB) have written about devices like this, calling them an “enormous benefit to many people – in particular, people who have a vision impairment.” In fact, Ellie Southwood, the chair of the RNIB, said the following about her Amazon Echo Dot at the AI & Disability conference TechShare Pro:
The Echo Dot makes me feel included…I spend far less time searching for things online; I can multitask while online and be more. 4
They don’t require lots of extra equipment
“Remember when you had to buy a computer for £500, then pay another £600 for special software?” 5 – Robin Spinks, senior strategy manager at RNIB Solutions
Where accessing the internet previously required third-party software or custom setups, accessibility in voice assistants now comes built in. Technology like this sets a great precedent, both technically and financially: “You fundamentally change the economics by building accessibility in”.
They avoid potentially complex and damaging experiences

You don’t have to visit news sites with endless modals and popups, nor do you risk being asked to join a site’s mailing list twelve times over. You simply ask a voice assistant a question and receive the answer.
They don’t require a physical input to retrieve information
This is great for those with motor disabilities, or cognitive impairments that affect hand-eye coordination. Like many accessibility features we’ve mentioned, this will benefit all users: studies have suggested that 55% of people use digital voice assistants because it allows them to keep their hands free. 6
These capabilities could well remove the need for physical input in other aspects of a user’s life in the future as well. One user has even adapted an Amazon Echo (along with a small computer called a Raspberry Pi) to move a wheelchair purely through voice commands.
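To give a feel for how a setup like that hangs together, here’s a minimal sketch of the “translate a recognised phrase into a motor action” step. Everything here is an assumption for illustration – the command phrases, and the idea of representing an action as a (direction, speed) pair, are hypothetical stand-ins for whatever the real Echo/Raspberry Pi project uses:

```python
# Illustrative only: map recognised voice commands to wheelchair motor
# actions. Phrases and the (direction, speed) action format are
# hypothetical, not taken from the actual project.

COMMANDS = {
    "go forward": ("forward", 1.0),
    "go back": ("backward", 1.0),
    "turn left": ("left", 0.5),
    "turn right": ("right", 0.5),
    "stop": ("stop", 0.0),
}

def interpret(phrase: str):
    """Return a (direction, speed) action for a recognised phrase, or None."""
    return COMMANDS.get(phrase.strip().lower())
```

In a real build, the recognised phrase would arrive from the voice assistant and the returned action would drive the chair’s motor controller; unrecognised phrases safely do nothing.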
They combat low tech literacy
You no longer need to understand how to use a computer in order to receive information, or how to navigate Spotify to play a song, so anyone who struggles with technology will benefit. Someone can now say to a device in their home, “Play me some country music” and that’s all there is to it (why they’d ask it to do that is beyond me, but each to their own).
They allow for focus on speech alone
The fact that users communicate with a voice assistant through speech alone can help those that have trouble handling the other aspects involved in typical interpersonal communication (for example social cues or body language).
For this reason, smart devices can be useful to those with certain cognitive differences, such as autism. Users with autism often struggle with the wider social aspects of communication, but can find it much easier to communicate with voice assistants. This is because ‘They don’t have to contend with trying to understand nuanced body language, facial expressions, moods or the million-and-one other things that can be happening every time we talk to someone.’ Some users with autism have even been able to form in-depth relationships with applications like Siri. An American writer documented her son’s experience with Siri for the New York Times back in 2014, explaining how it helped him to satisfy his constant curiosity about any subject, and even improve his communication with other people:
For most of us, Siri is merely a momentary diversion. But for some, it’s more. My son’s practice conversation with Siri is translating into more facility with actual humans. Yesterday I had the longest conversation with him that I’ve ever had. Admittedly, it was about different species of turtles and whether I preferred the red-eared slider to the diamond-backed terrapin. This might not have been my choice of topic, but it was back and forth, and it followed a logical trajectory. I can promise you that for most of my beautiful son’s 13 years of existence, that has not been the case. 7
Note: Making speech recognition even more inclusive

As inclusive as voice assistants can be, though, they still require a certain standardised version of speech to interpret commands. There was a time, for example, when Siri was incapable of understanding the Scottish accent. Even today, this barrier still exists for those with non-standard speech resulting from strokes, aneurysms, and the effects of conditions such as Cerebral Palsy and Parkinson’s. To combat this, a company called Voiceitt are using a hybrid of statistical modeling and machine learning to create voice recognition software for those with non-standard speech – thereby allowing these people to interact with an entirely new realm of technology and devices.
Smart devices to monitor health
TruSense

TruSense have developed an Alexa-powered Personal Emergency Response pendant. The GPS Smart Pendant has a two-way help button and can identify when a potential fall has occurred, triggering a notification to family members and the 24/7 emergency response center.
Apple Watch

Siri is available in almost every Apple device, including the Apple Watch, and the newest version is capable of generating a medical-quality ECG. Even greater than that, though, is its ability to detect falls and accidents, and to take action. There was a famous story a couple of years ago where a father who sustained serious injuries after falling off his bike in the woods was saved by his Apple Watch. Bob Burdett was out mountain biking when he suffered a nasty fall, flipping his bike and knocking himself unconscious after hitting his head. Fortunately, Bob was wearing his Apple Watch at the time, which not only sent a text to his son saying it had ‘detected a Hard Fall’, but also notified emergency services and shared his location with them. 8
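The core intuition behind fall detection is surprisingly simple to sketch: a hard fall shows up as a sharp spike in acceleration, followed by a period of stillness (the wearer isn’t moving). Here’s a deliberately naive illustration of that idea – the thresholds are assumptions I’ve picked for the example, and real devices use far more sophisticated, trained models:

```python
import math

# Toy sketch of fall detection: look for a spike in acceleration magnitude
# followed by readings that sit near 1g (gravity only, i.e. no movement).
# The 3g impact and 0.2g stillness thresholds are illustrative assumptions.

IMPACT_THRESHOLD_G = 3.0
STILLNESS_THRESHOLD_G = 0.2

def magnitude(sample):
    x, y, z = sample
    return math.sqrt(x * x + y * y + z * z)

def looks_like_hard_fall(samples):
    """samples: list of (x, y, z) accelerometer readings in g, oldest first."""
    mags = [magnitude(s) for s in samples]
    for i, m in enumerate(mags):
        if m >= IMPACT_THRESHOLD_G:
            after = mags[i + 1:]
            # stillness after the spike suggests the wearer isn't moving
            if after and all(abs(a - 1.0) < STILLNESS_THRESHOLD_G for a in after):
                return True
    return False
```

A production system would also wait for the wearer to respond (“I’m OK”) before escalating, which is exactly the step that contacted emergency services in Bob’s story.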
Let’s also not forget the other features in these new devices that, despite being around for years, are still providing help to those in need. Assistance can be as simple as regular reminders for those with memory issues to complete tasks or take medication, or GPS tracking on devices like the Apple Watch and the TruSense pendant, which grants freedom to those who can travel without assistance while alerting those who care for them if they don’t arrive at places they’re expected.
Over 166 million adults in the United States play video games, and three-quarters of all Americans have at least one gamer in their household. 9 In the UK the number is smaller, yet still staggering, coming in at 37.3 million. 10 Gaming has also been found to help combat depression and other mental health problems. For many, it represents the chance to socialise (particularly given the nature of 2020). However, the ways in which we interact with a console or PC are still very standardised. Much like screen readers used to be, gaming via alternative inputs continues to be a customised and costly expenditure. Thankfully, Microsoft have been doing some great work to bridge that gap.
The Microsoft Adaptive Controller is a way to play games on Xbox consoles without a standard controller. It has a bigger surface area with larger hit areas that don’t require the precision of joysticks and, most importantly, it can connect via USB to any piece of pre-existing kit that someone may already have. Each input can be mapped to any button on a controller, so that the same controller can be used in thousands of different ways depending on how the player wants to configure it.
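Conceptually, that mapping step is just a lookup table from “which port did the signal come from” to “which button should it trigger”. Here’s a small sketch of the idea – the port names, button names, and `ButtonRemapper` interface are all hypothetical, purely to illustrate how one device can serve thousands of configurations:

```python
# Hypothetical sketch of the remapping idea behind an adaptive controller:
# any external switch plugged into a port can be assigned to any standard
# controller button. Port and button names here are illustrative.

class ButtonRemapper:
    def __init__(self):
        self.mapping = {}

    def assign(self, port: str, button: str):
        """Assign the switch on `port` to trigger `button`."""
        self.mapping[port] = button

    def translate(self, port: str):
        """Return the controller button an external switch should trigger."""
        return self.mapping.get(port)

remapper = ButtonRemapper()
remapper.assign("left-usb", "A")    # e.g. a large dome switch acts as "A"
remapper.assign("right-usb", "RT")  # e.g. a foot pedal acts as the right trigger
```

The power of this design is that the accessibility logic lives in software: swapping a foot pedal for a sip-and-puff switch means changing one table entry, not buying new hardware.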
What’s more, it only costs $149. That may seem like a lot but, in comparison to the potential thousands spent on custom setups, it drastically lowers the barrier to entry for any new gamer with an access need. Microsoft’s tagline is “when everyone plays, we all win” and I think that’s the perfect way to look at it. You can watch the trailer here.
The term artificial intelligence (AI) has become virtually synonymous with the idea of new and emerging technology and yet, if you asked people what it actually is, you’d be hard-pressed to get a clear answer – it’s usually much easier to describe the sorts of things that AI can do. So, let’s look at some advancements that artificial intelligence is making in the world of accessibility:
Providing information about images
One of the most common accessibility issues is a lack of alternative text for images, which means people who are blind or have sight loss could be missing important information. There have been a host of success stories recently, with large companies using new technology to address this problem. Google’s Cloud Vision API uses a neural network not only to classify images, but also to extract text embedded in them. This is achieved through Optical Character Recognition (OCR) technology, which can ‘read’ the text and display it alongside the image, ensuring that no valuable information is trapped in an image.
In a slightly different use case, Facebook has been working for the past few years on automatically adding alt text to images that are uploaded to its platforms. Every day, people share more than 2 billion photos across Facebook, Instagram, Messenger, and WhatsApp, so they set about creating a neural network that could understand what is going on in an image and make that information available to screen readers. At the time of writing it can detect “objects, scenes, actions, places of interest, and whether an image/video contains objectionable content.” Right now, they start every alt text entry with “Image may contain…” as they try to perfect its ability to analyse an image. This is a brilliant piece of work from the world’s most used social network that will help anyone using a screen reader.
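The last step of that pipeline – turning a model’s detections into the hedged “Image may contain…” string – is easy to sketch. Note that the labels, confidence scores, and the 0.8 threshold below are my own assumptions for illustration, not Facebook’s actual values:

```python
# Illustrative sketch: convert an image model's detected concepts (with
# confidence scores) into a hedged alt-text string. The confidence
# threshold of 0.8 is an assumption for the example.

def build_alt_text(detections, threshold=0.8):
    """detections: list of (label, confidence) pairs from an image model."""
    confident = [label for label, score in detections if score >= threshold]
    if not confident:
        return "Image"  # nothing the model is sure enough about
    return "Image may contain: " + ", ".join(confident)
```

The hedged phrasing is the interesting design choice: the screen reader user is told up front that this description is a machine’s best guess, not a human-written caption.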
Providing automatic video captioning
YouTube has been developing speech-recognition technology, using machine-learning algorithms to automatically generate captions for its videos. They have stated that “the quality of the captions may vary” at this point but, as with any machine learning, the longer it runs the smarter it’ll get. Importantly, any generated captions can easily be edited by the person who uploaded the video should they contain any incorrectly transcribed speech.
This also improves the system’s accuracy for future captions, as it helps the AI to understand where it went wrong. This technology holds the potential to provide nearly immediate accessibility for one of the most popular websites, and mediums, on the internet – helping those who are deaf, have hearing loss, or encounter a language barrier engage with content freely, and eventually, without having to wait for captions to be added or edited manually.
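Part of what makes hand-correction so easy is that captions are ultimately just timed text. As a rough illustration, here’s a minimal builder for the widely used SubRip (.srt) format – one numbered block per cue, with start and end timestamps – which is one of the formats uploaders can edit by hand:

```python
def srt_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timestamp, e.g. 3.5 -> '00:00:03,500'."""
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def build_srt(cues):
    """cues: list of (start_seconds, end_seconds, text) tuples."""
    blocks = []
    for i, (start, end, text) in enumerate(cues, start=1):
        blocks.append(f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}")
    return "\n\n".join(blocks) + "\n"
```

A speech-recognition system fills in the `text` fields automatically; a human correcting a mis-heard word only has to edit one line of plain text.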
Google’s DeepMind division has also been using AI to generate closed captions based on lip reading. In a 2016 joint study with the University of Oxford, DeepMind’s algorithm watched over 5,000 hours of television and analysed 17,500 unique words. It then went head-to-head with a professional lip reader over 200 random video clips, and won comfortably – transcribing 46.8% of words without error, compared to the lip reader’s 12.4%. 11
Providing human-level language translation
In April 2018, Microsoft announced its free translator app, where audio is translated into other languages and into text for captions. It showed for the first time “a Machine Translation system that could perform as well as human translators (in a specific scenario – Chinese-English news translation)”. This was a major breakthrough and, in the year since, they’ve managed to make huge strides in the system’s ability to provide accurate translations for other languages. It now comes as a mobile app, on all major platforms, that can provide real-time translation even when the device is offline. This is really useful for people who regularly interact with content that isn’t in their first language, and for those who are deaf.
Providing protection against spam
Spam detection has existed in email for a while, but spam campaigns are getting both more damaging and more convincing. For example, the National Fraud Intelligence Bureau (NFIB) noticed a large number of fake TV licensing emails being sent out in September 2018, asking for users’ personal and financial information. They estimated that victims lost over £830,000 through this campaign alone. 12
Google use TensorFlow to protect vulnerable users from being scammed, and it’s now clever enough to block an extra 100 million spam messages a day.
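To see the basic idea behind learned spam filtering, here’s a toy sketch: score a message by how often its words appeared in known spam versus known legitimate mail. This naive word-frequency model is nothing like Google’s actual TensorFlow systems – it’s just an illustration of learning from labelled examples rather than hand-written rules:

```python
from collections import Counter

# Toy illustration of learned spam filtering. The training texts and the
# word-frequency scoring are illustrative; real filters use trained neural
# networks over far richer features than individual words.

class TinySpamFilter:
    def __init__(self):
        self.spam_words = Counter()
        self.ham_words = Counter()

    def train(self, text: str, is_spam: bool):
        """Count the words of a labelled example message."""
        counter = self.spam_words if is_spam else self.ham_words
        counter.update(text.lower().split())

    def spam_score(self, text: str) -> float:
        """Fraction of words more common in spam than in legitimate mail."""
        words = text.lower().split()
        if not words:
            return 0.0
        spammy = sum(1 for w in words if self.spam_words[w] > self.ham_words[w])
        return spammy / len(words)
```

The key property, shared with the real systems, is that the filter improves as more labelled examples arrive: a new scam phrasing only needs to be reported a few times before its vocabulary starts scoring highly.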
Providing information about a user’s surroundings
One of my personal favourite uses of artificial intelligence is Microsoft’s Seeing AI application, which has “changed the lives of the blind and low vision community”. A user can point their device’s camera at a range of situations, and the app will interpret what it can see using artificial intelligence and inform the user audibly. At the time of writing it’s available in 35 countries, and can do things like read out short pieces of written text, identify currency, describe products by reading their barcode, understand documents, and even describe people around you and their emotions.
As you can see, there is a wealth of progress being made in the emerging technology and artificial intelligence sectors. Some of the world’s largest companies are investing heavily – in both time and resources – to develop solutions that help many people, including those with disabilities and access needs. The resulting developments make for an exciting time in accessibility, with the potential to radically alter (and improve) how those with a wide array of access needs interact with technology, and indeed the wider world around them.
If you’d like to learn more about accessibility in a format just like this, then I’ve written a book about it! It’s called “Practical Web Inclusion & Accessibility”, and you can learn more about it here.
I’ve also recently started consulting with companies again in order to help them improve their approach to accessibility. If that sounds interesting to you, you can learn more here.