Data-driven solutions and decisions “based on data” are often seen as the magic bullet for any business challenge, deployed across multiple industries and championed by everyone from finance to design. Yet data can create problems too, not least as a source of inequality that fails people with disabilities and undermines your business decision-making.
So what’s the solution? We know that data has plenty of positives. It can help inform business strategy, and it can validate the choices a business makes. But too often we absent-mindedly rely on data for the full picture, when in fact it’s far from complete.
The data says I don’t exist
Data collection relies on the past to predict the future. Take the creation of user profiles and personas. We track the users we already have, or collect only the data it’s possible for us to collect, which leaves us with an incomplete dataset. If someone is already excluded, the dataset will simply confirm that person doesn’t exist.
Now imagine a shopkeeper with a store at the top of a flight of stairs. When asked whether they need a ramp for access, they might say “no, I don’t have any customers that use a wheelchair.” The data might be correct, but it neither indicates there’s an accessibility problem for potential customers nor motivates the shopkeeper towards a solution.
It’s this kind of thinking that creates issues. It prevents access to digital experiences for those with disabilities. And it’s a Catch-22 situation that prevents businesses from developing inclusive products and services. If you believe certain groups are not using your products, you’re not motivated to build products for those groups.
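The self-confirming loop above can be sketched as a toy simulation. All names and numbers here are illustrative assumptions, not data from any real store: filtering on who can already get in guarantees the excluded group never appears in the record.

```python
# Toy simulation of survivorship bias in "customer data".
# Every field name and value is an illustrative assumption.
population = [
    {"id": 1, "uses_wheelchair": False, "can_enter_store": True},
    {"id": 2, "uses_wheelchair": True,  "can_enter_store": False},  # excluded by the stairs
    {"id": 3, "uses_wheelchair": False, "can_enter_store": True},
]

# The shopkeeper's "data" only ever covers people who made it inside.
observed_customers = [p for p in population if p["can_enter_store"]]

# Counting within the observed data confirms the exclusion rather than revealing it.
wheelchair_users_seen = sum(p["uses_wheelchair"] for p in observed_customers)
print(wheelchair_users_seen)  # 0 -> "I don't have any customers that use a wheelchair"
```

The count is accurate for the data collected, which is precisely the problem: the collection method removed the evidence of the access barrier before it could be measured.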
I’d advise treating data with a healthy dose of skepticism.
Bridging the gap
Recognizing the limitations of your data is another step towards digital accessibility.
Incomplete data is almost inevitable given the analytical tools at our disposal. Typical tools track by traditional methods, including clicks and page views. Again, this means that people who can’t access a product in the first place won’t show up in the data.
Remembering that there are huge gaps in the data is important — and remembering that incomplete data creates a significant bias problem even more so. Whether intentional or accidental, past data has bias built into it. Relying on that data without due thought will enshrine the bias even deeper into the system and our decision-making.
Consider too some of the practical issues around capturing data on assistive technology usage, such as screen readers. The W3C sets the standards for a web for all. In its core principles of API architecture, it states: ‘Make sure that your API doesn’t provide a way for authors to detect that a user is using assistive technology without the user’s consent.’
It’s another Catch-22. How will you know if people have access problems if you can’t detect them in the first place? And even if you could, you cannot fully understand why a user is employing assistive technology. Users find all sorts of workarounds, so they might be using assistive tech because of a disability, or an impairment, or for any number of related or unrelated reasons.
Aggregated data, such as that from WebAIM, can provide some information to fill in some gaps. In the U.K., other commonly used and well-respected sources include the Office for National Statistics and charities including Scope.
These are sources of real user feedback, peer-reviewed for credibility. Their limitation is that the data is often outdated and too general for market- or segment-specific needs.
The best advice here is to understand what your dataset can and can’t provide, and to always stay aware of the impact of sample size.
The ethics of data collection
When it comes to pursuing the aim of greater digital inclusivity, it’s easy to fall into traps that do the opposite. A user test via Zoom or Microsoft Teams can end up being more of a test of the remote software than of your product or design. And introducing new content when A/B testing can create inconsistencies that skew your data and exclude users.
Before collecting data, you need to ask what you will use it for. There is a danger that in collecting data to help people with disabilities you will create new silos instead.
If you’re only collecting data to send users with disabilities somewhere other than your main digital experience, then you’re using data unethically. Also, when you do track, make sure you’re tracking a wide range of disabilities, not just groups such as the partially sighted or deaf. And remember that some disabilities cannot be tracked by even the most advanced technology.
However, if you understand the data you have and don’t rely on it completely, you can move towards greater digital accessibility.
Sometimes exclusion is unavoidable. So factor in ways to predict it early instead and plan for alternative routes. And always ask yourself and your teams ‘who are we going to exclude by doing this?’
Finally, data is optional while ethics should be compulsory. Question whether you even need certain data that can contribute to exclusion.
The answer is to assume that people with disabilities will be trying to access your web platform and to build that into your design. Co-create alongside those who face accessibility issues and make your digital experiences accessible to people with disabilities from the start. Build with them, not for them, and remember the maxim “nothing about us without us.”
Do this and they will feel like they exist, whatever your data says.
Kevin Mar-Molinero is director of Experience Technologies at digital transformation agency Kin + Carta. He sits on the BIMA Inclusive Design Council and is a member of the W3C COGA group.