Why diversity should have a critical impact on data privacy



The California Privacy Rights Act (CPRA), Virginia Consumer Data Protection Act (VCDPA), Canada’s Consumer Privacy Protection Act (CPPA) and many more international regulations mark significant improvements in data privacy over the past several years. Under these laws, enterprises may face grave consequences for mishandling consumer data.

For instance, in addition to the regulatory consequences of a data breach, laws such as the CCPA (which the CPRA amends and expands) allow consumers to hold enterprises directly accountable for data breaches under a private right of action.

While these regulations certainly toughen the consequences for misusing consumer data, they are still not enough, and may never be enough, to protect marginalized communities. Almost three-fourths of online households fear for their digital security and privacy, and those concerns are most acute among underserved populations.

Marginalized groups are often negatively impacted by technology and can face great danger when automated decision-making tools like artificial intelligence (AI) and machine learning (ML) encode biases against them or when their data is misused. AI technologies have been shown to perpetuate discrimination in tenant selection, financial lending, hiring processes and more.

Demographic bias in AI and ML tools is quite common, in part because design review processes often lack the human diversity needed to ensure prototypes are inclusive of everyone. Technology companies must evolve their current approaches to AI and ML to ensure they are not negatively impacting underserved communities. This article explores why diversity must play a critical role in data privacy and how companies can create more inclusive and ethical technologies.

The threats that marginalized groups face

Underserved communities are prone to considerable risks when sharing their data online, and unfortunately, data privacy laws cannot protect them from overt discrimination. Even if current regulations were as inclusive as possible, there are many ways these populations could be harmed. For instance, data brokers can still collect and sell an individual’s geolocation data to groups targeting protesters. Information about an individual’s participation in a rally or protest can be used in a number of intrusive, unethical and potentially illegal ways.

While this scenario is hypothetical, many similar situations have played out in the real world. A 2020 research report detailed the data security and privacy risks LGBTQ people are exposed to on dating apps. Reported threats included blatant state surveillance, monitoring through facial recognition and app data shared with advertisers and data brokers. Minority groups have always been susceptible to such risks, but companies that make proactive changes can help reduce them.

The lack of diversity in automated tools

Although the technology industry has made incremental progress in diversifying over the past few years, a fundamental shift is needed to minimize the bias perpetuated by AI and ML algorithms. In fact, 66.1% of data scientists are reported to be white and nearly 80% are male, underscoring a dire lack of diversity on AI teams. As a result, AI algorithms are trained on the views and knowledge of the teams building them.

AI algorithms that aren’t trained to recognize certain groups of people can cause substantial damage. For example, the American Civil Liberties Union (ACLU) released research in 2018 showing that Amazon’s “Rekognition” facial recognition software falsely matched 28 members of the U.S. Congress with mugshots. Notably, 40% of the false matches were people of color, even though people of color made up only 20% of Congress. To prevent future instances of AI bias, enterprises need to rethink their design review processes to ensure they are inclusive of everyone.
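The disparity in that finding can be made concrete with a simple per-group false match rate calculation, the kind of check a design review could run on its own evaluation results. The sketch below is illustrative only: the group labels and counts are hypothetical stand-ins shaped like the ACLU numbers, not the ACLU’s actual data.

```python
from collections import defaultdict

def false_match_rate_by_group(results):
    """Compute the false match rate per demographic group.
    `results` holds (group, falsely_matched) pairs; labels are hypothetical."""
    totals = defaultdict(int)
    false_matches = defaultdict(int)
    for group, falsely_matched in results:
        totals[group] += 1
        if falsely_matched:
            false_matches[group] += 1
    return {group: false_matches[group] / totals[group] for group in totals}

# Hypothetical results loosely shaped like the ACLU test: 535 members of
# Congress, ~107 people of color (20%), 28 false matches of which ~11 (40%)
# were people of color.
results = ([("people_of_color", True)] * 11 + [("people_of_color", False)] * 96
           + [("white", True)] * 17 + [("white", False)] * 411)

for group, rate in false_match_rate_by_group(results).items():
    print(f"{group}: false match rate {rate:.1%}")
```

On these stand-in numbers, the false match rate for people of color comes out roughly two and a half times higher than for white members, exactly the kind of gap a review process should surface before a product ships.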

An inclusive design review process

There may be no single fix for mitigating bias, but there are many ways organizations can improve their design review processes. Here are four simple ways technology organizations can reduce bias within their products.

1. Ask challenging questions

Developing a list of questions to ask and respond to during the design review process is one of the most effective methods of creating a more inclusive prototype. These questions can help AI teams identify issues they hadn’t thought of before.

Essential questions include whether the datasets being used contain enough data to prevent specific types of bias, and whether tests have been administered to determine the quality of that data; one simple version of such a test is sketched below. Asking and answering difficult questions enables data scientists to improve a prototype by determining whether they need to look at additional data or bring a third-party expert into the design review process.
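As a minimal sketch of such a test, a team might verify that every group in a sensitive attribute column makes up at least a minimum share of the training data before modeling begins. The column name, group labels and 10% threshold below are illustrative assumptions, not a prescribed standard.

```python
import pandas as pd

def check_representation(df: pd.DataFrame, column: str, min_share: float = 0.10):
    """Flag groups in `column` whose share of the dataset falls below
    `min_share`, an early warning that a model may underperform for them."""
    shares = df[column].value_counts(normalize=True)
    flagged = shares[shares < min_share]
    for group, share in flagged.items():
        print(f"WARNING: '{group}' is only {share:.1%} of the data "
              f"(threshold {min_share:.0%}); consider collecting more samples.")
    return list(flagged.index)

# Hypothetical training data with a heavily skewed demographic column.
df = pd.DataFrame({"demographic": ["group_a"] * 90 + ["group_b"] * 7 + ["group_c"] * 3})
underrepresented = check_representation(df, "demographic")  # flags group_b, group_c
```

A check like this does not prove a dataset is unbiased, but it forces the “do we have enough data for every group?” question to be answered with numbers rather than assumptions.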

2. Hire a privacy professional

Like any other compliance-related professional, privacy experts were originally seen as innovation bottlenecks. However, as more and more data regulations have been introduced in recent years, chief privacy officers have become a core component of the C-suite.

In-house privacy professionals are essential as expert voices in the design review process. Privacy experts can provide an unbiased opinion on the prototype, raise difficult questions that data scientists hadn’t considered before and help create inclusive, safe and secure products.

3. Leverage diverse voices

Organizations can bring diverse voices and perspectives to the table by expanding their hiring efforts to include candidates from different demographics and backgrounds. These efforts should extend to the C-suite and board of directors, as they can stand as representatives for employees and customers who may not have a voice.

Increasing diversity and inclusivity within the workforce also makes more room for innovation and creativity. Research shows that racially diverse companies are 35% more likely to outperform their competitors, while organizations with highly gender-diverse executive teams are 21% more likely to outperform their competitors on profitability.

4. Implement diversity, equity & inclusion (DE&I) training

At the core of every diverse and inclusive organization is a strong DE&I program. Workshops that educate employees on privacy, AI bias and ethics can help them understand why DE&I initiatives matter. Currently, only 32% of enterprises enforce a DE&I training program for employees. DE&I initiatives clearly need to become a higher priority for true change to be made within an organization, as well as its products.

The future of ethical AI tools

While some organizations are well on their way to creating safer and more secure tools, others still have significant work to do to create truly bias-free products. By incorporating the above recommendations into their design review processes, they will not only be a few steps closer to creating inclusive and ethical products, but also better positioned to advance their innovation and digital transformation efforts. Technology can greatly benefit society, but the onus is on each enterprise to make this a reality.

Veronica Torres is worldwide privacy and regulatory counsel at Jumio.
