The nuances of voice AI ethics and what businesses need to do


In early 2016, Microsoft announced Tay, an AI chatbot capable of conversing with and learning from random users on the internet. Within 24 hours, users had goaded the bot into spewing racist, misogynistic statements. The team pulled the plug on Tay, realizing that the ethics of letting a conversational bot loose on the internet were, at best, unexplored.

The real questions are whether AI designed for random human interaction is ethical, and whether AI can be coded to stay within bounds. This becomes even more critical with voice AI, which businesses use to communicate automatically and directly with customers.

Let’s take a moment to discuss what makes AI ethical versus unethical and how businesses can incorporate AI into their customer-facing roles in ethical ways. 

What makes AI unethical? 

AI is supposed to be neutral. Information goes into a black box — a model — and comes out processed in some way. In Tay's case, the researchers built the model by feeding the AI a massive amount of conversational data shaped by human interaction. The result? An unethical model that harmed rather than helped.


What happens when an AI is fed CCTV data? Personal information? Photographs and art? What comes out on the other end? 

The three biggest factors contributing to ethical dilemmas in AI are unethical usage, data privacy issues, and biases in the system. 

As technology advances, new AI models and methods appear daily, and usage grows. Researchers and companies are deploying these models and methods with little oversight; many are not well understood or regulated. That often produces unethical outcomes even when the underlying systems have minimized bias.

Data privacy issues spring up because AI models are built and trained on data that comes directly from users. In many cases, customers unwittingly become test subjects in one of the largest unregulated AI experiments in history. Your words, images, biometric data and even social media are fair game. But should they be? 

Finally, we know from Tay and other examples that AI systems are biased. As with any system, what you put in is what you get out.

One of the most prominent examples of bias traces back to 2003, when a massive trove of Enron emails was made public during the investigation into the company. Researchers have used those emails to train conversational AI ever since, which means the resulting models see the world from the viewpoint of a deposed energy trader in Houston. How many of us would say those emails represent our point of view?

Ethics in voice AI 

Voice AI shares the same core ethical concerns as AI in general, but because voice closely mimics human speech and experience, there is a higher potential for manipulation and misrepresentation. Also, we tend to trust things with a voice, including friendly interfaces like Alexa and Siri. 

Voice AI is also highly likely to interact with a real customer in real time. In other words, voice AIs are your company representatives. And just like your human representatives, you want to ensure your AI is trained in and acts in line with company values and a professional code of conduct. 

Human agents (and AI systems) should not treat callers differently for reasons unrelated to the service itself. But depending on the dataset, the system might not provide a consistent experience. For example, if mostly male voices reach a call center, the resulting gender classifier may be biased against female speakers. And what happens when biases, including those against regional speech and slang, sneak into voice AI interactions?

A final nuance is that voice AI in customer service is a form of automation. That means it can replace current jobs, an ethical dilemma in itself. Companies working in the industry must manage outcomes carefully. 

Building ethical AI 

Ethical AI is still a burgeoning field, and there isn't much data or research available to produce a complete set of guidelines. That said, here are some pointers.

As with any data collection solution, companies must have solid governance systems that adhere to (human) privacy laws. Not all customer data is fair game, and customers must understand that everything they do or say on your website could be part of a future AI model. How this will change their behavior is unclear, but it is important to offer informed consent. 

Area codes and other personal data shouldn't cloud the model. For example, at Skit, we deploy our systems in places where personal information is collected and stored. We ensure that machine learning models never receive individually identifying data points, so training and pipelines remain oblivious to caller phone numbers and other identifying features.
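As a rough illustration of keeping pipelines oblivious to identifying features, a preprocessing step can drop known PII fields and mask phone-number-like strings before any record reaches training. This is a minimal sketch, not Skit's actual pipeline; the field names and regex are assumptions for the example.

```python
import re

# Hypothetical set of fields treated as identifying and dropped
# before a call record enters a training pipeline.
PII_FIELDS = {"caller_phone", "account_number", "name", "area_code"}

# Rough pattern for phone-number-like strings inside transcripts.
PHONE_RE = re.compile(r"\+?\d[\d\-\s()]{7,}\d")

def scrub_record(record: dict) -> dict:
    """Return a copy of a call record with identifying fields removed
    and phone-number-like strings in the transcript masked."""
    clean = {k: v for k, v in record.items() if k not in PII_FIELDS}
    if "transcript" in clean:
        clean["transcript"] = PHONE_RE.sub("<PHONE>", clean["transcript"])
    return clean

record = {
    "caller_phone": "+1 555 010 4477",
    "transcript": "My number is 555-010-4477, please call back.",
    "intent": "callback_request",
}
print(scrub_record(record))
# {'transcript': 'My number is <PHONE>, please call back.', 'intent': 'callback_request'}
```

In a real deployment this scrubbing would happen at ingestion, so downstream training code never has the option of seeing the raw identifiers.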

Next, companies should do regular bias tests and manage checks and balances for data usage. The primary question should be whether the AI is interacting with customers and other users fairly and ethically and whether edge cases — including customer error — will spin out of control. Since voice AI, like any other AI, could fail, the systems should be transparent to inspection. This is especially important to customer service since the product directly interacts with users and can make or break trust. 
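A regular bias test can be as simple as comparing a model's accuracy across speaker groups and flagging when the gap exceeds a tolerance. The sketch below assumes labeled evaluation samples tagged with a group attribute; the group names, intents, and threshold are illustrative, not a standard.

```python
from collections import defaultdict

def accuracy_by_group(samples):
    """samples: list of (group, predicted, actual) tuples.
    Returns a dict of per-group accuracy."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in samples:
        totals[group] += 1
        hits[group] += int(predicted == actual)
    return {g: hits[g] / totals[g] for g in totals}

def bias_gap(samples, threshold=0.05):
    """Flag when the accuracy gap between the best- and worst-served
    groups exceeds the threshold."""
    acc = accuracy_by_group(samples)
    gap = max(acc.values()) - min(acc.values())
    return acc, gap, gap > threshold

# Toy evaluation set: intent predictions for two speaker groups.
samples = [
    ("male", "balance", "balance"), ("male", "balance", "balance"),
    ("male", "transfer", "transfer"), ("male", "balance", "transfer"),
    ("female", "balance", "balance"), ("female", "transfer", "balance"),
    ("female", "balance", "transfer"), ("female", "transfer", "transfer"),
]
acc, gap, flagged = bias_gap(samples)
print(acc, round(gap, 2), flagged)
```

Running such a check on every model release turns "checks and balances" into an automated gate rather than an occasional audit.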

Finally, companies considering AI should have ethics committees that inspect and scrutinize the value chain and business decisions for novel ethical challenges. Also, companies that want to take part in groundbreaking research must put in the time and resources to ensure that the research is useful to all parties involved. 

AI products are not new. But the scale at which they are being adopted is unprecedented. 

As this happens, we need major reforms in understanding and building frameworks around the ethical use of AI. These reforms will move us towards more transparent, fair and private systems. Together, we can focus on which use cases make sense and which don’t, considering the future of humanity.

Sourabh Gupta is cofounder and CEO of Skit.

Originally appeared on: TheSpuzz