Responsible use of machine learning to verify identities at scale 



In today’s highly competitive digital marketplace, consumers are more empowered than ever. They have the freedom to choose which companies they do business with and enough options to change their minds at a moment’s notice. A misstep that diminishes a customer’s experience during sign-up or onboarding can lead them to replace one brand with another, simply by clicking a button. 

Consumers are also increasingly concerned with how companies protect their data, adding another layer of complexity for businesses as they aim to build trust in a digital world. Eighty-six percent of respondents to a KPMG study reported growing concerns about data privacy, while 78% expressed fears related to the amount of data being collected. 

At the same time, surging digital adoption among consumers has led to an astounding increase in fraud. Businesses must build trust and help consumers feel that their data is protected but must also deliver a quick, seamless onboarding experience that truly protects against fraud on the back end.

As such, artificial intelligence (AI) has been hyped as the silver bullet of fraud prevention in recent years for its promise to automate the process of verifying identities. However, despite all of the chatter around its application in digital identity verification, a multitude of misunderstandings about AI remain. 


Machine learning as a silver bullet

As the world stands today, true AI in which a machine can successfully verify identities without human interaction doesn’t exist. When companies talk about leveraging AI for identity verification, they’re really talking about using machine learning (ML), which is an application of AI. In the case of ML, the system is trained by feeding it large amounts of data and allowing it to adjust and improve, or “learn,” over time. 

When applied to the identity verification process, ML can play a game-changing role in building trust, removing friction and fighting fraud. With it, businesses can analyze massive amounts of digital transaction data, create efficiencies and recognize patterns that can improve decision-making. However, getting tangled up in the hype without truly understanding machine learning and how to use it properly can diminish its value and, in many cases, lead to serious problems. When using ML for identity verification, businesses should consider the following.

The potential for bias in machine learning

Bias in machine learning models can lead to exclusion, discrimination and, ultimately, a negative customer experience. Training an ML system on historical data will carry the biases in that data into the models, which can be a serious risk. If the training data is biased, or is subject to unintentional bias introduced by those building the ML systems, decisioning could be based on prejudiced assumptions.

When an ML algorithm makes erroneous assumptions, it can create a domino effect in which the system is consistently learning the wrong thing. Without human expertise from both data and fraud scientists, and oversight to identify and correct the bias, the problem will be repeated, thereby exacerbating the issue.
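One way oversight teams can surface this kind of bias is to compare pass rates across applicant groups before the skew compounds. The sketch below is illustrative only: the decision log, the group labels and the four-fifths threshold are assumptions for demonstration, not a standard any verification vendor prescribes.

```python
# Hypothetical sketch: auditing a verification model's decision log for
# group-level bias via a demographic-parity ratio. Data and the 0.8
# "four-fifths" rule of thumb are illustrative assumptions.

def approval_rate(decisions, group):
    """Share of 'pass' decisions for applicants in the given group."""
    subset = [d for d in decisions if d["group"] == group]
    return sum(d["passed"] for d in subset) / len(subset)

def parity_ratio(decisions, group_a, group_b):
    """Ratio of the lower approval rate to the higher; values well
    below 1.0 suggest one group is being disadvantaged."""
    rate_a = approval_rate(decisions, group_a)
    rate_b = approval_rate(decisions, group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Toy decision log from a hypothetical identity-verification model:
# group A passes 90% of the time, group B only 60%.
decisions = (
    [{"group": "A", "passed": True}] * 90
    + [{"group": "A", "passed": False}] * 10
    + [{"group": "B", "passed": True}] * 60
    + [{"group": "B", "passed": False}] * 40
)

ratio = parity_ratio(decisions, "A", "B")
print(f"parity ratio: {ratio:.2f}")  # 0.60 / 0.90 -> 0.67
if ratio < 0.8:  # four-fifths rule of thumb
    print("possible disparate impact -- route for human review")
```

A check like this does not fix the bias; it gives the human experts described below a concrete signal that the model needs review and retraining.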

Novel forms of fraud 

Machines are great at detecting trends that have already been identified as suspicious, but their crucial blind spot is novelty. ML models rely on patterns in historical data and therefore assume future activity will follow those same patterns or, at least, a consistent pace of change. This leaves open the possibility for attacks to succeed simply because the system never saw them during training.

Layering a fraud review team onto machine learning ensures that novel fraud is identified and flagged, and updated data is fed back into the system. Human fraud experts can flag transactions that may have initially passed identity verification controls but are suspected to be fraud and provide that data back to the business for a closer look. In this case, the ML system encodes that knowledge and adjusts its algorithms accordingly.
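The feedback step described above can be sketched in a few lines. This is a minimal illustration, not the workflow of any real fraud platform; the function and field names are assumptions.

```python
# Hypothetical sketch of the human-in-the-loop feedback step: transactions
# that passed the model but were flagged by human fraud experts are
# relabeled and queued for retraining. All names are illustrative.

def apply_analyst_feedback(transactions, flagged_ids):
    """Relabel analyst-flagged transactions so the next training run
    encodes the novel fraud the model originally missed."""
    retrain_queue = []
    for txn in transactions:
        if txn["id"] in flagged_ids:
            txn = {**txn, "label": "fraud", "source": "analyst_review"}
        retrain_queue.append(txn)
    return retrain_queue

transactions = [
    {"id": 1, "label": "legit", "source": "model"},
    {"id": 2, "label": "legit", "source": "model"},  # novel fraud the model passed
]
updated = apply_analyst_feedback(transactions, flagged_ids={2})
print(updated[1])  # relabeled as fraud, attributed to analyst review
```

The design point is that the analyst's correction is recorded with its source, so the retrained model learns from it and the decision trail stays auditable.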

Understanding and explaining decisioning

One of the biggest knocks against machine learning is its lack of transparency, and transparency is a basic tenet of identity verification. Businesses need to be able to explain how and why certain decisions are made, as well as share information with regulators on each stage of the process and customer journey. Lack of transparency can also foster mistrust among users.

Most ML systems provide a simple pass or fail score. Without transparency into the process behind a decision, it can be difficult to justify when regulators come calling. Continuous data feedback from ML systems can help businesses understand and explain why decisions were made and make informed decisions and adjustments to identity verification processes.

There is no doubt that ML plays an important role in identity verification and will continue to do so in the future. However, it’s clear that machines alone aren’t enough to verify identities at scale without adding risk. The power of machine learning is best realized alongside human expertise and with data transparency to make decisions that help businesses build customer loyalty and grow. 

Christina Luttrell is the chief executive officer of GBG Americas, comprising Acuant and IDology.
