Deepfakes aren’t going away: Future-proofing digital identity

Deepfakes aren’t new, but this AI-powered technology has emerged as a pervasive threat in spreading misinformation and increasing identity fraud. The pandemic made matters worse by creating the ideal conditions for bad actors to take advantage of organizations’ and consumers’ blind spots, further exacerbating fraud and identity theft. Fraud stemming from deepfakes spiked during the pandemic and poses significant challenges for financial institutions and fintechs that need to accurately authenticate and verify identities.

As cybercriminals continue to use tools like deepfakes to fool identity verification solutions and gain unauthorized access to digital assets and online accounts, it’s essential for organizations to automate the identity verification process to better detect and combat fraud.

When deepfake technology evades fraud detection

Fraud-related financial crime has increased steadily over the years, but the rise of deepfake fraud in particular presents serious security challenges for consumers and organizations alike. Fraudsters use deepfakes for a number of purposes, from celebrity impersonations to job-candidate impersonations. Deepfakes have even been used to carry out scams with large-scale financial implications. In one instance, fraudsters used deepfake voices to trick a bank manager in Hong Kong into transferring millions of dollars into fraudulent accounts.

Deepfakes have been a theoretical possibility for some time but have garnered widespread attention only in the past few years. The controversial technology is now much more widely used due to the accessibility of deepfake software. Everyone from everyday consumers with little technical knowledge to state-sponsored actors has easy access to phone applications and computer software that can generate fraudulent content. Furthermore, it’s becoming increasingly difficult for humans and fraud detection software to distinguish between real video or audio and deepfakes, making the technology a particularly malicious fraud vector.

The growing fraud risks behind deepfakes

Fraudsters exploit deepfake technology to perpetrate identity fraud and theft for personal gain, wreaking havoc across industries. Any sector can be targeted, but industries that handle large amounts of personally identifiable information (PII) and customer assets are particularly vulnerable.

For example, the financial services industry deals with customer data when onboarding new clients and opening new accounts, making financial institutions and fintechs susceptible to a wide range of identity fraud. Fraudsters can use deepfakes as a vector to attack these organizations, leading to identity theft, fraudulent claims and new account fraud. Successful fraud attempts could be used to generate fake identities at scale, allowing fraudsters to launder money or conduct financial account takeovers.

Deepfakes can cause material damage to organizations through financial loss, reputational damage and diminished customer experiences. 

  • Financial loss: Deepfake fraud and scams have caused losses ranging from $243K to as much as $35M in individual cases. In early 2020, a bank manager in Hong Kong received a call, purportedly from a client, to authorize money transfers for an upcoming acquisition. Using AI voice-generation software to mimic the client’s voice, bad actors conned the bank out of $35M. Once transferred, the money could not be traced.
  • Reputational damage: Misinformation from deepfakes inflicts hard-to-repair damage on an organization’s reputation. Successful fraud attempts that result in financial loss can erode customers’ trust in and overall perception of a company, and that trust is difficult to win back.
  • Impact on customer experiences: The pandemic challenged organizations to detect sophisticated fraud attempts while ensuring smooth customer experiences. Those that fail to meet the challenge and become riddled with fraud will leave customers with undesirable experiences at nearly every stage of the customer journey. Organizations need to add new layers of defense to their onboarding processes to detect and guard against deepfake scam attempts at the outset.

Future-proofing identity: How organizations can combat deepfake fraud

Current fraud-detection methods cannot verify 100% of real identities online, but organizations can still safeguard against deepfake fraud and minimize the impact of future identity-based attacks with a high degree of effectiveness. Financial institutions and fintechs need to be particularly vigilant when onboarding new customers in order to detect third-party fraud, synthetic identities and impersonation attempts. With the proper technology, organizations can accurately detect deepfakes and combat further fraud.

In addition to validating PII in the onboarding process, organizations need to verify identity through deep, multidimensional liveness tests, which analyze selfie image quality and depth cues to confirm that a live person is present for face authentication. In many cases, fraudsters attempt to impersonate individuals using legitimate PII combined with a headshot that doesn’t match the individual’s true identity. Traditional identity verification is inaccurate and relies on manual processes, creating an expanded attack surface for bad actors. Deepfake technology can easily defeat verification based on flat images, and even some liveness tests; in fact, the winning algorithm in Meta’s deepfake detection competition detected only 65% of the deepfakes analyzed.
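
To make this concrete, here is a minimal sketch of the kind of image-quality cues a liveness screen might compute, assuming OpenCV and NumPy are available. The cues, thresholds and weighting are illustrative placeholders, not any vendor's actual method; a production system would combine many more signals, including true depth estimation.

```python
# Minimal liveness-signal sketch: scores a selfie on simple image-quality
# cues (sharpness and high-frequency energy) that presentation attacks such
# as replayed screens or printed photos tend to distort. Thresholds below
# are hypothetical placeholders, not production values.
import cv2
import numpy as np

def liveness_signals(image_path: str) -> dict:
    img = cv2.imread(image_path)
    if img is None:
        raise ValueError(f"could not read image: {image_path}")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Sharpness: variance of the Laplacian; recaptured screens and prints
    # are often blurrier than a live camera capture.
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()

    # High-frequency energy: screen replays and halftone prints introduce
    # moire patterns that shift energy away from the low-frequency center
    # of the spectrum (center of the shifted FFT is low frequency).
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray.astype(np.float64))))
    h, w = spectrum.shape
    low_freq = spectrum[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4]
    hf_ratio = 1.0 - low_freq.sum() / spectrum.sum()

    return {"sharpness": sharpness, "hf_ratio": hf_ratio}

# Hypothetical usage: a coarse pre-screen before a full depth-estimation
# or challenge-response liveness check.
signals = liveness_signals("selfie.jpg")
is_suspect = signals["sharpness"] < 100.0 or signals["hf_ratio"] > 0.35
```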

This is where graph-defined digital identity verification comes in. Continuously sourcing digital data during the picture-validation process gives organizations confidence in the identities they are doing business with and reduces their risk of fraud. Organizations also gain a holistic, accurate view of consumer identity, can identify more good customers, and are less likely to be tricked by deepfake attempts.
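
As a rough illustration of the graph idea, the sketch below folds weighted corroboration from independent data sources into a single identity-confidence score. The sources, links and weights are hypothetical, and this is not Socure's algorithm; it only shows why cross-source agreement is harder for a synthetic identity to fake than any single signal.

```python
# Illustrative graph-style identity corroboration: each element of a claimed
# identity (name, phone, email, selfie) is a node, and independent data
# sources contribute weighted edges that corroborate links between them.
# All sources, links and weights here are hypothetical.
from dataclasses import dataclass

@dataclass
class Evidence:
    source: str           # e.g., "credit_bureau", "telco_record"
    link: tuple           # pair of identity elements this source connects
    weight: float         # positive corroborates, negative contradicts

def identity_confidence(evidence: list[Evidence]) -> float:
    """Fold weighted, cross-source corroboration into a 0..1 score."""
    if not evidence:
        return 0.0
    score = sum(e.weight for e in evidence)
    # Reward breadth: several independent sources agreeing is stronger
    # evidence than one source repeated, which penalizes the thin,
    # single-source footprints typical of synthetic identities.
    breadth = len({e.source for e in evidence}) / len(evidence)
    return max(0.0, min(1.0, score * breadth))

evidence = [
    Evidence("credit_bureau", ("name", "ssn"), 0.4),
    Evidence("telco_record", ("name", "phone"), 0.3),
    Evidence("selfie_match", ("selfie", "document_photo"), 0.5),
]
print(identity_confidence(evidence))  # higher = better-corroborated identity
```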

While it’s difficult to combat every type of fraud, security teams can stop deepfake technology in its tracks by evolving beyond legacy approaches and adopting identity verification processes with predictive AI/ML analytics to accurately identify fraud and build digital trust.
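
For illustration only, a toy version of that predictive layer might look like the following, assuming scikit-learn. The features, training data and model choice are hypothetical stand-ins for the liveness and identity-confidence signals described above, not any vendor's actual pipeline.

```python
# Toy "predictive AI/ML analytics" layer: a classifier trained on labeled
# historical onboarding outcomes to flag risky new applications.
# Features and data below are fabricated for illustration.
from sklearn.ensemble import RandomForestClassifier
import numpy as np

# Hypothetical feature columns: [liveness_score, identity_confidence,
# email_age_days, device_risk]
X = np.array([
    [0.95, 0.90, 2400, 0.1],   # established, well-corroborated identity
    [0.20, 0.15,    3, 0.9],   # fresh email, failed liveness: likely fraud
    [0.88, 0.80,  900, 0.2],
    [0.35, 0.10,    1, 0.8],
])
y = np.array([0, 1, 0, 1])     # 1 = confirmed fraud in historical data

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
new_applicant = np.array([[0.40, 0.25, 10, 0.7]])
print(model.predict_proba(new_applicant)[0][1])  # estimated fraud probability
```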

Mike Cook is VP of Fraud Solutions, Commercialization at Socure
