Why your org should plan for deepfake fraud before it happens

Some young people floss for a TikTok dance challenge. A couple posts a holiday selfie to keep friends updated on their travels. A budding influencer uploads their latest YouTube video. Unwittingly, each one is adding fuel to an emerging fraud vector that could become enormously challenging for businesses and consumers alike: Deepfakes.

Deepfakes defined

Deepfakes get their name from the underlying technology: deep learning, a subset of artificial intelligence (AI) that imitates the way humans acquire knowledge. With deep learning, algorithms learn patterns directly from vast datasets rather than from hand-written rules, with minimal human supervision. The bigger the dataset, the more accurate the algorithm is likely to become.
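
To make that concrete, here is a minimal, hypothetical PyTorch sketch of the learning loop described above; the synthetic dataset and tiny network are illustrative stand-ins, not part of any real deepfake system.

```python
# Deep learning in miniature: a model fits patterns in a large dataset.
# Everything here is synthetic and illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic "dataset": 10,000 samples, 32 features, binary labels.
X = torch.randn(10_000, 32)
y = (X.sum(dim=1) > 0).float().unsqueeze(1)

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(200):  # more data and more steps usually mean a better fit
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

with torch.no_grad():
    accuracy = ((model(X) > 0) == y.bool()).float().mean().item()
print(f"training accuracy: {accuracy:.2%}")
```

Generative deepfake models are vastly larger, but the principle is the same: the more examples of a face or voice they see, the more faithful the output becomes.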

Deepfakes use AI to create highly convincing video or audio files that mimic a third party: for instance, a video of a celebrity saying something they did not, in fact, say. Deepfakes are produced for a broad range of reasons, some legitimate and some illegitimate, including satire, entertainment, fraud, political manipulation, and the generation of “fake news.”

The danger of deepfakes

The threat deepfakes pose to society is a real and present danger, given the clear risks of putting words into the mouths of powerful, influential, or trusted people such as politicians, journalists, and celebrities. Deepfakes also present a clear and growing threat to businesses. These threats include:

  • Extortion: Threatening to release faked, compromising footage of an executive to gain access to corporate systems, data, or financial resources.
  • Fraud: Using deepfakes to mimic an employee and/or customer to gain access to corporate systems, data, or financial resources.
  • Authentication: Using deepfakes to manipulate ID verification or authentication that relies on biometrics such as voice patterns or facial recognition to access systems, data, or financial resources (a minimal illustration of this weakness follows this list).
  • Reputation risk: Using deepfakes to damage the reputation of a company and/or its employees with customers and other stakeholders.   
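
To illustrate the authentication risk above, here is a minimal, hypothetical Python sketch of a naive voice check based purely on embedding similarity. The `enrolled` and `sample` arrays stand in for speaker embeddings produced by some model, and the threshold is an assumption; none of this is a real vendor’s API. The point is that nothing below distinguishes live speech from a generated audio file.

```python
# Naive voice-biometric check: compares speaker embeddings only.
# A convincing deepfake of the enrolled speaker can clear the
# threshold, because liveness is never tested.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def naive_voice_auth(enrolled: np.ndarray, sample: np.ndarray,
                     threshold: float = 0.85) -> bool:
    # The threshold is an illustrative assumption, not a recommended value.
    return cosine_similarity(enrolled, sample) >= threshold
```

Liveness detection and challenge-response prompts exist precisely to close the gap this sketch exposes.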

The impact on fraud

Of the risks associated with deepfakes, the impact on fraud is one of the most concerning for businesses today. This is because criminals are increasingly turning to deepfake technology to make up for declining yields from traditional fraud schemes, such as phishing and account takeover. These older fraud types have become more difficult to carry out as anti-fraud technologies have improved (for example, through the introduction of multifactor authentication and callback verification).

This trend coincides with the emergence of deepfake tools being made available as a service on the dark web, making it easier and cheaper for criminals to launch such fraud schemes, even if they have limited technical understanding. It also coincides with people posting massive volumes of images and videos of themselves on social media platforms — all great inputs for deep learning algorithms to become ever more convincing. 

In this regard, there are three key new fraud types that enterprise security teams should be aware of:

  • Ghost fraud: Where a criminal uses the data of a person who has died to create a deepfake that can be used, for example, to access online services or apply for credit cards or loans.
  • Synthetic ID fraud: Where fraudsters mine data from many different people to create an identity for a person who does not exist. The identity is then used to apply for credit cards or to carry out large transactions.
  • Application fraud: Where stolen or fake identities are used to open new bank accounts. The criminal then maxes out associated credit cards and loans. 

Already, there have been a number of high-profile and costly fraud schemes that used deepfakes. In one case, a fraudster used deepfake voice technology to imitate a company director who was known to a bank branch manager, and defrauded the bank of $35 million. In another, criminals used a deepfake to impersonate a chief executive’s voice and demand that a junior officer transfer €220,000 (about $224,000) to a fictional supplier. Deepfakes are therefore a clear and present danger, and organizations must act now to protect themselves.

Defending the enterprise

Given the increasing sophistication and prevalence of deepfake fraud, what can businesses do to protect their data, their finances, and their reputation? I have identified five key steps that all businesses should put in place today:

  1. Plan for deepfakes in response procedures and simulations. Deepfakes should be incorporated into your scenario planning and crisis tests. Plans should include incident classification and outline clear incident-reporting processes, escalation paths, and communication procedures, particularly when it comes to mitigating reputational risk.
  2. Educate employees. Just as security teams have educated employees to detect phishing emails, they should similarly raise awareness of deepfakes. As in other areas of cybersecurity, employees should be seen as an important line of defense, especially given the use of deepfakes for social engineering. 
  3. For sensitive transactions, require secondary verification. Don’t trust; always verify. Use secondary verification or callback methods, such as watermarking audio and video files, step-up authentication, or dual control (a minimal dual-control sketch follows this list).
  4. Put in place insurance protection. As the deepfake threat grows, insurers will no doubt offer a broader range of options. 
  5. Update risk assessments. Incorporate deepfakes into the risk assessment process for digital channels and services.
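
Where dual control is in place, even a flawless deepfake of one executive cannot move funds alone. Below is a minimal, hypothetical Python sketch of the idea; the threshold, names, and `TransferRequest` class are illustrative assumptions, not a real payments API.

```python
# Minimal dual-control sketch: high-value transfers need a second,
# independent approver. All names and thresholds are illustrative.
from dataclasses import dataclass, field

HIGH_VALUE_THRESHOLD = 10_000  # currency units; set per your risk appetite

@dataclass
class TransferRequest:
    requester: str
    payee: str
    amount: float
    approvals: set = field(default_factory=set)

    def approve(self, approver: str) -> None:
        # The requester can never approve their own transfer, so a
        # deepfaked "executive" still has to fool a second person
        # through a separate, out-of-band channel.
        if approver == self.requester:
            raise PermissionError("requester cannot self-approve")
        self.approvals.add(approver)

    def can_execute(self) -> bool:
        required = 2 if self.amount >= HIGH_VALUE_THRESHOLD else 1
        return len(self.approvals) >= required

request = TransferRequest("ceo@example.com", "supplier-123", 220_000)
request.approve("cfo@example.com")
print(request.can_execute())  # False: still needs a second approver
```

In the €220,000 case described earlier, a control like this would have forced the fraudster to deceive two people over independent channels rather than one junior officer.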

The future of deepfakes 

In the years ahead, the technology will continue to evolve, and it will become harder to identify deepfakes. Indeed, as people and businesses take to the metaverse and Web3, it’s likely that avatars will be used to access and consume a broad range of services. Unless adequate protections are put in place, these digitally native avatars will likely prove easier to fake than human beings.

However, just as technology will advance to exploit deepfakes, it will also advance to detect them. For their part, security teams should stay up to date on new advances in detection and other innovative technologies that can help combat this threat. The direction of travel for deepfakes is clear: businesses should start preparing now.
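
As one illustration of what staying current on detection can look like in practice, here is a hypothetical Python sketch that routes inbound media through a detector before it reaches a high-trust workflow. `score_media` is a placeholder for whatever detection model or vendor service a team adopts; it is not a real API.

```python
# Hypothetical triage step: score inbound media for synthetic content
# and route it before it reaches a high-trust workflow.

def score_media(path: str) -> float:
    """Return a probability-like score that the media is synthetic.

    Placeholder: wire an actual detection model or service in here.
    """
    return 0.0  # dummy value so the sketch runs end to end

def triage(path: str, block: float = 0.9, review: float = 0.5) -> str:
    """Map a detector score to an action in the fraud workflow."""
    score = score_media(path)
    if score >= block:
        return "block"
    if score >= review:
        return "manual-review"
    return "allow"

print(triage("incoming/ceo_voicemail.wav"))  # "allow" with the dummy score
```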

David Fairman is the chief information officer and chief security officer of APAC at Netskope.
