What are deepfakes, and why is the world finding it hard to detect them?

The government issued an advisory on February 21 asking social media platforms such as Facebook, Instagram, YouTube, and Twitter to remove or disable access to “Deepfakes” as per the IT Rules, 2021.

Reiterating Rule 3(2)(b) of the IT Rules, 2021, in the advisory, the Ministry of Electronics and Information Technology (MeitY) said such content could amount to impersonation in electronic form, including artificially morphed images of an individual.

According to the rules, social media platforms, classified as internet intermediaries, are expected to remove unlawful information or disable access to it within 36 hours of receiving a court order or being notified by the government or its authorised agencies, and within 24 hours when the complaint comes from an individual or a person authorised by that individual. The advisory reiterates these timelines.

What are deepfakes?

Deepfake is a portmanteau of “deep learning” and “fake”. It refers to an artificial intelligence (AI)-driven technique that can create realistic videos of people doing and saying things they have never done or said.

Deepfake programs generally use “Generative Adversarial Networks” (GANs), which pit two algorithms against each other. One, the generator, forges the deepfake; the other, the discriminator, identifies flaws in the forgery, which the generator then corrects. Through repeated rounds of this contest, the outcome becomes realistic enough that it is not easy to distinguish from the real thing.
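The adversarial loop described above can be sketched in miniature. The toy example below (a hypothetical illustration, not a real deepfake system) trains a one-parameter-pair “generator” to mimic samples from a target Gaussian distribution, while a logistic-regression “discriminator” tries to tell real samples from fakes; each side's updates use the other's feedback, just as in a full GAN:

```python
import numpy as np

rng = np.random.default_rng(0)

# Real data the generator must learn to imitate: samples from N(4, 1.25)
def real_samples(n):
    return rng.normal(4.0, 1.25, size=n)

# Generator: linear map from noise z ~ N(0, 1) to a + b*z
gen = {"a": 0.0, "b": 1.0}
# Discriminator: logistic regression, D(x) = sigmoid(w*x + c)
disc = {"w": 0.0, "c": 0.0}

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def generate(n):
    z = rng.normal(size=n)
    return gen["a"] + gen["b"] * z, z

lr = 0.01
for step in range(3000):
    # Train the discriminator: push D(real) toward 1 and D(fake) toward 0
    real = real_samples(64)
    fake, _ = generate(64)
    for x, label in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(disc["w"] * x + disc["c"])
        grad = p - label                    # cross-entropy gradient w.r.t. logit
        disc["w"] -= lr * np.mean(grad * x)
        disc["c"] -= lr * np.mean(grad)

    # Train the generator: push D(fake) toward 1 (i.e. fool the discriminator)
    fake, z = generate(64)
    p = sigmoid(disc["w"] * fake + disc["c"])
    grad_logit = (p - 1.0) * disc["w"]      # backprop through D into G's output
    gen["a"] -= lr * np.mean(grad_logit)
    gen["b"] -= lr * np.mean(grad_logit * z)

fake, _ = generate(10000)
# Mean of generated samples: should have drifted from 0 toward the real mean of 4
print(round(float(fake.mean()), 1))
```

Neither network ever sees the other's parameters; each learns only from the other's outputs. Real deepfake generators work on the same principle, just with deep convolutional networks and images instead of a single scalar.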

Deepfake technology has legitimate uses in many industries. For example, it can enhance online shopping by letting users try on items in virtual environments. Likewise, it can make actors' speech look natural in movies dubbed into multiple languages.

Unfortunately, this technology has made its way into mainstream consciousness for the wrong reasons. It has been used to disrupt politics, mock TV shows, target influential people, generate blackmail material, and create internet memes and satire.

Is it possible to detect deepfakes?

Big tech companies have developed software that they claim helps identify deepfake videos and photos. Intel, for example, announced its deepfake detection platform FakeCatcher in November last year. According to the company, the platform is the world’s first real-time deepfake detector, which can detect fake videos in milliseconds with 96 per cent accuracy.

Adobe introduced an attribution tool for its Photoshop and Behance software in 2020. It lets creators tag pictures with their names and the history and location of edits to provide more transparency to a public growing increasingly sceptical of digital images.

Likewise, there are other tools and platforms that help users identify deepfakes, but none of the currently available solutions is foolproof. That is because deepfake creators keep finding new ways to fix defects, remove watermarks and alter metadata to cover their tracks.

According to the US-based non-profit global policy think tank RAND, individuals may be better able to separate deepfakes from original videos if they understand the social context of a video. This is the approach India is preparing to take to combat the deepfake menace.

A recent report in Business Standard citing government sources said that MeitY would work with internet intermediaries to create a framework for “trusted fact checkers”. The proposed mechanism is meant to address the growing concerns over deepfake videos and AI-generated misinformation. It will be formed through a self-regulatory process and collaboration between the government, intermediaries and fact-checking agencies.

Deepfakes are an example of human ingenuity, and they showcase scientific and computing progress. This progress is reflected in their realism. And it is this realism which has now become a headache for tech companies and the world in general.

Originally appeared on: TheSpuzz