New deepfake threats loom, says Microsoft’s chief science officer

Deepfakes, high-fidelity, fictional depictions of people and events synthesized with artificial intelligence (AI) and machine learning (ML), have become a common tool of misinformation over the past five years. But according to Eric Horvitz, Microsoft’s chief science officer, new deepfake threats are lurking on the horizon.

A new research paper from Horvitz says that interactive and compositional deepfakes are two growing classes of threats. In a Twitter thread, MosaicML research scientist Davis Blaloch described interactive deepfakes as “the illusion of talking to a real person. Imagine a scammer calling your grandmom who looks and sounds exactly like you.” Compositional deepfakes, he continued, go further with a bad actor creating many deepfakes to compile a “synthetic history.” 

“Think making up a terrorist attack that never happened, inventing a fictional scandal, or putting together “proof” of a self-serving conspiracy theory. Such a synthetic history could be supplemented with real-world action (e.g., setting a building on fire),” Blaloch tweeted. 

Generative AI is at an inflection point

In the paper, Horvitz said that the rising capabilities of discriminative and generative AI methods are reaching an inflection point. “The advances are providing unprecedented tools that can be used by state and non-state actors to create and distribute persuasive disinformation,” he wrote, adding that deepfakes will become more difficult to differentiate from reality. 

The challenge, he explained, arises from the generative adversarial network (GAN) methodology, an “iterative technique where the machine learning and inference employed to generate synthetic content is pitted against systems that attempt to discriminate generated fictions from fact.” Over time, he continued, the generator learns to fool the detector. “With this process at the foundation of deepfakes, neither pattern recognition techniques nor humans will be able to reliably recognize deepfakes,” he wrote.
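
To make that adversarial dynamic concrete, here is a minimal sketch of a GAN training loop in PyTorch on toy one-dimensional data. It illustrates the generator-versus-discriminator process Horvitz describes, but it is not code from his paper; the architectures, data distribution and hyperparameters are all illustrative assumptions.

```python
# Minimal GAN sketch: a generator learns to fool a discriminator that is
# simultaneously trained to separate real samples from generated ones.
import torch
import torch.nn as nn

latent_dim = 8
generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(5000):
    # "Real" data: toy samples from N(4, 1). In a deepfake setting these
    # would be genuine images, audio or video frames.
    real = torch.randn(64, 1) + 4.0
    fake = generator(torch.randn(64, latent_dim))

    # Discriminator step: learn to label real as 1 and generated as 0.
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: learn to make the discriminator output 1 on fakes.
    # As this loop iterates, the generator's outputs drift toward the real
    # distribution, which is the "learns to fool the detector" dynamic.
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```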

Back in May, Horvitz testified before the U.S. Senate Armed Services Committee Subcommittee on Cybersecurity, where he emphasized that organizations are certain to face new challenges as cybersecurity attacks increase in sophistication — including through the use of AI-powered synthetic media and deepfakes. 

To date, he wrote in the new paper, deepfakes have been created and shared as one-off, stand-alone creations. Now, however, “we can expect to see the rise of new forms of persuasive deepfakes that move beyond fixed, singleton productions,” he said. 

Defending against deepfakes

Horvitz cites a variety of ways governments, organizations, researchers and enterprises can prepare for and defend against the expected rise of interactive and compositional deepfakes. 

The rise of ever-more sophisticated deepfakes will “raise the bar on expectations and requirements” for journalism and reporting, he wrote, and will heighten the need to foster media literacy and raise awareness of these new trends.

In addition, new authenticity protocols to confirm identity may become necessary, he added, including new multifactor identification practices for admittance into online meetings. New standards to prove content provenance may also be needed, including new watermarking and fingerprinting methods, along with new regulations, self-regulation, red-team efforts and continuous monitoring.
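
As one illustration of the fingerprinting idea, the sketch below uses only Python’s standard library to publish a keyed hash of a media file at creation time and later verify that a copy is unmodified. This is a toy scheme assumed for illustration, not any specific standard; real provenance efforts such as C2PA embed cryptographically signed metadata using asymmetric keys rather than a shared secret.

```python
# Toy content-provenance check: publish a fingerprint of the media when it is
# created, then verify that a circulating copy still matches it.
import hashlib
import hmac

SIGNING_KEY = b"publisher-secret-key"  # illustrative; real systems use asymmetric signatures

def fingerprint(media_bytes: bytes) -> str:
    """Keyed hash of the media that the publisher can attest to."""
    return hmac.new(SIGNING_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify(media_bytes: bytes, published_fingerprint: str) -> bool:
    """Check a copy of the media against the fingerprint published at creation."""
    return hmac.compare_digest(fingerprint(media_bytes), published_fingerprint)

original = b"...video bytes..."
fp = fingerprint(original)
assert verify(original, fp)             # an untampered copy passes
assert not verify(original + b"x", fp)  # any modification fails
```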

Deepfake vigilance is essential

“It’s important to be vigilant” against interactive and compositional deepfakes, said Horvitz in a tweet over the weekend. 

Other experts also shared the paper on Twitter and weighed in. “Public awareness of AI risk is critical to staying ahead of the foreseeable harms,” wrote Margaret Mitchell, researcher and chief ethics scientist at Hugging Face. “I think about scamming and misinformation a LOT.”  

Horvitz expanded in his conclusion: “As we progress at the frontier of technological possibilities, we must continue to envision potential abuses of the technologies that we create and work to develop threat models, controls, and safeguards — and to engage across multiple sectors on rising concerns, acceptable uses, best practices, mitigations, and regulations.” 


Originally appeared on: TheSpuzz
