Cybersecurity experts argue that pausing GPT-4 development is pointless

Earlier this week, a group of more than 1,800 artificial intelligence (AI) leaders and technologists, including Elon Musk and Steve Wozniak, issued an open letter calling on all AI labs to immediately pause development of AI systems more powerful than GPT-4 for six months, citing “profound risks to society and humanity.” 

While a pause could serve to help better understand and regulate the societal risks created by generative AI, some argue that it’s also an attempt for lagging competitors to catch up on AI research with leaders in the space like OpenAI.

According to Gartner distinguished VP analyst Avivah Litan, who spoke with VentureBeat about the issue, “The six-month pause is a plea to stop the training of models more powerful than GPT-4. GPT-4.5 will soon be followed by GPT-5, which is expected to achieve AGI (artificial general intelligence). Once AGI arrives, it will likely be too late to institute safety controls that effectively guard human use of these systems.” 

Despite concerns about the societal risks posed by generative AI, many cybersecurity experts doubt that a pause in AI development would help at all. At best, they argue, such a pause would give security teams a temporary reprieve in which to develop their defenses and prepare for an increase in social engineering, phishing and malicious code generation.

Why a pause on generative AI development isn’t feasible

One of the most convincing cybersecurity arguments against a pause on AI research is that it would only affect vendors, not malicious threat actors. Cybercriminals would still be able to develop new attack vectors and hone their offensive techniques. 

“Pausing the development of the next generation of AI will not stop unscrupulous actors from continuing to take the technology in dangerous directions,” Steve Grobman, CTO of McAfee, told VentureBeat. “When you have technological breakthroughs, having organizations and companies with ethics and standards that continue to advance the technology is imperative to ensuring that the technology is used in the most responsible way possible.”

At the same time, implementing a ban on training AI systems could be considered a regulatory overreach. 

“AI is applied math, and we can’t legislate, regulate or prevent people from doing math. Rather, we need to understand it, educate our leaders to use it responsibly in the right places and recognise that our adversaries will seek to exploit it,” Grobman said. 

So what is to be done? 

If a complete pause on generative AI development isn’t practical, regulators and private organizations should instead work toward a consensus on the parameters of AI development, the level of inbuilt protections that tools like GPT-4 need, and the measures that enterprises can use to mitigate the associated risks. 

“AI regulation is an important and ongoing conversation, and legislation on the moral and safe use of these technologies remains an urgent challenge for legislators with sector-specific knowledge, since the use case range is partially boundless from healthcare through to aerospace,” Justin Fier, SVP of Red Team Operations, Darktrace, told VentureBeat.

“Reaching a national or international consensus on who should be held liable for misapplications of all kinds of AI and automation, not just gen AI, is an important challenge that a short pause on gen AI model development specifically is not likely to solve,” Fier said. 

Rather than a pause, the cybersecurity community would be better served by focusing on accelerating the discussion on how to manage the risks associated with the malicious use of generative AI, and urging AI vendors to be more transparent about the guardrails implemented to prevent new threats. 

How to regain trust in AI solutions 

For Gartner’s Litan, current large language model (LLM) development requires users to put their trust in a vendor’s red-teaming capabilities. However, organizations like OpenAI are opaque in how they manage risks internally, and offer users little ability to monitor the performance of those inbuilt protections. 

As a result, organizations need new tools and frameworks to manage the cyber risks introduced by generative AI. 

“We need a new class of AI trust, risk and security management [TRiSM] tools that manage data and process flows between users and companies hosting LLM foundation models. These would be [cloud access security broker] CASB-like in their technical configurations but, unlike CASB functions, they would be trained on mitigating the risks and increasing the trust in using cloud-based foundation AI models,” Litan said. 

As part of an AI TRiSM architecture, users should expect vendors hosting or providing these models to supply tools for detecting data and content anomalies, alongside additional data protection and privacy assurance capabilities such as data masking. 
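
To make the idea concrete, here is a minimal, purely illustrative sketch of the CASB-like pattern Litan describes: a gateway that sits between users and a cloud-hosted foundation model, masking obvious PII in outbound prompts and applying a simple content check to responses. The patterns, blocklist and model call below are hypothetical placeholders, not any vendor’s actual TRiSM product.

```python
import re

# Hypothetical sketch of a CASB-style gateway between users and a hosted
# foundation model. Patterns, rules and the model call are placeholder
# assumptions for illustration only.

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

BLOCKLIST = ("internal use only", "confidential")  # toy content-anomaly rule


def mask_prompt(prompt: str) -> str:
    """Replace recognizable PII with typed placeholders before it leaves the org."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"<{label}>", prompt)
    return prompt


def inspect_response(response: str) -> bool:
    """Return True if the model's output trips a simple content-anomaly rule."""
    return any(term in response.lower() for term in BLOCKLIST)


def call_hosted_model(prompt: str) -> str:
    """Placeholder for the cloud-hosted foundation model (e.g. an LLM API)."""
    return f"[model response to: {prompt}]"


def gateway(prompt: str) -> str:
    safe_prompt = mask_prompt(prompt)          # data protection / masking
    response = call_hosted_model(safe_prompt)  # traffic brokered through the gateway
    if inspect_response(response):             # content-anomaly detection
        return "[response withheld: flagged by policy]"
    return response


if __name__ == "__main__":
    print(gateway("Summarize the incident reported by alice@example.com, SSN 123-45-6789."))
```

The point of the pattern is that the controls live with the user’s organization rather than the model provider, which is what distinguishes it from protections that only the model owner can configure.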

Unlike existing approaches such as ModelOps and adversarial attack resistance, which can only be applied by a model’s owner and operator, AI TRiSM enables users to play a greater role in defining the level of risk presented by tools like GPT-4. 

Preparation is key 

Ultimately, rather than trying to stifle generative AI development, organizations should look for ways to prepare for the risks posed by generative AI. 

One way to do this is to find new ways to fight AI with AI, and follow the lead of organizations like Microsoft, Orca Security, ARMO and Sophos, which have already developed new defensive use cases for generative AI. 

For instance, Microsoft Security Copilot uses a mix of GPT-4 and its own proprietary data to process alerts created by security tools and translate them into natural-language explanations of security incidents. This gives human users a narrative they can refer to, helping them respond to breaches more effectively. 
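
As an illustration of that general pattern (not Security Copilot’s actual, proprietary pipeline), a security team could feed a raw alert to a general-purpose model and ask for a plain-language triage summary. The sketch below assumes the OpenAI Python client with an API key in the environment; the alert fields are invented for the example.

```python
import json
from openai import OpenAI  # assumes the openai Python package is installed

# Illustrative only: ask GPT-4 to explain a raw SIEM alert in plain English.
# The alert content below is made up for the example.

client = OpenAI()  # reads OPENAI_API_KEY from the environment

alert = {
    "rule": "Multiple failed logins followed by success",
    "source_ip": "203.0.113.42",
    "user": "svc-backup",
    "count": 57,
    "window": "4 minutes",
}

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are a SOC assistant. Explain alerts in plain English "
                    "and suggest immediate triage steps."},
        {"role": "user", "content": json.dumps(alert)},
    ],
)

print(response.choices[0].message.content)
```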

This is just one example of how GPT-4 can be used defensively. With generative AI readily available and out in the wild, it’s on security teams to find out how they can leverage these tools as a force multiplier to secure their organizations. 

“This technology is coming … and quickly,” Jeff Pollard, Forrester VP principal analyst, told VentureBeat. “The only way cybersecurity will be ready is to start dealing with it now. Pretending that it’s not coming — or pretending that a pause will help — will just cost cybersecurity teams in the long run. Teams need to start researching and learning now how these technologies will transform how they do their job.”

Originally appeared on: TheSpuzz
