Two former OpenAI board members say AI firms can’t be left to govern themselves

If any company could have successfully governed itself while safely and ethically developing advanced AI systems, it would have been OpenAI. The organisation was originally established as a non-profit with a laudable mission: to ensure that AGI, or artificial general intelligence—AI systems that are generally smarter than humans—would benefit “all of humanity”. Later, a for-profit subsidiary was created to raise the necessary capital, but the non-profit stayed in charge. The stated purpose of this unusual structure was to protect the company’s ability to stick to its original mission, and the board’s mandate was to uphold that mission. It was unprecedented, but it seemed worth trying. Unfortunately it didn’t work.

Last November, in an effort to salvage this self-regulatory structure, the OpenAI board dismissed its CEO, Sam Altman. The board’s ability to uphold the company’s mission had become increasingly constrained due to long-standing patterns of behaviour exhibited by Mr Altman, which, among other things, we believe undermined the board’s oversight of key decisions and internal safety protocols. Multiple senior leaders had privately shared grave concerns with the board, saying they believed that Mr Altman cultivated “a toxic culture of lying” and engaged in “behaviour [that] can be characterised as psychological abuse”. According to OpenAI, an internal investigation found that the board had “acted within its broad discretion” to dismiss Mr Altman, but also concluded that his conduct did not “mandate removal”. OpenAI relayed few specifics justifying this conclusion, and it did not make the investigation report available to employees, the press or the public.

The question of whether such behaviour should generally “mandate removal” of a CEO is a discussion for another time. But in OpenAI’s specific case, given the board’s duty to provide independent oversight and protect the company’s public-interest mission, we stand by the board’s action to dismiss Mr Altman. We also feel that developments since he returned to the company—including his reinstatement to the board and the departure of senior safety-focused talent—bode ill for the OpenAI experiment in self-governance.

Our particular story offers the broader lesson that society must not let the roll-out of AI be controlled solely by private tech companies. Certainly, there are numerous genuine efforts in the private sector to guide the development of this technology responsibly, and we applaud those efforts. But even with the best of intentions, without external oversight, this kind of self-regulation will end up unenforceable, especially under the pressure of immense profit incentives. Governments must play an active role.

And yet, in recent months, a rising chorus of voices—from Washington lawmakers to Silicon Valley investors—has advocated minimal government regulation of AI. Often, they draw parallels with the laissez-faire approach to the internet in the 1990s and the economic growth it spurred. However, this analogy is misleading.

Inside AI companies, and throughout the larger community of researchers and engineers in the field, the high stakes—and large risks—of developing increasingly advanced AI are widely acknowledged. In Mr Altman’s own words, “Successfully transitioning to a world with superintelligence is perhaps the most important—and hopeful, and scary—project in human history.” The level of concern expressed by many top AI scientists about the technology they themselves are building is well documented and very different from the optimistic attitudes of the programmers and network engineers who developed the early internet.

It is also far from clear that light-touch regulation of the internet has been an unalloyed good for society. Certainly, many successful tech businesses—and their investors—have benefited enormously from the lack of constraints on commerce online. It is less obvious that societies have struck the right balance when it comes to regulating to curb misinformation and disinformation on social media, child exploitation and human trafficking, and a growing youth mental-health crisis.

Goods, infrastructure and society are improved by regulation. It’s because of regulation that cars have seat belts and airbags, that we don’t worry about contaminated milk and that buildings are constructed to be accessible to all. Judicious regulation could ensure the benefits of AI are realised responsibly and more broadly. A good place to start would be policies that give governments more visibility into how the cutting edge of AI is progressing, such as transparency requirements and incident-tracking.

Of course, there are pitfalls to regulation, and these must be managed. Poorly designed regulation can place a disproportionate burden on smaller companies, stifling competition and innovation. It is crucial that policymakers act independently of leading AI companies when developing new rules. They must be vigilant against loopholes, regulatory “moats” that shield early movers from competition, and the potential for regulatory capture. Indeed, in light of these pitfalls, Mr Altman’s own calls for AI regulation must be understood as potentially self-serving. An appropriate regulatory framework will require agile adjustments, keeping pace with the world’s growing understanding of AI’s capabilities.

Ultimately, we believe in AI’s potential to boost human productivity and well-being in ways never before seen. But the path to that better future is not without peril. OpenAI was founded as a bold experiment to develop increasingly capable AI while prioritising the public good over profits. Our experience is that even with every advantage, self-governance mechanisms like those employed by OpenAI will not suffice. It is, therefore, essential that the public sector be closely involved in the development of the technology. Now is the time for governmental bodies around the world to assert themselves. Only through a healthy balance of market forces and prudent regulation can we reliably ensure that AI’s evolution truly benefits all of humanity.

Helen Toner and Tasha McCauley were on OpenAI’s board from 2021 to 2023 and from 2018 to 2023, respectively.

© 2024, The Economist Newspaper Ltd. All rights reserved. From The Economist, published under licence. The original content can be found on www.economist.com
