Yesterday, Elon Musk decided to enter the charged Twitter debate over whether artificial general intelligence (AGI) – the ability of an AI to understand or learn any intellectual task that a human being can – is imminent.
Of course, Musk’s tweets should always be taken with truckloads of salt. The world’s richest man is also known for some of the world’s worst Twitter takes, from manipulating Tesla stock to giving Kanye West’s 2020 presidential run his “full support” – not to mention the recent “will he or won’t he” saga surrounding his deal to buy Twitter itself.
An open letter to Elon Musk and a $100,000 challenge
Still, Musk’s comments, as usual, could not simply be ignored. In a new post on his Substack, Gary Marcus, author of Rebooting AI and a prominent (some say controversial) voice of AGI skepticism on Twitter, wrote an open letter to Elon Musk, offering to place a $100,000 bet on whether AGI would appear by 2029.
Marcus said he isn’t surprised that he hasn’t heard back from Musk so far – but added that the Tesla CEO’s comments are unhelpful in the larger discussion about AI.
“Musk’s pronouncements on AGI just hasten the all-in rush on current technology, when we actually probably need to take a step back to understand where we are and face the difficult problems realistically,” said Marcus, pointing out that the hardest problems are around getting machines to reason about the everyday world and to have common sense.
An ‘avalanche of misinformation’ about AI
“There is so much hype about AI and so much money being invested, but invested in the wrong things,” said Marcus. “Things like DALL-E 2 and GPT-3 are fun to play with, but they are likely to create an avalanche of misinformation and don’t actually represent the real hard problems in existing AI technologies, like those of racial, social, and gender inequality that have been documented by people like Dr. Abeba Birhane, Mozilla senior fellow in Trustworthy AI.”
There is also a natural tendency to look at AI, and AGI more specifically, as “something magical,” he added. This, he claimed, has deluded enterprise businesses and government policymakers.
“It leads people to imagine AI as a one-size-fits-all universal solvent, which it isn’t,” he said. “I like to tell businesses that AI is really good right now in the part of the curve where you have a lot of training data, but not so good in the long tail.”
Overall, Marcus said, people have invested a great deal of money in AI based on an unsound premise – and a focus on AGI being just around the corner only adds fuel to that fire. “Things might change in fifty or even twenty years, but expecting full wholesale magic is unrealistic,” he explained. “Managing investment in AI means being realistic and not automatically believing press clippings.”
What artificial general intelligence won’t be able to do by 2029
The new post on Marcus’ Substack, called “Dear Elon Musk, here are five things you might want to consider about AGI,” gets into the details of what Marcus believes will – and won’t – have happened by 2029.
“AGI is a problem of enormous scope, because intelligence itself is of a broad scope,” he said in the post, adding that five important things will not occur by the end of the decade:
- “In 2029, AI will not be able to watch a movie and tell you accurately what is going on.”
- “In 2029, AI will not be able to read a novel and reliably answer questions about the plot.”
- “In 2029, AI will not be able to work as a competent cook in an arbitrary kitchen.”
- “In 2029, AI will not be able to reliably construct bug-free code of more than 10,000 lines from natural language specification or by interactions with a non-expert user. [Gluing together code from existing libraries doesn’t count.]”
- “In 2029, AI will not be able to take an arbitrary proof in the mathematical literature written in natural language and convert it into a symbolic form suitable for symbolic verification.”
Waiting for Elon Musk to respond
Marcus said it would be “terrific fun” if Musk actually responded to his challenge to bet on whether AGI will come to fruition by 2029.
“It would be great to have a public debate, with or without cash on the line,” he said. “The more the public understands about the realities and challenges of AI, the better.”