Artificial intelligence (AI) pioneer Geoffrey Hinton, one of the trailblazers of the deep learning “revolution” that began a decade ago, says that the rapid progress in AI will continue to accelerate.
In an interview before the 10-year anniversary of key neural network research that led to a major AI breakthrough in 2012, Hinton and other leading AI luminaries fired back at some critics who say deep learning has “hit a wall.”
“We’re going to see big advances in robotics — dexterous, agile, more compliant robots that do things more efficiently and gently like we do,” Hinton said.
Other AI pathbreakers, including Yann LeCun, head of AI and chief scientist at Meta, and Stanford University professor Fei-Fei Li, agree with Hinton that the results from the groundbreaking 2012 research on the ImageNet database — which built on previous work to unlock significant advancements in computer vision specifically and deep learning overall — pushed deep learning into the mainstream and sparked a momentum that will be hard to stop.
In an interview with VentureBeat, LeCun said that obstacles are being cleared at an incredible and accelerating speed. “The progress over just the last four or five years has been astonishing,” he added.
And Li, who in 2006 created ImageNet, a large-scale dataset of human-annotated photos for developing computer vision algorithms, told VentureBeat that the evolution of deep learning since 2012 has been “a phenomenal revolution that I could not have dreamed of.”
Success tends to draw critics, however. And there are strong voices who call out the limitations of deep learning and say its success is extremely narrow in scope. They also maintain that the hype neural nets have created is just that, and that the technology is not close to being the fundamental breakthrough some supporters claim it is: the groundwork that will eventually lead to the anticipated “artificial general intelligence” (AGI), where AI is truly human-like in its reasoning power.
Looking back on a booming AI decade
Gary Marcus, professor emeritus at NYU and the founder and CEO of Robust.AI, wrote this past March about deep learning “hitting a wall” and says that while there has certainly been progress, “we are fairly stuck on common sense knowledge and reasoning about the physical world.”
And Emily Bender, professor of computational linguistics at the University of Washington and a regular critic of what she calls the “deep learning bubble,” said she doesn’t think that today’s natural language processing (NLP) and computer vision models add up to “substantial steps” toward “what other people mean by AI and AGI.”
Regardless, what the critics can’t take away is that huge progress has already been made in some key applications like computer vision and language that have set thousands of companies off on a scramble to harness the power of deep learning, power that has already yielded impressive results in recommendation engines, translation software, chatbots and much more.
However, there are also serious deep learning debates that can’t be ignored. There are essential issues to be addressed around AI ethics and bias, for example, as well as questions about how AI regulation can protect the public from being discriminated against in areas such as employment, medical care and surveillance.
In 2022, as we look back on a booming AI decade, VentureBeat wanted to know the following: What lessons can we learn from the past decade of deep learning progress? And what does the future hold for this revolutionary technology that’s changing the world, for better or worse?
AI pioneers knew a revolution was coming
Hinton says he always knew the deep learning “revolution” was coming.
“A bunch of us were convinced this had to be the future [of artificial intelligence],” said Hinton, whose 1986 paper popularized the backpropagation algorithm for training multilayer neural networks. “We managed to show that what we had believed all along was correct.”
LeCun, who pioneered the use of backpropagation and convolutional neural networks in 1989, agrees. “I had very little doubt that eventually, techniques similar to the ones we had developed in the 80s and 90s” would be adopted, he said.
What Hinton and LeCun, among others, believed was a contrarian view that deep learning architectures such as multilayered neural networks could be applied to fields such as computer vision, speech recognition, NLP and machine translation to produce results as good as or better than those of human experts. Pushing back against critics who often refused to even consider their research, they maintained that algorithmic techniques such as backpropagation and convolutional neural networks were key to jumpstarting AI progress, which had stalled since a series of setbacks in the 1980s and 1990s.
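In modern terms, backpropagation means computing the error gradient layer by layer via the chain rule and nudging each layer's weights downhill. The following is a minimal sketch of that idea — a toy two-layer network hand-trained on XOR, our own illustration rather than Hinton's original experiments:

```python
# Toy backpropagation sketch (illustrative only): a two-layer sigmoid
# network learning XOR with hand-coded gradients.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1 = rng.normal(size=(2, 8))  # input -> hidden weights
W2 = rng.normal(size=(8, 1))  # hidden -> output weights
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1)
    return h, sigmoid(h @ W2)

_, out = forward(X)
loss_init = np.mean((out - y) ** 2)

for _ in range(5000):
    # Forward pass: compute each layer's activations.
    h, out = forward(X)
    # Backward pass: push the output error back toward the input,
    # applying the chain rule at each layer (sigmoid' = s * (1 - s)).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_h

loss_final = np.mean((out - y) ** 2)
print(f"MSE before: {loss_init:.3f}  after: {loss_final:.3f}")
```

The key line is `d_h = (d_out @ W2.T) * h * (1 - h)`: the output error is "propagated back" through the second layer's weights to assign blame to the hidden units, which is what makes multilayer training possible.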
Meanwhile, Li, who is also codirector of the Stanford Institute for Human-Centered AI and former chief scientist of AI and machine learning at Google, had also been confident that her hypothesis — that with the right algorithms, the ImageNet database held the key to advancing computer vision and deep learning research — was correct.
“It was a very out-of-the-box way of thinking about machine learning and a high-risk move,” she said, but “we believed scientifically that our hypothesis was right.”
However, all of these theories, developed over several decades of AI research, didn’t fully prove themselves until the autumn of 2012. That was when a breakthrough occurred that many say sparked a new deep learning revolution.
In October 2012, Alex Krizhevsky and Ilya Sutskever, along with Hinton as their Ph.D. advisor, entered the ImageNet competition, which was founded by Li to evaluate algorithms designed for large-scale object detection and image classification. The trio won with their paper “ImageNet Classification with Deep Convolutional Neural Networks,” which used the ImageNet database to create a pioneering neural network known as AlexNet. It proved to be far more accurate at classifying different images than anything that had come before.
The paper, which wowed the AI research community, built on earlier breakthroughs and, thanks to the ImageNet dataset and more powerful GPU hardware, directly led to the next decade’s major AI success stories — everything from Google Photos, Google Translate and Uber to Alexa, DALL-E and AlphaFold.
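For readers unfamiliar with the “convolutional” part of AlexNet’s name: a convolutional layer slides a small learned filter across an image, producing a map of where a local pattern appears. Here is a minimal sketch of that single operation — a toy example of ours, not AlexNet itself:

```python
# Illustrative sketch of a convolutional layer's core operation:
# sliding a small filter over an image to detect a local pattern.
# (Deep-learning "convolution" is, strictly speaking, cross-correlation.)
import numpy as np

def conv2d(image, kernel):
    """Valid cross-correlation of a 2D image with a 2D kernel."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny "image" with a vertical edge, and a vertical-edge-detector kernel.
image = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)
edge_kernel = np.array([
    [-1, 1],
    [-1, 1],
], dtype=float)

response = conv2d(image, edge_kernel)
print(response)  # peaks where the dark-to-bright edge sits
```

In a real network like AlexNet, many such filters are learned from data via backpropagation and stacked in layers, so early layers detect edges and later layers detect increasingly abstract shapes.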
Since then, investment in AI has grown exponentially: Global AI startup funding grew from $670 million in 2011 to $36 billion in 2020, and then more than doubled to $77 billion in 2021.
The year neural nets went mainstream
After the 2012 ImageNet competition, media outlets quickly picked up on the deep learning trend. A New York Times article the following month, Scientists See Promise in Deep-Learning Programs [subscription required], said: “Using an artificial intelligence technique inspired by theories about how the brain recognizes patterns, technology companies are reporting startling gains in fields as diverse as computer vision, speech recognition and the identification of promising new molecules for designing drugs.” What is new, the article continued, “is the growing speed and accuracy of deep-learning programs, often called artificial neural networks or just ‘neural nets’ for their resemblance to the neural connections in the brain.”
AlexNet was not alone in making big deep learning news that year: In June 2012, researchers at Google’s X lab built a neural network made up of 16,000 computer processors with one billion connections that, over time, began to identify “cat-like” features until it could recognize cat videos on YouTube with a high degree of accuracy. At the same time, Jeffrey Dean and Andrew Ng were doing breakthrough work on large-scale image recognition at Google Brain. And at 2012’s IEEE Conference on Computer Vision and Pattern Recognition, researchers Dan Cireșan et al. significantly improved upon the best performance for convolutional neural networks on multiple image databases.
All told, by 2013, “pretty much all the computer vision research had switched to neural nets,” said Hinton, who since then has divided his time between Google Research and the University of Toronto. It was a nearly total AI change of heart from as recently as 2007, he added, when “it wasn’t appropriate to have two papers on deep learning at a conference.”
A decade of deep learning progress
Li said her intimate involvement in the deep learning breakthroughs – she personally announced the ImageNet competition winner at the 2012 conference in Florence, Italy – means it comes as no surprise that people recognize the importance of that moment.
“[ImageNet] was a vision started back in 2006 that hardly anybody supported,” said Li. But, she added, it “really paid off in such a historical, momentous way.”
Since 2012, the progress in deep learning has been both strikingly fast and impressively deep.
“There are obstacles that are being cleared at an incredible speed,” said LeCun, citing progress in natural language understanding, translation, text generation and image synthesis.
Some areas have even progressed more quickly than expected. For Hinton, that includes using neural networks in machine translation, which saw great strides in 2014. “I thought that would be many more years,” he said. And Li admitted that advances in computer vision — such as DALL-E — “have moved faster than I thought.”
Dismissing deep learning critics
However, not everyone agrees that deep learning progress has been jaw-dropping. As far back as November 2012, Marcus wrote an article for the New Yorker [subscription required] in which he said, “To paraphrase an old parable, Hinton has built a better ladder; but a better ladder doesn’t necessarily get you to the moon.”
Today, Marcus says he doesn’t think deep learning has brought AI any closer to the “moon” — the moon being artificial general intelligence, or human-level AI — than it was a decade ago.
“Of course there’s been progress, but in order to get to the moon, you would have to solve causal understanding and natural language understanding and reasoning,” he said. “There’s not been a lot of progress on those things.”
Marcus said he believes that hybrid models that combine neural networks with symbolic artificial intelligence, the branch of AI that dominated the field before the rise of deep learning, are the way forward to combat the limits of neural networks.
For their part, both Hinton and LeCun dismiss Marcus’ criticisms.
“[Deep learning] hasn’t hit a wall – if you look at the progress recently, it’s been amazing,” said Hinton, though he has acknowledged in the past that deep learning is limited in the scope of problems it can solve.
There are “no walls being hit,” added LeCun. “I think there are obstacles to clear and solutions to those obstacles that are not entirely known,” he said. “But I don’t see progress slowing down at all … progress is accelerating, if anything.”
Still, Bender isn’t convinced. “To the extent that they’re talking about simply progress towards classifying images according to labels provided in benchmarks like ImageNet, it seems like 2012 had some qualitative breakthroughs,” she told VentureBeat by email. “If they are talking about anything grander than that, it’s all hype.”
Issues of AI bias and ethics loom large
In other ways, Bender also maintains that the field of AI and deep learning has gone too far. “I do think that the ability (compute power + effective algorithms) to process very large datasets into systems that can generate synthetic text and images has led to us getting way out over our skis in several ways,” she said. For example, “we seem to be stuck in a cycle of people ‘discovering’ that models are biased and proposing trying to debias them, despite well-established results that there is no such thing as a fully debiased dataset or model.”
In addition, she said that she would “like to see the field be held to real standards of accountability, both for empirical claims made actually being tested and for product safety – for that to happen, we will need the public at large to understand what is at stake as well as how to see through AI hype claims and we will need effective regulation.”
However, LeCun pointed out that “these are complicated, important questions that people tend to simplify,” and a lot of people “have assumptions of ill intent.” Most companies, he maintained, “actually want to do the right thing.”
In addition, he complained about those not involved in the science and technology and research of AI.
“You have a whole ecosystem of people kind of shooting from the bleachers,” he said, “and basically are just attracting attention.”
Deep learning debates will certainly continue
As fierce as these debates can seem, Li emphasizes that they are what science is all about. “Science is not the truth, science is a journey to seek the truth,” she said. “It’s the journey to discover and to improve — so the debates, the criticisms, the celebration is all part of it.”
Yet, some of the debates and criticism strike her as “a bit contrived,” with extremes on either side, whether it’s saying AI is all wrong or that AGI is around the corner. “I think it’s a relatively popularized version of a deeper, much more subtle, more nuanced, more multidimensional scientific debate,” she said.
Certainly, Li pointed out, there have been disappointments in AI progress over the past decade — and not always about technology. “I think the most disappointing thing is back in 2014 when, together with my former student, I cofounded AI4ALL and started to bring young women, students of color and students from underserved communities into the world of AI,” she said. “We wanted to see a future that is much more diverse in the AI world.”
While it has only been eight years, she insisted the change is still too slow. “I would love to see faster, deeper changes and I don’t see enough effort in helping the pipeline, especially in the middle and high school age group,” she said. “We have already lost so many talented students.”
The future of AI and deep learning
LeCun admits that some AI challenges to which people have devoted a huge amount of resources have not been solved, such as autonomous driving.
“I would say that other people underestimated the complexity of it,” he said, adding that he doesn’t put himself in that category. “I knew it was hard and would take a long time,” he claimed. “I disagree with some people who say that we basically have it all figured out … [that] it’s just a matter of making those models bigger.”
In fact, LeCun recently published a blueprint for creating “autonomous machine intelligence” that also shows how he thinks current approaches to AI will not get us to human-level AI.
But he also still sees vast potential for the future of deep learning: What he is most personally excited about and actively working on, he says, is getting machines to learn more efficiently — more like animals and humans.
“The big question for me is what is the underlying principle on which animal learning is based — that’s one reason I’ve been advocating for things like self-supervised learning,” he said. “That progress would allow us to build things that are currently completely out of reach, like intelligent systems that can help us in our daily lives as if they were human assistants, which is something that we’re going to need because we’re all going to wear augmented reality glasses and we’re going to have to interact with them.”
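Self-supervised learning, in the sense LeCun describes, derives the training signal from the data itself rather than from human labels. A deliberately tiny sketch of the idea — our toy example, not LeCun's actual research code — is to hide one value in each window of an unlabeled signal and learn to predict it from its neighbors:

```python
# Toy self-supervised setup (illustrative only): "mask" the middle value
# of each window in an unlabeled signal and learn to predict it from its
# visible neighbors -- a miniature analogue of masked-prediction pretraining.
import numpy as np

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 20, 500)) + 0.05 * rng.normal(size=500)

# Build (context, masked-value) pairs straight from raw, unlabeled data.
contexts, targets = [], []
for i in range(1, len(signal) - 1):
    contexts.append([signal[i - 1], signal[i + 1]])  # visible neighbors
    targets.append(signal[i])                        # the hidden value
X = np.array(contexts)
y = np.array(targets)

# Fit a linear predictor in closed form (least squares).
w, *_ = np.linalg.lstsq(X, y, rcond=None)

pred = X @ w
mse = np.mean((pred - y) ** 2)
print(f"masked-value MSE: {mse:.4f}")
```

No human ever labeled anything here: the “labels” are just held-out pieces of the input, which is what lets self-supervised systems learn from essentially unlimited raw data.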
Hinton agrees that there is much more deep learning progress on the way. In addition to advances in robotics, he also believes there will be another breakthrough in the basic computational infrastructure for neural nets, because “currently it’s just digital computing done with accelerators that are very good at doing matrix multipliers.” For backpropagation, he said, analog signals need to be converted to digital.
“I think we will find alternatives to backpropagation that work in analog hardware,” he said. “I’m pretty convinced that in the longer run we’ll have almost all the computation done in analog.”
Li says that what is most important for the future of deep learning is communication and education. “[At Stanford HAI], we actually spend an excessive amount of effort to educate business leaders, government, policymakers, media and reporters and journalists and just society at large, and create symposiums, conferences, workshops, issuing policy briefs, industry briefs,” she said.
With technology that is so new, she added, “I’m personally very concerned that the lack of background knowledge doesn’t help in transmitting a more nuanced and more thoughtful description of what this time is about.”
How 10 years of deep learning will be remembered
For Hinton, the past decade has offered deep learning success “beyond my wildest dreams.”
But, he emphasizes that while deep learning has made huge gains, it should also be remembered as an era of computer hardware advances. “It’s all on the back of the progress in computer hardware,” he said.
Marcus, for one, says that while some progress has been made with deep learning, “I think it might be seen in hindsight as a bit of a misadventure. I think people in 2050 will look at the systems from 2022 and be like, yeah, they were brave, but they didn’t really work.”
But Li hopes that the last decade will be remembered as the beginning of a “great digital revolution that is making all humans, not just a few humans, or segments of humans, live and work better.”
As a scientist, she added, “I will never want to think that today’s deep learning is the end of AI exploration.” And societally, she said she wants to see AI as “an incredible technological tool that’s being developed and used in the most human-centered way – it’s imperative that we recognize the profound impact of this tool and we embrace the human-centered framework of thinking and designing and deploying AI.”
After all, she pointed out: “How we’re going to be remembered depends on what we’re doing now.”