Hugging Face hosts ‘Woodstock of AI,’ emerges as leading voice for open-source AI development


Hugging Face, the fast-growing New York-based startup that has become a central hub for open-source code and models, cemented its status as a leading voice in the AI community on Friday, drawing more than 5,000 people to a local meetup celebrating open-source technology at the Exploratorium in downtown San Francisco.

The gathering was serendipitously born three weeks ago, when Hugging Face’s charismatic cofounder and CEO, Clement Delangue, tweeted that he was planning to be in San Francisco and wanted to meet with others interested in open-source AI development. 

Within days, interest in the informal meetup snowballed. Registrations ballooned into the thousands. In the final week before the event, Delangue booked the Exploratorium museum, one of the few venues still available that could support thousands of people.

He turned the informal meetup into a massive showcase and networking opportunity for those fascinated by artificial intelligence, from real-world researchers and programmers to investors, entrepreneurs and the simply curious.

“We just crossed 1,500 registrations for the Open-Source AI Meetup!” Delangue said in a text blast to the RSVP list just a few days before the event. “What started with a tweet might lead to the biggest AI meetup in history.”

Hugging Face CEO Clem Delangue messages attendees that his event might be “the biggest AI meetup in history.”

The event was set against the backdrop of a growing debate over large language models (LLMs) and their applications. Critics worry that OpenAI and other companies, such as Google and Microsoft, could monopolize and commodify AI by keeping their LLMs closed.

In contrast, open LLMs are trained on general web data and serve as a substrate for downstream applications to build upon. The open-source community views LLMs as a public good or a common resource, rather than a private product or service.

Open-source AI has a breakout moment

Attendees began streaming into the Exploratorium around 6 p.m. on Friday and kept coming for hours. They formed a striking blend of ages, races and backgrounds — retirees, parents, engineers and large groups of 20-somethings in everything from ball gowns to baggy jeans, a mix of high fashion and streetwear. The atmosphere was electric, the crowd buzzing with the excitement of a music festival.

In brief remarks, Delangue addressed the attendees and said the turnout testified to the growing mainstream interest and excitement around open-source AI development. He said Hugging Face’s mission was to make state-of-the-art AI accessible to as wide an audience as possible and, in the process, increase transparency across the ecosystem.

“We expected maybe a few hundred people to show up,” Delangue said in an address to attendees. “We have 5,000 people tonight. That’s amazing. People are calling it the ‘Woodstock of AI.’”

“I think this event is a celebration of the power of open science and open source,” said Delangue. “I think it’s really important for us to remember in AI that we are where we are because of open science and open source.”

“If it wasn’t for the ‘Attention Is All You Need’ paper, for the BERT paper, and for the latent diffusion paper, we might be 20, 30, 40 or 50 years away from where we are today in terms of capabilities and possibilities for AI,” he said. “If it wasn’t for open-source libraries or languages, if it wasn’t for frameworks like PyTorch, TensorFlow and Keras, or libraries like Hugging Face’s Transformers and Diffusers, we wouldn’t be where we are today.”

“Open science and open source [are ways] to build a more inclusive future, with less concentration of power in the hands of a few, more contribution from underrepresented populations to fight biases, and overall a much safer future with the involvement of civil society, of nonprofits, of regulators to bring all the positive impact that we can have with AI and machine learning,” Delangue added. “And that’s what we’ve seen on Hugging Face: the impact of open science and open source. All of you in the room have contributed to over 100,000 open models on the platform.”

The battle between open and closed LLMs

In recent weeks, a high-stakes debate has been unfolding over whether new large AI models should remain proprietary and commercialized or instead be released as open-source technologies.

On one side, researchers argue transparency reduces risks and commercial pressures to deploy AI before it’s ready; on the other, companies say secrecy is needed to profit from and control their technology. The issue has come to a head in recent weeks as LLMs begin to raise alarms, but there is still no consensus on whether open science or commercialized AI will yield more trustworthy systems.

On Wednesday, just days before the open-source AI event, a highly contentious open letter calling for a six-month pause on large-scale AI development made the rounds in the AI community. The letter was signed by high-profile names such as Elon Musk, Steve Wozniak, Yoshua Bengio, Gary Marcus and several thousand other AI experts, researchers and industry leaders.

“I think OpenAI has done incredible work advancing the state of the art. First, they advanced large language models through GPT-2 and GPT-3 — and then the InstructGPT- or ChatGPT-style models that follow instructions. So, I think that’s at least two major breakthroughs that OpenAI has been responsible for,” Andrew Ng, one of the most influential voices in machine learning over the past decade, said in an interview with VentureBeat.

“At the same time, I’m also excited about all the open language models that are being released,” he added. “But I think it’s very reasonable if, for different reasons, different companies choose to have different policies. I’m excited about the very open models and grateful for all the researchers publishing open models, but I’m also grateful for all the work that OpenAI has done to push this out.”

The path to ethical AI likely depends on striking a balance between scientific openness and corporate secrecy. That balance clearly remains elusive, and the future of AI hangs on the outcome. How tech companies and researchers collaborate — or don’t — will determine whether AI elevates or endangers our lives. The stakes are immense, and so are the challenges of navigating this debate.

Originally appeared on: TheSpuzz