Stable Diffusion AI art lawsuit, plus caution from OpenAI, DeepMind | The AI Beat

Back in October, I spoke to experts who predicted that legal battles over AI art and copyright infringement could drag on for years, potentially even going as far as the Supreme Court.

Those battles officially began this past Friday, as the first class-action copyright infringement lawsuit around AI art was filed against two companies focused on open-source generative AI art — Stability AI (which developed Stable Diffusion) and Midjourney — as well as DeviantArt, an online art community.

Artists claim AI models produce “derivative works”

Three artists launched the lawsuit through the Joseph Saveri Law Firm and lawyer and designer/programmer Matthew Butterick, who recently teamed up to file a similar lawsuit against Microsoft, GitHub and OpenAI related to the generative AI programming model Copilot. The artists claim that Stable Diffusion and Midjourney scraped the internet to copy billions of works without permission, including theirs, which are then used to produce “derivative works.”

In a blog post, Butterick described Stable Diffusion as a “parasite that, if allowed to proliferate, will cause irreparable harm to artists, now and in the future.”

Stability AI CEO Emad Mostaque told VentureBeat that the company — which last month said it would honor artists’ requests to opt out of future Stable Diffusion training — has “not received anything to date” regarding the lawsuit and that “once we do we can review it.”

OpenAI’s Sam Altman and DeepMind’s Demis Hassabis signal caution

I’ll be following up on this lawsuit with a more detailed piece — but it is notable that the news arrives as both OpenAI (which released DALL-E 2 and ChatGPT to immense hype) and DeepMind (which has stayed away from publicly releasing creative AI models) expressed caution regarding the future of generative AI.

In a Time magazine interview last week, DeepMind CEO Hassabis said: “When it comes to very powerful technologies — and obviously AI is going to be one of the most powerful ever — we need to be careful. Not everybody is thinking about those things. It’s like experimentalists, many of whom don’t realize they’re holding dangerous material.” Urging his competitors to proceed cautiously, he added: “I would advocate not moving fast and breaking things.”

Meanwhile, as recently as a year ago, OpenAI CEO Sam Altman encouraged speed, tweeting “Move faster. Slowness anywhere justifies slowness everywhere.” But last week he sang a different tune, according to Reuters reporter Krystal Hu, who tweeted: “@sama said OpenAI’s GPT-4 will launch only when they can do it safely & responsibly. ‘In general we are going to release technology much more slowly than people would like. We’re going to sit on it for much longer…’”

Generative AI can turn “from foe to friend”

Debates around generative AI are certainly only beginning. But the time for these conversations is now, according to the World Economic Forum, which published an article on the topic yesterday, tied to its annual meeting currently underway in Davos, Switzerland.

“Just as many have advocated for the importance of diverse data and engineers in the AI industry, so must we bring in expertise from psychology, government, cybersecurity and business to the AI conversation,” the article said. “It will take open discussion and shared perspectives between cybersecurity leaders, AI developers, practitioners, business leaders, elected officials and citizens to determine a plan for thoughtful regulation of generative AI. All voices must be heard. Together, we can surely tackle this threat to public safety, critical infrastructure and our world. We can turn generative AI from foe to friend.”


Originally appeared on: TheSpuzz
