My DALL-E dilemma | VentureBeat

Today I read Kevin Roose’s newly published New York Times article “We Need to Talk About How Good A.I. Is Getting” [subscription required], featuring an image generated by OpenAI’s app DALL-E 2 from the prompt “infinite joy.” As I pored over the piece and studied the image (which appears to be either a smiling blue alien baby with a glowing heart or a futuristic take on Dreamy Smurf), I felt a familiar cold sweat pooling at the back of my neck.

Roose describes the “golden age of progress” that artificial intelligence (AI) has seen over the past decade and says “it’s time to start taking its potential and risk seriously.” I’ve been thinking (and perhaps overthinking) about that since my first day at VentureBeat back in April.

When I sauntered into VentureBeat’s Slack channel on my first day, I felt ready to dig deep and go wide covering the AI beat. After all, I had covered enterprise technology for over a decade and had written often about companies that were using AI to do everything from improving personalized advertising and reducing accounting costs to automating supply chains and creating better chatbots.

It took only a few days, however, to realize that I had grossly underestimated the knowledge and understanding I would need to somehow ram into my ears and get into the deepest neural networks of my brain. 

Not only that, but I needed to get my gray matter on the case quickly. After all, DALL-E 2 had just been released. Databricks and Snowflake were in a tight race for data leadership. PR reps from dozens of AI companies wanted to have an “intro chat.” Hundreds of AI startups were raising millions. There were what seemed to be thousands of research papers released every week on everything from natural language processing (NLP) to computer vision. My editor wanted ideas and stories ASAP. 

For the next month, I spent my days writing articles and my evenings and weekends reading, researching, searching – anything I could do to wrap my mind around what seemed like a tsunami of AI-related information, from science and trends to history and industry culture. 

When I discovered, not surprisingly, that I could never learn all that I needed to know about AI in such a short period of time, I relaxed and settled in for the news cycle ride. I knew I was a good reporter and I would do all I could to make sure my facts were straight, my stories were well-researched and my reasoning was sound. 

That’s where my DALL-E dilemma comes in. In Roose’s piece, he talks about testing OpenAI’s text-to-image generator in beta and quickly becoming obsessed. While I didn’t have beta access, I got pretty obsessed, too. What’s not to love about scrolling Twitter to see adorable DALL-E creations like pugs that look like Pikachu or avocado-style couches or foxes in the style of Monet?

And it’s not just DALL-E. My heart skipped beats as I giggled at Google Imagen’s take on a teddy bear doing the butterfly stroke in an Olympic-sized pool. I marveled at Midjourney’s fantastical, Game of Thrones-style bunnies and high-definition renderings of rose-laden forests. And I had the chance to actually use the publicly available DALL-E mini, recently rebranded as Craiyon, with its strangely primitive-yet-beautiful imagery. 

How to cover AI progress like DALL-E 

DALL-E 2 and its large language model (LLM) counterparts have gotten massive mainstream hype over the past year for good reason. After all, as Roose put it, “What’s impressive about DALL-E 2 isn’t just the art it generates. It’s how it generates art. These aren’t composites made out of existing internet images — they’re wholly new creations made through a complex AI process known as ‘diffusion,’ which starts with a random series of pixels and refines it repeatedly until it matches a given text description.” 
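
To make that “diffusion” idea a bit more concrete, here is a minimal toy sketch in Python. To be clear, this is not OpenAI’s actual method: DALL-E 2 uses a learned neural denoiser guided by a text embedding, while this illustration substitutes a hypothetical fixed target image for that guidance, purely to show the start-from-noise, refine-step-by-step loop Roose describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for "what the text description asks for." In a real diffusion
# model, this guidance comes from a neural network conditioned on a text
# embedding; a fixed target image is a toy substitute.
target = rng.random((8, 8))

# Step 0: "a random series of pixels," as Roose puts it.
image = rng.random((8, 8))

steps = 50
for _ in range(steps):
    # Each iteration nudges the image toward the guidance signal while a
    # little fresh noise is mixed back in, loosely mimicking one
    # denoising step of the refinement loop.
    image += 0.1 * (target - image) + rng.normal(scale=0.02, size=image.shape)

print(f"Mean pixel error after {steps} steps: {np.abs(image - target).mean():.4f}")
```

Real models replace that single hand-tuned nudge with billions of learned parameters and run the loop over far larger images, but the overall shape of the process (begin with noise, refine repeatedly until the result matches the description) is the same.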

In addition, Roose pointed out that DALL-E has big implications for creative professionals and “raises important questions about how all of this AI-generated art will be used, and whether we need to worry about a surge in synthetic propaganda, hyper-realistic deepfakes or even nonconsensual pornography.” 

But, like Roose, I worry about how best to cover AI progress across the board, as well as the longstanding debate between those who think AI is rapidly on its way to becoming seriously scary (or believe it already is) and those who think the hype about AI’s progress (including this summer’s showdown over supposed AI sentience) is seriously overblown.

I recently interviewed computer scientist and Turing Award winner Geoffrey Hinton about the past decade of progress in deep learning (story to come soon). At the end of our call, I took a walk with a spring in my step, smiling ear to ear. Imagine how Hinton felt when he realized his decades-long efforts to bring neural networks into the mainstream of AI research and application had succeeded, as he said, “beyond my wildest dreams.” It was a testament to persistence.

But then I scrolled dolefully through Twitter, reading posts that veered from long, despairing threads about the lack of AI ethics, the rise of AI bias and the costs of compute, carbon and climate, to exclamation-point-and-emoji-filled posts cheering the latest model, the next revolutionary technique, the bigger, better, bigger, better … whatever. Where would it end?

Understanding AI’s full evolution 

Roose rightly points out that the news media “needs to do a better job of explaining AI progress to non-experts.” Too often, he explains, journalists “rely on outdated sci-fi shorthand to translate what’s happening in AI to a general audience. We sometimes compare large language models to Skynet and HAL 9000, and flatten promising machine learning breakthroughs to panicky ‘The robots are coming!’ headlines that we think will resonate with readers.” 

What’s most important, he says, is to try to “understand all the ways AI is evolving, and what that might mean for our future.” 

For my part, I’m certainly trying to make sure I cover the AI landscape in a way that resonates with our audience of enterprise technical decision-makers, from data science practitioners to C-suite executives. That’s my DALL-E dilemma: How do I write stories about AI that are entertaining and creative, like the most striking AI-generated art, but also accurate and unbiased?

Sometimes I feel like I need the right DALL-E image (or, since I don’t have access to DALL-E, one from the free and publicly available DALL-E mini/Craiyon) to describe the cold sweat on the nape of my neck as I scroll through Twitter, the furrow in my forehead as I try to fully understand what I’m being told (and sold), and the chest-clutching fear I feel sometimes as I worry I’ll get it all wrong.

Maybe: A watercolor-style portrait of a woman running on a dreamy beach as if her life depended on it, who is reaching for the sky after accidentally letting go of a hundred large red balloons, all rising in different directions, threatening to get lost in the white fluffy clouds above. 
