The current gold rush to capitalize on generative AI is, ultimately, about making money and boosting business.
After all, Microsoft didn’t agree to invest billions more in OpenAI simply because the latter has a mission to “ensure advanced AI benefits all of humanity.” It’s about commercializing the technology for financial gain.
The California gold rush of 1848–1855 was about financial gain, too.
But while a select few miners and merchants struck it rich back in the mid-19th century, the mad dash for shiny riches also led to violence against Native Americans, discrimination against Chinese immigrants, and plenty of shattered dreams. The gold rush changed American society — for one thing, it led directly to California statehood in 1850 — but we should never forget the hundreds of thousands of individual people affected.
I have shared in the excitement around how generative AI tools like ChatGPT, DALL-E and Stable Diffusion may transform the enterprise and technology landscape. But the past week’s AI and Big Tech news, from outsourced labor to layoffs and lawsuits, provided a sober reminder of the human side of the generative AI storyline that I can’t — and enterprise businesses shouldn’t — simply ignore.
Disturbing news of how ChatGPT was trained
First, there was Billy Perrigo’s Time story on Wednesday about how OpenAI used outsourced Kenyan laborers earning less than $2 per hour to label violent and hate-speech-filled data to help train ChatGPT to serve up less toxic output.
This storyline is not new — AI researcher Timnit Gebru, along with Adrienne Williams and Milagros Miceli, reported on the exploited labor behind AI for NOEMA in October. And the history of industrialization, let alone technology, has been riddled with tales of sweatshops and mistreated labor. But reading Perrigo’s story in the context of OpenAI’s rise and ChatGPT’s hype was particularly upsetting.
Google and Microsoft lay off thousands as AI advances
It was also a week overflowing with Big Tech pink slips: On Wednesday, Microsoft announced plans to lay off 10,000 employees by March, while Amazon began layoffs that will total 18,000 workers. Then, on Friday, Google announced it will slash 12,000 jobs, the biggest layoffs in its history.
The blame for the layoffs likely can’t be laid at AI’s feet: As Andrew Chow explained in a new Time piece, they have more to do with current economic conditions, including over-expansion during the pandemic and the end of low interest rates, than with ChatGPT.
That said, the optics are terrible: Microsoft invests billions in OpenAI and tells thousands their jobs are history? Google brings in founders Larry Page and Sergey Brin to help in the AI fight while employees with decades of experience are shown the door? Sigh.
All humans will be affected by generative AI. And we need a minute.
Generative AI is coming at humanity fast and furious. But while businesses might be champing at the bit to see how it evolves into a “killer” use case, the rest of us need a minute. We are all going to be affected one way or another, and no, it’s not going to be all roses and revenue.
The news last week was a lot: CNET pausing its controversial AI-generated stories; artists suing text-to-image generators like Stable Diffusion; teachers and schools scrambling to figure out how to handle ChatGPT.
It’s fine for VCs and researchers and startups and Big Tech execs and enterprise CIOs to crow about the generative AI possibilities coming down the pike. The appetite for gold is real.
But at the very least, all generative AI stakeholders should acknowledge the human toll. Even if AI isn’t going to take all of our jobs and destroy humanity, people are still affected at every step of this long and winding journey of evolving artificial intelligence and machine learning. And those stakeholders should up their game in making sure AI evolves safely, ethically, responsibly and humanely.