The Dilemma of AI Companies: Addressing the Negative Impact

Players gonna play. Haters gonna hate. But when it comes to the pornographic AI-generated deepfakes of Taylor Swift, which were shocking, awful and viral enough to send Elon Musk scrambling to hire 100 more X content moderators and Microsoft to commit to more guardrails on its Designer AI app, I would personally like to say to AI companies: No, you cannot simply “shake it off.”

I know you would like to shake it off. You’d like to keep cruisin’. You can’t stop, you say. You won’t stop groovin’. It’s like you have this music in your mind sayin’ “it’s gonna be alright.” After all, Marc Andreessen’s “Techno-Optimist Manifesto” said “Technology is the glory of human ambition and achievement, the spearhead of progress, and the realization of our potential.” OpenAI’s oft-stated mission is to develop artificial general intelligence (AGI) that benefits all of humanity. Anthropic is so confident it can build reliable, interpretable, and steerable AI systems that it is building them. And Meta’s chief AI scientist Yann LeCun reminded us all yesterday that the “world didn’t end” five years after GPT-2 was deemed too dangerous to release. “In fact, nothing bad happened,” he posted on X.

Sorry, Yann — yes, bad things are happening with AI. That doesn’t mean good things aren’t happening too, or that overall optimism isn’t warranted if we look at the grand sweep of technological evolution in the rear-view mirror.

But yes — bad things are happening, and perhaps the “normies” understand that better than most of the AI industry, because it is their lives and livelihoods that are on the front lines of AI impact. I think it’s important that AI companies fully acknowledge this, in the most non-condescending way possible, and clarify the ways they are addressing it. Only then, I believe, will they avoid falling off the edge of the disillusionment cliff I discussed back in October.

Recognizing the Challenges and Responsibilities

Along with the fast pace of compelling, even jaw-dropping AI developments, I said back then, AI faces a laundry list of complex challenges — from election misinformation and AI-generated porn to workforce displacement and plagiarism. AI may have incredible positive potential for humanity’s future, but I don’t think companies are doing a great job of communicating what that is. And now, they clearly aren’t doing a great job of communicating how they will fix what is already broken. As Swifties know perfectly well, “now we got problems… you made a really deep cut.”

I love the AI beat. I really do — it’s exciting and promising and fascinating. However, it can be exhausting always rooting for what many see as a morally ambiguous anti-hero technology. And sometimes I wish the most vocal AI leaders would stand up and say “I’m the problem, it’s me, at tea time, everybody agrees, I’ll stare directly at the sun but never in the mirror.”

But they need to look in the mirror: No matter how many well-meaning, high-minded AI researchers, executives, academics and policymakers exist, there should be no doubt in anyone’s mind that the Taylor Swift AI deepfake scandal is just the beginning. Millions of women and girls are at risk of being targeted with AI-generated porn. Experts say AI will make the 2024 election a “hot mess.” Whether they can prove it or not, thousands of workers will blame AI for their layoffs. Many “normies” I talk to already sneer with derision when they hear the term “AI.”

I’m sure that is incredibly frustrating to those who see the power and promise of AI as a bright, shining star with the potential to solve so many of humanity’s biggest challenges. But if AI companies can’t figure out a way forward that doesn’t simply run over the very humans they are hoping will use and appreciate — and not abuse — the technology? Well, if that happens — baby, now we got bad blood.
