Doom & Gloom

Are we too scared? Or not scared enough?

June 11, 2025

If you’ve read anything I’ve posted here over the last year, you know I’ve been talking about AI a lot—and you know I’m mostly optimistic about it. I mean, this blog is literally called Benign—because I genuinely believe a lot of the fears around this tech are, well... benign. But what we’re seeing now isn’t just a shift—it’s the shift. The big one. The one that’s going to hit harder than the internet, mobile phones, and social media combined. I’ve been saying it’s going to rock the world unlike anything we’ve ever seen. And I still believe that.

But something’s shifted lately. Not in the tech itself—we’ve known this stuff was coming. But in how people are reacting to it.

We’re in a new gear now.

Google just dropped Veo 3, the most advanced AI video generator on the planet. Text-to-video is here, for real. And meanwhile, every other post on social media is some viral AI-generated thing: Theo Von as a baby. The cast of Step Brothers as babies. Studio Ghibli-style versions of Breaking Bad. An endless feed of distorted nostalgia, memeified faces, and time-lapsed celebrity clones.

And people hate it.

Like, really hate it.

“It’s not art, it’s slop.”
“It’s ruining the internet.”
“No one asked for this.”
“This is the end of creativity.”
"Almost like the death of a God.”

And yeah... it’s been bumming me out.

Because I’ve been deep in this space for years now. I’ve made entire bodies of work—Jewel Pod, Polite Riot, Clouds of Acid—all using AI as a brush, a tool, a lens. And while I don’t blindly generate music, video, or images with AI, I’ve built everything surrounding those projects—my website, my apps, my chatbots—using a method now known as vibe coding.

The term came from a tweet by Andrej Karpathy and was later expanded on in a Rick Rubin podcast. It basically describes a new style of programming where you let the AI handle the logic, trust the flow, and just react to what you see. You don’t write code so much as you talk to the machine, ask dumb things, copy-paste the bugs back in, and hope it eventually works. And weirdly? It often does.

That’s where I started. I’m the definition of a vibe coder. I barely knew how to code a year ago, but I’ve been building real tools and systems just by trusting the process and having the AI teach me as I go. And now, little by little, I’m shifting from “just get it working” to “actually learning how this works.”

Every project I make becomes a little more me and a little less AI—not for the sake of some grand personal expression, but because I want to actually know how to do this stuff. It’s fun.

It’s like traveling in a foreign country. At first, you rely entirely on translation apps and hand gestures. But over time, you pick up the language. You stop needing the tools. You start speaking for yourself. That’s what vibe coding feels like. For the first time in my life, I’m building the kinds of things I always wanted to—but I’m also learning how to speak the language along the way.

I’ve never been prouder of the work I’m doing.

And yet, all this backlash is making me feel weirdly ashamed.

Not because I’ve made anything bad—but because suddenly the entire world is looking at the medium and saying, “This is garbage.” Even though it’s the same medium I’ve used to make some of the most personal, meaningful art of my life.

Look—I make dumb stuff, too. I’ve made AI memes. I’ve made weird little side projects I’d never publish. Playing with these tools is fun. But seeing AI get reduced to an avalanche of baby-faced celebrities and reanimated sitcoms has made me feel like I’m standing in the middle of a crowded room, watching a flash mob dance to a song that used to mean something to me. Or even worse, that the song never really meant much all along.

And it’s not just the memes. It’s everywhere. You can’t escape it.

Even the things that usually make me feel grounded—like listening to The Bill Simmons Podcast—are creeping into that same space. I know Simmons isn’t for everyone, but I actually read The Book of Basketball—which could just as easily be titled Remembrance of Basketball Things Past—and I can confirm: the guy knows what he’s talking about.

And with the Finals in full swing, I’ve been deep in basketball-podcast-listening mode.

Then today, he had Chuck Klosterman on—another brain I genuinely admire—and they closed the show with 30 minutes of AI dread.

Not hot takes. Not jokes. Just honest, thoughtful, grim conversation about how AI might be leading us toward a complete reset of humanity. Not in twenty years. In three.

And again—I was bummed.

Because even though the discussion was candid and smart and didn’t feel performative or dumbed down, I walked away with that familiar feeling again: Am I not scared enough? Am I missing something? Am I wrong for not being more pessimistic?

But the more I sat with it, the more I realized: I don’t actually disagree with them. I just see it differently.

The truth is—I don’t know what’s going to happen.

AI might destroy industries. It might lead to mass displacement. Or—as Bill joked—it might invent a superfood that costs nothing and solves global hunger.

Both outcomes are possible. That’s the reality.

Chuck brought up the book Sapiens—and the idea that this may be the first moment in human history where a parent genuinely doesn’t know what their kid needs to know in order to survive in twenty years. Do they need to know how to read? Drive a car? Write a line of code? We don’t know.

That’s scary. But it’s also true of every major leap forward.

When AOL was booming and The Net was in theaters, people were terrified to put credit cards on the internet. And yeah, identity theft happened. But the doomsday version we feared at the time never actually arrived. The world shifted—and we adapted.

Same thing with social media. Same thing with mobile. Same thing with every tool that threatened to rewrite the status quo.

And maybe part of what’s making this moment so disorienting is that all we’re seeing right now are the loudest players.

You’ve got the grifters—people slapping “AI-powered” on startups, like Builder.ai, a $1.5 billion company exposed for using hundreds of human engineers to fake AI automation. A digital Wizard of Oz scam—one that worked, until it didn’t.

On the other side, you’ve got the luddites, recording TikToks on their smartphones—literal products of AI-enhanced supply chains and algorithmic distribution—ranting into the void: “Stop posting AI slop! It’s not art!!” As if screaming at a tidal wave has ever stopped one from crashing.

Those are the voices we’re seeing because those are the voices that go viral.

The destroyers. The profiteers. The ones with an angle to push or a fear to sell.

But what we don’t see—what rarely makes the feed—are the builders.

The ones quietly learning, experimenting, and making things that will far outlast the noise. The ones not on Twitter because they’re too busy building the tools that will reshape our future. The ones asking better questions, and actually doing the work.

So yeah—I don’t know what’s going to happen. It could all be bad. It could all be good. Most likely, it’ll be a messy mix of both.

But one thing’s certain: the tidal wave is coming.

Not in twenty years. In three.

And when it hits, we all get a choice:

You can be a destroyer.
You can be a profiteer.
Or you can be a builder.

I know what I want to be.
Do you?
