Uncanny Backlash

What Natasha Lyonne’s AI controversy really says about us

June 3, 2025

Natasha Lyonne has spent her career in the mud.

From Slums of Beverly Hills to Russian Doll, But I’m a Cheerleader to Poker Face, she’s chosen roles—and stories—that are scrappy, personal, and a little unhinged. She’s been a cult icon for 25 years without ever turning into a caricature. There are few artists working today more consistently associated with offbeat, human-centered storytelling.

So when Lyonne announced she was making her directorial debut with a “hybrid” film that uses AI tools—through an ethical studio she co-founded—you’d think the internet might pause.

At the very least, you’d think her track record would earn her the benefit of the doubt.

It didn’t.

The moment the word “AI” entered the announcement, the nuance disappeared.

The backlash began.

Trust No Artist

To be fair, public anxiety around AI isn’t unwarranted. It’s a tool often deployed by people trying to cut costs, cut corners, or cut jobs. It’s been used to replicate styles without consent and profit from work it didn’t originate. “Nobody asked for this” has become a common refrain. And people are tired of being told that “innovation” is synonymous with “inevitable.”

But Natasha Lyonne is not some opportunistic tech executive with a disruption fetish. She’s not replacing actors or writers with models trained on stolen labor. She’s making a weird indie sci-fi comedy about a teenager losing herself in an AR video game.

And she’s doing it with tools that, according to her, are being used responsibly—for things like set extensions, not synthetic lead performances. She’s talking about exploring a new format, not replacing an old one.

And yet, even with all that context, the outcry came fast and loud. So loud that Lyonne had to publicly address it. She described it as “comedic” how quickly people misunderstand headlines, and admitted that being on the receiving end of a backlash felt “scary” and “not fun.”

Context Collapse

The problem isn’t just that people got mad.

It’s that the anger wasn’t tied to specifics. No one waited to see what she was making. No one seemed interested in how the technology would be used—or how that use might differ from the dystopian narratives we’ve grown used to.

That’s the real signal here: the absence of curiosity.

It’s not wrong to scrutinize AI. But it is short-sighted to flatten every use case into the same moral panic. Because when you fail to distinguish between deepfaked misinformation and ethical set design tools, you don’t just make a mistake—you make the whole conversation dumber.

And when even an artist like Natasha Lyonne—someone with no history of selling out—can’t explore new tools without being framed as a villain, it sends a message to everyone else:

Don’t try.

What We Lose When We Punish Curiosity

This isn’t about defending a specific project. Maybe Uncanny Valley will be great. Maybe it won’t. But if we don’t allow artists the space to experiment—especially ones who’ve earned our trust through years of thoughtful work—we leave no room for growth.

There are going to be bad uses of AI in film. That’s a given. But there will also be surprising, human, moving ones. And if we chase every experiment off the table before it begins, we don’t slow the future—we just make sure the wrong people shape it.

Ethics Was the Whole Point

And here’s what makes the backlash even more baffling: this wasn’t a case of corporate overreach or shady model training. Lyonne did the thing AI skeptics have been begging for.

She partnered with engineers to build a transparent, ethically sourced model. She co-founded a studio focused on artist protections. She used the tech as a supplement, not a replacement. No one’s work was scraped or stolen. No one was erased.

In fact, Lyonne has spent years warning against AI misuse. She’s signed open letters. She’s spoken publicly about the need for consent and caution. She’s done more to advocate for artist rights than most of her critics ever will.

And still—because the word “AI” appeared—people decided she was a threat.

That’s not just unfair. It’s revealing.

Because if even someone like Lyonne gets attacked for doing everything right, the problem clearly isn’t the how.

It’s that we’ve made AI a moral Rorschach test—where even good intentions look like betrayal.

Lyonne isn’t promising a techno-utopia. She’s trying to make a weird sci-fi comedy.

We should let her.
