
Sampled Past. Generated Future.

Fatboy Slim, AI Music, Race, and Representation

May 19, 2025

I recently watched an old video of Fatboy Slim breaking down how he created “The Rockafeller Skank.” It’s mesmerizing. He walks through his sampling process in detail, showing how the track is stitched together from dozens of obscure sources: funk breaks, library records, forgotten vinyl from secondhand shops. He jokes about still not having all the samples cleared, crossing his fingers as he admits one of the original creators hasn’t come after him yet.

Fatboy Slim - The Rockafeller Skank | The story behind the song

It’s a brilliant piece of honesty—and a fascinating bit of musical history. But it’s also a mirror.

Everything Fatboy Slim describes in that video—the layering, the remixing, the recontextualizing—mirrors exactly how generative AI music tools work. He wasn’t ripping off songs. He was reimagining them. Reassembling cultural references into something new that matched his taste, his vision. That’s exactly how tools like Suno and Udio operate today: not by copying, but by drawing inspiration from millions of data points to synthesize something new.

Critics of AI music often say it’s theft. That it plagiarizes existing work. That it reuses other people’s ideas. But that’s what sampling was, too.

And for decades, sampling wasn’t just accepted—it was celebrated. It birthed new genres. It gave voice to new creators. It made music more inclusive.

In fact, in that same video, Fatboy Slim explains why he fell in love with sampling in the first place. He says: “That’s when white people could make black music without pretending to be black... you could get the funk by using drum loops and samples rather than having to sit there pretending to be Bootsy Collins.”

There’s honesty in that statement. And there’s tension. On one hand, it speaks to the joy of creative exploration—of finding your voice in sounds that weren’t originally yours. On the other, it touches on the real and complicated issue of cultural appropriation. What does it mean for a white artist to “remix” Black art for mass audiences? How do we draw the line between influence and exploitation?

These questions are not new. They’ve been asked about Elvis, about Eminem, about Vanilla Ice. They’ve been asked about Iggy Azalea and Miley Cyrus.

But here’s what I find interesting: generative AI music doesn’t sample. Not literally. It doesn’t lift a waveform and drop it into a DAW. It creates new waveforms from scratch—trained on patterns, not copies. It doesn’t repackage someone else’s melody. It models what melodies in that genre tend to sound like. It’s not pasting—it’s painting.

And that makes this feel, to me, like a step forward.

Why I Make AI Music

I’ve talked before about how and why I create music using AI. I work across a variety of personas—some male, some female, some abstract. I write lyrics from different perspectives, sculpt beats with Suno, and iterate on tracks until they match the emotion in my head.

Sometimes those emotions emerge through Jewel Pod, a character whose sound pulls from gangster rap, trip hop, and club anthems. Other times they sound like Clouds of Acid: rawer, more honest and personal, more human. I don’t adopt these voices to pretend I’m something I’m not. I use them to explore sounds I love—sounds I want to hear more of in the world.

Just like Fatboy Slim didn’t pretend to be Bootsy Collins, I’m not pretending to be M.I.A. or Azealia Banks or Tegan and Sara. I’m just finally able to join the conversation in genres I used to only admire from a distance.

And in doing so, I’ve realized how meaningful this technology could be for people far beyond me.

Because for some people, the issue isn’t creative curiosity—it’s identity.

Expression vs. Appropriation

I’m a cis white man. I don’t pretend otherwise. And I take representation seriously. I’ve spent a lot of time thinking about what’s mine to say, and how to say it without stepping on someone else’s story.

But what excites me about AI music is that it could allow more people to express themselves—not fewer. People who feel misaligned with their physical voice. People who’ve never had access to music production tools. People from marginalized communities who’ve never heard themselves reflected in the catalogs of mainstream art.

When Fatboy Slim and his peers sampled Black artists, it created something vibrant—but it also raised questions of ownership. With generative AI, we’re entering a new paradigm: one where inspiration can happen without directly lifting. One where the “sound of” a culture or style can be recreated without literally using someone else’s recording.

This doesn’t mean ethical concerns vanish. If anything, a whole new set of ethical concerns is born. But it does mean we have a chance to build something less extractive.

The Future of Cultural Voice

There are risks, of course. There’s a difference between expression and exploitation. There’s a difference between paying tribute to a genre and cashing in on it. We still have to ask the hard questions about consent, credit, and context.

But we should also acknowledge the opportunity.

Generative AI gives people the power to explore identity through sound—without needing to pass through traditional gatekeepers. You can now shape the kind of emotional tone that once required a full studio, a team of collaborators, and someone else’s permission.

We need to make room for that kind of experimentation. We need to have the conversation without shutting the door.

There will always be people using these tools irresponsibly, just like there have always been bad actors in any creative scene. But that shouldn't invalidate the tools themselves.

Because creativity has always lived in the grey areas. And technology has always complicated—but never killed—the soul of music.
