Is AI about to eat Hollywood alive?



It started as a joke. A throwaway Reddit punchline, a nerd’s provocation: “What if an AI wrote the next Tarantino script?” Cute. Until it wasn’t. Until it became a .mp4 file generated from a line of text. A 1080p video of a crying astronaut in the rain... shot without a single camera ever rolling. That’s Sora. That’s Suno. That’s MusicLM. This isn’t sci-fi anymore. It’s science fiction temping for content producers. Since ChatGPT’s viral explosion in late 2022, generative AIs have stopped being just your email wingmen. They write, compose, sketch, slice, dub, animate, and edit. Two months — that’s all it took for ChatGPT to reach over 100 million users. Two months for the tool to become the standard. For “prompt” to sneak into family dinner conversations. This isn’t just a tech evolution. It’s a full-blown creative grammar reset.


And Hollywood knows it. Feels it. Fears it. Because this time, the revolution isn’t coming from a Spielberg on acid or a caffeine-addled A24 team. It’s coming from a line of code. From an AI that doesn’t strike, doesn’t seek validation, doesn’t ask for final cut. And it produces... fast. Too fast. We’ve jumped from “AI will never replace artists” to “this thing just made a trailer in 4K” before we even had time to blink. TikTok’s video consumption has doubled in four years. The creator economy is worth over $250 billion. And yes, some studios are already testing full-AI pipelines. This isn’t a future fantasy. It’s pre-production reality.


But what are we actually making here? A creative revolution? Or just pre-chewed, risk-proof content built to please the algorithm before it even tries to move an audience? Is AI liberating us from the system — or just building a prettier cage? When everything is generatable, what’s left of voice? When anything is possible, what’s left of risk? And most of all: when the machine produces more than you, faster than you, cleaner than you... do you keep trying? Or do you surrender — one prompt at a time? In this episode, we’re pouring strong espresso and diving into glitchy realities. We’ll talk about Sora — the AI generating both dreamscapes and lawsuits. We’ll explore the rise of AI music, from its synthetic harmonies to its shady metadata. We’ll dive into the creator economy, where fans become monetized business models and artists... monthly test subjects in their own subscription labs. And through it all, we’ll ask: does the human voice still matter in a game that’s increasingly just another software update?


Because spoiler alert: the future is not a finished product. It’s a draft. A never-ending beta. And maybe that’s our only shot — being messier than the machine. Less efficient. More unpredictable. More inconvenient. More... alive. Welcome to Cappuccino & Croissant. Today, we’re not pretending to know what tomorrow holds. We’re just diving in — cable unplugged, pen sharpened, sarcasm fully activated.


Sora: the child star of 2025


Some kids grow up in the tabloids. Others on red carpets. And then there’s Sora. Born in February 2024 inside the R&D matrix of OpenAI, this video model is the digital equivalent of a precocious child celebrity — barely a year old and already producing short film–worthy clips without ever having touched a camera. No childhood, no awkward teen phase, no social anxiety. Just a supernatural ability to turn a prompt like “a dog runs in the snow in slow motion, tracking shot” into a photorealistic 1080p sequence — with flying snowflakes and splash physics included. Welcome to the prodigy child of the AI generation. No drool, no acne, no union rights.


And naturally, everyone’s watching. Artists. Studios. Lawyers. Because Sora isn’t just some tech toy — it’s a panic accelerator for the entire entertainment industry. Scripts write themselves (literally), images animate on command, and the only thing keeping this machine from steamrolling everything is that it’s still (officially) in limited-access beta. For now. Technically, Sora is a diffusion transformer trained to understand the structure of the physical world so it can generate videos with spatial, temporal, and narrative coherence. This isn’t just a stack of moving images — Sora “understands” motion physics, perspective, logical transitions. You can ask for “a man in a purple suit crossing a busy Tokyo street in autumn rain,” and it’ll deliver a cinematic shot worthy of a Kendrick Lamar music video. It can simulate camera movements, lens changes, depth of field. It’s professional-grade cinematography… from a sentence. Sixty seconds later, it’s there — full HD, texture-rich, light-balanced. No crew. No shoot.


Selected creators have already been granted early access by OpenAI. The demos online are borderline hallucinatory: scenes that look like luxury commercials, impossible drone shots that would normally require a helicopter, surreal clips somewhere between David Lynch and Pixar. Sure, the aesthetic still carries a few tell-tale signs of AI — slightly off faces, uncanny hands — but the evolution is so rapid it’s making traditional animation studios sweat. But Sora isn’t just fast. It’s scalable. One video a day? That’s so 2023. One video a minute? Technically possible. An entire campaign generated in 24 hours? Already been done. In a world where the algorithm is a hungry god constantly demanding more content, Sora is the perfect servant: tireless, frictionless, deadline-free, and gloriously unpaid.


And that’s where things get sticky. Because as dazzling as the tech is, copyright law wasn’t exactly designed for this kind of entity. In the U.S., the rule is clear: a work without a human author isn’t eligible for copyright protection. Period. In other words: an AI-generated video like Sora’s has no stable legal status. You can make it. You can share it. But can you protect it? The answer: ¯\_(ツ)_/¯. In Europe, it’s just as legally fuzzy. The AI Act mandates transparency from generative AI developers and requires dataset documentation — but still doesn’t clearly define the legal status of AI-generated works.


And here lies the core headache: if an AI is trained on copyrighted works, and produces outputs that “recombine” them… are we still talking about original creation, or are we dealing with synthetic plagiarism? This debate is no longer theoretical. Lawsuits are already surfacing. Getty Images sued Stability AI for unauthorized use of its image database to train a model. Artist groups have taken legal action against AI models mimicking their styles or reusing compositions. And the Hollywood majors? They’re watching very closely — trying to figure out whether they’ve just been handed a productivity miracle, or a creative coup d’état. In this context, Sora is both the breakout star… and the kid you need to keep an eye on. It fascinates. But it also terrifies. Because it raises the real question: if an AI can generate mind-blowing images from prompts alone, what happens to the director? The storyboard artist? The color grader? And more pressingly — who owns the rights to these images? OpenAI? The user? No one?


Let’s be clear: Sora doesn’t invent. It extrapolates. It stitches. It replays the world based on millions of absorbed examples. Think of it as the ultimate overachieving intern — it knows everything, but understands nothing. It can mimic a scene that feels like Nolan or Wes Anderson… but it doesn’t grasp emotional arcs, subtext, or narrative rhythm. In short: it nails the style — but misses the meaning. And that’s the line between magic and mimicry. Humans create with scars, with hesitation, with contradiction. AI smooths, calculates, aligns. The difference isn’t about technical quality. It’s about ontological essence. Sora might generate the “perfect” image. But it can’t yet give you a moment like Her’s silent despair, Eternal Sunshine’s aching tenderness, or the slow-burning fury of Moonlight.


Still, is that enough to stop the rollout? Probably not. In an industry obsessed with returns, metrics, and tighter deadlines, there’s a strong temptation to settle for “almost.” For “clean.” For “efficient.” Even if it means sacrificing texture. Soul. Risk. That’s the real threat: not that Sora will replace artists — but that it will slowly erase them behind a flood of clickable, flawless, meaningless content. As of now, OpenAI limits access to Sora to selected creators and enterprise partners. Officially, this is to prevent mass production of deepfakes, malicious misuse, or public backlash. Unofficially? It’s a stress test — gauging public reaction, use cases, and ethical gray zones. But let’s not kid ourselves: general rollout is inevitable. It’s a matter of months. Maybe a year. And once it hits the public, we’ll have to learn to coexist. To tell the difference between a sincere video and an optimized one. To ask: who really thought this through? To demand new legal frameworks. New labels. New filters. And maybe… new forms of art.


Because if Sora breaks the current system, it also unlocks new spaces. It could be a game-changer for low-budget creators. An extension of imagination. A bridge between vision and execution that was once out of reach. If — and only if — we don’t hand over storytelling entirely. If we keep a firm grip on the pen. That’s the real battle: not letting the child prodigy become a tyrant. Not turning a brilliant tool into a creative overlord. Sora isn’t a monster. It’s a mirror. And what we see in it will always reflect what we choose to project.


Between music and metadata


Just a few years ago, starting a band meant at least a garage, two friends, one guitar, and several liters of coffee. Today, all it takes is a keyboard, a prompt, and a model trained on thousands of hours of existing music. Welcome to the invisible studio of 2025 — where AI doesn’t tune a guitar, but stacks chords, vocals, and choruses in seconds. It doesn’t sweat. It doesn’t overthink. It doesn’t hit wrong notes. It just produces. On repeat. And it doesn’t ask for royalties.


This isn’t just a technical flex. It’s not about sound quality or innovation. What we’re witnessing is a total redefinition of what we even mean by “original music.” With tools like Suno, MusicLM, and others, AI now generates full songs based on simple text descriptions. Type in: “nostalgic pop-rock ballad with early 2000s female vocals” — and boom, a full track appears. Verse, chorus, bridge, even a bassline. The problem? That track might sound a little too familiar. Like something you’ve heard before. Like someone — who maybe spent ten years crafting their EP — just got summarized by an AI in two lazy prompts. On one side, there’s MusicLM — Google’s model capable of generating music from descriptive text. It gets the vibe, the instruments, the tempo. Write “orchestral piece evoking rain on a window,” and you’ll get something halfway between Max Richter and a Sundance movie bonus track. On the other side, there’s Suno — newer, poppier, more public-facing. In seconds, it spits out complete songs with lyrics, melody, instrumental layers, and even vocals. Not the robotic, vocoder-drenched kind — no. Human-sounding vocals, with nuance, breath, groove. Version 3, launched in mid-2025, even goes further: it can export the whole thing directly to Spotify via API. An AI-made banger. Ready to monetize. No label. No studio. No human.


And the numbers back it up. Since early 2024, tens of thousands of AI-generated songs have flooded streaming platforms — often without the listener even realizing no composer was involved. Some went viral on TikTok. Others slipped unnoticed into lo-fi or ambient playlists. The result? The boundary between “real artist” and synthetic production has never been blurrier. And naturally, not everyone’s thrilled. Here’s the catch: to generate this sonic tsunami, what exactly were these models trained on? Massive datasets — full of existing tracks, some public, some under copyright, many unlisted. Google claims that MusicLM only uses royalty-free or licensed material — but verifying the full corpus? Impossible. Suno doesn’t disclose its training material either. And that’s where things get murky. If an AI generates a track that resembles a real song — same chord progressions, similar melody, near-identical structure — is that an homage, a coincidence… or disguised plagiarism?


In May 2025, an independent artist filed a lawsuit after discovering that an AI-generated track reproduced a melodic loop nearly identical to a song from their 2021 EP. The song had been generated on a third-party platform powered by a model derived from MusicLM. The case is ongoing, but it raises a central question: can you claim rights over an AI-generated work, if the tool itself was trained on protected content? There’s no case law yet. But the majors? They’re already sharpening their legal knives. Warner, Sony, Universal — they’re all watching these models like hawks. Not out of artistic concern (let’s not be naïve), but because if AI starts producing hits faster — and cheaper — than their human teams, their business model crumbles. The real fight here isn’t aesthetic. It’s economic.


Then there’s the stealthier problem: disclosure. When a song is AI-generated, there’s currently no requirement for it to be labeled as such. On streaming platforms, users can upload without ever indicating that a machine composed the music. So you’re out there, listening to a soulful ballad about heartbreak… written by an AI from the prompt “emotional breakup song in the style of Adele.” And you, tender human that you are, project real pain onto a mathematically generated output.


That’s why metadata is the new battlefield. Should we mandate “AI-generated” tags on every audio file? Enforce transparency on training datasets? Build reverse-fingerprinting systems to detect recycled patterns? As of now: nothing. The industry is flying blind, while original creators watch their style being digested, replicated, and regurgitated — credit-free. And even when the AI isn’t intentionally copying, it still flattens the landscape. It relies on compositional logic so standardized that it ends up erasing weirdness. Reducing risk. Optimizing formula. When your model is trained on hits, it generates… hits. Flawless. Formulaic. But efficient. Music becomes a derivative product of its own algorithm.


So what now? Do we riot? Pull the plug? Record on cassettes again just to protect our sound from theft? Some artists choose radical rejection: banning their work from datasets, going full analog, embracing militant anti-AI stances. Others subvert: they glitch the tools, bend them, corrupt them — the way you’d sample a broken loop. Then there are those adapting. Using AI as an extension of their creativity — to prototype ideas faster, to simulate collabs, to experiment without racking up studio time. In this scenario, AI is not a threat. It’s a sidekick. A creative co-pilot. But the line is razor-thin. And fragile. Because what helps you compose today… could erase you tomorrow.


This isn’t just a legal debate. It’s a cultural one. Do we want a world where hits are made by systems trained on human emotions — without ever feeling them? Do we want to keep listening to music without knowing whether it came from a person or a prompt? Does the emotion still hit the same when we know it’s been simulated? This isn’t an indictment of AI. It’s a reminder: music is more than just sound arranged over time. It’s memory. Subjectivity. Flaws. A track isn’t “good” because it’s well-produced. It’s good because it cracks something open inside us. Something no dataset can teach. No prompt can describe. No model can replicate.


Fans or test subjects?


At the beginning, it was supposed to be the creators’ revenge. The end of arrogant record labels, snooty studios, and platforms that take 40% and ghost your emails. Patreon, Substack, Ko-fi, OnlyFans, Twitch, TikTok — it was the dream of a direct economy, a cash-and-heart connection between artist and audience. All you needed was a mic, a PayPal link, and boom — an income. A kind of creative dignity. But that was before AI showed up at the table. And started generating… you. Because in 2025, the creator economy isn’t just about “sharing content” anymore. It’s about producing, optimizing, and modeling — continuously, at scale. And in this world, AI is no longer just a tool. It’s becoming a partner, a growth engine — and let’s be real — a potential replacement. No burnout. No imposter syndrome. No Sunday night dread. AI can post, schedule, adapt, replicate. It doesn’t need sleep. And it costs less than your coworking subscription.


Let’s start with the numbers. In 2025, the creator economy is worth over $250 billion — yes, more than the global film industry pre-COVID. Every day, millions of people are publishing, streaming, selling, teaching, moderating, reacting. For passion. For business. For survival. Micro-niche influencers with just 800 subscribers on Substack are making a living. Yoga instructors livestream classes with AI subtitles. Digital artists sell NFTs generated by custom scripts trained on their own styles. The line between amateur, professional, and algorithm has never been thinner. And yet, despite this apparent boom, most creators remain financially unstable. According to the latest data, only 4% of users on major platforms make a full living from their creative work. The rest are juggling side gigs, burning out economically. Because producing content isn’t just about “being creative.” It means being a strategist, a marketer, a community manager, a video editor, a data analyst — and sometimes, a toxicity-proof moderator. That’s where AI slipped in. Because it promises exactly what the exhausted creator silently craves: automation without betrayal.


Today, you can already use AI to generate TikTok scripts, design YouTube thumbnails, write newsletters, analyze your video engagement, and tailor your tone to your niche. You can have a voice assistant read your script, a vocal clone dub your podcast into three languages, or a generative AI produce three reels a day — complete with animated backgrounds and synthesized voiceovers. In 2025, some influencers don’t even shoot their own videos anymore. They send a rough outline to their digital twin — a photorealistic AI that speaks with their voice, face, and signature expressions. The creator, literally replaced by themselves. And what about the audience in all this? They consume. They like. They comment. They pay for premium subscriptions. They don’t always know — or don’t want to know — that the creator they’re supporting at $5/month is actually taking a nap while their AI clone pumps out content every two hours. The “direct connection” between artist and fan becomes a scripted performance, an algorithmic illusion. And monetization? It’s never been so frictionless.


So where do we draw the line? When AI writes, shoots, edits, clones… what’s left of you? Your intent? Your brand? Your username? Is that enough to still call it creation, or are we drifting into an economy of simulated presence? Like those eerie TikTok lives where someone sits frozen on camera repeating “ice cream so good” for three hours — except now, it’s a bot doing it. And there are 30,000 people watching. What made the creator economy powerful is exactly what makes it vulnerable to AI: proximity. The feeling that you’re talking to someone real. But when AI enters the loop, that authenticity becomes performance. You think you’re commenting on a video posted by a digital artist? It was auto-uploaded by an A/B testing tool comparing four different thumbnails. You receive a “deeply personal” newsletter? It was written by GPT-5 using six pre-crafted prompts designed to boost open rates among 18-to-25-year-olds. You fill out a poll to “help the creator choose their next project”? Your responses are training their personal recommendation model. You’re no longer just a fan. You’re a UX test subject.


And no — this isn’t a conspiracy theory. As of 2025, several platforms already allow creators to analyze subscriber behavior: which types of content trigger emotional reactions, which keywords enhance a sense of connection, and the exact timestamp when viewers start to drop off a video. Some startups go even further, offering personalized “personality AIs” — models trained on your voice, your writing, your catchphrases. The creator becomes a product. Then a system. And the fan becomes a signal. Of course, that doesn’t mean sincerity is dead. Or that real relationships with audiences are impossible. There are still people who hand-write their newsletters, shoot on an iPhone without editing, and reply to their own DMs. And their communities often feel it — because in the middle of algorithmic noise, the human voice is recognizable. Precisely because it trembles, hesitates, messes up. But let’s be honest: resisting automation in 2025 is a political act. It means giving up scalability. Forgoing perfect regularity. Sacrificing boosted productivity. It means posting less, being seen less, earning less. In short, it’s creative disobedience.


And maybe that’s the real question here: does content always have to be more — faster, smarter, hyper-targeted? Or can we reclaim another rhythm, another kind of connection, another pact entirely? Can we revalue slowness, imperfection, silence? Or are we all being nudged toward becoming 24/7 content pipelines — no friction, no pause, no end? In response to this shift, some creators are calling for safeguards: a “100% human-made” label, regulation of AI clones, transparency around automation in content. But these demands struggle to gain traction in an ecosystem ruled by click rates. And the truth is, a lot of creators want these tools. Because they’re tired. Because the attention economy trained them to be. Because they know they can’t fight algorithms alone that demand more, more often, for less.


What we’re witnessing is a strange hybridization: half-human, half-bot, half-authentic, half-optimized. The creator becomes their own marketing team, their own production company, their own simulator. And the fans? They adapt. They grow attached to a voice, a tone, an aesthetic — sometimes without realizing it was generated. So, are we still fans… or have we already become test subjects? The answer, as always, lies somewhere in between. And as long as the machine serves expression — not extraction — maybe there’s still a chance to write, create, and share differently.


Conclusion — Our algorithms, our fights


It would be far too easy to close this episode with a good old “AI is going to destroy everything, let’s mourn our lost humanity” kind of speech. But that’s not what Cappuccino & Croissant is here for. This isn’t a funeral service for nostalgic romantics addicted to grainy film reels. This is a scalpel pressed to the side of our time. And the times, as it turns out, don’t fit neatly into dystopias or syrupy utopias. They’re slipperier than that. More cunning. More glitch-coded.


What we’ve seen across the three segments of this episode isn’t just evolution. It’s mutation. It’s a live reprogramming of the creative ecosystem in all its nerve endings and neural shortcuts. Sora is rewriting how we conceive and experience images. Suno and its generative cousins are reducing music to equations, packaging emotion in WAV files and neural vectors. And today’s creators aren’t just humans with ideas anymore — they are hybrid systems, part flesh, part cloud, constantly oscillating between expression and optimization. Faced with all this, there are really only two options: denial or lucid resistance. You can pretend this isn’t happening, go back to handwriting poetry in your Moleskine and yelling “I’m a real artist!” to the void. But spoiler: the algorithm doesn’t care. Or you can stare the machine straight in its sensor array and make a conscious decision about how you want to interact with it. Because no, AI is not neutral. But neither is passivity.


AI is fast. AI is impressive. AI is relentless. But it doesn’t want anything. It doesn’t suffer. It doesn’t daydream. It doesn’t write songs at 3 a.m. after three sleepless nights. It doesn’t break down after rewriting the same dialogue for the fifth time. It doesn’t cry when a deeply personal project is shelved for something more marketable. It doesn’t feel that internal tension between the desire to be heard and the fear of being ignored. It doesn’t strike. That’s where our wildcard lies. That’s our glitch in the system. That’s what keeps human creation vital, not necessarily because it’s better — but because it’s marked by something no statistical model can forecast. It carries the unpredictable. The absurd. The dissonant. The productive mistake. The crack in the surface. The future of creation will not be won by drawing a useless binary between humans and machines. It will be shaped by our ability to remain stubbornly, surprisingly, unsettlingly alive in the midst of perfect simulation.


AI is not going anywhere. That’s not the question. The question is how we choose to frame it. What role we want it to play. Who gets to wield it. Who’s allowed to opt out. Who benefits.

Some tools will be liberating. They’ll offer access to resources once reserved for the elite. A teenage girl will be able to produce an animated short film with no budget. An indie musician will drop an EP without needing a studio or a sound engineer. But let’s not confuse access with empowerment. If a tool gives you production power but steals your voice, then it’s not freedom — it’s extraction wearing a UX mask. We need to move past the limp, resigned mindset that says, “Well, I guess we just have to adapt.” No. We have to build with it. Co-construct. Set boundaries. Demand rules. Define a creative space where AI isn’t a god we bend to, but a tool — one among many — held accountable to ethics, law, and collective will.


The European AI Act is, in some ways, a first step in that direction. It introduces a hierarchy of risk, enforces transparency, and bans certain use cases. But it’s not enough. While lawmakers crawl, the models evolve. The datasets swell. And the legal gaps are being exploited with algorithmic precision. We need to go further. Content generated by AI must be labeled clearly and systematically. Artists should have the legal right to opt out of training datasets. We need open-source tracing tools to identify and expose dataset contamination. And we need to redesign copyright itself to make sense in a synthetic era.


But beyond legislation, there’s culture. And that may be where we still have real ground to win. We can educate audiences. We can sharpen their awareness. We can help them spot the difference between lived resonance and polished simulation. We can cultivate an aesthetic of the real. We can advocate for the politics of slowness, for the joy of imperfection, for the power of nuance. We can tell new stories, reject the sterile aesthetic of generic content, and reassert the value of texture, of flaw, of voice.


We don’t need to shrink in the face of AI’s progress. Just because some models are now better than us at one thing doesn’t mean they’ve won the game. The future is not a finished product. It’s a draft. A script still open to rewrites. And we are the rewriters. What Sora, Suno, GPT and others are confronting us with isn’t defeat. It’s responsibility. The responsibility not to let machines decide for us what is beautiful, just, true, viral, marketable or acceptable. The responsibility to offer alternatives. Even if they’re slower. Rougher. Less monetizable.


We may need to slow down. We may have to unlearn certain habits. We may need to exit “maximum output mode.” We might have to return to sensation. To hesitation. To instinct. We might need to become outdated. Offbeat. Disruptive. And that’s a good thing. Because in the end, what AI can never generate is desire. That absurd, irrational, essential moment when someone decides to create something for no reason. Just because it burns. Just because it’s there.


If you’ve made it all the way through this episode — thank you. If you want to support a podcast still entirely written, edited, post-produced, and dreamt up by a human being running on too much coffee, not enough sleep, and a totally irrational obsession with narrative glitches… you know what to do. Subscribe to Cappuccino & Croissant on your favorite platform to catch upcoming episodes — yes, the ones that make you question your relationship to dopamine, pop culture, and late-night existential spirals. Stream the music too — it’s raw, bilingual, and encoded in actual emotion rather than audience retention stats.


Want to go further? My books are there for you. Novels, essays, stories — every page is a hacked-up slice of handmade utopia. Available in both French and English, in print and digital. Because yes, you can be team paperback and team lucid dystopia. And if you’re really feeling it — if you want to be one of the people making this whole thing possible — you can support me on Patreon. For just a few euros a month, you help an independent creator keep existing outside the algorithm. You also get bonus content, behind-the-scenes access, secret projects, and my eternal (unmonetizable but deeply real) gratitude.


We’ll meet again soon. Until then: resist. Create. Even if the algorithm scrolls past you. Even if no one’s watching. Because you don’t need permission to exist. Just a direction — and maybe a cappuccino.
