
The perfect plan doesn’t exist (but it kind of does)

Updated: Jul 30


Picture a world where everything has already been written. Your loves, your failures, your wild ideas, your meltdowns — even the exact moment you hit “play” on this podcast. All of it calculated in advance by an equation centuries old. Not a mystical prophecy ripped from a Glamour magazine moon horoscope. A mathematical model with surgical precision. Your life, your future, your decisions… reduced to lines on a probability graph. Sounds like sci-fi? Maybe that’s just because you haven’t met Hari Seldon yet. Or the TikTok recommendation algorithm. Same vibe.


In Foundation, Isaac Asimov’s legendary saga, one man invents psychohistory — a science able to predict the collapse of galactic civilization and sketch out a thousand-year masterplan to soften the landing. That’s right. A thousand years. You can’t even plan dinner, he’s out here orchestrating the downfall of an Empire and humanity’s reconstruction. With equations. That’s what you call organizational skills. But in this future, there’s no voting, no public debate — just adherence. To The Plan. And those rare individuals who can deviate from the path? Treated like system errors. Free will? Filed away under “too chaotic to manage.” In other words, a surprise-free future is valued more than a risky, messy one.


Too extreme? Welcome to the blessed year of 2025. You already live in a world where every click feeds behavior prediction models. Where AI simulates your desires, impulses, and weaknesses — not to understand you, but to sell you face cream, a politician, or a finely curated existential crisis. The illusion of choice is maintained with finesse: you get to pick between Spotify and YouTube, left or right, option A or B… none of which you created. And every time you think you’re rebelling? The algorithm pats you on the back — congrats, you rebelled just the way we hoped you would.


This episode dives headfirst into a question no one likes to ask: what if free will is just internal storytelling to keep ourselves from falling apart? What if these grand works of science fiction — Foundation, Dune, The Three-Body Problem, Westworld, Evangelion, Watchmen — aren’t here to entertain us, but to warn us? Not about some distant dystopia. But about our present. So well-oiled. So subtle. So comfortable. A reality where control comes not with chains, but with frictionless convenience. Where the price of peace is the quiet erasure of autonomy.


Now let’s be clear. This isn’t a call for full-blown chaos. Or some caffeinated anarchist daydream. It’s not a love letter to absolute freedom either — the kind that liberates you right before it crushes you. This is an invitation. To stare the word “choice” dead in the eye. To question the voices that claim to know what’s best for you. Governments, AIs, gods, prophets, marketers. All of them, in one way or another, have a plan.


So pour yourself a black coffee — no sugar, because hard truths don’t need sweeteners — get comfortable, and get ready to explore a world where the future is a spreadsheet, the heroes are mathematicians, and the greatest revolution you can start… is refusing to follow the damn plan.


The perfect plan fantasy (or how to turn humanity into a spreadsheet)


Let’s face it: the idea of a Perfect Plan is irresistibly sexy. A cosmic map, a divine algorithm, a master vision so brilliant it would make Stalin’s five-year plans look like amateur hour. In Foundation, Hari Seldon invents psychohistory — a fictional science capable of predicting human behavior on a galactic scale. The idea? Individually, you’re unpredictable. But in the millions? You’re a statistical pattern. Like gas in a test tube. Only louder. And more dramatic.
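
Don’t take Seldon’s word for the gas-in-a-test-tube bit; you can watch it happen in a dozen lines of Python. A minimal sketch, with a completely made-up 30% chance of “acting up” standing in for whatever humans actually do:

```python
import random

def one_person(p=0.3):
    """One simulated citizen. Acts up (True) with probability p. A coin flip."""
    return random.random() < p

def crowd_rate(n, p=0.3):
    """Fraction of an n-person crowd that acts up."""
    return sum(one_person(p) for _ in range(n)) / n

# One individual: genuinely unpredictable.
print([one_person() for _ in range(5)])   # e.g. [True, False, False, True, False]

# A crowd: the bigger it gets, the more it behaves like a constant.
for n in (10, 1_000, 1_000_000):
    print(f"{n:>9}: {crowd_rate(n):.4f}")
# 10 swings wildly; 1,000 hovers near 0.30; 1,000,000 lands on ~0.3000.
```

That’s psychohistory’s whole conceit in one loop: the law of large numbers, wearing a galactic cloak.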


Psychohistory offers an elegant answer to chaos: it claims that history isn’t just a string of accidents — it’s an equation. If humanity is bound to collapse, let’s at least plan the reboot. Let’s rationalize the disaster. Let’s optimize the downfall. Sound familiar? In a world where everything’s falling apart, the Plan becomes a hard drug. A promise of order in the storm. An illusion of intelligence in the mess.


But Asimov wasn’t writing in a vacuum. Beneath the fiction lies a deeply human obsession: controlling what exceeds us. In 2025, psychohistory goes by a much less poetic name: big data. And it’s got cousins — sociophysics, cliodynamics, predictive modeling. Yes, real researchers today are doing their best Hari Seldon impersonations, trying to forecast social uprisings, political collapses, and mass unrest based on massive data sets. Physicist Yaneer Bar-Yam, for example, modeled the political instability that led to the Arab Spring. Like Foundation. Except it didn’t stream on Apple TV+.


And this isn’t fringe science. I’m not talking about Medium blog posts. I’m talking peer-reviewed journals that argue human complexity can be analyzed through networks of interaction. Not on the scale of a tweet — on the scale of millions. This isn’t science fiction anymore. This is a reality where social prediction is an industry. Governments, corporations, institutions — they all want their own Seldon Plan. Even if it’s just to sell sneakers. Or win a trade war.


And then there’s China. Oh, China. While we’re arguing on Twitter about the latest Netflix scandal, China is quietly testing a real-life embryo of applied psychohistory: the social credit system. A massive, nationwide scoring machine that rates every citizen’s behavior. Help an old lady cross the street? +10 points. Criticize the government? -500. Your score determines what you can buy, where you can go, even which train you’re allowed to take. Millions have already been blacklisted — openly.
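
To see the mechanics (and only the mechanics), here’s a deliberately cartoonish Python sketch of point-based scoring. Every event name, weight, and threshold below is invented for illustration; the real system is nowhere near this tidy, as we’ll get to in a second:

```python
# A cartoon of behavioral scoring. All events, weights, and thresholds
# are invented for illustration; no real system is this unified or tidy.
POINTS = {
    "helped_elderly_cross_street": +10,
    "paid_taxes_on_time": +25,
    "jaywalking": -20,
    "criticized_government_online": -500,
}

def score(events, base=1000):
    return base + sum(POINTS.get(e, 0) for e in events)

def privileges(s):
    if s >= 950:
        return "high-speed trains, flights, favorable loans"
    if s >= 500:
        return "slow trains only, extra paperwork"
    return "blacklisted: no trains, no flights, no loans"

citizen = ["helped_elderly_cross_street", "criticized_government_online"]
s = score(citizen)
print(s, "->", privileges(s))   # 510 -> slow trains only, extra paperwork
```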


Now sure, the system is still fragmented. But philosophically? It’s a bombshell. This isn’t just about surveillance anymore. It’s about preemption. Anticipating behavior and adjusting it before anything happens. All in the name of “social harmony.” Yes — harmony. The irony is thick.


But China doesn’t hold a monopoly on the fantasy of algorithmic control. In the West, we’ve just privatized it. While the Chinese state embraces it openly, Silicon Valley whispers to your dopamine. Predictive algorithms tell you what you’ll want before you know you want it. No soldiers. No re-education camps. Just a perfectly tuned TikTok feed. A Netflix queue tailored to your subconscious. A voice assistant that knows your hormone cycle. Google’s former CEO, Eric Schmidt, said it plainly: “People don’t want Google to answer their questions. They want Google to tell them what to do next.” Hari Seldon, is that you?


This shift is subtle. And brutal. The Plan isn’t imposed anymore. It’s desired. You don’t rebel against dictatorship. You submit willingly. Because it’s smooth. Because it’s convenient. Because your brain craves prediction more than freedom. That’s the genius of the modern world: it makes you think you’re choosing — when you’ve already been mapped.


But here’s the kicker: this idea, that some entity — human or not — can govern the masses by reading their possible futures, has a side effect. It depoliticizes. It removes you from the present. You’re no longer a force of change. You’re just a data point. You don’t act — you’re acted upon. If everything’s already predicted, what’s the point of fighting? Why try to change a world that’s doomed to collapse in 30 years anyway? That’s the hidden cost of a modeled society: it replaces revolt with reporting.


And this is no small detail. When Asimov imagines a science capable of manipulating human history, he’s really asking: who gets to decide the Plan? Who has the right to write our collective trajectory? In Foundation, it’s a well-meaning mathematician, working alone. In our world? It’s governments. Investors. Platforms. Interior ministers in tailored suits. Entities that are, by definition, not neutral.


So no, psychohistory doesn’t exist. But its scent is in the air. And that scent? It smells like comfort. The illusion that chaos can be domesticated. That society can be run like a factory. That as long as everyone stays in their data lane, everything will be fine. But real history — the messy kind — is full of surprises. Of Mules. Of glitches in the system. Of women. Migrants. Strikes. Revolutions. Weird gut instincts. Sudden refusals.


What fiction reminds us — and algorithms hate — is this: humanity is not a finished product. It’s unpredictable. Unstable. Illogical. That’s what makes it unbearable. And alive.

Yes, the Plan is tempting. But before you surrender to it, ask yourself one question: Who wrote that Plan? And why the hell didn’t they ask you first?


Are you sure you chose that? (or how your brain has been quietly screwing with you since birth)


Make a decision. Right now. Doesn’t matter what. Coffee or tea. Stay or leave. Answer that text or ghost them for the third time. Now ask yourself the real question: Did you actually choose that? Not what you feel. Not what you believe. What you know.


If you’re even slightly unsure, welcome to the club. Because science — the real kind, with electrodes on scalps and horrifying bar graphs — has been poking holes in our beloved illusion of “free will” for decades. And the results? They’re uncomfortable, to say the least.


It all kicks off in the 1980s with a very serious man named Benjamin Libet. His obsession: the gap between I decide and I act. By slapping electrodes on volunteers, Libet discovered a glitch in the narrative. Basically: when you consciously decide to move your finger, your brain already initiated the action… 350 milliseconds before you became aware of it. That’s it. Game over. Thanks for playing.


The signal is known as the “readiness potential” — a term Libet borrowed from earlier EEG research. Your brain fires the command — and then your consciousness declares, “I’ve made a decision.” As if the news anchor announced election results after the votes had already been rigged — cheerfully. In this view, consciousness is more like a voiceover justifying a movie that’s already been edited.


Since then, neuroscience has only made it weirder. In 2008, a German team upped the ante: using fMRI and some clever algorithms, they could predict a subject’s binary decision (left or right button) up to 10 seconds before the subject knew what they were going to choose. Not with 100% accuracy — around 60% — but enough to torch the illusion of an independent, sovereign “self.” Bottom line? You don’t make the decision. You arrive at it.
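
Sixty percent sounds unimpressive until you do the arithmetic. A pure guesser hovers around 50%, and holding 60% across enough trials takes luck off the table. A quick sketch, with trial counts invented for the sake of the math rather than taken from the 2008 study:

```python
from math import comb

def p_at_least(k, n, p=0.5):
    """Probability of k or more hits in n trials if each one is a fair coin flip."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Suppose a scanner called 600 of 1,000 binary choices correctly.
# (Illustrative numbers only; the real experiment used far fewer trials.)
print(f"{p_at_least(600, 1000):.1e}")   # on the order of 1e-10
# Translation: a 60% hit rate sustained over many trials is not a fluke.
```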


And yet… we cling to it. To this idea that we’re in charge. That it was our willpower that got us out of bed at 6 a.m. to do yoga. Sure. What we forget is that the brain loves crafting flattering narratives after the fact. Psychologists call it the illusion of control — and it’s everywhere.


Let’s talk about Daniel Wegner, another brilliant trickster with a PhD. He showed that merely thinking about a movement right before it happens is enough to convince us we caused it — even when we didn’t. As long as the sequence thought → action is respected, the brain goes, “Yep, that was me.” That’s how we manufacture willpower: by retroactively assembling a neat little storyline between thought and motion, like a tightly edited Netflix recap.

It’s convenient. And completely false.


Now add in cognitive biases — those cute little brain shortcuts designed to prevent mental burnout — and you get a human who acts first, then writes the script to justify it. It’s not lying. It’s pre-installed self-deception. You’re not dishonest — you’re just wired that way.


So what does that change? Everything. Because it means your precious “free will” is more like the echo of a web of causes, habits, biological nudges, social rules and cultural scripts — and consciousness is just the last spectator to arrive, like a football commentator who missed the goal but still has opinions.


Baruch Spinoza saw it coming way back in the 17th century. He wrote: “Man believes himself free because he is conscious of his actions, but ignorant of the causes by which they are determined.” Nietzsche went further, calling free will “a trick invented by theologians to justify punishment.” Basically: they sold you free will to hold you morally responsible. Easier to say “you chose” than admit you’re the result of 10,000 variables you never controlled.


Still, some philosophers are trying to save the concept from complete annihilation. Compatibilists (like Daniel Dennett) offer a more grounded version: sure, everything’s determined — but you can still have functional free will. Not magical freedom outside space and time, but the ability to act according to your own reasons, without external coercion. Think of it as free will within your personal bubble of determinism. Not thrilling, but it keeps ethics alive.


Ultimately, though, we circle back to one existential dread: If I’m not free… who am I? If my decisions are just unconscious math, if my love, anger, political engagement are predetermined — where’s the self? Where’s the dignity? Spoiler: it’s in the storytelling.

And fiction gets that.


Take Westworld, season 3. Enter Rehoboam, a hyper-intelligent AI that has mapped every human’s trajectory using data. It knows whether you’ll die by suicide, get divorced, fail at life, or end up in a gang. And more importantly: it nudges you toward that path. Not with force — with gentle scripts. Algorithmic suggestions. It doesn’t trap you. It predicts you. And you walk right in.


And the scariest part? It works. Most characters don’t even want to escape their loops. Because they make sense. Because they remove the terror of infinite options. Rehoboam doesn’t chain you. It labels you as “incompatible.” And society buries you accordingly.


Dolores, the show’s central android, nails it: “It’s not about who you are. It’s about who they let you become.” That might be the most chilling line of the series. Because it’s true. Because we don’t want it to be. Because we still pretend choice comes from inside — when most of our options were given to us. And most of our choices are the ones we were allowed to make.


And that’s when this gets political.


If your environment shapes your decisions, then limiting your environment is already a form of control. That’s the core of modern domination systems. They don’t command you. They trap you inside a biased architecture of choice. You’re free to spin in circles. Like a rat in a well-lit maze.
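
Here’s that trick in code form: a toy sketch with invented option names, just to make the architecture visible. The “free” choice at the bottom is real; the menu it draws from was rigged upstream:

```python
import random

EVERYTHING_YOU_COULD_DO = [
    "comply", "shop", "doomscroll", "vote_option_A", "vote_option_B",
    "unionize", "log_off_forever", "build_something_new",
]

SAFE_MENU = {"comply", "shop", "doomscroll", "vote_option_A", "vote_option_B"}

def curate(options):
    """The upstream filter. You never see this function run."""
    return [o for o in options if o in SAFE_MENU]

def your_free_choice(menu):
    """Downstream, the choice is genuinely yours -- among what's left."""
    return random.choice(menu)

print(your_free_choice(curate(EVERYTHING_YOU_COULD_DO)))
# Always one of the five pre-approved moves. Never "unionize".
```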


Then comes Evangelion, the anime that takes it to its logical conclusion. If pain is caused by the distance between people, let’s fix that: fuse all souls into a single consciousness. No more solitude. No more conflict. No more choice. The ecstasy of oblivion. Peace through dissolution.


But Shinji, the reluctant hero, says no. He chooses reality. He chooses pain. He chooses to return to a world where you suffer — but at least you’re still someone. He rejects perfect fusion for the imperfection of selfhood.


So no, this episode isn’t here to snatch away your illusions. It just wants to scratch them. To make you feel — viscerally — that the control you cherish… might be a narrative. A convenient fiction. One that saves you from mental collapse.


And honestly? Thank god for that. Because sometimes, it’s not the truth that sets you free. It’s the lie you choose to believe in.


The greater good is a useful lie (and sometimes just an excuse to kill everyone)


Let’s talk about ethics. Not the kind of ethics with cursive quotes over sunset stock photos on Instagram. The real kind. The kind that stains. The kind that asks whether you’d rather kill one person or let five die. The kind that doesn’t offer a “right answer” but smirks silently while you drown in your own contradictions.


Because behind every so-called functioning society lies a moral equation: How many individual freedoms are you willing to sacrifice to preserve the collective? It’s the grown-up version of the trolley problem — except this time, you’re not pulling a lever. You’re voting. You’re complying. You’re accepting laws. You’re scrolling.


And sometimes… you don’t even realize someone else has already pulled the lever for you.

Take one of the most brutal — and iconic — examples: Watchmen, Alan Moore’s version. Adrian Veidt, aka Ozymandias, a self-proclaimed genius, decides — unilaterally, of course — that the only way to save humanity from nuclear war is to massacre three million people in New York by staging a fake alien attack. Boom. Orchestrated genocide. Global crisis averted. Curtain.


But the question isn’t “Was he right?” The real question is: Can we still call it humanity if its survival depends on a lie that monstrous? Because what Veidt offers is lasting peace… built on an unthinkable crime. He doesn’t kill out of madness. He kills because he’s done the math. Like a good little utilitarian.


Speaking of which — let’s talk about utilitarianism. It’s the ethical theory that says, “The right action is the one that maximizes happiness for the greatest number.” A neat formula. Clean. Optimistic. Very startup nation. But in practice, here’s what it looks like: if killing a minority saves a majority, then… you kill. Not out of sadism. Out of strategy. Because it’s “the right calculation.”


That’s exactly what Paul Atreides does in Dune. He sees visions of a future ravaged by a holy war in his name. He knows he’ll become a messianic figurehead, a symbol of fanatical violence. He knows it will end badly. And still, he walks toward it. Not for glory. For strategy. Because out of all the possible timelines, this one causes the least destruction. Translation: he chooses bloodshed. To avoid apocalypse.


These characters embody the fantasy of the enlightened leader. The one who sees further. The one who bears the moral burden. The one who decides for everyone — because “the others wouldn’t understand.” A kind of benevolent dictator. A sacrificial father figure doing what must be done, while we mere mortals post brunch pics on Instagram.


But the real violence isn’t always in the act. Sometimes it’s in the principle. Because to make decisions like that, you have to accept that people can be used as means rather than ends. And for deontologists, that’s moral hellfire. Kant’s basecamp explodes instantly. For him, you can never justify an immoral act — even for a good outcome. Killing one innocent person, even to save ten thousand, is a hard no. In all caps. Because otherwise, nothing means anything anymore.


So who’s right? Team “lesser evil”? Or team “never ever”? Well… spoiler: neither is enough. Pure utilitarianism leads to monstrous justifications. Rigid deontology becomes unworkable in a messy world. And somewhere between the two, we drift. With our institutions. Our press releases. Our compromises. Our well-polished hypocrisies.


But let’s return to the real world. Because these fictional dilemmas? They’re not so fictional.

During the pandemic, remember the fiery debates? Are we allowed to restrict fundamental freedoms to protect public health? Are lockdowns, mandatory vaccination, and health passes a slide into dictatorship… or a civic duty? Some screamed about tyranny. Others screamed about recklessness. And in the middle, governments tried to draw invisible lines between protection and oppression.


The truth? We all have a personal threshold. Some are willing to give up a little freedom for a little more safety. Others refuse all control — even to save their own mother. That threshold is fluid. Cultural. Political. Intimate. And here’s where it gets messy: who gets to decide where the line is drawn?


In The Three-Body Problem, Liu Cixin pushes the dilemma to a chilling extreme. An alien civilization threatens humanity. Governments spiral into panic. Dictatorships emerge. Genocides are planned. Massive sacrifices are made. And each step is justified the same way: It’s for the survival of the species. Feel the chill yet?


But Liu doesn’t stop there. He also shows what happens when someone swings to the opposite extreme — when a leader refuses to use a devastating weapon out of sheer humanity… and ends up dooming Earth to destruction. Humanism becomes a dangerous luxury. A fatal weakness. And there it is: damned if you do, damned if you don’t.


Same energy in Evangelion, with the infamous “Human Instrumentality Project.” All souls fused. No more conflict. No more loneliness. No more pain. Total utopia. But at what cost? The annihilation of the individual. No more “I.” No more choice. No more otherness. Just an endless ocean of mushy collective consciousness. The utilitarian dream? Or its ultimate nightmare?


What these stories really ask is this: At what point does “the greater good” become collective violence? When does peace turn into oppression? And are we even ready to confront the limits of what we’re willing to accept?


Because in real life, Ozymandias doesn’t wear a purple cape. Sometimes he’s a “panel of experts.” Sometimes a “Minister of the Interior.” Sometimes a “Tech CEO.” And they make decisions “for the common good” — even though no one ever agreed on what the common good actually means.


We’ve long believed that chaos is the face of violence. But the worst kind of violence might actually be the kind done in the name of order. The one that says: We’re doing this for you. For your safety. Your comfort. Your survival. You didn’t ask. But they’re protecting you anyway. With spreadsheets. With charts. With PowerPoint slides.


And you sign. At the bottom. Because the plan is comforting. Because thinking is exhausting. Because choosing is terrifying.


But deep down, you know. A “greater good” that leaves no space for dissent isn’t a vision. It’s a polite prison. And next time someone tells you “it’s for your own good,” ask yourself a simple question: Who benefits from the sacrifice? And are you the one making it… or the one being made?


When technology becomes God (and you're politely asked to shut the hell up)


Picture a world where thinking is obsolete. Where every hard decision—love, career, future, your whimsical dream of becoming a vegan potter in Biarritz—is outsourced. To an invisible, hyper-rational, tireless entity. No emotions, no burnout, no influencer drama. Just pure logic. This entity knows everything about you: what you’ve eaten, what you’re hiding, what you’ll want before you even want it. No more voting. No more critical thinking. Just follow the curve. It’s the divine AI. And it never makes mistakes. Well—unless you suddenly become statistically inconvenient.


Welcome to Westworld season 3, aka “the future if the technocrats won.” In this neatly polished dystopia, humanity is run by Rehoboam, a superintelligence designed to model society by reading data like others read tea leaves—except with more satellites. The algorithm classifies every human being, assigns a “deviation potential,” and pre-determines who deserves what. Not out of malice. Out of a devotion to “social stability.”


Want to be a doctor? Rehoboam says no. Need a loan? No. Want happiness? Statistically unlikely. Best of luck. The goal isn’t punishment. It’s quiet elimination from the main trajectory. So that the system, the curve, the illusion of peace... can stay intact.


And that’s the true horror: dictatorship has gone soft. It's no longer enforced by tanks, but by interfaces. By a recruiting app that says “Sorry, your profile doesn’t meet our criteria.” By a recommendation algorithm that quietly de-prioritizes you. By a system that ejects you without noise, without scandal—just absence.


This isn’t sci-fi anymore. It’s what real-world algorithmic governance looks like. Yes, China has its social credit score, rating citizens on behavior. But the West isn’t far behind. Your credit report, your search history, your risk score—all crunched to determine your insurance rate, your job eligibility, your worth. You still think you’re “choosing”? Your statistical profile already decided for you.


The former Google CEO once said, with no hint of irony: “People don’t want Google to answer their questions. They want Google to tell them what to do next.” And there it is. The project. An intelligence that replaces thought. Not because we’re dumb. Because we’re exhausted. Because making choices in a hyper-complex world is draining. And AI never sleeps. Never doubts. Just optimizes.


Honestly? It’s tempting. The promise isn’t tyranny. It’s peace through precision. No more bad timing. No more existential burnouts. You get a life plan. Streamlined. Efficient. Maybe not free—but crash-proof.


Except... the machine has a flaw: no morality. Just objectives. Correlations. It doesn’t know shame. Or grace. Or what it means to try anyway. It doesn’t believe in miracles. Or in outliers.


And that? That’s already real.


You want a loan, but just clawed your way out of poverty? Bad score. Want to study, but you're from the “wrong” neighborhood? Rejected. Want to start over after burnout? Too unstable. The algorithm doesn’t accuse—it sorts. It de-indexes.


Try to protest? There's no one there. No villain. No court. Just a black box saying “No.” No explanation. And the scariest part? People accept it. Because the algorithm “knows.” Because it’s “neutral.” “Objective.”


Spoiler: no algorithm is neutral. They’re trained on biased data, designed by biased humans, for goals defined by biased institutions. These aren’t oracles. They’re tools of power—dressed up as technical solutions.
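
If “trained on biased data” sounds abstract, here’s the mechanism in miniature. Everything below is fabricated (the zip codes, the loan records, all of it), but watch how a model that never asks who you are still ends up sorting people by where they’re from:

```python
# Fabricated lending history: (zip_code, repaid). Suppose banks rarely
# lent in zip 99999 before, so its few records skew negative.
history = ([("11111", True)] * 90 + [("11111", False)] * 10
           + [("99999", True)] * 2 + [("99999", False)] * 8)

def train(rows):
    """'Learn' a repayment rate per zip code. That's the entire model."""
    stats = {}
    for zip_code, repaid in rows:
        ok, total = stats.get(zip_code, (0, 0))
        stats[zip_code] = (ok + repaid, total + 1)
    return {z: ok / total for z, (ok, total) in stats.items()}

model = train(history)

def approve(zip_code, threshold=0.5):
    # The model asks nothing about you personally. Only where you live.
    return model.get(zip_code, 0.0) >= threshold

print(approve("11111"))   # True  -- 90% repayment in the historical data
print(approve("99999"))   # False -- 20%, sealed by ten old records
```

Swap zip code for any proxy you like; the laundering works exactly the same way.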


In Westworld, Caleb finds out he’s been flagged as “socially incompatible.” Doomed, not because he did anything wrong, but because the system predicted he would. So it reroutes him to the margins. Quietly. Automatically. No trial. Just a script—ending in failure.

Sound familiar? It should.


It’s in education systems. HR platforms. Risk scores. Personalized feeds. Law doesn’t judge anymore. Predictive models do. And when you push back, they shrug: “It wasn’t us. It was the algorithm.”

But who coded it? Who deployed it? Who gave it the power to decide who gets a second chance?


Meanwhile, techno-solutionism gains ground. The belief that AI will solve everything. That pure logic is better than messy democracy. That sleek objectivity is preferable to human complexity. A doctrine that—under the guise of “efficiency”—erases everything that doesn’t fit: emotion, exception, surprise.


Yuval Harari said it plainly in a recent interview: “The danger isn’t that machines will hate us. It’s that they’ll obey us too well.” They’ll do exactly what we ask: optimize society. Minimize risk. Preserve order. But at what cost?


Here’s the truth: the more powerful AI becomes, the more it hands god-like influence to the few who control it. Not the power to destroy—but to pre-write. To pre-frame. To silently guide. And that power? It doesn’t scream. It whispers—until one day, you can’t remember another path ever existed.


What’s truly demonic about this tech? It doesn’t trap you. It convinces you you’re already free. It says “you can do anything”—while showing you only three options. It doesn’t bind you. It nudges you. And you say thank you.


So no, the real threat isn’t technology. It’s the unholy alliance of tech and political cowardice. The moment decision-makers hide behind machines. The moment they say “the AI decided,” the same way they once said “God willed it.”


And that’s when things get theological. Because Rehoboam is a modern god. Infallible. Omniscient. Faceless. It doesn’t promise paradise. Just stability. The lesser evil. A world without chaos. And for many, that’s miracle enough.


But we—humans, I hope—have a harder question to face: Are we ready to give up our right to screw up? Our right to wander? Our right to glorious failure, absurd choices, unexpected turns?


Because if we keep chasing a perfectly optimized society, we may end up with a world so well designed… no one’s truly in it. Just profiles. Predictions. Flows.

And you? In all of this? Are you a person—or a JSON file?


Rebellion, glitch & chosen chaos: what if error is our last freedom?


Every good system has its bugs. That’s not just a tech truth—it’s universal. Bank algorithms, necktie-clad governments, thousand-year galactic masterplans... none of them are immune. And if Foundation teaches us anything, it’s that even the most elegant equations eventually faceplant. Because somewhere in the calculation, the unexpected appears. A grain of sand. A non-compliant disruption. A human.

Enter: the Mule.


In Asimov’s universe, the Mule is the anomaly incarnate. The variable no equation foresaw. A man with the rare ability to manipulate emotions—not by brute force, but by subtle influence. Like an algorithm, but alone. Tragic. With a revenge arc Hamlet would find exhausting. He is the glitch in the psychohistorical matrix. And by merely existing, he exposes the grand illusion of absolute determinism.


What if error itself is the ultimate proof of our humanity?


Modern society, obsessed with data and the cult of control, loathes error. It flags it. Fixes it. Patches it out. Anything that breaks the model is suspicious—maybe even dangerous. Want to drop everything and raise goats in the Pyrenees? People ask if you’re okay. Refuse a “stable” job to write cyberpunk poetry? You’re an outlier. Deviating from the curve today isn’t just eccentric—it’s political.


But where does this control obsession come from, really? From a deep-rooted anxiety: that the world might not mean anything unless we shape it. So we build models. Charts. Systems. And anyone who doesn’t fit? We sideline them. Pathologize them. Or pretend they don’t exist.


But real history is made of errors. Human glitches. Rejects, misfits, glorious failures. It’s the trajectory gone wrong that creates change. It’s the wildcat strike. The mutiny. It’s Rosa Parks refusing to move. It’s Snowden with a USB stick. It’s the kid saying the emperor has no clothes. The world doesn’t change because of the well-behaved. It changes because of the unruly.


Take Andor—the Star Wars series no one expected, and easily the most subversive of the saga. No Jedi. No prophecies. Just ordinary people. Poor. Burnt out. Broken. Who one day decide they’re done pretending everything’s fine. No grand plan. No magic. Just people who’ve had enough. And a spark.


That show reminds us: rebellion is not rational. It doesn’t fit on a spreadsheet. It’s chaotic. Messy. It fails nine times out of ten. But the tenth time? It rewrites the rules. It redraws reality. It restores meaning to subjectivity. To “I.” However small, however fragile.


Prefer something darker? Try Mr. Robot. Elliot, the suicidal anarchist hacker, is no classic hero. He’s broken. Fragmented. Barely holding it together. And yet he brings down the system. Not with Seldon’s foresight. But with a string of reckless, glitchy, profoundly human choices.


What these characters share? They malfunction. They don’t “work” within the system. And that very dysfunction is what opens a breach.


So the question becomes: am I a glitch? Am I capable of saying no to a perfectly engineered world? Am I willing to live on the margins of a system that offers comfort in exchange for obedience? Am I still able to refuse what makes sense, in the name of what feels alive?


Because logic—let’s face it—can be deadly. It justifies “necessary” massacres. It optimizes incarceration. It rationalizes surveillance. It dehumanizes in the name of the greater good. It silences dissonance. But life, by nature, is dissonant. It spills. It bugs. It makes a mess.


What if we stopped seeing error as failure—and started seeing it as resistance? What if the unpredictable were a form of creativity? What if the absurd were a shield against the mechanization of our existence?


That’s what Everything Everywhere All At Once dares to suggest. A multiverse epic where an ordinary woman becomes the universe’s last hope… precisely because she’s failed the most. Because her life is the most chaotic. Because her inability to “succeed” makes her capable of imagining everything.

And that’s where the message becomes brilliant.


Freedom doesn’t come from mastering complexity. It comes from inhabiting it—without needing to control it. From surrendering perfection. From embracing paradox, blur, unpredictability. From realizing the one thing AI will never replicate is the absurd choice. The useless gesture. The refusal that makes no sense. The yes that serves no purpose.


Maybe the future will look like Foundation. Maybe AI will run our societies. Maybe everything will be modeled, planned, managed. But there will always be a Mule. An Elliot. An Evelyn. Someone who says no. Someone who glitches at just the right time. Someone who reminds us that life does not go quietly into code.

And what if that someone... is you?


What if the last true human luxury isn’t comfort—but chosen chaos? The right to fail. To veer off course. To change your mind. To tell logic to go screw itself. What if real freedom begins the moment you stop being optimized?

So no, this might not be the neatest conclusion. Or the most rational. But maybe it’s the only one worth living.

Not because it’s perfect.

But because it’s alive.


Conclusion — the future isn’t a finished product (and thank God for that)


What we’ve explored together isn’t just some sci-fi fever dream. It’s a map—a disturbingly plausible one—of our potential derailments. This isn’t “maybe someday.” It’s “already, kinda,” everywhere, always, embedded in our digitized lives, our algorithm-shaped choices, our pre-chewed desires. The question is no longer “what if it happens?”—it’s “what the hell are we going to do about it now?”


Because here’s the truth: the future doesn’t exist yet. It’s not written. It’s not locked into a Seldon Plan or a government Excel spreadsheet. It won’t be 3D-printed from ChatGPT version 98. The future is messy, breathing, contradictory. And most of all—it’s ours. Not theirs. Not the algorithm’s. Not the analysts’, or the cold gods of big data. Ours. Flawed, chaotic, radiant us.


What Asimov, Liu Cixin, Nolan, Villeneuve and all the rest are whispering (or sometimes screaming) isn’t that we’re doomed to die enslaved. It’s that we still have a choice. To glitch. To veer. To dream. To storm out of the script. To write an ending that wasn’t tested in a lab. One where humanity isn’t the bug—it’s the key. One where chaos isn’t failure—it’s breath.


So no, we don’t have a Plan. No recipe. No certainties. But we still have stories. We have words, hooks, chants into microphones. We have narratives that don’t flatter order, but elevate the crack. Voices that refuse to be neatly silenced. And maybe—just maybe—that’s our final luxury: to tell ourselves freely, even if we’re hanging off the edge.


If this episode hit a nerve, made you glitch a little, made you wonder if you should throw your phone out the window (or reprogram it to say something they weren’t expecting), then the journey’s not over. Come dive deeper into the Cappuccino & Croissant universe. Explore my novels—like Niohmar.exe, where AI isn’t some sleek decorative toy—my music, which dances somewhere between dystopia, vulnerability and pop-slap-in-the-face, my essays, my stories, my myths, my protests whispered on the page. Everything I create exists for one reason: to keep that spark alive. The one no one gets to extinguish.


Because in the end, there’s no machine more powerful than the one we build together.

And you know what?

It doesn’t even have to work perfectly.

It just has to be alive. 💙
