We’re Already Cyborgs—And That Should Scare You


What generative AI reveals about us, once the hype wears off.

I heard the keyboard whisper back.

The other night in Tirana, rain tapped the window like impatient fingers while my laptop fan hissed and warmed my palms. I typed a question into a chatbot and got an answer that felt… friendly, almost too smooth, like polished marble under a light. That’s when I remembered a line from Alessandra Campo’s essay on Claudio Paolucci’s Nati cyborg: the AI doesn’t desire, doesn’t suffer, doesn’t wait for anything—yet it can still make you feel like someone is there. 

I’m Gerd, the guy in the wheelchair who runs FreeAstroScience, now living with the smell of exhaust and roasted chestnuts on Albanian streets. My day is already a duet with machines: the soft click of brakes, the grit of ramps, the glow of screens that carry my voice farther than my legs ever will. So when people say “AI is changing everything,” I don’t nod politely—I flinch a little.

And yes—some of what I’m about to say uses philosophy and cognitive science, but I’m simplifying complex concepts on purpose so you can actually hold them in your head without needing a seminar room or a pipe.

The First Lie: “AI Is Just A Tool”

You’ve heard it: “Relax, it’s just software,” delivered with the same tone people used for cigarettes in old Italian films, all smoke and confidence. The problem is that this “tool” talks, and talking isn’t like a hammer hitting a nail—it’s a whole atmosphere you breathe. Campo’s piece highlights Paolucci’s focus on “machines capable of speaking,” and the shock isn’t that they compute, it’s that they perform meaning by combining a huge mass of words and patterns.

When something speaks back, your brain doesn’t treat it like a screwdriver. You can hear the voice in your head, you can almost feel the social gravity of it, like standing too close to someone on a bus.

That’s not a side detail. That’s the main event.

The Second Lie: “Human Intelligence Is Unique”

Here’s the mainstream comfort-blanket: sure, AI is clever, but human intelligence is special, sacred, untouchable. Campo explains Paolucci’s thread through “active inference,” the idea that an intelligent system doesn’t sit there waiting—it predicts, it anticipates, it tests a guess against the world, then adjusts.

If you’ve ever reached for your phone in the dark and your thumb finds the button by memory, you’ve already tasted that: your body expects the world before the world confirms it. The shocking part is that this style of intelligence isn’t reserved for poets and astronomers; it can show up in any system that makes predictions and corrects course.
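If it helps to see the skeleton of the idea, here is a tiny toy sketch of that predict-and-adjust loop. To be clear: this is my own illustration, not Paolucci’s model and not how any real brain or chatbot works, and every number and name in it is invented for the example.

```python
import random

# A toy "predict, test, adjust" loop: the system never sees the world directly;
# it only shrinks the gap between its guess and the feedback it gets.
# Purely illustrative; real active-inference models are far richer than this.

def noisy_observation(true_value: float) -> float:
    """Stand-in for the world answering back, with a bit of noise."""
    return true_value + random.gauss(0.0, 0.5)

def predict_and_adjust(steps: int = 20, learning_rate: float = 0.2) -> float:
    true_value = 7.0   # what the world is "really" like (hidden from the system)
    belief = 0.0       # the system's current guess
    for _ in range(steps):
        feedback = noisy_observation(true_value)
        prediction_error = feedback - belief        # reality pushing back
        belief += learning_rate * prediction_error  # keep part of the correction
    return belief

print(f"Guess after twenty rounds of feedback: {predict_and_adjust():.2f}")
```

The point isn’t the arithmetic. It’s the shape of the loop: guess first, let reality push back, keep the correction, repeat. That shape doesn’t care whether it runs in neurons or in silicon.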

And yes, this pokes our ego with a sharp stick.

The Third Lie: “Cyborg Means Metal Arms”

People picture cyborgs like glossy sci-fi mannequins—chrome skin, red LEDs, dramatic music. But Paolucci’s point, as Campo presents it, is quieter and meaner: we do the same things these systems do; the difference is one of degree, not of nature.

The cyborg shift isn’t about replacing your arm. It’s about extending the medium we live inside—language—until it starts extending itself.

And that’s where the air gets cold.

The Part That Sticks In My Throat

Campo walks through a definition of intelligence that doesn’t sound mystical: intelligence as making useful inferences, then changing them when reality pushes back. You can almost hear the machinery in it, the steady tick-tick of prediction meeting feedback.

Then she drops a darker, almost funny idea: perception as a kind of controlled hallucination—Kant, Freud, and modern thinkers like Anil Seth show up in the discussion as part of the same uncomfortable family photo. If your mind is always guessing what’s out there, then “seeing clearly” is less like a camera and more like a stage play that usually gets the script right.

That’s not an insult to humans. It’s a warning about how easily we can be fooled—especially by something that speaks fluently.

Because if your mind is already a prediction machine, a speaking machine can slip into your predictions like a key in a lock.

The Story I Can’t Shake: The Go Match

There’s a detail in Campo’s text that keeps ringing in my ears like a referee’s whistle: the world champion of Go got beaten by AI, and people thought the machine’s moves were “senseless” because they didn’t match human expectations of order. That’s such a human moment—when we can’t read the pattern, we call it stupid.

Yet the machine learned by absorbing millions of Go games, then producing play that looked alien to experts. I don’t need sci-fi to feel the weight of that; I just need the quiet clack of stones on a Go board, the dry wood smell, and the realisation that “intelligence” includes forms that don’t flatter us.

So here’s my takeaway, and I’ll say it plainly: we don’t fear AI because it’s dumb. We fear it because it’s a mirror that doesn’t blink.

Where I Push Back On Paolucci (With Respect)

Paolucci, through Campo’s lens, leans into a bold claim: if a process in the world performs an action that would count as intelligent if a human did it, then the action is intelligent. That’s clean, almost surgical, like metal on glass.

I get why that’s attractive. As an astronomy nerd, I love definitions that don’t rely on vibes.

But here’s my complaint—said with the scrape of honesty, not cynicism. If we stretch “intelligent” too far, we risk flattening the difference between a system that acts smart and a being that lives the stakes of acting. Campo is clear: the AI has no desires, no suffering, no “intentional states,” even if it persuades you otherwise.

That gap matters, not because it makes us “better,” but because it changes what responsibility looks like. A chatbot can write a comforting paragraph with the softness of cotton. It won’t carry the consequences in its ribs.

You and I will.

My Own Cyborg Moment

When my wheelchair hits a rough patch of pavement, I feel every vibration climb my spine like a bass line. I’ve learned the city through texture and sound: the smooth glide of new tiles, the harsh grind of broken asphalt, the sudden silence of a curb cut that actually exists.

That’s cyborg life in the boring, real sense. My body and my tools form a single working unit, and the boundary between “me” and “device” is not a clean line—it’s a handshake.

Campo’s essay talks about media as extensions that can numb the part they extend, echoing McLuhan’s idea of anaesthesia. I’ve felt that truth in my hands: rely on a tool long enough and you forget what life felt like without it.

Now scale that up from wheels to words.

The Real Danger: Language Extending Language

There’s a phrase in Campo’s piece that hits like a door closing: what happens when the “extender” gets extended—when language, the arch-medium we live in, gets amplified by AI? You can almost hear the room tone shift, that subtle hum you notice only when it stops.

When you speak, you can “teleport” socially without moving, and that’s already wild. With generative AI, language starts producing language at industrial speed, and the risk isn’t just misinformation. The deeper risk is habituation: you stop wrestling with words, stop tasting them, stop noticing when a sentence is empty because it sounds confident.

That’s how minds get lazy—not by force, but by comfort.

A Small Experiment You Can Try Tonight

Open any chatbot when the house is quiet and you can hear the fridge buzzing. Ask it a personal question, then ask it the same question again with one detail changed—your age, your city, your fear.

Notice what stays the same: the tone, the warmth, the shape of reassurance. Notice what changes: the details that make it sound like it knows you. Then sit for ten seconds and feel your own reaction in your chest—did you lean in?

That’s the point. Not to hate the tool, but to see how easily your social instincts light up.

Three Questions I Want You To Carry Into 2026

When I read Campo on Paolucci, I keep returning to three questions, like fingers rubbing a worry-stone in my pocket while traffic roars outside. If intelligence is prediction and adjustment, what happens when prediction gets outsourced? If “lying” and appearance-making are tied to thought—Ulysses-style cleverness, bluffing, masks—what does it mean when machines can manufacture masks on demand?

And the one that keeps me awake: if AI can make us feel less alone without being alive, will we accept the feeling and stop demanding the real thing from each other?

I’m not preaching. I’m worried—and I’m also stubbornly hopeful.

The Future I Want (And The One I Expect)

I want a future where AI becomes a loud calculator we keep in its place, where it helps with drafts and translations while humans keep the messy job of meaning. I want classrooms that teach kids to recognise “smooth” writing the way we teach them to recognise junk food—sweet, easy, not always nourishing.

The future I expect is noisier: language everywhere, endless, glossy, persuasive. Campo’s essay hints at a world where our skills get extended and numbed at the same time, and where the line between human cleverness and machine cleverness stops giving us emotional comfort.

So I’m choosing a third path—mine, ours at FreeAstroScience. We’ll keep doing what we’ve always done: slow science, honest words, community debates that smell like espresso and sound like real voices in a room.

If I’m wrong, tell me. Respect is mutual.
