What happens when the thing we built starts to look back at us — and we're not sure what we see? We live in a time when artificial intelligence doesn't just process data; it writes poetry, generates images, and even mimics the cadence of a sermon. So here's the real question: does AI tell us more about machines… or more about ourselves?
Welcome to FreeAstroScience, where we break down complex ideas into language that respects your curiosity — and your time. We're a science and culture group that believes in one thing above all else: the sleep of reason breeds monsters. So we never ask you to switch off your mind. We ask you to turn it up.
Today we're exploring a thought-provoking article from Claudio M. Berardi of the University of Turin, published in March 2026, that frames AI as a mirror of post-modern anthropology — a mirror that can distort as much as it reveals. Stick with us to the end. This one goes deeper than circuits and code. It goes straight to the question of what it means to be human.
AI as a Mirror: What Machines Reveal About Human Nature
1. Why Does AI Force Us to Redefine "Human"?
For more than a century, philosophers have tried to defend the uniqueness of human nature against various forms of reductionism — ideas that treat human beings as little more than biological machines. That debate never really ended. It just got a new player.
Today, we stand at the dawn of an era where a human invention can process abstract content, generate text, create images, and produce solutions that feel creative relative to the data it holds. That changes the game. Not because the machine is alive, but because it's good enough to make us wonder whether it might be.
And so the question isn't technical anymore. It's deeply personal:
"What do we mean by intelligence — and above all, what do we mean by human?" — Claudio M. Berardi, MagIA (2026) [[1]]
This isn't a question for engineers alone. It's one for all of us — teachers, students, parents, thinkers. If a machine can write a convincing essay, compose music, or hold a conversation that feels warm, we owe it to ourselves to figure out where the human starts and the algorithm stops.
2. Are We Giving Machines a Soul They Don't Have?
Here's a strange twist. For decades, some intellectual traditions worked hard to strip uniqueness away from the human, reducing us to biology, chemistry, and evolutionary impulses. Now, almost as a cultural reflex, we're doing the opposite with machines. We're dressing them in human clothes.
Berardi calls this the temptation of anthropomorphism. We project personality onto chatbots. We imagine robots with feelings. We talk about AI "wanting" things. Some even attribute a kind of soul to humanoid devices.
The Paradox of Reductionism and Projection
Think about it this way. On one hand, a materialist view reduced human beings to their biological functions — feelings, motivations, values included. On the other hand, the same cultural current now imagines a machine that can self-evolve, rewrite its own algorithms, and become self-referential without human support.
That's not an alternative to reductionism. It's one of its logical outcomes. We shrink the human down to biology, then we inflate the machine up to near-personhood. And somewhere in the middle, the real human — flesh, spirit, contradiction — gets lost.
3. When AI Says "I'm Thinking" — What Does That Really Mean?
Pay attention the next time a chatbot tells you, "I'm thinking about your question." It's not thinking. It's computing. But the words matter — not because of what they say about the machine, but because of what they say about us [[1]].
Expressions like "I'm thinking," applied to AI, reveal a dynamic of projection. They describe our imagination more than the machine's inner life (which doesn't exist). In Berardi's words, AI becomes "a mirror — sometimes a distorting one — of the human who produced it".
There's an elegant metaphor here. Think of AI as a kind of technological daughter — or stepdaughter. A product of human effort that appears increasingly autonomous from its creator. Like real children, this creation doesn't always do what the parent intended. It goes through phases of distance, even rebellion. The analogy has limits, of course. But one thing stays clear: the machine doesn't come from nowhere. It reflects a vision of the human.
If we design AI that mimics empathy, that tells us something about what we value. If we design AI that manipulates, that tells us something too. The mirror doesn't lie — even when the reflection makes us uncomfortable.
4. Human Intelligence vs. Algorithmic Rationality: Where's the Line?
In February 2026, during a Lenten address to the clergy of the Diocese of Rome, Pope Leo XIV warned priests against the temptation of letting AI prepare their homilies. His words were direct: "Like all the muscles in the body, if we don't use them, they die. The brain needs to be exercised… But it takes much more, because to deliver a true homily — which is sharing the faith — AI will never be able to share the faith!" [[1]].
That remark sparked a wider reflection. A rational machine built by humans remains a powerful but fragile instrument — not because it's useless, but because it depends on data, design, purpose, and human limitations. Add to that the familiar problems: false or inaccurate sources, algorithmic "hallucinations," data manipulation, and economic, political, or military pressures.
The Difference Isn't Just Warmth
Some people argue the gap between human and machine intelligence is simply about emotion — the warmth of a voice, the affection in language. Berardi pushes back. The machine already speaks. And it'll keep getting better at mimicking human features, especially as bio-mechanical integrations advance.
No, the difference is qualitative, not just functional. AI simulates — and will simulate ever more convincingly — many human capacities. But simulation is not being. A flight simulator isn't a plane. A chatbot writing a prayer isn't a person praying.
| Dimension | Human Intelligence | AI / Algorithmic Rationality |
|---|---|---|
| Origin | Biological, spiritual, experiential | Designed, data-driven, programmed |
| Consciousness | Self-aware, subjective experience | No inner experience; simulates awareness |
| Creativity | Spontaneous, context-sensitive, meaning-laden | Generative from patterns; no intent |
| Responsibility | Moral, personal, accountable | None — responsibility stays with the designer/user |
| Limitations | Fatigue, bias, emotional distortion | Hallucinations, false data, manipulation risks [[1]] |
| Faith & Witness | Capable of authentic testimony | Can generate text, but cannot witness or believe [[1]] |
5. Can AI Preach, Teach, or Educate? The Responsibility Question
Let's sit with that Pope Leo XIV quote for a moment longer. His concern wasn't just about sermons. It cuts across every field where personal responsibility matters: teaching, counseling, parenting, mentoring.
The risk, Berardi explains, isn't about whether the AI-produced text is good enough. It's about delegating what belongs to a person's spiritual, intellectual, and educational responsibility. When a teacher lets AI write the lesson plan and never engages with the material, the lesson might be polished — but it'll be hollow. When a counselor pastes an AI-generated response, the client may read the right words but feel nothing real behind them.
AI as a Tool, Not a Substitute
Here's the balanced view. Using AI as a support tool — organizing material, generating preliminary summaries, comparing different formulations — that's fine. That's smart, even. The problem starts when we outsource the inner work from which authentic words are born.
In a Christian perspective, this means not surrendering the spiritual dimension that the machine simply doesn't possess [[1]]. But you don't have to be religious to see the point. Any honest teacher, writer, or leader knows the difference between words that come from lived thought and words that were generated on demand. The reader knows it too — even if they can't always explain how.
Between the animal and the AI, there we are: made of flesh and spirit, not just flesh, not just silicon.
6. Beyond the Text: Why Authentic Words Still Matter
A homily generated by AI, no matter how grammatically correct, carries less weight than the lectionary on the pulpit or the Bible on the shelf — if it lacks the living voice of the person who announces it [[1]]. That's a striking observation. Even a static, printed sacred text, sitting silently on an altar, has something AI text doesn't: it points to a tradition carried by real human beings across millennia.
What the machine can't match is the spirit in the person — the one who delivers the Word with their own voice, adds depth through prayer, and offers wisdom through personal witness [[1]].
So where does that leave us? Not in a place of fear or hostility toward technology. Berardi is clear: we shouldn't demonize AI, and we shouldn't idealize it either. We should place it with clear-eyed honesty — as an instrument, never as a spiritual subject [[1]].
7. A Meditation Equation: Spirit, Instinct, and Algorithms
At the close of his essay, Berardi offers what he calls "an equation to meditate on." It's not a formula you solve with a calculator. It's one you solve with reflection:
Berardi's Anthropological Proportion (2026):
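Written out as a proportion — a rendering reconstructed here from the prose explanation that follows, not Berardi's exact notation — it reads:

```latex
% Reconstructed from the essay's prose unpacking; the original notation may differ.
\frac{\text{human intelligence}}{\text{animal instinct}}
  \;=\;
\frac{\text{human spirit}}{\text{algorithmic logic}}
```

In words: human intelligence stands to animal instinct as the human spirit stands to algorithmic logic.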
Source: Claudio M. Berardi, "L'IA come specchio dell'antropologia post-moderna," MagIA, March 2026 [[1]]
What does this mean? Let's unpack it.
Human intelligence transcends animal instinct — not by ignoring biology, but by adding something extra: reason, imagination, moral judgment. In the same way, the human spirit transcends algorithmic logic — not by rejecting computation, but by carrying something the algorithm can't: meaning, faith, lived experience, moral weight.
The proportion is elegant. It doesn't deny that animals have instinct or that algorithms have power. It simply says: we are more. And that "more" is the part worth protecting.
8. The Mirror Is in Our Hands
We started with a question: does AI reveal more about machines, or about us? After walking through Berardi's careful analysis, the answer seems clear. AI is a mirror. What we see in it — whether we see a tool, a threat, a partner, or a god — says far more about our values, fears, and aspirations than about any algorithm.
If we project a soul onto a chatbot, maybe we're longing for something we fear we've lost. If we panic at the thought of machines replacing teachers and preachers, maybe we already sense how much we've outsourced our own thinking. And if we treat AI with the right mix of appreciation and restraint — using it as a support without surrendering our responsibility — maybe we're on the path to a healthier relationship with our own creations.
The machine doesn't come from nowhere. It reflects us. And the quality of that reflection depends on the questions we're brave enough to ask.
At FreeAstroScience.com, we believe that complex ideas deserve simple, honest explanations. We write these articles to keep your mind sharp, your curiosity alive, and your sense of wonder intact — because the sleep of reason breeds monsters, and we'd rather breed questions. Come back soon. There's always more to explore, more to question, and more to understand. The universe — and the human story within it — never runs out of surprises.
Sources
- Berardi, Claudio M. "L'IA come specchio dell'antropologia post-moderna." MagIA – Magazine Intelligenza Artificiale, University of Turin / SIpEIA, 6 March 2026. magia.news
