Can School Stay Human in the Age of Artificial Intelligence?


What if, one morning, you walked into class and found no teacher—only a chat window on a big screen, ready to answer questions about Dante, D’Annunzio, or quantum physics? Would that still be school, or just a glorified search engine with desks?

Welcome, dear readers, to FreeAstroScience.
This article was written by FreeAstroScience just for you, to explore a question that touches teachers, students, and parents alike: how can school stay human while embracing artificial intelligence?

We’ll look at powerful provocations from Italian educators and thinkers, unpack what “AI literacy” really means, and sketch a practical roadmap for a human‑centered, AI‑enhanced school. Stay with us to the end: the heart of the issue is not technology, but what kind of humans we want to become.



What Does It Mean to Imagine a “School as an Antenna of AI”?

Italian psychiatrist and sociologist Paolo Crepet has painted a deliberately unsettling image:
a school transformed into an antenna of Artificial Intelligence, where students no longer talk to a living teacher but only to a chat interface that streams answers about authors and concepts on demand.

His provocation, published on Tecnica della Scuola and discussed in MagIA, is crystal clear: if school outsources its educational task to algorithms, what remains of its mission?

According to Crepet:

  • School is not a call center that must process requests quickly.
  • School is not an efficiency‑driven company seeking to optimize every process.
  • Education is a slow, relational, sometimes messy process.

So the disturbing image of “school as AI antenna” works as a warning:
if we reduce school to a pipeline from database to brain, we lose the very thing that justifies its existence—the formation of conscience.

And here’s our first quiet “aha”:
AI isn’t dangerous because it’s powerful; it’s dangerous when we quietly redefine school around what AI is good at—speed, volume, and availability—and forget what only humans can do.


Why Has School Always Been More Than a Container of Knowledge?

In the Italian tradition, school has never been just a container of knowledge or a warehouse of data. It has been, at its best, a laboratory of integral formation.

Talking about D’Annunzio or Dante in class, as the MagIA piece reminds us, doesn’t just mean analyzing verses. It means:

  • Living an experience of collective interpretation.
  • Entering into dialogue between teacher and students.
  • Confronting different sensibilities and life stories.

It’s within that encounter—not within the perfect answer of an algorithm—that conscience is formed.

Think about your own education.
Often the turning points weren’t dates or formulas. They were:

  • A teacher who asked you a hard question at the right moment.
  • A heated discussion in class that forced you to rethink your assumptions.
  • A silence after someone’s comment, when everyone felt something had shifted.

No AI model, however eloquent, can inhabit a room, read a half‑smile of discomfort, or decide to stay five more minutes after the bell for a student who clearly needs to talk.

So, if school becomes just a place where you “pull answers” from a system, it stops being what it has historically been: a forge of critical, responsible personalities.


Is Technology Really the Problem, or Is Delegation the Real Risk?

Here’s an important nuance: Crepet does not demonize technology.
He warns against something subtler and more dangerous: the temptation to delegate to technology the educational function itself.

The MagIA article summarizes this with a neat formula: “Tecnologia sì, delega no” (technology yes, delegation no).

Why is delegation so risky?

  • Because school is a place of slow growth, where mistakes, doubts, and half‑answers matter.
  • Because efficiency (fast, clean, direct) is often the enemy of depth (slow, ambiguous, demanding).
  • Because the most important things we learn—how to love truth, how to disagree respectfully, how to accept frustration—cannot be automated.

From a purely technical point of view, AI tools are amazing at:

  • Summarizing content.
  • Suggesting examples.
  • Providing explanations at different levels.

But if we delegate the selection of what matters, the framing of meaning, and the handling of emotions to them, we are no longer just “using tools.” We are surrendering the cultural direction of education.

So the question is not “Should we use AI at school?”
The sharper question is: “What must we never delegate to AI?”


What Is AI Literacy, and Why Do Students Desperately Need It?

Pedagogue Maria Ranieri, from the University of Florence, insists on the need for AI literacy—a critical literacy that enables students to understand not only how to use AI, but how it works, its limits, and its ethical implications.

In her view, AI should not become a surrogate for teaching, but an occasion to teach students to think about the digital world, instead of being passively shaped by it.

We can break AI literacy into at least four dimensions:

  1. Operational literacy
    • Knowing how to prompt, refine, and verify AI outputs.
  2. Technical literacy
    • Understanding that AI is not “magic” but based on data, training, and statistical patterns.
  3. Critical literacy
    • Recognizing bias, hallucinations, and the limits of models.
  4. Ethical literacy
    • Reflecting on privacy, responsibility, power, and the social impact of automation.

To visualize the balance we want, imagine an ultra‑simple model of a learning session:

\[
L = T_{\text{human}} + T_{\text{AI}} + T_{\text{reflection}}
\]

Where:

  • \(T_{\text{human}}\) = time of interaction with teachers and peers
  • \(T_{\text{AI}}\) = time of interaction with AI tools
  • \(T_{\text{reflection}}\) = time of individual critical reflection

In a healthy, human‑centered setting, AI literacy means increasing \(T_{\text{AI}}\) without shrinking \(T_{\text{human}}\) or \(T_{\text{reflection}}\).
AI becomes a catalyst, not a replacement.
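
To make that balance concrete, here is a minimal sketch in Python. It is purely illustrative: the function name, dictionary keys, and lesson times are our own assumptions, not anything from the article. It simply encodes the rule that \(T_{\text{AI}}\) may grow only if \(T_{\text{human}}\) and \(T_{\text{reflection}}\) do not shrink.

```python
# Minimal sketch of the L = T_human + T_AI + T_reflection balance.
# All names and numbers are illustrative assumptions, not data from the article.

def is_human_centered(before: dict, after: dict) -> bool:
    """True if AI time grew without shrinking human or reflection time."""
    return (
        after["t_ai"] >= before["t_ai"]
        and after["t_human"] >= before["t_human"]
        and after["t_reflection"] >= before["t_reflection"]
    )

# A 60-minute lesson redesigned to include AI tools (times in minutes).
before = {"t_human": 40, "t_ai": 0, "t_reflection": 20}
after = {"t_human": 40, "t_ai": 15, "t_reflection": 20}  # total L grows to 75

print(is_human_centered(before, after))  # True: AI was added as a catalyst
```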

Ranieri’s stance pushes us toward a key insight: AI at school is itself a topic to be studied, not just a tool to be used.


What Does “Cultural Direction” of Technology Look Like in Practice?

Another major voice, Pier Cesare Rivoltella, director of CREMIT at Università Cattolica di Milano, calls for a strong “regia culturale”—a cultural direction of technology in school.

He warns against a “pedagogy of the shortcut,” where AI becomes an excuse to avoid the hard work of thinking.

To understand “cultural direction,” let’s contrast three models of school in a simple table:

| Model | Role of AI | What Students Practice Most | Main Risk |
| --- | --- | --- | --- |
| Technophobic school | Almost absent or banned | Manual skills, memorization | Irrelevance to real digital life |
| Technocratic school | Central tool for content delivery | Speed, productivity, “results” | Loss of depth and critical thinking |
| Human‑centered, AI‑aware school | Integrated, but under teacher’s cultural direction | Judgment, dialogue, ethical reflection | Requires more training and effort |

“Cultural direction” means choosing that third model and acting accordingly:

  • Teachers decide when not to use AI, just as much as when to use it.
  • AI prompts are designed to deepen thought, not to generate instant products.
  • Students are asked to justify their use of AI: why this tool, for this task, in this way?

Rivoltella’s warning is clear: the speed of AI responses cannot replace the depth of knowledge.
The job of school is to keep the long road of understanding attractive, even in an age of shortcuts.


Can AI Deepen, Rather Than Destroy, Relationships at School?

Psychotherapist Alberto Pellai brings another dimension: relational and emotional health. He fears that AI, if not mediated by aware adults, could increase the digital loneliness of adolescents.

Adolescents are already immersed in:

  • Social networks shaped by algorithms.
  • Platforms that harvest their attention 24/7.
  • Communities that sometimes amplify anxiety and comparison.

If school simply adds another layer of algorithmic interaction without human mediation, it risks becoming another lonely, glowing rectangle in a teenager’s life.

Pellai insists that school must remain a stronghold of relationship and emotional accompaniment.
Why?

  • No algorithm can authentically empathize with a student.
  • AI cannot assume responsibility for a fragile choice or a risky behavior.
  • A class group, with all its noise and friction, is still a powerful protection against isolation.

So the challenge isn’t simply “AI yes or no,” but:
How can we integrate AI in ways that actually increase human contact?

Some practical ideas:

  • Use AI to free teacher time from bureaucracy, giving more room for conversations.
  • Ask students to bring AI‑generated material and discuss it in person, face‑to‑face.
  • Turn AI into a topic for group debate (“Do we agree with this answer? Why or why not?”).

In this way, AI becomes not another wall between us, but a shared object we can gather around and dissect together.


How Does the Catholic Tradition Frame AI in Education?

The MagIA article also connects these reflections with a voice often forgotten in tech debates: the Catholic Church.

The Vatican’s note Antiqua et nova addresses the relationship between new technologies and humanism, without refusing technology but calling for critical discernment.

Three key principles stand out:

  1. Centrality of the person
    • Every technological choice must keep the human person at the center.
  2. Value of tradition
    • The past is not a weight but a resource; the digital does not erase history.
  3. Ethics of responsibility
    • Using AI always involves moral responsibility; neutrality is an illusion.

The note stresses that AI can be a useful instrument, but it can never replace the moral responsibility and personal conscience of the teacher.
An algorithm:

  • Has no soul.
  • Has no freedom.
  • Has no responsibility.

This doesn’t mean we must sacralize the teacher or demonize the machine.
It simply reminds us that, in the end, someone will answer for the choices made:

  • Which AI systems to adopt.
  • How to protect student data.
  • How to handle biases and errors.
  • When to say “no, this task must be done without AI.”

So, from a Christian humanist perspective, AI is neither an idol nor a demon. It is a powerful tool that must remain subordinate to the dignity of the person.


How Can We Design a Human‑Centered, AI‑Enhanced School?

Let’s get practical. What would a school look like that embraces AI while staying deeply, stubbornly human?

We can structure the answer in a small framework:

| Area | Goal | Concrete Actions |
| --- | --- | --- |
| Curriculum | Teach AI, not just with AI | Modules on how AI works, bias, ethics; interdisciplinary projects mixing literature, history, and digital culture. |
| Pedagogy | Preserve depth and effort | Alternate AI‑assisted tasks with “slow” tasks; require students to explain their reasoning beyond AI outputs. |
| Assessment | Evaluate thinking, not just products | Oral exams, in‑class writing, reflective logs about AI use; explicit criteria for originality and process. |
| Teacher training | Strengthen “cultural direction” | Workshops on AI tools, ethics, and didactics; communities of practice to share experiences and failures. |
| School culture | Keep relationships at the core | More space for dialogue, mentoring, group projects; clear rules on screens and offline time. |

A few concrete moves a school could adopt tomorrow:

  • AI transparency rule
    • Every time a student uses AI for homework, they must declare how and for what.
  • AI reflection journal
    • Once a week, students write a short note: “This week I used AI for X. It helped in Y, but I noticed risk Z.” (See the sketch after this list.)
  • “No‑AI zones” in the timetable
    • Certain hours or activities are deliberately AI‑free, to train memory, attention, and creativity without crutches.
  • Collective critique of AI outputs
    • The class analyzes an AI‑generated essay or explanation, hunting for inaccuracies, biases, or missing perspectives.
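
For the reflection journal, here is a minimal sketch in Python of what a weekly note could look like as a record. It is purely illustrative: the class name, fields, and example values are hypothetical assumptions, not a real school platform or anything prescribed in the article.

```python
# Illustrative sketch of a weekly AI reflection journal entry.
# Class name, fields, and example values are hypothetical assumptions.
from dataclasses import dataclass
from datetime import date

@dataclass
class AIUseNote:
    task: str           # what the AI was used for ("X")
    how_it_helped: str  # the benefit the student noticed ("Y")
    risk_noticed: str   # the limit or risk the student spotted ("Z")
    week_of: date

note = AIUseNote(
    task="summarizing a chapter on Dante",
    how_it_helped="gave a quick overview to check against my own notes",
    risk_noticed="it attributed a verse to the wrong canto",
    week_of=date(2025, 3, 3),
)
print(note)
```

The point is not the code but the habit it encodes: every week, name the task, the benefit, and the risk.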

This is how we avoid both extremes: naïve enthusiasm (“AI will solve school!”) and nostalgic refusal (“Let’s pretend AI doesn’t exist!”). We build, step by step, an alliance between tradition and innovation.


Aha: What If We Treat AI as a Mirror, Not a Master?

Here’s the deeper “aha” moment that sits behind all these debates.

When we pit “human” against “artificial,” we might be missing the real opportunity.
What if AI is, above all, a mirror?

  • A mirror that shows how much of school is just information transfer—and therefore easily automated.
  • A mirror that forces us to ask: What, in education, truly requires a human?
  • A mirror that reveals our own laziness when we’re tempted to let the machine think for us.

The voices highlighted in the MagIA article—from Crepet to Ranieri, from Rivoltella to Pellai, and up to the Vatican note—converge on a single point: education is too precious to be left to machines.

Education is:

  • An act of freedom.
  • An act of responsibility.
  • An act of love for truth.

And that’s precisely where Francisco Goya’s old warning hits us with new force:
“The sleep of reason produces monsters.”

If we let our critical sense fall asleep—if we outsource judgment to algorithms—we won’t get a neutral, efficient world. We’ll get smarter tools serving unquestioned impulses and powers.

So, maybe the real task of school in the age of AI is this:

Teach young people to look into the digital mirror without becoming its reflection.


What Future of School Do We Want to Build Together?

Let’s gather the threads.

From the MagIA article we’ve seen:

  • Crepet’s provocation of the “school‑antenna of AI,” reminding us not to abdicate education to algorithms.
  • Ranieri’s call for AI literacy, so students can understand and not merely undergo AI.
  • Rivoltella’s idea of cultural direction, to avoid a pedagogy of shortcuts and maintain depth.
  • Pellai’s concern for relationships and emotional health, defending school as a place of human accompaniment.
  • The Vatican’s humanist discernment, insisting on the centrality of the person and the responsibility of the teacher.

All these perspectives suggest a common direction:
the future of school won’t be decided by how much technology we introduce, but by how we integrate it without losing our soul.

So we’re left with a set of questions that are, in fact, invitations:

  • Will we use AI to save time, or to gain time for what matters—listening, discussing, thinking?
  • Will we train students to adapt to algorithms, or to critically shape the world algorithms live in?
  • Will we accept a “pedagogy of the shortcut,” or will we still dare to invite young minds to the long road of understanding?

As FreeAstroScience, our mission is simple and stubborn:
to prove that complex science—and the complex ethical questions around it—can be explained clearly, honestly, and accessibly. This post was written for you by FreeAstroScience.com, to inspire curiosity rather than passive consumption.

Because we deeply believe that the sleep of reason produces monsters.
A school that sleeps, lulled by comfort and automation, hands the future to whoever controls the code.
A school that stays awake—curious, critical, and compassionate—forms people able to govern technology, not be governed by it.

Come back to FreeAstroScience.com whenever you want to keep thinking, questioning, and learning with us.
The age of artificial intelligence has just begun, and the kind of school we build now will shape the kind of humanity we become.
