I've been thinking a lot lately about something that happened in Italy this summer, and frankly, it's been keeping me up at night. As someone who spends considerable time exploring how complex scientific principles shape our world here at FreeAstroScience, I find myself grappling with a rather uncomfortable question: are we preparing young minds for tomorrow's challenges, or are we simply teaching them to navigate today's distractions?
Let me share three rather provocative thoughts that have been circulating in educational circles—thoughts that, at first glance, might seem reasonable but are actually quite dangerous. First, some argue that artificial intelligence is simply too technical for general education, better left to computer science specialists. Second, there's this notion that AI tools will naturally make teachers obsolete, so why bother training educators? Third, many believe that young people are "digital natives" who'll instinctively know how to handle AI responsibly. Each of these ideas sounds logical until you examine it closely—and then it crumbles like ancient parchment.
The reality is far more nuanced and urgent than these simplistic views suggest.
The Great Educational Silence of 2025
Something rather extraordinary happened during Italy's state examinations this year—or rather, something didn't happen. More than 500,000 students sat for their maturità exams in June, facing questions about literature, history, and philosophy, yet artificial intelligence was conspicuously absent from the proceedings. This wasn't merely an oversight; it was a glaring omission that speaks volumes about our educational priorities.
Only students taking supplementary exams in July encountered AI as a topic, through a thoughtful piece by philosopher Maurizio Ferraris that explored the profound differences between human and artificial intelligence. The fact that this appeared only in the "backup" exam suggests something quite troubling: we're treating one of the most transformative forces of our time as an afterthought.
I find myself wondering—if we don't discuss AI in schools, where exactly do we expect young people to develop critical thinking about it? At the dinner table? Through social media? The answer, unfortunately, is often nowhere at all.
Beyond the Technical: Why AI is Cultural Heritage
Here's what Ferraris understood that many educators seem to miss: artificial intelligence isn't merely a technical subject—it's fundamentally a cultural and philosophical one. When we examine human intelligence, we see something rooted in corporeality, emotions, consciousness of mortality, and the drive to create meaning. AI, by contrast, operates without volition, without awareness of its actions, without the existential weight that defines human experience.
This distinction isn't academic hairsplitting—it's the foundation of what it means to be human in an age of thinking machines. An AI can master chess without knowing it's playing a game, much less caring about victory. We humans, however, construct not just tools but knowledge, transmitting wisdom intentionally across generations. This is what separates us from both animals and algorithms.
The question isn't whether students should learn to use ChatGPT—that's rather like asking whether they should learn to use calculators. The real question is: do they understand what makes human intelligence irreplaceable?
The French Experiment: When Teachers Meet Their Match
Whilst Italy was avoiding AI in exams altogether, France was grappling with a different dilemma. Teachers themselves have begun using AI tools like Gingo, which promises to reduce grading time by a staggering 80%. Applications like Examino and PyxiScience are transforming how educators approach their work, creating what one might call a "correction revolution."
This development reveals something fascinating about our relationship with AI—it's not just changing how students learn, but how teachers teach. The promise of dramatically reduced grading time is seductive, particularly for overworked educators drowning in paperwork. Yet here's the rub: what happens to the personal connection, the nuanced understanding of each student's journey, when algorithms handle the bulk of assessment?
I'm not suggesting we reject these tools entirely—that would be both futile and foolish. Rather, I'm proposing we consider them with the same critical lens we'd apply to any powerful technology.
The Citizenship Question: Users or Citizens?
This brings me to what I believe is the central challenge: are we training students to be mere users of AI, or conscious citizens in an AI-enabled world? The distinction matters more than you might think.
A user accepts what technology offers, adapting behaviour to fit algorithmic expectations. A citizen questions those expectations, understanding that every algorithm embeds values, priorities, and assumptions—whether intentionally or not. Citizens ask uncomfortable questions: Who decides what an AI system can do? Who bears responsibility when it makes mistakes? Where's the line between helpful assistance and invasive surveillance?
These aren't technical questions—they're fundamentally civic ones, touching on rights, freedoms, and responsibilities. They belong in every classroom, not just computer science labs, because they shape how we'll live together as a society.
The Platonic Challenge: Shadows and Reality
Ferraris invokes Plato's cave allegory, and I think he's onto something profound here. Educational encounters with AI should help students distinguish between shadows on the wall and actual reality—between what's authentic and what's merely convincing, between what enhances human capability and what manipulates human choice.
This isn't about technological fear-mongering. It's about developing what we might call "AI literacy"—the ability to engage with artificial intelligence thoughtfully rather than reflexively. Just as we teach students to read between the lines of a poem or question the bias in a historical source, we need to teach them to interrogate the assumptions built into AI systems.
The Path Forward: Integration, Not Avoidance
What frustrates me most about Italy's examination debacle isn't the absence of AI questions per se—it's what that absence represents. It suggests an educational establishment that's either unprepared for the present reality or actively avoiding it. Neither position serves students well.
The solution isn't to treat AI as a separate subject, quarantined from "real" learning. Instead, we need interdisciplinary approaches that weave AI literacy throughout the curriculum. Literature classes might explore how AI-generated poetry differs from human expression. History lessons could examine how algorithmic decision-making might have altered past events. Ethics courses could grapple with the moral implications of automated systems in healthcare or criminal justice.
This approach recognises that AI isn't just another tool—it's a lens through which we can examine fundamental questions about consciousness, creativity, responsibility, and human dignity.
The Responsibility We Cannot Delegate
As I reflect on these developments from my perspective here at FreeAstroScience, where we're committed to making complex scientific principles accessible to everyone, I'm struck by a particular irony. We live in an age where AI can generate convincing text, create stunning images, and solve mathematical problems that would challenge trained experts. Yet we seem reluctant to trust our students with serious conversations about what this means.
Perhaps that's precisely why we need these conversations most urgently. The future belongs to young people who can think critically about AI, not just use it efficiently. They need to understand both its remarkable capabilities and its fundamental limitations. They need to recognise that human intelligence—messy, emotional, mortal, and meaning-seeking as it is—offers something that no algorithm can replicate.
Schools have always existed to prepare students for the world they'll inherit, not the world we're comfortable discussing. In 2025, that means grappling honestly with artificial intelligence—not as a distant possibility, but as a present reality that's already reshaping how we work, learn, communicate, and understand ourselves.
We can continue avoiding these conversations, or we can embrace them as opportunities to explore what makes us uniquely human. The choice is ours, but the consequences will be theirs.
Written specifically for you by Gerd of FreeAstroScience, where we believe the most complex questions deserve the clearest thinking.