Can AI Protect Peace — Or Quietly Erode It?

[Image: a woman holding a glowing cracked sphere between an ancient library and AI server racks, symbolizing the tension between human critical thought and artificial intelligence in protecting peace.]

Have you ever wondered whether the tools we build to make life easier might also be quietly reshaping our ability to live together in peace?

Welcome to FreeAstroScience, where we break down complex ideas — from astrophysics to ethics — into language anyone can grasp. We're glad you're here. Today, we're stepping beyond the stars to explore something just as vast: the relationship between artificial intelligence, human thought, and the fragile architecture of peace.

This isn't a story about killer robots or dystopian futures. It's about something more subtle and, honestly, more urgent. It's about how the digital environments we inhabit every day shape the way we reason, argue, debate, and — sometimes — stop thinking altogether.

So grab your coffee. Sit with us for a few minutes. We promise this one's worth reading to the very end.


1. When Does Barbarism Begin?

Here's a sentence, from Sgreccia's essay on artificial intelligence and peace, that stopped us in our tracks:

"Barbarism doesn't arrive all at once."

It creeps in. It shows up when thinking narrows, when language degrades into slogans, when responsibility dissolves into bureaucratic procedures. War is its most extreme expression — but its preparation is often silent.

Think about that for a moment. We tend to picture barbarism as something loud: tanks rolling, buildings falling. But Sgreccia's argument is subtler. The real danger starts earlier, in the everyday erosion of trust, in the simplification of propaganda, in the slow impoverishment of the words we use to describe the world around us.

The wars scarring our planet right now — in different regions, under different flags — show this pattern clearly. Violence isn't just a military event. It's also the result of slower processes: the decay of public trust, the addiction to hostility, the growing difficulty of telling information apart from manipulation.

And here's where artificial intelligence enters the picture. Not because AI pulls triggers. But because it changes the cognitive environment where we form our judgments, set our priorities, and decide what matters.


2. What Does Peace Really Mean in the Age of AI?

We often think of peace as a diplomatic achievement. Treaties signed. Handshakes between leaders. Ceasefires negotiated.

That's part of it — but only a thin slice.

Peace, as Sgreccia describes it, is a daily architecture. It's built from trust in the spoken word, willingness to face disagreement, and the ability to see another person as an end in themselves — not as a tool. When these conditions weaken, coexistence stiffens. Violence starts to feel thinkable.

Hannah Arendt showed us, decades ago, that when thinking-as-judgment disappears, terrifying spaces open up for the abdication of moral responsibility. Paul Ricœur, for his part, insisted on the deep connection between memory, narrative identity, and recognition.

Read together today, their ideas converge on one point: peace depends on our ability to keep a non-automatic relationship with truth, with time, and with each other.

A society where everything gets accelerated, polarized, and reduced to binary camps becomes vulnerable — not just to lies, but to indifference. And indifference, as history keeps reminding us, is one of the most dangerous preconditions for barbarism.


3. How Does AI Reshape Our Cognitive Environment?

We usually talk about AI as a tool. A calculator. A productivity booster. An assistant.

But in its social use, AI tends to become something far bigger: an environment. It shapes what appears plausible, urgent, and believable. It shifts the center of knowledge from justification to performance — where "it works" replaces "I can explain why."

That shift carries a civic risk we can't ignore. When the question "why?" becomes unnecessary or impossible to answer, dissent weakens. Trust fractures.

The issue isn't just efficiency. It's the kind of relationship with knowledge these systems encourage. When speed of response and smoothness of interaction become the default standards, we risk undermining the value of verifiability, of reasoned debate, of the time it takes to genuinely understand something.

And we're not arguing against technology here. As Sgreccia puts it, the point isn't to pit humanistic nostalgia against technological innovation. The point is to recognize that every cognitive infrastructure quietly trains certain mental habits — and discourages others.

Here's another dimension worth sitting with: the automatic generation of text and images multiplies what can be said without guaranteeing what is understood. Byung-Chul Han calls this a crisis of narration — when experience isn't retained and interpreted but merely consumed and replaced, we lose the building blocks of shared time: memory, promise, reconciliation.

And shared time, really, is the raw material of peace.


4. From Judgment to Automatism: Are We Crossing a Dangerous Line?

Nobody's saying technology causes wars. That would be naive.

But it'd be equally naive to ignore the anthropological climate that certain digital environments can create.

Think about it this way. Platforms and systems that reward fast reactions, oversimplification, and polarization put constant pressure on our judgment. They train us toward automatism rather than reflection. When moral imagination atrophies, propaganda finds fertile ground. When truth becomes just another piece of "persuasive content," conflict gets managed like an exercise in emotional manipulation.

And when public discourse loses its capacity for reasoned argument, something else happens — what philosopher Miranda Fricker calls epistemic injustice: some voices get silenced not because they're wrong, but because they've been made irrelevant.

Barbarism, then, isn't some far-off place. It's a possibility that resurfaces whenever thought grows thin.

This is the anthropological threshold Sgreccia warns about. AI doesn't just modify operational procedures. It shapes the relationship between decision and responsibility, between visibility and invisibility, between presence and irrelevance. Every time a system helps determine what counts, what circulates, what stays at the margins — it indirectly touches the very conditions of coexistence.

In a time marked by armed conflicts, information wars, systematic discrediting of opposing sides, and the emotional saturation of media spaces, this dynamic becomes especially urgent. Peace doesn't only erode under gunfire. It erodes when the cognitive and symbolic conditions that let a society value evidence, weigh words, and honor vulnerable lives are corroded from within.


5. Who Bears Responsibility When AI Acts?

Let's get one thing straight: AI is not a moral subject.

Saying an algorithm "decided" or "wanted" something is a neat trick — and a dangerous one. It gives humans an elegant exit from accountability. If the machine did it, who's to blame?

Peace, though, demands traceability. Who defines the objectives? Who selects the data? Who sets the thresholds and incentives? Who answers for the impact on the most vulnerable?

Hans Jonas argued that in an age of enormous technical power, responsibility can't just mean blame after the fact. It means accountability before the fact — a duty toward those who'll bear consequences far away in time and space. Without that seriousness, technology becomes the perfect machine for doing a great deal while nobody answers for any of it.

This extends to the material side of the digital world, too. There's no "digital" without physical infrastructure — energy, mineral extraction, labor supply chains, deeply unequal distributions of costs and benefits. As Kate Crawford showed in Atlas of AI (2021), when the advantages of technology concentrate among the powerful while human and environmental costs shift onto the most fragile, structural injustice grows. And where injustice accumulates, peace weakens.

So the question of peace in the AI era can't be reduced to a narrow debate about technical regulation. It demands a broader conversation about the justice of our infrastructure, the distribution of vulnerability, and the political quality of the decisions governing innovation.


6. What Does Promoting Peace Look Like in an Algorithmic World?

If peace is, at its core, a way of thinking, then regulating tools alone won't save us. We need to cultivate dispositions that algorithmic environments don't naturally reward.

Sgreccia outlines four such dispositions. Let's walk through each.

Interiority: A Space the Algorithm Can't Colonize

Simone Weil called it attention — a readiness toward truth and toward the other person. Not mental efficiency. A discipline of seeing.

We need inner spaces that aren't invaded by urgency. Places where experience can become awareness, and awareness can become choice. In a world of infinite notifications, this is an act of quiet rebellion.

Critical Sense: The Courage of Slow Thought

This means defending contradiction, verifiability, and the slowness that real discernment requires. Freedom, as Karl Popper understood, doesn't live in rapid-fire choices. It's often born in the patient time of judgment.

When everything pushes us to react now, the bravest thing we can do is pause.

Care for Relationships — and for the World

Peace is born from proximity, mediation, and responsible speech. Today, that care extends to the material backbone of our digital lives: the servers, the mines, the invisible labor. If the benefits concentrate at the top while the costs sink to the bottom, the whole structure cracks.

The Quality of Public Language

When language loses precision — when everything becomes hyperbole, when conflict is painted only as a clash between incompatible identities — our capacity for mediation dies.

Defending exact words, clear distinctions, and honest reasoning isn't a luxury. It's a condition for living together.


7. Is This a Cultural Challenge Before a Technical One?

Yes. Emphatically, yes.

Rules and regulations are necessary. But they aren't enough. No regulatory framework can replace a society that knows how to form judgment, sustain reasoned disagreement, recognize vulnerability without turning it into spectacle, and resist the automatism of permanent reaction.

Education, in this context, goes far beyond digital literacy or technical competence. It means forming people who can inhabit complex environments without giving up the responsibility of understanding.

Peace isn't a topic external to innovation. It's one of the most demanding criteria for evaluating innovation's direction.

And every time we grant AI systems a false aura of self-sufficient neutrality, we prepare the ground for the abdication of human responsibility. Every time we stop asking about aims, asymmetries, impacts, and implications, we shrink the space of public freedom.


8. Barbarism Is Not Inevitable

This is the sentence we want you to carry with you:

Barbarism is not inevitable.

AI can expand knowledge and cooperation. But only if society invests in the opposite of automatism: the education of thought, transparent accountability, and a fierce commitment to truth.

Protecting peace today means, first and foremost, defending the dignity of thinking.

More precisely, it means defending the cultural conditions of good judgment: the seriousness of language, the verifiability of public discourse, the traceability of responsibility, the protection of vulnerable people, and the ability to resist the pull of polarization.

In a time of growing technical power, the decisive question isn't only what systems can do — but what kind of coexistence they make possible.

The Pontifical Lateran University is hosting the multidisciplinary conference "Salus Hominis" on March 12–13, 2026, dedicated to the relationship between stewardship of creation and artificial intelligence — a timely reminder that innovation can never be separated from responsibility, limits, and intergenerational justice.


9. Key Thinkers and Concepts at a Glance

To help you navigate the intellectual landscape behind this conversation, here's a reference table of the thinkers and ideas cited by Sgreccia:

Intellectual Framework: AI, Peace & Human Responsibility

| Thinker | Key Work | Core Concept | Relevance to AI & Peace |
|---|---|---|---|
| Hannah Arendt | Eichmann in Jerusalem (1963) | Banality of evil; loss of thinking as judgment | When we stop thinking critically, moral abdication follows |
| Paul Ricœur | Soi-même comme un autre (1990) | Memory, narrative identity, recognition | Shared stories and memory are the fabric of coexistence |
| Luciano Floridi | The Ethics of Information (2013) | Infosphere; AI as environment, not just tool | AI shapes what we perceive as plausible and real |
| Miranda Fricker | Epistemic Injustice (2007) | Testimonial & hermeneutical injustice | AI can silence voices by making them irrelevant, not by proving them wrong |
| Byung-Chul Han | Die Krise der Narration (2023) | Crisis of narration; consumed vs. interpreted experience | Without shared storytelling, memory and reconciliation weaken |
| Hans Jonas | Das Prinzip Verantwortung (1979) | The imperative of responsibility; ethics for the technological age | Responsibility must come before action, not only after harm |
| Karl Popper | The Logic of Scientific Discovery (1959) | Falsifiability; open society; critical rationalism | True freedom requires the slow time of judgment, not just rapid choice |
| Simone Weil | Attente de Dieu (1951) | Attention as moral and spiritual discipline | Peace needs inner spaces not colonized by urgency |
| Kate Crawford | Atlas of AI (2021) | Material politics of AI; hidden labor and extraction | When tech costs fall on the vulnerable, structural injustice and conflict grow |

Source: Sgreccia, P. (2026). "Intelligenza artificiale e pace: responsabilità, linguaggio e convivenza." MagIA.


Defending the Dignity of Thought

Let's take a breath and gather what we've covered.

Barbarism doesn't announce itself with trumpets. It arrives quietly — through degraded language, weakened critical sense, dissolved accountability. AI, despite all its benefits, can accelerate these erosions if we don't stay alert.

Peace isn't just the absence of bombs. It's a living architecture built from trust, careful speech, moral imagination, and the willingness to see other people as ends, never as means. AI reshapes the cognitive environment in which all of these capacities either flourish or wither.

Nobody's saying AI is the enemy. It can expand knowledge and cooperation on a massive scale. But only — only — if we invest in what opposes automatism: the education of thought, transparent responsibility, and a deep commitment to truth.

The question at the heart of our era isn't what machines can do. It's what kind of human life they make possible.

As the great Spanish painter Francisco Goya once etched into his famous plate: "El sueño de la razón produce monstruos" (the sleep of reason breeds monsters). That warning has never felt more relevant. At FreeAstroScience, we believe education means never switching off your mind. Keep it active, keep it questioning, keep it alive.

We're here to explain complex ideas in simple terms — from the physics of neutron stars to the ethics of artificial intelligence. If this article stirred something in you, we're glad. You're not alone in this reflection.

Come back to FreeAstroScience.com whenever you need a place where curiosity meets clarity. We'll be here — thinking out loud, together.


📚 References & Sources

  1. Sgreccia, P. (2026). "Intelligenza artificiale e pace: responsabilità, linguaggio e convivenza." MagIA — Magazine Intelligenza Artificiale. Published March 11, 2026.
  2. Arendt, H. (1963). Eichmann in Jerusalem: A Report on the Banality of Evil. Viking Press.
  3. Ricœur, P. (1990). Soi-même comme un autre. Éditions du Seuil.
  4. Floridi, L. (2013). The Ethics of Information. Oxford University Press.
  5. Fricker, M. (2007). Epistemic Injustice: Power and the Ethics of Knowing. Oxford University Press.
  6. Han, B.-C. (2023). Die Krise der Narration. Matthes & Seitz Berlin.
  7. Jonas, H. (1979). Das Prinzip Verantwortung. Insel Verlag.
  8. Popper, K. (1959). The Logic of Scientific Discovery. Hutchinson & Co.
  9. Weil, S. (1951). Attente de Dieu. La Colombe.
  10. Crawford, K. (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press.

This article was written for you by FreeAstroScience.com — where complex ideas meet simple words.
