I’m writing this on a rattling commuter train, half-asleep people scrolling past headlines, half-awake algorithms curating what we see next. It’s a perfect setting to talk about conscience—the stubborn human capacity to say “no” even when the system says “yes.” I’ve just read a striking reflection on Pietro Polito’s new book “Preferirei di no. Fuori la guerra dalla storia,” and it hits a nerve in our age of automation: we’re delegating decisions to machines at the very moment we most need moral courage. The piece traces a long arc—from lonely acts of dissent under fascism, through the legal recognition of conscientious objection in 1972, to today’s quiet surrender to “what the model recommends.” It argues, bluntly, that AI can compute but it cannot object, and if we offload judgement entirely, we hollow out what makes us human.
Let me set out three bold claims I often hear—and then dismantle them. First, that “AI will eventually learn ethics on its own.” It won’t: optimisation isn’t conscience, and predictive accuracy doesn’t birth duty or remorse. Second, that “laws alone secure peace.” Vital, yes—but the law only started protecting objectors after years of civil resistance; statutes followed conscience, not the other way around. Third, that “the age of conscription is over, so the fight is done.” Not even close: the frontier has moved from the barracks to the factory, the marketplace, the codebase—anywhere we produce, trade, or deploy instruments of harm. If anything, the stakes are higher now.
I’m Gerd, President of FreeAstroScience, and my job is to make complex ideas simple without making them smaller. Today, I want to translate a century of Italian moral struggle into a practical, modern toolkit you can carry into your work, your feed, and your life.
The Old Fight: From Solitary “No” To Public Right
Polito’s story begins in difficult places: quiet, early acts of refusal under fascism; the post-war courage of Pietro Pinna in 1949, who anchored his objection in nonviolence; and a chain of people who paid in courtrooms and careers for the clarity of their convictions. Italy’s Constitution says, memorably, “L’Italia ripudia la guerra” (“Italy repudiates war”), and that spirit helped transform a private moral stand into a civic framework: the 1972 law that created alternative civilian service, a new legal path for those who refused to bear arms. It’s easy to forget how contested that path was. The law itself spelled out obligations, oversight, and penalties; it was not carte blanche—it was a structured recognition that conscience can limit state power. That hard-won shift—from individual testimony to collective awareness—is the muscle memory we need now.
The New Front: Objection In The Age Of Algorithms
Here’s the pivot that matters. The article argues that the most dangerous thing about automation isn’t speed or scale—it’s the “deactivation” of critical conscience when we outsource decisions to automated pipelines in courts, warfare, or surveillance. AI can optimise, classify, predict. It cannot disobey. It doesn’t experience conflict between the legal and the just, the efficient and the humane. When we accept automated efficiency as the only metric, we shrink the space where a person can stand up and say “I’d rather not”—the minimal form of freedom that anchors all the others.
You don’t need to be in a courtroom to feel this. Think of content moderation that buries nuance; hiring systems that score “fit” by past prejudice; escalation decisions filtered through a targeting model that never sweats. The danger isn’t just bad outputs; it’s the habit of obedience to a process that can’t blush. The old objection was a refusal to carry a weapon. The new objection may be a refusal to build one, ship one, or hide one inside code. The principle is the same: a human takes responsibility for saying no—not as a tantrum, but as a truth.
What We Can Practise Now
I’ve been in labs and organisations where the gravitational pull of “it works, ship it” is strong. The way out is boring and brave at the same time. Start with visibility: insist on audit trails, appeal paths, and clear human-on-the-loop checkpoints where someone can halt a system without being punished for prudence. Follow with proportionality: if a model drives a consequential decision—freedom, livelihood, safety—then the standard of evidence and review should be higher than “it usually performs well on the benchmark.” Finally, exercise discretion in the open, not as a quiet override, so teams learn that responsibility isn’t a bug; it’s the point. This is how a civic right becomes institutional culture—exactly how alternative service moved from fragile exception to recognised practice.
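If you build these systems, it may help to see the shape of such a checkpoint in code. What follows is a minimal sketch, assuming a Python pipeline; the Decision record, the stakes ladder, and the review threshold are illustrative choices of mine, not a published standard.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional

# Illustrative stakes ladder: the further right, the more consequential.
STAKES = ["routine", "livelihood", "safety", "freedom"]

@dataclass
class Decision:
    subject: str
    model_score: float                 # confidence from some upstream model
    stakes: str                        # one of STAKES
    outcome: Optional[str] = None
    audit_log: List[str] = field(default_factory=list)

    def log(self, event: str) -> None:
        # Every step leaves a timestamped trace that an appeal can replay.
        self.audit_log.append(f"{datetime.now(timezone.utc).isoformat()} {event}")

def requires_human_review(d: Decision, min_confidence: float = 0.95) -> bool:
    """Proportionality rule: high stakes or low confidence means
    'performs well on the benchmark' is not evidence enough."""
    return (STAKES.index(d.stakes) >= STAKES.index("livelihood")
            or d.model_score < min_confidence)

def decide(d: Decision) -> Decision:
    d.log(f"model scored {d.model_score:.2f} at stakes={d.stakes}")
    if requires_human_review(d):
        # The checkpoint: the pipeline halts here by design, and halting
        # is logged as normal operation, never as a failure to punish.
        d.outcome = "HELD_FOR_HUMAN_REVIEW"
        d.log("halted for human review; appeal path open")
    else:
        d.outcome = "AUTO_APPROVED"
        d.log("auto-approved: low stakes, high confidence")
    return d

if __name__ == "__main__":
    d = decide(Decision(subject="case-4711", model_score=0.97, stakes="livelihood"))
    print(d.outcome)
    print(*d.audit_log, sep="\n")
```

The detail worth copying isn’t the threshold; it’s that stopping is a first-class, logged outcome rather than an exception to be explained away.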
And yes, draw lines. If a product meaningfully contributes to harm—arms trade, escalatory targeting, coercive surveillance—object in the domain you control: don’t design it, don’t deploy it, don’t launder its legitimacy. The piece frames this as enlarging conscientious objection “against all weapons,” extending the old refusal to the modern supply chain of conflict. That’s not grandstanding; it’s maintenance work for democracy.
Education As A Quiet Rebellion
Polito closes with a simple, stubborn idea: teach disobedience as part of citizenship, not to glorify refusal, but to preserve dignity when law and conscience collide. The language is resonant—“coscientizzazione,” roughly “conscientization,” a clunky word for a crucial habit: noticing, naming, and owning responsibility, together. In the 1980s, this looked like basement meetings, scrappy newsletters, and endless conversations; today, it might look like open model cards, red-team reports, refusal policies, and communities of practice where engineers and ethicists sit at the same table. Different tools, same task: making space for judgement before it’s needed.
As someone who translates science for a living, I’ll put it this way. A telescope without a trained observer is just glass. An AI without a conscientious operator is just math. The instrument magnifies; the human decides.
A Short, Usable Checklist For Tomorrow Morning
If you’re a developer, researcher, policymaker—or just a citizen with a smartphone—try this three-step loop for any automated decision that touches a life. First, locate the stop button: who can interrupt the system, how fast, and with what accountability? Second, run the counterfactual: if this outcome were about you, what explanation would satisfy you enough to accept or appeal it? Third, practise the minimal objection: could you defend a respectful “I’d rather not” to your manager, your peers, your future self? If the answer is no, redesign the system or the process. The aim isn’t drama; it’s decency.
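For readers who think in code, here is one hypothetical way to carry that loop into a design review. It is a sketch under my own assumptions; the field names (stop_owner, explanation, objection_defensible) are invented for illustration.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ReviewAnswers:
    stop_owner: Optional[str]      # who can interrupt the system, and how fast?
    explanation: Optional[str]     # the explanation you would accept as the subject
    objection_defensible: bool     # could you defend "I'd rather not" to your peers?

def minimal_objection_loop(a: ReviewAnswers) -> List[str]:
    """Return the redesign work still owed before the system touches a life."""
    todo = []
    if not a.stop_owner:
        todo.append("No stop button: name who can halt the system, and how fast.")
    if not a.explanation:
        todo.append("No counterfactual: write the explanation you would accept yourself.")
    if not a.objection_defensible:
        todo.append("No defensible 'no': change the process until refusal is respectable.")
    return todo

if __name__ == "__main__":
    answers = ReviewAnswers(stop_owner=None,
                            explanation="score plus top factors, with an appeal route",
                            objection_defensible=True)
    for item in minimal_objection_loop(answers):
        print("REDESIGN:", item)
```

An empty list is the goal; anything else is the work the system still owes the people it touches.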
Why This Matters For FreeAstroScience
At FreeAstroScience, we’re in the business of demystifying complex systems—from galaxies to genomes to GPUs. The pattern is always the same: power without perspective breeds superstition. In astronomy, we cross-check instruments to avoid fooling ourselves. In AI governance, we cross-check power to avoid hurting others. Same discipline, different sky.
The article’s thesis—that algorithms can’t object and therefore must be governed by people who can—is both unromantic and urgent. Laws set the stage, as they did in 1972; culture writes the play, one workplace at a time. If the age of conscription created the conscientious objector, the age of automation must create the conscientious operator.
Look up from the feed for a second. The world won’t end if a model pauses for a human. But something essential ends when a human stops pausing for a model.
Written for you by Gerd, FreeAstroScience—where we keep complex principles simple, and human judgement front and centre.
Sources:
[1] https://magia.news/coscienzia-lobiezione-morale-al-tempo-dellautomazione/
[2] https://www.edizionieuropee.it/LAW/HTML/0/zn10_07_007.html
[3] https://sites.units.it/etica/2025_1/E&P2025_1.pdf
[4] https://old.liceoporporato.edu.it/CATALOGO/
[5] https://www.ibs.it/elogio-dell-obiezione-di-coscienza-libro-piero-polito/e/9788896177853
[6] https://www.storiaeletteratura.it/catalogo/preferirei-di-no/21220
[7] https://stensen.org/attivita/obiezione-di-coscienza-e-diritti/
[8] https://www.lafeltrinelli.it/preferirei-di-no-fuori-guerra-libro-pietro-polito/e/9791256930012