Have you ever wondered what it really costs — not in dollars, but in water, carbon, and human dignity — every time you ask an AI a question?
Welcome to FreeAstroScience.com, where we break down complex scientific ideas into plain language. Whether you're a student, a curious mind, or someone scrolling through your morning commute, we're glad you're here. My name is Gerd Dani, and I'm writing this from my wheelchair — which has never stopped me from standing up for science, for people, and for the truth.
Today, we're going to talk about something that touches every one of us. Artificial Intelligence. It's in our phones, our hospitals, our courtrooms, and our job applications. We celebrate it as a miracle. But behind the shiny interface, there's a shadow most of us never see.
Stick with us to the end. What you'll discover might change the way you think about the technology you use every single day.
1. How Much Does Training an AI Model Really Cost the Earth?
Here's something that stopped me in my tracks. Training a single large language model (LLM) — just one — produces between 200 and 600 tonnes of CO₂. To put a face on that number: OpenAI's GPT-3 alone generated roughly 552 tonnes of carbon emissions during its training phase.
That's not a typo. Five hundred and fifty-two tonnes. From one model. One time.
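Where does a figure like that come from? Here's a rough back-of-the-envelope sketch in Python. Both inputs (about 1,287 MWh of training energy and a grid carbon intensity near 0.43 kg CO₂e per kWh) are assumptions taken from published third-party estimates for GPT-3, not official figures, and the true values vary with hardware, data centre, and power grid:

```python
# Back-of-the-envelope estimate of training emissions.
# Both inputs are assumed values from published estimates;
# actual figures vary by hardware, data centre, and power grid.

training_energy_kwh = 1_287_000      # ~1,287 MWh, a commonly cited estimate for GPT-3
grid_intensity_kg_per_kwh = 0.429    # assumed carbon intensity (kg CO2e per kWh)

emissions_tonnes = training_energy_kwh * grid_intensity_kg_per_kwh / 1_000
print(f"Estimated training emissions: {emissions_tonnes:,.0f} tonnes CO2e")
# Prints roughly 552 tonnes, the figure quoted above.
```

The maths is just a multiplication, but it exposes the two levers that matter: how much energy training burns, and how clean the grid behind the data centre is.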
And we rarely hear about it. The people typing prompts into chatbots don't see smokestacks. They see a clean text box. A blinking cursor. The environmental toll stays invisible — hidden behind elegant user interfaces and cheerful marketing language.
But the damage doesn't stop at carbon. When we look at the full life cycle of AI systems — hardware production, GPU manufacturing, continuous energy consumption during operation, raw material extraction — the picture grows darker.
The Numbers Behind the Machine
By 2025, AI's power demand reached about 23 gigawatts — nearly half of all global data centre power usage. The carbon emissions from that? Comparable to those of the entire city of New York. And the water? Around 764.6 billion litres, a volume roughly equal to the world's entire annual bottled water consumption.
Let that sink in. We're draining lakes to train chatbots.
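A quick sanity check makes these magnitudes tangible, because a gigawatt is a rate, not an amount. The sketch below sustains 23 GW for a full year and converts the water figure into Olympic swimming pools; the year-round assumption and the pool volume (about 2.5 million litres) are ours, for scale only:

```python
# Convert a sustained power draw into annual energy,
# and put the water figure on a human scale.

power_gw = 23                      # sustained AI power demand cited above
hours_per_year = 24 * 365          # assumes the load runs around the clock

energy_twh = power_gw * hours_per_year / 1_000    # GWh -> TWh
print(f"~{energy_twh:.0f} TWh of electricity per year")   # ~201 TWh

water_litres = 764.6e9             # water figure cited above
pool_litres = 2.5e6                # assumed volume of one Olympic pool
print(f"~{water_litres / pool_litres:,.0f} Olympic pools of water")   # ~305,840 pools
```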
2. Why Can't Technology Simply Fix Its Own Problems?
There's a comforting story we tell ourselves. It goes something like this: "Technology created the problem, so technology will fix it." Incremental innovation. Better chips. Greener data centres. The market will sort it out.
This is what scholars call the technocentric paradigm — the belief that progress is unlimited and that technological systems can always self-correct toward sustainability.
It's a seductive idea. It's also incomplete.
As Nicola Rotundo writes in his analysis for MagIA (the Magazine on Artificial Intelligence at the University of Turin), this assumption "ignores the physical reality of thermodynamic and environmental constraints". In plain language: you can't code your way around the laws of physics. The planet has hard limits. Energy is finite. Water is finite. Rare earth minerals are finite.
No software update changes that.
The technocentric view treats nature as an infinite warehouse. But nature doesn't negotiate. When we push past the boundaries, we don't get an error message — we get droughts, heat waves, and ecosystem collapse.
A Quick Look at the Thermodynamic Reality
The Second Law of Thermodynamics reminds us that every energy transformation creates waste — typically as heat. No computational process is 100% efficient. Every AI query, every model training run, every GPU cycle generates entropy. There's no free lunch in physics, and there won't be one in AI.
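For readers who enjoy the physics, there is even a hard theoretical floor. Landauer's principle states that irreversibly erasing a single bit of information at temperature T must dissipate at least a minimum amount of energy; at room temperature (about 300 K) that works out to:

$$
E_{\min} = k_B T \ln 2 \approx 1.38 \times 10^{-23}\,\mathrm{J/K} \times 300\,\mathrm{K} \times 0.693 \approx 2.9 \times 10^{-21}\,\mathrm{J\ per\ bit}
$$

Real chips dissipate many orders of magnitude more than this, so engineers still have enormous room to improve. But the floor itself is set by physics, not software, and no optimization ever removes it.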
We need a different framework. One that doesn't just ask "How can we make AI faster?" but "How can we make AI wiser?"
3. Who Is Responsible When an Algorithm Discriminates?
Let's shift gears. Beyond the environmental toll, AI poses ethical questions that cut even deeper.
Picture this scenario. A hiring algorithm screens thousands of job applicants. It systematically rejects women and ethnic minorities at higher rates. Not because someone wrote a line of code that says "reject women." But because the training data — years of historical hiring decisions — already contained those biases.
The algorithm learned discrimination the same way a sponge absorbs water: automatically and without awareness.
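To make that mechanism concrete, here's a deliberately tiny toy model in Python. It is an illustration only, not the code of any real hiring system; every number and name in it is invented. The historical decisions are biased, the model never sees anyone's group, and the bias still survives through a proxy feature:

```python
import random
random.seed(0)

# Toy illustration only: invented data, invented bias rates.
# In this pretend history, qualified applicants from group A were
# hired 80% of the time, those from group B only 40% of the time.
def historical_decision(qualified, group):
    if not qualified:
        return 0
    return 1 if random.random() < (0.8 if group == "A" else 0.4) else 0

# Build the "historical" training data. Postcode is a proxy feature
# that happens to correlate with group membership (assumed here).
data = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    postcode = "north" if group == "A" else "south"
    qualified = random.random() < 0.5
    data.append((postcode, qualified, historical_decision(qualified, group)))

# "Train" the simplest possible model: the observed hire rate for
# qualified applicants in each postcode. No group label in sight.
def learned_hire_rate(postcode):
    outcomes = [hired for pc, q, hired in data if pc == postcode and q]
    return sum(outcomes) / len(outcomes)

print(f"Qualified 'north' applicants: {learned_hire_rate('north'):.0%}")
print(f"Qualified 'south' applicants: {learned_hire_rate('south'):.0%}")
# Prints roughly 80% vs 40%: the model reproduces the historical
# discrimination without ever being told anyone's group.
```

Notice what the toy shows: deleting the group column changes nothing, because the proxy carries the signal. That is exactly why bias is a structural property of the data and its history, not a bug you can simply patch out.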
Here's where things get uncomfortable. Rotundo makes a sharp observation: these aren't just "technical bugs" that a patch can fix. They are "expressions of power structures and inequality inscribed in the code itself".
That sentence deserves a second read.
Three Areas Where Algorithmic Bias Harms Real People
- Hiring systems that discriminate against women and minorities
- Credit scoring algorithms that reproduce historical economic inequality
- Facial recognition that performs with dramatically lower accuracy on non-Caucasian faces
Now, here's the philosophical knot: who is accountable?
An AI system has no consciousness. No intention. No moral compass. It can't be "guilty" of discrimination the way a human can. So where does the responsibility fall? On the programmers who built the model? The company that deployed it? The HR manager who trusted it blindly?
This is what ethicists call the "moral responsibility gap" — and right now, we don't have a clean answer. What we do know is that ignoring the question isn't an option.
4. What Happens to Human Dignity When Machines Take Over?
There's a quieter crisis unfolding alongside the environmental and ethical ones. And it has to do with something deeply personal: the meaning of work.
AI is replacing translators, coders, journalists, legal researchers, and graphic designers at a pace that would've seemed like science fiction ten years ago. The standard economic response — "new jobs will emerge" — has started to ring hollow. Even if it's partly true, it ignores two realities:
- The speed of this transition far outpaces most workers' ability to retrain.
- Many of the "new jobs" demand specialized skills that aren't equally accessible to everyone.
But there's something even more fundamental at stake. Work isn't just an economic activity. For many of us, it's how we express creativity, build identity, and feel useful. When a machine does in seconds what you spent years learning, something inside shifts. And no unemployment check can replace that feeling of purpose.
As someone who has spent his life navigating a world not always built for him, I understand what it means to fight for your place, your voice, your contribution. Every person's dignity — regardless of ability, background, or job title — is non-negotiable.
The Values Embedded in the Code
Here's the thing we often forget: AI isn't neutral. Every model carries the values, goals, and worldview of its creators. If the driving logic behind AI development remains pure profit maximization, algorithmic efficiency without moral limits, and power concentrated in a handful of mega-corporations — then we're accelerating toward what Rotundo calls "a future of profound inhumanity and ecological devastation".
That's not fear-mongering. That's a logical consequence of the current trajectory.
5. Can AI Become a Tool for Liberation Instead of Domination?
Now — and this matters — we aren't here to leave you in despair. Because the story doesn't have to end that way.
Rotundo's article points to an alternative vision, one rooted in the common good. Drawing on Catholic social thought — but speaking in terms that resonate far beyond any single faith tradition — he argues for a shift from the logic of domination to the logic of service.
What would that look like in practice?
Imagine AI designed not to maximize clicks or shareholder value, but to:
- Free people from exhausting physical labour
- Help diagnose diseases in underserved communities
- Reduce isolation for the elderly and disabled
- Support education in places where teachers are scarce
In this vision, AI becomes a tool that liberates — from fatigue, from illness, from loneliness — and gives us back the time and energy for what truly matters: "the search for meaning, the building of authentic relationships, the service of the common good".
That's not naive optimism. That's a design choice. And design choices are made by people. By us.
What Ethical AI Governance Looks Like
Renewing the ethical governance of AI and the economy means renewing our understanding of what it means to live well together. Not a life of hyper-consumption and infinite stimulation. But a life where:
- Every person's dignity is recognized unconditionally
- Community and solidarity are treated as invaluable
- The natural world is respected as a gift, not exploited as an endless resource
This demands political will, international cooperation, and — yes — a bit of moral courage. The choice of how we develop, deploy, and govern AI is an ethical, political, and spiritual decision.
Final Thoughts: The Choice Is Ours
Let's take a breath and look at where we are.
We've seen that AI's environmental footprint is staggering — 552 tonnes of CO₂ for a single model, energy consumption rivalling entire cities, and water usage that matches the world's bottled supply. We've recognized that the technocentric dream of endless self-correcting progress runs headfirst into the hard walls of physics. We've confronted the reality of algorithmic bias, the moral responsibility gap, and the quiet erosion of human dignity in the workplace.
And we've glimpsed a different path. One where AI serves people rather than the other way around.
The sleep of reason breeds monsters — Goya told us that centuries ago, and it still holds true. At FreeAstroScience.com, we believe in keeping your mind awake. We exist to explain science in plain terms, because knowledge isn't a luxury. It's a right. And an informed mind is the best defence against a world that sometimes prefers you not to think too hard.
So don't turn off your curiosity. Don't scroll past the hard questions. The future of AI — and honestly, the future of our species on this planet — depends on whether enough of us decide to pay attention.
Come back to FreeAstroScience.com whenever you need clarity in a noisy world. We'll be here, turning complexity into understanding, one article at a time.
Because science belongs to everyone. Including you.
Sources
- Rotundo, N. (2026). "Intelligenza Artificiale e bene comune: oltre il tecnocentrismo, verso una visione antropologica" ["Artificial Intelligence and the Common Good: Beyond Technocentrism, Toward an Anthropological Vision"]. MagIA – Magazine Intelligenza Artificiale, University of Turin. Published February 15, 2026.
- Mitu, N. E. & Mitu, G. T. (2024). "The Hidden Cost of AI: Carbon Footprint and Mitigation Strategies." Revista de Științe Politice.
- de Vries-Gao, A. (2025). "The Carbon and Water Footprints of Data Centers and What This Could Mean for Artificial Intelligence." Patterns, 7, 101430.
- Dicastery for the Doctrine of the Faith & Dicastery for Culture and Education (2025). Antiqua et Nova: Note on the Relationship Between Artificial Intelligence and Human Intelligence. Vatican.
- Rotundo, N. (2025). "Artificial Intelligence and Human Person." Cuadernos de Teología – Universidad Católica del Norte, 17: e6665.
