Can Artificial Intelligence Really Care for Us?


When an algorithm suggests a diagnosis, who is really taking care of you? Welcome, dear readers, to FreeAstroScience. Today we’ll walk into a space where medicine, algorithms, and philosophy collide: the era of artificial intelligence in healthcare.

In this article (written by us, only for you), we’ll ask an unsettling but straightforward question: can a machine take part in the act of caring, without turning the patient into a data point?

We’ll explore how AI already supports doctors, where its mathematical limits lie, why human dignity must stay central, and what a digital humanism of care could look like. Stick with us to the end: the deeper we go, the clearer the stakes become.



What changes when algorithms enter the clinic?

Let’s start from a key idea: instead of thinking of AI as artificial intelligence, many researchers suggest we see it as “augmented intelligence”.

So, not a robot doctor replacing human judgment, but:

  • extra eyes on radiology images,
  • predictive models for risks and complications,
  • decision-support tools for complex therapies.

In this view, AI becomes a kind of cognitive exoskeleton. It amplifies what clinicians can do, especially under pressure, but it does not replace the human decision-maker.

This distinction matters because healthcare is never just a technical process. It’s also:

  • profoundly anthropological (how we understand the person),
  • deeply ethical (who decides, and on what basis),
  • strongly relational (how trust and empathy are built).

If we reduce care to “input data → output decision”, we risk turning the patient into a case instead of a person.

So the real challenge isn’t only technological. It’s about what kind of humans we want to be in an age of Homo technosapiens—humans entangled with machines.


Why isn’t human intelligence just better computation?

It’s tempting to say:

“The brain is like a computer. If we make a big enough AI, we’ll get human-level care.”

But the philosophical and clinical critique is sharp: bio-intelligence and techno-intelligence are not the same thing.

To see why, compare them side by side:

Human clinician vs medical AI: key differences
| Dimension | Human clinician | Medical AI system | Clinical implication |
|---|---|---|---|
| Embodiment | Lives in a vulnerable, aging body | Runs on servers and silicon | Understands pain and vulnerability differently |
| Relationality | Builds trust, reads tone and context | Processes data points and patterns | Alliance with the patient is asymmetrical |
| Responsibility | Can be held ethically and legally accountable | Acts via code written and deployed by humans | Liability must trace back to people and institutions |
| Learning | Integrates experience, stories, culture | Learns from labeled data and feedback | Some knowledge never appears in datasets |

A human clinician is a body-mind unity, embedded in a world of gestures, smells, silences, cultural codes. An AI system manipulates symbols and numbers with extraordinary speed, but without living in that world.

So, if we say “the relationship of care is just pattern recognition,” we risk:

  • erasing corporeality (your body, your touch, your fatigue),
  • eroding relationality (the shared story between doctor and patient).

That’s why many ethicists resist a purely computational picture of medicine. They defend the idea that the patient remains a person with “almost infinite” dignity, not a vector of features in a model.


What do Gödel, Church and Turing tell us about ‘superintelligent’ medicine?

Now let’s bring in some hard math and logic. Don’t worry, we’ll keep it friendly.

Three towering results from the 20th century quietly shape how we should think about AI:

  • Gödel’s incompleteness theorems
  • The Church–Turing thesis
  • The halting problem (Turing)

Together, they whisper a powerful message:

Not everything true can be computed, and not every decision can be automated.

How does incompleteness limit the dream of perfect algorithms?

Gödel showed that in any sufficiently strong, consistent formal system S, there are true statements that cannot be proved within S.

In a simplified sketch:

$$\exists\, G :\quad \mathrm{Consistent}(S) \;\Rightarrow\; \big( S \nvdash G \big) \;\wedge\; \big( S \nvdash \lnot G \big)$$

Read in plain language:

There exists a statement G such that, if the system S is consistent, then S can prove neither G nor its negation.

The philosophical punchline for AI? Even in a perfect formal world, logical completeness is impossible. That’s a useful vaccine against fantasies of a “superintelligence” that could, in principle, decide everything for us.

What does ‘computable’ really mean?

The Church–Turing thesis says, roughly, that a function is “effectively computable” precisely when a Turing machine can compute it.

We can picture a computable function as:

$$f : \mathbb{N} \to \mathbb{N}$$

That is, a rule turning a natural number input into a natural number output, step by step.
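
As a tiny illustration (a toy example of ours, not from the article), here is such a recipe in Python: every step is elementary and bounded, and the computation halts on every input.

```python
# A minimal sketch of an effectively computable function f : N -> N:
# a finite, mechanical recipe that halts on every natural-number input.

def double(n: int) -> int:
    """Compute 2 * n one elementary step at a time, Turing-machine style."""
    result = 0
    for _ in range(n):   # exactly n bounded steps
        result += 2
    return result

print(double(7))  # prints 14
```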

Turing then proved something striking: there is no general algorithm that, for every program and every input, can decide whether that program will eventually stop or run forever. This is the famous halting problem.

So, even in the neat world of code, there are questions of the form:

“Will this process end well or spiral forever?”

that are in principle undecidable by a universal algorithm.
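
If you enjoy seeing the logic in code, here is a hedged Python sketch of Turing’s diagonal argument. The function halts below is purely hypothetical: the theorem says exactly that no real, fully general implementation of it can exist.

```python
# Sketch of Turing's diagonal argument. The oracle `halts` is hypothetical:
# the halting theorem says no general algorithm can implement it.

def halts(program, data) -> bool:
    """Pretend oracle: True iff program(data) eventually stops."""
    raise NotImplementedError("No algorithm can decide this for every case.")

def troublemaker(program):
    # Do the opposite of whatever the oracle predicts about program(program).
    if halts(program, program):
        while True:          # predicted to halt? then loop forever
            pass
    return "halted"          # predicted to loop forever? then halt at once

# Asking whether troublemaker(troublemaker) halts traps the oracle:
# whichever answer it gives, troublemaker does the opposite,
# so a fully general `halts` cannot exist.
```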

Can every medical decision be automated?

If not everything is provable or computable, then we should be careful with slogans like “AI will decide better than doctors in all cases”.

Medicine is full of:

  • ambiguous symptoms,
  • conflicting guidelines,
  • values that can’t be reduced to numbers,
  • language rich in polysemy, metaphor, and irony (“he’s carrying the world on his shoulders”).

These live in a space where formal rules help, but practical wisdom (what Aristotle would call phronesis) still matters.

A clinically relevant equation from machine learning is:

$$\text{Expected error} \;=\; \text{Bias}^2 \;+\; \text{Variance} \;+\; \text{Irreducible noise}$$

Even the best model will always carry some bias, variance, and noise. There is no algorithmic paradise where error becomes exactly zero.
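
To make that formula tangible, here is a hedged toy simulation of ours (made-up numbers, a deliberately too-simple model, nothing clinical): fitting a straight line to a quadratic “risk” signal across many resampled studies leaves bias, variance, and noise all stubbornly above zero.

```python
import numpy as np

rng = np.random.default_rng(0)
NOISE_SD = 0.3          # made-up measurement noise

def true_risk(x):
    # The "real" signal is quadratic, but our model family is a straight line,
    # so some bias is built in by construction.
    return x ** 2

def fit_one_study(n_patients=30):
    """Fit a straight line to one noisy resampled 'study'."""
    x = rng.uniform(0, 1, n_patients)
    y = true_risk(x) + rng.normal(0, NOISE_SD, n_patients)
    return np.poly1d(np.polyfit(x, y, deg=1))

# Repeat the study many times and inspect predictions at one test point.
x_test = 0.8
preds = np.array([fit_one_study()(x_test) for _ in range(2000)])

bias_sq = (preds.mean() - true_risk(x_test)) ** 2   # systematic model-family error
variance = preds.var()                              # sensitivity to the sample drawn
noise = NOISE_SD ** 2                               # irreducible measurement noise

print(f"bias^2 ~ {bias_sq:.3f}, variance ~ {variance:.3f}, noise ~ {noise:.3f}")
print(f"expected squared error ~ {bias_sq + variance + noise:.3f}")
```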

So, we shouldn’t grant AI a mythical access to meaning or a magical immunity from uncertainty. It is powerful, but not omniscient.


What is tacit knowledge, and why can’t AI fully capture it?

Hungarian scientist-philosopher Michael Polanyi called a big part of human knowing “tacit knowledge”:

We know more than we can tell.

In medicine, tacit knowledge appears when:

  • A nurse notices “something isn’t right” about a patient’s breathing.
  • A surgeon feels tension in tissue before a complication shows in monitors.
  • A GP reads anxiety in a pause, not in the words.

This kind of knowing is:

  • embodied (in the trained hands, the trained eye),
  • relational (built over years with patients and teams),
  • contextual (sensitive to local culture and constraints).

AI systems learn from what’s recorded: lab values, images, notes, billing codes. But much of tacit knowledge never makes it into the dataset. It’s lived, not written.

So, even extremely powerful AI remains—at best—an epistemic ally, not a complete substitute for the nuanced, tacit skills of clinicians.


How should we govern medical AI to protect patients?

Given these conceptual and mathematical limits, governance can’t just say “let’s use AI and see what happens.”

Classical bioethics usually lists four key principles:

  1. Autonomy
  2. Beneficence
  3. Non-maleficence
  4. Justice

Recent debates, especially in AI and digital medicine, add a crucial fifth: explicability (sometimes called explainability or intelligibility).

Here’s a compact overview:

Bioethical pillars for AI in healthcare
| Principle | Core idea | AI-in-health example |
|---|---|---|
| Autonomy | Respect the patient’s informed choices | Explain that an algorithm is involved in the decision |
| Beneficence | Act for the patient’s good | Use AI when it clearly improves outcomes |
| Non-maleficence | Avoid causing harm | Test models to detect dangerous failure modes |
| Justice | Distribute benefits and burdens fairly | Monitor performance across groups (age, gender, ethnicity) |
| Explicability | Make systems and decisions understandable | Document data sources, validation, and reasoning paths |

Explicability adds something very concrete (see the sketch after this list):

  • Disclose the training data and known biases.
  • Describe how the model was validated and how often it is re-evaluated.
  • Allow external audits, including performance by demographic subgroups.
  • Clarify who is responsible when something goes wrong.
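
To make those bullet points concrete, here is a minimal, hypothetical sketch of what an audit-ready record for a deployed clinical model might contain. Every field name and placeholder is our own illustrative choice, not an established schema or any real product’s documentation.

```python
# Hypothetical sketch of an audit-ready record for a deployed clinical model.
# All field names and placeholder values are illustrative, not a standard.
model_record = {
    "intended_use": "Flag chest X-rays for priority review, not standalone diagnosis",
    "training_data": {
        "sources": ["<hospital consortium dataset>"],          # disclose provenance
        "known_biases": ["few patients over 80 in the data"],  # disclose limits
    },
    "validation": {
        "last_evaluated": "<date>",
        "re_evaluation_interval_months": 6,
        "subgroup_performance": {},     # e.g. by age band, sex, ethnicity
    },
    "audit": {
        "external_auditor": "<independent body>",
        "reports_public": True,
    },
    "responsibility": {
        "designer": "<vendor>",
        "deploying_institution": "<hospital>",
        "clinical_supervisor": "<named role>",
    },
}
```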

This isn’t just paperwork. It’s what allows patients and professionals to trust that the system is aligned with their good.

We can even sketch a simple “responsibility function”:

If we call R the overall responsibility for an AI-supported decision, we might say:

$$R \;=\; R_{\text{designer}} \;+\; R_{\text{institution}} \;+\; R_{\text{clinician}}$$

The point isn’t to compute an actual number. The point is to remember that responsibility is distributed across:

  • those who design the system,
  • those who deploy and regulate it,
  • those who use it at the bedside.

Human control is needed both upstream (in design and deployment) and downstream (in supervision and use).


What happens to the doctor–patient relationship in the age of AI?

Here we touch the emotional heart of the matter.

Care is not just about getting the right diagnosis. It’s about feeling:

  • listened to,
  • respected,
  • supported in decisions that can change your life.

AI can help with many tasks—triage, risk prediction, image analysis. But it can also:

  • standardize interactions (same protocol for everyone),
  • push clinicians toward the screen, not the person,
  • encourage blind trust in “what the algorithm says”.

Two concrete risks stand out:

  1. Therapeutic alliance erosion. If the doctor follows the system without reflection, the relationship becomes “patient vs. protocol,” not “patient with clinician.”

  2. Informed consent overshadowed. If decisions are presented as “the algorithm decided,” the patient may feel they have no real choice, just a technical destiny.

That’s why many ethicists insist on supervision instead of delegation. AI can suggest; clinicians must:

  • interpret,
  • contextualize,
  • sometimes disagree with the algorithm,
  • and explain this process clearly to the patient.

An AI that is “reliable and explainable” is not a shiny label. It is a daily practice of using technology without letting it quietly govern us.


Who owns the data, and why does power concentration matter?

By the way, algorithms don’t fall from the sky. They run on infrastructure owned by someone, somewhere.

Healthcare AI lives inside a broader ecosystem marked by:

  • surveillance capitalism (data as raw material for profit),
  • platform monopolies (few actors control the pipelines),
  • digital divides (unequal access to connectivity, devices, and literacy).

So we should ask unsettling questions:

  • Who owns the medical images used to train models?
  • Who profits when algorithms are sold back to public hospitals?
  • Who bears the cost when a biased system worsens care for a minority group?

Without fair institutions and strong public oversight, “augmented intelligence” can turn into augmented inequality:

  • better AI for wealthy hospitals,
  • poorer tools for under-resourced regions,
  • opaque models that nobody outside a private company can properly audit.

A more just path would include:

  • public or cooperative datasets with clear governance,
  • open standards for audit, traceability and impact assessment,
  • investment in digital literacy for professionals and citizens.

So, power and justice are not side issues. They’re part of what care means in a connected world.


What could a ‘digital humanism of care’ really look like?

Let’s put everything together and aim a bit higher.

Many contemporary thinkers propose a digital humanism of care, which we can sketch in three moves:

  1. Theoretical move – Recognize limits and specificity

    • Accept the logical limits of computation (Gödel, Turing).
    • Respect the uniqueness of human language and practical judgment.
    • Stop treating AI as a candidate replacement for human meaning.
  2. Ethical–institutional move – Design for responsibility

    • Build systems that are explainable by design, not as an afterthought.
    • Keep humans in the loop, with clear authority to override.
    • Allocate and track responsibility across designers, institutions, clinicians.
  3. Anthropological move – Keep relation and body at the center

    • Protect the time and space for real encounters between doctor and patient.
    • Treat the body not just as a data source, but as a lived reality.
    • Refuse to see the person as a means to datasets; always as an end.

In that vision, AI becomes:

  • an epistemic ally (helping us know more),
  • a cognitive prosthesis (extending our reasoning),
  • but never a sovereign algorithm that silently rules care.

So, how do we keep care human while using ever-smarter machines?

Let’s pause and breathe for a second.

We’ve seen that:

  • AI in healthcare is powerful, but not magical.
  • Mathematical results (Gödel, Church, Turing) remind us that not everything can be formalized or automated.
  • Human intelligence is embodied, relational, and partly tacit—no dataset captures it all.
  • Governance requires not only autonomy, beneficence, non-maleficence, and justice, but also explicability.
  • Power and data ownership shape who really benefits from medical AI.
  • A digital humanism of care is possible if we treat AI as an ally, not a ruler.

Now comes the personal part for each of us:

  • As patients, we can ask: “How was this decision made? Is an algorithm involved? Can you explain it to me?”
  • As clinicians, we can cultivate the courage to say: “The system says X, but given your context, I recommend Y—and here’s why.”
  • As citizens and policymakers, we can demand transparent governance, public audits, and support for fair, solidarity-based healthcare systems.

Reason, here, is our most precious ally. If we switch it off and outsource everything to opaque systems, il sonno della ragione genera mostri—the sleep of reason breeds monsters.

At FreeAstroScience, we believe the opposite path is still open: curious, critical, compassionate intelligence, supported (not replaced) by technology.

Let’s stay awake together. Let’s keep asking hard questions, understanding the formulas, reading the tables, and defending the simple truth that every patient is a person, not just a data point.


This article is inspired by recent work on AI, medicine and neuroethics, especially reflections by Nicola Di Bianco and Palma Sgreccia on care in the age of artificial intelligence.
