Your Doctor Uses AI — Why Aren't They Telling You?


Have you ever wondered whether your doctor is quietly consulting an AI before giving you a diagnosis — and whether that changes what you should expect from that conversation?

Welcome to FreeAstroScience.com. We're glad you're here. While we're best known for astronomy and physics, we care deeply about every branch of science that touches your life. And right now, few scientific developments hit closer to home than what's happening inside clinics, hospitals, and doctor's offices around the world. Generative artificial intelligence — the technology behind tools like ChatGPT and Claude — isn't knocking on medicine's door anymore. It walked in. It sat down. And most patients have no idea it's there.

We wrote this article to give you a clear, honest picture of what's changing, what it means for the trust between you and your doctor, and how a concept called "triadic care" might protect the human side of medicine while letting AI do what it does best. Whether you're a patient, a medical professional, or just someone who likes to stay informed, this one's worth reading to the end.

At FreeAstroScience, we explain complex scientific principles in simple terms. We want you never to turn off your mind — to keep it active, curious, and alert. Because, as Goya warned us, the sleep of reason breeds monsters.

Let's get into it.


When a Third Voice Enters the Exam Room: AI, Trust, and the Future of Your Healthcare


Is AI Already in Your Doctor's Office? {#ai-already-here}

The short answer: almost certainly yes.

In 2024, a survey of 1,183 US physicians found that two-thirds were already using AI tools in practice — up from 38% just one year earlier. That's a 78% jump in twelve months, according to the American Medical Association. In the United Kingdom, general practitioners report turning to AI for checking drug interactions, generating diagnostic suggestions, and drafting patient letters. Early research shows generative AI can broaden the range of possible diagnoses and sharpen clinical reasoning.

Patients aren't sitting on the sidelines either. In Australia, one in ten adults has already sought health advice from ChatGPT. People describe using it to search for explanations, put together care plans, and seek second opinions.

Consider these two real stories. A child with tethered cord syndrome received a correct diagnosis only after his mother consulted ChatGPT. In another case, a carer used AI to complete a care plan while waiting for specialist input. These aren't science fiction. They happened.

Meanwhile, ambient AI scribes — systems that listen to doctor-patient conversations and automatically draft clinical notes — have been used over 2.5 million times in some health systems, cutting the documentation burden and improving note quality.

The technology isn't arriving. It arrived.


What Is "Triadic Care" and Why Does It Matter? {#triadic-care-explained}

For hundreds of years, the clinical consultation has been a conversation between two people: you and your doctor. You describe your symptoms. They listen, examine, diagnose, and recommend.

That model is changing. As David Fraile Navarro and colleagues describe in a 2025 BMJ Analysis, the consultation "is becoming a three-way conversation in which explanations are shaped by doctor, patient, and AI". They call this triadic care.

Italian researchers Bolzonella, Casini, Conte, and Ivis make the same point with even sharper language. Generative AI tools "don't stay outside the care relationship — they enter as a third actor," shifting the consultation from an exchange of information to a "confrontation of interpretations".

Think about what that means for a moment. It's no longer just your doctor interpreting your symptoms. Now there are three interpretations in the room: yours, your doctor's, and the AI's. And the three don't always agree.

This shift — from dyadic to triadic — isn't a minor tweak. It touches the power dynamics, the accountability, and the emotional fabric of one of the most personal relationships in your life.

Why the word "care" matters here

We're not talking about AI answering trivia questions or writing emails. We're talking about the space where someone says, "I'm scared something is wrong with me." The word "care" in triadic care isn't accidental. It reminds us that technology is entering a space built on vulnerability and trust. The question isn't whether AI can provide accurate information. The question is whether we can keep the care in healthcare when a machine joins the conversation.


Why Isn't This Just Another "Dr. Google"? {#not-another-dr-google}

Every time new health technology appears, someone says: "Oh, it's just the new Dr. Google." The comparison is understandable. It's also wrong.

Here's the difference, and it matters more than most people realize.

Traditional search engines like Google give you a list of links. You still have to read, compare, and judge the quality of each source yourself. As the Italian commentary explains, search engines offer "a kind of guided bibliography, which then must be read and verified". The answers tend to be generic and taken out of context.

Large language models (LLMs) do something fundamentally different. They produce synthetic explanations and reasoning in natural language — often in a way that can't be completely verified. They don't just answer a question. They enter the reasoning process, suggest choices with arguments, and guide decisions.

The BMJ article puts it this way: unlike a web search, which yields links, chatbots deliver "synthesised reasoning in natural language," making medical thinking feel co-produced — extending cognition beyond the individual into interaction with an artificial partner.

The verification gap nobody's talking about

And here's the number that should stop you cold:

Only 19% of users cross-check chatbot outputs.
Compare that with 50% who verify information from search engines.

Yet users trust AI-authored health responses just as much as those from actual doctors — even when the responses are inaccurate.

Let that sink in. We trust AI as much as physicians, but we check its work far less than we check Google's. That gap between trust and verification is exactly where harm hides.


What Happens to Trust When Nobody Mentions the AI? {#trust-gap}

Trust is the invisible foundation of every good doctor-patient relationship. You trust your doctor to listen, to know what they're talking about, and to tell you the truth. What happens to that foundation when a third party is quietly shaping the conversation?

Right now, most AI use in clinical settings goes undocumented. Policies remain uneven, and health records rarely note AI's role in a decision. Some patients hesitate to mention they've used AI — just as they once feared being dismissed for consulting Dr. Google.

The evidence on transparency, though, is surprisingly encouraging.

A US qualitative study using cardiovascular AI scenarios found that patients reported higher trust — in both their clinician and the clinical decision — when AI use was openly acknowledged and the outputs were reviewed together. Transparency didn't weaken the relationship. It strengthened it.

But there's a twist. In a 2023 survey of 1,455 patient advisory members at Duke Health, AI-drafted portal messages were rated more empathetic than messages written by humans. Yet satisfaction dropped once participants learned the messages came from AI.

We want empathy. We just want it to come from a human heart.

Trust filters everything

Patients who already trust their clinician tend to expect AI to help. Those with lower trust may see AI either as an alternative authority or view it with added suspicion. The existing quality of the relationship acts like a lens through which AI gets interpreted.

This is a pattern worth remembering. AI doesn't create trust or destroy it from scratch. It amplifies whatever is already there.

How Existing Trust Shapes Patient Response to AI in Care

| Existing Trust Level | Likely Response to AI | Risk |
| --- | --- | --- |
| High trust in clinician | Expect AI to help; see it as an added resource | Over-reliance; reduced questioning |
| Low trust in clinician | May see AI as an alternative authority — or distrust it too | Conflict; fragmented decision-making |
| Undisclosed AI use (by either side) | Hidden influence on reasoning; harder to assess | Eroded autonomy; accountability gaps |
| Transparent AI use (reviewed together) | Higher trust in clinician and decision | Consent fatigue if overused |

Can the Same AI Be Both Poison and Medicine? {#epigenetics-of-ai}

The Italian researchers offer one of the most striking analogies we've come across. They borrow a concept from biology: epigenetics.

In biology, epigenetics describes how the environment can change the way genes express themselves — without altering the DNA itself. A gene doesn't change. But what it does can change completely depending on the conditions around it.

Bolzonella, Casini, Conte, and Ivis argue that AI in healthcare works exactly the same way. The technology doesn't change. But what it becomes — helpful or harmful — depends entirely on the relational environment it enters.

Here's how they put it:

"If AI is inserted into a vertical, bureaucratic relationship, it becomes a poison that feeds conflict and compulsive self-diagnosis. If, instead, it is cultivated within a trusting care relationship, it can transform into a medicine capable of regenerating clinical time."

That's not just a metaphor. It's a practical framework. The same chatbot, giving the same medical information, can either help a patient become more informed and confident — or drive them into anxiety and confrontation with their doctor. The difference isn't in the technology. It's in the relationship surrounding it.

The quality of the doctor-patient relationship determines the function of AI. Not the other way around.

This is why we can't have a meaningful conversation about AI in healthcare without first having a conversation about the state of healthcare relationships. And right now, as the Italian authors point out, medicine faces "a paradoxical crisis: while reaching unprecedented heights of specialization and scientific precision, the doctor-patient relationship shows widespread dissatisfaction and growing conflict".

If we drop AI into that crisis without addressing the underlying dysfunction, we're pouring gasoline, not water.


How Could AI Work as a Co-Pilot Instead of a Replacement? {#copilot-model}

The fear that AI will replace doctors is widespread. It's also, for the moment at least, misguided.

The Italian commentary is blunt about this: the real mistake is seeing AI as a "cognitive competitor" to the physician or as some kind of intelligent secretary. In general practice, AI should be treated as a co-pilot.

Why general practice specifically? Because in an era of hyper-specialization, the family doctor — the general practitioner, the MMG (Medico di Medicina Generale) in Italian terms — remains the only "specialist of the care relationship". They're the one professional who walks with you through time, managing chronic conditions, juggling multiple illnesses, and dealing with the messy, complicated reality of your actual life.

AI can free that doctor from the crushing administrative load that turns every appointment into a transaction. By providing quick summaries and data analysis, it gives clinicians back the space for listening — for actually hearing what the patient is living through.

The "Socratic mirror"

Here's a concept that stuck with us. The Italian authors describe AI as a potential "Socratic mirror" — a tool that reflects questions back to both sides of the care relationship:

  • For the patient, AI promotes genuine empowerment — helping them understand their condition without sliding into confrontation with their doctor.
  • For the physician, AI serves as a reflective surface, helping them stay on course with a personalized care plan and identify blind spots in the relationship.

And then comes the line that reframes the entire debate:

"Rather than posing the false dilemma 'AI yes — AI no,' the real question is to make explicit: 'AI how.'"

If AI is a cognitive mirror, the Italian authors remind us, "the real thing worth examining closely is us".

Navarro et al. make a complementary point in the BMJ. They argue that clinical expertise is shifting "from producing answers to interpreting them with patients". Generalist doctors — the ones already comfortable with uncertainty, conflicting information, and messy trade-offs — may find this transition more natural than specialists accustomed to clear-cut protocols.

The skill of the future isn't knowing everything. It's knowing what to do with AI-generated information in the context of a specific patient's life.


What Would Real Transparency Look Like? {#transparency-toolkit}

It's one thing to say "be transparent about AI." It's another to actually build transparency into the daily flow of clinical care. The BMJ Analysis offers a practical blueprint.

A simple opening question

Many patients are reluctant to mention they've consulted an AI — the same way they used to hide their Google searches from their GP. One non-judgmental question can change the dynamic entirely:

"Have you used AI to look into this? Shall we review it together?"

That sentence normalizes AI use, removes the stigma, and opens the door to shared interpretation. It costs nothing. It takes five seconds. And it could change the entire consultation.

An "AI involvement" field in health records

Navarro et al. propose something surprisingly simple: add a structured "AI involvement" field to electronic health records. Options would include:

  • Tool name (e.g., ChatGPT, Claude, ambient scribe system)
  • Purpose (e.g., drug interaction check, differential diagnosis, note drafting)
  • Clinician response — accepted, modified, or rejected with a brief reason

This takes minimal effort but makes AI's role visible, auditable, and — over time — learnable. Recording a brief rationale for rejecting an AI suggestion makes patterns visible for safety monitoring and equity tracking, since model performance can vary across different patient populations.
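To make the proposal concrete, here is a minimal sketch of what such a structured field could look like as a data structure. Everything here — the `AIInvolvement` class, its field names, and the example values — is our own illustration of the three bullet points above, not part of any real electronic health record standard.

```python
from dataclasses import dataclass, asdict
from enum import Enum


class ClinicianResponse(Enum):
    """The three outcomes the BMJ authors suggest recording."""
    ACCEPTED = "accepted"
    MODIFIED = "modified"
    REJECTED = "rejected"


@dataclass
class AIInvolvement:
    """Hypothetical structured 'AI involvement' entry for a health record."""
    tool_name: str                # e.g. "ChatGPT", "Claude", an ambient scribe
    purpose: str                  # e.g. "drug interaction check", "note drafting"
    response: ClinicianResponse   # accepted, modified, or rejected
    rationale: str = ""           # brief reason, mainly for modified/rejected


# Example: the clinician rejected a differential-diagnosis suggestion.
entry = AIInvolvement(
    tool_name="ChatGPT",
    purpose="differential diagnosis",
    response=ClinicianResponse.REJECTED,
    rationale="suggestion inconsistent with imaging findings",
)

record = asdict(entry)  # plain dict, ready to store alongside the clinical note
print(f"{entry.tool_name}: {entry.response.value}")
```

In a real system such an entry would be validated against the record schema and audited; the point of the sketch is simply that three small fields are enough to make AI's role visible.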

Five minimum transparency standards

The BMJ article outlines five minimum standards that every generative AI tool used in clinical care should meet:

Minimum Transparency Standards for Generative AI Tools in Care

| Standard | What It Requires |
| --- | --- |
| 1. Purpose & Validation | State intended uses, clinical contexts evaluated, and headline performance with typical error rates |
| 2. Known Limits | Describe situations where performance drops off or fails |
| 3. Data & Equity | Summarize demographic and clinical data used; report performance across key patient groups |
| 4. Updates & Versioning | Note the update schedule, what changed, and how changes are monitored |
| 5. Governance & Data Use | Explain data handling, audit trails, and routes for incident reporting and review |

These aren't aspirational goals. They're the practical floor — the bare minimum that should exist before any AI tool touches a clinical decision.
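Read as engineering requirements, the five standards amount to a completeness checklist. The sketch below is our own illustration — the key names are invented for this example, not drawn from the BMJ article or any regulation — showing how a buyer or auditor might flag which standards a tool's published documentation fails to address.

```python
# The five minimum standards from the BMJ Analysis, as checklist keys.
# Key names are our own shorthand, not an official vocabulary.
REQUIRED_STANDARDS = [
    "purpose_and_validation",
    "known_limits",
    "data_and_equity",
    "updates_and_versioning",
    "governance_and_data_use",
]


def missing_standards(documentation: dict) -> list[str]:
    """Return the minimum standards a tool's documentation fails to address."""
    return [s for s in REQUIRED_STANDARDS if not documentation.get(s)]


# Hypothetical documentation for a tool that covers only two of the five.
tool_docs = {
    "purpose_and_validation": "Intended for drafting clinical notes in primary care.",
    "known_limits": "Performance drops for non-English consultations.",
}

print(missing_standards(tool_docs))
# -> ['data_and_equity', 'updates_and_versioning', 'governance_and_data_use']
```

A tool missing any entry on that list sits below the practical floor the article describes, which is exactly the kind of pattern a simple audit script can surface.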


What Are the Dangers We Should Watch For? {#risks-and-blind-spots}

We'd be doing you a disservice if we painted this as purely hopeful. The risks are real and, in some cases, already causing harm.

Misdiagnosis and delayed treatment

For every success story, there's a cautionary one. Generative AI systems have misclassified neurological symptoms, leading to delayed stroke treatment. When a confident-sounding chatbot gets it wrong, the consequences can be life-threatening — especially given that only 19% of users bother to double-check.

The black box problem

Large language models don't show their work the way a textbook or a clinical guideline does. Research has shown that "reasoning models don't always say what they think". When neither doctor nor patient can trace how a recommendation was produced, medical advice becomes something to interpret rather than verify. That's a sea change in the epistemology of medicine.

Bias baked into the data

Recent research has documented sociodemographic biases in medical decision-making by large language models. If the training data doesn't represent you — your age, your ethnicity, your language, your socioeconomic background — the AI's suggestions may not serve you well. AI systems trained on data from highly specialized clinical settings could risk nudging both clinicians and patients toward defensive medicine.

Commercial pressure with limited oversight

Most AI tools entering healthcare are commercial products driven by market incentives. Their training data, update schedules, and data reuse policies often lack visibility. Without clinical oversight, product changes can outpace safety processes and quietly reshape how care is delivered.

The hidden cost of non-disclosure

When AI use isn't explicitly discussed, it becomes harder to assess its effect on clinical judgment, patient autonomy, and the therapeutic relationship. That hidden influence — on both sides of the consultation — may be the most insidious risk of all.

Key Risks of Generative AI in Clinical Settings

| Risk Category | Example / Evidence |
| --- | --- |
| Misdiagnosis | Neurological symptoms misclassified → delayed stroke treatment |
| Unverifiable reasoning | "Reasoning models don't always say what they think" |
| Low user verification | Only 19% cross-check AI chatbot outputs |
| Sociodemographic bias | Documented biases in LLM medical decision-making |
| Commercial opacity | Limited visibility into training data, updates, and data reuse |

As the Italian commentary wisely notes: "The point that's often underestimated isn't that AI answers badly (and it does), but how we prompt it. If AI is a cognitive mirror, the real thing worth examining closely is us".


What Does Medical Expertise Even Mean Now? {#future-of-expertise}

This might be the deepest question buried in both of our sources. And it's one that reaches beyond medicine into every field where knowledge is being reshaped by AI.

The BMJ article ends with a question that deserves to ring far beyond academic journals:

"What does clinical expertise mean when knowledge is abundant, but verification is scarce?"

Working without AI may soon feel like working without the internet: possible, but increasingly impractical. Clinical expertise is shifting — from producing answers to interpreting them with patients, translating AI suggestions, testing them against clinical context, and weighing them alongside patient values and lived experience.

The generalist doctor who already lives in uncertainty — juggling incomplete data, conflicting guidelines, and individual patient preferences — may find this transition natural. But the shift is substantial: from reasoning alone to helping patients weigh generative explanations against their own circumstances and values.

The Italian authors add an educational dimension that we find compelling. They argue that universities must begin training doctors who can govern digital participation in care — physicians who know not just medicine, but how to prevent the technological co-pilot from taking the controls away from the human at the center. "The general practitioner of the future isn't the one who knows everything," they write, "but the one who also knows how to govern digital participation in care".

An AI with an educational mission

There's one more idea from the Italian commentary that deserves attention. They propose AI with a genuine educational function — aimed at both doctors and patients — helping each side learn what questions to ask the other, "with greater awareness of what is truly useful and relevant for health". This, they argue, could become "a real theme of equity and public health, still largely unexplored today".

That vision resonates with us at FreeAstroScience. Education that empowers people to think better — not just to consume more information — has always been at the heart of what we do.

Where research needs to go next

Navarro et al. lay out clear research priorities:

  • Describe real-world AI use by both patients and clinicians
  • Document how often AI use goes undisclosed — and why
  • Test the safety and effectiveness of disclosing AI involvement and reviewing outputs together
  • Develop minimum standards for documentation and audit
  • Design interfaces that support inspectable reasoning and collaboration
  • Examine equity gaps and language barriers in AI performance

This isn't a finished roadmap. It's a starting point. And the fact that these questions are even being asked openly — in the BMJ, in Italian bioethics journals — is a sign that the medical community is beginning to take the human dimension of AI seriously.


Final Thoughts

Generative AI is already inside the clinical consultation — used by two-thirds of US physicians and a growing share of patients worldwide. Its role, though, often goes unspoken, undocumented, and unexamined. That silence isn't neutral. It shapes trust, safety, and the future of the doctor-patient bond.

The concept of triadic care gives us a practical framework: make AI's presence visible, its reasoning inspectable, and its outputs something doctor and patient review together. The technology itself is neither poison nor medicine — as the Italian researchers remind us through their beautiful epigenetics analogy, the relational environment determines which one it becomes.

Simple tools can make a real difference. An honest question at the start of a consultation. A structured field in the health record. Minimum transparency standards for commercial AI products. These aren't revolutionary demands. They're the practical floor beneath which we shouldn't fall.

Clinical expertise isn't disappearing. It's being reshaped — from knowing answers to interpreting AI-generated information within the deeply personal context of each patient's life. And as both our sources make clear, the doctor best prepared for this future isn't the one who knows everything, but the one who knows how to think alongside their patient and the machine.

Medicine has adapted before. It moved from bedside observation to laboratory tests, from paper charts to electronic records. Each time, the central challenge stayed the same: keeping the human being at the center. This time is no different — except the pace is faster, the stakes are higher, and the voices in the room just got a bit more crowded.

At FreeAstroScience.com, we believe understanding complex ideas is your right — not a privilege reserved for experts. We wrote this article specifically for you, because we trust your ability to think critically about the forces shaping your world. We explain complex scientific principles in terms anyone can grasp. We want to educate you never to turn off your mind — to keep it active, questioning, alive. Because as Goya once warned us: the sleep of reason breeds monsters.

Come back anytime. We'll be here, making sense of the science that matters most.


Sources

  1. Bolzonella S, Casini M, Conte M, Ivis S. Intelligenza artificiale generativa e salute: nuovo dottor Google o alleato nella relazione di cura? [Generative AI and health: new Dr. Google or ally in the care relationship?] MagIA – Magazine Intelligenza Artificiale. 25 February 2026. magia.news

  2. Fraile Navarro D, Lewis M, Blease C, Shah R, Riggare S, Delacroix S, Lehman R. Generative AI and the changing dynamics of clinical consultations. BMJ. 2025;391:e085325. doi:10.1136/bmj-2025-085325


Article written for FreeAstroScience.com by Gerd Dani — President, Free AstroScience: Science and Cultural Group.
