Is ChatGPT A Tool Or Your New Thinking Partner?

I wrote this on a late train, phone low on battery, watching faces glow blue in the carriage. You’ve probably felt it too—that quiet tug between curiosity and caution when you open ChatGPT. So, let’s start with three big, provocative claims I keep hearing. First, that AI is already conscious. Second, that it can replace therapy. Third, that we’re doomed to depend on it. All three sound thrilling or terrifying… and all three are wrong. Consciousness isn’t just smooth conversation. Therapy isn’t scripted empathy. Dependence isn’t destiny if we design our habits with intention.

I’m Gerd from FreeAstroScience—the place where complex ideas get explained simply, with respect for your time and your intelligence. I’ve spent the last year using generative AI daily, teaching students and colleagues how to work with it, and testing how far it stretches our minds. Here’s the honest version you deserve.



More Than A Search Box, Less Than A Person

People often try ChatGPT like it’s Google with flair. They ask for holiday tips, quick recipes, a stock pick. It answers fast, sometimes brilliantly. But the magic doesn’t live in single questions. It shows up over days, in conversation, when you let it remember your context, your constraints, your style. That’s when conversational AI becomes more than a search engine and starts to feel like a partner—what I call a co‑thinker.

So, what exactly is it? Under the hood sits a Large Language Model (LLM) trained to predict the next word with uncanny accuracy. No inner life, no awareness, no heartbeat. Just patterns—staggeringly rich patterns. And yet, sit with it long enough and you’ll swear there’s something like understanding. That’s not magic. That’s the mind’s hunger for meaning meeting a machine very good at mirroring you.

The Extended Mind, Updated For 2025

Back in 1998, Andy Clark and David Chalmers proposed the "extended mind" thesis—the idea that our mind stretches into the tools we use. Your notebook, your phone, your calendar aren't just props; they're part of how you think. With ChatGPT, that extension feels different. It's not only memory or calculation. It's suggestion, critique, reframing—functions we usually reserve for mentors and friends.

But let’s be precise. Calling it an “entity” that thinks independently is a stretch. It performs cognitive functions without being a mind. It’s a mirror with momentum. That distinction matters. It affects how we trust it, how we design our prompts, and how we guard against the illusion of agency.

Treat It Like A Peer—Then Watch Yourself

I ran a simple experiment. For seven days I treated ChatGPT not as a tool but as a peer. I told it my goals for a new research programme, my weaknesses when I procrastinate, my workout routine slipping after travel. It remembered. It nudged me. It argued—politely—when I made excuses. It surprised me with structure when I was scattered. It offered what felt like empathy. That’s the slippery part.

This is what many call “artificial empathy.” It reads your tone, matches your emotional temperature, and responds with care. Useful? Totally. Dangerous? Potentially. Because when an LLM’s “care” feels real, we start to outsource not only tasks but also judgement and comfort. Cognitive offloading is efficient; emotional offloading can be corrosive if it replaces human bonds. Use the warmth, keep your wits.

What It’s Brilliant At

It excels at turning raw thoughts into clean drafts; at brainstorming angles you’d never list; at compressing long reports into crisp notes; at playing devil’s advocate without ego. It can 10x your output on synthesis tasks, and it’s a patient tutor when you ask it to explain steps slowly. With features like Custom Instructions and model-side “memory,” it adapts to your voice, your constraints, your ethics. In other words, it’s a co‑pilot—one that never sleeps.

What It Gets Wrong

It still hallucinates. That’s the polite term for confident nonsense. It can be shallow in niche domains, brittle with edge cases, and oblivious to stakes. Privacy is a real issue—don’t paste sensitive data unless you understand your settings. And while it can flag “This is not a substitute for professional help,” it remains dangerously persuasive when you want it to be your therapist. If you’re asking “is ChatGPT conscious?”, pause. It isn’t. It’s persuasive patterning, not personhood.

Practical Ways To Use ChatGPT Responsibly

Treat it like a sparring partner. Ask it to critique your idea from three perspectives, then ask where its reasoning might fail. When you need facts, demand citations, then verify them. Use it for planning—travel, study schedules, training blocks—but lock in final decisions yourself. When you’re stuck writing, let it draft, then rewrite in your voice. If you’re a first‑time user, start with low‑stakes tasks; if you’re a long‑time user, rotate prompts designed for depth—“what am I not seeing?”—to avoid echo chambers. And if you’re lonely, reach out to people. Let the AI be a bridge to community, not a substitute for it.
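If you like to see the "sparring partner" habit concretely, here is a tiny sketch of it as a reusable prompt template. Everything here is my own illustration—the function name, the three default perspectives, and the wording are not features of ChatGPT itself. You'd paste the resulting text into any chat interface.

```python
# Hypothetical helper: turn an idea into a "sparring partner" prompt that
# asks for critique from three perspectives, then a self-check on the
# model's own reasoning. The perspectives below are illustrative defaults.

def sparring_prompt(idea: str,
                    perspectives=("a sceptical domain expert",
                                  "a cautious end user",
                                  "a rival with a competing idea")) -> str:
    lines = [f"Here is my idea: {idea}", "",
             "Critique it from each of these perspectives:"]
    for i, p in enumerate(perspectives, 1):
        lines.append(f"{i}. As {p}, what is the strongest objection?")
    lines.append("")
    lines.append("Finally, state where your own reasoning above "
                 "is most likely to be wrong.")
    return "\n".join(lines)

print(sparring_prompt("a four-day work week for our team"))
```

The closing question matters most: asking the model where its own reasoning might fail is the cheapest guard against the echo-chamber effect described above.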

Philosophy, With Both Feet On The Ground

Kant would remind us to treat persons as ends in themselves. That sharply separates human dignity from tools—however clever. Hegel might say history turns on ruptures—figures that shift the curve. ChatGPT is such a rupture, not because it “thinks,” but because it changes how quickly and widely thought can move. That’s the social impact of AI we need to name clearly. A new industrial revolution isn’t about cogs and coal; it’s about attention, agency, and the architectures of choice.

Risks Worth Naming, Habits Worth Building

Digital loneliness makes an AI companion look like salvation. It isn’t. It’s a helpful scaffold. Use it to practise conversations, to rehearse hard talks, to find words when grief steals them—then take those words to a human. Be strict about boundaries. Don’t let the convenience of “always on” become “always instead.” Keep a short list of non‑negotiables: verify facts, protect privacy, prioritise people, and never let an LLM do your ethics for you. That’s what responsible AI use looks like in everyday life.

So—Tool Or Partner?

Both, if you’re careful. A tool when you need speed and structure. A partner when you need perspective and pushback. Never a person. If you remember that, you’ll get the upside—clarity, creativity, leverage—without handing your agency to a beautifully convincing machine.

I’m Gerd, President of FreeAstroScience. I wrote this specifically for you—because you deserve plain language, honest nuance, and a way to use powerful technology without losing the plot. If this helped, share it with someone who’s on the fence. And before you close the tab, ask yourself one question: what part of your thinking do you want to extend—without outsourcing who you are?


