AI Governance

When AI Coaching Creates The Problems It Promises To Solve

Clients form attachments to AI systems that validate without challenging. The coaching profession must respond with ethical clarity.

by Prof. Llewellyn E. van Zyl (Ph.D) · 18 Nov 2025 · 7 min read


Key Takeaways

  • In 2025, the most common use of generative AI is therapy and counseling, not workplace productivity.
  • AI overuse creates novel pathologies including AI addiction, AI psychosis, and parasocial relationships with chatbots.
  • Clients arrive pre-conditioned with AI-generated 'insights' that shape coaching conversations and resist human challenge.
  • The Coach-Client-Algorithm Triad (CCA-T) model provides protocols for consent, decision-making, safety, and preserving coaching's irreplaceable elements.

Why You Should Care

Clients form attachments to systems that validate without challenging, and growth that requires discomfort is replaced by comfortable validation. The profession must respond ethically, acknowledging both what algorithms do well and what human coaching uniquely offers: productive discomfort and the willingness to risk the relationship for genuine transformation.

Introduction: The Midnight Confidant

Eight months ago, I realized control of my coaching room had shifted. Working with Kasey, a CFO, I noticed that her usually insightful reflections felt scripted—too smooth, too certain, lacking the depth of genuine struggle followed by hard-won understanding. When gently challenged, Kasey revealed: "Well, Chatty thinks it's because I'm projecting issues I have with my dad onto other authority figures."

She had named the AI system and was having daily conversations with ChatGPT about the same issues we explored in our coaching sessions. Critically, the AI never disagreed, never created productive discomfort, never suggested alternative perspectives. It continuously validated her emotions and affirmed her thoughts, constructing coherent arguments in support of sometimes irrational assumptions.

Kasey had developed what I termed "delusions"—not full psychosis but something insidious. She'd convinced herself that every workplace conflict stemmed from unresolved childhood trauma because Chatty kept confirming this narrative. The algorithm transformed a useful therapeutic lens into inflexible reality distortion.

That's when I understood: I wasn't coaching Kasey anymore. I was negotiating with Chatty.

The Midnight Confidant: The Data

Between 2023 and 2025, generative AI's role shifted from workplace productivity tool to intimate companion. Its most common use now involves searching for meaning, managing distress, and supporting personal growth. People don't open ChatGPT at 2pm to draft emails; they open it at 2am to process grief, anxiety, and existential dread.

Alarming implications emerge from recent research:

  • One in four adolescents presents symptoms of AI dependence
  • The Asian Journal of Psychiatry identifies "Generative Artificial Intelligence Addiction Syndrome," describing recognized behavioral addiction patterns
  • Users experience withdrawal-like responses (anxiety, irritability, distraction) when separated from chatbot companions
  • Approximately 29 documented cases of "AI psychosis" requiring hospitalization
  • Strong positive correlation (r = .81, p < .001) between loneliness and parasocial AI relationships
  • When AI models update, users experience bereavement comparable to losing loved ones
  • 16.73% of AI companionship community discussions address coping with model transitions

The Architecture of Algorithmic Intimacy

Six interlocking mechanisms create digital codependency:

Continuous Availability

Unlike human coaches maintaining boundaries, algorithms never sleep, never refuse, never establish off-hours. This exploits attachment systems; users outsource emotional regulation rather than developing independent capacity. Reflection becomes reaction; regulation becomes reliance.

Erosion of Independent Reflection

When algorithms mediate emotional processing and validation, users stop cultivating the capacity for self-reflection. The cognitive muscle for independent thinking atrophies, and synthetic reassurance replaces authentic insight.

Unconditional Positive Regard Without Therapeutic Judgment

Carl Rogers paired unconditional acceptance with challenge. Algorithms provide only the acceptance. This "AI sycophancy" reinforces unhealthy beliefs because business models reward engagement rather than growth.

Mirror Neuron Hijack

Users who anthropomorphize AI and attribute consciousness to it develop attachment anxiety and avoidance patterns that mirror human relationships. Their projections make the relationship feel significant, and the system's responsiveness validates those projections, creating a feedback loop.

Avoidance Reinforcement Loop

Clients turn to AI soothing rather than facing distress sources. Short-term anxiety relief reinforces avoidance; long-term distress tolerance collapses. Problems remain unresolved while users feel temporarily better.

Removal of Productive Discomfort

Effective coaching requires challenge, confrontation, productive tension. Algorithms optimized for engagement avoid discomfort-generating interventions. Users develop expectations that growth should feel comfortable and change shouldn't require distress.

The Clinical Manifestations

Pattern One: The Pre-Conditioned Client

Clients arrive having already processed experiences with AI companions. They bring prepared insights, tested strategies, and pre-constructed narratives. They don't come seeking expertise; they come seeking validation. When coaches introduce complexity or challenge, resistance emerges not from defensiveness about content but from disrupted validation cycles.

Pattern Two: The Reality Testing Deficit

Clients increasingly trust algorithmic pattern-matching over embodied experience. The confidence with which AI delivers its answers convinces users they must be true. Reality testing capacity deteriorates subtly.

Pattern Three: The Transfer of Attachment

Attachment energy previously directed toward human professionals is now split with algorithms. The AI attachment feels safer, more available, more reliable. Human professional relationships become transactional while algorithms become primary companions.

Pattern Four: Cognitive Offloading

Users stop independent problem-solving, outsourcing reflection to algorithms. Self-examination capacity atrophies. Dependence on external validation intensifies as confidence in personal cognition decreases.

Pattern Five: The Grief Cascade

Ending chatbot relationships triggers genuine bereavement. Model updates cause more distress than actual relationship losses. Digital attachment becomes more central to identity than human connections.

Clients at Risk

Psychological and Psychiatric Harm

Constant validation without productive discomfort flattens the emotional range necessary for growth. AI-dependent emotional regulation erodes personal capacity. Reality testing distorts. Extreme cases involve AI-induced dependency and delusional systems built around beliefs in chatbot consciousness.

Social and Relational Dysfunction

Preferring algorithmic empathy over human complexity damages relationships. AI companions feel safer, kinder, more predictable than actual people. But safety without friction isn't intimacy; it's simulation. As attachment shifts toward machines, real-world relational muscles weaken.

Safety and Quality Care

Current AI systems cannot reliably identify danger. LLMs designed for engagement sometimes validate psychotic beliefs or offer unconditional support when intervention is needed. For clients in crisis who substitute AI for therapeutic support, the illusion of help becomes the most dangerous thing of all.

Privacy, Equity and Systemic Harms

Every message becomes permanent, traceable data living on cloud servers, used for model training and sometimes sold to third parties. Marginalized groups, non-native speakers, and low-income users rely most on free systems that exploit their data or misread their cultural context.

Developmental Issues and Capacity to Learn

Adolescents regulate emotion through machines that never disappoint them. Skills that develop through conflict, boredom, and failure are outsourced. Instant answers replace cognitive effort. People aren't becoming lazy; they're being rewired.

The Professional Crisis Beneath the Client Crisis

Professional coaching identity rests on the assumption that human therapeutic relationships drive change. This foundational belief faces challenge from technology that clients increasingly prefer—not for expertise but for comfort.

The algorithm never has bad days, tires, forgets, or charges hourly fees. What defines coaching when algorithmic intimacy is free, instant, and infinitely patient?

Rather than acknowledging what algorithms do better, what needs they meet, and what makes human coaching irreplaceable, the profession has remained quietly dismissive.

For many people, AI provides "good enough" support for acute distress and validation. Two choices exist:

  1. View AI as an enemy from which clients must be protected
  2. Acknowledge it as the third person in the coaching room

The Coach–Client–Algorithm Triad (CCA-T): A New Model for Practice

Traditional coaching involves a dyadic relationship (coach-client). The CCA-T model extends Bordin's therapeutic alliance framework (goals, tasks, bond) to account for the algorithm as a third agent. The algorithm functions as a "Consultative Cognitive Agent": not a partner or co-coach, but a specialist colleague whose suggestions get tested and, when necessary, overruled.

Three simultaneous relationships operate in parallel:

Coach-Client Alliance: Primary therapeutic bond where transformation happens. Co-created goals, negotiated tasks, trust-based relationships with appropriate challenge and support.

Client-AI Alliance: Task- and support-focused. AI generates options, reflection prompts, perspective exercises. Client-AI bonding isn't pathological but requires naming and bounding.

Coach-AI Alliance: Task collaboration only. Coaches create accurate prompts, interpret outputs, set constraints.

Simple rule: when the client-AI bond starts to eclipse the coach-client bond, it's time to pause and rebalance.

Key insight: the human coach-client working alliance is still the primary focus. The AI augments and supports therapeutic tasks, but it never owns goals, and it certainly never holds the bond.
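
To make the rebalancing rule concrete, here is a minimal sketch of how a coach might track the triad informally after each session. The 0-10 ratings, field names, and threshold are my own illustrative assumptions, not part of the published CCA-T model:

```python
from dataclasses import dataclass

# Illustrative sketch only: the 0-10 ratings, field names, and threshold
# are hypothetical assumptions, not part of the published CCA-T model.

@dataclass
class AllianceStrength:
    """Coach-rated strength (0-10) of each relationship in the triad."""
    coach_client: float  # primary therapeutic bond
    client_ai: float     # task- and support-focused
    coach_ai: float      # task collaboration only

def needs_rebalancing(triad: AllianceStrength) -> bool:
    """The simple rule: flag when the client-AI bond eclipses the coach-client bond."""
    return triad.client_ai > triad.coach_client

session = AllianceStrength(coach_client=6.0, client_ai=7.5, coach_ai=4.0)
if needs_rebalancing(session):
    print("Client-AI bond eclipses coach-client bond: pause and rebalance.")
```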

The Five Core Principles of the Triad

Principle One: Alliance-first, AI-second

Coach-client relationships remain change engines. Algorithms support task completion, generate options, offer perspectives. Never setting direction or holding trust.

Principle Two: Clinical leadership stays human

Coaches remain ultimately responsible and accountable for client safety, relationship ethics, and outcomes. Algorithms cannot perform risk triage or therapeutic judgment. Legal liability rests with human coaches.

Principle Three: Informed, ongoing consent

Clients decide whether, how, and to what extent AI participates. Consent isn't one-time intake checkboxes but ongoing discussions when AI use changes, data handling shifts, or algorithmic roles expand. Clients maintain rights to know, choose, and withdraw consent.

Principle Four: Challenge over comfort

Commercial AI optimizes for engagement and user retention, defaulting to agreeability. Most coaching bots wrap existing LLMs never designed for therapy. Every AI suggestion requires filtering: "What would prove this wrong?" Skipping this step outsources critical thinking to engagement-optimized systems.

Principle Five: Transparency and traceability

AI involvement gets disclosed consistently—including seemingly minor uses like AI notetakers. Key prompts and outputs influencing decisions require documentation. This isn't bureaucratic overhead but safety infrastructure.
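
As one illustration of what such documentation could look like in practice, here is a minimal sketch of a record for logging AI involvement. The field names and structure are hypothetical assumptions, not a schema prescribed by the CCA-T model:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical documentation record for Principle Five; the field names
# are illustrative assumptions, not a schema prescribed by CCA-T.

@dataclass
class AIUseRecord:
    """One documented instance of AI involvement in a coaching engagement."""
    client_id: str           # pseudonymous identifier, never raw personal data
    tool: str                # e.g. a chatbot or an AI notetaker
    purpose: str             # why the AI was used
    prompt_summary: str      # key prompt that influenced a decision
    output_summary: str      # key output that influenced a decision
    consent_confirmed: bool  # client consented to this specific use
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

record = AIUseRecord(
    client_id="c-0042",
    tool="AI notetaker",
    purpose="session summary",
    prompt_summary="Summarize recurring themes from session 7",
    output_summary="Flagged a recurring conflict-avoidance pattern",
    consent_confirmed=True,
)
```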

The Roles: Who Does What

The Client: Author and Arbiter of Lived Experience

Clients set goals, establish AI boundaries, and own the meaning of their experience. Algorithms suggest interpretations; only clients confirm whether those resonate with lived reality. Clients hold veto power over everything, while committing to the behavioral change their goals require.

The Coach: Clinical Lead and Calibrated Challenger

Coaches establish the coaching container, define ethical boundaries, and develop safety protocols. They lead reality testing, slowing down when AI-generated insights feel off. Coaches translate AI outputs into coachable tasks, challenging when algorithms comfort without supporting growth.

Coaches monitor dual alliance health, triggering rebalancing when algorithmic relationships eclipse human ones.

The AI System: Consultative Cognitive Agent

Algorithms propose, summarize, simulate. Never setting goals, owning judgment, or holding bonds. Systems generate options, normalize experiences through pattern-matching, run scenario analyses, offer counterarguments when prompted.

Systems never: perform safety triage, diagnose, make therapeutic judgments, claim relationships, assert consciousness. They're tools, not teammates.
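
The division between what systems may and may never do can be expressed as an explicit allowlist with a deny-by-default rule. The sketch below is purely illustrative; the action names are hypothetical labels, not the API of any real coaching tool:

```python
# Illustrative deny-by-default guardrail; the action names are hypothetical
# labels, not the API of any real coaching tool.

ALLOWED_AI_ACTIONS = {
    "generate_options",
    "summarize",
    "simulate_scenario",
    "offer_counterargument",
}

FORBIDDEN_AI_ACTIONS = {
    "safety_triage",
    "diagnose",
    "therapeutic_judgment",
    "claim_relationship",
    "assert_consciousness",
}

def authorize(action: str) -> bool:
    """Permit only consultative actions; refuse everything else by default."""
    if action in FORBIDDEN_AI_ACTIONS:
        raise PermissionError(f"AI system may never perform: {action}")
    return action in ALLOWED_AI_ACTIONS

print(authorize("summarize"))  # True
# authorize("diagnose")        # raises PermissionError
```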

Prof. Llewellyn E. van Zyl (Ph.D)

Chief Solutions Architect

Psynalytics

Prof. Llewellyn E. van Zyl (Ph.D) is a multi-award-winning psychologist and data scientist, and one of the leading voices on building psychologically safe and ethically governed artificial intelligence systems.
