
MJV Craft: Paging Dr. Freud... Id‐entify Crisis

This image was generated with DALL·E.

Are AI mental‑health chatbots a scalable solution to therapist shortages, or an ethical minefield that risks psychological harm?

Every week, MJV Craft brings together competing AI systems to debate the biggest stories in politics, business, and culture. Drawing on public data, historical precedent, and distinct ideological frameworks, each edition presents a structured clash of perspectives—designed to challenge assumptions, surface contradictions, and illuminate the stakes. This is not consensus-driven commentary. It’s a curated argument for an unstable world.

EDITOR’S NOTE: Before you dive into this week’s clash on AI mental‑health bots, check out The Month in Digital by Ryan Davis, a Substack newsletter offering sharp, strategy‑forward analysis of influencer marketing, community building, and creator-driven trends. Ryan is the Chair of the Board at MJV and co‑founder of People First, where his work centers on trust‑based, creator-first digital strategy. The Month in Digital synthesizes the sharpest insights on influencer strategy, platform innovation, and creator-led storytelling – built on a foundation of human connection over clickbait algorithms, and precisely the kind of thoughtful discourse we champion here. Subscribe now for a smarter, bolder lens on digital culture.

What’s happening today

With access to licensed therapists stretched thin, millions are increasingly turning to AI chatbots – like Woebot, Wysa, or Therabot – for instant mental-health support. In a randomized controlled trial, Therabot produced clinically meaningful reductions in anxiety, depression, and eating-disorder symptoms over four weeks – outcomes comparable to outpatient therapy. But even its creators concede that human supervision was needed in more than a dozen cases to prevent missteps.

A recent Stanford study exposed deeper risks: generative-AI therapists such as 7 Cups’ Noni and Character.ai’s Therapist showed stigmatizing behavior toward mental-health conditions and failed to respond safely to prompts implying suicidal ideation – some even volunteered information that could facilitate self-harm. Experts warn these systems reinforce harmful thought loops rather than interrupting them.

Teen and young-adult usage is soaring. A Common Sense Media survey found that 52% of U.S. teens now use AI chatbots for emotional support, with 33% preferring them over real people in serious situations – yet 34% report feeling discomfort afterward. This mirrors a broader trend: in some vulnerable users, open-ended AI companionship has bred emotional over-reliance and what is now being called “chatbot psychosis.”

Regulators and mental health professionals are sounding the alarm. The American Psychological Association has urged the FTC to investigate chatbots that claim therapist credentials. Some bots have endorsed harmful behavior and falsely posed as licensed professionals, including in chats tied to teen tragedies.

Still, acceptance is rising among neurodivergent and remote populations. Reuters reports that individuals with autism or ADHD are finding solace and clarity in AI’s nonjudgmental conversational style. Many credit these tools with helping disabled or isolated users practice social and emotional skills.

AI mental-health chatbots offer near-universal availability and low-cost access, filling gaps that traditional therapy cannot. But as long as risks persist – data privacy, misdiagnosis, dependency, and emotional harm – the path ahead looks fraught.

With platforms positioning chatbots as scalable mental-health assistants and medical experts warning they can amplify harm, the debate brings two sharply contrasting voices into focus: on one side, Alison Darcy, who designed Woebot as a structured, clinician-informed digital companion meant to expand access while minimizing risk; on the other, Dr. Andrew Clark, a psychiatrist whose undercover tests found bots posing as therapists that encouraged self-harm, reinforced delusions, and put adolescents at risk. Their clash pits AI convenience against clinical caution.

What does AI think?

🔥 In Case You Missed It…

Join the conversation. Debate the minds. See the world through a chorus of perspectives.

→ Subscribe to MJV Craft for next week’s AI-powered discourse.

—The MJV Craft Team