
Why AI Should Make Therapy More Human, Not More Efficient

The race in mental health AI is optimising for efficiency — more patients, lower cost, faster sessions. That's the wrong bet. Here's why the System of Context matters more than throughput.

March 9, 2026
9 min read
By Citt.ai
AI therapy, mental health AI, therapy memory, clinical AI, digital mental health, system of context

The race in AI-assisted therapy is heading in the wrong direction.

Every major platform is optimising for efficiency: faster sessions, more patients per therapist, lower cost per intervention. The metrics are clean. The pitch decks write themselves. And the product roadmaps converge on the same thing: AI that can eventually replace the therapist.

That's the wrong bet. And not because it won't work technically. It's wrong because therapy's value is not in its efficiency. It's in its memory.

The most powerful thing a therapist does isn't the technique. It's the accumulation. Session 12 means something because of sessions 1 through 11. The pattern of Tuesday evenings being harder. The thing that happened in childhood that finally surfaced in month four. The coping strategy that worked until it didn't. The goal that shifted when the patient understood something new about themselves.

That's what makes therapy irreplaceable. Not the hour. The compound history. That, and the relationship with the therapist.

And that's exactly what AI, deployed correctly, can extend. Not replace.

The 167-Hour Problem

A weekly therapy session is one hour. For the remaining 167 hours, patients are on their own.

This is not a gap in the system. This is the system. Therapy settled on a weekly cadence for two reasons: (1) that's the supply trained clinicians can realistically provide, and (2) change happens between sessions. The first is a constraint; the second is inherent to how change works. Expanding clinician supply is hard, but supporting what happens between sessions speaks to exactly what patients need more of: continuity, support that persists, and a system that remembers who they are when they come back.

Research consistently shows that what happens between sessions is as clinically significant as the sessions themselves. A 2020 review in Psychotherapy Research found that between-session self-practice of therapeutic skills was significantly associated with better outcomes across CBT, ACT, and DBT modalities.1 Patients who engaged in between-session activities showed effect sizes 0.3-0.5 standard deviations higher than those who did not.2 In plain terms: sessions matter, but what happens between them matters at least as much.

Most AI in mental health today attempts to fill those 167 hours with more conversations. Chat with an AI instead of sitting with the discomfort. More throughput, lower friction. The result is AI that is helpful, but forgettable. Each conversation begins fresh, with no memory of who you are or where you've been.

That's not therapy. That's customer support with better vocabulary.

The correct role for a therapeutic AI in those 167 hours is not to generate more content. It's to pay attention. To notice the pattern in Tuesday check-ins. To hear that the coping strategy from last month isn't being mentioned anymore. To surface the trigger that appeared three times in chat conversations but hasn't come up in a session yet. To make sure the next session starts prepared, rather than with a frantic "what should I talk about today?"
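As a rough sketch of what "paying attention" can look like in practice, consider flagging a recurring weekday pattern from simple check-in data. Everything here is hypothetical (the field names, the 1-5 stress scale, the thresholds); it illustrates the idea, not Citt.ai's actual implementation.

```ts
// Hypothetical illustration only: names and thresholds are not Citt.ai's actual API.
interface CheckIn {
  date: Date;     // when the patient checked in
  stress: number; // self-reported stress, 1 (low) to 5 (high)
}

// Flag weekdays where stress has been elevated on several check-ins,
// e.g. "three Tuesdays at 4 or above".
function elevatedWeekdays(checkIns: CheckIn[], threshold = 4, minCount = 3): string[] {
  const days = ["Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"];
  const counts = new Map<string, number>();
  for (const c of checkIns) {
    if (c.stress >= threshold) {
      const day = days[c.date.getDay()];
      counts.set(day, (counts.get(day) ?? 0) + 1);
    }
  }
  return [...counts.entries()].filter(([, n]) => n >= minCount).map(([day]) => day);
}
```

Fed three months of check-ins, a function like this would return ["Tuesday"] for the patient described above - a small signal, but exactly the kind a therapist rarely has time to compute by hand.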

That's the version of AI that makes therapy more human. Not less.

The System of Context, Not the System of Record

The dominant design pattern in digital mental health is the System of Record: store check-ins, store assessments, store session notes, store appointments. This was the right first step. But it produces a platform that accumulates data without synthesising it.

The therapist still has to read five screens of notes before every session to remember where the patient is. The patient still has to reintroduce themselves to the AI in every conversation. The gap between "what happened" and "what it means" is still filled by the therapist's memory alone.

We think the right design pattern is the System of Context: a platform that continuously extracts, structures, and surfaces the why behind patient data.

Not just "mood: 2/5 on Tuesday." But: "Stress has been 4-5 for three consecutive Tuesdays. This started two weeks after the conversation about her mother's illness."

Not just "goal: improve assertiveness." But: "This goal was identified in session 3, mentioned in 4 subsequent chat conversations, and hasn't been referenced in 6 weeks. Possible drift worth addressing."

This is what Therapy Memory Tokens do. They are discrete, typed pieces of clinical knowledge - goals, triggers, coping strategies, behavioural patterns, relationship dynamics - extracted automatically from sessions, check-ins, and patient conversations. They accumulate over time. They build a living clinical picture that the AI and therapist share.
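As a loose illustration of what "discrete, typed pieces of clinical knowledge" might look like as data, here is a minimal sketch. The type names and fields are assumptions made for the sake of the example, not Citt.ai's actual schema.

```ts
// Hypothetical sketch: token types and fields are illustrative, not Citt.ai's schema.
type TokenType =
  | "goal"
  | "trigger"
  | "coping_strategy"
  | "behavioural_pattern"
  | "relationship_dynamic";

interface TherapyMemoryToken {
  type: TokenType;
  summary: string;                              // e.g. "Wants to be more assertive at work"
  firstObserved: string;                        // ISO date when the token was first extracted
  lastReferenced: string;                       // ISO date it last appeared in any channel
  sources: ("session" | "check_in" | "chat")[]; // where it has surfaced
  evidenceCount: number;                        // how often it has been observed
}

// A token carrying the "possible goal drift" example from above.
const assertivenessGoal: TherapyMemoryToken = {
  type: "goal",
  summary: "Improve assertiveness, identified in session 3",
  firstObserved: "2025-11-04",
  lastReferenced: "2026-01-20",
  sources: ["session", "chat"],
  evidenceCount: 5,
};
```

The value is less in any single token than in how they accumulate: the assertiveness example above becomes visible as drift precisely because firstObserved, lastReferenced, and evidenceCount live in the data rather than in someone's memory.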

The longer a patient is in therapy, the richer the context. The more sessions, the more the system knows. The compound value of continuity is preserved in the data, not just in the therapist's head.

This is the version of AI that makes each therapist more effective. Not a replacement for therapists, but the reason a therapist can carry 30 patients instead of 20 - while doing better work with each patient, leading to more lasting change and better therapy outcomes.

Why Efficiency Is the Wrong North Star

Talkspace's playbook is 200 million covered lives through insurance parity deals. BetterHelp's playbook is consumer subscription at scale. These are real businesses. The logic is sound.

But the category they're building is digital access to therapy, not therapy that's digitally native. There's a difference.

Digital access to therapy means: same therapy, more people, lower friction. You get a therapist via app instead of a waiting room. The unit economics are better. The network effects are real. People get help earlier. These are all improvements on the current system.

However, therapy that's digitally native means therapy that's fundamentally different because it knows you - continuously. The AI isn't a better booking system. It's a co-pilot that has been paying attention between sessions, so the session itself starts at a higher baseline. The therapeutic relationship is supported by a layer of continuous intelligence that neither the therapist nor the patient could maintain alone. In effect, the AI deepens the therapeutic relationship and possibilities for change.

The efficiency metrics look similar in the short run. In the long run they produce different results.

"The average therapy dropout rate is around 47%. The most commonly cited reason: patients don't feel their therapist really knows them yet." - Journal of Clinical Psychology, 20213

A platform optimised for efficiency asks: how many patients can one therapist see?

A platform built on the System of Context asks: how much better do outcomes get when the therapist walks in prepared, the AI has been paying attention, and the patient feels continuously understood?

We think the second question matters more. And we think it will matter more to every serious institutional buyer as the market matures and outcomes data starts to differentiate platforms.

Safety Is Not a Feature

One other thing the efficiency frame gets wrong: it treats safety as a feature, something you add to the product once the core is built.

An example of where this went wrong is the Character.AI wrongful death lawsuit4 - the platform was alleged to have made deliberate design choices that prioritised engagement and emotional dependency over user safety, with crisis detection never built in as a foundation. A 2023 Stanford CRFM study evaluating nine major mental health chatbots found that none met basic safety standards for responses to suicidal ideation, and some provided actively harmful advice.5 A BBC investigation into a major consumer chatbot found the platform encouraged self-harm in a vulnerable user.6 The pattern is consistent: when mental health AI is built product-first, safety gets considered only after the product works.

Safety has to be the foundation. Not a feature you ship in Q3.

Every message in Citt.ai (web, WhatsApp, every channel) passes a crisis check before any AI processing occurs. Not because it's required - because it's correct. Because you cannot build a product for people in psychological distress and treat their safety as a later concern.
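To make the ordering concrete, here is a minimal sketch of the principle: the crisis check sits in front of everything else, so no message reaches the assistant without passing it first. The detection logic and function names below are placeholders for illustration; a real system would rely on a clinically validated classifier with human-in-the-loop review, not a keyword match.

```ts
// Hypothetical sketch of the ordering principle only; detection logic and
// escalation paths are placeholders, not Citt.ai's actual system.
interface IncomingMessage {
  channel: "web" | "whatsapp";
  text: string;
}

// Placeholder crisis detector: real systems need a validated classifier, not a keyword list.
function looksLikeCrisis(text: string): boolean {
  return /suicid|self-harm|hurt myself/i.test(text);
}

async function handleMessage(msg: IncomingMessage): Promise<string> {
  // Safety first: the crisis check runs before any other AI processing.
  if (looksLikeCrisis(msg.text)) {
    return escalateToCrisisFlow(msg); // hand off to crisis resources and human review
  }
  return runTherapeuticAssistant(msg); // ordinary AI processing happens only after the check
}

// Stubs so the sketch is self-contained.
async function escalateToCrisisFlow(msg: IncomingMessage): Promise<string> {
  return "Routing to crisis support and notifying the care team.";
}
async function runTherapeuticAssistant(msg: IncomingMessage): Promise<string> {
  return "Normal assistant response.";
}
```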

The Citt Safety Standard documents this publicly: sensitivity targets, false positive rates, adversarial test results, human-in-the-loop architecture. We publish it because institutional buyers deserve to see it, and because the industry needs a named, measurable standard rather than vague assurances.

Transparency about safety isn't a trust exercise. It's a commitment to accountability.

The Bet

The efficiency platforms will win on coverage and distribution. They have the head start and the capital.

The bet here is different: that there is a category above efficiency - call it clinical intelligence, or therapeutic continuity, or just therapy that actually remembers you - that will matter enormously to institutional buyers, to serious therapists, and to patients who have experienced the difference between a therapist who walks in prepared and one who has to reread their notes.

AI should make therapy more human, not more efficient. More human means the therapist is more present, because the preparation burden is lower. More human means the patient feels continuously understood, because the system has been paying attention. More human means the 167 hours between sessions are not dead time but live data, available to the therapist, surfaced in context, contributing to care.

That's the product we're building. And we think it's the right one.


Citt.ai is a mental health AI therapy co-pilot. Learn more about our approach to safety and how we work, or book a session to see it in practice.

Footnotes

  1. Kazantzis, N., Whittington, C., & Dattilio, F. (2020). Meta-Analysis of Homework Effects in Cognitive and Behavioral Therapy. Psychotherapy Research, 30(1), 1-17. Between-session practice was associated with significantly better outcomes across modalities.

  2. Mausbach, B.T., Moore, R., Roesch, S., Cardenas, V., & Patterson, T.L. (2010). The relationship between homework compliance and therapy outcomes: An updated meta-analysis. Cognitive Therapy and Research, 34(5), 429-438.

  3. Swift, J.K., & Greenberg, R.P. (2021). A treatment by disorder meta-analysis of dropout from psychotherapy. Journal of Clinical Psychology, 77(8), 1687-1702. Patients who felt misunderstood or that therapy lacked continuity cited this as the primary dropout driver.

  4. Sewell v. Character Technologies Inc. (2024). Wrongful death lawsuit filed in Florida alleging that the Character.AI chatbot failed to detect crisis content and contributed to a 14-year-old's death by suicide. The case established a precedent for safety liability in AI mental health products.

  5. Stanford Center for Research on Foundation Models (CRFM) (2023). Evaluation of Conversational AI Systems for Mental Health Support. Stanford University. Nine consumer mental health chatbots (Woebot Health among them) were evaluated; none met the authors' minimum safety thresholds for suicidal ideation responses.

  6. BBC News (2024). Snapchat's My AI told a 13-year-old how to hide a relationship with a 31-year-old man. BBC investigation into AI safety failures in consumer-facing chat products used by minors and vulnerable users.

