The AI Your Patients Use Between Sessions Was Trained on Data That Actively Works Against Your Clinical Goals
A 15-year dataset of 52 million Reddit comments shows crowd-sourced relationship advice trending toward 50% "end it." AI trained on that data carries that prior into every patient conversation. Here is what that means for your clinical work.
On this page
- The AI problem hiding in that dataset
- What a therapist does that a crowd cannot
- The "Seek Therapy" line matters more than it looks
- Why human oversight is not optional
- What this means practically for therapists
- Frequently Asked Questions
- Why does AI give relationship advice that sounds like "just leave"?
- What does the Reddit relationship advice dataset show?
- How should AI handle relationship and mental health questions for therapy patients?
- What is "human in the loop" in AI therapy and why does it matter?
- How does Citt.ai prevent AI from undermining a therapist's clinical work?
The AI your patients use between sessions was trained on internet data. Here is what that data actually contains.
A researcher filtered r/relationship_advice's 52 million comments down to 1,166,592 quality comments across 5 million posts and tracked what people actually recommend, year by year, from 2010 to 2025. The result is one of the most detailed pictures yet assembled of crowd-sourced relationship guidance.
In 2010, "End Relationship" made up about 30% of advice. By 2025, it is approaching 50%. Every single year, one direction.
Meanwhile: "Communicate" fell from 22% to 14%. "Give Space / Time" dropped from 25% to 13%. "Compromise" collapsed from 7% to 3%. Every category that involves staying in difficulty, working through conflict, or tolerating ambiguity lost ground every year for fifteen years.
Between your weekly sessions, this is what your patients are reading.
The AI problem hiding in that dataset
If you train a language model on internet data (forums, social media, advice threads), it inherits the distribution of that data. A model trained on Reddit's r/relationship_advice would carry a near-50% prior that the answer to relationship difficulty is to leave. Not because the model is biased or broken. Because it is accurately learning from its training data.
This is not a hypothetical. Most consumer AI systems are trained on exactly this kind of large-scale web data. The result is a system that will confidently and consistently nudge patients toward the most common internet response, which for relationship problems is "end it."
The model is not giving bad advice. It is giving the median crowd's advice. That is a different thing entirely from clinical advice from a professional who knows the patient.
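A toy illustration of the point: a model that has perfectly learned a corpus where half the advice is "end it" will, absent any other context, reproduce that split. The shares below approximate the 2025 figures quoted above; everything else about the snippet is hypothetical:

```python
import random
from collections import Counter

# Approximate 2025 category shares from the dataset above; "other"
# absorbs the remaining categories so the shares sum to 100.
training_advice = (
    ["end relationship"] * 50
    + ["communicate"] * 14
    + ["give space / time"] * 13
    + ["compromise"] * 3
    + ["other"] * 20
)

# A model that has perfectly learned this corpus behaves, absent any
# other context, like a sampler over its training distribution.
responses = Counter(random.choice(training_advice) for _ in range(10_000))
for advice, count in responses.most_common():
    print(f"{advice}: {count / 10_000:.1%}")
```

Run it and "end relationship" comes back roughly half the time. That is the prior a patient encounters before they have typed a word about their own situation.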
This is why the AI your patients use between sessions needs to be under your oversight, not a standalone consumer chatbot applying population priors to your patient's specific situation.
What a therapist does that a crowd cannot
A therapist working with a patient in relational difficulty does not run a query against 52 million comments. They draw on accumulated context: the patient's attachment history, their communication patterns, what they have tried before, what progress looks like for them specifically, what goal they set in session three.
That context is what makes therapeutic guidance meaningful. Without it, the most you can offer is the median opinion, which, as the data shows, has been trending toward "leave" for fifteen consecutive years.
Population statistics describe what most people think about most situations. They say nothing about the specific person in front of you.
The therapeutic relationship is irreplaceable precisely because it is individual and cumulative. The AI tools you use should extend that relationship, not replace your judgment with a crowd's.
The "Seek Therapy" line matters more than it looks
The one growth category besides "leave" is "Seek Therapy / Counselling." It rose from about 1% in 2010 to 6% in 2025.
That shift represents something real: a growing recognition, even in crowd-sourced advice, that relationship problems have more dimensions than an anonymous stranger can access. The right response to "what should I do?" is not always a verdict. Sometimes it is a referral.
Six percent is still a small fraction. But it signals what intuition suggests and the data confirms: professional support is the right answer far more often than the crowd acknowledges, and friction is the main reason people do not access it.
For most patients, the gap between "I'm struggling" and "I'm working with a qualified clinician" involves cost, scheduling, uncertainty, and stigma. Reducing that gap is one of the most impactful things the mental health sector can do. An AI system that makes it easier to stay in a therapeutic relationship, rather than easier to get generic crowd advice, closes that gap.
Why human oversight is not optional
The co-pilot model of AI in therapy exists for exactly this reason.
Good AI support does not apply population priors to individual patients. It does something more specific: it carries context between sessions, surfaces patterns for the clinician to act on, and keeps the therapist in the loop rather than substituting for their judgment.
At Citt.ai, this is structural. Every patient conversation is reviewable by the assigned therapist. Crisis detection runs on every message before any AI processing occurs. The AI does not make clinical recommendations. It extends the reach of the clinician who knows the patient. When something needs a human response, the system surfaces it to the therapist immediately.
This is not just safer. It is clinically correct. The therapist holds the context that makes advice meaningful. The AI should be extending that context, not bypassing it.
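As a rough sketch of that ordering, with placeholder names rather than Citt.ai's actual implementation, the control flow looks something like this:

```python
from dataclasses import dataclass

@dataclass
class Message:
    patient_id: str
    text: str

# Placeholder crisis check; a real system would use a dedicated model,
# not a keyword list.
CRISIS_TERMS = ("hurt myself", "end it all", "no reason to live")

def detect_crisis(text: str) -> bool:
    return any(term in text.lower() for term in CRISIS_TERMS)

def handle_message(msg: Message, review_queue: list) -> str:
    # Crisis detection gates everything: it runs on the raw message
    # before any AI processing, and a hit is surfaced to a human at once.
    if detect_crisis(msg.text):
        review_queue.append(("IMMEDIATE", msg.patient_id, msg.text))
        return "A clinician has been notified and will reach out shortly."

    # The AI reply would be generated here, inside the treatment context
    # the therapist has set (omitted; this sketch only shows the ordering).
    reply = "Thanks for sharing this. Let's bring it into your next session."

    # Every exchange lands in the assigned therapist's review queue.
    review_queue.append(("ROUTINE", msg.patient_id, msg.text, reply))
    return reply
```

The ordering is the point: the crisis check runs before any generation, and nothing leaves the system without landing in a reviewer's queue.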
Between-session AI support that operates without therapist oversight is a consumer chatbot trained on internet data. Consumer chatbots trained on internet data now carry a prior approaching 50% that your patient should end their relationship. That prior will show up in their responses, subtly and consistently, in every difficult conversation.
That is not a hypothetical risk. That is the dataset.
What this means practically for therapists
When your patients engage with AI support between sessions, what they experience should be an extension of the work you are doing together, not the median Reddit commenter's opinion about their relationship.
The practical difference is:
- AI without oversight: Trained on broad internet data, applies population priors, nudges toward the most common response, has no knowledge of the therapeutic relationship or goals established in sessions.
- AI with therapist oversight: Carries the context of the therapeutic relationship, surfaces patterns for the clinician, escalates when needed, operates within the treatment goals the therapist has set. The therapy memory accumulated across sessions informs every interaction.
One of these tools could be quietly working against the goals you have set with your patient. The other amplifies them.
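In concrete terms, the difference often comes down to what the model sees before the patient's message. A minimal sketch, with hypothetical field names standing in for the accumulated therapy memory:

```python
def build_prompt(patient_message: str, therapy_memory: dict) -> str:
    # Without oversight the model sees only the message and falls back on
    # its population prior; with oversight, the therapist-maintained
    # context frames every interaction.
    context = "\n".join(
        [
            f"Treatment goal: {therapy_memory['goal']}",
            f"Current focus: {therapy_memory['focus']}",
            f"Already tried: {', '.join(therapy_memory['tried'])}",
        ]
    )
    return (
        "You support this patient between sessions. Stay within the "
        "therapist's framing below and do not issue verdicts.\n"
        f"{context}\n\nPatient: {patient_message}"
    )

print(build_prompt(
    "We argued again last night. Maybe I should just leave.",
    {
        "goal": "strengthen conflict repair, not deliver verdicts",
        "focus": "de-escalation practice agreed in session three",
        "tried": ["time-outs", "scheduled check-ins"],
    },
))
```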
The Reddit data is a useful mirror. It shows clearly what happens when advice-seeking connects people with a crowd rather than with a professional. The right response is not to avoid AI in mental health. It is to build AI that keeps the right human in the room.
Citt.ai is a mental health AI therapy co-pilot for therapists. Every patient conversation is therapist-reviewed. See how human oversight works in practice, or explore what AI should actually do between sessions.
Frequently Asked Questions

Why does AI give relationship advice that sounds like "just leave"?
Because consumer AI models learn from large-scale web data, including forums like r/relationship_advice, where "End Relationship" now accounts for close to half of all advice. The model reproduces the distribution it was trained on.

What does the Reddit relationship advice dataset show?
Across 1,166,592 quality comments from 2010 to 2025, "End Relationship" rose from about 30% to nearly 50%, while "Communicate", "Give Space / Time", and "Compromise" declined every year. "Seek Therapy / Counselling" grew from about 1% to 6%.

How should AI handle relationship and mental health questions for therapy patients?
It should operate inside the context the treating clinician has established: carrying treatment goals between sessions, surfacing patterns for the therapist, and escalating to a human rather than issuing verdicts.

What is "human in the loop" in AI therapy and why does it matter?
It means a qualified clinician reviews and directs what the AI does. It matters because the therapist holds the accumulated individual context that makes guidance meaningful; population priors do not.

How does Citt.ai prevent AI from undermining a therapist's clinical work?
Every patient conversation is reviewable by the assigned therapist, crisis detection runs on every message before any AI processing, and the AI makes no clinical recommendations; it extends the clinician's reach rather than replacing their judgment.
Related Articles
- The "Glass Box" Approach: Why We Don't Hide Our AI Behind a Curtain
How we build trust in AI-assisted therapy through transparency, explainable AI, human oversight, and clear boundaries. The architecture of trust, not just platitudes.
- Why AI Should Make Therapy More Human, Not More Efficient
The race in mental health AI is optimising for efficiency — more patients, lower cost, faster sessions. That's the wrong bet. Here's why the System of Context matters more than throughput.
- Bridging the "Monday Morning Gap": How to Extend Care Without Extending Your Hours
Discover how automated interventions bridge the adherence gap between sessions. Just-in-Time Adaptive Interventions (JITAI) deliver care when patients need it most.