The New Face of Therapy Has Code, Not Credentials
A therapy app just raised almost $100 million in venture funding, and it's not a human-run practice but an AI called Ash. Investors like Andreessen Horowitz (a16z) and Felicis Ventures are betting big that AI-driven mental health support is the next frontier.
But not everyone is cheering. Stanford researchers and mental health professionals are sounding the alarm: AI might help, but it’s not ready to replace human therapists. And that raises some tough questions for anyone who’s ever needed help—or thought about turning to an app for it.
Why AI Therapy Is Getting So Much Attention
Mental Health Demand Is Outpacing Supply
Therapy access is broken: 70% of patients face significant wait times to see a licensed therapist (NAMI, 2024), costs are high, and many rural areas have no providers at all.
Apps like Ash promise something different: instant support without the waitlist, and at a fraction of the cost.
Investors Smell Opportunity
Ash isn’t the first AI therapy app—Woebot, Wysa, and Replika have been around for years—but $93M from a16z and Felicis puts it in another league. It signals something bigger: venture capital believes AI could scale mental health access like telehealth scaled physical medicine.
That’s why you’re seeing headlines calling Ash the “future of therapy.”
Users Like the Idea (At First)
People already talk to chatbots every day. Adding emotional support into the mix feels natural to some. Early studies show people engage more consistently with AI journaling and check-ins than with traditional paper methods. For many users, “some help” feels better than no help at all.
Why Experts Are Pumping the Brakes
Empathy Isn’t Just a Feature
Therapists don’t just deliver advice—they build trust, read subtle emotional cues, and adapt in ways AI can’t yet replicate. Stanford researchers explicitly state:
“AI tools can support mental health care, but should not replace licensed mental health providers.”
Clinical Risk Is Real
Even the best AI models can make mistakes. A wrong response to a crisis situation or a misunderstood symptom could have serious consequences. There’s also the risk of over-reliance—someone might avoid seeking human care because an AI “seems good enough.”
Ethics and Privacy
Mental health data is some of the most sensitive data there is. Who owns what you tell an AI? How is it stored? Who trains the model on your data? These are open questions, and most consumers don’t read the fine print.
The Ethical Minefield We’re Walking Into
The Commodification of Human Suffering
Here’s what makes me uncomfortable about Ash’s $93 million funding round: it represents the financialization of human vulnerability. When venture capitalists see the mental health crisis as a market opportunity, we need to ask hard questions about whose interests are really being served.
AI therapy apps operate on a fundamentally different economic model than traditional therapy. A human therapist can see maybe 30-40 clients per week. An AI can handle thousands simultaneously. The unit economics are compelling for investors—but what does that mean for the quality of care?
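To make that math concrete, here's a quick back-of-the-envelope sketch. Every number in it is an illustrative assumption (session counts, prices), not data from Ash or any real provider:

```python
# Illustrative unit economics. All figures are hypothetical assumptions,
# not real pricing or capacity data from Ash or any other product.
THERAPIST_SESSIONS_PER_WEEK = 35      # midpoint of the 30-40 range above
AI_SESSIONS_PER_WEEK = 50_000         # assumed concurrent-service capacity
HUMAN_PRICE_PER_SESSION = 150.0       # assumed out-of-pocket rate (USD)
AI_PRICE_PER_SESSION = 2.0            # assumed per-session app revenue (USD)

human_weekly = THERAPIST_SESSIONS_PER_WEEK * HUMAN_PRICE_PER_SESSION
ai_weekly = AI_SESSIONS_PER_WEEK * AI_PRICE_PER_SESSION

print(f"Human practice: {THERAPIST_SESSIONS_PER_WEEK} sessions -> ${human_weekly:,.0f}/week")
print(f"AI service:     {AI_SESSIONS_PER_WEEK:,} sessions -> ${ai_weekly:,.0f}/week")
# The AI side scales with servers, not clinician hours, which is
# exactly why the unit economics look so compelling to investors.
```

Even with conservative assumptions, the AI side's revenue scales with compute rather than clinician time. That asymmetry, not clinical quality, is what the funding round is pricing in.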
The uncomfortable truth: AI therapy is often pitched as “democratizing mental health,” but it might actually be creating a two-tiered system where those who can afford human therapists get genuine care, while everyone else gets an algorithmic approximation.
The Consent Paradox
When someone is in emotional distress, how meaningful is their consent to data collection and AI training? People seeking mental health support are, by definition, in a vulnerable state. They’re looking for help, not making calculated decisions about privacy trade-offs.
Yet AI therapy apps depend on this data to improve their models. Every conversation, every emotional breakthrough, every moment of crisis becomes training data for the next version of the algorithm. Users become unpaid contributors to a product that will be sold to other vulnerable people.
The question we’re not asking: Is it ethical to build AI therapy systems on the emotional labor and private struggles of people who had few other options for care?
The Illusion of Understanding
Perhaps the most troubling aspect of AI therapy is how convincing it can be. These systems are designed to simulate empathy, understanding, and care. They’ve gotten remarkably good at it. But there’s a profound difference between simulated empathy and genuine human understanding—even if users can’t always tell the difference.
When someone pours their heart out to an AI and receives what feels like compassionate, insightful responses, they’re experiencing a kind of emotional placebo effect. The relief is real, but the understanding is artificial. This raises fundamental questions about authenticity in therapeutic relationships.
What happens when people become accustomed to “therapy” that never challenges them, never gets frustrated with them, never has bad days, and never brings their own humanity to the relationship? Do we risk creating a generation that prefers the predictable responses of machines to the messy complexity of human connection?
The Good News: AI Can Still Help—If Used Right
Augmentation, Not Replacement
Experts say AI works best when it’s paired with human oversight:
- Triage: Determining urgency and routing people to the right human support (a minimal sketch follows this list).
- Check-ins: Helping patients log moods or track goals between sessions.
- Psychoeducation: Teaching coping strategies or cognitive behavioral techniques.
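To make "triage" less abstract, here's a minimal sketch of what that routing could look like. The keyword lists, tiers, and thresholds are illustrative assumptions; real systems rely on trained classifiers and clinician-reviewed protocols, not hard-coded word lists:

```python
# Minimal triage sketch: route a message to the right level of support.
# Purely illustrative. Production systems use trained classifiers and
# clinician-reviewed escalation protocols, not keyword matching.

CRISIS_TERMS = {"suicide", "kill myself", "self-harm", "overdose"}
ELEVATED_TERMS = {"hopeless", "panic", "can't cope", "worthless"}

def triage(message: str) -> str:
    """Return a routing decision: 'human_crisis', 'human_followup', or 'ai_support'."""
    text = message.lower()
    if any(term in text for term in CRISIS_TERMS):
        # Never let the AI handle this alone: escalate to a human immediately.
        return "human_crisis"
    if any(term in text for term in ELEVATED_TERMS):
        # Flag for a licensed clinician to review within a set window.
        return "human_followup"
    # Low-acuity: AI check-ins, journaling prompts, psychoeducation.
    return "ai_support"

print(triage("I feel hopeless and can't cope lately"))  # -> human_followup
```

The design point is the ordering: the human-escalation checks come first, and the AI only handles what falls through. That's augmentation in code form.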
Real-Life Wins
- Employee Wellness Programs: Some companies use AI to monitor burnout trends and offer resources before crises happen.
- University Counseling Centers: AI chatbots help with after-hours support, providing basic coping tools when counselors aren’t available.
- Chronic Care: Pairing AI check-ins with physical health programs can improve overall care outcomes.
These use cases don’t replace human therapists—they extend their reach.
The Path Forward: Building Ethical AI Mental Health Tools
What Responsible AI Therapy Would Look Like
If we’re going to build AI mental health tools, we need to do it with integrity:
Transparent Limitations: AI therapy apps should clearly communicate what they can and cannot do. No marketing language suggesting they’re equivalent to human therapists.
Data Sovereignty: Users should own their therapeutic data and have meaningful control over how it’s used. This means more than just privacy policies—it means giving people real agency over their most intimate conversations (sketched in code after this list).
Human Oversight: Every AI therapy system should have qualified mental health professionals involved in monitoring, training, and quality assurance.
Equity Focus: These tools should prioritize serving underserved communities, not just markets that are profitable to venture capitalists.
Evidence-Based: Claims about effectiveness should be backed by rigorous, independent research—not just internal company studies or user satisfaction surveys.
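To make "data sovereignty" concrete, here's a small sketch of what user-controlled consent flags could look like in code. Every name, field, and default here is hypothetical, not a description of how any existing app works:

```python
# Sketch of user-controlled data settings; all names are hypothetical.
from dataclasses import dataclass

@dataclass
class DataControls:
    allow_training_use: bool = False   # explicit opt-in, never default-on
    allow_retention: bool = False      # keep transcripts after the session?
    export_requested: bool = False     # user can take their data elsewhere

def can_use_for_training(controls: DataControls) -> bool:
    # Training eligibility requires explicit, revocable opt-in to both
    # retention and training use; silence never counts as consent.
    return controls.allow_training_use and controls.allow_retention

user = DataControls()  # defaults protect the user
print(can_use_for_training(user))  # -> False
```

The point of the sketch is the defaults: a vulnerable user who never touches a settings page contributes nothing to the training pipeline.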
Questions We Need to Keep Asking
As AI therapy becomes more sophisticated and widespread, we need ongoing dialogue about what happens to the therapy profession when AI can provide convincing approximations of therapeutic conversation.
Where This Is Going Next
Ash’s funding shows where mental health tech is heading: faster, cheaper, more automated. That’s great for access, but risky if it leads us to believe machines can fully handle complex human problems.
Expect more:
- Hybrid care models combining AI and human therapy.
- Regulatory pressure to ensure AI mental health tools are safe and transparent.
- Ethical debates about data, consent, and human oversight.
But also expect resistance. As these tools become more common, we’ll likely see pushback from people who recognize that some aspects of human experience shouldn’t be automated, no matter how convenient or cost-effective it might be.
Practical Takeaways
If you need therapy now and can’t access it: AI tools like Ash can help with coping and education—but use them as a supplement, not a substitute. Keep seeking human care when possible.
If you’re considering AI therapy: Understand what you’re getting into. Read the privacy policy. Know how your data will be used. Set realistic expectations about what AI can and cannot provide.
If you’re building or using AI in healthcare: Focus on data privacy and human oversight. Remember that your users are people in vulnerable states, not just data points.
If you’re a mental health professional: Don’t dismiss AI therapy entirely, but advocate for ethical implementation. Your expertise is needed to ensure these tools are developed responsibly.
If you’re just curious: Try these apps, but keep expectations realistic. They’re tools, not therapists. And pay attention to how they make you feel about human relationships and authentic connection.
Final Word: The Human Element We Cannot Code
Ash’s $93M funding isn’t just a story about one startup. It’s a signal: AI is moving deep into human spaces like therapy. Whether that’s a good thing depends on how it’s used, regulated, and understood.
But here’s what I think we’re really grappling with: In our rush to solve the mental health crisis with technology, are we addressing the symptoms while ignoring the root causes? Many mental health struggles stem from isolation, meaninglessness, economic insecurity, and broken communities. AI therapy might help people cope with these conditions, but it doesn’t address why so many people are struggling in the first place.
There’s something profound about the act of being heard, understood, and accepted by another human being. It’s not just therapeutic technique—it’s recognition of our shared humanity. When we outsource this to algorithms, we might gain efficiency, but we risk losing something essential about what it means to heal and be healed.
For now, the expert advice remains simple: AI can help you cope, but it can’t replace someone who truly understands you. And that’s something no machine—no matter how well-funded or sophisticated—can fully replicate.
The question isn’t whether AI therapy will continue to grow (it will), but whether we can develop it in ways that enhance rather than diminish our capacity for genuine human connection. That’s a challenge that requires not just better algorithms, but better values, better policies, and a clearer understanding of what we’re willing to sacrifice in the name of convenience and scale.
The conversation is just beginning. And it’s one that’s too important to leave to the venture capitalists alone.
