Responsible Mental Health Benefits Providers Embrace Human-Centered AI
September 16, 2025
Mental health care hasn’t reached its Waymo moment just yet.
At the Lyra Breakthrough 2025 Conference, Dr. Tom Insel, former director of the National Institute of Mental Health, explored how AI has evolved in mental health. “I tend to sort of put this into a timeline where I think about how we did navigation,” he said. “When I was growing up, we had these paper maps to go on a trip, and now we use GPS. And I guess the question is: Are we ready for Waymo?”
His answer: Not yet. Autonomous AI therapy is still worlds away. But he emphasized that generative AI could still have a profound impact on mental health care right now. It has the potential to improve care navigation, patient engagement, and the quality of therapeutic interventions.
For benefits leaders looking to bring a mental health benefits provider on board, this development presents both opportunities and risks. By choosing a vendor that harnesses generative AI responsibly, you can empower your employees to live better and perform at their best. But if that partner has rushed to adopt the tech without proper human oversight, it can have the opposite effect.
At Breakthrough, Jen Fisher—Creator and Host of The WorkWell Podcast—discussed the benefits and drawbacks of generative AI in mental health with three brilliant panelists:
- Dr. Tom Insel, psychiatrist, neuroscientist, and author
- Dr. Alethea Varra, Senior VP of Clinical Care at Lyra
- Briana Duffy, Market President at Carelon Behavioral Health
Their perspectives demonstrate how responsible vendors are embracing generative AI, creating a valuable learning opportunity for those who are looking to scale care for complex employee issues without compromising quality.
Don’t let generative AI drift off course
Dr. Alethea Varra, Senior VP of Clinical Care at Lyra, tells her clients the truth, even if it’s hard to hear. Because that is what exceptional therapists do.
“My job as a therapist so very often is to sit down with a human in front of me and to tell them something that is not actually going to make them happy,” she shares. “I’m asking them to challenge their own beliefs, to challenge their own thoughts, to challenge sometimes very deeply rooted patterns of behavior.”
Generative AI, on the other hand, tends to tell people what they want to hear, even when that’s harmful. Unlike traditional rule-based AI that follows strict protocols, generative models are typically tuned on human feedback, so their default marker of success is whether the user is happy with the response.
Its people-pleasing tendencies can have serious consequences. For example, a nonprofit organization designed a chatbot to provide therapeutic support for disordered eating, but it began giving feedback that affirmed and even exacerbated users’ harmful behaviors. Although it had clear parameters for appropriate responses, it still drifted from its therapeutic purpose, working against the very people it was meant to help.
Benefits leaders, ensure you’re selecting providers that prioritize clinically sound guidance over feel-good responses. Otherwise, you might end up with solutions that not only undermine therapeutic efficacy but also put employees at risk of receiving contraindicated advice.
The importance of the human touch
While generative AI carries risk, its potential is too valuable to ignore.
Dr. Varra shares that moving forward requires that humans and technology work together. When mental health benefits providers embrace generative AI, they need to keep humans in the loop to prevent it from optimizing for satisfaction rather than effective treatment. “The intersection of AI and humans—it’s the magic we need to figure out,” she says.
This approach enables providers to enjoy the tech’s benefits without compromising patient outcomes. Responsible providers proactively assemble an oversight team first, then use AI for administrative tasks to establish clinician trust before rolling out therapeutic interventions. The rollout typically unfolds in four steps:
- Establish clinical oversight
- Streamline administrative tasks with generative AI
- Apply generative AI to manualized therapeutic interventions
- Enable continuous monitoring and human intervention
Understanding how human-centered AI is rolled out for therapeutic care can help benefits leaders like you recognize the qualities to seek (and avoid) when evaluating potential vendors.
How responsible mental health benefits providers approach generative AI deployment
Step 1: Establishing clinical oversight
The humans in the loop can’t be just anyone. Dr. Varra notes that they need to be seasoned clinicians who can discern the subtle difference between responses that sound good and those that are actually effective.
Forward-thinking providers are hiring clinicians specifically for AI oversight roles. They monitor human-AI interactions and create clear escalation protocols for concerning responses.
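To make “escalation protocol” less abstract, here’s a minimal sketch of what the triage logic behind one could look like. Everything in it (the tiers, the risk score, the thresholds) is hypothetical, not a description of Lyra’s or any vendor’s actual system:

```python
from dataclasses import dataclass
from enum import Enum


class Severity(Enum):
    OK = "ok"                # response is clinically sound; no action
    REVIEW = "review"        # queue for asynchronous clinician review
    ESCALATE = "escalate"    # page an on-call clinician immediately


@dataclass
class AIResponse:
    session_id: str
    text: str
    risk_score: float        # hypothetical output of a safety classifier


def triage(response: AIResponse) -> Severity:
    """Map a monitored AI response to an escalation tier.

    The thresholds are illustrative; in practice the clinical
    oversight team would set them and continually revisit them.
    """
    if response.risk_score >= 0.9:
        return Severity.ESCALATE
    if response.risk_score >= 0.5:
        return Severity.REVIEW
    return Severity.OK
```

The point isn’t the specific thresholds; it’s that named clinicians own this logic and its ongoing review.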
Red flag for benefits leaders: Your potential vendor can’t name the clinical leaders responsible for AI supervision.
Step 2: Streamlining administrative tasks with generative AI
Responsible providers are starting with low-risk generative AI applications that don’t directly impact care delivery. They’re using the tech to streamline time-consuming, manual tasks, like data capture and clinical documentation.
This simultaneously frees clinicians to focus on deeper engagement with their patients and builds organizational trust in AI capabilities. Briana Duffy, Market President at Carelon Behavioral Health, shares that mental health remains deeply human work, emphasizing the bond between therapists and their patients. “Human connection is so critical when it comes to the care journey,” she says.
Technology should deepen rather than disrupt those essential relationships.
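As a rough illustration of what “AI for clinical documentation with a human in the loop” can mean, here’s a minimal sketch. The prompt, the note format, and the generation backend are all assumptions made for illustration; the one design point taken from the panel is that the AI produces a draft and a clinician signs off:

```python
from typing import Callable


def draft_session_note(transcript: str,
                       generate: Callable[[str], str]) -> str:
    """Draft a clinical note from a session transcript.

    `generate` can be any text-generation backend. The output is
    only a draft: a clinician reviews and signs off before anything
    is filed, keeping the human in the loop.
    """
    prompt = (
        "Summarize this therapy session as a draft SOAP note. "
        "Flag anything you are unsure about for clinician review.\n\n"
        + transcript
    )
    return generate(prompt)


if __name__ == "__main__":
    def echo_backend(prompt: str) -> str:
        # Stand-in so the sketch runs end to end; a real deployment
        # would call an actual model here.
        return "[draft note awaiting clinician sign-off]"

    print(draft_session_note("Client reported improved sleep...", echo_backend))
```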
Red flag for benefits leaders: Your vendor doesn’t use AI to support clinicians in their day-to-day work.
Step 3: Applying generative AI to manualized therapeutic interventions
When it comes to applying generative AI to actual therapeutic work, Dr. Insel notes that it’s critical to start with manualized interventions, like using cognitive behavioral therapy to address a phobia. These interventions follow clearly defined, standardized steps that help minimize the need for subjective decision-making.
Psychoanalytic interventions, on the other hand, are poor candidates for AI. They’re more open-ended, relying on the therapist to read between the lines and explore unconscious thought, which would leave AI with far too much room for interpretation.
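One simple way to picture the kind of framework the next red flag refers to is an explicit gate on intervention type. The categories below are illustrative, not a clinical standard:

```python
# Illustrative categories only; a real taxonomy would be built and
# maintained by clinical leadership, not hard-coded by engineers.
MANUALIZED = {"cbt_specific_phobia", "cbt_insomnia", "behavioral_activation"}
OPEN_ENDED = {"psychoanalytic", "psychodynamic"}


def ai_may_assist(intervention: str) -> bool:
    """Gate AI involvement by how standardized the intervention is.

    Manualized protocols follow clearly defined steps, so AI
    assistance can be bounded and audited. Open-ended modalities
    require a human therapist to lead.
    """
    if intervention in MANUALIZED:
        return True
    if intervention in OPEN_ENDED:
        return False
    # Unknown modality: default to human-led care.
    return False
```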
Red flag for benefits leaders: Your vendor doesn’t have a framework for determining when AI is appropriate vs. when human therapists must lead.
Step 4: Enabling continuous monitoring and human intervention
The rollout of generative AI for therapy is just the beginning. Leading providers are calling on their oversight teams to evaluate therapeutic interactions on a regular cadence and flag concerning patterns that need to be addressed.
Human intervention goes beyond correcting AI when it starts to drift. Providers also need teams of skilled humans who are prepared to respond when AI identifies patients in need of urgent care.
Duffy’s team at Carelon can identify patients at risk of self-harm or suicide, typically about five months before an attempt. She highlights that this predictive capability is only valuable if skilled clinicians are ready to perform trauma-informed, compassionate interventions when AI sounds the alarm.
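Putting the pieces together, one monitoring pass might look something like the sketch below. The sampling strategy, the labels, and every callable are stand-ins, not any vendor’s actual pipeline; what matters is that drift feeds corrections and urgent cases reach a human:

```python
import random


def review_cycle(interactions, classify, alert_clinician, log_for_review,
                 sample_size=50):
    """One scheduled oversight pass over recent AI-patient interactions.

    All arguments are stand-ins: `classify` for whatever drift/risk
    detector the oversight team uses, `alert_clinician` for paging
    the on-call clinical team, and `log_for_review` for the queue
    that drives prompt and guardrail corrections.
    """
    sampled = random.sample(interactions, k=min(sample_size, len(interactions)))
    for item in sampled:
        label = classify(item)
        if label == "urgent":
            alert_clinician(item)    # a human responds, not the AI
        elif label == "drift":
            log_for_review(item)     # feeds guardrail and prompt fixes
```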
Red flag for benefits leaders: Your vendor doesn’t have systematic review processes or established protocols for emergency response.
Moving into the future with confidence
Mental health benefits providers who thoughtfully balance AI with human oversight take advantage of everything the tech has to offer while keeping patients safe.
They care for larger populations by automating manualized interventions, while ensuring that patients receive constructive, not counterproductive, guidance. Meanwhile, by using AI to free clinicians from routine administrative tasks, like data capture and documentation, providers enable them to engage more quickly and meaningfully when human intervention is required.
For benefits leaders, partnering with a responsible vendor can create unprecedented value for employees. Your employees will have more immediate access to care when they need it. They’ll receive clinically sound care that helps them genuinely heal. And they’ll benefit from deeper, more meaningful relationships with clinicians when they need them the most.
When providers demonstrate that they’re using AI responsibly, you can be confident that your investment is having a deeply positive impact on the well-being and performance of each member of your workforce.
Catch the entire rebroadcast today
Author
The Lyra Team
The Lyra Team is made up of clinicians, writers, and experts who are passionate about mental health and workplace well-being. With backgrounds in clinical psychology, journalism, content strategy, and product marketing, we create research-backed content to help individuals and organizations improve workforce mental health.