The Polaris Principles: Lyra’s blueprint for promoting safety and effectiveness in mental health AI

October 9, 2025

Artificial Intelligence offers a powerful opportunity to expand access to mental health care. But not all AI tools are designed with the safeguards and clinical rigor that people engaging in mental health support need, especially when they’re in crisis. At Lyra, we believe that AI should never lower care quality—it should elevate it.

The most pressing question now is how we can effectively and safely harness AI's power for mental health. 

AI is evolving quickly, but the foundations of safe, ethical, and effective mental health care remain the same. Lyra’s Polaris Principles ground us in these fundamentals. They’re our blueprint to ensure AI is used responsibly in mental health to deliver on its transformative potential.

For a decade, Lyra has delivered evidence-based mental health care to millions worldwide, combining technology with a human touch. As we expand the role of AI to strengthen our care, we continue to uphold strict ethical standards and keep people’s wellbeing at the center of everything we do. Preserving the humanity at the heart of care is essential, which means respecting each person’s autonomy, lived experience, and background, as well as retaining connection to human providers when needed. With clinical outcomes as our north star, our approach is guided by science, member and customer feedback, and wide-ranging expertise.

Principle I: Safety is paramount

Lyra is committed to following strong ethical guidelines and clinical protocols, including those from respected organizations such as the American Psychological Association, the World Health Organization, and the International Coaching Federation, to deliver safe, effective, and practical AI support for mental health.

What makes this possible are the guardrails we built into our AI solutions from the start. Unlike generic AI, Lyra AI is designed with mental health-specific protocols at its core to protect people and promote therapeutic progress. These guardrails include:

  • Clinical-grade training - Decades of evidence-based clinical practice shape how Lyra AI responds, guiding members toward more helpful choices and behaviors rather than validating harmful ones.
  • Active monitoring and escalation - AI interactions are monitored, with clear protocols for looping in a human provider if risks or other concerns arise.
  • Rigorous testing and evaluation - Predefined benchmarks assess the safety, appropriateness, and inclusiveness of Lyra AI, promoting meaningful outcomes for each innovation.
  • Human expertise at the core - Clinical experts guide every stage of AI development and provide oversight. Drawing on diverse backgrounds and specialties, they ensure Lyra AI is both effective and inclusive.

Principle II: Human providers are critical 

AI can offer tremendous value to members via round-the-clock support. But AI also has limitations, which is why expert mental health providers remain the foundation of care. Lyra AI is designed to enhance and complement clinicians, creating a synergistic solution that drives better outcomes, accountability, and engagement.

Key clinical priorities:

  • Integrated care - Human providers and AI deliver a stronger care solution that is more coordinated, timely, and personalized for each client. Clients retain access to human providers when that is the right level of care for their needs. 
  • Accountability and oversight - Clinicians maintain ultimate responsibility for care, ensuring overall care supported by AI aligns with best practices and therapeutic goals. 
  • Enhanced outcomes - Combining human expertise with AI allows for more consistent, around-the-clock support and reinforcement of skills, improved monitoring of progress, and timely intervention when challenges arise. 

Principle III: Culturally responsive care is key to global reach 

AI has the potential to expand mental health support around the world, reaching people who might otherwise experience significant barriers to care. To be truly inclusive and effective, AI must reflect the values, language, and culture of the people it serves. Lyra prioritizes culturally responsive care to build trust, reduce stigma, and ensure meaningful impact across diverse populations.  

Key clinical priorities:

  • Inclusive AI design - Lyra AI is trained on diverse datasets and tested to reduce bias, increasing its relevance and effectiveness across different communities.
  • Local expertise - We tailor AI for local languages, customs, and cultures, in close collaboration with regional clinical experts.
  • Equity and accessibility - Lyra AI is designed to support people with a wide range of comfort with technology, so care is accessible and inclusive.

Principle IV: Innovation driven by science

Lyra advances mental health care by leveraging rigorous clinical science and research in AI development, so every intervention is evidence-based and designed to deliver meaningful clinical outcomes. Our approach sets a high standard for AI in mental health, which is not yet common across the industry.

Key clinical priorities:

  • Evidence-based - Lyra AI is trained on high-quality data linked to proven clinical outcomes.
  • Ongoing research and knowledge sharing - Lyra contributes to the field through peer-reviewed publications and clinical studies, transparently sharing outcomes to advance best practices in AI-supported mental health. 
  • Measurable clinical outcomes - AI innovations are evaluated for real-world improvements in clinical effectiveness, impact on overall well-being, and client engagement and satisfaction.

Guiding the future of mental health care

We’re excited about AI’s potential to expand access to mental health care at an unprecedented scale, and we’re committed to doing it with the highest ethical and clinical standards.

We apply our rigorous privacy standards to all Lyra technologies, including Lyra AI. Data is protected by strict security measures, and we are transparent about how data is collected and used to improve our AI models.

Lyra’s privacy policy 

Our commitment to security is foundational to everything we do, including our use of generative AI. We protect all client data with a comprehensive security program that is HITRUST certified and compliant with HIPAA. We are also ISO 27001 certified. 

Lyra’s security policy 

Author

Anita Lungu, PhD

VP of Clinical Product & Research

Dr. Lungu is a licensed clinical psychologist and clinical researcher who leads the Clinical Product and Research team at Lyra. She completed a PhD in computer science at Duke University, a PhD in clinical psychology at the University of Washington, and a postdoctoral fellowship at UCSF. At Lyra she was the lead clinical architect for all Lyra Care programs, responsible for their clinical definition, digital tools development, and scientific evaluation. She has published research in both computer science and clinical psychology, with more than 30 peer-reviewed articles in journals such as JAMA Psychiatry and the Journal of Medical Internet Research.
