What does it mean to stay human in an AI-driven world? As a licensed therapist, mindfulness teacher, and former innovation team member at New Hanover Regional Medical Center, I’ve spent years navigating this exact question.
This space isn’t about hype. It’s about intention.
It's about designing ethical, emotionally intelligent AI tools that support—not replace—human connection.
In my work, I emphasize face-to-face relationships, real-world grounding, and community care. But I also recognize that Gen Z—and generations to come—are growing up immersed in tech. Ignoring that reality won’t protect them. Designing better, safer tools might.
On this page, I share my personal and professional journey with AI:
• My past work helping build digital human support tools like Cardiac Coach
• How I’m using over 800 original therapeutic writings to inform AI language models (a rough sketch of what that can look like follows this list)
• Why human-centered AI can be a powerful harm reduction tool for mental health
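If you’re curious what “informing a language model” can look like in practice, here is a small, purely illustrative sketch of one early step: gathering clinician-authored writings into a structured dataset that a model could later be fine-tuned or evaluated on. The folder name, file layout, and tags below are assumptions for the sake of example, not my actual pipeline.

```python
# Hypothetical sketch: collecting a folder of original therapeutic
# writings into JSONL records that a language-model workflow could
# consume. Paths and the record layout are illustrative only.
import json
from pathlib import Path

WRITINGS_DIR = Path("therapeutic_writings")  # assumed folder of .txt files
OUTPUT_FILE = Path("training_data.jsonl")    # one JSON record per line

with OUTPUT_FILE.open("w", encoding="utf-8") as out:
    for path in sorted(WRITINGS_DIR.glob("*.txt")):
        text = path.read_text(encoding="utf-8").strip()
        if not text:
            continue  # skip empty files
        record = {
            "source": path.name,  # keeps authorship traceable
            "text": text,
            # Labels like these let later review filter for tone and safety.
            "tags": ["clinician-authored", "trauma-informed"],
        }
        out.write(json.dumps(record, ensure_ascii=False) + "\n")
```

The tags are the point: every line a model learns from stays traceable to a clinician-authored source and can be reviewed for tone and safety before it ever shapes a response.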
Because no matter how advanced the technology becomes, the mission remains clear:
Stay grounded. Stay connected. Stay human.
What is AI?
A system that learns from data to generate responses or actions. In therapy, that could mean instant support—but it needs heart.
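To make that one-sentence definition concrete, here is a toy model that “learns from data to generate responses.” The training text is invented and the method is deliberately tiny; real language models rest on the same statistical idea at enormously larger scale.

```python
# A toy illustration of "learns from data to generate responses":
# a tiny bigram model that picks each next word based on word pairs
# seen in its (invented) training text.
import random
from collections import defaultdict

training_text = (
    "you are not alone you are allowed to rest "
    "you are allowed to begin again"
)

# "Learning": count which word tends to follow which.
follows = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word].append(next_word)

# "Generating": start from a word and follow the learned patterns.
def generate(start: str, length: int = 6) -> str:
    word, output = start, [start]
    for _ in range(length):
        if word not in follows:
            break  # no observed continuation; stop
        word = random.choice(follows[word])
        output.append(word)
    return " ".join(output)

print(generate("you"))  # e.g. "you are allowed to rest you are"
```

Notice what the toy lacks: context, judgment, care. That gap is exactly why the language these systems learn from matters so much.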
Why Does It Matter in Mental Health?
Students are turning to AI models late at night when no human is available. Let’s ensure what they encounter is trauma-informed, ethical, and deeply compassionate.
Human + AI ≠ Replacement
My vision is not to replace human care, but to offer scalable, emotionally intelligent support when humans can’t be there.
With a background as a psychotherapist, licensed clinical social worker, and mindfulness teacher, I’ve always been drawn to the question: How can technology meet people where they are without replacing what matters most?
My career has always lived at the intersection of mental health and innovation.
I served on the innovation team at New Hanover Regional Medical Center, where I helped develop Cardiac Coach, a digital human avatar designed to support cardiac patients post-discharge. My role included crafting trauma-informed scripts, emotional rapport systems, and behaviorally engaging content.
That work taught me this: when AI is guided by compassion and clinically sound language, it can meet people where they are—even when no one else can.
Today, I bring the same ethos into my private practice, writing, and AI design work.
I advocate for supportive harm reduction when it comes to AI use. Rather than pretend these technologies aren’t influencing human behavior, I believe we need to shape their evolution—ethically, creatively, and with trauma-informed awareness.
I approach AI and mental health through a strict ethical and relational lens:
🟦 Human-Centered – Tools should support agency, not dependency
🟦 Clinically Authored – All content is grounded in evidence-based practices (CBT, ACT, MBSR, Mindful Self-Compassion, trauma-informed care)
🟦 Harm Reduction First – Design for real-world needs, not perfection
🟦 Voice Matters – A kind, poetic tone builds trust and presence
🟦 Transparency Always – Users know when they’re engaging with AI
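As one illustration, and only that, here is a hypothetical sketch of how principles like these might be written into a support chatbot’s system prompt and opening message. Every rule, name, and phrase below is an assumption for the sake of example, not the configuration of any real tool.

```python
# Hypothetical sketch only: encoding the five principles above as a
# system prompt and an always-on disclosure for a support chatbot.

SYSTEM_PROMPT = """\
You are a supportive companion, not a therapist and not a human.
- State clearly that you are an AI whenever asked, and at the start
  of every conversation (Transparency Always).
- Encourage real-world connection and professional care; never
  position yourself as a replacement for either (Human-Centered).
- Draw only on clinician-reviewed language grounded in CBT, ACT,
  MBSR, and trauma-informed care (Clinically Authored).
- Keep a kind, warm, unhurried tone (Voice Matters).
- If someone describes a crisis, share crisis resources and urge
  them toward a human right away (Harm Reduction First).
"""

DISCLOSURE = "Just so you know: I'm an AI support tool, not a person."

def open_conversation() -> str:
    # Transparency is structural: every session begins with the disclosure.
    return f"{DISCLOSURE} I'm glad you're here. What's on your mind?"

print(open_conversation())
```

The design choice worth noticing: transparency is built into how every conversation opens, not something a user has to think to ask.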
I believe that empathy can be encoded: not artificially, but through thoughtful, lived design.
The question isn’t whether AI belongs in mental health.
It’s: Can we shape it to reflect our highest values?
I believe we can.
This page is the start of that dialogue. I welcome connection, collaboration, and curiosity—especially from fellow clinicians, students, academics, and designers who are thinking deeply about this intersection of care and code.