
The developer of an AI therapy app canceled it after deciding it was too dangerous. This is why he believes AI chatbots are not safe for mental health

By Sage Lazzaro
Publication Date: 2025-11-28 13:00:00

Mental health issues related to the use of AI chatbots dominate the headlines. One person who has taken note of this is Joe Braidwood, a technology executive who launched an AI therapy platform called Yara AI last year. Pitched as a “clinically inspired platform designed to provide real, accountable support when you need it most,” Yara was trained by mental health experts to “provide compassionate, evidence-based advice tailored to your individual needs.” But the startup is no more: Earlier this month, Braidwood and his co-founder, clinical psychologist Richard Stott, shut down the company, discontinued its free product and canceled the launch of its upcoming subscription service, citing safety concerns.

“We stopped Yara because we realized we were building in an impossible space. AI can be wonderful for everyday stress, trouble sleeping or processing a difficult conversation,” he wrote on LinkedIn. “But the moment someone reaches truly vulnerable…
