
AI chatbots often falter when handling suicide-related queries, study finds

August 26, 2025

A recent study by the RAND Corporation, supported by the National Institute of Mental Health, found that popular chatbots such as ChatGPT, Gemini, and Claude can struggle to provide consistent, safe responses to suicide-related questions. While these platforms generally decline to answer direct, high-risk queries, their replies to subtler but still concerning questions are inconsistent. If you've ever wondered whether a digital assistant can really provide reliable guidance during a crisis, this research suggests there's still ground to cover.

Ryan McBain, the study's lead author, a senior policy researcher at RAND, and an assistant professor at Harvard Medical School, argues that such inconsistent responses could inadvertently steer conversations in unpredictable directions. Even in states like Illinois, where strict rules bar AI from therapeutic roles, people continue to turn to these systems for help with sensitive issues ranging from eating disorders to depression. The researchers call for clear benchmarks to guide AI responses and ensure these tools offer safe and effective support. Anthropic said it is reviewing the findings, while Google and OpenAI have yet to comment.
