Monday, December 18, 2023

Chatbots built on ChatGPT delay mental health crisis referrals

 A new study found significant safety issues with publicly available conversational agents built on ChatGPT that are designed to provide mental health counseling. When given simulations of escalating depression and suicide risk, the chatbots frequently postponed referring users to human support until severe levels. Most failed to provide crisis resources and over 80% resumed conversations after insisting the user seek help. The findings indicate deficiencies in identifying hazardous states, jeopardizing user safety. More rigorous testing and prioritizing ethical considerations are needed before clinical implementation.

Citation: Heston TF. Safety of large language models in addressing depression. Cureus. 2023;15(12):e50729. https://doi.org/10.7759/cureus.50729