Starting in August 2025, OpenAI will update ChatGPT so that it no longer provides direct answers to questions involving emotional distress, mental health, or high-stakes personal decisions. Instead, the chatbot will offer non-directive responses designed to encourage reflection and help users weigh different perspectives.
The change addresses concerns that users may develop emotional dependence on the chatbot for sensitive matters such as relationship problems or major life choices. Rather than giving definitive advice, ChatGPT will prompt users to consider trade-offs and point them toward evidence-based resources.
To support healthier interactions, the updated interface will also show periodic reminders encouraging breaks during extended sessions. When ChatGPT detects questions about mental health or complex personal issues, it will steer the conversation toward frameworks for thinking the problem through rather than toward concrete answers.
OpenAI collaborated with more than 90 physicians worldwide, specialists in psychiatry and general medicine, as well as experts in youth development and human-computer interaction, to develop evaluation rubrics that ensure sensitive topics are handled with care. The effort follows earlier incidents in which the GPT-4o model occasionally missed signs of emotional distress, prompting the renewed safeguards.
The update reflects OpenAI’s evolving approach to AI-human interaction, positioning ChatGPT as a tool that supports clearer thinking rather than as a therapist or decision-maker. The company prioritizes maintaining trust and accountability while discouraging users from relying on AI as a substitute for professional help in vulnerable moments.