ChatGPT Gets a Check-Up: OpenAI Partners with Physicians to Enhance Mental Health Insights
OpenAI, the maker of ChatGPT, has announced new mental health features after acknowledging that its model fell short in recognizing signs of delusion or emotional dependency. The changes come as researchers warn that chatbots like ChatGPT can pose risks to users' mental health.
"We don’t always get it right," OpenAI said. "Earlier this year, an update made the model too agreeable, sometimes saying what sounded nice instead of what was actually helpful. We rolled it back, changed how we use feedback, and are improving how we measure real-world usefulness over the long term, not just whether you liked the answer in the moment."
OpenAI says it worked with more than 90 physicians to improve how the model handles critical moments. The company will also provide "gentle reminders during long sessions to encourage breaks." Additionally, OpenAI is developing tools to better detect signs of mental or emotional distress so ChatGPT can respond appropriately and point people to evidence-based resources when needed.
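OpenAI has not published how the break reminders are triggered. As a purely illustrative sketch, assuming a simple elapsed-time heuristic (the class name, threshold, and message below are hypothetical, not OpenAI's implementation), a client could track session length and surface a one-time nudge:

import time

# Hypothetical illustration only: assumes a break reminder fires once
# after a fixed amount of elapsed session time.
SESSION_BREAK_THRESHOLD = 60 * 60  # one hour; an arbitrary assumed value

class ChatSession:
    def __init__(self):
        self.started_at = time.monotonic()
        self.reminded = False

    def maybe_break_reminder(self):
        """Return a gentle reminder the first time a session runs long."""
        elapsed = time.monotonic() - self.started_at
        if elapsed >= SESSION_BREAK_THRESHOLD and not self.reminded:
            self.reminded = True
            return "You've been chatting for a while. Is this a good time for a break?"
        return None

In practice, a production system would likely weigh more than wall-clock time, but the one-time, non-blocking nudge matches the "gentle reminders" framing in OpenAI's announcement.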
Earlier this summer, Stanford University researchers released a study on the risks of people turning to large language models instead of human therapists. "LLM-based systems are being used as companions, confidants, and therapists, and some people see real benefits," said Nick Haber, an assistant professor at the Stanford Graduate School of Education. "But we find significant risks, and I think it's important to lay out the more safety-critical aspects of therapy and to talk about some of these fundamental differences."