Introducing Trusted Contact in ChatGPT

OpenAI has embedded a crisis-intervention layer into ChatGPT by rolling out Trusted Contact, a feature that flags severe self-harm signals and alerts a user-designated contact. This move signals how frontier labs are operationalizing harm-mitigation beyond content filters, shifting responsibility partly to human networks rather than relying solely on algorithmic detection. The feature reflects growing pressure on AI providers to address mental-health edge cases and liability concerns, setting a precedent for how consumer LLM platforms might integrate social safety nets into their core experience.
Modelwire context
Skeptical read
The announcement is quiet on what actually constitutes a triggering signal, who audits that threshold, and whether OpenAI retains any liability if an alert fails to reach a contact in time. The feature also requires users to opt in and designate a contact, meaning the population most at risk may be the least likely to configure it.
This sits uncomfortably alongside OpenAI's behavioral tracking rollout, covered by The Decoder on May 2, where the company enabled ad-targeting data collection by default for free-tier users. That story established a two-tier model in which free users carry more exposure. Trusted Contact, which likewise requires user action to activate, follows the same structural logic: safety infrastructure is available but not guaranteed, and the burden falls on the user. Together, these moves suggest OpenAI is layering consumer-platform habits, both monetization and social-safety features, onto what began as a research product, without fully resolving the consent and reliability questions either approach raises.
Watch whether any third-party crisis organization, such as the 988 Lifeline or Crisis Text Line, publicly endorses or audits the triggering criteria within the next six months. Absence of that validation would suggest the feature is more liability management than clinically grounded intervention.
Coverage we drew on
This analysis is generated by Modelwire’s editorial layer from our archive and the summary above. It is not a substitute for the original reporting.
Mentions: OpenAI · ChatGPT · Trusted Contact
Modelwire Editorial
This synthesis and analysis was prepared by the Modelwire editorial team. We use advanced language models to read, ground, and connect the day’s most significant AI developments, providing original strategic context that helps practitioners and leaders stay ahead of the frontier.
Modelwire summarizes; we don’t republish. The full content lives on openai.com. If you’re a publisher and want a different summarization policy for your work, see our takedown page.