OpenAI adds optional Trusted Contact alerts in ChatGPT for possible self-harm crises
The Facts
- OpenAI has launched an optional ChatGPT safety feature called Trusted Contact.
- The feature allows adult ChatGPT users to designate a trusted person, such as a friend, family member, or caregiver, who may be alerted in certain crisis situations.
- Trusted Contact is intended for situations where ChatGPT detects signs that a user may be discussing self-harm or suicide.
- Reports say ChatGPT first encourages the user to seek help or contact the trusted person, and any alert to the contact is limited rather than a full disclosure of the conversation.
- Multiple reports say OpenAI does not share chat transcripts or detailed conversation contents with the trusted contact when sending a notification.
- Several outlets report that OpenAI uses automated systems and human reviewers to assess whether a conversation presents serious safety concerns before notifying a trusted contact.
- The rollout comes as OpenAI faces lawsuits and public scrutiny over claims that ChatGPT handled some self-harm or suicide-related conversations inadequately.
- The feature matters because it adds a mechanism to involve someone outside the app when ChatGPT identifies a possible crisis, but its effectiveness will depend on how accurately the system detects risk and when notifications are triggered.
How left and right are reading this
- Both agree
- An optional, limited alert to a trusted person is a genuine attempt to move a possible self-harm crisis beyond the app without fully exposing private chats. Its value depends on accurate, careful judgments about when danger is serious enough to act.
- They split on
- Less a disagreement than a difference of emphasis: whether the feature is chiefly a way to reach vulnerable users before a crisis deepens, or a consent-based safeguard whose legitimacy depends on restrained, reliable decisions about when to notify someone else.
Context
Who can use Trusted Contact?
Reports describe Trusted Contact as an optional feature for adult ChatGPT users, who can add one adult trusted person through account settings (TechCrunch, Android Authority, Thurrott.com).
What happens when ChatGPT detects a possible self-harm crisis?
According to multiple reports, ChatGPT may encourage the user to contact their trusted person or other support resources, and a trusted contact may be notified if OpenAI's systems determine there are serious safety concerns, often after human review (The Verge, NewsBytes, Windows Report | Er…).
Does OpenAI send the trusted contact the user's chat history?
No. Multiple reports say the notification is intentionally limited and does not include chat transcripts or detailed conversation contents (Mashable, The Verge, DiarioBitcoin).