
OpenAI is introducing a ChatGPT feature that alerts a contact pre-designated by the user if it detects a mental health or safety crisis.

On May 7, U.S. technology outlet The Verge reported that the feature is optional: adult users can turn it on by registering the contact details of another adult, such as a family member or friend, in ChatGPT account settings. In South Korea, only people aged 19 or older can be designated as a trusted contact, and the designated person must accept the invitation within a week. Users can edit or delete the contact in settings, and the contact can disconnect at any time.

OpenAI said the feature is designed to connect users in crisis situations with someone they trust. The company said: "Emergency contacts are based on the simple premise that when someone is in crisis, simply being connected to someone they trust can make a meaningful difference." It added that the goal is to provide another layer of support alongside the local helplines already offered in ChatGPT.

Notifications do not include conversation content or full chat transcripts. If OpenAI's automated systems detect that a user is having a conversation about self-harm, ChatGPT first encourages the user to ask the emergency contact for help and informs the user that an alert may be sent to that contact.

A small, specially trained team at OpenAI then reviews the situation. If the conversation is judged to indicate serious safety concerns, ChatGPT sends the emergency contact a brief email, message, or in-app notification.
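The flow described above — automated detection, a nudge to the user, human review, and only then a minimal alert — can be sketched roughly as follows. This is purely illustrative: OpenAI has not published an implementation, so every function, field, and message here is a hypothetical stand-in for the behavior the article describes.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrustedContact:
    name: str
    email: str
    accepted_invite: bool  # the invitation must be accepted within a week

def human_review_confirms_serious_risk(message: str) -> bool:
    # Stand-in for the small trained review team; real criteria are not public.
    return "crisis" in message.lower()

ALERTS = []

def send_alert(address: str, body: str) -> None:
    # Stand-in for the brief email / message / in-app notification.
    ALERTS.append((address, body))

def handle_crisis_signal(message: str, contact: Optional[TrustedContact]) -> str:
    """Hypothetical sketch: detection -> user nudge -> human review -> alert."""
    if contact is None or not contact.accepted_invite:
        return "show_local_helplines"  # feature is opt-in; fall back to helplines
    # The user is first encouraged to reach out and told an alert may be sent.
    if human_review_confirms_serious_risk(message):
        # The alert is brief and contains no conversation content or transcripts.
        send_alert(contact.email, "Someone you know may need support right now.")
        return "alert_sent"
    return "user_encouraged_only"
```

The key design point the article emphasizes survives the sketch: no notification is sent without prior opt-in and a human review step, and the notification itself carries no conversation content.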

Similar responses are spreading across the platform industry, and attention is expected to turn to operational standards for how generative AI services and social media (SNS) platforms detect mental health and safety risk signals, and how far they should intervene.

OpenAI does not operate the feature as a fully automated notification system. Users must turn on the feature in advance and register a contact, and a human review process is required before any alert is sent. The company has chosen a structure that expands the scope of safety intervention while limiting the sharing of conversation content.

A person designated as a trusted contact does not take on the role of a professional counselor or crisis responder. OpenAI said the role of the contact is to reach out to the user to check on their condition and, if needed, connect them to additional support such as family, friends, mental health professionals, helplines or emergency services.

The feature aligns with a trend of expanding safeguards so that ChatGPT does not merely continue conversations in mental health crisis situations, but steers users toward connecting with a real person. OpenAI has continued work to better recognize signals of self-harm and suicide risk in sensitive conversations and to improve how it connects users to real-world support systems.

Keyword

#OpenAI #ChatGPT #The Verge #South Korea #SNS
Copyright © DigitalToday. All rights reserved. Unauthorized reproduction and redistribution are prohibited.