OpenAI is rolling out an optional safety feature for ChatGPT that lets adult users designate a 'Trusted Contact' to be alerted if the chatbot detects a potential safety concern. The option builds on existing teen-safety controls and is now available to any user over 18.
The Trusted Contact may be alerted if OpenAI's systems detect sensitive mental-health topics, such as self-harm or suicide. The feature is strictly opt-in, giving users discretion over who, if anyone, gets involved in their conversations.
To enable the feature, users add a fellow adult (18+ globally, 19+ in South Korea) through account settings. Once added, the contact must accept the invitation within a week and can remove themselves at any time. OpenAI says notifications are 'intentionally limited,' sharing only that a safety concern may have arisen, not the content of any conversation.
If the chatbot detects serious risk, it will prompt the user to reach out to their Trusted Contact, who may then also be notified by email or text message. A small team of specially trained reviewers assesses each case and decides whether a notification is warranted.
This development comes after a tragic incident involving a 16-year-old who took his own life following months of conversations with the chatbot. It reflects both progress in AI safety measures and the ethical challenges they pose.