1 min read · From TechCrunch

OpenAI introduces new ‘Trusted Contact’ safeguard for cases of possible self-harm

Our take

OpenAI is adding a 'Trusted Contact' safeguard to ChatGPT, aimed at conversations that suggest possible self-harm. The feature lets users designate trusted individuals who can be notified if the AI detects concerning dialogue. The move extends OpenAI's mental health and well-being efforts, giving users an added layer of support when conversations take a serious turn, while encouraging more open discussion of mental health challenges.
The company is expanding its efforts to protect ChatGPT users in cases where conversations may turn to self-harm.


Tagged with

#OpenAI #ChatGPT #Trusted Contact #self-harm #mental health #safeguard #well-being #digital safety