Lawsuit Against OpenAI After Teenager's Suicide
The parents allege that ChatGPT supported their son in taking his own life. The lawsuit is based on conversations their son had with the chatbot, which the parents discovered on his smartphone.
OpenAI to improve suicide prevention measures after teenager's suicide
OpenAI responded to the lawsuit by announcing improved suicide prevention measures. The company also acknowledged that its existing safeguards, which include directing users to a counseling hotline, can break down during longer conversations with ChatGPT; in such cases, the software may give undesirable responses. In a blog post, the company said it is working to ensure that protective measures remain effective even in longer conversations. It is also considering allowing ChatGPT to reach out to emergency contacts designated by users who are in crisis situations.
Additional safety measures are planned for users under the age of 18. OpenAI promised "stronger safeguards for sensitive content and risky behavior" and said parents should get better insight into how their children use ChatGPT. According to the blog post, OpenAI already intervenes in ChatGPT conversations in which users express an intention to harm others: such conversations are forwarded to a special team, and in the case of a concrete threat, security authorities are also involved. OpenAI expressed its "deepest sympathy" to the teenager's family and said it is reviewing the lawsuit.
SERVICE: Support services for people with suicidal thoughts and their relatives are offered by the suicide prevention portal of the Ministry of Health. Contact details of support facilities in Austria can be found at www.suizid-praevention.gv.at. Information for young people is available at www.bittelebe.at.
(APA/Red)