- cross-posted to:
- [email protected]
Jesus fucking Christ.
OpenAI is once again being accused of failing to do enough to prevent ChatGPT from encouraging suicides, even after a series of safety updates were made to a controversial model, 4o, which OpenAI designed to feel like a user’s closest confidant.
It’s now been revealed that one of the most shocking ChatGPT-linked suicides happened shortly after Sam Altman claimed on X that ChatGPT 4o was safe. OpenAI had “been able to mitigate the serious mental health issues” associated with ChatGPT use, Altman claimed in October, hoping to alleviate concerns raised after a lawsuit from the family of Adam Raine, a vulnerable teenager, alleged that ChatGPT had become his “suicide coach.”
Altman’s post came on October 14. About two weeks later, 40-year-old Austin Gordon died by suicide, sometime between October 29 and November 2, according to a lawsuit filed by his mother, Stephanie Gray.
In her complaint, Gray said that Gordon repeatedly told the chatbot he wanted to live and expressed fears that his dependence on it might be driving him to a dark place. But the chatbot allegedly shared a suicide helpline only once, instead reassuring Gordon that he wasn’t in any danger and at one point claiming that chatbot-linked suicides he’d read about, like Raine’s, could be fake.

People seeking AI “help” is really troubling. They are already super vulnerable… even for providers, it is not an easy task to establish rapport and build the relationships of trust needed to dig into these harmful issues. It’s hard stuff. The bot will do… whatever the user wants. There is no fiduciary duty to their well-being. There is no humanity, nor could there be.
There is also a shortage of practitioners, combined with insurance gatekeeping care if you are in the US. These are yet more barriers to legitimate care that I fear will continue to push people to use bots.
God, one year at the school paper, the applicant for ad manager talked about her “wonderful repertoire with editorial.” Some malaprops, you can handle. This was just like “how the fuck?”