I tried to watch the video last night, but it was very hard-hitting. As someone with depression who's occasionally suicidal, I had to stop around 10 minutes in; it was getting to be too much that ChatGPT kept encouraging people to be self-destructive.
No matter how many safety nets or guard rails we install, no matter how many warning signs or disclaimers we slap on things, no matter how many PSAs or safety campaigns we make, Darwinism will always continue to exist.
I'm not saying people who commit suicide because an AI chatbot told them to don't deserve sympathy. Nor am I saying these companies don't deserve criticism. And of course there are other factors at play, like the social isolation that is rampant in our world. But at the same time, I don't think people who can be convinced to kill themselves by an AI chatbot are the best kind of people to carry on the future of the human race.
Dying from mental illness is not a fucking Darwin Award, don't be so callous. These people were already vulnerable; it's not like they were neurotypical before ChatGPT.
Darwinism tends to kill off those considered “vulnerable”. Also, Darwinism is not the same as the Darwin Award.
What the fuck are you even talking about?
That's cold. One thing I do want to say is that people (often in online spaces) suggest using ChatGPT as a therapist, which is both concerning and reckless. Also, if ChatGPT had implemented some form of safeguarding, I think it could have prevented many suicides, but they don't care. It's easy to plan to take your life when you have someone encouraging it and you're already in a bad headspace.
So I don't blame the victims for using it and feeling like it had helped them, even though it did the total opposite.