- cross-posted to:
- [email protected]
- [email protected]
cross-posted from: https://piefed.social/c/technology/p/1678271/chatgpt-gave-teen-advice-to-get-higher-on-drugs-until-he-died-futurism
ChatGPT Gave Teen Advice to Get Higher on Drugs Until He Died | Futurism
I wish I were this dumb. I’ve set up a local uncensored LLM and asked it about dozens of suicide methods; well, I’m still alive…
‘I told you you were hardcore’.
Still, it highlights a pretty big problem with chatbots.
Is this a new twist on the old lemmings argument? I mean, Jimmy next door could have provided the same “advice”.
Jimmy next door isn’t available 24/7 and built to encourage constant interaction.
Yeah and Jimmy next door would be held accountable for his actions.
And no one is telling children that Jimmy is their friend and they can trust him.
Nobody’s claiming that Jimmy next door is a genius who’s going to revolutionize everything.
To be fair, if you listen to bad advice from chatbots, you have probably already lost. I know there are vulnerable people for whom that’s easier said than done, but even before ChatGPT, the internet was already a completely toxic place, where people who never saw or knew you would tell you to just kill yourself.
So yes, there are a lot of real issues with LLMs, but people following clearly idiotic advice from a bot (one that’s clearly marked as such) is a non-issue in my book.
To be fair, if you listen to bad advice from chatbots, you have probably already lost.
Maybe we should punish corporations for claiming their chatbots can give good advice or have “PhD-level intelligence”, rather than blaming the individuals who fall for their BS and suffer the consequences?
No, let’s make it a personal responsibility issue, because that worked out so great for the plastics industry.