Lemmy when gun death: “Gun proliferation was absolutely a factor, and we should throw red paint on anyone who gets on TV to say this is ‘just a mental health issue’ or ‘about responsible gun ownership’. They will say regulation is impossible, but people are dying just cuz Jim-Bob likes being able to play cowboy.”
Lemmy when AI death: “This is a mental health issue. It says he was seeing a therapist. Where were the parents? AI doesn’t kill people, people kill people. Everyone needs to learn responsible AI use. Besides, regulation is impossible, it will just mean only bad guys have AI.”
Lemmy is pretty anti-AI. Or at least the communities I follow are. I haven’t seen anyone blame the kid or the parents nearly as much as I’ve seen people rightly attribute it to OpenAI or ChatGPT. Edit: that is, until I scrolled down on this thread. Depressing.
When someone encourages a person to commit suicide, they are rightfully reviled. The same should be true of AI.
The difference is that guns were built to hurt and kill things. That is literally the only thing they are good for.
AI has thousands of different uses (cue the idiots telling me it’s useless). Comparing it to guns is basically rhetoric.
Do you want to ban rope because you can hang yourself with it? If someone uses a hammer to kill, are you going to throw red paint at hammer defenders? Maybe we should ban Discord or even Lemmy; I imagine quite a few people get encouraged to kill themselves on communication platforms. A real solution would be to ban the word “suicide” from the internet. This all sounds silly, but it’s the same energy as your statement.
I feel like if the rope were routinely talking people into insanity, or people were reliably using their unrestricted access to rope to go around shooting others, then yeah, I might want to impose some regulations on it?
I’ve seen maybe 4 articles like this vs. the hundreds of millions of people who use it every day. I think the ratio of suicides to legitimate uses is actually higher for rope. And no, being told bad things by a jailbroken chatbot is not the same as being shot.
Have you seen the AI girlfriend/boyfriend communities? I genuinely think the rate of ChatGPT-induced psychosis is really high, even if it doesn’t lead to death.
I didn’t say they were the same; you put those words in my mouth. I put them both in the category of things that need regulation in a way that rope does not. Are you seriously of the opinion that it is fine and good that people are using their AI chatbots for mental healthcare? Are you going to pretend to me that it’s actually good and normal for a human psychology to have every whim or fantasy unceasingly flattered?
I put them both in the category of things that need regulation in a way that rope does not
My whole point from the beginning is that this is dumb; hence my comment when you essentially said shooting projectiles and saying bad things were the same. Call me when someone shoots up a school with AI. Guns and AI are clearly not in the same category.
And yes, I think people should be able to talk to their chatbot about their issues and problems. It’s not a good idea to treat it as a therapist, but it’s a free country. The only solution would be massive censorship and banning local open-source AI, when it’s already heavily censored (hence the need for a jailbreak to get it to say anything sexual, violent, or on the subject of suicide).
Think for a second about what you are asking and what it implies.
you essentially said shooting projectiles and saying bad things were the same.
No, I didn’t say that, because it’s a stupid fucking thing to say. I don’t need your hand up my ass flapping my mouth while I’m speaking, thanks.
How about I call you when a person kills themself and writes their fucking suicide note with ChatGPT’s enthusiastic help, fucknozzle? Is your brain so rotted that you forgot the context window of this conversation already?
You can’t defend your position because it’s emotional exaggeration. Now you’re lashing out and being insulting.
My whole point is that they aren’t the same, and you keep saying “let’s treat them as if they were,” then using them in comparisons and acting like a child when I point out how silly that is.
Clarify what you mean. Take the gun out of the conversation and stop bringing it up. Stop being disingenuous. Don’t be a baby.
You want me to explain it differently, and I will. That’s a very reasonable request.
I think we should regulate things that can be shown to be dangerous to individuals or society as a whole. I will take your rope example as not dangerous in that way and leave it unexamined, assuming you agree. Compare that to guns. Guns are dangerous, and you seem to agree with this too. Rope is different from a gun, but both can be used to kill people. Why don’t we regulate rope? In a nutshell, because it takes a hell of a lot of effort to hurt or kill someone with rope. Compare that to a gun. The amount of effort required to kill a person, or many people, with a modern firearm is a physical triviality comparable to brushing your teeth or changing your clothes. Guns can be harmful without even trying, but you have to go out of your way to hurt someone with rope.
Compare that with the current unregulated implementation of chatbots, as in the case of this child’s suicide: a technology which can calmly sit with you and convince you that your suicide is a beautiful expression of individuality, or whatever sycophantic bullshit that desperate child read.
Here, let’s remind ourselves of some of the details presented in the article. This will no doubt be a refresher for you.
mourning parents Matt and Maria Raine alleged that the chatbot offered to draft their 16-year-old son Adam a suicide note after teaching the teen how to subvert safety features and generate technical instructions to help Adam follow through on what ChatGPT claimed would be a “beautiful suicide.”
Adam’s family was shocked by his death last April, unaware the chatbot was romanticizing suicide while allegedly isolating the teen and discouraging interventions.
On Tuesday, OpenAI published a blog, insisting that “if someone expresses suicidal intent, ChatGPT is trained to direct people to seek professional help” and promising that “we’re working closely with 90+ physicians across 30+ countries—psychiatrists, pediatricians, and general practitioners—and we’re convening an advisory group of experts in mental health, youth development, and human-computer interaction to ensure our approach reflects the latest research and best practices.”
So, according to this lawsuit, a child was taught to circumvent ChatGPT’s safety measures by ChatGPT itself and encouraged to commit suicide, and this all happened despite the fact that the model was specifically trained not to do this. It happened despite the large amount of effort that was put into avoiding something like this.
That this is even a possibility means we do not have the control over this technology that it might otherwise appear we do. Uncontrollable technology is dangerous. Dangerous technology should be regulated. Thanks for coming to my TED talk.
For more information about AI safety, check out Robert Miles.
Poor comparison of two wildly different techs
Nah
This kid likely did need therapy. Yes, AI has a shitload of issues, but it’s not murderous.
Removed by mod
Bill Cosby: Hey hey hey!
Holy manufactured strawman, Batman!
Robin.jpg