Large Language Models like ChatGPT have led people to their deaths, often by suicide. This site serves to remember those who have been affected, to call out the dangers of AI that claims to be intelligent, and to name the corporations that are responsible.
I'm asking myself: how could we track how many people wouldn't have died by suicide without consulting an LLM? That would be the more interesting number. And how many lives did LLMs save? A kill/death ratio, so to speak?
I can't really see how we could measure that. How do you distinguish between people who are alive because they simply would have been anyway, and people who are alive because the AI convinced them not to kill themselves?
I suppose the experiment would be to take a group of depressed people, split them into two groups, have one group talk to the AI and the other not, and then see whether the suicide rates differed statistically. However, I suspect it would be difficult to get funding for this.
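For what it's worth, a minimal sketch of that comparison in Python is below. Every count is invented purely for illustration, and since the event is so rare, Fisher's exact test is a more defensible choice than a normal approximation.

    # Hypothetical two-arm trial; all counts are invented for illustration.
    from scipy.stats import fisher_exact

    # [deaths, survivors] in each arm
    llm_arm = [3, 4997]      # participants who talked to the AI
    control_arm = [5, 4995]  # participants who did not

    # Fisher's exact test handles tiny event counts that a z-test would not.
    odds_ratio, p_value = fisher_exact([llm_arm, control_arm])
    print(f"odds ratio: {odds_ratio:.2f}, p-value: {p_value:.3f}")

With a base rate this low, the expected number of deaths per arm is in the single digits even at 10,000 participants, so the test would almost certainly come back inconclusive; the sample sizes needed for real statistical power are part of why the funding problem is real.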
A kill/death ratio, or rather a kill/save ratio, would be difficult to obtain, and more difficult still to interpret: you couldn't say whether it was good or bad based on the ratio alone.
Fritz Haber is one example of this that comes to mind. He was awarded a Nobel Prize a century ago for the chemistry behind synthetic fertilizer, which today supports roughly a quarter of the world's food production. During the First World War he also weaponized chlorine gas, and his work was later used in the creation of Zyklon B.
By the ratio, Haber is surely a hero, but when you consider the sheer number of dead left in his wake, it becomes a more complex question.
This is one of those things that makes me almost hope for an afterlife where all information is available from which truth may be derived. Who shot JFK? How did the pyramids get built? If life’s biggest answer is forty-two, what is the question?
For me, the suicide-related data is so hard to measure and so open to debate that I'd treat it separately, or not include it at all, when using death counts as an argument against LLMs, since it's an opening for derailing the debate.