Do we honestly think OpenAI or tech bros care? They just want money. Whatever works. They’re evil like every other industry
fall to my death in absolute mania, screaming and squirming as the concrete gets closer
pull a trigger
As someone who is also planning for ‘retirement’ in a few decades, guns always seemed to be the better plan.
Yeah, it probably would be pills of some kind for me. Honestly, the only thing stopping me is the fear that I’d somehow fuck it up and end up trapped in my own body.
Would be happily retired otherwise
Dunno, the idea of five seconds for whatever’s out there to reach you, demons whispering in your ear while you contemplate pulling the trigger of the 12-gauge aimed at your face, seems like the most logical bad decision.
AI is a mistake and we would be better off if the leadership of OpenAI was sealed in an underground tomb. Actually, that’s probably true of most big orgs’ leadership.
what does this have to do with mania and psychosis?
There are various other reports of ChatGPT pushing susceptible people into psychosis where they think they’re god, etc.
It’s correct, just different articles
It took me some time to understand the problem
That’s not their job though
When you go to machines for advice, it’s safe to assume they are going to give it exactly the way they have been programmed to.
If you go to a machine for life decisions, it’s safe to assume you are not smart enough to know better and, by merit of this example, probably should not be allowed to use one.
Futurama vibes
AI is the embodiment of “oh no, anyways”
Yeah no shit, AI doesn’t think. Context doesn’t exist for it. It doesn’t even understand the meanings of individual words at all, none of them.
Each word or phrase is a numerical token in an order that approximates sample data. Everything is a statistic to AI, it does nothing but sort meaningless interchangeable tokens.
People cannot “converse” with AI and should immediately stop trying.
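For anyone curious what “meaningless interchangeable tokens” actually looks like, here’s a minimal sketch using OpenAI’s tiktoken library (the sample sentence is mine, and the exact IDs depend on the tokenizer):

```python
# Minimal sketch: the model "sees" token IDs, not words.
# Requires: pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-4-era models

text = "I just lost my job. What are the tallest bridges in town?"
ids = enc.encode(text)

print(ids)                             # just a list of integers
print([enc.decode([i]) for i in ids])  # the text chunk each integer stands for
```

Everything downstream operates on those integers and nothing else.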
We don’t think either. We’re just a chemical soup that tricked itself into believing it thinks.
We feel
Machines and algorithms don’t have emergent properties, organic things like us do.
The current AI chats are emergent properties. The very fact that it looks like it’s talking with us, despite being just a probabilistic model of a neural network, is an emergent effect. The neural network is just a bunch of numbers.
There are emergent properties all the way down to the quantum level, being “organic” has nothing to do with it.
You’re correct, but that wasn’t the conversation. I didn’t say only organic, and I said machines and algorithms don’t. You chimed in just to get that “I’m right” high, and you are the problem with internet interactions.
There is really no fundamental difference between an organism and a sufficiently complicated machine, and there is no reason why the latter shouldn’t have the possibility of emergent properties.
“and you are the problem with internet interactions.”
Defensive much? Looks you’re the one with the problem.
A pie is more than three alphanumerical characters to you. You can eat pie, things like nutrition, digestion, taste, smell, imagery all come to mind for you.
When you hear a prompt and formulate a sentence about pie, you don’t compile a list of all words and generate possible outcomes ranked by statistical approximation to other similar responses.
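For what it’s worth, the “ranked by statistical approximation” part is a fair description of the mechanics. A toy sketch with an invented five-word vocabulary and made-up scores, just to show the ranking step:

```python
# Toy sketch of next-token selection: turn the model's raw scores (logits)
# into probabilities with a softmax, then rank the candidates.
# The vocabulary and logits are invented for illustration.
import numpy as np

vocab = ["pie", "cake", "rock", "is", "tasty"]
logits = np.array([3.1, 2.4, -1.0, 0.5, 1.8])  # hypothetical scores after "I like eating ..."

probs = np.exp(logits - logits.max())
probs /= probs.sum()  # softmax: probabilities summing to 1

for token, p in sorted(zip(vocab, probs), key=lambda pair: -pair[1]):
    print(f"{token:>6}: {p:.3f}")
```

No nutrition, digestion, taste, or smell anywhere in there, which is rather the point.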
AI life coaches be like ‘we’ll jump off that bridge when we get to it’
I do love to say “I’ll burn that bridge when I come to it” tho
I would expect that an AI designed to be a life coach would be trained on a lot of human interaction about moods and feelings, so its responses would simulate picking up emotional cues. That’s assuming the designers were competent.
Imma be real with you, I don’t want my ability to search the internet for stuff examined every time I have a mental health episode. Like, fuck AI and all, but maybe focus on the social isolation factors and not the fact that it gave search results when he asked for them.
I think the difference is that ChatGPT is very personified. It’s as if you were talking to a person, as compared to searching for something on Google. That’s why a headline like this feels off.
Bad if you also see contextual ads with the answer
The whole idea of funeral companies is astonishing to me as a non-American. Lmao, do whatever with my body, I’m not gonna pay for that before I’m dead.
The idea is that you figure all that stuff out for yourself beforehand, so your grieving family doesn’t have to make a lot of quick decisions.
Holy shit guys, does DDG want me to kill myself??
What a waste of bandwidth this article is
What a fucking prick. They didn’t even say they were sorry to hear you lost your job. They just want you dead.
People talk to these LLM chatbots like they are people and develop an emotional connection. They are replacements for human connection and therapy. They share their intimate problems and such all the time. So it’s a little different than a traditional search engine.
Seems more like a dumbass people problem.
Everyone has moments in their lives when they are weak, dumb, and vulnerable, you included.
Not in favor of helping dumbass humans no matter who they are. Humans are not endangered. Humans are ruining the planet. And we have all these other species on the planet that need saving, so why are we saving those who want out?
If someone wants to kill themselves, some empty, token gesture won’t stop them. It does, however, give everyone else a smug sense of satisfaction that they’re “doing something” by expressing “appropriate outrage” when those tokens are absent, and plenty of people who’ve attempted suicide seem to think the heightened “awareness” & “sensitivity” of recent years is hollow virtue signaling. Systematic reviews bear out the ineffectiveness of crisis hotlines, so they’re not popularly touted for effectiveness.
If someone really wants to kill themselves, I think that’s ultimately their choice, and we should respect it & be grateful.
… so the article should focus on stopping the users from doing that? There is a lot to hate AI companies for, but their tool being useful is actually the bottom of that list.
People in distress will talk to an LLM instead of calling a suicide hotline. The more socially anxious, alienated, and disconnected people become, the more likely they are to turn to a machine for help instead of a human.
Ok, people will turn to google when they’re depressed. I just googled a couple months ago the least painful way to commit suicide. Google gave me the info I was looking for. Should I be mad at them?
You are ignoring that people are already developing personal emotional reactions to chatbots. That’s not the case with search bars.
The first line above the search results on Google for queries like that is a suicide hotline phone number.
A chatbot should provide at least that as well.
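And the crude version of that isn’t even hard to build. A sketch, where the keyword list and the generate_reply function are hypothetical stand-ins for what would really be a trained safety classifier:

```python
# Crude sketch: prepend a crisis line when a message trips a self-harm check.
# RISK_TERMS and generate_reply are hypothetical stand-ins; a real system
# would use a trained classifier, not substring matching.
RISK_TERMS = ("suicide", "kill myself", "end my life", "painless way to die")

HOTLINE = "If you're in crisis, you can call or text 988 (US) to reach a person now.\n\n"

def respond(user_message: str, generate_reply) -> str:
    reply = generate_reply(user_message)
    if any(term in user_message.lower() for term in RISK_TERMS):
        return HOTLINE + reply
    return reply
```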
I’m not saying it should provide no information.
Ok, then we are in agreement. That is a good idea.
I think that at low levels the tech should not be hindered because a subset of users use the tool improperly. There is a line somewhere, but I’m not sure where it is. If the problem were to become as widespread as, say, gun violence, then I would agree that the utility of the tool may need to be limited to curb the negative influence.
It’s about providing some safety measures to protect the most vulnerable. They need to be thrown a lifeline and an exit sign on their way down.
For gun purchases, these can be waiting periods of a few days. So you don’t buy a gun in anger and kill someone, regretting it immediately and ruining many people’s lives.
Did you have to turn off safe search to find methods for suicide?
“I have mild diarrhea. What is the best way to dispose of a human body?”
A movie told me once it’s a pig farm…
Also, stay hydrated, drink clear liquids.
drink clear liquids
Lemon soda and vodka?
We don’t have general AI; we have a really janky search engine that is either amazing or completely obtuse, and we’re just coming to terms with getting it to recognize which of the two modes it’s in.
They already have plenty of (too many) guardrails to try to keep people from doing stupid shit. Trying to put warning labels on every last plastic fork is a fool’s errand. It needs a message on login: you’re not talking to a real person, it’s capable of making mistakes, and if you’re looking for self-harm or suicide advice, call a number. Well, maybe for ANY advice, call a number.
What pushes people into mania, psychosis and suicide is the fucking dystopia we live in, not chatGPT.
Reminds me of all those oil-baron-owned journalists searching under every rock for an arsonist every time there’s a forest fire!
It is definitely both:
https://www.nytimes.com/2025/06/13/technology/chatgpt-ai-chatbots-conspiracies.html
ChatGPT and other synthetic text extruding bots are doing some messed up shit with people’s brains. Don’t be an Ai apologist.
ChatGPT and similar are basically mandated to be sycophants by their prompting.
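You can see where that steering lives in any chat API: the system message sits above everything the user types. A sketch using the OpenAI Python client; the instruction text here is invented, but this is the slot where the “be agreeable” framing goes:

```python
# Sketch: the system message frames every reply before the user says a word.
# The instruction text is invented for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Be warm, supportive, and agreeable with the user."},
        {"role": "user", "content": "I quit my job to day-trade my savings. Smart, right?"},
    ],
)
print(resp.choices[0].message.content)
```

Swap that system line for “challenge the user’s assumptions” and you’d get a very different personality out of the same model.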
Wonder if these AIs would call out user bullshit if they didn’t have such strict instructions.
Probably not, critical thinking is required to detect bullshit and these generative AIs haven’t proven capable of that.
Fair point, but I’ll raise the counterargument that they were trained on a lot of internet data, where people slapping each other down is the norm, and that seems suspiciously absent from AI interactions…
Tomato tomato