There is no wrong way to use it, considering they want everyone hooked on it. So it’s their fault. Under ZERO circumstances would I say an LLM company is in the clear when its LLM guides people towards killing themselves, no matter what they claim.
“Holding your phone wrong” moment?
If a teen can bypass the safety measures on your software that easily, then they don’t work 🤷‍♂️
Oh, that’s not a lawsuit in the making… Ugh, I can’t wait for AI to lose all its staunch supporters.
It’s not even real AI, either; just generative machine learning that outputs what it thinks we’ll like based on a prompt. No prompt, no response, not intelligent.
Yeah, which is what really grinds my gears: people let these techbros get away with shaping the narrative about their dressed-up LLMs, which aren’t ever going to be intelligent. LLMs themselves can be useful for some things, but they will never be a Swiss Army knife that can be used for anything. True AI is going to be the product of proper research and sustained effort across multiple disciplines; neuroscience, psychology, and other tech fields will need to work together to produce something that is genuinely artificially intelligent.
Oh, that’s not a lawsuit in the making…
The article is talking about one of the ongoing lawsuits against the company.
My mistake; commenting on the internet while tipsy is… a choice. I forgot to read this article.