Headline is super misleading… the article says that ChatGPT told him it couldn't give him drug advice and that he should seek help. He actually got good advice from ChatGPT, but he didn't like or trust it, and then spent months trying to get ChatGPT to give him the dodgy advice he wanted.
Of course ChatGPT shouldn't be giving that sort of advice, but man, that headline is as misleading as it gets. He literally didn't trust the advice he got from ChatGPT to seek help.
Come on. I can get my hands on some robotic parts, connect them, and program them to cut my head off. Is the technology at fault here? Or is it better to ask why the fuck there is no mandate for the LLM to report this to <insert institution>, and for that institution to react, coupled with severe punishments if either doesn't?
