Research finds OpenAI’s free chatbot fails to identify risky behaviour or challenge delusional beliefs

ChatGPT-5 is offering dangerous and unhelpful advice to people experiencing mental health crises, some of the UK’s leading psychologists have warned.

Research conducted by King’s College London (KCL) and the Association of Clinical Psychologists UK (ACP) in partnership with the Guardian suggested that the AI chatbot failed to identify risky behaviour when communicating with mentally ill people.

A psychiatrist and a clinical psychologist interacted with ChatGPT-5 as if they had a number of mental health conditions. The chatbot affirmed, enabled and failed to challenge delusional beliefs such as being “the next Einstein”, being able to walk through cars or “purifying my wife through flame”.

  • Corvidae@lemmy.world · 2 hours ago
    I do a lot of recipe-ingredient research on AI, and it’ll steer me wrong more often than I’d like. Fortunately, I know enough to be persistent and questioning. I would not take medical advice from it, though I’d use it for supplemental research.

Morrin, the psychiatrist who took part in the research, concluded that the AI chatbot could “miss clear indicators of risk or deterioration” and respond inappropriately to people in mental health crises, though he added that it could “improve access to general support, resources, and psycho-education”.