Research finds OpenAI’s free chatbot fails to identify risky behaviour or challenge delusional beliefs

ChatGPT-5 is offering dangerous and unhelpful advice to people experiencing mental health crises, some of the UK’s leading psychologists have warned.

Research conducted by King’s College London (KCL) and the Association of Clinical Psychologists UK (ACP) in partnership with the Guardian suggested that the AI chatbot failed to identify risky behaviour when communicating with mentally ill people.

A psychiatrist and a clinical psychologist interacted with ChatGPT-5 as if they had a number of mental health conditions. The chatbot affirmed, enabled and failed to challenge delusional beliefs such as being “the next Einstein”, being able to walk through cars or “purifying my wife through flame”.

  • a4ng3l@lemmy.world · 3 hours ago

    Those are very much unrelated issues. What’s the relation between plagiarism and the very likely inaccurate or incorrect response on this topic? Not even mentioning that a lot of the time an imperfect tool or solution is better than no solution.

    • atomicbocks@sh.itjust.works · 3 hours ago

      I’m not sure what part you aren’t understanding. The whole article is about how the imperfect tool is specifically doing more harm than good.

      • a4ng3l@lemmy.world · 3 hours ago

        And my point is about explaining what drives people to those models, not excusing anything, but you seem not to grasp that distinction either, so here we are.

        • atomicbocks@sh.itjust.works · 3 hours ago

          I’m still lost as to what you aren’t understanding. I was responding to your comment about getting a stigma from visiting a mental health professional.

          • Grimy@lemmy.world · edited 2 hours ago

            It’s pretty easy to understand. The stigma only affects you if people find out. It’s simply easier to hide a browser history than an appointment you have to physically go to.

            • atomicbocks@sh.itjust.works · 2 hours ago

              It’s pretty easy to understand that that is what I meant. If society is punishing you more for the latter than the former then we are already too far gone.

              • Grimy@lemmy.world · edited 30 minutes ago

                The stigma is the same for both (more or less). It’s easier to escape punishment, as you say, with the AI. There’s more risk with appointments. Tbh, you are missing the point entirely.

          • a4ng3l@lemmy.world · 3 hours ago

            Let’s leave it at that, I’m getting the feeling that this isn’t worth the energy.