• James R Kirk@startrek.website · 103 points · 5 days ago

    LLMs cannot lie/gaslight because they do not know what it means to be honest. They are just next-word predictors.

    I think the ads are terrible too, but it’s a fool’s errand to try to reason with an LLM chatbot.

    • MudMan@fedia.io · 27 points · 5 days ago

      Man, seriously, every time I see someone get into these weird conversations trying to convince a chatbot of something, it’s slightly disturbing. Not realizing how pointless it is, and knowing it’s pointless but still being drawn in by the less uncanny-valley-ish language, are about on par with each other.

      People keep sharing this as proof of AI shortcomings, but honestly it makes me worry most about the human side. There’s zero new info to be gained from the chatbot’s behavior.

      • James R Kirk@startrek.website · 1 point · 4 days ago

        Well said! On one hand I suppose I am “happy” to see people questioning the value of these bots, but assuming a bot “understands” anything, or has “motive,” is still giving them power they don’t have and, IMO, leaving the door open to being fooled/manipulated by them in the future.

    • Jankatarch@lemmy.world · 15 points · 5 days ago

      They take a sentence and predict what the first result on Google or a reply on WhatsApp would look like.
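      For anyone curious what “next-word prediction” means mechanically, here is a deliberately tiny sketch: a bigram model that, given the current word, outputs the word most often seen after it in a toy corpus. Real LLMs are neural networks over subword tokens trained on vastly more text, but the training objective is the same shape: predict what comes next. The corpus and function names below are made up for illustration.

```python
from collections import Counter, defaultdict

# Toy "next-word predictor": count which word follows which
# in a tiny corpus, then predict the most frequent successor.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1  # tally each observed word pair

def predict_next(word):
    """Return the most frequent successor of `word`, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often in the corpus
```

      There is no notion of truth or intent anywhere in this loop, only frequency; scaling it up changes the quality of the predictions, not the nature of the objective.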