Tech experts are starting to doubt that ChatGPT and A.I. ‘hallucinations’ will ever go away: ‘This isn’t fixable’

Experts are starting to doubt it, and even OpenAI CEO Sam Altman is a bit stumped.

  • fubo@lemmy.world · +9/−2 · 1 year ago

    The way that one learns which of one’s beliefs are “hallucinations” is to test them against reality — which is one thing that an LLM simply cannot do.

      • KevonLooney@lemm.ee · +8/−2 · 1 year ago

        Why do you assume they will improve over time? You need good data for that.

        Imagine a world where AI chatbots create a lot of the internet. Now that “data” is scraped and used to train other AIs. Hallucinations could easily persist in this way.

        Or humans could all just post “the sky is green” everywhere. Once that gets scraped, the resulting AI will learn that the word “green” follows “the sky is”. Instant hallucination.

        These bots are not thinking about what they type. They are copying the thoughts of others. That’s why they can’t check anything. They are not programmed to be correct, just to spit out words.
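        The “sky is green” scenario can be sketched with a toy next-token model. (This is a drastic simplification of a real LLM — it just counts word pairs in a made-up corpus — but the failure mode it shows is the one described above: the model repeats whatever claim dominates its training data, with no way to check it against reality.)

        ```python
        from collections import Counter, defaultdict

        def train_bigrams(corpus):
            """Count, for each word, how often each next word follows it."""
            counts = defaultdict(Counter)
            for doc in corpus:
                words = doc.lower().split()
                for a, b in zip(words, words[1:]):
                    counts[a][b] += 1
            return counts

        def complete(counts, prompt, n=1):
            """Greedily append the most frequent next word, n times."""
            words = prompt.lower().split()
            for _ in range(n):
                candidates = counts[words[-1]].most_common(1)
                if not candidates:
                    break
                words.append(candidates[0][0])
            return " ".join(words)

        # Hypothetical "scraped" corpus: the false claim outnumbers the true one.
        corpus = ["the sky is green"] * 100 + ["the sky is blue"] * 10
        model = train_bigrams(corpus)
        print(complete(model, "the sky is"))  # prints "the sky is green"
        ```

        Nothing in the model distinguishes true from frequent — it has no notion of correctness to optimize for, only co-occurrence statistics.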

    • uranos@sh.itjust.works · +1/−1 · 1 year ago

      Yeah, because it would be impossible to have an LLM running a robot with visual, tactile, etc. recognition, right?