• JeremyHuntQW12@lemmy.world · 7 hours ago

    No, that’s only a tiny part of what LLMs do.

    When you enter a sentence, it is first split into tokens, and each token is mapped to an embedding vector. Attention layers then score those vectors against one another, and the model generates a response one token at a time by predicting each next token from the resulting probability distribution.
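
    A rough sketch of that loop, assuming the Hugging Face transformers library with GPT-2 as a stand-in model (the model choice is purely illustrative, nothing here is specific to any particular chatbot):

    ```python
    # Sketch: tokenize -> embed -> score next-token candidates.
    # GPT-2 is an illustrative stand-in, not any production model.
    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tokenizer("The couch will not fit through the", return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits      # one score per vocab entry, per position

    next_token_logits = logits[0, -1]        # scores for the token after the prompt
    probs = torch.softmax(next_token_logits, dim=-1)
    top = torch.topk(probs, 5)
    for p, idx in zip(top.values, top.indices):
        print(f"{tokenizer.decode(idx):>10s}  {p:.3f}")
    ```

    There is no database lookup at generation time: the response is rebuilt token by token from those distributions.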

    Unlike most software we’re familiar with, LLMs are probabilistic in nature. The link between the training data and any given output is loose and unstable. That instability is the source of generative AI’s power, but it also consigns AI to never quite knowing, with 100 percent certainty, whether what it says is true.
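
    To make “probabilistic” concrete: the model emits a distribution over next tokens, and sampling from it (here with a temperature knob) can give a different continuation on every run. A toy sketch with made-up logits rather than a real model:

    ```python
    import torch

    torch.manual_seed(0)  # remove this line and each run can differ

    vocab = ["couch", "door", "legs", "saw", "room"]
    logits = torch.tensor([2.0, 1.5, 1.0, 0.5, 0.2])  # made-up scores

    def sample(temperature: float) -> str:
        # Higher temperature flattens the distribution -> more varied picks.
        probs = torch.softmax(logits / temperature, dim=-1)
        idx = torch.multinomial(probs, num_samples=1).item()
        return vocab[idx]

    for t in (0.2, 1.0, 2.0):
        print(f"temperature {t}: {[sample(t) for _ in range(5)]}")
    ```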

    But what is truth? As Lionel Huckster would say.

    Most of these so-called “hallucinations” are not errors at all. What has often happened is that someone ran multiple prompts and only posted the final result.

    For instance, in one example Gemini suggested cutting the legs off a couch to fit it into a room. What the poster failed to reveal was that they were using Gemini to come up with solutions to problems in a text adventure game…