A beautiful explanation of what LLMs cannot do. Choice sneer:

If you covered a backhoe with skin, made its bucket look like a hand, painted eyes on its chassis, and made it play a sound like “hnngghhh!” whenever it lifted something heavy, then we’d start wondering whether there’s a ghost inside the machine. That wouldn’t tell us anything about backhoes, but it would tell us a lot about our own psychology.

Don’t have time to read? The main point:

Trying to understand LLMs by using the rules of human psychology is like trying to understand a game of Scrabble by using the rules of Pictionary. These things don’t act like people because they aren’t people. I don’t mean that in the deflationary way that the AI naysayers mean it. They think denying humanity to the machines is a well-deserved insult; I think it’s just an accurate description.

I have more thoughts; see comments.

  • mountainriver@awful.systems · 13 days ago

    general-purpose simulators which simulate conversations that agents, oracles, genies, or tools might have

    Good formulation, but in the spirit of the article I would say “might have had”. Being by definition trained on existing material, they can produce likely imitations of conversations that already exist. One would suppose the value of a conversation between oracles and genies would be to produce something new, in effect text that is more than the statistically likely output.

    Good article, thanks for linking it.