• mindbleach@sh.itjust.works
    5 hours ago

    This kind of assertion wildly overestimates how well we understand intelligence.

Higher levels of bullshitting require more abstraction and self-reference. Meaning must be inferred from observation in order to make certain decisions, even when the task is just picking words from a list.

Current models are abstract enough to see a chessboard in an Atari screenshot, figure out which piece each jumble of pixels represents, and provide a valid move. Scoffing because it’s not actually good at chess is a bizarre place to draw the line and declare there’s zero understanding involved.

Current models might be abstract enough that you can teach them a new game by explaining the rules.

Current models are not abstract enough that you can explain why they’re bad at a game and expect them to improve.