AI researchers at Andon Labs embedded various LLMs in a vacuum robot to test how ready they were to be embodied. And hilarity ensued.

  • candyman337@lemmy.world
    2 days ago

    It didn’t channel anyone; it can’t think or learn, it’s just spouting out patterns it recognizes.

      • nymnympseudonym@piefed.social
        2 days ago

        This is my reaction to so many low-effort posts by people who presumably are not practitioners, designers, or researchers in AI

        “THEY HALLUCINATE!” So do you. 90% of what you see right now is your visual cortex filling in the gaps between the tiny points your fovea can actually resolve at any moment.

        “IT’S JUST STATISTICS!” Correct. Now tell me how neuronal activity differs from this. I’ll wait.
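        For what it’s worth, the “just statistics” part can be sketched in a few lines: at each step an LLM samples the next token from a learned probability distribution. The tokens and probabilities below are made up for illustration, not from any real model.

        ```python
        import random

        # Toy next-token distribution a model might assign after some prompt
        # (probabilities are invented for this example).
        next_token_probs = {
            "stuck": 0.5,
            "lost": 0.3,
            "confused": 0.2,
        }

        def sample_next_token(probs):
            """Pick one token, weighted by its probability."""
            tokens = list(probs)
            weights = [probs[t] for t in tokens]
            return random.choices(tokens, weights=weights, k=1)[0]

        token = sample_next_token(next_token_probs)
        print(token)  # one of "stuck", "lost", or "confused"
        ```

        The open question in this thread is whether that weighted sampling is meaningfully different in kind from what neurons do.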

  • Pennomi@lemmy.world
    2 days ago

    Well yeah, LLMs, unlike bodies, aren’t punished for doing stupid things.

    If it spins around in circles arguing with itself, it has still accomplished its purpose - to generate text. But a real body penalizes you for wasting time when looking for food.

      • Pennomi@lemmy.world
        2 days ago

        Sure, but that’s not equivalent to muscular expenditure: every action costs energy, so animals learn to be efficient with their movements and thoughts. Also, an LLM cannot readjust its weights in real time the way a brain can.
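        The frozen-weights point can be sketched with a toy one-weight model (all numbers here are illustrative): at inference time the weight never changes, and changing it would require an explicit training step that a deployed model does not run between queries.

        ```python
        # Toy single-weight "model" predicting y = w * x.
        w = 0.5   # learned weight, frozen during inference
        lr = 0.1  # learning rate, used only during training

        def infer(x):
            # Inference: w stays fixed no matter what the model outputs.
            return w * x

        def train_step(x, y_true):
            # Training: one gradient step on squared error. A deployed
            # LLM does NOT perform this between user queries.
            global w
            y_pred = w * x
            grad = 2 * (y_pred - y_true) * x
            w -= lr * grad

        print(infer(2.0))     # 1.0, and would stay 1.0 forever without training
        train_step(2.0, 4.0)  # one update nudges w toward the target
        print(infer(2.0))     # 3.4, closer to the target output 4.0
        ```

        A brain, by contrast, is adjusting its "weights" continuously while it acts.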