AI researchers at Andon Labs embedded various LLMs in a vacuum robot to test how ready they were to be embodied. And hilarity ensued.
It didn’t channel anyone; it can’t think or learn, it’s just spouting out patterns it recognizes.
Sounds human
This is my reaction to so many low-effort posts by people who presumably are not practitioners, designers, or researchers in AI.
“THEY HALLUCINATE!” So do you. 90% of what you see right now is your visual cortex filling in the gaps around the tiny patch your fovea can actually resolve at any moment.
“IT’S JUST STATISTICS” Correct. Now tell me how neuronal activity differs from this. I’ll wait.
Well yeah, LLMs, unlike bodies, aren’t punished for doing stupid things.
If it spins around in circles arguing with itself, it has still accomplished its purpose: to generate text. But a real body penalizes you for wasting time while looking for food.
Yes they are punished. Have you not heard of backprop?
Sure, but that’s not equivalent to muscular expenditure: every action costs energy, so animals learn to be efficient with their movements and thoughts. Also, an LLM can’t readjust its weights in real time the way a brain can.
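To make the distinction concrete, here’s a minimal sketch (PyTorch assumed; the toy model and data are made up for illustration, not anything from the article): during training the loss gradient really does “punish” the weights, but once deployed the weights are frozen, and nothing the model does changes them.

    import torch
    import torch.nn as nn

    model = nn.Linear(8, 2)  # toy stand-in for an LLM
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    x, y = torch.randn(4, 8), torch.randint(0, 2, (4,))

    # Training: the "punishment" -- backprop turns loss into weight updates.
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()   # compute gradients
    opt.step()        # weights actually change here

    # Deployment: weights are frozen; no gradient, no per-interaction learning.
    model.eval()
    with torch.no_grad():
        out = model(torch.randn(1, 8))  # same weights regardless of "behavior"

So both sides have a point: there is a penalty signal, it just stops once training ends.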