AI researchers at Andon Labs embedded various LLMs in a vacuum robot to test how ready they were to be embodied. And hilarity ensued.
It didn’t channel anyone; it can’t think or learn, it’s just spouting out patterns it recognizes.
Sounds human
Sounding human doesn’t make something human. It sounds human until it doesn’t, because it doesn’t really think. That gap is why we originally developed the Turing test.
This is my reaction to so many low-effort posts by people who presumably aren’t practitioners, designers, or researchers in AI:
“THEY HALLUCINATE!” So do you. Most of what you “see” right now is your visual cortex filling in everything outside the tiny region your fovea can actually resolve at any moment.
“IT’S JUST STATISTICS” Correct. Now tell me how neuronal activity differs from this. I’ll wait.
Humans have moral context, societal context, the ability to check something against the real world, and more. An LLM imagines everything; it has no way to discern fact from fiction.
Well yeah, LLMs, unlike bodies, aren’t punished for doing stupid things.
If it spins around in circles arguing with itself, it has still accomplished its purpose, which is to generate text. But a real body penalizes you for wasting time while looking for food.
Yes they are punished. Have you not heard of backprop?
Sure, but that’s not equivalent to muscular expenditure: every action costs energy, so animals learn to be efficient with their movements and thoughts. Also, an LLM can’t readjust its weights in real time the way a brain can.
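The distinction both sides are circling can be made concrete with a toy sketch (not any real LLM, just a one-weight linear model I made up for illustration): backprop’s “punishment” only happens during training, when the error gradient adjusts the weight. At inference the weight is frozen, so generating nonsense costs the model nothing.

```python
import numpy as np

# Toy model: one weight w, trained to fit y = 3x by gradient descent on MSE.
rng = np.random.default_rng(0)
X = rng.normal(size=100)
y = 3.0 * X  # ground-truth relationship

w = 0.0      # the single trainable weight
lr = 0.1
for _ in range(200):
    pred = w * X
    grad = 2 * np.mean((pred - y) * X)  # d(MSE)/dw
    w -= lr * grad  # training-time "punishment": error reshapes the weight

# At inference the weight is frozen: no gradient step runs, so a bad
# output produces no correction, unlike an animal burning calories.
print(round(w, 2))  # prints 3.0
```

The point of the contrast: the loop above is the only place the model is ever penalized, and it happens offline, whereas an embodied agent pays an energy cost for every action it takes.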





