AbuTahir@lemm.ee to Technology@lemmy.world · English · edited, 10 hours ago
Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. They just memorize patterns really well. (archive.is)
cross-posted to: [email protected]
MangoCats@feddit.it · 13 hours ago
My impression of LLM training and deployment is that it's actually massively parallel in nature - which can be implemented one instruction at a time - but isn't in practice.
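A minimal sketch of that point (not from the original comment; the array sizes and function name are illustrative assumptions): the core LLM operation, matrix multiplication, is inherently parallel, yet the same result can be computed one scalar instruction at a time. In practice frameworks dispatch it as a single parallel kernel across many cores instead.

```python
# Sketch only: the same matrix product computed serially vs. as one vectorized call.
import numpy as np

def matmul_sequential(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Compute a @ b one multiply-accumulate at a time (purely serial)."""
    rows, inner = a.shape
    inner2, cols = b.shape
    assert inner == inner2
    out = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            for k in range(inner):
                out[i, j] += a[i, k] * b[k, j]  # one scalar operation per step
    return out

a = np.random.rand(64, 128)
b = np.random.rand(128, 32)
# a @ b runs as one highly parallel BLAS kernel; the serial loop gives the same answer, far slower.
assert np.allclose(matmul_sequential(a, b), a @ b)
```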