AbuTahir@lemm.ee to Technology@lemmy.world · English · edited 2 months ago
Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. They just memorize patterns really well.
archive.is (external link)
cross-posted to: [email protected]
reksas@sopuli.xyz · 2 months ago
does ANY model reason at all?

4am@lemm.ee · 2 months ago
No, and to make that work using the current structures we use for creating AI models, we'd probably need all the collective computing power on Earth at once.

Miles O'Brien@startrek.website · 2 months ago
… So you're saying there's a chance?

(unattributed reply)
10^36 flops to be exact

Refurbished Refurbisher@lemmy.sdf.org · 2 months ago
That sounds really floppy.

auraithx@lemmy.dbzer0.com · edited 2 months ago
Define reason. Like humans? Of course not. They lack intent, awareness, and grounded meaning. They don't "understand" problems; they generate token sequences.

reksas@sopuli.xyz · 2 months ago
as it is defined in the article

MrLLM@ani.social · 2 months ago
I think I do. Might be an illusion, though.
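For scale, here is a rough back-of-envelope check of the "10^36 flops" figure quoted in the thread. It is a minimal sketch: the aggregate worldwide compute value (~10^21 FLOP/s) is an illustrative order-of-magnitude assumption, not a sourced number.

```python
# Back-of-envelope: how long would 10^36 FLOPs take on all of Earth's compute?
# Both inputs are illustrative assumptions, not sourced figures.

TOTAL_FLOPS_NEEDED = 1e36     # the "10^36 flops" figure from the thread
WORLD_COMPUTE_FLOP_S = 1e21   # assumed aggregate worldwide compute, in FLOP/s

seconds = TOTAL_FLOPS_NEEDED / WORLD_COMPUTE_FLOP_S
years = seconds / (60 * 60 * 24 * 365)

print(f"{seconds:.1e} seconds, roughly {years:.1e} years")
# ~1e15 seconds, i.e. on the order of 3e7 years under these assumptions
```

Under these assumed numbers, even the entire planet's compute working on nothing else would take on the order of tens of millions of years, which is the scale the thread is joking about.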