themachinestops@lemmy.dbzer0.com to Technology@lemmy.world · English · edited 10 days ago
Dell admits consumers don’t care about AI PCs; Dell is now shifting its focus this year away from being ‘all about the AI PC.’ (www.theverge.com)
cross-posted to: [email protected]
RobotToaster@mander.xyz · English · 10 days ago
Do NPUs/TPUs even work with ComfyUI? That’s the only “AI PC” I’m interested in.
L_Acacia@lemmy.ml · English · 10 days ago
Support for custom nodes is bad, and NPUs are fairly slow compared to GPUs (expect 5x to 10x longer generation times than 30xx+ GPUs in best-case scenarios). NPUs are good at running small models efficiently, not large LLM / image models.
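To illustrate the kind of workload NPUs are suited to, here is a minimal sketch of running a small ONNX model through onnxruntime's QNN execution provider (the NPU backend on Snapdragon X machines, shipped in the onnxruntime-qnn package). The model file and input shape are placeholders, not anything from this thread:

```python
# Minimal sketch: run a small ONNX model on an NPU via onnxruntime.
# Assumes an onnxruntime-qnn build; "small_classifier.onnx" is a
# hypothetical small model of the size NPUs handle well.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "small_classifier.onnx",
    providers=["QNNExecutionProvider", "CPUExecutionProvider"],  # CPU fallback
)

input_name = session.get_inputs()[0].name
x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # dummy image-sized input
outputs = session.run(None, {input_name: x})
print(outputs[0].shape)
```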
Fermiverse@gehirneimer.de · 10 days ago
https://github.com/patientx/ComfyUI-Zluda
Works with the 395+
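ZLUDA works by presenting the AMD GPU to PyTorch through the CUDA API, so a quick sanity check after a ComfyUI-Zluda install is the usual torch device query. A sketch (the exact device name reported varies by setup):

```python
# Sanity check after a ComfyUI-Zluda install: ZLUDA shims the CUDA API
# onto the AMD GPU, so PyTorch should report a "CUDA" device.
import torch

print(torch.cuda.is_available())      # True if the ZLUDA shim is picked up
print(torch.cuda.get_device_name(0))  # reports the AMD GPU; name varies by setup

# Small smoke test: a matmul on the shimmed device.
a = torch.randn(1024, 1024, device="cuda")
print((a @ a).sum().item())
```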
SuspciousCarrot78@lemmy.world · English · edited 10 days ago
NPUs yes, TPUs no (or not yet). Rumour has it that Hailo is meant to be releasing a plug-in NPU “soon” that accelerates LLMs.