

I never got the impression that Zitron’s reception here has ever been more than lukewarm, which I think (personal grievances like him being a dick in person aside) is partially because his Mahabharata-length blog posts were posted here even before he emerged as a significant voice in the AI discourse, i.e. back when the tiresome-to-interesting ratio wasn’t all there yet.
That said, your post is both the nittiest of nitpicks and also wrong. “But achktually LLMs aren’t the same as diffusion models and also they can run on low end hardware, after a fashion, not reading any further, zero stars”–are you serious?
The wrong part is that addressing the latter part of your post (i.e. the broader economics issues) is basically Ed Zitron’s entire shtick, which you somehow managed to miss on your way to reminding people that once upon a time someone somewhere managed to complete an inference run on a Raspberry Pi as a proof of concept, when the scale of the issue at hand is more that load-bearing chunks of the US economy are being propped up solely by imaginary hundred-billion-dollar data center construction and Nvidia moving GPUs from one trouser pocket to the other.
Don’t worry about it, managing to run inference on a Raspberry Pi is really cool actually.
Also, it’s true that Zitron is winging it a lot of the time when it comes to technical details, but not in a way that matters for what he has to say, so dismissing him on those grounds seemed deliberately adversarial. Sorry if I got carried away.