They mention “all of your computation as a service” and “ChatGPT to do everything for you.” Several other comments in the thread, the person you’re replying to, and I all read this as a comment about pushing people towards cloud computing. I don’t think that’s an unreasonable read, especially since the major hardware price spike is in RAM, not VRAM or graphics cards, which would be more relevant to self-hosted LLMs.
Further, local hosting of LLMs is already well outside the mindset of any regular user, and will likely never be comparable to cloud LLMs. The idea that they are intentionally driving RAM prices up to hundreds of dollars specifically to box out the self-hosted-LLM Linux nerds who want DRAM-bound models is, if anything, even more absurd.