• Grimy@lemmy.world · 1 day ago

    Maybe this’ll slow down the adoption of self-hosted LLMs

    That is what the tweet is about…?

    The whole point is that LLMs need resources that the current average computer doesn’t have, and they are locking the higher-end computers away behind price hikes.

    They are making it so online LLMs are the only option.

    I didn’t see anything about this being about local LLMs

    What? He talks about selling computation as a service and specifically mentions ChatGPT. It’s not about Stadia.

  • Postimo@lemmy.zip · 1 day ago

      They mention “all of your computation as a service” and “ChatGPT to do everything for you”. Several other comments in the thread, the person you’re replying to, and I all read this as a comment about pushing people towards cloud computing. I don’t think that’s an unreasonable read, especially since the major hardware price spike is in RAM, not VRAM or graphics cards, which would point more toward self-hosted LLMs.

      Further, local hosting of LLMs is already well outside the mindset of any regular user, and will likely never be comparable to cloud LLMs. The idea that they are intentionally driving RAM prices up to hundreds of dollars as a direct way of boxing out the self-hosted-LLM Linux nerds who want DRAM-bound models is possibly even more absurd.