• Korhaka@sopuli.xyz · 18 hours ago

    But you can run models locally too; they’ll need to offer something worth paying for compared to hosting your own.

    • NιƙƙιDιɱҽʂ@lemmy.world · edited · 16 hours ago

      Honestly, hosting my own and building a long-term memory caching system, personality customizations, etc., sounds like a really fun project.
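
      The simplest version of the memory cache I have in mind would just persist past exchanges and surface the most relevant ones again on later queries. A rough sketch (the file name and word-overlap scoring are placeholders; a real version would use embeddings):

      ```python
      import json
      from pathlib import Path

      # Hypothetical sketch of a minimal "long-term memory cache":
      # persist past exchanges to disk, then surface the most relevant
      # ones by naive word overlap. A real system would use embeddings.
      MEMORY_FILE = Path("memory.json")  # placeholder path

      def remember(user_msg: str, reply: str) -> None:
          entries = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []
          entries.append({"user": user_msg, "assistant": reply})
          MEMORY_FILE.write_text(json.dumps(entries))

      def recall(query: str, k: int = 3) -> list[dict]:
          if not MEMORY_FILE.exists():
              return []
          entries = json.loads(MEMORY_FILE.read_text())
          q = set(query.lower().split())
          # Score each stored exchange by how many query words it shares.
          scored = sorted(
              entries,
              key=lambda e: len(q & set(e["user"].lower().split())),
              reverse=True,
          )
          return scored[:k]
      ```

      Recalled exchanges would get prepended to the prompt before each new request, which is what would make the memory “long-term” across sessions.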

      Edit: Is ChatGPT downvoting us? 😂

      • Architeuthis@awful.systems · edited · 5 hours ago

        You’re just in a place where the locals are both not interested in relitigating the shortcomings of local LLMs and tech-savvy enough to know that “long-term memory caching system” is just you saying stuff.

        Hosting your own model and adding personality customizations is just downloading ollama and typing in a prompt that you maybe save as a text file afterwards. Wow, what a fun project.
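
        The entire “project”, sketched against ollama’s chat API (assuming the default localhost:11434 port; the model name and prompt text are placeholders):

        ```python
        import json
        import urllib.request

        # The "personality customization": a system prompt, i.e. the text file.
        SYSTEM_PROMPT = "You are terse and slightly sarcastic."  # placeholder

        payload = json.dumps({
            "model": "llama3.2",  # placeholder: whatever model was pulled
            "stream": False,
            "messages": [
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": "Explain DNS in one sentence."},
            ],
        }).encode()

        req = urllib.request.Request(
            "http://localhost:11434/api/chat",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            print(json.load(resp)["message"]["content"])
        ```

        That’s the whole thing.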

      • self@awful.systems · 13 hours ago

        no, you fuckers wandered into an anti-AI community and started jacking off about local models

        • Korhaka@sopuli.xyz · 11 hours ago

          It’s a factual statement regardless of what you think of AI. People won’t pay for something if the free option that can’t be taken away from them is just as good.

          Maybe that will eventually kill off the big, overvalued companies.

          • self@awful.systems · 11 hours ago

            what the numbers show is that nobody gives a shit. nobody’s paying for LLMs and nobody’s running the models locally either, because none of it has a use case. masturbating in public about how invested you are in your special local model changes none of this.

      • Fedegenerate@lemmynsfw.com · 17 hours ago

        Tonight I installed Open WebUI to see what sort of performance I could get out of it.

        My entire homelab is a single N100 mini, so it was a bit of a squeeze to fit even gemma3n:e2b onto it.

        It did something. Free ChatGPT performs better, as long as I remember to use placeholder variables. At least for my use case: vibe coding compose.yamls and acting as a rubber duck/level-0 tech support for troubleshooting. But it did something. I’ll probably re-test when I upgrade to 32 GB of RAM, then nuke the LXC and wait until I have a beefier host.
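
        For the re-test, a quick tokens-per-second check straight against ollama’s generate API should be enough to compare before and after the RAM upgrade (assumes the default port and that gemma3n:e2b is already pulled; eval_count and eval_duration are the timing fields ollama reports in its final response):

        ```python
        import json
        import urllib.request

        # Minimal sketch: ask the local model for one completion and
        # compute tokens/sec from ollama's reported eval stats.
        payload = json.dumps({
            "model": "gemma3n:e2b",
            "prompt": "Write a docker compose service for nginx.",  # placeholder
            "stream": False,
        }).encode()

        req = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            body = json.load(resp)

        # eval_duration is in nanoseconds
        print(f"{body['eval_count'] / (body['eval_duration'] / 1e9):.1f} tokens/sec")
        ```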

        • self@awful.systems · 13 hours ago

          case in point: you jacked off all night over your local model and still got a disappointing result