• ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP
    4 days ago

    I don’t think Yogthis is using a generative LLM produced by China on their computer.

    Yes, that’s exactly what I’m doing. I run DeepSeek and Qwen models locally using Ollama, and they work great. I also use the full DeepSeek model online. It’s absolutely bizarre that you would make this assumption without even asking. I also run Stable Diffusion models locally using https://github.com/AUTOMATIC1111/stable-diffusion-webui
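For anyone wanting to try the same setup, a minimal sketch of the Ollama workflow (the model tags below are examples; check `ollama list` or the Ollama registry for current ones):

```shell
# Pull a couple of Chinese open-weight models and run them locally.
# Tags are illustrative; they change as new versions are published.
ollama pull deepseek-r1:7b   # DeepSeek R1 distill
ollama pull qwen2.5:7b       # Alibaba's Qwen 2.5

# Interactive chat in the terminal:
ollama run deepseek-r1:7b

# Ollama also serves a local HTTP API on port 11434:
curl http://localhost:11434/api/generate -d '{
  "model": "qwen2.5:7b",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```

The commands assume a local Ollama install with its daemon running; everything stays on your own machine.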

    I looked around; there are two of them, and neither seem to have their source published on an English-speaking website.

    No, you haven’t, because if you had, you’d quickly find plenty of Chinese models that are ready to use in English. I’ve linked a few in this comment: https://lemmygrad.ml/post/8454753/6668311

    You’ve literally done zero investigation before spewing nonsense here. It’s incredible to see such low-effort trolling on here.

    • ferret@fedi.workersofthe.world
      4 days ago

      @yogthos Ollama is, if I recall, an American project sponsored by Meta. And Stable Diffusion, as already mentioned, is American too. So, after your very first post to me here proclaimed that you’re using Chinese software, you now admit that you aren’t.

      In your post there, you have two text-to-image generation models. Again, one of them I’d found (and it’s the one for which I could find nothing but broken links), and one of them I hadn’t.

      (Edit: Two that I had; I see Qwen is the one that Alibaba released, so, my bad on reading comprehension.)

      I’m not lacking in research; you’re lacking in honesty. And you accuse me of low-effort trolling when you’re literally using AI to manufacture replies to me? Dude.

      • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP
        4 days ago

        Ollama is an open source project. The fact that this tech originates in the US does not mean it shouldn’t be used. Most of the software and hardware you use is of US origin in one way or another. The discussion was about the models themselves, which are of course Chinese. You are shamefully ignorant on the subject you’re attempting to discuss here.

        The only one lacking in honesty here is you, bud, and you ain’t fooling anyone here.

        • sudo_halt@lemmygrad.ml
          3 days ago

          On another note, Ollama is really shitty and they’re trying to push their own API format. Please consider using the one true community-driven gigachad llama.cpp (created by Bulgarian developer Georgi Gerganov). KoboldCpp even has a nice web UI for it.
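For reference, a rough sketch of getting started with llama.cpp (build steps reflect the CMake-based workflow in recent versions; the GGUF model path is a placeholder):

```shell
# Build llama.cpp from source.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release

# llama-server exposes an OpenAI-compatible HTTP API on the chosen port.
# Point -m at any GGUF model file you have downloaded.
./build/bin/llama-server -m ./models/model.gguf --port 8080
```

Once the server is up, any client that speaks the OpenAI chat-completions format can talk to http://localhost:8080.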