Office Space meme:

“If y’all could stop calling an LLM ‘open source’ just because they published the weights… that would be great.”

  • Pennomi@lemmy.world · 2 days ago

    It’s just AI haters trying to find any way to disparage AI. They’re trying to be “holier than thou”.

    The model weights are data, not code. It’s perfectly fine to call it open source even though you don’t have the means to reproduce the data from scratch. You are allowed to modify it and distribute said modifications, so it’s functionally free (as in freedom) anyway.

    • Prunebutt@slrpnk.netOP · 2 days ago

      Let’s transfer your bullshirt take to the kernel, shall we?

      The kernel is instructions, not code. It’s perfectly fine to call it open source even though you don’t have the code to reproduce the kernel from scratch. You are allowed to modify it and distribute said modifications, so it’s functionally free (as in freedom) anyway.

      🤡

      Edit: It’s more that so-called “AI” stakeholders want to launder its reputation with the “open source” label.

    • WraithGear@lemmy.world · 2 days ago

      Right. You could train it yourself too, though its scope would be limited based on capability. But that’s not necessarily a bad thing. Taking a class? Feed it your textbook, or other available sources, and it can help you on that subject. Just because it’s hard doesn’t mean it’s not open.

      • Ajen@sh.itjust.works · 2 days ago

        The weights aren’t the source, they’re the output. Modifying the weights is analogous to editing a compiled binary, and the training dataset is analogous to source code.
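
        For illustration, a minimal PyTorch sketch of that analogy, using a toy model and made-up filenames rather than anything DeepSeek actually ships: the dataset goes in, a weights file comes out, much like source code in, compiled binary out.

        ```python
        # Toy illustration: a dataset goes in, a weights file comes out.
        # The model, data, and filename are made up for the example.
        import torch
        import torch.nn as nn

        # The "source": a tiny synthetic dataset of (x, y) pairs.
        x = torch.randn(100, 4)
        y = torch.randn(100, 1)

        # A trivial model standing in for an LLM.
        model = nn.Linear(4, 1)
        optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
        loss_fn = nn.MSELoss()

        # The "compile step": training turns the dataset into weights.
        for _ in range(200):
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()

        # The "binary": the artifact that gets published is just the weights.
        torch.save(model.state_dict(), "weights.pt")
        ```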

        • WraithGear@lemmy.world · 2 days ago

          Are you talking about source as in source code? Or are you talking about source as in the data the LLM uses? Because the source code is available. The weights are not the output, they are a function. The LLM response is the output.

          But the weights can be changed, and the input data can be changed. And if they are… it’s still deepseek. And if you can change them, they are not what makes deepseek, deepseek.

          I use boot.dev; it has an AI. But they changed the dataset to only cover relevant topics, changed its weights, and gave it tone instructions. And while it plays a character, it’s still ChatGPT.

          • Ajen@sh.itjust.works · 2 days ago

            I used the word “source” a couple times in that post… The first time was in a general sense, as an input to generate an output. The training data is the source, the model is the “function” (using the mathematics definition here, NOT the computer science definition!), and the weights are the output. The second use was “source code.”

            Weights can be changed just like a compiled binary can be changed. Closed source software can be modified without having access to the source code.
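
            A rough sketch of that point, reusing the toy weights file from the sketch above (all names are made up): you can load published weights, patch them directly, and redistribute the result without ever seeing the training data, much like hex-editing a binary.

            ```python
            # Patch published weights directly; no training data needed.
            # Toy filenames, not any real release's format.
            import torch

            state_dict = torch.load("weights.pt")         # the published artifact
            state_dict["weight"] *= 0.5                   # an arbitrary direct edit
            torch.save(state_dict, "weights_patched.pt")  # ship the modified version
            ```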

            • WraithGear@lemmy.world · 2 days ago

              The LLM is a machine that, when simplified down, takes two inputs: a data set and weight variables. These two inputs are not the focus of the software; as long as the structure is valid, the machine will give an output. The input is not the machine, and the machine’s source code is open source. The machine IS what is revolutionary about this LLM. It’s not being praised because its weights are fine-tuned, and it didn’t sink Nvidia’s stock price by 700 billion because it has extra special training data. It’s special because of its optimizations, and its novel method of using two halves to bounce ideas back and forth and to evaluate its answers. It’s the methodology of its function. And that is given to you openly; you can see its source code.

              • Ajen@sh.itjust.works · 2 days ago

                I don’t know what, if any, CS background you have, but that is way off. The training dataset is used to generate the weights, or the trained model. In the context of building a trained LLM model, the input is the dataset and the output is the trained model, or weights.

                It’s more appropriate to call deepseek “open-weight” rather than open-source.
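
                In practice, “open-weight” looks something like the sketch below, assuming the Hugging Face transformers API; the model id is only a placeholder. You can download and run the published weights, but nothing here lets you regenerate them from scratch.

                ```python
                # "Open-weight": fetch and run published weights.
                # Rebuilding them would need the data and the
                # training pipeline, which are not in the release.
                # The model id below is only a placeholder.
                from transformers import AutoModelForCausalLM, AutoTokenizer

                name = "deepseek-ai/deepseek-llm-7b-base"  # placeholder id
                tok = AutoTokenizer.from_pretrained(name)
                model = AutoModelForCausalLM.from_pretrained(name)

                inputs = tok("Hello", return_tensors="pt")
                out = model.generate(**inputs, max_new_tokens=20)
                print(tok.decode(out[0]))
                ```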

          • Fushuan [he/him]@lemm.ee · 2 days ago

            What most people understand as deepseek is the app that uses their trained model, not the running or training engines.

            This post mentions open source, not open source code; big distinction. The source of a trained model is partly the training engine and, to a much bigger part, the input data. We only have access to a fraction of that “source”. So the service isn’t open source.

            Just to make clear, no LLM service is open source currently.

      • Prunebutt@slrpnk.netOP · 2 days ago

        “You could train it yourself too.”

        How, without information on the dataset and the training code?

        • Pennomi@lemmy.world · 2 days ago

          Training code created by the community always pops up shortly after release. It has happened for every major model so far. Additionally, you have never needed the original training dataset to continue training a model.
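
          A sketch of what continuing training without the original dataset can look like, reusing the same toy model and made-up filenames from the earlier sketches: you start from the published weights and train on whatever new data you have.

          ```python
          # Fine-tune published weights on new data; the original
          # training corpus never enters the picture. Toy example.
          import torch
          import torch.nn as nn

          model = nn.Linear(4, 1)
          model.load_state_dict(torch.load("weights.pt"))  # published weights

          # Your own data, unrelated to the original training set.
          new_x = torch.randn(50, 4)
          new_y = torch.randn(50, 1)

          optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
          loss_fn = nn.MSELoss()

          for _ in range(100):
              optimizer.zero_grad()
              loss = loss_fn(model(new_x), new_y)
              loss.backward()
              optimizer.step()

          torch.save(model.state_dict(), "weights_finetuned.pt")
          ```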

          • Prunebutt@slrpnk.netOP · 2 days ago

            So, Ocarina of Time is considered open source now, since it’s been decompiled by the community, or what?

            Community effort and the ability to build on top of stuff doesn’t make anything open source.

            Also: initial training data is important.

        • WraithGear@lemmy.world · 2 days ago

          So I am learning as much as I can here, so bear with me. But it accepts tokenized data and structures it via a transformer as a JSON file or some such. The weights are a separate binary file that is used to, well, modify the tokenized data to generate outcomes. As long as you used a compatible tokenization structure and weights structure, you could create a new training set. But that can be done with any LLM. You can’t pull the data from this, just as you can’t make wheat by dissecting bread. But they provide the tools to set your own data, and the way the LLM handles that data is novel, due to being hamstrung by US sanctions. “Necessity is the mother of invention” and all that. Running comparable AIs on inferior hardware and a much smaller budget is what makes this one stand out, not the training data.

    • General_Effort@lemmy.world · 2 days ago

      Another theory is that it’s the copyright industry at work. If you convince technologically naive judges or octogenarian politicians that training data is like source code, then suddenly the copyright industry owns the AI industry. Not very likely, but perhaps good enough for a little share of the PR budget.