Office space meme:

“If y’all could stop calling an LLM ‘open source’ just because they published the weights… that would be great.”

  • WraithGear@lemmy.world · 2 days ago

    The LLM is a machine that, simplified down, takes two inputs: a dataset and weight variables. These two inputs are not the focus of the software; as long as the structure is valid, the machine will give an output. The input is not the machine, and the machine’s source code is open source. The machine IS what is revolutionary about this LLM. It’s not being praised because its weights are fine-tuned, and it didn’t sink Nvidia’s stock price by $700 billion because it has extra-special training data. It’s special because of its optimizations and its novel method of using two halves to bounce ideas back and forth and to evaluate its answers. It’s the methodology of its function, and that is given to you openly in its source code.

    • Ajen@sh.itjust.works · 2 days ago

      I don’t know what, if any, CS background you have, but that is way off. The training dataset is used to generate the weights, i.e., the trained model. In the context of building a trained LLM, the input is the dataset and the output is the trained model, or weights.
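
The relationship described above can be shown with a toy sketch (all names and numbers here are hypothetical, not anything from DeepSeek): training consumes a dataset and produces weights, so the weights are an output of the process, not an input to it.

```python
# Toy illustration: training takes a dataset as INPUT and yields
# weights as OUTPUT. Here we fit y = w * x with plain gradient descent.

def train(dataset, lr=0.1, epochs=200):
    """Fit a one-parameter model y = w * x to (x, y) pairs."""
    w = 0.0  # the weight starts arbitrary; the dataset shapes it
    for _ in range(epochs):
        # mean-squared-error gradient over the whole dataset
        grad = sum(2 * x * (w * x - y) for x, y in dataset) / len(dataset)
        w -= lr * grad
    return w  # the "trained model" is just this learned weight

dataset = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # samples of y = 2x
weights = train(dataset)
print(round(weights, 3))  # converges to roughly 2.0
```

Publishing `weights` alone is like shipping a compiled binary: others can run it, but they cannot reproduce it without the dataset and training code.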

      It’s more appropriate to call DeepSeek “open-weight” rather than “open-source.”