Office space meme:

“If y’all could stop calling an LLM “open source” just because they published the weights… that would be great.”

  • maplebar@lemmy.world · 8 hours ago

    Yeah, this shit drives me crazy. Putting aside the fact that it all runs off stolen data from regular people who are being exploited, most of this “AI” shit is basically just freeware, if anything; it’s about as “open source” as Winamp was back in the day.

    • Prunebutt@slrpnk.net (OP) · 8 hours ago

      I’m including Facebook’s LLM in my critique. And I dislike the current hype on LLMs, no matter where they’re developed.

      And LLMs are not “AI”. I’ve been calling them “so-called ‘AIs’” since waaay before this hype.

  • Jocker@sh.itjust.works · 15 hours ago

    Even worse is calling a proprietary, absolutely closed-source, closed-data and closed-weight company “OpenAI”

  • surph_ninja@lemmy.world · 14 hours ago

    Judging by OP’s salt in the comments, I’m guessing they might be an Nvidia investor. My condolences.

  • Xerxos@lemmy.ml · 14 hours ago

    The training data would be incredibly big. And it would contain copyrighted material (which is completely okay in my opinion, but might invite criticism). Hell, it might even be illegal to publish the training data containing that copyrighted material.

    They published the weights AND their training methods which is about as open as it gets.

    • Prunebutt@slrpnk.net (OP) · 16 hours ago

      They could disclose how they sourced the training data, what the training data is and how you could source it. Also, did they publish their hyperparameters?

      They could just not call it Open Source if they can’t open source it.

      • Naia@lemmy.blahaj.zone · 15 hours ago

        For neural nets the method matters more. Data would be useful, but at the scale these things get trained on, the specific data matters little.

        They can be trained on anything, and a diverse enough data set would end up making it function more or less the same as a different but equally diverse set. Assuming publicly available data is in the set, there would also be overlap.

        The training data is also by necessity going to be orders of magnitude larger than the model itself. Sharing becomes impractical at a certain point before you even factor in other issues.
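The size gap being described can be put in rough numbers. Here is a back-of-the-envelope sketch in Python; the parameter count, byte widths, and token count are all illustrative assumptions, not figures for any real model:

```python
# Back-of-the-envelope: how much bigger is a training corpus than the model
# trained on it? All numbers below are illustrative assumptions.

PARAMS = 70e9          # assumed parameter count (a 70B-class model)
BYTES_PER_PARAM = 2    # fp16/bf16 weights

TOKENS = 15e12         # assumed training-corpus size in tokens
BYTES_PER_TOKEN = 4    # rough average bytes of raw text per token

model_bytes = PARAMS * BYTES_PER_PARAM   # ~140 GB of weights
data_bytes = TOKENS * BYTES_PER_TOKEN    # ~60 TB of raw text

ratio = data_bytes / model_bytes
print(f"model: {model_bytes / 1e9:.0f} GB, "
      f"data: {data_bytes / 1e12:.0f} TB, "
      f"data is ~{ratio:.0f}x larger")
```

Under these assumed numbers the corpus comes out hundreds of times larger than the weights, which is the "orders of magnitude" point above.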

        • Poik@pawb.social · 11 hours ago

          That… doesn’t align with years of research. Data is king. As someone who specifically studies long-tail distributions and few-shot learning (before succumbing to long COVID, so sorry if my response is a bit scattered): throwing more data at a problem always improves it more than the method does, and the method can only be simplified with more data. The exceptions are some neat tricks that modern deep learning has dismissed as “classical” hogwash, at least, and most of those don’t scale to the sizes being looked at here.

          Also, datasets inherently impose bias upon networks, and it’s easier to create adversarial examples that fool two networks trained on the same data than to fool the same network freshly trained twice on different data.

          Sharing metadata and acquisition methods is important and should be the gold standard. Sharing network methods is also important, but that’s kind of the silver standard just because most modern state of the art models differ so minutely from each other in performance nowadays.

          Open source as a term should require both. This was the standard in the academic community before tech bros started running their mouths, and should be the standard once they leave us alone.

    • rumba@lemmy.zip · 15 hours ago

      Hell, for all we know it could be full of classified data. I guess depending on what country you’re in it definitely is full of classified data…

  • Dkarma@lemmy.world · 15 hours ago

    I mean, that’s all a model is, so… Once again someone who doesn’t understand anything about training or models is posting borderline misinformation about AI.

    Shocker

    • FooBarrington@lemmy.world · 14 hours ago

      A model is an artifact, not the source. We also don’t call binaries “open-source”, even though they are literally the code that’s executed. Why should these phrases suddenly get turned upside down for AI models?

    • intensely_human@lemm.ee · 14 hours ago

      A model can be represented only by its weights in the same way that a codebase can be represented only by its binary.

      Training data is a closer analogue of source code than weights.
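The weights-as-binary analogy above can be made concrete with a toy example: the training data plus the training loop play the role of source code and compiler, and the learned weight is the shipped artifact. Everything below (the y = 2x data, the one-parameter model, the learning rate) is made up purely for illustration:

```python
# Toy illustration of the analogy: (training data + training code) -> weights,
# just as (source code + compiler) -> binary.

data = [(x, 2.0 * x) for x in range(1, 6)]  # the "source": pairs following y = 2x

w = 0.0    # one-parameter model: y_hat = w * x
lr = 0.01  # learning rate

# The "compilation" step: gradient descent on squared error.
for _ in range(200):
    for x, y in data:
        grad = 2 * (w * x - y) * x
        w -= lr * grad

# Publishing only `w` is like publishing only the binary: you can run it,
# but you cannot reproduce or audit it without the data and the loop.
print(round(w, 3))  # converges to 2.0
```

Someone handed only the final `w` can evaluate `w * x`, but without the data and the training loop they cannot retrain, inspect, or meaningfully modify it; that is the sense in which weights alone are not “source”.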

    • Prunebutt@slrpnk.net (OP) · 15 hours ago

      Yet another so-called AI evangelist accusing others of not understanding computer science if they don’t want to worship their machine god.

        • Prunebutt@slrpnk.net (OP) · 13 hours ago

          It’s not like you need specific knowledge of Transformer models and whatnot to counterargue LLM bandwagon simps. A basic knowledge of Machine Learning is fine.

              • surph_ninja@lemmy.world · 12 hours ago

                I mean, if you both think this is overhyped nonsense, then by all means short some Nvidia stock. If you know something the hedge fund teams don’t, why not sell your insider knowledge and become rich?

                Or maybe you guys don’t understand it as well as you think. Could be either, I guess.

                • Prunebutt@slrpnk.net (OP) · 9 hours ago

                  Yeah, let’s all base our decisions and definitions on what the stock market dictates. What could possibly go wrong?

                  /s 🙄

                • Poik@pawb.social · 11 hours ago

                  Because over-hyped nonsense is what the stock market craves… That’s how this works. That’s how all of this works.

                • FooBarrington@lemmy.world · 12 hours ago

                  I didn’t say it is all overhyped nonsense; my only point is that I agree with the opinion stated in the meme, and I don’t think people who disagree really understand AI models or what “open source” means.

                • Fungah@lemmy.world · 12 hours ago

                  I have spent a very considerable amount of time tinkering with AI models of all sorts.

                  Personally, I don’t know shit. I learned about… zero entropy loss functions (?) the other day. That was interesting. I don’t know a lick of calculus and was able to grok what was going on thanks to a very excellent YouTube video. Anyway, I guess my point is that suddenly everyone is an expert.

                  I’m not. But I think it’s neat.

                  Like. I’ve spent hundreds or possibly thousands of hours learning as much as I can about AI of all sorts (as a hobby) and I still don’t know shit. I trained a GAN once. On Reddit porn. Terrible results. Great learning.

                  It’s a cool state to be in ’cuz there’s so much out there to learn about.

                  I’m not entirely sure what my point is here beyond the fact that most people I’ve seen grandstanding about this stuff online tend to get schooled by an actual expert.

                  I love it when that happens.

  • Ugurcan@lemmy.world · 17 hours ago

    There are lots of problems with the new lingo. We need to come up with new words.

    How about “Open Weightings”?

  • LovableSidekick@lemmy.world · 11 hours ago

    Or like a human, who learned from all the previous people’s examples without paying them, aka normal life.

  • acargitz@lemmy.ca · 18 hours ago

    Arguably they are a new type of software, which is why the old categories do not align perfectly. Instead of arguing over how to best gatekeep the old name, we need a new classification system.

    • Poik@pawb.social · 11 hours ago

      … Statistical engines are older than personal computers; the first statistical package was developed in 1957. And AI professionals would have called them trained models. The interpreter is code; the weights are not. We have had terms for these things for ages.

    • Prunebutt@slrpnk.net (OP) · 16 hours ago

      There were efforts. Facebook didn’t like those (since their models wouldn’t be considered open source anymore).

        • Aqarius@lemmy.world · 11 hours ago

          Well, yes, but usually it’s the code that’s the main deal, and the part that’s open, and the data is what you do with it. Here, the training weights seem to be “it”, so to speak.

    • Preflight_Tomato@lemm.ee · 10 hours ago

      Yes please, let’s use this term, and reserve Open Source for its existing definition in the academic ML setting of weights, methods, and training data. These models don’t readily fit into existing terminology for structural and logistical reasons, but when someone says “it’s got open weights” I know exactly what set of licenses and implications it may have without further explanation.

  • Azenis@lemmy.world · 17 hours ago

    Open-source software will eventually surpass all closed-source software some day, no matter how many billions of dollars are invested in the latter.

    • Maalus@lemmy.world · 17 hours ago

      Never have I used open source software that has achieved that, or was even close to achieving it. Usually it is opinionated (you need to do it this way in this exact order, because that’s how we coded it. No, you cannot do the same thing but select from the back), lacks features and breaks. Especially CAD - comparing Solidworks to FreeCAD for instance, where in FreeCAD any change to previous ops just breaks everything. Modelling software too - Blender compared to 3ds Max - can’t do half the things.

      • jj4211@lemmy.world · 14 hours ago
        • 7-zip
        • VLC
        • OBS
        • Firefox did it, only to mostly falter to Chrome; but Chrome is largely Chromium, which is open source.
        • Linux (superseded all the Unix, very severely curtailed Windows Server market)
        • Nearly all programming language tools (IDEs, Compilers, Interpreters)
        • Essentially the whole command-line ecosystem (obviously on the *nix side, but MS was pretty much compelled to open source PowerShell and their new Terminal to try to compete)

        In some contexts you aren’t going to have a lively enough community to drive a compelling product, even when there’s enough revenue for a company to make a go of it, but to say ‘no open source software has achieved that’ is a bit much.

      • Test_Tickles@lemmy.world · 14 hours ago

        While I completely agree with 90% of your comment, that first sentence is gross hyperbole. I have used a number of open source options that are clearly better. 7zip is a perfect example. For over a decade it was vastly superior to anything else, open or closed. Even now it may be showing its age a bit, but it is still one of the best options.
        But for the rest of your statement, I completely agree. And yes, CAD is a perfect example of the problems faced by open source. I made the mistake of thinking that I should start learning CAD with open source, and then I wouldn’t have to worry about getting locked into any of the closed-source solutions. But FreeCAD is such a mess. I admit it has gotten drastically better over the last few years, but it still has serious issues. Don’t get me wrong, I still 100% recommend that people learn it, but I push them towards a number of closed-source options to start with. FreeCAD is for advanced users only.

  • KillingTimeItself@lemmy.dbzer0.com · 1 day ago

    I mean, if it’s not directly factually inaccurate, then it is open source. It’s just that the specific block of data they used and operate on isn’t published or released, which is pretty common even among open source projects.

    AI just happens to be in a fairly unique spot where that thing is actually, like, pretty important. Though nothing stops other groups from creating an openly accessible one through something like distributed computing, which seems to be having a new-kid-on-the-block moment in AI right now.

    • Fushuan [he/him]@lemm.ee · 18 hours ago

      The running engine and the training engine are open source. The service that uses the model trained with the open source engine and runs it with the open source runner is not, because a biiiig part of what makes AI work is the trained model, and a big part of the source of a trained model is the training data.

      When they say open source, 99.99% of the people will understand that everything is verifiable, and it just is not. This is misleading.

      As others have stated, a big part of open source development is providing everything so that other users can get the exact same results. This has always been the case in open source ML development: people do provide links to their training data for reproducibility. This has been the case with most of the papers on natural language processing (the overarching field LLMs fall under) I have read in the past. Both code and training data are provided.

      Example in the computer vision world: darknet and yolo: https://github.com/AlexeyAB/darknet

      This is the repo with the code to train and run the darknet models, and then they provide pretrained models, called yolo. They also provide links to the original dataset where the yolo models were trained. THIS is open source.

    • FooBarrington@lemmy.world · 19 hours ago

      But it is factually inaccurate. We don’t call binaries open-source, we don’t even call visible-source open-source. An AI model is an artifact just like a binary is.

      An “open-source” project that doesn’t publish everything needed to rebuild isn’t open-source.

    • Miaou@jlai.lu · 18 hours ago

      Is it common? Many fields have standard, open datasets. That’s not the case here, and this data is the most important part of training an LLM.

  • thespcicifcocean@lemmy.world · 18 hours ago

    It’s not just the weights, though, is it? You can download the training data they used, and run your own instance of the model completely separate from their servers.