Office space meme:
“If y’all could stop calling an LLM “open source” just because they published the weights… that would be great.”
Yeah, this shit drives me crazy. Putting aside the fact that it all runs off stolen data from regular people who are being exploited, most of this “AI” shit is basically just freeware, if anything; it’s about as “open source” as Winamp was back in the day.
I like how when America does it we call it AI, and when China does it it’s just an LLM!
I’m including Facebook’s LLM in my critique. And I dislike the current hype on LLMs, no matter where they’re developed.
And LLMs are not “AI”. I’ve called them “so-called ‘AIs’” waaay before.
Even worse is calling a proprietary, absolutely closed-source, closed-data and closed-weight company “OpenAI”.
Especially after it was founded as a nonprofit with the mission to push open source AI as far and wide as possible, ensuring a multipolar AI ecosystem in which AIs would keep each other in check and stay respectful and prosocial.
Sorry, that was a PR move from the get-go. Sam Altman doesn’t have an altruistic cell in his whole body.
It’s even crazier that Sam Altman and other ML devs said years ago that they had reached the peak of what current machine learning models are capable of.
But that doesn’t mean shit to the marketing departments
“Look at this shiny.”
Investment goes up.
“Same shiny, but look at it and we need to warn you that we’re developing a shinier one that could harm everyone. But think of how shiny.”
Investment goes up.
“Look at this shiny.”
Investment goes up.
“Same shiny, but look at it and we need to warn you that we’re developing a shinier one that could harm everyone. But think of how shiny.”
American magic bean companies like Beanco, The Boston Bean Company, and Nvidia
Lol
Judging by OP’s salt in the comments, I’m guessing they might be an Nvidia investor. My condolences.
Nah, just a 21st century Luddite.
The training data would be incredibly big. And it would contain copyright-protected material (which is completely okay in my opinion, but might invite criticism). Hell, it might even be illegal to publish the training data with the copyright-protected material.
They published the weights AND their training methods which is about as open as it gets.
They could disclose how they sourced the training data, what the training data is and how you could source it. Also, did they publish their hyperparameters?
They could just not call it Open Source, if you can’t open source it.
For neural nets the method matters more. Data would be useful, but at the scale these things are trained on, the specific data matters little.
They can be trained on anything, and a diverse enough data set would end up making it function more or less the same as a different but equally diverse set. Assuming publicly available data is in the set, there would also be overlap.
The training data is also by necessity going to be orders of magnitude larger than the model itself. Sharing becomes impractical at a certain point before you even factor in other issues.
That… doesn’t align with years of research. Data is king. As someone who specifically studies long-tail distributions and few-shot learning (before succumbing to long COVID, sorry if my response is a bit scattered), throwing more data at a problem always improves it more than the method does. And the method can be simplified only with more data. Outside of some neat tricks that modern deep learning has decided are hogwash and “classical” at least, but most of those don’t scale enough for what is being looked at.
Also, datasets inherently impose bias upon networks, and it’s easier to create adversarial examples that fool two networks trained on the same data than the same network twice freshly trained on different data.
Sharing metadata and acquisition methods is important and should be the gold standard. Sharing network methods is also important, but that’s kind of the silver standard just because most modern state of the art models differ so minutely from each other in performance nowadays.
Open source as a term should require both. This was the standard in the academic community before tech bros started running their mouths, and should be the standard once they leave us alone.
Hell, for all we know it could be full of classified data. I guess depending on what country you’re in it definitely is full of classified data…
I mean, that’s all a model is, so… Once again someone who doesn’t understand anything about training or models is posting borderline misinformation about AI.
Shocker
A model is an artifact, not the source. We also don’t call binaries “open-source”, even though they are literally the code that’s executed. Why should these phrases suddenly get turned upside down for AI models?
A model can be represented only by its weights in the same way that a codebase can be represented only by its binary.
Training data is a closer analogue of source code than weights.
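To make the binary analogy concrete, here’s a toy sketch (all numbers invented) of why shipping weights alone is like shipping a compiled binary: anyone can *run* it, but nobody can *rebuild* it without the data and training procedure that produced it.

```python
# Toy illustration: inference needs only the weights (the artifact),
# just as running a program needs only the binary. Reproducing the
# weights would require the training data and procedure -- the "source".

# Weights as shipped in a release (invented numbers; think of them as
# the downloaded checkpoint).
WEIGHTS = [0.8, -0.3]
BIAS = 0.1

def predict(features):
    """Inference: a dot product plus bias. No training data needed."""
    score = sum(w * x for w, x in zip(WEIGHTS, features)) + BIAS
    return 1 if score > 0 else 0

# Anyone can run the model with the weights alone...
print(predict([1.0, 0.5]))   # prints 1

# ...but the data and training loop that produced WEIGHTS were never
# published, so the "source" is effectively closed: you can use it,
# you just can't reproduce or meaningfully audit it.
```

The same holds for an LLM checkpoint, only with billions of weights instead of two.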
Yet another so-called AI evangelist accusing others of not understanding computer science if they don’t want to worship their machine god.
Praise the Omnissiah! … I’ll see myself out.
Do you think your comments here are implying an understanding of the tech?
It’s not like you need specific knowledge of Transformer models and whatnot to counterargue LLM bandwagon simps. A basic knowledge of Machine Learning is fine.
And you believe you’re portraying that level of competence in these comments?
I at least do.
I mean if you both think this is overhyped nonsense, then by all means buy some Nvidia stock. If you know something the hedge fund teams don’t, why not sell your insider knowledge and become rich?
Or maybe you guys don’t understand it as well as you think. Could be either, I guess.
Yeah, let’s all base our decisions and definitions on what the stock market dictates. What could possibly go wrong?
/s 🙄
Because over-hyped nonsense is what the stock market craves… That’s how this works. That’s how all of this works.
I didn’t say it is all overhyped nonsense, my only point is that I agree with the opinion stated in the meme, and I don’t think people who disagree really understand AI models or what “open source” means.
I have spent a very considerable amount of time tinkering with AI models of all sorts.
Personally, I don’t know shit. I learned about… cross-entropy loss functions (?) the other day. That was interesting. I don’t know a lick of calculus and was still able to grok what was going on thanks to a very excellent YouTube video. Anyway, I guess my point is that suddenly everyone is an expert.
I’m not. But I think it’s neat.
Like, I’ve spent hundreds or possibly thousands of hours learning as much as I can about AI of all sorts (as a hobby) and I still don’t know shit. I trained a GAN once. On reddit porn. Terrible results. Great learning.
It’s a cool state to be in cuz there’s so much out there to learn about.
I’m not entirely sure what my point is here beyond the fact that most people I’ve seen grandstanding about this stuff online tend to get schooled by an actual expert.
I love it when that happens.
There are lots of problems with the new lingo. We need to come up with new words.
How about “Open Weightings”?
Weights available?
That’s fat shaming
That sounds like a segment on “My 600lb Life”
Or like a human learning from all the previous people’s examples without paying them, aka normal life.
Would you accept a Smalltalk image as Open Source?
Arguably they are a new type of software, which is why the old categories do not align perfectly. Instead of arguing over how to best gatekeep the old name, we need a new classification system.
… Statistical engines are older than personal computers, with the first statistical package developed in 1957. And AI professionals would have called them trained models. The interpreter is code, the weights are not. We have had terms for these things for ages.
There were efforts. Facebook didn’t like those. (Since their models wouldn’t be considered open source anymore.)
I don’t care what Facebook likes or doesn’t like. The OSS community is us.
Is it even really software, or just a datablob with a dedicated interpreter?
Isn’t all software just data plus algorithms?
Well, yes, but usually it’s the code that’s the main deal, and the part that’s open, and the data is what you do with it. Here, the training weights seem to be “it”, so to speak.
Meta’s “open source AI” ad campaign is so frustrating.
Open weights
Yes please, let’s use this term, and reserve Open Source for its existing definition in the academic ML setting: weights, methods, and training data. These models don’t readily fit into existing terminology for structural and logistical reasons, but when someone says “it’s got open weights” I know exactly what set of licenses and implications it may have without further explanation.
Open source will eventually surpass all closed-source software some day, no matter how many billions of dollars are invested in the latter.
Just look at Blender vs Maya for example.
Never have I used open source software that has achieved that, or was even close to achieving it. Usually it is opinionated (you need to do it this way in this exact order, because that’s how we coded it. No, you cannot do the same thing but select from the back), lacks features and breaks. Especially CAD - comparing Solidworks to FreeCAD for instance, where in FreeCAD any change to previous ops just breaks everything. Modelling software too - Blender compared to 3ds Max - can’t do half the things.
- 7-zip
- VLC
- OBS
- Firefox did it only to mostly falter to Chrome but Chrome is largely Chromium which is open source.
- Linux (superseded all the Unix, very severely curtailed Windows Server market)
- Nearly all programming language tools (IDEs, Compilers, Interpreters)
- Essentially all command line ecosystem (obviously on the *nix side, but MS was pretty much compelled to open source Powershell and their new Terminal to try to compete)
In some contexts you aren’t going to have a lively enough community to drive a compelling product even when there’s enough revenue for a company to make a go of it, but to say ‘no open source software has achieved that’ is a bit much.
While I completely agree with 90% of your comment, that first sentence is gross hyperbole. I have used a number of open source options that are clearly better. 7-zip is a perfect example. For over a decade it was vastly superior to anything else, open or closed. Even now it may be showing its age a bit, but it is still one of the best options.
But for the rest of your statement, I completely agree. And yes, CAD is a perfect example of the problems faced by open source. I made the mistake of thinking that I should start learning CAD with open source and then I wouldn’t have to worry about getting locked into any of the closed source solutions. But FreeCAD is such a mess. I admit it has gotten drastically better over the last few years, but it still has serious issues. Don’t get me wrong, I still 100% recommend that people learn it, but I push them towards a number of closed source options to start with. FreeCAD is for advanced users only.
I reckon C++ > Delphi
I mean, if it’s not directly factually inaccurate, then it is open source. It’s just that the specific block of data they used and operate on isn’t published or released, which is pretty common even among open source projects.
AI just happens to be in a fairly unique spot where that thing actually is, like, pretty important. Though nothing stops other groups from creating an openly accessible one through something like distributed computing. Which seems to be having a fancy new-kid-on-the-block moment in AI right now.
The running engine and the training engine are open source. The service that uses the model trained with the open source engine and runs it with the open source runner is not, because a biiiig big part of what makes AI work is the trained model, and a big part of the source of a trained model is training data.
When they say open source, 99.99% of the people will understand that everything is verifiable, and it just is not. This is misleading.
As others have stated, a big part of open source development is providing everything so that other users can get the exact same results. This has always been the case in open source ML development: people do provide links to their training data for reproducibility. This has been the case with most of the papers on natural language processing (the overarching field LLMs belong to) I have read in the past. Both code and training data are provided.
Example in the computer vision world, darknet and YOLO: https://github.com/AlexeyAB/darknet
This is the repo with the code to train and run the darknet models, and then they provide pretrained models, called YOLO. They also provide links to the original dataset the YOLO models were trained on. THIS is open source.
But it is factually inaccurate. We don’t call binaries open-source, we don’t even call visible-source open-source. An AI model is an artifact just like a binary is.
An “open-source” project that doesn’t publish everything needed to rebuild isn’t open-source.
Is it common? Many fields have standard, open datasets. That’s not the case here, and this data is the most important part of training an LLM.
That “specific block of data” is more than 99% of such a project. Hardly insignificant.
It’s not just the weights though is it? You can download the training data they used, and run your own instance of the model completely separate from their servers.
You don’t download the training data when running an LLM locally. You are downloading the already baked model.
Did “they” publish the training data? And the hyperparameters?
I mean, I downloaded it from the repo.
You downloaded the weights. That’s something different.
I may misunderstand, but are the weights typically several hundred gigabytes large?
Yes. The training data is probably a few hundred petabytes.
Oh wow that’s fuckin huge
Yeah, some models are trained on pretty much the entire content of the publicly accessible Internet.
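Some rough back-of-envelope arithmetic (all figures are illustrative assumptions, not published numbers for any specific model) shows why the weights fit on a disk while the training data never could:

```python
# Back-of-envelope sizes (illustrative assumptions, not official figures).

params = 70e9            # assume a 70-billion-parameter model
bytes_per_param = 2      # stored as 16-bit floats
weights_gb = params * bytes_per_param / 1e9
print(f"weights: ~{weights_gb:.0f} GB")         # ~140 GB: "hundreds of GB" territory

tokens = 15e12           # assume a 15-trillion-token filtered training corpus
bytes_per_token = 4      # rough average for tokenized text
corpus_tb = tokens * bytes_per_token / 1e12
print(f"filtered corpus: ~{corpus_tb:.0f} TB")  # tens of terabytes

# The raw web crawls such a corpus is filtered down from are larger still
# (cumulatively at petabyte scale), so "orders of magnitude bigger than
# the model" holds at every stage of the pipeline.
print(f"corpus/weights ratio: ~{corpus_tb * 1e12 / (weights_gb * 1e9):.0f}x")
```

So even before licensing questions, just hosting the data is a very different logistical problem from hosting the weights.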