Quite the opposite: I recognize there’s a difference, and it horrifies me that corporations spin AI as something you – “you” meaning the general public who don’t understand how to use it – should put your trust in. It similarly horrifies me that in an attempt to push back on this, people will jump straight to vibes-based, unresearched, and fundamentally nonsensical talking points. I want the general public to be informed, because like the old joke comparing tech enthusiasts to software engineers, learning these things 1) equips you with the tools to know and explain why this is bad, and 2) reveals that it’s worse than you think it is. I would actually prefer specificity when we’re talking about AI models; that’s why instead of “AI slop”, I use “LLM slop” for text and, well, unfortunately, literally nobody in casual conversation knows what other foundation models or their acronyms are, so sometimes I just have to call it “AI slop” (e.g. for imagegen). I would love it if more people knew what a transformer model is so we could talk about transformer models instead of the blanket “AI”.
By trying to incorrectly differentiate “AI” from “machine learning”, we’re giving dishonest corporations more power by implying that only now do we truly have “artificial intelligence” and that everything that came before is merely “machine learning”. By muddling what’s actually a very straightforward hierarchy of terms (as opposed to a murky, nonsensical dichotomy of “AI is anything I don’t like, and ML is anything I do”), we’re misinforming the public and making the problem worse. By showing that “AI” is just a very general field that GPTs live inside, we reduce the power of “AI” as a marketing buzzword.