• petrol_sniff_king@lemmy.blahaj.zone · 10 hours ago

    Edit: TL;DR: You can’t just weasel your way into a position where “AI is all the bad stuff and machine learning is all the good stuff” under the guise of linguistic relativism.

    You can, actually, because the inverse is exactly what marketers are vying for: AI, a term with immense baggage, is easier for laymen to recognize, and implies a hell of a lot more than it actually delivers. It is intentionally leaning on the very cool futurism of AI to sell itself as the next evolutionary stage of human society—and so, has consumed all conversation about AI entirely. It is Hannibal Lecter wearing the skin of decades of sci-fi movies.

    “Machine learning” is not a term used by sycophants (as often), and so implies different things about the person saying it. For one, they may have actually seen a college with their eyes.

    So, you seem to be implying there isn’t a difference, but there is: people who suck say one, people who don’t say the other. No amount of academic rigor can sidestep this problem.

    • TheTechnician27@lemmy.world · 6 hours ago (edited)

      Quite the opposite: I recognize there’s a difference, and it horrifies me that corporations spin AI as something you – “you” meaning the general public who don’t understand how to use it – should put your trust in. It similarly horrifies me that, in an attempt to push back on this, people jump straight to vibes-based, unresearched, and fundamentally nonsensical talking points. I want the general public to be informed because, like the old joke comparing tech enthusiasts to software engineers, learning these things 1) equips you with the tools to know and explain why this is bad, and 2) reveals that it’s worse than you think it is.

      I would actually prefer specificity when we’re talking about AI models; that’s why I use “LLM slop” instead of “AI slop” for text. Unfortunately, literally nobody in casual conversation knows what the other foundation models or their acronyms are, so sometimes I just have to call it “AI slop” anyway (e.g. for imagegen). I would love it if more people knew what a transformer model is so we could talk about transformer models instead of the blanket “AI”.

      By trying to incorrectly differentiate “AI” from “machine learning”, we’re giving dishonest corporations more power by implying that only now do we truly have “artificial intelligence” and that everything that came before was merely “machine learning”. By muddling what’s actually a very straightforward hierarchy of terms (as opposed to a murky, nonsensical dichotomy of “AI is anything I don’t like, and ML is anything I do”), we’re misinforming the public and making the problem worse. By showing that “AI” is just a very general field that GPTs live inside, we reduce the power of “AI” as a marketing buzzword.
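      To make that hierarchy concrete, here’s a rough sketch of the nesting as I understand it. This is a toy illustration of my own – the taxonomy dict and the contains() helper are made up for this comment, and real category boundaries are fuzzier than any diagram – but the containment relation itself (AI ⊃ machine learning ⊃ deep learning ⊃ transformer models ⊃ GPTs) is the standard one:

          # Toy sketch of the term hierarchy. My own simplification;
          # the point is only that each level nests inside the one above it.
          taxonomy = {
              "artificial intelligence": {         # the broad research field
                  "machine learning": {            # systems that learn from data
                      "deep learning": {           # multi-layer neural networks
                          "transformer models": [  # attention-based architectures
                              "GPT (text generation)",
                              "ViT (image classification)",
                          ],
                      },
                  },
              },
          }

          def contains(tree, term):
              """Return True if `term` appears anywhere inside `tree`."""
              if isinstance(tree, dict):
                  return any(key == term or contains(value, term)
                             for key, value in tree.items())
              return term in tree

          # GPTs do live inside "AI", so calling them "AI" is technically
          # true -- it's just the least informative level you could pick:
          print(contains(taxonomy, "GPT (text generation)"))              # True
          print(contains(taxonomy["artificial intelligence"],
                         "transformer models"))                           # True

      Every level down that tree adds information; “AI” is the level that adds almost none, which is exactly why marketing prefers it.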