The Internet being mostly broken at this point is driving me a little insane, and I can’t believe that people who have the power to keep a functioning search engine for themselves wouldn’t go ahead and do it.

I wonder about this every time I see people(?) crowing about how amazing AI is. Like, is there some secret useful AI out there that plebs like me don’t get to use? Because otherwise, huh?

  • leftzero@lemmy.dbzer0.com · 9 hours ago

    No. They’re drinking their own Kool-Aid.

    They’ve offloaded what little thinking they did to LLMs (not that LLMs can think, but in this case it makes no difference), and at this point would no longer be able to function if they had to think for themselves.

    Don’t think of them as human people with human needs.

    They’re mere parasites, all higher functions withered away through lack of use, now more than ever.

    They could die and be replaced by their chatbots, and we wouldn’t notice a difference.

    • krooklochurm@lemmy.ca · 4 hours ago

      I’m not sure Google has offloaded all of their thinking to LLMs.

      Google still employs very very smart people.

      They’d just have to be morally bankrupt human refuse to be actively contributing to the profit-driven destruction of the internet and to mass public surveillance like they are, so the rest of your points still stand.

      And while a lot of that intelligence may be wasted, it’s more a function of banal evil and corporate bloat than LLMs.

      • leftzero@lemmy.dbzer0.com · 3 hours ago

        We’re talking execs here, not people.

        Of course they’ve got smart people they’re still in the process of getting rid of, but they’re not who the OP was asking about, and they’re mostly irrelevant anyway (and have been since long before LLMs became a problem), since they’re not the ones making decisions.

        (Even when talking about smart people, though, being smart about certain things doesn’t mean they’re immune to LLMs. If those things are good at anything, it’s catfishing people into believing they’re actually intelligent and useful for something, and many a smart developer or scientist involved in their development has fallen for their stochastic bullshit. And once the brain damage has set in, it appears to be quite permanent.)

        • krooklochurm@lemmy.ca · 3 hours ago

          While I agree that execs are not people, I don’t think they’re being controlled by LLMs.

          They’re already idiots for the most part, though, so what does it matter?

          • leftzero@lemmy.dbzer0.com · 3 hours ago

            The most horrific part is that we can’t tell the difference.

            Controlled by LLMs or not, their actions would be indistinguishable.

          • InputZero@lemmy.world · 3 hours ago

            Controlled by LLMs, perhaps not, but I believe the execs pushing AI are drinking as much AI Kool-Aid as anyone you know who has AI psychosis. That could be why AI is so sycophantic: it’s how the execs at the big 7 want the world to treat them, and they’ve drunk so much of their own Kool-Aid that they believe it now.

      • Melvin_Ferd@lemmy.world · 3 hours ago

        They’re obsessed. When there’s manufactured outrage, it starts out sensible but quickly devolves into the radicals spewing what you see up top. AI and chatbots have issues, but the push to convince the public to hate them was heavy on Lemmy. So now there are these radicals living in their own toxic fantasy.

        • krooklochurm@lemmy.ca · 3 hours ago

          I started tinkering with AI right around the time ChatGPT rose to prominence. Locally. On my own machine.

          I’m not a doctoral level researcher but I mostly get the tech.

          I couldn’t agree more. People use AI as a blanket term and don’t understand the difference between an LLM and a GAN, or any of the dozens of other kinds of models.

          If it’s AI, it’s bad. Just full stop. Like, the anger of people decrying the death of artistic beauty on subs that prominently feature MS Paint stick figure drawings and shitty distorted images makes no sense to me. This isn’t costing anyone a job. It’s fucking garbage content, with no agenda, and always was.

          Having autonomous LLMs posting things is problematic, but having AI-generated shitposts isn’t.

          There is fuck all wrong with using AI to make art to hang on your walls, or funny t-shirts, or ridiculous banners, or funny pictures to share with friends. The people who decry the death of art have never bought anything in a gallery; they were fine with artists getting paid fuck all before AI. They weren’t contributing to artists’ livelihoods in any meaningful way.

          And like, the most vocal critics seem to understand the least about it. They hate it because it’s made with AI and just assume that someone made it using OpenAI, because that’s the only thing their rage-addled minds can imagine existing.

          They say it’s theft and we should ban everything (how’s that working out for you?) instead of clamouring for fair compensation for anyone whose work is being used to train a model.

          They’ll yell that all these models are based on theft. And sure. But a) I don’t give a flying fuck about a corporation’s right to exploit an artist and profit off their work, and never have. And b) they’ll respond to the suggestion that we create new models that fairly compensate people by yelling louder and becoming irate.

          They’re not rational. There are many valid criticisms of the tech, but you can’t even talk to these people about addressing them. Because a lot of the criticisms can and should be addressed. They won’t hear it.

    • pugnaciousfarter@literature.cafe · 3 hours ago

      I don’t think they are drinking their own Kool-Aid.

      Meta’s Zuck and TikTok’s CEO don’t let their kids on their respective short-form content platforms because they know how harmful they are.

      They are smart enough to know not to dip into their stash.

      I think they definitely have their own version of it.