• Rooty@lemmy.world · 6 days ago

    Ffs, neural networks and LLMs have their place and can be useful, but setting up datacentres that snort up the entire internet indiscriminately to create a glorified chatbot that spews data that may or may not be correct is insane.

  • the_q@lemm.ee · 5 days ago

    AI ingesting AI slop and falling apart is not dissimilar to boomers ingesting rightwing slop and falling apart.

  • Blaster M@lemmy.world · 7 days ago

    So, reading this article, it’s not about model collapse but about RAG: essentially letting the AI model google the question. The problem is that the first 10 pages of Google search results are all low-effort ad-farming slop sites (because of course they are), which makes the AI’s answers worse: these slop sites often carry incorrect or otherwise unproofed articles, which biases the AI toward the wrong answer.
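
    For the unfamiliar, RAG is basically this loop. A rough Python sketch, with the search engine and the model as toy stand-ins (nothing here is a real API):

```python
def rag_answer(question, search, model, k=3):
    """Retrieval-augmented generation in three steps."""
    # 1. Retrieve: run the question through a search index
    results = search(question)[:k]  # top-k pages, slop and all
    # 2. Augment: paste the retrieved text into the prompt
    context = "\n\n".join(r["text"] for r in results)
    prompt = f"Answer using only this context:\n{context}\n\nQ: {question}\nA:"
    # 3. Generate: the model answers from the context,
    #    so garbage retrieval in means garbage answer out
    return model(prompt)

# Toy stand-ins that demonstrate the failure mode: the "search engine"
# returns one slop page and the "model" just parrots its context.
search = lambda q: [{"text": "GM is closing all US plants (adfarm slop)"}]
model = lambda p: p.split("context:\n")[1].split("\n\nQ:")[0]
print(rag_answer("Is GM closing US plants?", search, model))
```

    The model never fact-checks the context; whatever ranks on page one becomes the answer.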

    I’m sure the major AI services will try and fix this with some slop site detection routines.

    • frunch@lemmy.world · 7 days ago

      I’m sure the major AI services will try and fix this with some slop site detection routines.

      Which will be run by AI 🙃

      • melechric@lemmy.world · 7 days ago

        Don’t forget! A lot of the slop on those first few pages of results is AI-generated.

        Ouroboros is a very apt moniker for this phenomenon.

        • avattar@lemmy.sdf.org · 6 days ago

          We need a new, stronger name for this. Like shit ouroboros, or shouroboros. Yes, AI eating its own shit and then regurgitating it is shouroboros.

    • ℍ𝕂-𝟞𝟝@sopuli.xyz · 7 days ago

      some slop site detection routines.

      Why would they? I mean how are their incentives different from that of the search engine operators themselves?

      I can see a future where the internet has degraded to the point that if you try to find out how to peel an apple, you get back word salad and 25 different porn ads.

    • JeremyHuntQW12@lemmy.world · 6 days ago

      Yesterday, there was the usual slew of artificial, computer-generated news stories on YouTube about GM closing down all factories in North America (happens about once a month).

      Well I typed in “is GM closing down in the US” in Google and the Gemini generated answer said “Yes, GM has announced the closure of all plants in the US” and put up those fake YT videos as reference…

      I’m sure the major AI services will try and fix this with some slop site detection routines.

      They already do this through data determination routines in LLMs; unfortunately, those routines suffer from the same type of infection as the data itself.

      • Echo Dot@feddit.uk · 6 days ago

        You’d probably get better results from literally any other AI; Gemini is routinely the worst. I don’t know what Google is playing at. Surely they could put some real effort into this, but they just seem to be doing it in the most naive way possible.

        It says something when the Chinese are being the most innovative.

    • MrSilkworm@lemmy.ml · 7 days ago

      I’m sure the major AI services will try and fix this with some slop site detection routines.

      No they will not, because that would harm their short-term bottom line, which is always “add short-term value for the shareholder”.

  • _druid@sh.itjust.works · 7 days ago

    Aww, boo hoo, did someone generate a degenerative feedback loop? Yeah? Did someone make a big ol’ oopsie whoopsie that’s gunna accelerate into hallucinations and slop as it collapses in on itself? How’s the coded version of a microphone whine going to go, you silly buttholes?

    • Wilco@lemm.ee · 6 days ago

      People are putting up AI-generated pitfalls to guard their content.

      These pages reference nonsense links that usually can’t even be seen by normal users; the AI reads the pages and finds more garbage links, even as the site generates still more.
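
      For the curious, a tarpit like that can be tiny. A hypothetical Python sketch (not any real project; the word list and URL scheme are made up):

```python
import hashlib
import random

def tarpit_page(path):
    """Return a deterministic garbage page whose links lead to more garbage."""
    # Seed from the path so the same URL always serves the same page,
    # which makes the trap look like static content to a crawler.
    rng = random.Random(hashlib.md5(path.encode()).hexdigest())
    words = ["synergy", "quantum", "artisanal", "protocol", "butter"]
    text = " ".join(rng.choice(words) for _ in range(50))
    # display:none hides the links from human visitors; a scraper
    # following hrefs will descend into endless generated pages.
    links = "".join(
        f'<a href="/trap/{rng.randrange(10**9)}" style="display:none">more</a>'
        for _ in range(5)
    )
    return f"<html><body><p>{text}</p>{links}</body></html>"
```

      Every generated page links to five more that don’t exist until the crawler asks for them, so the scraper never runs out of “content”.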

      • _druid@sh.itjust.works · 6 days ago

        It’s just so unfortunate that, in causing AI to delve down these winding paths, to propagate these slopfest feedback loops, the computers that are running the AI are burning real resources, polluting our atmosphere.

        Unfortunate is not the right word to describe the deep lament I feel, to cause such destruction for so little, if any, gain at all. My heart is heavy with regret for us all. Not just you and I, but for beast, bird, plant as well. Such a shame.

  • Dizzy Devil Ducky@lemm.ee · 5 days ago

    If I had the money and a computer able to handle the amount of stuff I’d be throwing at it with a local model, I would run a giant website full of AI-generated nonsense, purely for the purpose of letting AI gobble it up and feed the AI incest problem.

    Imagine if a whole metric ton of “websites” did this. The thieving AI companies would either have to start blocking all of these sites, or deal with an issue they don’t want to deal with because they’re too stingy, and will probably just have their AI try (and fail) to fix the problem.

  • BigMacHole@lemm.ee · 7 days ago

    Oh no! I HOPE us Taxpayers can Bail Out these AI Companies when they go Under! AFTER ALL we CUT my Child’s LIFESAVING MEDICATION so I KNOW we have the Funds to Help these Poor Billionaire CEOS!

    • Etterra@discuss.online · 6 days ago

      I can’t afford groceries now! I’m sure all those billionaires will help us out now that they’ve got a little bit more, though.

    • utopiah@lemmy.world · 6 days ago

      Help these Poor Billionaire CEOS!

      Right, self-made billionaires for whom the way to success was already paved by subsidies. Yes, those surely need help to “build” absolutely pointless non-working projects that are supposed to “save humanity”. That’s great. /$

  • GoldenQuetzal@lemmy.world · 6 days ago

    I’ve been predicting this for a while now and people kept telling me I was wrong. Prepare for dot-com bust two: electric boogaloo.

    • bthest@lemmy.world · 5 days ago

      I hope it crashes but what if the market completely embraces feels-based economics and just says that incomprehensible AI slop noise is what customers crave? Maybe CEOs will interpret AI gibberish output in much the same way as ancient high priests made calls by sifting through the entrails of sacrificed animals. Tesla meme stock is evidence that you can defy all known laws of economic theory and still just coast by.

  • avattar@lemmy.sdf.org · 6 days ago

    There is a solution to this: make a **perfect** AI-detecting tool. The only way I can think of is adding a tag to every bit of AI-generated data, though it could easily be removed from text, I guess. And no, training AI to recognize AI will never work. Also, every model would have to join this, or it won’t work.

    Related XKCD
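
    That “easily removed” part is the whole problem. A toy sketch (the marker scheme here is made up):

```python
TAG = "\u200b[AI-GENERATED]\u200b"  # hypothetical invisible text marker

def tag_output(text):
    """What a cooperating model vendor would append to everything."""
    return text + TAG

def strip_tag(text):
    """What anyone passing the slop off as human work runs first."""
    return text.replace(TAG, "")  # one line defeats the whole scheme

tagged = tag_output("Totally human-written prose.")
print(strip_tag(tagged))  # indistinguishable from untagged text again
```

    Any tag that survives copy-paste can be stripped by the same tooling that reads it, which is why this only “works” if every party cooperates.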

    • Etterra@discuss.online · 6 days ago

      LOL you’re suggesting people already doing something unbelievably stupid should do something smart to compensate.

    • bthest@lemmy.world · 5 days ago

      Also, people won’t be able to pass AI work off as their own if it is labeled as such. Cheating and selling slop is the chief use for AI, so any tag or watermark will be removed from the vast majority of stuff.

      There’s also liability. If your AI generates code that’s used to program something important and a lot of people are injured or die, do you really want a tag on the evidence that can be traced back to the company? Or slapped all over the child sex abuse images that their wonderful invention is churning out?

  • tostos@lemmy.world · 6 days ago

    Fill up your free cloud services with AI-generated info. I mean thousands of text files, like “how to make a homemade butterfly”. All of them will get scraped by AI.