• shalafi@lemmy.world · 3 days ago

    Seems like these traps would be trivially easy to defeat. I should get off my ass and run one, see how it goes.

    • Krudler@lemmy.world · 2 days ago

      Agree. This is another revenge fantasy from people who think the idea is great, without understanding that implementation is where it's gonna break down.

      • VoterFrog@lemmy.world · 2 days ago

        Yeah, much like with the thorn, LLMs are more than capable of recognizing when they're being fed Markov gibberish. Try it yourself. I asked one to summarize a bunch of keyboard autocomplete junk:

        The provided text appears to be incoherent, resembling a string of predictive text auto-complete suggestions or a corrupted speech-to-text transcription. Because it lacks a logical grammatical structure or a clear narrative, it cannot be summarized in the traditional sense.
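        For reference, this kind of junk input is easy to generate yourself. A rough word-level Markov chain sketch (the function name and seed text are just made up for illustration):

        ```python
        import random

        # Build a word-level Markov chain from a seed text and sample from it
        # to produce the kind of "Markov gibberish" described above.
        def markov_gibberish(seed_text: str, length: int = 20) -> str:
            words = seed_text.split()
            # Map each word to the list of words observed to follow it.
            chain: dict[str, list[str]] = {}
            for a, b in zip(words, words[1:]):
                chain.setdefault(a, []).append(b)
            word = random.choice(words)
            out = [word]
            for _ in range(length - 1):
                # If the current word has no recorded successor, restart anywhere.
                word = random.choice(chain.get(word, words))
                out.append(word)
            return " ".join(out)
        ```

        Feed the output of something like this to a chatbot and you get the "cannot be summarized" response quoted above.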

        I've tried the same with posts that have the thorn in them, and it will explain that the person writing the post is being cheeky, and still successfully summarize the information. These aren't real techniques for LLM poisoning.

          • VoterFrog@lemmy.world · 2 days ago

            An AI crawler is both. It uses LLMs to extract useful information from websites and turn it into higher-quality training data. They're also used for RAG.