• Jesus_666@lemmy.world
    1 day ago

    They are commonly being used in roles where a human performing the same task would be a mandated reporter. That’s a scenario the current regulations weren’t designed for, and one that a future iteration will have to address. Lawsuits like this one are the first step towards that.

    • peoplebeproblems@midwest.social
      1 day ago

      I agree. However, I do realize that in a case like this one, holding the system to a mandated-reporter standard for a jailbroken prompt would be impossible, given the complexity of human language.

      Arguably, you’d have to train an entirely separate LLM just to detect anything that remotely qualifies as harmful language, and with the way they train their models, that isn’t possible.
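
      To illustrate what that second pass would even look like, here’s a minimal sketch in Python, assuming an off-the-shelf toxicity classifier from Hugging Face; the model name, threshold, and flagging logic are placeholder assumptions, not anything the chatbot vendors are known to run:

      ```python
      # Hedged sketch: run every model reply through a separately trained
      # classifier before it reaches the user. The model name and threshold
      # below are illustrative assumptions only.
      from transformers import pipeline

      safety_check = pipeline("text-classification", model="unitary/toxic-bert")

      def reply_looks_harmful(reply: str, threshold: float = 0.8) -> bool:
          """Flag a reply when the classifier's top harm score crosses the threshold."""
          top = safety_check(reply, truncation=True)[0]
          return top["score"] >= threshold
      ```

      Even then, this only scores one message at a time; nothing here reasons about a long, slowly jailbroken conversation, which is exactly the failure mode in question.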

      The technology simply isn’t ready for this kind of use, and people are largely unaware of how this AI actually works.

      • Jesus_666@lemmy.world
        23 hours ago

        I fully agree. LLMs create situations that our laws aren’t prepared for, and we can’t reasonably get them into a compliant state because of how the technology works. We can’t guarantee that an LLM won’t lose coherence to the point of ignoring its rules as the context grows longer; the technology inherently can’t make that kind of guarantee.

        We can try to add patches, like a rules-based system that scans chats and flags them for manual review if certain terms show up, but whether those patches suffice remains to be seen.
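
        As a minimal sketch of that kind of patch, assuming a made-up term list and review queue (none of this reflects any vendor’s actual system):

        ```python
        # Hedged sketch of a rules-based patch: scan each chat message for
        # red-flag terms and queue matching conversations for human review.
        # The term list and the queue structure are illustrative placeholders.
        import re

        RED_FLAG_TERMS = [
            r"\bkill myself\b",
            r"\bend my life\b",
            r"\bhow to hurt\b",
        ]
        RED_FLAG_RE = re.compile("|".join(RED_FLAG_TERMS), re.IGNORECASE)

        def scan_message(chat_id: str, message: str, review_queue: list) -> bool:
            """Queue the chat for manual review if any red-flag term appears."""
            if RED_FLAG_RE.search(message):
                review_queue.append({"chat_id": chat_id, "excerpt": message[:200]})
                return True
            return False
        ```

        A fixed term list only catches the phrasings someone thought to write down, which is the main reason a patch like this may not be enough.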

        Of course, most of the tech industry will instead clamor for an exception because “AI” (read: LLMs and image generation) is far too important to let petty rules hold back progress. Why, if we try to enforce those rules, China will inevitably develop Star Trek-level technology within five years and life as we know it will be doomed. Doomed, I say! Or something.