• peoplebeproblems@midwest.social · 1 day ago

    The “jailbreak” in the article is the circumvention of the safeguards. Basically, you just find a prompt that lets it generate the text in a framing that falls outside the contexts it’s prevented from responding to.

    The software service doesn’t prevent ChatGPT from still being an LLM.

    • killeronthecorner@lemmy.world · 24 hours ago

      If the jailbreak is essentially saying “don’t worry, I’m asking for a friend / for my fanfic,” then that isn’t a jailbreak; it’s a hole in the safeguards, because what society (and the law) is asking for is that children not be exposed to material about self-harm, fictional or not.

      This is still OpenAI doing the bare minimum and shrugging about it when, to the surprise of no one, it doesn’t work.