• hendrik@palaver.p3x.de · 12 hours ago

    This is a lot of framing to make OpenAI look better, blaming everyone else and rushed technology instead of them. They did have these guardrails, and it seems they even did their job and flagged him hundreds of times. So why don’t they enforce their own TOS? They chose not to. If I breach my contracts and don’t pay, or upload music to YouTube, THEY terminate my contract. It’s their rules, and their obligation to enforce them.

    I mean, why did they even invest in developing those guardrails and abuse-detection mechanisms if they then choose to ignore them? It makes almost no sense. Either save that money and have no guardrails, or actually make use of them?!

    • MelonYellow@lemmy.ca · 3 hours ago

      If they cared, it should’ve been escalated to the authorities and investigated as a mental-health emergency. It’s not just a curious question if he was searching it hundreds of times. If he was actively planning suicide, where I’m from that’s grounds for an involuntary psych hold.

      • hendrik@palaver.p3x.de · 2 hours ago

        I’m a big fan of regulation. These companies try to grow at all costs, and they’re pretty ruthless. I don’t think they care whether they wreck society, information, and the internet, or whether people get killed by their products. Even bad press from that doesn’t really have an effect on their investors, because that’s not what it’s about… It’s just that OpenAI is an American company, and I’m not holding my breath for that government to step in.

    • frunch@lemmy.world · 10 hours ago

      I’m chuckling at the idea of someone using ChatGPT, recognizing at some point that they’ve violated the TOS, immediately stopping using the app, and then also reaching out to OpenAI to confess and accept their punishment 🤣

      Come to think of it, is that how OpenAI thought this actually works?

    • ShadowRam@fedia.io · 11 hours ago

      Well, if people started calling it what it is, a weighted random text generator, then maybe they’d stop relying on it for anything serious…
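
      (For anyone who hasn’t seen it spelled out, that phrase is fairly literal. Here’s a minimal sketch of weighted random text generation; the tokens and probabilities are obviously made up, and a real LLM recomputes them over ~100k tokens at every step:)

      ```python
      import random

      # Toy "weighted random text generator": the model assigns a weight
      # to each candidate next token, and one is drawn at random according
      # to those weights. The numbers here are invented for illustration.
      next_token_probs = {
          "hello": 0.40,
          "the": 0.25,
          "sorry": 0.20,
          "xylophone": 0.15,
      }

      tokens = list(next_token_probs)
      weights = list(next_token_probs.values())

      # Generate a few tokens by repeated weighted sampling.
      text = [random.choices(tokens, weights=weights, k=1)[0] for _ in range(5)]
      print(" ".join(text))
      ```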

      • AnarchistArtificer@slrpnk.net · 5 hours ago

        I like how the computational linguist Emily Bender refers to them: “synthetic text extruders”.

        The word “extruder” makes me think about meat processing that makes stuff like chicken nuggets.

      • hendrik@palaver.p3x.de · 11 hours ago

        Yeah, my point was more that this doesn’t have anything to do with AI or the technology itself, I mean whether AI is good or bad or doesn’t really work… Their guardrails did work exactly as intended and flagged the account hundreds of times for suicidal thoughts, at least according to these articles. So it’s more a business decision not to intervene, and it has little to do with what AI is and what it can do.

        (Unless the system comes with too many false positives. That would be a problem with the technology. But this doesn’t seem to be discussed in any form.)
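
        (To make the detection-versus-enforcement split concrete, here’s a hypothetical sketch; the threshold, names, and flag count are all invented. The flagging side can work perfectly while what happens on a flag remains a pure policy switch:)

        ```python
        FLAG_THRESHOLD = 3  # hypothetical: escalate after this many self-harm flags

        def handle_flags(flag_count: int, enforce: bool) -> str:
            """Detection fires regardless; enforcement is a separate policy choice."""
            if flag_count < FLAG_THRESHOLD:
                return "no action"
            if enforce:
                return "suspend account and surface crisis resources"
            return "log and ignore"  # guardrail "worked", yet nothing happens

        # Flagged hundreds of times, but with enforcement switched off:
        print(handle_flags(flag_count=300, enforce=False))  # -> log and ignore
        ```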

        • Axolotl@feddit.it · 11 hours ago

          I wonder what a keyboard with that kind of enhanced autocomplete would be like to use… provided, of course, that the autocomplete runs locally and the app is open source.
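
          (Something like this would be the local-only core of it, as a toy sketch: a bigram model trained on on-device text. The corpus and function names are made up for illustration, nothing like a real keyboard engine:)

          ```python
          from collections import Counter, defaultdict

          # Toy on-device autocomplete: a bigram model over local text only.
          corpus = "the cat sat on the mat and the cat slept".split()

          bigrams = defaultdict(Counter)
          for prev, nxt in zip(corpus, corpus[1:]):
              bigrams[prev][nxt] += 1  # count which word follows which

          def suggest(prev_word: str, n: int = 3) -> list[str]:
              """Return up to n words most often seen after prev_word."""
              return [w for w, _ in bigrams[prev_word].most_common(n)]

          print(suggest("the"))  # -> ['cat', 'mat']
          ```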