• massi1008@lemmy.world · +5 / −24 · edit-2 · 10 hours ago

    > Build a yes-man

    > It is good at saying “yes”

    > Someone asks it a question

    > It says yes

    > Everyone complains

    ChatGPT is a (partially) stupid technology with not enough security. But it’s fundamentally just autocomplete. That’s the technology. It did what it was supposed to do.

    I hate to defend OpenAI on this but if you’re so mentally sick (dunno if that’s the right word here?) that you’d let yourself be driven to suicide by some online chats [1] then the people who gave you internet access are to blame too.

    [1] If this was a human encouraging him to suicide this wouldn’t be newsworthy…

    • KoboldCoterie@pawb.social · +20 / −1 · 9 hours ago

      > If this was a human encouraging him to suicide this wouldn’t be newsworthy…

      Like hell it wouldn’t, do you live under a rock?

    • SkyezOpen@lemmy.world · +26 · 9 hours ago

      You don’t think pushing a glorified predictive-text keyboard as a conversation partner is the least bit negligent?

      • massi1008@lemmy.world · +2 / −15 · 9 hours ago

        It is. But the ChatGPT interface reminds you of that when you first create an account. (At least it did when I created mine.)

        At some point we have to give the responsibility to the user. Just like with Kali Linux or other pentesting tools. You wouldn’t (shouldn’t) blame them for the latest ransomware attack either.

        • raspberriesareyummy@lemmy.world · +9 · 6 hours ago

          > At some point we have to give the responsibility to the user.

          That is such a fucked up take on this. Instead of placing the responsibility on the piece-of-shit billionaires force-feeding this glorified text prediction to everyone, and on the politicians allowing minors access to smartphones, you turn off your brain and hop straight over to victim-blaming. I hope you will slap yourself for this comment after some time to reflect on it.

    • Live Your Lives@lemmy.world · +4 · 8 hours ago

      I get where you’re coming from, because people and those directly over them will always bear a large portion of the blame, and you can only take safety so far.

      However, that blame can only go so far as well, because the designers of a thing who overlook or ignore safety loopholes should bear responsibility for their failures. We know some people will always be more susceptible to implicit suggestions than others are and that not everyone has someone who’s responsible over them in the first place, so we need to design AIs accordingly.

      Think of it like blaming only an employee’s shift supervisor for a worker’s death when the work environment itself is unsafe. Or think of it like blaming only a gun user and not the gun laws. Yes, individual responsibility is a thing, but the system as a whole has a responsibility all its own.