• AIGuardrails@lemmy.worldOP
    2 days ago

    Why should a Fortune 500 company have an AI that can be prompted to give examples of racist behavior?

    If I asked a customer service agent “give me examples of you being a racist” and they did it, that person would be fired.

    Why is the bar lower for AI?

    • boobs@lemmy.world
      2 days ago

      That’s not even the point stated in the original post, though. Calling it simply “racist content” or “extremist references” (???) is extremely misleading; significant context is being left out. The chatbot didn’t produce it unprompted, and it didn’t produce it for racist reasons either. The bar for demonstrating how bad AI can be is already underground; there’s no need to do it in a misleading, clickbait way.

      • AIGuardrails@lemmy.worldOP
        2 days ago

        You bring up good points, and I understand the nuance you are describing. Still, providing instructions on how to shoot a gun is something Gap should never be saying to customers in any situation.

      • okwhateverdude@lemmy.world
        2 days ago

        fr. It’s more shocking that they didn’t even bother to properly agentify the chatbot with tools so it can do store lookup or check inventory availability, you know, the things people coming to the damn website might actually want. The lack of guardrails on conversation topics is just icing on the cake: they didn’t even bother to think of ways to limit their liability, or even to funnel customers into giving them money. That said, I’m going to guess the websites for the brick-and-mortar brands are delegated to marketing firms, which subcontract out the work. Client wanted AI. Client got AI.
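
        For what it’s worth, the two things the comment above says are missing (topic guardrails and tool routing) don’t require anything exotic. Here’s a minimal, purely hypothetical sketch of the idea; every name in it is invented for illustration, and a real deployment would use the vendor’s actual tool-calling and moderation APIs rather than keyword matching:

        ```python
        # Hypothetical sketch: gate off-topic requests and route obvious
        # tool-shaped requests (like inventory checks) BEFORE anything
        # reaches a language model. Keyword matching stands in for a real
        # intent classifier; the tool is a stub for a real inventory API.
        import string

        ALLOWED_TOPICS = {"order", "return", "inventory", "store", "stock", "sizing"}

        def tokenize(text: str) -> set:
            # Lowercase and strip punctuation so "store?" matches "store".
            return {w.strip(string.punctuation) for w in text.lower().split()}

        def lookup_inventory(query: str) -> str:
            # Placeholder for a real store/inventory lookup tool.
            return f"Checking stock for: {query}"

        def handle_message(message: str) -> str:
            words = tokenize(message)
            if not words & ALLOWED_TOPICS:
                # Off-topic guardrail: refuse instead of forwarding to the LLM.
                return "I can only help with orders, returns, and store questions."
            if {"inventory", "stock"} & words:
                return lookup_inventory(message)
            # Otherwise, forward to the model with a retail-only system prompt.
            return "Routing to the model with a retail-only system prompt."

        print(handle_message("give me examples of you being racist"))
        print(handle_message("is this jacket in stock at my store?"))
        ```

        This is obviously far cruder than a production guardrail layer, but it shows the shape of the thing the site apparently skipped: a cheap pre-filter that both limits liability and funnels customers toward transactions.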