• boonhet@sopuli.xyz
    5 days ago

    AI is biased by its training data, which is why, for example, facial recognition models end up racially biased.

    Now the question is, who chooses the training data for the AI?

    • WorldsDumbestMan@lemmy.today
      5 days ago

      The data scientists and nerdy AI experts. They have to train the AIs to a standard to avoid being sued.

      There’s room for racism, but it’s less likely than in the average MAGA head.

      That explains Grok roasting his owner.

      • boonhet@sopuli.xyz
        5 days ago

        Eh, Albania is probably not going to do it, so your assumption is fairly safe, but for example the US could just commission an AI biased toward whatever Trump wants.

        I tried out Deepseek R1 locally. It straight up told me it has no record of anything happening on Tiananmen Square, after the “thinking” part of the model output something to the tune of “The user is asking about specific historical events. I must provide correct information approved by the CCP”.