• Jason2357@lemmy.ca · 2 hours ago

    I can't imagine a model trained like this /not/ ending up encoding a bunch of features that correlate with race. It will find the white people, then reward itself because that group does statistically better.

    • CheeseNoodle@lemmy.world · 43 minutes ago

      Even a genuinely perfect model would immediately skew toward bias: the moment some statistical fluke gets incorporated into the training data, it becomes self-reinforcing, and the model will create and then reinforce that bias in a feedback loop.
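
      A toy simulation of that loop (a minimal sketch with made-up numbers, not any real system): two areas with identical true rates, a five-record fluke in the initial data, and a deployment policy that patrols wherever the historical records look worse. The fluke never gets corrected, because new data is only ever collected where the model already expects to find something:

      ```python
      import random

      random.seed(0)

      TRUE_RATE = {"A": 0.10, "B": 0.10}  # identical underlying rates
      arrests = {"A": 105, "B": 100}      # the initial fluke: 5 extra records for A
      PATROLS = 1000                      # patrols deployed each round

      for round_no in range(10):
          share_a = arrests["A"] / (arrests["A"] + arrests["B"])
          # Deployment: patrol wherever the past records look worse.
          target = "A" if share_a >= 0.5 else "B"
          # New records are only generated where patrols actually go,
          # so the other area never produces data to correct the estimate.
          arrests[target] += sum(random.random() < TRUE_RATE[target]
                                 for _ in range(PATROLS))
          print(f"round {round_no}: A's share of recorded risk = {share_a:.3f}")
      ```

      Within a few rounds, A's share of the records is climbing toward 1.0 even though the two areas are statistically identical.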

      • Jason2357@lemmy.ca · 9 minutes ago

        Usually these models are trained on past data and then applied going forward, so whatever bias is in the past data gets used as a predictive variable. Plenty of facial features correlate with race, and when the model picks them up because the past data is racially biased (over-policing, lack of opportunity, poverty, etc.), they will be in the model. Guaranteed. These models absolutely do not care that correlation != causation. They are correlation machines.
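
        A hedged sketch of that last point on entirely synthetic data (every number and feature name here is invented for illustration): the true behavior is identical across two groups, the recorded historical outcome is biased against group 1, and a proxy feature correlated with group membership is available. The model latches onto the proxy even though group itself is never an input:

        ```python
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n = 10_000
        group = rng.integers(0, 2, n)            # protected attribute (never shown to the model)
        proxy = group + rng.normal(0, 0.5, n)    # e.g. a facial feature correlated with group
        behavior = rng.normal(0, 1, n)           # true risk: identical distribution for both groups

        # Biased historical labels: same behavior, but group 1 is recorded more often.
        p_label = 1 / (1 + np.exp(-(behavior + 1.0 * group)))
        label = rng.random(n) < p_label

        X = np.column_stack([proxy, behavior])   # note: group is NOT a feature
        model = LogisticRegression().fit(X, label)
        print("weight on proxy feature:", model.coef_[0][0])
        print("weight on true behavior:", model.coef_[0][1])
        ```

        The proxy gets a clearly positive weight, because it's the best available correlate of the bias baked into the labels. Dropping the protected attribute from the inputs doesn't remove it from the model.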