• CheeseNoodle@lemmy.world · 2 hours ago

    Even a genuinely perfect model would immediately skew toward bias: the moment some statistical fluke gets incorporated into the training data, it becomes self-reinforcing, and the model will create and then amplify that bias in a feedback loop.
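
    That feedback loop can be sketched in a few lines. This is a hypothetical toy model (not any specific system): a small fluke nudges the observed rate above the true value, the model's predictions are fed back as training data, and the fluke compounds each round.

```python
def feedback_loop(rounds=20, initial_rate=0.5):
    """Toy simulation of a model retrained on its own outputs.

    A tiny fluke in round 0 shifts the observed rate; predictions
    derived from that rate become the next round's training data,
    so the fluke is amplified instead of averaged away.
    """
    rate = initial_rate + 0.02  # a small statistical fluke above the true 0.5
    history = [rate]
    for _ in range(rounds):
        # The model slightly over-predicts in proportion to what it observed.
        predicted = min(1.0, rate * 1.05)
        # New training data is a blend of old data and the model's own predictions.
        rate = 0.5 * rate + 0.5 * predicted
        history.append(rate)
    return history

h = feedback_loop()
# The 0.02 fluke never shrinks; it grows every round.
assert all(b >= a for a, b in zip(h, h[1:]))
```

    Even with a 50/50 blend of old data and model output, the multiplicative feedback means the initial fluke compounds geometrically rather than washing out.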

    • Jason2357@lemmy.ca · 2 hours ago

      Usually these models are trained on past data and then applied going forward, so whatever bias was in the past data gets used as a predictive signal. Plenty of facial features correlate with race, and when the model picks them up because the past data is racially biased (through over-policing, lack of opportunity, poverty, etc.), that bias will be in the model. Guaranteed. These models absolutely do not care that correlation != causation. They are correlation machines.
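
      A minimal sketch of that effect, with made-up numbers: suppose the true offence rate is identical everywhere, but one neighborhood (a proxy that can correlate with race) is policed three times as heavily, so more offences there end up in the arrest records. Any correlation-driven model will then weight the proxy feature even though it has no causal link to offending.

```python
import random

def correlation(xs, ys):
    """Pearson correlation, stdlib-only."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(0)
data = []
for _ in range(10_000):
    offended = random.random() < 0.1       # true rate: identical for everyone
    neighborhood = random.random() < 0.5   # proxy feature (hypothetical)
    # Over-policing: offences in the heavily policed neighborhood
    # are recorded 3x as often as elsewhere.
    record_prob = 0.9 if neighborhood else 0.3
    arrested = offended and random.random() < record_prob
    data.append((int(neighborhood), int(arrested)))

xs = [n for n, _ in data]
ys = [a for _, a in data]
# Positive correlation between the proxy and the label, produced
# entirely by biased recording, not by any difference in behavior.
print(round(correlation(xs, ys), 3))
```

      The model has no way to distinguish "this feature causes arrests" from "this feature predicts which arrests got recorded", so the enforcement bias becomes a predictive variable.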