We could fix this problem very quickly with guillotines
Let me get this straight.
AI is being used to try to influence people's opinions online, potentially as a tactic to sway public opinion on politics and help swing elections.
And now the AI has become racist, so it's trying to promote and normalize the concept of casual racism.
Ugh! Nobody is going to fall for that! People will easily be able to see how wrong that is, and how stupid it would be to elect racists.
looks at who the current president is
…yeah, I may have overestimated the American voting public here. We’re fucked.
It has deliberately been created racist, because racism, alongside all the other bigotries, is an effective way for oligarchs to get the rabble to do their bidding and forget their actual problems.
So if it’s so effective, use AI to spread anti-racist content.
That is not how anything works. Hatred and bigotry are easy to spread because they are simple, low-effort messages; anti-racism isn’t. It requires thought and context, everything the target audience for racism lacks.
Turns out the guys that own the big tech and media companies just happen to be racists.
It doesn’t work that way; peddling bigotry and hate is an asymmetric tactic. It’s a lot easier to make slop that triggers people’s base fears in their lizard brain than it is to appeal to their enlightened sense of reasoning.
See also: Brandolini’s Law.
Brandolini’s law (or the bullshit asymmetry principle) is an Internet adage coined in 2013 by Italian programmer Alberto Brandolini. It compares the considerable effort of debunking misinformation to the relative ease of creating it in the first place. The adage states:
The amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it.[1][2]
IIRC it takes something like 6-8 times more exposure to true information to correct disinformation. The damage is done quite fast in the meantime, and racist/capitalist content is widely prevalent and systemic.
Great thumbnail


