Want to wade into the spooky surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this. Happy Halloween, everyone!)

  • nfultz@awful.systems · 12 points · 1 day ago

    Apologies for doing journal club instead of sneer club.

    Voiseux, G., Tao Zhou, R., & Huang, H.-C. (Brad). (2025). Accepting the unacceptable in the AI era: When & how AI recommendations drive unethical decisions in organizations. Behavioral Science & Policy, 0(0). https://doi.org/10.1177/23794607251384574

    Abstract:

    In today’s workplaces, the promise of AI recommendations must be balanced against possible risks. We conducted an experiment to better understand when and how ethical concerns could arise. In total, 379 managers made either one or multiple organizational decisions with input from a human or AI source. We found that, when making multiple, simultaneous decisions, managers who received AI recommendations were more likely to exhibit lowered moral awareness, meaning reduced recognition of a situation’s moral or ethical implications, compared with those receiving human guidance. This tendency did not occur when making a single decision. In supplemental experiments, we found that receiving AI recommendations on multiple decisions increased the likelihood of making a less ethical choice. These findings highlight the importance of developing organizational policies that mitigate ethical risks posed by using AI in decision-making. Such policies could, for example, nudge employees toward recalling ethical guidelines or reduce the volume of decisions that are made simultaneously.

    so is the moral decline a side effect, or technocapitalism working as designed?