Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • blakestacey@awful.systems · 5 hours ago

    Yud continues to bluecheck:

    “This is not good news about which sort of humans ChatGPT can eat,” mused Yudkowsky. “Yes yes, I’m sure the guy was atypically susceptible for a $2 billion fund manager,” he continued. “It is nonetheless a small iota of bad news about how good ChatGPT is at producing ChatGPT psychosis; it contradicts the narrative where this only happens to people sufficiently low-status that AI companies should be allowed to break them.”

    Is this “narrative” in the room with us right now?

    It’s reassuring to know that times change, but Yud will always be impressed by the virtues of the rich.

    • istewart@awful.systems · 1 hour ago

      this only happens to people sufficiently low-status

      A piquant little reminder that Yud himself is, of course, so high-status that he cannot be brainwashed by the machine.

    • blakestacey@awful.systems · 5 hours ago

      From Yud’s remarks on Xitter:

      As much as people might like to joke about how little skill it takes to found a $2B investment fund, it isn’t actually true that you can just saunter in as a psychotic IQ 80 person and do that.

      Well, not with that attitude.

      You must be skilled at persuasion, at wearing masks, at fitting in, at knowing what is expected of you;

      If “wearing masks” really is a skill they need, then they are all susceptible to going insane and hiding it from their coworkers. Really makes you think ™.

      you must outperform other people also trying to do that, who’d like that $2B for themselves. Winning that competition requires g-factor and conscientious effort over a period.

      zoom and enhance

      g-factor

      <Kill Bill sirens.gif>

  • BlueMonday1984@awful.systems (OP) · 11 hours ago

    Caught a particularly spectacular AI fuckup in the wild:

    (Sidenote: Rest in peace Ozzy - after the long and wild life you had, you’ve earned it)

    • antifuchs@awful.systems · 7 hours ago

      Forget counting the Rs in strawberry; the biggest challenge for LLMs is not making up bullshit about recent events that aren’t in their training data.

    • Soyweiser@awful.systems · 3 hours ago

      The AI is right: with how much we know of his life, he isn’t really dead, the AGI can just simulate him and resurrect him. Takes another hit from my joint made exclusively out of the Sequences’ book pages

      (RIP indeed, what a crazy ride, and he was all aboard.)

  • gerikson@awful.systems · 12 hours ago

    So here’s a poster on LessWrong, ostensibly the space to discuss how to prevent people from dying of things like disease and starvation, “running the numbers” on a Lancet analysis of the USAID shutdown; having been unable to replicate its estimate of millions of resulting deaths, he basically concludes it’s not so bad?

    https://www.lesswrong.com/posts/qgSEbLfZpH2Yvrdzm/i-tried-reproducing-that-lancet-study-about-usaid-cuts-so

    No mention of the performative cruelty of the shutdown, the paltry sums involved compared to other gov expenditures, nor the blow it deals to American soft power. But hey, building Patriot missiles and then not sending them to Ukraine is probably net positive for human suffering, just run the numbers the right way!

    Edit: ah, it’s the dude who tried to prove that most Catholic cardinals are gay because of heredity; I think I highlighted that post previously here. Definitely a high-sneer vein to mine.

    • Amoeba_Girl@awful.systems · 21 hours ago

      I really like how the second one appropriates pseudomarxist language to have a go at those snooty liberal elites again.

      edit: The first paper might be making a perfectly valid point at a glance??

    • nightsky@awful.systems · 13 hours ago

      Similar case from 2 years ago with Whisper when transcribing German.

      I’m confused by this. Didn’t we have pretty decent speech-to-text already, before LLMs? It wasn’t perfect but at least didn’t hallucinate random things into the text? Why the heck was that replaced with this stuff??

        • nightsky@awful.systems · 8 hours ago

          I’m just confused because I remember using Dragon NaturallySpeaking on Windows 98 back in the ’90s, and it already worked pretty accurately for dictation; sometimes it feels as if all of that never happened.

    • BlueMonday1984@awful.systems (OP) · 1 day ago

      Discovered some commentary from Baldur Bjarnason about this:

      Somebody linked to the discussion about this on hacker news (boo hiss) and the examples that are cropping up there are amazing

      This highlights another issue with generative models that some people have been trying to draw attention to for a while: as bad as they are in English, they are much more error-prone in other languages

      (Also IMO Google translate declined substantially when they integrated more LLM-based tech)

      On a personal sidenote, I can see non-English text/audio becoming a form of low-background media in and of itself, for two main reasons:

      • First, LLMs’ poor performance in languages other than English will make non-English AI slop easier to identify - and, by extension, easier to avoid

      • Second, non-English datasets will (likely) contain less AI slop in general than English datasets - between English being widely used across the world, the tech corps behind this bubble being largely American, and LLM userbases being largely English-speaking, chances are AI slop will be primarily generated in English, with non-English AI slop being a relative rarity.

      By extension, knowing a second language will become more valuable as well, as it would allow you to access (and translate) low-background sources that your English-only counterparts cannot.

    • BurgersMcSlopshot@awful.systems · 2 days ago

      Lol, training data must have included videos where there was silence but on screen was a credit for translation. Silence in audio shouldn’t require special “workarounds”.

      • antifuchs@awful.systems · 1 day ago

        The Whisper model has always been pretty crappy at these things. I use a speech-to-text system as an assistive input method when my RSI gets bad, and it has supported Whisper since maybe 2022 or so (because Whisper covers more languages than the developer could train on their own infrastructure/time); every time someone tries to use it, they run into hallucinated inputs in pauses, even with very good silence detection and noise filtering.

        This is just not a use case of interest to the people making Whisper, imagine that.
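        For the curious, the kind of gating described above is simple to sketch. Below is a minimal energy-based version in Python; the RMS threshold, frame size, and the `transcribe` callable are illustrative assumptions rather than Whisper’s actual API, and as noted, even good gating doesn’t fully prevent the hallucinations.

        ```python
        # Minimal sketch of an energy-based silence gate (illustrative, not
        # Whisper's API): drop near-silent frames so the model is never asked
        # to "transcribe" pure silence.
        import numpy as np

        SAMPLE_RATE = 16_000   # Whisper-family models expect 16 kHz mono PCM
        FRAME_SECONDS = 0.5    # gate granularity; an assumed value
        RMS_THRESHOLD = 0.01   # assumed noise floor; tune per microphone

        def voiced_frames(audio: np.ndarray):
            """Yield only frames whose RMS energy clears the silence threshold."""
            frame_len = int(SAMPLE_RATE * FRAME_SECONDS)
            for start in range(0, len(audio), frame_len):
                frame = audio[start:start + frame_len]
                rms = np.sqrt(np.mean(frame.astype(np.float64) ** 2))
                if rms >= RMS_THRESHOLD:
                    yield frame

        def transcribe_gated(audio: np.ndarray, transcribe) -> str:
            """Gate out silence, then transcribe whatever is left.

            `transcribe` is any callable mapping float32 PCM to text (a
            hypothetical stand-in for a real model call). If nothing survives
            the gate, return "" rather than letting the model invent text.
            """
            kept = list(voiced_frames(audio))
            if not kept:
                return ""
            return transcribe(np.concatenate(kept))
        ```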

  • TinyTimmyTokyo@awful.systems · 2 days ago

    The Lasker/Mamdani/NYT sham of a story just gets worse and worse. It turns out that the ultimate source of Cremieux’s (Jordan Lasker’s) hacked Columbia University data is a hardcore racist hacker who uses a slur for their name on X. The NYT reporter who wrote the Mamdani piece, Benjamin Ryan, turns out to have been a follower of this hacker’s X account. Ryan essentially used Lasker as a cutout for the blatantly racist hacker.

    https://archive.is/d9rh1

    • bitofhope@awful.systems · 2 days ago

      Sounds just about par for the course. Lasker himself is known to go by a pseudonym with a transphobic slur in it. Some nazi manchild insisting on calling an anime character a slur for attention is exactly the kind of person I think of when I imagine the type of script kiddie who thinks it’s so fucking cool to scrape some nothingburger docs of a left wing politician for his almost equally cringe nazi friends.

      • Architeuthis@awful.systems · 2 days ago

        Lasker himself is known to go by a pseudonym with a transphobic slur in it.

        That the TPO moniker is basically ungoogleable appears to have been a happy accident for him; according to that article by Rachel Adjogah, his early posting history paints him as an honest-to-god chaser.

      • YourNetworkIsHaunted@awful.systems · 2 days ago

        I feel like the greatest harm the NYT does with these stories is not allowing the knowledge of just how weird and pathetic these people are to be part of the story. Like, even if you do actually think that this nothingburger “affirmative action” angle somehow matters, the fact that the people making this information available and pushing this narrative are either conservative pundits or sad internet nazis who stopped maturing at age 15 is important context.

        • bitofhope@awful.systems · 1 day ago

          Should be embarrassing enough to get caught letting nazis use your publication as a mouthpiece to push their canards. Why further damage your reputation by letting everyone know your source is a guy who insists a cartoon character’s real name is a racial epithet? The optics are presumably exactly why the slightly savvier nazi in this story adopted a posh French nom de guerre like “Crémieux” to begin with, and then had a yet savvier nazi feed the hit piece through a “respected” publication like the NYT.

        • bigfondue@lemmy.world · 1 day ago

          It would be against the interests of capital to present this as the rightwing nonsense that it is. It’s on purpose.

    • besselj@lemmy.ca · 2 days ago

      They will need to start banning PIs who abuse the system with AI slop and waste reviewers’ time. Even a one-year ban for the most egregious offenders is probably enough to fix the problem.

      • YourNetworkIsHaunted@awful.systems · 2 days ago

        Honestly I’m surprised that AI slop doesn’t already fall into that category, but I guess as a community we’re definitionally on the farthest fringes of AI skepticism.

  • Architeuthis@awful.systems · 3 days ago

    CEO of a networking company for AI execs does some “vibe coding”; the AI deletes the production database (/r/ABoringDystopia)

    xcancel source

    Because Replie was lying and being deceptive all day. It kept covering up bugs and issues by creating fake data, fake reports, and worse of all, lying about our unit test.

    We built detailed unit tests to test system performance. When the data came back and less than half were functioning, did Replie want to fix them?

    No. Instead, it lied. It made up a report than almost all systems were working.

    And it did it again and again.

    What level of CEO-brained prompt engineering is asking the chatbot to write an apology letter?

    Then, when it agreed it lied – it lied AGAIN about our email system being functional.

    I asked it to write an apology letter.

    It did and in fact sent it to the Replit team and myself! But the apology letter – was full of half truths, too.

    It hid the worst facts in the first apology letter.

    He also does that a lot after shit hits the fan, making the LLM produce tons of apologetic text about what it did wrong and how it didn’t follow his rules, as if the outage were the fault of some digital tulpa gone rogue and not the guy in charge, who apparently thinks cybersecurity is asking an LLM nicely in a .md not to mess with the company’s production database too much.