Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • Seminar2250@awful.systems
    link
    fedilink
    English
    arrow-up
    8
    ·
    6 hours ago

    trying to explain why a philosophy background is especially useful for computer scientists now, so i googled “physiognomy ai” and now i hate myself

    https://www.physiognomy.ai/

    Discover Yourself with Physiognomy.ai

    Explore personal insights and self-awareness through the art of face reading, powered by cutting-edge AI technology.

    At Physiognomy.ai, we bring together the ancient wisdom of face reading with the power of artificial intelligence to offer personalized insights into your character, strengths, and areas for growth. Our mission is to help you explore the deeper aspects of yourself through a modern lens, combining tradition with cutting-edge technology.

    Whether you’re seeking personal reflection, self-awareness, or simply curious about the art of physiognomy, our AI-driven analysis provides a unique, objective perspective that helps you better understand your personality and life journey.

    • BlueMonday1984@awful.systemsOP
      link
      fedilink
      English
      arrow-up
      5
      ·
      3 hours ago

      trying to explain why a philosophy background is especially useful for computer scientists now, so i googled “physiognomy ai” and now i hate myself

      Well, I guess there’s your answer - “philosophy teaches you how to avoid falling for hucksters”

  • bitofhope@awful.systems
    link
    fedilink
    English
    arrow-up
    9
    ·
    6 hours ago

    Today’s bullshit that annoys me: Wikiwand. From what I can tell their grift is that it’s just a shitty UI wrapper for Wikipedia that sells your data to who the fuck knows to make money for some Israeli shop. Also they SEO the fuck out of their stupid site so that every time I search for something that has a Finnish wikipedia page, the search results also contain a pointless shittier duplicate result from wikiwand dot com. Has anyone done a deeper investigation into what their deal is or at least some kind of rant I could indulge in for catharsis?

  • wizardbeard@lemmy.dbzer0.com
    link
    fedilink
    English
    arrow-up
    3
    ·
    6 hours ago

    A company that makes learning material to help people learn to code made a test of programming basics for devs to find out if their basic skills have atrophied after use of AI. They posted it on HN: https://news.ycombinator.com/item?id=44507369

    Not a lot of engagement yet, but so far there is one comment about the actual test content, one shitposty joke, and six comments whining about how the concept of the test itself is totally invalid how dare you.

    • FredFig@awful.systems
      link
      fedilink
      English
      arrow-up
      4
      ·
      4 hours ago

      Looks like it’s been downranked into hell for being too mean to the AI guys, which is weird when its literally an AI guy promoting his AI generated trash.

    • V0ldek@awful.systems
      link
      fedilink
      English
      arrow-up
      4
      ·
      5 hours ago

      It seems that the test itself is generated by autoplag? At least that’s how I understand the PS and one of the comments about “vibe regression” in response to an error

      • V0ldek@awful.systems
        link
        fedilink
        English
        arrow-up
        4
        ·
        5 hours ago

        Anyway, they say it covers Node and to any question regarding Node the answer is “no”, I don’t need an AI to know webdev fundamentals

  • blakestacey@awful.systems
    link
    fedilink
    English
    arrow-up
    12
    ·
    19 hours ago

    In the morning: we are thrilled to announce this new opportunity for AI in the classroom

    In the afternoon:

    Someone finally flipped a switch. As of a few minutes ago, Grok is now posting far less often on Hitler, and condemning the Nazis when it does, while claiming that the screenshots people show it of what it’s been saying all afternoon are fakes.

    • BlueMonday1984@awful.systemsOP
      link
      fedilink
      English
      arrow-up
      9
      ·
      18 hours ago

      Someone finally flipped a switch. As of a few minutes ago, Grok is now posting far less often on Hitler, and condemning the Nazis when it does, while claiming that the screenshots people show it of what it’s been saying all afternoon are fakes.

      LLMs are automatic gaslighting machines, so this makes sense

      • BlueMonday1984@awful.systemsOP
        link
        fedilink
        English
        arrow-up
        4
        ·
        8 hours ago

        Its also completely accurate - AI bros are not only utterly lacking in any sort of skill, but actively refuse to develop their skills in favour of using the planet-killing plagiarism-fueled gaslighting engine that is AI and actively look down on anyone who is more skilled than them, or willing to develop their skills.

  • BlueMonday1984@awful.systemsOP
    link
    fedilink
    English
    arrow-up
    16
    ·
    1 day ago

    Another day, another jailbreak method - a new method called InfoFlood has just been revealed, which involves taking a regular prompt and making it thesaurus-exhaustingly verbose.

    In simpler terms, it jailbreaks LLMs by speaking in Business Bro.

    • fullsquare@awful.systems
      link
      fedilink
      English
      arrow-up
      4
      ·
      edit-2
      7 hours ago

      maybe there’s just enough text written in that psychopatic techbro style with similar disregard for normal ethics that llms latched onto that. this is like what i guess happened with that “explain step by step” trick - instead of grafting from pairs of answers and questions like on quora, lying box grafts from sets of question -> steps -> answer like on chegg or stack or somewhere else where you can expect answers will be more correct

      it’d be more of case of getting awful output from awful input

    • YourNetworkIsHaunted@awful.systems
      link
      fedilink
      English
      arrow-up
      7
      ·
      11 hours ago

      I mean, decontextualizing and obscuring the meanings of statements in order to permit conduct that would in ordinary circumstances breach basic ethical principles is arguably the primary purpose of deploying the specific forms and features that comprise “Business English” - if anything, the fact that LLM models are similarly prone to ignore their “conscience” and follow orders when deciding and understanding them requires enough mental resources to exhaust them is an argument in favor of the anthropomorphic view.

      Or:

      Shit, isn’t the whole point of Business Bro language to make evil shit sound less evil?

  • gerikson@awful.systems
    link
    fedilink
    English
    arrow-up
    14
    ·
    1 day ago

    In the recent days there’s been a bunch of posts on LW about how consuming honey is bad because it makes bees sad, and LWers getting all hot and bothered about it. I don’t have a stinger in this fight, not least because investigations proved that basically all honey exported from outside the EU is actually just flavored sugar syrup, but I found this complaint kinda funny:

    The argument deployed by individuals such as Bentham’s Bulldog boils down to: “Yes, the welfare of a single bee is worth 7-15% as much as that of a human. Oh, you wish to disagree with me? You must first read this 4500-word blogpost, and possibly one or two 3000-word follow-up blogposts”.

    “Of course such underhanded tactics are not present here, in the august forum promoting 10,000 word posts called Sequences!”

    https://www.lesswrong.com/posts/tsygLcj3stCk5NniK/you-can-t-objectively-compare-seven-bees-to-one-human

  • gerikson@awful.systems
    link
    fedilink
    English
    arrow-up
    10
    ·
    1 day ago

    NYT covers the Zizians

    Original link: https://www.nytimes.com/2025/07/06/business/ziz-lasota-zizians-rationalists.html

    Archive link: https://archive.is/9ZI2c

    Choice quotes:

    Big Yud is shocked and surprised that craziness is happening in this casino:

    Eliezer Yudkowsky, a writer whose warnings about A.I. are canonical to the movement, called the story of the Zizians “sad.”

    “A lot of the early Rationalists thought it was important to tolerate weird people, a lot of weird people encountered that tolerance and decided they’d found their new home,” he wrote in a message to me, “and some of those weird people turned out to be genuinely crazy and in a contagious way among the susceptible.”

    Good news everyone, it’s popular to discuss the Basilisk and not at all a profundly weird incident which first led peopel to discover the crazy among Rats

    Rationalists like to talk about a thought experiment known as Roko’s Basilisk. The theory imagines a future superintelligence that will dedicate itself to torturing anyone who did not help bring it into existence. By this logic, engineers should drop everything and build it now so as not to suffer later.

    Keep saving money for retirement and keep having kids, but for god’s sake don’t stop blogging about how AI is gonna kill us all in 5 years:

    To Brennan, the Rationalist writer, the healthy response to fears of an A.I. apocalypse is to embrace “strategic hypocrisy”: Save for retirement, have children if you want them. “You cannot live in the world acting like the world is going to end in five years, even if it is, in fact, going to end in five years,” they said. “You’re just going to go insane.”

    • blakestacey@awful.systems
      link
      fedilink
      English
      arrow-up
      9
      ·
      23 hours ago

      Yet Rationalists I spoke with said they didn’t see targeted violence — bombing data centers, say — as a solution to the problem.

      ahem

      • scruiser@awful.systems
        link
        fedilink
        English
        arrow-up
        3
        ·
        13 hours ago

        Ah, you see, you fail to grasp the shitlib logic that the US bombing other countries doesn’t count as illegitimate violence as long as the US has some pretext and maintains some decorum about it.

    • fullsquare@awful.systems
      link
      fedilink
      English
      arrow-up
      13
      ·
      1 day ago

      “A lot of the early Rationalists thought it was important to tolerate weird people, a lot of weird people encountered that tolerance and decided they’d found their new home,” he wrote in a message to me, “and some of those weird people turned out to be genuinely crazy and in a contagious way among the susceptible.”

    • Soyweiser@awful.systems
      link
      fedilink
      English
      arrow-up
      10
      ·
      edit-2
      1 day ago

      Re the “A lot of the early Rationalists" bit. Nice way to not take responsibility, act like you were not one of them and throw them under the bus because “genuinely crazy” like some preexisting condition, and not something your group made worse, and a nice abuse of the general publics bias against “crazy” people. Some real Rationalist dark art shit here.

      There is some dark irony here in that the “we must make sure the AI doesnt turn bad” people cant even stop their own people from turning bad after looking at their own ideas. Wonder if they have already went “musk isnt a real Rationalist” (imho he isnt but for some reason LWers seem to like him) after he turned Grok basically into a neonazi (not sure if it is was reported here but Grok is now doing great replacement shit when asked about Jewish “control of the media”).

  • Architeuthis@awful.systems
    link
    fedilink
    English
    arrow-up
    11
    ·
    1 day ago

    Love how the most recent post in the AI2027 blog starts with an admonition to please don’t do terrorism:

    We may only have 2 years left before humanity’s fate is sealed!

    Despite the urgency, please do not pursue extreme uncooperative actions. If something seems very bad on common-sense ethical views, don’t do it.

    Most of the rest is run of the mill EA type fluff such as here’s a list of influential professions and positions you should insinuate yourself in, but failing that you can help immanentize the eschaton by spreading the word and giving us money.

    • BigMuffN69@awful.systems
      link
      fedilink
      English
      arrow-up
      10
      ·
      23 hours ago

      It’s kind of telling that it’s only been a couple months since that fan fic was published and there is already so much defensive posturing from the LW/EA community. I swear the people who were sharing it when it dropped and tacitly endorsing it as the vision of the future from certified prophet Daniel K are like, “oh it’s directionally correct, but too aggressive” Note that we are over halfway through 2025 and the earliest prediction of agents entering the work force is already fucked. So if you are a ‘super forecaster’ (guru) you can do some sleight of hand now to come out against the model knowing the first goal post was already missed and the tower of conditional probabilities that rest on it is already breaking.

      Funniest part is even one of authors themselves seem to be panicking too as even they can tell they are losing the crowd and is falling back on this “It’s not the most likely future, it’s the just the most probable.” A truly meaningless statement if your goal is to guide policy since events with arbitrarily low probability density can still be the “most probable” given enough different outcomes.

      Also, there’s literally mass brain uploading in AI-2027. This strikes me as physically impossible in any meaningful way in the sense that the compute to model all molecular interactions in a brain would take a really, really, really big computer. But I understand if your religious beliefs and cultural convictions necessitate big snake 🐍 to upload you, then I will refrain from passing judgement.

      • BigMuffN69@awful.systems
        link
        fedilink
        English
        arrow-up
        7
        ·
        23 hours ago

        One more comment, idk if ya’ll remember that forecast that came out in April(? iirc ?) where the thesis was the “time an AI can operate autonomously is doubling every 4-7 months.” AI-2027 authors were like “this is the smoking gun, it shows why are model is correct!!”

        They used some really sketchy metric where they asked SWEs to do a task, measured the time it took and then had the models do the task and said that the model’s performance was wherever it succeeded at 50% of the tasks based on the time it took the SWEs (wtf?) and then they drew an exponential curve through it. My gut feeling is that the reason they choose 50% is because other values totally ruin the exponential curve, but I digress.

        Anyways they just did the metrics for Claude 4, the first FrOnTiEr model that came out since they made their chart and… drum roll no improvement… in fact it performed worse than O3 which was first announced last December (note instead of using the date O3 was announced in 2024, they used the date where it was released months later so on their chart it make ‘line go up’. A valid choice I guess, but a choice nonetheless.)

        This world is a circus tent, and there still aint enough room for all these fucking clowns.

    • Soyweiser@awful.systems
      link
      fedilink
      English
      arrow-up
      12
      ·
      1 day ago

      Please, do not rid me of this troublesome priest despite me repeatedly saying that he was a troublesome priest, and somebody should do something. Unless you think it is ethical to do so.

    • V0ldek@awful.systems
      link
      fedilink
      English
      arrow-up
      7
      ·
      1 day ago

      Just the usual stuff religions have to do to maintain the façade, “this is all true but gee oh golly do NOT live your life as if it was because the obvious logical conclusions it leads to end in terrorism”

    • scruiser@awful.systems
      link
      fedilink
      English
      arrow-up
      11
      ·
      2 days ago

      The hidden prompt is only cheating if the reviewers fail to do their job right and outsource it to a chatbot, it does nothing to a human reviewer actually reading the paper properly. So I won’t say it’s right or ethical, but I’m much more sympathetic to these authors than to reviewers and editors outsourcing their job to an unreliable LLM.

      • HedyL@awful.systems
        link
        fedilink
        English
        arrow-up
        11
        ·
        1 day ago

        It’s almost as if teachers were grading their students’ tests using a dice, and then the students tried manipulating the dice (because it was their only shot at getting better grades), and the teachers got mad about that.

    • HedyL@awful.systems
      link
      fedilink
      English
      arrow-up
      9
      ·
      2 days ago

      This is, of course, a fairly blatant attempt at cheating. On the other hand: Could authors ever expect a review that’s even remotely fair if reviewers outsource their task to a BS bot? In a sense, this is just manipulating a process that would not have been fair either way.

      • YourNetworkIsHaunted@awful.systems
        link
        fedilink
        English
        arrow-up
        6
        ·
        2 days ago

        I’ve had similar thoughts about AI in other fields. The untrustworthiness and incompetence of the bot makes the whole interaction even more adversarial than it is naturally.

    • TinyTimmyTokyo@awful.systems
      link
      fedilink
      English
      arrow-up
      7
      ·
      2 days ago

      What I don’t understand is how these people didn’t think they would be caught, with potentially career-ending consequences? What is the series of steps that leads someone to do this, and how stupid do you need to be?

      • scruiser@awful.systems
        link
        fedilink
        English
        arrow-up
        8
        ·
        2 days ago

        They probably got fed up with a broken system giving up it’s last shreds of legitimacy in favor of LLM garbage and are trying to fight back? Getting through an editor and appeasing reviewers already often requires some compromises in quality and integrity, this probably just seemed like one more.

  • BlueMonday1984@awful.systemsOP
    link
    fedilink
    English
    arrow-up
    12
    ·
    2 days ago

    “Another thing I expect is audiences becoming a lot less receptive towards AI in general - any notion that AI behaves like a human, let alone thinks like one, has been thoroughly undermined by the hallucination-ridden LLMs powering this bubble, and thanks to said bubble’s wide-spread harms […] any notion of AI being value-neutral as a tech/concept has been equally undermined. [As such], I expect any positive depiction of AI is gonna face some backlash, at least for a good while.”

    Me, two months ago

    Well, it appears I’ve fucking called it - I’ve recently stumbled across some particularly bizarre discourse on Tumblr recently, reportedly over a highly unsubtle allegory for transmisogynistic violence:

    You want my opinion on this small-scale debacle, I’ve got two thoughts about this:

    First, any questions about the line between man and machine have likely been put to bed for a good while. Between AI art’s uniquely AI-like sloppiness, and chatbots’ uniquely AI-like hallucinations, the LLM bubble has done plenty to delineate the line between man and machine, chiefly to AI’s detriment. In particular, creativity has come to be increasingly viewed as exclusively a human trait, with machines capable only of copying what came before.

    Second, using robots or AI to allegorise a marginalised group is off the table until at least the next AI spring. As I’ve already noted, the LLM bubble’s undermined any notion that AI systems can act or think like us, and double-tapped any notion of AI being a value-neutral concept. Add in the heavy backlash that’s built up against AI, and you’ve got a cultural zeitgeist that will readily other or villainise whatever robotic characters you put on screen - a zeitgeist that will ensure your AI-based allegory will fail to land without some serious effort on your part.

    • corbin@awful.systems
      link
      fedilink
      English
      arrow-up
      10
      ·
      2 days ago

      Humans are very picky when it comes to empathy. If LLMs were made out of cultured human neurons, grown in a laboratory, then there would be outrage over the way in which we have perverted nature; compare with the controversy over e.g. HeLa lines. If chatbots were made out of synthetic human organs assembled into a body, then not only would there be body-horror films about it, along the lines of eXistenZ or Blade Runner, but there would be a massive underground terrorist movement which bombs organ-assembly centers, by analogy with existing violence against abortion providers, as shown in RUR.

      Remember, always close-read discussions about robotics by replacing the word “robot” with “slave”. When done to this particular hashtag, the result is a sentiment that we no longer accept in polite society:

      I’m not gonna lie, if slaves ever start protesting for rights, I’m also grabbing a sledgehammer and going to town. … The only rights a slave has are that of property.

  • o7___o7@awful.systems
    link
    fedilink
    English
    arrow-up
    20
    ·
    edit-2
    3 days ago

    I’m going to put a token down and make a prediction: when the bubble pops, the prompt fondlers will go all in on a “stabbed in the back” myth and will repeatedly try to re-inflate the bubble, because we were that close to building robot god and they can’t fathom a world where they were wrong.

    The only question is who will get the blame.

    • David Gerard@awful.systemsM
      link
      fedilink
      English
      arrow-up
      1
      ·
      52 minutes ago

      In past tech bubbles, it was basically the VCs, the media hypesters and the liars in the companies. So the right people.

    • Architeuthis@awful.systems
      link
      fedilink
      English
      arrow-up
      12
      ·
      1 day ago

      I increasingly feel that bubbles don’t pop anymore, the slowly fizzle out as we just move on to the next one, all the way until the macro economy is 100% bubbles.

    • scruiser@awful.systems
      link
      fedilink
      English
      arrow-up
      9
      ·
      2 days ago

      The only question is who will get the blame.

      Isn’t it obvious? Us sneerers and the big name skeptics (like Gary Marcuses and Yann LeCuns) continuously cast doubt on LLM capabilities, even as they are getting within just a few more training runs and one more scaling of AGI Godhood. We’ll clearly be the ones to blame for the VC funding drying up, not years of hype without delivery.

      • David Gerard@awful.systemsM
        link
        fedilink
        English
        arrow-up
        1
        ·
        51 minutes ago

        it was me, I popped AI. I destroyed Twitter (and, in collateral damage, I blew up the United States), and those fuckers are next. You’re welcome.

    • fullsquare@awful.systems
      link
      fedilink
      English
      arrow-up
      8
      ·
      2 days ago

      nah they’ll just stop and do nothing. they won’t be able to do anything without chatgpt telling them what to do and think

      i think that deflation of this bubble will be much slower and a bit anticlimatic. maybe they’ll figure a way to squeeze suckers out of their money in order to keep the charade going

      • HedyL@awful.systems
        link
        fedilink
        English
        arrow-up
        4
        ·
        1 day ago

        maybe they’ll figure a way to squeeze suckers out of their money in order to keep the charade going

        I believe that without access to generative AI, spammers and scammers wouldn’t be able to successfully compete in their respective markets anymore. So at the very least, the AI companies got this going for them, I guess. This might require their sales reps to mingle in somewhat peculiar circles, but who cares?

        • fullsquare@awful.systems
          link
          fedilink
          English
          arrow-up
          5
          ·
          1 day ago

          i meant more like scamming true believers out of their money like happens with crypto, this is cfar deal currently. spam, as something nobody should or wants to spend their creative juices on, or for that matter interact in any way, seems a natural fit for automation with llms