Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • TinyTimmyTokyo@awful.systems · 21 minutes ago

    This one’s been making the rounds, so people have probably already seen it. But just in case…

    Meta did a live “demo” of their new AI.

  • swlabr@awful.systems · 6 hours ago

    Recently thought about how this one xkcd has probably done more recruiting for the rat community per unit effort spent making it than that 700k word salad.

    Where are we on xkcd? I haven’t looked at it regularly for over a decade now. Nothing personally against the author or the comic itself; I just completely deconverted from consuming nerd-celebrity content at some point in the past.

    • Architeuthis@awful.systems · 5 hours ago

      He’s kind of past his prime, I think; the humor has become alternately a bit too esoteric or a bit too obvious, and kind of stale in general. Nothing particularly objectionable about the author comes to mind otherwise.

  • BlueMonday1984@awful.systemsOP · 10 hours ago

    OT: Baldur Bjarnason’s lamented how his webdev feed has turned to complete shit:

    Between the direct and indirect support of fascism and the uncritical embrace of LLMs, the overwhelming majority of the dev sites in my feed reader have turned to an undifferentiated puddle of nonsense…

    …Two years ago these feeds (I never subscribed to any of the React grifters) were all largely posts on concrete problem-solving and, y’know, useful stuff. Useful dev discourse has collapsed into a tiny handful of blogs.

      • YourNetworkIsHaunted@awful.systems · 6 hours ago

        Word problems referring to aliens from cartoons. “Bobby on planet Glorxon has four strawberries, which are similar to but distinct from Earth strawberries, and Kleelax has seven…”

        I also wonder if you could create context breaks, or if they’ve hit a point where that isn’t as much of a factor. “A train leaves Athens, KY traveling at 45 mph. Another train leaves Paris, FL traveling at 50 mph. If the track is 500 miles long, how long is a train trip from Athens to Paris?”

        • BlueMonday1984@awful.systemsOP · 5 hours ago

          An LLM’s ability to fake solving word problems hinges on being able to crib the answer, so using aliens from cartoons (or automatically generating random names for objects/characters) will prove highly effective until AI corps can get the answers into their training data.

          As for context breaks, those will remain highly effective against LLMs pretty much forever - successfully working around a context break requires reasoning, which LLMs are categorically incapable of doing.

          Constantly and subtly twiddling with questions (ideally through automatic means) should prove effective as well - Apple got “reasoning” text extruders to flounder and fail at simple logic puzzles through such a method.
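The automatic-twiddling idea can be sketched in a few lines. Here is a hypothetical generator (the names, objects, and numbers are all invented for illustration, not taken from any real benchmark) that randomizes a problem template and computes the ground-truth answer programmatically, so a model's output can be checked even though the exact question never appears in any training data:

```python
import random

# Hypothetical sketch: generate word problems with randomized names and
# quantities, so the answer can't be cribbed from training data. The
# ground truth is computed programmatically for checking a model's output.
NAMES = ["Bobby", "Kleelax", "Zorp", "Mirelle"]
PLANETS = ["Glorxon", "Earth", "Vebbin"]
OBJECTS = ["strawberries", "glomfruits", "pebbles"]

def make_problem(rng: random.Random) -> tuple[str, int]:
    a, b = rng.sample(NAMES, 2)
    planet = rng.choice(PLANETS)
    obj = rng.choice(OBJECTS)
    x, y = rng.randint(2, 99), rng.randint(2, 99)
    question = (
        f"{a} on planet {planet} has {x} {obj}, and {b} has {y}. "
        f"How many {obj} do they have together?"
    )
    return question, x + y  # expected answer: computed, not memorized

rng = random.Random(42)
question, answer = make_problem(rng)
```

Re-seeding produces a fresh, never-before-seen variant each run, which is the point: a model that merely memorized answers has nothing to crib from.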

    • BigMuffN69@awful.systems · 7 hours ago

      Nice result, not too shocking after the IMO performance. A friend of mine told me that this particular competition is highly time-constrained for human competitors, i.e., questions aren’t impossibly difficult per se, but some are time sinks that you simply avoid to get points elsewhere. (5 hours on 12 Qs is tight…)

      So when you are competing against a data center using a nuclear reactor vs 3 humans running on broccoli, the claims of superhuman performance definitely require an * attached to them.

    • BlueMonday1984@awful.systemsOP · 10 hours ago

      Also accidentally posted in an old thread:

      Hot take: If a text extruder’s winning gold medals at your contest, that’s not a sign the text extruder’s good at something, that’s a sign your contest is worthless for determining skill.

  • o7___o7@awful.systems · 10 hours ago

    Many of our favorite people abuse meth and meth-adjacent substances. In the long term, this behavior visibly degrades dental health.

    Therefore, it won’t be long until we witness actual real-life cases of smartmouth.

  • corbin@awful.systems · 23 hours ago

    Some of our younger readers might not be fully inoculated against high-control language. Fortunately, cult analyst Amanda Montell is on Crash Course this week with a 45min lecture introducing the dynamics of cult linguistics. For example, describing Synanon attack therapy, Youtube comments, doomscrolling, and maybe a familiar watering hole or two:

    You know when people can’t stop posting negative or conspiratorial comments, thinking they’re calling someone out for some moral infraction, when really they’re just aiming for clout and maybe catharsis?

  • flere-imsaho@awful.systems · 1 day ago

    david heinemeier hansson of ruby on rails fame decided to post a white supremacist screed with a side of transphobia, because now he doesn’t need to pretend anymore. it’s not surprising, he was heading this way for a while, but seeing the naked apologia for fascism is still shocking to me.

    any reasonable open source project he participates in should immediately cut ties with the fucker. (i’m not holding my breath waiting, though.)

    • CinnasVerses@awful.systems · 1 day ago

      The commentator who thinks that USD 120k / year is a poor income for someone with a PhD makes me sad. That is what you earn if you become a professor of physics at a research university or get a good postdoc, but she aged out of all of those jobs and was stuck on poorly paid short-term contracts. There are lots of well-paid things that someone with a PhD in physics can do if she is willing to network and work for it, but she chose “rogue intellectual.”

      A German term to look up is WissZeitVG, but many academic jobs in many countries are only offered to people no more than x years after receiving their PhD (yep, this discriminates against women, the disabled, and those with sick spouses or parents).

  • Soyweiser@awful.systems · 2 days ago

    Was reading some science fiction from the ’90s, and the AI/AGI said ‘I’m an analog computer, just like you; I’m actually really bad at math.’ And I wonder how much damage these ideas did (the other being that there are computer types that can do more/different things. Not sure if analog Turing machines provide any new capabilities that digital TMs don’t, but I leave that question for the smarter people in the subject of theoretical computer science).

    The idea that a smart computer will be worse at math (which makes sense from a storytelling perspective, because a smart AI who can also do math super well is gonna be hard to write) now leads people who’ve read enough science fiction to see the machine that can’t count or run Doom and go ‘this is what they predicted!’

    Not a sneer just a random thought.

    • corbin@awful.systems · 1 day ago

      It’s because of research in the mid-80s leading to Moravec’s paradox — sensorimotor stuff takes more neurons than basic maths — and Sharp’s 1983 international release of the PC-1401, the first modern pocket computer, along with everybody suddenly learning about Piaget’s research with children. By the end of the 80s, AI research had accepted that the difficulty with basic arithmetic tasks must be in learning simple circuitry which expresses those tasks; actually performing the arithmetic is easy, but discovering a working circuit can’t be done without some sort of process that reduces intermediate circuits, so the effort must also be recursive in the sense that there are meta-circuits which also express those tasks. This seemed to line up with how children learn arithmetic: a child first learns to add by counting piles, then by abstracting to symbols, then by internalizing addition tables, and finally by specializing some brain structures to intuitively make leaps of addition. But sometimes these steps result in wrong intuition, and so a human-like brain-like computer will also sometimes be wrong about arithmetic too.

      As usual, this is unproblematic when applied to understanding humans or computation, but not a reasonable basis for designing a product. Who would pay for wrong arithmetic when they could pay for a Sharp or Casio instead?

      Bonus: Everybody in the industry knew how many transistors were in Casio and Sharp’s products. Moravec’s paradox can be numerically estimated. Moore’s law gives an estimate for how many transistors can be fit onto a chip. This is why so much sci-fi of the 80s and 90s suggests that we will have a robotics breakthrough around 2020. We didn’t actually get the breakthrough IMO; Moravec’s paradox is mostly about kinematics and moving a robot around in the world, and we are still using the same kinematic paradigms from the 80s. But this is why bros think that scaling is so important.

      • Soyweiser@awful.systems · 1 day ago

        Could be, not sure the science fiction authors thought this much about it. (Or if the thing I was musing about is even real and not just a coincidence that I read a few works in which it is a thing). Certainly seems likely that this sort of science is where the idea came from.

        Moravec’s Paradox

        Had totally forgotten the name of that. (Being better at remembering random meme stuff than names of concepts like this, or a lot of names in general, is a curse, and also a source of imposter syndrome.) But I recall having read the Wikipedia page on it before. (Moravec was also the guy who thought of bush robots; wonder if that idea survived the more recent developments in nanotechnology.)

        Rodney Brooks’ wiki page on AI was amusing.

    • lagrangeinterpolator@awful.systems · 1 day ago

      Not sure if analog Turing machines provide any new capabilities that digital TMs don’t, but I leave that question for the smarter people in the subject of theoretical computer science

      The general idea among computer scientists is that analog TMs are not more powerful than digital TMs. The supposed advantage of an analog machine is that it can store real numbers that vary continuously while digital machines can only store discrete values, and a real number would require an infinite number of discrete values to simulate. However, each real number “stored” by an analog machine can only be measured up to a certain precision, due to noise, quantum effects, or just the fact that nothing is infinitely precise in real life. So, in any reasonable model of analog machines, a digital machine can simulate an analog value just fine by using enough precision.

      There aren’t many formal proofs that digital and analog are equivalent, since any such proof would depend on exactly how you model an analog machine. Here is one example.
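As a toy illustration of the precision argument (not any formal model of analog computation), suppose measurements of an analog value are only reliable to within some noise floor eps. Then a digital machine needs only about log2(1/eps) bits to store an approximation that is indistinguishable from the "true" real number:

```python
import math

# Toy illustration (not a formal model): an analog "real" that can only
# be measured to within noise eps is indistinguishable from its nearest
# digital approximation using about log2(1/eps) bits of precision.
def bits_needed(eps: float) -> int:
    return math.ceil(math.log2(1.0 / eps))

def quantize(x: float, bits: int) -> float:
    # Round x to the nearest multiple of 2**-bits.
    scale = 2 ** bits
    return round(x * scale) / scale

eps = 1e-6                 # assumed measurement noise floor
bits = bits_needed(eps)    # 20 bits suffice for eps = 1e-6
x = math.pi / 4            # some "analog" value
assert abs(quantize(x, bits) - x) <= eps
```

Halving the noise floor costs exactly one more bit, which is why finite measurement precision, not the continuity of the reals, is the operative constraint.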

      Quantum computers are in fact (believed to be) more powerful than classical digital TMs in terms of efficiency, but the reasons for why they are more powerful are not easy to explain without a fair bit of math. This causes techbros to get some interesting ideas on what they think quantum computers are capable of. I’ve seen enough nonsense about quantum machine learning for a lifetime. Also, there is the issue of when practical quantum computers will be built.

      • Soyweiser@awful.systems · 1 day ago

        Thanks. I know some complexity theory, but not enough. (Enough to know it wasn’t gonna be my thing).

    • froztbyte@awful.systems · 2 days ago

      this is one of those things that’s, in a narrative sense, a great way to tell a story, while being completely untethered from fact/reality. and that’s fine! stories have no obligation to be based in fact!

      to put a very mild armchair analysis about it forward: it’s playing on the definition of the conceptual “smart” computer, as it relates to human experience. there’s been a couple of other things in recent history that I can think of that hit similar or related notes (M3GAN, the whole “omg the AI tricked us (and then the different species with a different neurotype and capability noticed it!)” arc in ST:DIS, the last few Mission Impossible films, etc). it’s one of those ways in which art and stories tend to express “grappling with $x to make sense of it”

      The idea that a smart computer will be worse at math (which makes sense from a storytelling perspective, because a smart AI who can also do math super well is gonna be hard to write)

      personally speaking, one of the ways about it that I find most jarring is when the fantastical vastly outweighs anything else purely for narrative reasons; so much so that it’s a 4th-wall break for me in terms of what the story means to convey. I reflect on this somewhat regularly, as it’s a rather cursed rabbithole that recurs: “is it my knowledge of this domain that’s spoiling my enjoyment of this thing, or is the story simply badly written?” is the question that comes up, and it’s surprisingly varied and complicated in its answering

      on the whole I think it’s often good/best to keep in mind that scifi is often an exploration and a pressure valve, but that it’s also worth keeping an eye on how much it’s a pressure valve. too much of the latter, and something™ is up

      • Soyweiser@awful.systems · 1 day ago

        Ow yeah, the way it was used in this story also made sense, just not in a computer-science way. It felt a bit like how Gibson famously had never used a modem before he wrote his cyberpunk series.

        • Charlie Stross@wandering.shop · 1 day ago

          @Soyweiser @techtakes You misremembered: Gibson wrote his early stories and Neuromancer on a typewriter; he didn’t own a computer until he bought one with the royalties (an Apple IIc, which then freaked him out by making graunching noises at first; he had no idea it needed a floppy disk inserting).

          • Soyweiser@awful.systems · 1 day ago

            Thanks! I should have looked up the whole quote, but I just made a quick reply. I knew I had worded it badly and gotten it wrong, but just didn’t do anything about it. My bad.

    • BlueMonday1984@awful.systemsOP · 2 days ago

      This isn’t an idea I’d heard of until you mentioned it, so it likely hasn’t got much purchase in the public consciousness. (Intuitively speaking, a computer which sucks at maths isn’t a good computer, let alone AGI material.)

      • Soyweiser@awful.systems · 2 days ago

        Yeah, I was also just wondering, as obviously what I read is not really typical of what the average public consumes. Can’t think of any place where this idea spread in non-written science fiction, for example, with one exception being the predictions of C-3PO, who always seems to be wrong. But he is intended as a comedic sidekick. (Him being wrong can also be seen as just the lack of value in calculating odds like that, esp. in a universe with The Force.)

        But yes, not likely to be a big thing indeed.

  • PMMeYourJerkyRecipes@awful.systems · 2 days ago

    Getting pretty far afield here, but goddamn Matt Yglesias’s new magazine sucks:

    The case for affirmative action for conservatives

    “If we cave in and give the right exactly what they want on this issue, they’ll finally be nice to us! Sure, you might think based on the last 50,000 times we’ve tried this strategy that they’ll just move the goalposts and demand further concessions, but then they’ll totally look like hypocrites and we’ll win the moral victory, which is what actually matters!”

    • Architeuthis@awful.systems · 2 days ago

      In collaboration with cryptocurrency outfits Coinbase, MetaMask, and the Ethereum foundation, Google also produced an extension that would integrate the cryptocurrency-oriented x402 protocol, allowing for AI-driven purchasing from crypto wallets.

      what could possibly go wrong

      In either case, the goal is to maintain an auditable trail that can be reexamined in cases of fraud.

      Which is a thing that you only need to worry about if you use these types of agents.

      Which in any case you can’t, because

      The protocol is built for a future in which AI agents routinely shop for products on customers’ behalf and engage in complex real-time interactions with retailers’ AI agents.

      • swlabr@awful.systems · 2 days ago

        roko’s basilisk but instead of simulating torture it’s simulating mundane purchases. Broko’s Grocerlist

        • nightsky@awful.systems · 1 day ago

          You’ll have to endlessly scroll Amazon and decide whether to buy the identical product from brand YDAKVKR or BNRTGRIV, reading ALL the fake reviews.

      • froztbyte@awful.systems · 1 day ago

        The protocol is built for a future in which AI agents routinely shop for products on customers’ behalf

        as I was ranting in dm earlier elsewhere, the part about this that especially fucks me off is how much of this is not just simply unnecessary but also strictly worse than what we already used to have!

        ~15y ago the entire bloody internet was awash in APIs and accessible interactions! hell, it’s the whole reason shit like Yahoo Pipes and IFTTT became a thing!

        (and then after that ~everyone made fucking fences to wall their gardens because they want to Capture Users! to this day I still don’t know if it could’ve gone any other way under how capitalism operates, but fuck it sucks.)

        meanwhile so many people from the last 10~15y or so (both those who’ve come up Touching Computers and casual users; I typically refer to them as the Cloud Generation) don’t even have a conception of doing it any other way but The Billable Platform Way. I have long suspected that this won’t hold out (it’s a truism that at some threshold people will start asking “wait, why am I paying for this?”) and I am heartened by seeing some indicators of this starting to happen, but… fuck. there’s been so much damage from years of this shit

        I still stay hopeful for change (esp. because this current way can’t hold), but I also grimace about what’s coming in the near future (because I know that a fair number of these platforms will be cognizant of the same problem)

        • YourNetworkIsHaunted@awful.systems · 5 hours ago

          Especially considering that the whole “your AI will negotiate with theirs” speaks to the kind of algorithmic price discrimination that you see in Uber and the like, where the system is designed specifically to maximize how much you’re willing and able to pay for a ride and minimize how much the driver is willing to accept for it. Hardcore techno libertarians want nothing more than to make it impossible for anyone to make meaningful informed choices about their lives that might prevent them from being taken advantage of by hardcore techno libertarians.

      • Soyweiser@awful.systems · 2 days ago

        what could possibly go wrong

        Unrelated to this specific topic, but more cryptocurrency fails: this reminds me of hardware wallets which show information about the transaction on the wallet itself. Which seems smart, so you can make sure the data from your perhaps-compromised machine is correct. Only, the problem with these wallets was that they didn’t understand smart contracts, so if you got a smart contract you could still get hacked this way, because the information on the hardware wallet didn’t make sense. (There were fixes for this, but I think most people only really went in to fix it after the North Koreans made off with billions of fake coins.)