Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • TinyTimmyTokyo@awful.systems · 5 hours ago

    Daniel Koko’s trying to figure out how to stop the AGI apocalypse.

    How might this work? Install TTRPG aficionados at the chip fabs and tell them to roll a saving throw.

    Similarly, at the chip production facilities, a committee of representatives stands at the end of the production line basically and rolls a ten-sided die for each chip; chips that don’t roll a 1 are destroyed on the spot.
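
    For scale, here’s a minimal back-of-the-envelope sketch of what that policy does to a fab (the monthly output number is invented; only the 1-in-10 survival rate comes from the quoted proposal):

    ```python
    # d10 chip-culling, as proposed: chips that don't roll a 1 are destroyed.
    import random

    SURVIVAL_P = 1 / 10        # keep only on a natural 1
    monthly_output = 100_000   # hypothetical fab output, chips/month

    # Exact expectation: 90% of all chips destroyed on the spot.
    print(f"expected survivors: {monthly_output * SURVIVAL_P:.0f}")

    # Simulated, since the proposal insists on literal dice.
    survived = sum(random.randint(1, 10) == 1 for _ in range(monthly_output))
    print(f"simulated survivors: {survived}")
    ```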

    And if that doesn’t work? Koko ultimately ends up pretty much where Big Yud did: bombing the fuck out of the fabs and the data centers.

    “For example, if a country turns out to have a hidden datacenter somewhere, the datacenter gets hit by ballistic missiles and the country gets heavy sanctions and demands to allow inspectors to pore over other suspicious locations, which if refused will lead to more missile strikes.”

    • BlueMonday1984@awful.systems · 19 minutes ago

      Similarly, at the chip production facilities, a committee of representatives stands at the end of the production line basically and rolls a ten-sided die for each chip; chips that don’t roll a 1 are destroyed on the spot.

      Ah, yes, artificially kneecap chip fabs’ yields, I’m sure that will go over well with the capitalist overlords who own them

  • ________@awful.systems · 12 hours ago

    Ian Lance Taylor (of GOLD, Go, and other tech fame) had a take on chatbots being AGI that I was glad to see from an influential person in computing. https://www.airs.com/blog/archives/673

    The summary is that chatbots are not AGI, that using the current AI wave as the usher to AGI is not it, and that he dislikes, in a very polite way, that chatbot LLMs are seen as AI.

    Apologies if this was posted when published.

    • bitofhope@awful.systems · 7 hours ago

      The whole internet loves Éspèrature Trouvement, the grumpy old racist! *5 seconds later* We regret to inform you the racist is not that old and actually has a pretty normal name. Also don’t look up his runescape username.

    • maol@awful.systems · 13 hours ago

      Fucking hell. Not the most important part of the story, but his elaborate lies about being Jewish are very very weird. Kind of like white Americans pretending that they’re Cherokee I guess?

      • TinyTimmyTokyo@awful.systems · 11 hours ago

        It’s not that weird when you understand the sharks he swims with. Race pseudoscientists routinely peddle the idea that Ashkenazi Jews have higher IQs than any other ethnic or racial group. Scoot Alexander and Big Yud have made this claim numerous times. Lasker pretending to be a Jew makes more sense once you realize this.

  • nfultz@awful.systems · 2 days ago

    https://www.profgalloway.com/ice-age/ Good post until I hit the below:

    Instead of militarizing immigration enforcement, we should be investing against the real challenge: AI. The World Economic Forum says 9 million jobs globally may be displaced in the next five years. Anthropic’s CEO warns AI could eliminate half of all entry-level white-collar jobs. Imagine the population of Greece storming the shores of America and taking jobs (even jobs Americans actually want), as they’re willing to work 24/7 for free. You’ve already met them. Their names are GPT, Claude, and Gemini.

    Having a hard time imagining 300 but AI myself, Scott. Could we like, not shoehorn AI into every other discussion?

    • Soyweiser@awful.systems · 2 days ago (edited)

      Iirc Galloway was a pro-cryptocurrency guy, so this tracks.

      E: imagine if the 3d printer people had the hype machine behind them like this. ‘China better watch out, soon all manufacturing of products will be done by people at home’. Meanwhile China: [Laughs in 大跃进 (Great Leap Forward)].

      • mlen@awful.systems · 16 hours ago

        I think 3D printing never picked up because it’s one of those things that empower people, i.e. to repair stuff or build their own things, so the number of opportunities to grift seems smaller (although I’m probably underestimating it).

        Most of the recently hyped technologies had goals that were the exact opposite of empowering the masses.

        • swlabr@awful.systems · 13 hours ago

          Tangential: I’ve heard that there are 3D printer people who print junk and sell it. This would not be much of a problem if they didn’t pollute the spaces they operate in. The example I’ve heard of is artist alleys at conventions: a 3D printer person will set up a stall and sell plastic models of dragons or pokemon or whatever. Everything is terrible!

          • BlueMonday1984@awful.systems · 12 minutes ago

            Tangential: I’ve heard that there are 3D printer people that print junk and sell them. This would not be much of a problem if they didn’t pollute the spaces they operate in.

            So, essentially AI slop, but with more microplastics. Given the 3D printer bros are much more limited in their ability to pollute their spaces (they have to pay for filament/resin, they’re physically limited in where they can pollute, and they produce slop much slower than an LLM), they’re hopefully easier to deal with.

      • nfultz@awful.systems · 17 hours ago

        I liked his stuff on WeWork back in the day. Funny how he could see one tech grift really clearly and fall for another. Then again, WeWork is in the black these days. Anyway, I think Galloway pivoted (apologies) to Men’s Rights lately; and he also gave some money to UCLA Extension (i.e. not the main campus), which is a bit hard to interpret.

        • Soyweiser@awful.systems · 1 day ago

          Yeah, but we never got that massive hype cycle for 3d printers. Which in a way is a bit odd, as it could have happened. Nanomachines! Star Trek replicators! (Getting a bit offtopic from Galloway being a cryptobro.)

          • scruiser@awful.systems · 17 hours ago

            I can imagine it clearly… a chart showing minimum feature size decreasing over time (using cherry-picked data points) with a dotted-line projection of when 3d printers would get down to nanotech scale. 3d-printer-related companies would warn of the dangers of future nanotech and ask for legislation regulating it (with the language of the legislation completely failing to affect current 3d printing technology). Everyone would be buying 3d printers at home, and lots of shitty startups would be selling crappy 3d printed junk.

  • gerikson@awful.systems · 2 days ago (edited)

    Here’s an example of normal people using Bayes correctly (rationally assigning probabilities and acting on them) while rats Just Don’t Get Why Normies Don’t Freak Out:

    For quite a while, I’ve been quite confused why (sweet nonexistent God, whyyyyy) so many people intuitively believe that any risk of a genocide of some ethnicity is unacceptable while being… at best lukewarm against the idea of humanity going extinct.

    (Dude then goes on to try to game-theorize this, I didn’t bother to poke holes in it)

    The thing is, genocides have happened, and people around the world are perfectly happy to advocate for them in diverse situations. Probability-wise, the risk of genocide somewhere is very close to 1, while the risk of “omnicide” is much closer to zero. If you want to advocate for eliminating something, working to eliminate the risk of genocide is much more rational than working to eliminate the risk of everyone dying.

    At least one commenter gets it:

    Most people distinguish between intentional acts and shit that happens.

    (source)

    Edit: never read the comments (again). The commenter referenced above obviously didn’t feel like a pithy one-liner adhered to the LW ethos, and instead added an addendum wondering why people were more upset about police brutality killing people than traffic fatalities. Nice “save”, dipshit.

    • lagrangeinterpolator@awful.systems · 2 days ago

      Hmm, should I be more worried and outraged about genocides that are happening at this very moment, or some imaginary scifi scenario dreamed up by people who really like drawing charts?

      One of the ways the rationalists try to rebut this is through the idiotic dust specks argument. Deep down, they want to smuggle in the argument that their fanciful scenarios are actually far more important than real life issues, because what if their scenarios are just so bad that their weight overcomes the low probability that they occur?

      (I don’t know much philosophy, so I am curious about philosophical counterarguments to this. Mathematically, I can say that the more they add scifi nonsense to their scenarios, the more that reduces the probability that they occur.)
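
      (To put the conjunction point in toy form, with every number invented: each extra detail multiplies in another factor ≤ 1, and the only countermove is to crank the claimed disutility, which is the whole trick.)

      ```python
      # Each sci-fi detail is one more conjunct; conjunctions only lose mass.
      details = {
          "AGI is built this decade": 0.10,
          "it promptly goes rogue": 0.10,
          "it can self-improve without limit": 0.05,
          "nobody anywhere can unplug it": 0.10,
      }

      p = 1.0
      for claim, prob in details.items():
          p *= prob
          print(f"...and {claim!r}: cumulative P = {p:.0e}")

      # The Pascal's-mugging move: claim a disutility big enough to swamp any P.
      claimed_disutility = 1e30  # arbitrarily large, which is the point
      print(f"'expected' harm: {p * claimed_disutility:.0e}")
      ```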

      • YourNetworkIsHaunted@awful.systems · 1 day ago

        You know, I hadn’t actually connected the dots before, but the dust speck argument is basically yet another ostensibly-secular reformulation of Pascal’s wager. Only instead of Heaven being infinitely good if you convert there’s some infinitely bad thing that happens if you don’t do whatever Eliezer asks of you.

      • fullsquare@awful.systems · 2 days ago

        reverse dust specks: how many LWers would we need to permanently deprive of access to internet to see rationalist discourse dying out?

    • Soyweiser@awful.systems · 2 days ago

      Recently, I’ve realized that there is a decent explanation for why so many people believe that - if we model them as operating under a strict zero-sum game model of the world… ‘everyone loses’ is basically an incoherent statement - as a best approximation it would either denote no change and therefore be morally neutral, or an equal outcome, and would therefore be preferable to some.

      Yes, this is why people think that. This is a normal thought to think others have.

      • bitofhope@awful.systems · 1 day ago

        Here’s my unified theory of human psychology, based on the assumption most people believe in the Tooth Fairy and absolutely no other unstated bizarre and incorrect assumptions no siree!

      • zogwarg@awful.systems · 2 days ago

        I mean if you want to be exceedingly generous (I sadly have my moments), this is actually remarkably close to the “intentional acts” and “shit happens” distinction, in a perverse Rationalist way. ^^

        • Soyweiser@awful.systems · 2 days ago

          That’s fair, if you want to be generous; if you’re not going to be, I’d say there are still conceptually large differences between the quote and “shit happens”. But yes, you are right. If only they had listened to Scott when he said “talk less like robots”.

  • Soyweiser@awful.systems · 2 days ago

    Somebody found a relevant reddit post:

    Dr. Casey Fiesler @cfiesler.bsky.social (who has clippy earrings in a video!) writes: “This is fascinating: reddit link

    Someone “worked on a book with ChatGPT” for weeks and then sought help on Reddit when they couldn’t download the file. Redditors helped them realize ChatGPT had just been roleplaying/lying and there was no file/book…”

    • blakestacey@awful.systems (OP) · 2 days ago

      After understanding a lot of things it’s clear that it didn’t. And it fooled me for two weeks.

      I have learned my lesson and now I am using it to generate one page at a time.

      qu1j0t3 replies:

      that’s, uh, not really the ideal takeaway from this lesson

    • ebu@awful.systems · 2 days ago (edited)

      you have to scroll through the person’s comments to find it, but it does look like they did author the body of the text and uploaded it as a docx into ChatGPT. so points for actually creating something, unlike the AI bros

      it looks like they tried to use ChatGPT to improve narration. to what degree the token smusher has decided to rewrite their work in the smooth, recycled plastic feel we’ve all come to know and despise remains unknown

      they did say they are trying to get it to generate illustrations for all 700 pages, and moreover appear[ed] to believe it can “work in the background” on individual chapters with no prompting. they do seem to have been educated on the folly of expecting this to work, but as blakestacey’s other reply pointed out, they appear to now be just manually prompting one page at a time. godspeed

      • Soyweiser@awful.systems · 1 day ago

        They have now deleted their post, and I assume a lot of others, but they also claim they have no time to really write and just wanted a collection of stories for their kid(s). Which doesn’t make sense: creating 700 pages of kids’ stories is a lot of work, even if you let a bot improve the flow. Unless they just stole a book of children’s stories from somewhere. (I know these books exist, as a child of one of my brothers tricked me into reading two stories from one.)

    • fullsquare@awful.systems · 2 days ago

      looks like there’s either a downvote brigade keeping critical comments at +1 or 0, or reddit’s brigading countermeasures went up in defense of the wittle promptfondler

  • o7___o7@awful.systems · 3 days ago (edited)

    Better Offline was rough this morning in some places. Props to Ed for keeping his cool with the guests.

    • TinyTimmyTokyo@awful.systems · 2 days ago

      Oof, that Hollywood guest (Brian Koppelman) is a dunderhead. “These AI layoffs actually make sense because of complexity theory”. “You gotta take Eliezer Yudkowsky seriously. He predicted everything perfectly.”

      I looked up his background, and it turns out he’s the guy behind the TV show “Billions”. That immediately made him make sense to me. The show attempts to lionize billionaires and is ultimately undermined not just by its offensive premise but by the world’s most block-headed and cringe-inducing dialog.

      Terrible choice of guest, Ed.

      • lagrangeinterpolator@awful.systems · 2 days ago

        I study complexity theory and I’d like to know what circuit lower bound assumption he uses to prove that the AI layoffs make sense. Seriously, it is sad that the people in the VC techbro sphere are thought to have technical competence. At the same time, they do their best to erode scientific institutions.

        • BigMuffN69@awful.systems · 2 days ago

          My hot take has always been that current Boolean-SAT/MIP solvers are probably pretty close to theoretical optimality for problems that are interesting to humans, and that AI, no matter how “intelligent”, will struggle to meaningfully improve them. Ofc I doubt that Mr. Hollywood (or Yud, for that matter) has actually spent enough time with classical optimization lore to understand this. Computer go FOOM, ofc.
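
          (For the curious: the core that modern CDCL solvers refine dates to 1962. A toy DPLL sketch, clauses as DIMACS-style signed ints; real solvers add clause learning, heuristics and restarts on top of this.)

          ```python
          def dpll(clauses, assignment=None):
              """Tiny DPLL: unit propagation plus branching."""
              if assignment is None:
                  assignment = {}
              changed = True
              while changed:  # unit propagation to a fixed point
                  changed = False
                  for clause in clauses:
                      if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                          continue  # clause already satisfied
                      free = [l for l in clause if abs(l) not in assignment]
                      if not free:
                          return None  # clause falsified: conflict
                      if len(free) == 1:  # unit clause forces a value
                          assignment[abs(free[0])] = free[0] > 0
                          changed = True
              variables = {abs(l) for c in clauses for l in c}
              unassigned = variables - assignment.keys()
              if not unassigned:
                  return assignment
              v = min(unassigned)
              for value in (True, False):  # branching: exponential worst case
                  result = dpll(clauses, {**assignment, v: value})
                  if result is not None:
                      return result
              return None

          # (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
          print(dpll([[1, 2], [-1, 3], [-2, -3]]))
          ```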

        • Soyweiser@awful.systems · 2 days ago

          The only way I can make the link between complexity theory and laying people off is by putting people into ‘can solve up to this level of problem’-style complexity classes (which regulars here should realize gets iffy fast). So I hope he explained it as more than that.

          • BlueMonday1984@awful.systems · 2 days ago

            The only complexity theory I know of is the one which tries to work out how resource-intensive certain problems are for computers, so this whole thing sounds iffy right from the get-go.

            • Soyweiser@awful.systems · 2 days ago

              Yeah but those resource-intensive problems can be fitted into specific classes of problems (P, NP, PSPACE etc), which is what I was talking about, so we are talking about the same thing.

              So under my imagined theory you can classify people as ‘can solve: [ P, NP, PSPACE, … ]’. Wonder what they will do with the P class. (Wait, what did Yarvin want to do with them again?)

              • lagrangeinterpolator@awful.systems · 2 days ago (edited)

                There’s really no good way to make any statements about what problems LLMs can solve in terms of complexity theory. To this day, LLMs, even the newfangled “reasoning” models, have not demonstrated that they can reliably solve computational problems in the first place. For example, LLMs cannot reliably make legal moves in chess and cannot reliably solve puzzles even when given the algorithm. LLM hypesters are in no position to make any claims about complexity theory.

                Even if we have AIs that can reliably solve computational tasks (or, you know, just use computers properly), it still doesn’t change anything in terms of complexity theory, because complexity theory concerns itself with all possible algorithms, and any AI is just another algorithm in the end. If P != NP, it doesn’t matter how “intelligent” your AI is, it’s not solving NP-hard problems in polynomial time. And if some particularly bold hypester wants to claim that AI can efficiently solve all problems in NP, let’s just say that extraordinary claims require extraordinary evidence.

                Koppelman is only saying “complexity theory” because he likes dropping buzzwords that sound good and doesn’t realize that some of them have actual meanings.
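
                (The verify/solve asymmetry, concretely, since it is the whole point: whoever or whatever produces a SAT certificate, checking it takes one polynomial-time pass. Clause format: signed ints, negative = negated variable.)

                ```python
                def verify_sat(clauses, assignment):
                    # Polynomial time: one pass over the formula.
                    return all(
                        any(assignment[abs(l)] == (l > 0) for l in clause)
                        for clause in clauses
                    )

                # (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
                formula = [[1, 2], [-1, 3], [-2, -3]]
                print(verify_sat(formula, {1: True, 2: False, 3: True}))  # True
                ```

                Finding that assignment is the (presumed) hard part; no amount of model “intelligence” changes the check, and unless P = NP nothing changes the search either.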

                • Soyweiser@awful.systems · 1 day ago

                  Yeah, but I was treating complexity theory as a loose theory misused by tech people in relation to ‘people who get fired’. (Not that I don’t appreciate your post, btw; I sadly have not seen any pro-AI people be real complexity-theory cranks re the capabilities. I have seen an anti be a complexity-theory crank, but only when I reread my own posts ;) )

      • BurgersMcSlopshot@awful.systems · 2 days ago

        Yeah, that guy was a real piece of work, and if I had actually bothered to watch The Bear before, I would stop doing so in favor of sending ChatGPT a video of me yelling in my kitchen and asking it if what is depicted is the plot of the latest episode.

  • HedyL@awful.systems · 3 days ago

    I have been thinking about the true cost of running LLMs (of course, Ed Zitron and others have written about this a lot).

    We take it for granted that large parts of the internet are available for free. Sure, a lot of it is plastered with ads, and paywalls are becoming increasingly common, but thanks to economies of scale (and a level of intrinsic motivation/altruism/idealism/vanity), it still used to be viable to provide information online without charging users for every bit of it. Same appears to be true for the tools to discover said information (search engines).

    Compare this to the estimated true cost of running AI chatbots, which (according to the numbers I’m familiar with) may be tens or even hundreds of dollars a month for each user. For this price, users would get unreliable slop, and this slop could only be produced from the (mostly free) information that is already available online while disincentivizing creators from producing more of it (because search engine driven traffic is dying down).

    I think the math is really abysmal here, and it may take some time to realize how bad it really is. We are used to big numbers from tech companies, but we rarely break them down to individual users.

    Somehow reminds me of the astronomical cost of each bitcoin transaction (especially compared to the tiny cost of processing a single payment through established payment systems).
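
    To make the per-user math concrete (placeholder numbers only; the “tens to hundreds of dollars” range is from above, and the subscription price is a typical one):

    ```python
    price = 20  # assumed monthly subscription price, $
    for true_cost in (40, 100, 200):  # assumed true cost per user per month, $
        subsidy = true_cost - price
        print(f"true cost ${true_cost}/mo -> provider eats ${subsidy}/user/mo "
              f"({subsidy / true_cost:.0%} of cost)")
    ```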

    • YourNetworkIsHaunted@awful.systems · 1 day ago

      The big shift in per-action cost is what always seems to be missing from the conversation. Like, in a lot of my experience the per-request cost is basically negligible compared to the overhead of running the service in general. With LLMs, not only do we see massive increases in overhead costs due to the training process necessary to build a usable model, each request that gets sent has a higher cost. This changes the scaling logic in ways that don’t appear to be getting priced in or planned for in discussions of the glorious AI technocapital future.
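
      A sketch of that scaling logic, with invented placeholder numbers:

      ```python
      def monthly_cost(fixed, per_request, n_requests):
          return fixed + per_request * n_requests

      n = 10_000_000  # hypothetical requests per month

      # Classic web service: overhead dominates, marginal cost is ~nothing.
      print(f"classic: ${monthly_cost(50_000, 0.00001, n):,.0f}/mo")

      # LLM-backed: training inflates the fixed term AND every request burns
      # nontrivial GPU time, so the per-request term grows with the user base.
      print(f"llm:     ${monthly_cost(500_000, 0.01, n):,.0f}/mo")
      ```

      With the classic service, scale amortizes the overhead away; with the LLM service, more traffic mostly means a bigger bill.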

      • HedyL@awful.systems · 1 day ago

        With LLMs not only do we see massive increases in overhead costs due to the training process necessary to build a usable model, each request that gets sent has a higher cost. This changes the scaling logic in ways that don’t appear to be getting priced in or planned for in discussions of the glorious AI technocapital future

        This is a very important point, I believe. I find it particularly ironic that the “traditional” Internet was fairly efficient precisely because many people were shown more or less the same content, and this fact also made it easier to carry out a certain degree of quality assurance. Now with chatbots, all this is being thrown overboard and extreme inefficiencies are being created, and apparently the AI hypemongers are largely ignoring that.

    • corbin@awful.systems · 3 days ago

      I’ve done some of the numbers here, but don’t stand by them enough to share. I do estimate that products like Cursor or Claude are being sold at roughly an 80-90% discount compared to what’s sustainable, which is roughly in line with what Zitron has been saying, but it’s not precise enough for serious predictions.

      Your last paragraph makes me think. We often idealize blockchains with VMs, e.g. Ethereum, as a global distributed computer, if the computer were an old Raspberry Pi. But it is Byzantine distributed; the (IMO excessive) cost goes towards establishing a useful property. If I pick another old computer with a useful property, like a radiation-hardened chipset comparable to a Gamecube or G3 Mac, then we have a spectrum of computers to think about. One end of the spectrum is fast, one end is cheap, one end is Byzantine, one end is rad-hardened, etc. Even GPUs are part of this; they’re not that fast, but can act in parallel over very wide data. In remarkably stark contrast, the cost of Transformers on GPUs doesn’t actually go towards any useful property! Anything Transformers can do, a cheaper more specialized algorithm could have also done.

    • besselj@lemmy.ca · 3 days ago

      My guess is that vibe-physics involves bruteforcing a problem until you find a solution. That method sorta works, but is wholly inefficient and rarely robust/general enough to be useful.

      • Architeuthis@awful.systems · 3 days ago

        Nah, he’s just talking to an LLM.

        “I’ll go down this thread with [Chat]GPT or Grok and I’ll start to get to the edge of what’s known in quantum physics and then I’m doing the equivalent of vibe coding, except it’s vibe physics,” Kalanick explained. “And we’re approaching what’s known. And I’m trying to poke and see if there’s breakthroughs to be had. And I’ve gotten pretty damn close to some interesting breakthroughs just doing that.”

        And I don’t think you can brute-force physics in general; having to experimentally confirm or disprove every random-ass intermediary hypothesis the brute-force generator comes up with seems like quite the bottleneck.

        • besselj@lemmy.ca · 3 days ago

          For sure. There’s an infinite amount of ways to get things wrong in math and physics. Without a fundamental understanding, all they can do is prompt-fondle and roll dice.

          • Soyweiser@awful.systems · 2 days ago

            They are not even rolling the dice. The bot is just humoring them; it apparently just defaults to eventually going ‘you are close to the edge of what is known, well done, keep going’.

      • Mii@awful.systems · 3 days ago

        If infinite monkeys with typewriters can compose Shakespeare, then infinite monkeys with slop machines can produce Einstein (but you need to pump in infinite amounts of money first into my CodeMonkeyfy startup, just in case).

  • BigMuffN69@awful.systems · 3 days ago (edited)

    Remember last week when that study on AI’s impact on development speed dropped?

    A lot of peeps’ takeaway from this little graphic was “see, impacts of AI on sw development are a net negative!” I think the real takeaway is that METR, the AI safety group running the study, is a motley collection of deeply unserious clowns pretending to do science, and their experimental setup is garbage.

    https://substack.com/home/post/p-168077291

    “First, I don’t like calling this study an “RCT.” There is no control group! There are 16 people and they receive both treatments. We’re supposed to believe that the “treated units” here are the coding assignments. We’ll see in a second that this characterization isn’t so simple.”

    (I am once again shilling Ben Recht’s substack.)
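
    (For anyone wondering what the design actually was: 16 developers, each measured under both conditions, i.e. a paired within-subjects comparison, not an RCT with a control group. A sketch of the matching analysis on simulated data; the numbers below are random and have nothing to do with METR’s.)

    ```python
    import random

    random.seed(0)
    n = 16
    base = [random.uniform(2, 6) for _ in range(n)]         # hours, no AI
    with_ai = [t * random.uniform(0.8, 1.5) for t in base]  # hours, with AI

    diffs = [a - b for a, b in zip(with_ai, base)]
    observed = sum(diffs) / n

    # Paired permutation test: under the null, each developer's sign is a coin flip.
    hits = 0
    for _ in range(20_000):
        stat = sum(d * random.choice((1, -1)) for d in diffs) / n
        if abs(stat) >= abs(observed):
            hits += 1
    print(f"mean diff {observed:+.2f} h, permutation p ~ {hits / 20_000:.3f}")
    ```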

    • YourNetworkIsHaunted@awful.systems · 3 days ago

      While I also fully expect the conclusion to check out, it’s also worth acknowledging that the actual goal for these systems isn’t to supplement skilled developers who can operate effectively without them, it’s to replace those developers either with the LLM tools themselves or with cheaper and worse developers who rely on the LLM tools more.

      • BigMuffN69@awful.systems · 2 days ago (edited)

        True. They aren’t building city-sized data centers and offering people 9-figure salaries for no reason. They are trying to front-load the cost of paying for labour for the rest of time.

    • TinyTimmyTokyo@awful.systems · 3 days ago (edited)

      When you look at METR’s web site and review the credentials of its staff, you find that almost none of them has any sort of academic research background. No doctorates as far as I can tell, and lots of rationalist junk affiliations.

    • David Gerard@awful.systems (mod) · 3 days ago

      oh yeah that was obvious when you see who they are and what they do. also, one of the large opensource projects was the lesswrong site lololol

      i’m surprised it’s as well constructed a study as it is even given that

  • scruiser@awful.systems · 3 days ago

    So recently (two weeks ago), I noticed Gary Marcus made a lesswrong account to directly engage with the rationalists. I noted it in a previous stubsack thread

    Predicting in advance: Gary Marcus will be dragged down by lesswrong, not lesswrong dragged up towards sanity. He’ll start to use lesswrong lingo and terminology and using P(some event) based on numbers pulled out of his ass.

    And sure enough, he has started talking about P(Doom). I hate being right. To be more than fair to him, he is addressing the scenario of Elon Musk or someone similar pulling off something catastrophic by placing too much trust in LLMs shoved into something critical. But he really should know better by now that using their lingo and their crit-hype terminology strengthens them.

    • Architeuthis@awful.systems · 3 days ago

      using their lingo and their crit-hype terminology strengthens them

      We live in a world where the US vice president admits to reading siskind AI fan fiction, so that ship has probably sailed.

      • Soyweiser@awful.systems · 2 days ago

        It has, but we don’t have to make it worse. We can create a small village that resists, like the one small village in Gaul that resisted the Roman occupation.

        • scruiser@awful.systems · 18 hours ago

          Yeah, that metaphor fits my feeling. And to extend the metaphor, I thought Gary Marcus was, if not a member of the village, at least an ally, but he doesn’t seem to actually see where the battle lines are. Like maybe to him hating on LLMs is just another way of pushing symbolic AI?

          • Soyweiser@awful.systems · 17 hours ago (edited)

            Could also just be environment; pretty hard to stay staunchly on one site if half the people around you are people you radically oppose.

            Also wonder about the ‘only game in town’ factor.

            • scruiser@awful.systems · 17 hours ago

              He knows the connectionists have basically won (insofar as you can construe competing scientific theories and engineering paradigms as winning or losing… which is kind of a bad framing), so that is why he is pushing the “neurosymbolic” angle so hard.

              (And I do think Gary Marcus is right that neurosymbolic approaches have been neglected by the big LLM companies because they are narrower and you can’t “guarantee” success just by dumping a lot of compute on them; you need actual domain expertise to do the symbolic half.)

        • bitofhope@awful.systems · 1 day ago

          I appreciate the reference, having read half a dozen Astérix albums in the last few days. I just hope our Alesia has yet to come.

    • V0ldek@awful.systems · 3 days ago

      It’s extremely annoying everywhere. GitHub’s updates were about AI for so fucking long that I stopped reading them, which means I now miss actually useful stuff until someone informs me of it months later.

      For example, did you know GitHub Actions now has really good free ARM runners? It’s amazing! I love it! Shame GitHub only bothers to tell me about their revolutionary features of “please spam me with useless PRs” and… make a pong game? What? Why would I want this?

    • nightsky@awful.systems · 4 days ago

      rsyslog goes “AI first”

      what

      Thanks for the “from now on stay away from this forever” warning. Reading that blog post is almost surreal (“how AI is shaping the future of logging”), I have to remind myself it’s a syslog daemon.

      • froztbyte@awful.systems · 4 days ago

        I would’ve stan’d syslog-ng but they’ve also been pulling some fuckery with docs again lately that’s making me anxious, so I’m very :|||||

    • BlueMonday1984@awful.systems · 4 days ago

      Potential hot take: AI is gonna kill open source

      Between sucking up a lot of funding that would otherwise go to FOSS projects, DDOSing FOSS infrastructure through mass scraping, and undermining FOSS licenses through mass code theft, the bubble has done plenty of damage to the FOSS movement - damage I’m not sure it can recover from.

        • BlueMonday1984@awful.systems · 4 days ago

          The deluge of fake bug reports is definitely something I should have noted as well, since that directly damages FOSS’ capacity to find and fix bugs.

          Baldur Bjarnason has predicted that FOSS is at risk of being hit by “a vicious cycle leading to collapse”, and security is a major part of his hypothesised cycle:

          1. Declining surplus and burnout leads to maintainers increasingly stepping back from their projects.

          2. Many of these projects either bitrot serious bugs or get taken over by malicious actors, who are highly motivated because they can’t rely on pervasive memory bugs anymore for exploits.

          3. OSS increasingly gets a reputation (deserved or not) for being unsafe and unreliable.

          4. That decline in users leads to even more maintainers stepping back.

          • gerikson@awful.systems · 4 days ago

            yeah but have you considered how much it’s worth that gramma can vibecode a todo app in seconds now???

      • ________@awful.systems · 4 days ago

        I remember popping into IRC or a mailing list to ask subsystem questions, to learn from the sources themselves how something works (or should work). Depending on the who, what, and where, I definitely had differing experiences, but overall I felt like there was typically a helpful person on the other side. Nowadays I fear the slop will make people a lot less willing to help when they are overwhelmed with AI-generated garbage patches or mails, losing some of the rose-tinted charm of open source.

  • nightsky@awful.systems · 5 days ago

    I need to rant about yet another SV tech trend which is getting increasingly annoying.

    It’s something that is probably less noticeable if you live in a primarily English-speaking region, but if not, there is this very annoying thing that a lot of websites from US tech companies do now, which is that they automatically translate content, without ever asking. So English is pretty big on the web, and many English websites are now auto-translated to German for me. And the translations are usually bad. And by that I mean really fucking bad. (And I’m not talking about the translation feature in web browsers, it’s the websites themselves.)

    Small example of a recent experience: I was browsing stuff on Etsy, and Etsy is one of the websites that do this now. Entire product pages, with titles and descriptions and everything, are auto-translated, without ever asking me if I want that.

    On a product page I then saw:

    Material: gefühlt

    This was very strange… because that makes no sense at all. “Gefühlt” is a form (participle) of the verb “fühlen”, which means “to feel”. It can be used in a past tense form of the verb.

    So, to make sense of this you first have to translate that back to English, the past tense “to feel” as “felt”. And of course “felt” can also mean a kind of fabric (which in German is called “Filz”), so it’s a word with more than one meaning in English. You know, words with multiple meanings, like most words in any language. But the brilliant SV engineers do not seem to understand that you cannot translate words without the context they’re in.
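
    The failure mode, reduced to its essence, is context-free lookup (toy code; the words are from the example above):

    ```python
    # A per-word table cannot know which "felt" it is holding.
    naive = {"felt": "gefühlt"}  # the past-tense-of-"feel" sense
    # The fabric sense of "felt" is "Filz"; picking between the two
    # requires the surrounding context ("Material: ..."), which a
    # word-by-word translation throws away.

    print("Material:", naive["felt"])  # "Material: gefühlt" -- nonsense
    ```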

    And this is not a singular experience. Many product descriptions on Etsy are full of such mistakes now, sometimes to the point of being downright baffling. And Ebay does the same now, and the translated product titles and descriptions are a complete shit show as well.

    And Youtube started replacing the audio of English videos by default with AI-auto-generated translations spoken by horrible AI voices. By default! It’s unbearable. At least there’s a button to switch back to the original audio, but I keep having to press it. And now Youtube Shorts is doing it too, except that the YT Shorts video player does not seem to have any button to disable it at all!

    Is it that unimaginable for SV tech that people speak more than one language? And that maybe you fucking ask before shoving a horribly bad machine translation into people’s faces?

    • antifuchs@awful.systems · 3 days ago

      Ooooh, that would explain a similarly weird interaction I had on a ticket-selling website, buying a streaming ticket to a live show of the German retro game discussion podcast Stay Forever: they translated the title of the event as “Bleib für immer am Leben” (“stay alive forever”), so I guess they named it “Stay Forever Live”? No way to know for sure, of course.

    • HedyL@awful.systems · 4 days ago

      Is it that unimaginable for SV tech that people speak more than one language? And that maybe you fucking ask before shoving a horribly bad machine translation into people’s faces?

      This really gets on my nerves too. They probably came up with the idea that they could increase time spent on their platforms and thus revenue by providing more content in their users’ native languages (especially non-English). Simply forcing it on everyone, without giving their users a choice, was probably the cheapest way to implement it. Even if this annoys most of their user base, it makes their investors happy, I guess, at least over the short term. If this bubble has shown us anything, it is that investors hardly care whether a feature is desirable from the users’ point of view or not.

      • fullsquare@awful.systems · 4 days ago

        if it’s opt-out, it also keeps use of the shitty ai dubbing high, thus manufacturing an artificial use case. it’s like gemini counting every google search as a single use of it

    • BlueMonday1984@awful.systems · 5 days ago

      Is it that unimaginable for SV tech that people speak more than one language? And that maybe you fucking ask before shoving a horribly bad machine translation into people’s faces?

      Considering how many are Trump bros, they probably consider getting consent to be Cuck Shit™ and treat hearing anything but English as sufficient grounds to call ICE.

    • Soyweiser@awful.systems · 4 days ago

      Ah, I’m not the only one; yes, very annoying. I wonder if there isn’t also a setting where they can ask the browser about the user’s preferred language. Like how you can change languages on a Windows install and some installers etc. follow that preferred language.
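
      (There is such a thing, actually: browsers send an Accept-Language header with every request, with preferences weighted by q-values; sites just have to bother reading it. A minimal dependency-free parse:)

      ```python
      def parse_accept_language(header):
          """Return language tags sorted by q-value, highest first."""
          langs = []
          for part in header.split(","):
              piece = part.strip()
              if ";q=" in piece:
                  tag, q = piece.split(";q=", 1)
                  langs.append((tag.strip(), float(q)))
              elif piece:
                  langs.append((piece, 1.0))
          return [tag for tag, q in sorted(langs, key=lambda x: -x[1])]

      # What a German user's browser typically sends:
      print(parse_accept_language("de-DE,de;q=0.9,en-US;q=0.7,en;q=0.6"))
      # ['de-DE', 'de', 'en-US', 'en']
      ```

      Serving a bad machine translation over the user’s stated preference is a product decision, not a technical limitation.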

    • gerikson@awful.systems · 5 days ago

      I found out about that too when I arrived at Reddit and it was translated to Swedish automatically.

      • nightsky@awful.systems · 5 days ago

        Yes, right, Reddit too! Forgot that one. When I visit there I use alternative Reddit front-ends now which luckily spare me from this.

    • istewart@awful.systems · 5 days ago

      An underappreciated 8th-season Star Trek: TNG episode where Data tries to get closer to humanity by creating an innovative new metamaterial out of memories of past emotions

    • fullsquare@awful.systems · 5 days ago (edited)

      aliexpress has done that since forever, but you can just set the display language once and you’re done. these ai dubs are probably the worst so far, but can be turned off by the uploader (it’s opt-out) (for now)