

To be fair to DnD, it is actually more sophisticated than the IQ fetishists: it has 3 stats for mental traits instead of 1!
If your decision theory can’t address weird, totally-plausible-in-the-near-future hypotheticals with omniscient God-AIs offering you money in boxes if you jump through enough cognitive hoops, what is it really good for?
It’s always the people you most expect.
It’s pretty screwed up that humblebragging about putting their own mother out of a job is a useful opening to selling a scam-service. At least the people who buy into it will get what they have coming?
Nice job summarizing the lore in only 19 minutes (I assume this post was aimed at providing full context to people just joining or at least relatively new to tracking all this… stuff).
Some snarky comments follow, not because it wasn’t a good summary or because it should have included them (all the asides you could add would easily double the length and leave a casual listener/reader more confused), but because I think they are funny and I need to vent.
You’ll see him quoted in the press as an “AI researcher” or similar.
Or decision theorist! With an entire one decision theory paper, which he didn’t bother getting through peer review because the reviewers wanted, like, actual context and an actual decision theory, not just hand-waves at paradoxes on the fringes of decision theory.
What Yudkowsky actually does is write blog posts.
He also writes fanfiction!
I’m not even getting to the Harry Potter fanfic, the cult of Ziz, or Roko’s basilisk today!
Yeah this rabbit hole is deep.
The goal of LessWrong rationality is so Eliezer Yudkowsky can live forever as an emulated human mind running on the future superintelligent AI god computer, to end death itself.
Yeah in hindsight the large number of ex-Christians it attracts makes sense.
And a lot of Yudkowsky’s despair is that his most devoted acolytes heard his warnings “don’t build the AI Torment Nexus, you idiots” and they all went off to start companies building the AI Torment Nexus.
He wrote a lot of blog posts about how smart and powerful the Torment Nexus would be, and how we really need to build the Anti-Torment Nexus, so if he had proper skepticism of Silicon Valley and Startup/VC Culture, he really should have seen this coming.
There was also a huge controversy in Effective Altruism last year when half the Effective Altruists were shocked to discover the other half were turbo-racists who’d invited literal neo-Nazis to Effective Altruism conferences. The pro-racism faction won.
I was mildly pleasantly surprised to see there was a solid half pushing back in the comments in response to the first Manifest, but it looks like the anti-racism faction didn’t get any traction to change anything, and the second Manifest conference was just as bad or worse.
I think the problem is that the author doesn’t want to demonize any of the actual ideologies that oppose TESCREALism, either explicitly or incidentally, because they’re more popular and powerful, and because, rather than being foundationally opposed to “Progress” as he defines it, they have their own specific principles that are harder to dismiss.
This is a good point. I’ll go even further and say a lot of the component ideologies of anti-TESCREALism are stuff this author might (at least nominally claim to) be in favor of, so they can’t name the specific ideologies.
I feel like lesswrong’s front page has what would be a neat concept in a science fiction story at least once a week. Like: what if an AGI had a constant record of its thoughts, but it learned to hide what it was really thinking in them with complex steganography! That’s a solid third-act twist of at least a B sci-fi plot, if not enough to carry a good story by itself. Except lesswrong is trying to get their ideas passed in legislation, and they are being used as the hype wing of the latest tech craze. And they only occasionally write actually fun stories, as opposed to polemic stories beating you over the head with their moral, or ten-thousand-word pseudo-academic blog posts.
That’s true. “Passing itself off as scientific” also describes Young Earth Creationism and Intelligent Design and various other pseudosciences. And in terms of who is pushing pseudoscience… the current US administration is undeniably right-wing and opposed to all mainstream science.
Also, I would at least partially disagree with this:
“Very few of the people making this argument are militant atheists who consider religion bad in of itself.”
I would identify as an atheist, if not a militant one. And looking at Emile Torres’ Wikipedia page, they are an atheist also. Judging by the uncommon occasions it comes up on sneerclub, I think a lot of us are atheist/agnostic. Just not, you know, “militant”. And in terms of political allegiance, a lot of the libertarians on lesswrong are excited for the tax cuts and war on woke of the Trump administration, even if it means cutting funding to all science and partnering up with completely batshit Fundamentalist Evangelicals.
Oh duh, I remember that meme now. With the people getting on the bus wearing weird white robe outfits?
I’d probably be exaggerating if I said that every time I looked under the hood of Wikipedia, it reaffirmed how I don’t have the temperament to edit there.
The lesswrongers hate dgerard’s Wikipedia work because they perceive it as calling them out, but if anything Wikipedia’s norms make his “call outs” downright gentle and routine.
“Yall are in a cult, and it is TESCREAL.”
So I know you were going for a snappy summary, but I think one of the important things to note is that the TESCREAL essay doesn’t call them a singular cult; it draws connections between the letters of the acronym, including shared inspirations, people spanning multiple letters of the acronym, common terminology, common ideological assumptions, and such.
I think a hypothetical more mature rationalist movement would acknowledge their historical and current influences and think critically about how they relate to them instead of just going nuh-uh. Like the relatively more reasonable EAs occasionally point out problematic trends in their movement and at least try to address them (not particularly effectually, but at least they aren’t all in total denial).
It feels like this person was mad at the TESCREAL label and decided to make a blog post going “nuh-uh, I know you are but what am I”… except they have none of the academic ability of the TESCREAL authors, so they just sort of pile on labels and ideologies without properly showing any causal or ideological relationship (like the TESCREAL authors do). Heck, they outright screw up words and definitions in a few places (“Orate” sticks out to me).
Keep in mind I was wildly guessing with a lot of numbers… like I’m sure 90 GB of VRAM is enough for decent-quality pictures generated in minutes, but I think you need a lot more compute to generate video at a reasonable speed? I wouldn’t be surprised if my estimate is off by a few orders of magnitude. $0.30 is probably enough that people can’t spam lazily generated images, and a true cost of $3.00 would keep it in the range of people who genuinely want/need the slop… but yeah, I don’t think it is all going cleanly away once the bubble pops or fizzles.
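For what it’s worth, here’s the shape of that guess as code. Every input here is my own guess (the rental rate is roughly what datacenter-class GPUs go for on rental marketplaces, the rest is invented), so treat it as a sketch, not a real cost model:

```python
# Back-of-envelope: cost of one generated video from rented GPU time.
# All inputs are guesses, not published figures.

gpu_rental_per_hour = 2.50  # guessed $/hr for a datacenter-class GPU
minutes_per_video = 10      # guessed wall-clock GPU time per clip
gpus_in_parallel = 1        # could easily be 4-8, multiplying the cost

cost = gpu_rental_per_hour * (minutes_per_video / 60) * gpus_in_parallel
print(f"~${cost:.2f} per video")  # ~$0.42 with these guesses
```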
After GPT-3 failed to be it, they aimed at five iterations instead because that sounded like a nice number to give to investors, and GPT-3.5 and GPT-4o are very much responses to an inability to actually manifest that AGI on a VC-friendly timetable.
That’s actually more batshit than I thought! Like I thought Sam Altman knew the AGI thing was kind of bullshit and the hesitancy to stick a GPT-5 label on anything was because he was saving it for the next 10x scaling step up (obviously he didn’t even get that far because GPT-5 is just a bunch of models shoved together with a router).
Even if it was noticeably better, it couldn’t have lived up to the hype: Scam Altman hyped GPT-5 endlessly, promising a PhD in your pocket and an AGI, and warning that he was scared of what he created. Progress has kind of plateaued, so it isn’t even really noticeably better; it scores a bit higher on some benchmarks, and they’ve patched some of the more meme’d tests (like counting the r’s in strawberry… except it still can’t count the r’s in blueberry, so they’ve probably patched the more obvious flubs with loads of synthetic training data as opposed to inventing some novel technique that actually improves it all around). The other reason the promptfondlers hate it: for the addicts using it as a friend/therapist, it got a much drier, more professional tone, and for the people trying to put it to actual serious use, losing all the old models overnight was really disruptive.
There are a couple of speculations as to why… one is that the GPT-5 variants are actually smaller than the previous generation’s, and they are really desperate to cut costs so they can start making a profit. Another is that they noticed their naming scheme was horrible (4o vs o4) and confusing, and have overcompensated by trying to cut things down to as few models as possible.
They’ve tried to simplify things by using a routing model that decides for the user which model actually handles each interaction… except they’ve apparently screwed that up (Ed Zitron thinks they’ve screwed it up badly enough that GPT-5 is actually less efficient, despite their goal of cost saving). Also, even if this technique worked, it would make ChatGPT even more inconsistent: some minor word choice could make the difference between getting the thinking model or not, and that in turn would drastically change the response.
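Nobody outside OpenAI knows what the router actually looks like, but here’s a deliberately dumb hypothetical sketch of the failure mode, where a cheap heuristic picks the backing model and a small wording change flips what you get:

```python
# Hypothetical sketch of a model router (not OpenAI's actual design).
# The point: a minor wording change flips which backing model answers,
# which in turn drastically changes the response.

def route(prompt: str) -> str:
    # Stand-in for a cheap classifier deciding if "thinking" is needed.
    hard_markers = ("prove", "step by step", "derive", "think carefully")
    if any(m in prompt.lower() for m in hard_markers):
        return "big-thinking-model"
    return "small-fast-model"

print(route("Why is my code slow?"))                   # small-fast-model
print(route("Why is my code slow? Think carefully."))  # big-thinking-model
```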
I’ve got no rational explanation lol. And now they overcompensated by shoving a bunch of different models under the label GPT-5.
There are techniques for caching some of the steps involved with LLMs. Like, I think you can cache the tokenization and maybe some of the work the attention heads are doing if you have a static, known prompt? But I don’t see why you couldn’t just do that caching separately for each model your router might direct things to. And if you have multiple prompts, you just do a separate caching for each one. This creates a lot of memory overhead, but not excessively more computation… well, you do need to do the computation to generate each cache. I find it entirely plausible that OpenAI managed to screw all this up somehow, but I’m not quite sure the exact explanation of the problem Zitron has given fits together.
(The order of the prompts vs. user interactions does matter, especially for caching… but I think you could just cut and paste the user interactions to separate them from the old prompt and stick a new prompt on in whatever order works best? You would get wildly varying quality in output as it switches between models and prompts, but this wouldn’t add in more computation…)
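To spell out what I mean: prefix caching is a real technique (inference stacks cache the attention KV state for a fixed prompt prefix, and that cache is only valid for the exact model and exact prefix it was built on). Naively you’d just keep one cache per (model, prompt) pair, something like this hypothetical sketch:

```python
# Hypothetical sketch: one prefix cache per (model, prompt) pair.
# A cached KV state is only valid for the exact model and exact prefix
# it was built on, so a router needs one cache entry per combination.

def expensive_prefill(model: str, prompt: str) -> str:
    # Stand-in for running the model over the prompt to build KV state.
    return f"kv-state[{model}, {len(prompt)} chars]"

prefix_cache: dict[tuple[str, str], str] = {}

def get_prefix_state(model: str, system_prompt: str) -> str:
    key = (model, system_prompt)
    if key not in prefix_cache:
        prefix_cache[key] = expensive_prefill(model, system_prompt)  # pay once
    return prefix_cache[key]  # later requests with this combo are free

# More memory (one entry per model x prompt), but no extra compute per request.
get_prefix_state("small-fast-model", "You are a helpful assistant.")
get_prefix_state("big-thinking-model", "You are a helpful assistant.")
```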
Zitron mentioned a scoop, so I hope/assume someone did some prompt hacking to get GPT-5 to spit out some of its behind-the-scenes prompts and he has solid proof for what he is saying. I wouldn’t put anything past OpenAI, for certain.
If they got a lot of usage out of a model, this constant cost would contribute little to the cost of each use in the long run… but considering they currently replace/retrain models every 6 months to a year, yeah, this cost should be factored in as well.
Also, training compute grows quadratically with model size, because it is the product of the training data (which grows linearly with model size) and the model size itself.
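In numbers, using the standard rule of thumb (training FLOPs ≈ 6 × parameters × tokens, and compute-optimal “Chinchilla” scaling uses roughly 20 training tokens per parameter):

```python
# Rule of thumb: training FLOPs ≈ 6 * N * D (N = params, D = training tokens).
# Compute-optimal ("Chinchilla") scaling sets D ≈ 20 * N, so FLOPs ≈ 120 * N²:
# doubling the model size roughly quadruples the training compute.

def train_flops(n_params: float, tokens_per_param: float = 20.0) -> float:
    return 6 * n_params * (tokens_per_param * n_params)

small, big = 10e9, 20e9  # 10B vs 20B parameters
print(train_flops(big) / train_flops(small))  # 4.0
```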
Even bigger picture… some standardized way of regularly handling possible combinations of letters and numbers that you could use across multiple languages. Like it handles them as expressions?
I know like half the facts I would need to estimate it… if you know the GPU VRAM required for the video generation, and how long it takes, then assuming no latency, you could get a ballpark number by looking at Nvidia GPU specs on power usage. For instance, if a short clip of video generation needs 90 GB of VRAM, then maybe they are using an RTX 6000 Pro… https://www.nvidia.com/en-us/products/workstations/professional-desktop-gpus/ , take the amount of time it takes in off hours, which shouldn’t have a queue time… and you can guesstimate a number of watt-hours. Like, if it takes 20 minutes to generate, then at 300-600 watts of power usage that would be 100-200 watt-hours. I can find an estimate of $0.33 per kWh (https://www.energysage.com/local-data/electricity-cost/ca/san-francisco-county/san-francisco/ ), so it would only be costing $0.03 to $0.06.
IDK how much GPU time you actually need, though; I’m just wildly guessing. Like, if they use many server-grade GPUs in parallel, that would multiply the cost up even if it only takes them minutes per video generation.
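Here’s that same guesstimate as code, so anyone can swap in their own numbers (everything is a guess except the $/kWh figure, which is from the energysage link above):

```python
# Guesstimate: electricity cost of one video generation.
# watts and minutes are wild guesses; $/kWh is the San Francisco
# estimate from energysage.com linked above.

minutes = 20        # guessed generation time
usd_per_kwh = 0.33  # San Francisco residential estimate

for watts in (300, 600):  # guessed GPU power draw range
    kwh = watts * (minutes / 60) / 1000
    print(f"{watts} W for {minutes} min -> {kwh:.2f} kWh -> ${kwh * usd_per_kwh:.3f}")
# -> roughly $0.033 to $0.066 of electricity per clip; GPU capital cost,
#    datacenter overhead, and parallel GPUs would multiply this.
```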
Weird rp wouldn’t be sneer-worthy on its own (although it would still be at least a little cringe); it’s the contributing factors, like…
the constant IQ fetishism (Int is superior to Charisma but tied with Wis, and obviously a true IQ score would be both Int and Wis)
the fact that Eliezer cites it like serious academic writing (he’s literally mentioned it to Yann LeCun in Twitter arguments)
the fact that in-character lectures are the only place Eliezer has written up many of his decision theory takes he developed after the sequences (afaik, maybe he has some obscure content that never made it to lesswrong)
the fact that Eliezer thinks it’s another HPMOR-level masterpiece (despite how wordy it is, HPMOR is much more readable; even authors and fans of glowfic usually acknowledge the format can be awkward to read, and most glowfics require huge amounts of context to follow)
the fact that the story doubles down on the HPMOR flaw of confusion about which characters are supposed to be author mouthpieces (putting your polemics into the mouths of characters working for literal Hell… is certainly an authorial choice)
and the continued worldbuilding development of dath ilan, the rationalist utopia built on eugenics and censorship of all history (even the Hell state was impressed!)
…At least lintamande has the common-sense understanding of why you avoid actively linking your BDSM DnD roleplay to your irl name and work.
And it shouldn’t be news to people that KP supports eugenics, given her defense of Scott Alexander and comments about super babies, but possibly it is, and the “weird roleplay” headline will draw attention to it.