https://pivottoai.libsyn.com/20250817-ai-doomsday-and-ai-heaven-live-forever-with-ai-god - podcast
19 minutes, sit yourself down for this one
text version https://pivot-to-ai.com/2025/08/17/ai-doomsday-and-ai-heaven-live-forever-in-ai-god/
Nice job summarizing the lore in only 19 minutes (I assume this post was aimed at providing full context to people just joining or at least relatively new to tracking all this… stuff).
Some snarky comments follow: not because the summary should have included them (adding all the possible asides could easily double the length and leave a casual listener/reader more confused), but because I think they are funny
and I need to vent.

Or decision theorist! With an entire one decision theory paper, which he didn’t bother getting through peer review because the reviewers wanted, like, actual context and an actual decision theory, not just hand-waves at paradoxes on the fringes of decision theory.
He also writes fanfiction!
Yeah this rabbit hole is deep.
Yeah in hindsight the large number of ex-Christians it attracts makes sense.
He wrote a lot of blog posts about how smart and powerful the Torment Nexus would be, and how we really need to build the Anti-Torment Nexus, so if he’d had proper skepticism of Silicon Valley and Startup/VC Culture, he really should have seen this coming.
I was mildly pleasantly surprised to see a solid half of the comments pushing back in response to the first Manifest, but it looks like the anti-racism faction didn’t get any traction to change anything, and the second Manifest conference was just as bad or worse.
decision theory is when there’s a box with money but if you take the box it doesn’t have money
If your decision theory can’t address weird (sorry: totally plausible in the near future) hypotheticals with omniscient God-AIs offering you money in boxes if you jump through enough cognitive hoops, what is it really good for?

Tbh whenever I try to read anything on decision theory (even written by people other than rationalists), I end up wondering how they think a redundant autopilot (with majority vote) would ever work. In an airplane, that is.
Considering just the physical consequences of a decision doesn’t work (unless there’s a fault, one unit’s consequences don’t make it through the voting electronics, so the decisions made for the case where there is no fault never make it through).
Each one simulating the two or more other autopilots is scifi-brained idiocy. Requiring that the autopilots are exact copies is stupid (what if two different teams wrote different implementations? I think Airbus actually sort of did that).
Nothing is going to be simulating anything, and to make matters even worse for philosophers amateur and academic alike, the whole reason for redundancy is that sometimes there is a glitch that makes them not compute the same values, so any attempt to be clever with “ha, we just treat copies as one thing” doesn’t cut it either.
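For what it’s worth, the real scheme is almost insultingly simple compared with the philosophy. Here’s a minimal sketch (hypothetical code, nothing to do with any actual avionics system) of 2-out-of-3 voting: each channel computes its command independently, possibly from a dissimilar implementation, and a dumb voter takes the median, so a single glitching channel just gets outvoted. No channel models or simulates the others.

```python
# Purely illustrative sketch of triple redundancy with median voting.
# The channel functions and numbers are made up; the voting structure is the point.
from statistics import median

def channel_a(sensors):
    # Team A's hypothetical implementation of the pitch command
    return 0.8 * sensors["pitch_error"]

def channel_b(sensors):
    # Team B's independent implementation of the same spec
    return sensors["pitch_error"] * 4 / 5

def channel_c(sensors):
    # Third dissimilar channel; here it has a fault and outputs garbage
    return 999.0

def voted_command(sensors):
    """Median of the three channels: one faulty channel is simply outvoted."""
    return median([channel_a(sensors), channel_b(sensors), channel_c(sensors)])

print(voted_command({"pitch_error": 1.5}))  # 1.2, despite channel_c's fault
```

The channels only have to agree on the output format; none of them needs a model of the others, which is the whole trick.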
Yeah, even if computers predicting other computers didn’t require overcoming the halting problem (and thus contradicting the foundations of computer science), actually implementing such a thing with computers smart enough to qualify as AGI, in a reliable way, seems absurdly impossible.
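For anyone who hasn’t seen it, the halting-problem obstruction is just the usual diagonal argument. A toy sketch, with names made up for this comment, of why a perfect predictor of other programs can’t exist:

```python
# Toy version of the textbook diagonalization. Suppose someone claims a
# perfect predictor: predict(agent) returns exactly what agent() will return.
# The agent below defeats it by construction.

def make_contrarian(predict):
    def contrarian():
        # Ask the predictor what we will do, then do the opposite.
        return "defect" if predict(contrarian) == "cooperate" else "cooperate"
    return contrarian

def naive_predict(agent):
    # A stand-in "predictor" that always guesses "cooperate"
    return "cooperate"

contrarian = make_contrarian(naive_predict)
print(naive_predict(contrarian), contrarian())  # cooperate defect: the guess is wrong

# Whatever any predictor answers for contrarian, contrarian returns the other
# thing, so no predictor can be right about every agent. Same structure as the
# proof that the halting problem is undecidable.
```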