https://pivottoai.libsyn.com/20250817-ai-doomsday-and-ai-heaven-live-forever-with-ai-god - podcast
19 minutes, sit yourself down for this one
text version https://pivot-to-ai.com/2025/08/17/ai-doomsday-and-ai-heaven-live-forever-in-ai-god/
If your decision theory can’t address weird (totally plausible in the near future) hypotheticals with omniscient God-AIs offering you money in boxes if you jump through enough cognitive hoops, what is it really good for?

Tbh whenever I try to read anything on decision theory (even written by people other than rationalists), I end up wondering how they think a redundant autopilot (with majority vote) would ever work. In an airplane, that is.
Considering just the physical consequences of a decision doesn’t work (unless there’s a fault, a single unit’s decision doesn’t make it through the voting electronics on its own, so the alternative decisions it could make in the no-fault case never make it through at all).
Each one simulating the two or more other autopilots is scifi-brained idiocy. Requiring that the autopilots be exact copies is stupid too (what if we had two different teams write different implementations? I think Airbus actually sort of did that).
Nothing is going to be simulating anything, and to make matters even worse for philosophers, amateur and academic alike, the whole reason for redundancy is that sometimes a glitch makes the units not compute the same values, so any attempt to be clever with “ha, we just treat the copies as one thing” doesn’t cut it either.
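For what it’s worth, the voting being described here is mundane engineering, not simulation. Below is a minimal sketch (purely illustrative, not any real avionics code; the channel names, the mid-value-select scheme, and the tolerance are my own assumptions): each channel hands its output to the voter, the middle value wins, and a channel that disagrees is simply flagged and outvoted rather than modelled or predicted.

```python
# Illustrative sketch of a 2-out-of-3 voter for a triplex autopilot.
# Nothing here is real avionics code; names and numbers are made up.

def vote(cmd_a: float, cmd_b: float, cmd_c: float, tolerance: float = 0.1):
    """Mid-value select over three channel outputs.

    No channel simulates the others; the voter only compares outputs
    after the fact, and a single faulty channel is outvoted."""
    commands = {"A": cmd_a, "B": cmd_b, "C": cmd_c}
    # Middle value of three is robust to one arbitrary failure.
    selected = sorted(commands.values())[1]
    # Flag any channel that strays too far from the selected value.
    disagreeing = [ch for ch, v in commands.items() if abs(v - selected) > tolerance]
    return selected, disagreeing

# Example: channel B glitches; its output never reaches the actuator.
output, faults = vote(1.02, 7.50, 0.98)
print(output, faults)  # 1.02 ['B']
```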
Yeah, even if computers predicting other computers didn’t require overcoming the halting problem (and thus contradicting the foundations of computer science), actually implementing such a thing reliably with computers smart enough to qualify as AGI seems absurdly impossible.