Some light sneerclub content in these dark times.
Eliezer compliments Musk on the creation of Community Notes. (A project which predates Musk's takeover of Twitter by a couple of years; see the join date: https://twitter.com/CommunityNotes .)
In reaction, Musk admits he never read HPMOR and suggests a watered-down Turing test involving HPMOR.
Eliezer responds by inventing HPMOR wireheads.
I’m reminded of a My Little Pony singularity fanfiction (Friendship is Optimal) that I read back when I had poor taste. An AI for a pony MMORPG goes rogue and converts everyone into digital ponies to maximize happiness, but with a pony theme. The victims live out impossibly long, but ultimately superficial, lives doing pony stuff, and goodness gracious why is there such a weird relationship between rationalists and fanfiction writers.
most charitable psychoanalysis: projecting their sense of rationality onto a fictional world is a way to express a deep longing for rules and logic in an often cruelly irrational world
least charitable: their sense of rationality can only be true in a fictional world, so they want to live in that rather than reality
neutral charity: the author is dead, all interpretation is essentially fanfiction, and since we are all individuals, all relationships with texts/fanfiction are weird.
the most euphemistic description yet of the cursed slab of ponyfucking
“I dig a pony … Well, you can penetrate any place you go / Yes, you can penetrate any place you go / I told you so”
Whatever, I’ll be a pony. Where do I sign up?
Pleasure Island, from Pinocchio. You gotta ask for the pony pass though, or else you’re just gonna get turned into a donkey. To reverse the transformation you gotta go to the island of Dr. Moreau.
they’re both extremely online. next question
It’s the combination of big imaginations and little real-world experience. In Friendship is Optimal, the AGI goes from asking for more CPUs to manufacturing its own CPUs, somehow without acquiring silicon crystals or ASML hardware along the way. Rationalist writers imagine that AGI will somehow provide its own bounty of input resources, rather than participating in the existing resource economy.
In reality, none of our robots has demonstrated the sheer instrumentality required to even approach this sort of doomsday scenario. I think rationalists have a bit of a capitalist blind spot here, imagining that everything and everybody (and everypony!) is a resource.