
help-circle
  • There is a mention of something that might be what Yudkowsky is on about in this Wired story:

    The group had become especially fixated on a particular rumor, namely that the nonprofit MIRI had potentially used donor money to pay off a former staffer. The ex-employee had launched a website accusing MIRI leaders of statutory rape and a coverup. Though the facts were never litigated in a courtroom, MIRI’s president wrote in 2019 that he had checked “some of the most serious allegations” and “found them to be straightforwardly false.” The website’s owner had agreed to retract the claims and take the site down, the president said, under conditions that were confidential. But what angered LaSota and Danielson was as much the idea—in their minds at least—that the nonprofit had succumbed to blackmail as the allegations themselves. In negotiating, they believed, the organization had violated one of its fundamental principles: “timeless decision theory,” a concept developed by MIRI cofounder Eliezer Yudkowsky. (Yudkowsky, who later renamed it “functional decision theory,” declined to comment for this story.)

    This article doesn’t make it sound so much like a “FOUNDING BELIEF”; lots of weird shit like the brain hemispheres business appears to have come first. But the much more interesting thing is at the end of the story:

    One of the last things LaSota seems to have written for public consumption was a comment she left on her own blog in July 2022, one month before she supposedly went overboard in San Francisco Bay. “Statists come threaten me to snitch whatever info I have on their latest missing persons,” she wrote, seemingly referring to deaths by suicide that had already happened among those who’d embraced her ideas. “Did I strike them down in a horrific act of bloody vengeance? Did I drive them to suicide by whistling komm susser tod?”—a German phrase that translates as “come, sweet death.” “Maybe they died in a series of experimental brain surgeries that I performed without anesthetic since that’s against my religion, in an improvised medical facility?”

    Below it was pasted a stock photo of two people wearing shirts that read, “I can neither confirm nor deny.”

    (Archive link to Ziz’s blog)

    Hmm. Hm-hmmm.



  • And apparently, one of their FOUNDING BELIEFS, is that I had sex with somebody underage (mutually desired sex, according to the Zizians)… and then MIRI, a nonprofit I started, paid money (to a third-party extorter) to hush that up… which payment, according to the Zizians, is in violation of DECISION THEORY… and, therefore, for THAT EXACT REASON (like specifically the decision theory part), everything believed by those normie rationalists who once befriended them is IRRETRIEVABLY TAINTED… and therefore, the whole world is a lie and dishonest… and from this and OTHER PREMISES they recruit people to join their cult.

    Yudkowsky is the first person I have ever seen describe this as a load-bearing belief of the Zizians. Offhand, I don’t recall the news stories about the murders even mentioning it.



  • Reasoning With Machines doesn’t work on reasoning, really. It’s almost entirely large language models — chatbots. Because that’s where the money — sorry, the industry interest — is. But this paper got into the NeurIPS 2025 conference.

    A reminder that the NeurIPS FAQ for reviewers says that “interactions with LLMs” are an acceptable way “to enhance your understanding of certain concepts”. What other big conferences are there… AAAI 2026, you say?

    AAAI-26 will follow a two-phase reviewing process as in previous years, with two additions: an additional AI-generated review in Phase 1, and an AI-generated summary of the discussions at the end of the discussion phase. The AI-generated content is being used as part of a pilot program to evaluate the ability of AI tools to assist in the peer review process.

    I’m gonna say it: The entire “artificial intelligence”/“machine learning” research field is corrupt. They have institutionally accepted the bullshit fountain as a tool. It doesn’t matter if they’re only using chatbots as a “pilot program”; they’ve bought into the ideology. They’ve granted fashtech a seat at the bar and forced all the other customers to shake its hand.