The pair also suggest that signs of AI plateauing, as seems to be the case with OpenAI’s latest GPT-5 model, could actually be the result of a clandestine superintelligent AI sabotaging its competitors.
copium-intubation.tiff
Also this seems like the natural progression of that time Yud embarrassed himself by cautioning actual ML researchers to be wary of ‘sudden drops in loss function during training’, which was just an insanely uninformed thing to say out loud.
From the second link
first, I’ve seen that one of the most common responses is that anyone criticising the original post clearly doesn’t understand it and is ignorant of how language models work
And
PS: please don’t respond to this thread with “OK the exact words don’t make sense, but if we wave our hands we can imagine he really meant some different set of words that if we squint kinda do make sense”.
I don’t know why some folks respond like this every single time
Lol.
And of course Yud doubles down in the replies and goes on about a “security mindset”. You can see why he was wowed by CEOs; he just loves the buzzwords. (‘What if the singularity happens’ is not a realistic part of any security mindset, and it gets even sillier here, as the recursive self-improvement just instantly leads to an undetectable AGI without any intervening steps.)
It gets even better: in defending himself he points out that using the wrong words is fine and that some people who actually do research on this do say “loss function” at times, and as an example he uses a tweet that is seemingly mocking him (while also being serious about job offers, as nothing online isn’t ironic on several levels) https://xcancel.com/aidangomez/status/1651207435275870209#m
Remember, when your code doesn’t compile, it might mean you made a mistake in coding, or your code is about to become self-aware.
Remember, when your code doesn’t compile, it might mean you made a mistake in coding, or your code is about to become self-aware.
Good analogy actually.
And a good writer. Verbosity being the soul of wit.
i actually got hold of a review copy of this
(using the underhand scurvy weasel trick of asking for one)
that was two weeks ago and i still haven’t opened it lol
better get to that, sigh
this review has a number of issues (he liked HPMOR) but the key points are clear: bad argument, bad book, don’t bother
They also seem to broadly agree with the ‘hey, humans are pretty shit at thinking too, you know’ line of LLM apologetics.
“LLMs and humans are both sentence-producing machines, but they were shaped by different processes to do different work,” say the pair – again, I’m in full agreement.
But judging from the rest of the review I can see how you kind of have to be at least somewhat rationalist-adjacent to have a chance of actually reading the thing to the end.
this review has a number of issues
For example, it doesn’t even get through the subhead before calling Yud an “AI researcher”.
All three of these movements [Bay Area rationalists, “AI safety” and Effective Altruists] attempt to derive their way of viewing the world from first principles, applying logic and evidence to determine the best ways of being.
Sure, Jan.
“AI researcher blogger”
logic and evidence
Please, it’s “facts and logic”. Has this author never been on the internet?
yeah, I read the article and I’m looking forward to reading the book in a week when it comes out. I guess the article makes some decent points, but I find it so reductive and simplistic to boil it down to “why are you even making these arguments because we have climate change to deal with now.” It didn’t seem like a cohesive argument against the book, but I will know more in a week or two.
The arguments made against the book in the review are that it doesn’t make the case for LLMs being capable of independent agency, that it reduces all material concerns of an AI takeover to broad claims of ASI being indistinguishable from magic, and that its proposed solutions are dumb and unenforceable (again with the global GPU prohibition and the unilateral bombing of rogue datacenters).
The point they make towards the end, that the x-risk framing is a cognitive short-circuit which causes the faithful to ignore more pressing concerns like the impending climate catastrophe in favor of a mostly fictitious problem like AI doom, isn’t really part of their core thesis against the book.
I find it so reductive and simplistic to boil it down to “why are you even making these arguments because we have climate change to deal with now.”
reductio ad reductionem fallacy