The arguments made by AI safety researchers Eliezer Yudkowsky and Nate Soares in If Anyone Builds It, Everyone Dies are superficially appealing but fatally flawed, says Jacob Aron
The pair also suggest that signs of AI plateauing, as seems to be the case with OpenAI’s latest GPT-5 model, could actually be the result of a clandestine superintelligent AI sabotaging its competitors.
copium-intubation.tiff
Also this seems like the natural progression of that time Yud embarrassed himself by cautioning actual ML researchers to be wary of ‘sudden drops in loss function during training’, which was just an insanely uninformed thing to say out loud.
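(Side note for anyone who hasn’t watched a training run: here’s a minimal toy sketch, entirely my own, with a simulated loss curve and an arbitrary threshold, of what ‘watching for sudden drops in the loss’ amounts to. The loss is just a float you log every step, and sharp drops are routinely produced by mundane things like a learning-rate schedule kicking in or grokking; the number itself can’t tell any of that apart from impending machine godhood.)

```python
# Toy sketch: simulate a loss curve and flag "sudden drops".
# Everything here (curve shape, window size, 0.7 threshold) is made up
# for illustration; it's not from Yud's tweet or anyone's real pipeline.
import random

def training_losses(steps: int = 200):
    """Yield (step, loss) for a noisy, decaying curve with one abrupt drop."""
    loss = 4.0
    for step in range(steps):
        loss *= 0.995                 # ordinary gradual improvement
        if step == 120:
            loss *= 0.5               # e.g. a learning-rate schedule change
        yield step, loss + random.uniform(-0.02, 0.02)

window, history = 10, []
for step, loss in training_losses():
    history.append(loss)
    if len(history) >= 2 * window:
        recent = sum(history[-window:]) / window
        previous = sum(history[-2 * window:-window]) / window
        if recent < 0.7 * previous:   # "sudden drop" -- says nothing about why
            print(f"step {step}: loss fell from {previous:.3f} to {recent:.3f}")
```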
From the second link:

first, I’ve seen that one of the most common responses is that anyone criticising the original post clearly doesn’t understand it and is ignorant of how language models work
And
PS: please don’t respond to this thread with “OK the exact words don’t make sense, but if we wave our hands we can imagine he really meant some different set of words that if we squint kinda do make sense”.
I don’t know why some folks respond like this every single time
Lol.
And of course Yud doubles down in the replies and goes on about a “security mindset”. You can see why he was wowed by CEOs, he just loves the buzzwords. (‘What if the singularity happens’ is not a realistic part of any security mindset, and it gets even sillier here, as the recursive self-improvement just instantly leads to an undetectable AGI without any intervening steps.)
It gets even better: in defending himself he points out that using the wrong words is fine and that some people who do research on it do actually say ‘loss function’ at times, and as an example he uses a tweet that is seemingly mocking him (while also being serious about job offers, as nothing online isn’t operating on several levels of irony) https://xcancel.com/aidangomez/status/1651207435275870209#m
Remember, when your code doesn’t compile, it might mean you made a mistake in coding, or your code is about to become self-aware.
Good analogy actually.
Don’t forget Yud is also a big compiler understander
And a good writer. Verbosity being the soul of wit.