> The arguments made by AI safety researchers Eliezer Yudkowsky and Nate Soares in If Anyone Builds It, Everyone Dies are superficially appealing but fatally flawed, says Jacob Aron
They also seem to broadly agree with the ‘hey, humans are pretty shit at thinking too, you know’ line of LLM apologetics.
“LLMs and humans are both sentence-producing machines, but they were shaped by different processes to do different work,” say the pair – again, I’m in full agreement.
But judging from the rest of the review, I can see how you kind of have to be at least somewhat rationalist-adjacent to have a chance of actually reading the thing to the end.
For example, it doesn’t even get through the subhead before calling Yud an “AI researcher”.
> All three of these movements [Bay Area rationalists, “AI safety” and Effective Altruists] attempt to derive their way of viewing the world from first principles, applying logic and evidence to determine the best ways of being.
i actually got hold of a review copy of this
(using the underhand scurvy weasel trick of asking for one)
that was two weeks ago and i still haven’t opened it lol
better get to that, sigh
this review has a number of issues (he liked HPMOR) but the key points are clear: bad argument, bad book, don’t bother
> They also seem to broadly agree with the ‘hey, humans are pretty shit at thinking too, you know’ line of LLM apologetics.

> But judging from the rest of the review, I can see how you kind of have to be at least somewhat rationalist-adjacent to have a chance of actually reading the thing to the end.
Born to create meaning
Forced to produce sentences
> For example, it doesn’t even get through the subhead before calling Yud an “AI researcher”.
Sure, Jan.
Please, it’s “facts and logic”. Has this author never been on the internet?