The arguments made by AI safety researchers Eliezer Yudkowsky and Nate Soares in If Anyone Builds It, Everyone Dies are superficially appealing but fatally flawed, says Jacob Aron
I think the question of “general intelligence” is kind of a red herring. Evolution, for example, creates extremely complex organisms and behaviours, all without any “general intelligence” working towards an overarching goal.
The other issue with Yudkowsky is that he’s an unimaginative fool whose only source of insights on the topic is science fiction, which he doesn’t even understand. There is no fun in having Skynet start a nuclear war and then perish itself in the aftermath, as the power plants it depends on cease working.
Humanity itself doesn’t possess the kind of intelligence envisioned for “AGI”. When it comes to science and technology, we are an all-powerful hivemind. When it comes to deciding what to do with said science and technology, we are no more intelligent than an amoeba crawling along a gradient.