The arguments made by AI safety researchers Eliezer Yudkowsky and Nate Soares in If Anyone Builds It, Everyone Dies are superficially appealing but fatally flawed, says Jacob Aron
I think that’s still putting the cart before the horse a bit. We don’t understand how the brain creates consciousness, nor do we have a meaningful definition of “general intelligence” beyond “y’know, like a people does”. Assuming that simulating a human brain is the best way to reach this poorly defined goal overestimates our understanding of the underlying problem just as much as assuming that the confabulatron will get there soon.
I think the question of “general intelligence” is something of a red herring. Evolution, for example, produces extremely complex organisms and behaviours without any “general intelligence” working towards an overarching goal.
The other issue with Yudkowsky is that he’s an unimaginative fool whose only source of insight on the topic is science fiction, which he doesn’t even understand. There is no fun in having Skynet start a nuclear war and then perish itself in the aftermath, once the power plants it depends on cease working.
Humanity itself doesn’t possess the kind of intelligence envisioned for “AGI”. When it comes to science and technology, we are an all-powerful hivemind. When it comes to deciding what to do with that science and technology, we are no more intelligent than an amoeba crawling along a gradient.