The arguments made by AI safety researchers Eliezer Yudkowsky and Nate Soares in If Anyone Builds It, Everyone Dies are superficially appealing but fatally flawed, says Jacob Aron
The thing about the synapse-count argument is that the hype crowd contends the AI could wind up doing something much more effective than whatever it is that real brains do.
If you look at actual capabilities, however, "artificial neurons" appear intrinsically far less effective than real ones, at least when compared against small animals (a jumping spider, a bee, or even a roundworm).
It is a rather unusual situation. When it comes to things like converting chemical energy to mechanical energy, we did not have to fully understand and copy muscles to build a steam engine with a higher mechanical power output than an elephant. The same was true of arithmetic, which is partly why there was an expectation of imminent AI in the 1960s.
I think it boils down to intelligence being a very specific thing, evolved for a specific purpose: less like "moving underwater from point A to point B" (which a submarine does pretty well) and more like "a fish doing what fish do". The submarine represents very little progress towards fishiness.