The arguments made by AI safety researchers Eliezer Yudkowsky and Nate Soares in If Anyone Builds It, Everyone Dies are superficially appealing but fatally flawed, says Jacob Aron
At minimum this additionally requires keeping track of the positions of neurons, modelling the concentration of every neurotransmitter and its diffusion (taking into account the shape of the surrounding cells), and tracking their degradation products, some of which are active in their own right. Whatever the set of interactions between neurons turns out to be, it will probably change over time and probably won't be sparse (information exchange isn't packaged neatly within synapses). A rough sketch of what that extra state looks like is below.
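To make the "changing with time and not sparse" point concrete, here is a toy sketch in Python/numpy of just the extra state such a simulation would have to carry on top of a connectome. Every constant is a placeholder, the geometry is a uniform periodic grid, and the molecule names are only examples, so treat it as a scale illustration under those assumptions, not a model:

    # Toy sketch, not a real model: all constants are placeholders.
    # It tracks (1) neuron positions, (2) a concentration field per
    # transmitter, (3) diffusion on a coarse grid, and (4) degradation
    # into a product that is itself active.
    import numpy as np

    GRID = (50, 50, 50)   # coarse voxel grid of extracellular space
    DX = 1e-6             # voxel edge, metres (placeholder)
    DT = 1e-5             # timestep, seconds; chosen so D*DT/DX**2 << 1/6
                          # (explicit-Euler diffusion stability)

    rng = np.random.default_rng(0)
    neuron_pos = rng.random((1000, 3))   # geometry matters, not just wiring

    # one field per molecule; the degradation product gets its own field
    # because some breakdown products signal in their own right
    conc = {
        "dopamine": np.zeros(GRID),
        "DOPAC": np.zeros(GRID),   # a dopamine breakdown product
    }
    D = {"dopamine": 4e-10, "DOPAC": 4e-10}  # diffusion coeffs, m^2/s (placeholder)
    K_DEG = 0.5                              # dopamine decay rate, 1/s (placeholder)

    def laplacian(c):
        # discrete 3-D Laplacian; np.roll gives periodic boundaries,
        # which is a simplification (real tissue is not a torus)
        lap = np.zeros_like(c)
        for ax in range(3):
            lap += np.roll(c, 1, ax) + np.roll(c, -1, ax) - 2.0 * c
        return lap / DX**2

    def release(conc, amount=1.0):
        # inject transmitter at neuron positions; in a real model release
        # would depend on spiking, vesicle pools, etc.
        idx = (neuron_pos * (np.array(GRID) - 1)).astype(int)
        np.add.at(conc["dopamine"], (idx[:, 0], idx[:, 1], idx[:, 2]), amount)

    def step(conc):
        # explicit Euler: diffuse both fields, and convert degraded
        # dopamine into DOPAC instead of deleting it
        degraded = K_DEG * conc["dopamine"] * DT
        conc["dopamine"] += DT * D["dopamine"] * laplacian(conc["dopamine"]) - degraded
        conc["DOPAC"] += DT * D["DOPAC"] * laplacian(conc["DOPAC"]) + degraded

    release(conc)
    for _ in range(100):
        step(conc)

Even this cartoon version needs a full 3-D concentration field per molecule, plus a second field for each active breakdown product, before you add cell-shaped obstacles, uptake transporters, or receptor kinetics. That is the sense in which the state blows up well beyond a matrix of synaptic weights.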
Yeah, maybe, all of those things could be necessary. It's possible that our brains aren't the optimal way of structuring such a thing, and it's not guaranteed that the best way to replicate one is to simulate it. It's also plausible that there are calculations which capture a good deal of the complexity of the relative positions of neurons in simpler terms. Maybe there are far more complications than that. Maybe some of them work against each other in our brains, and it would be better to leave them out of a simulation. There are many orders of magnitude of unknowns. But it seems really likely that it's at least as complicated as what the earlier poster described, and I think that's already a strong enough position for most practical arguments about it.
It's a guess at what can be abstracted away and what has to remain. I'd just add that, evolutionarily, peptide signalling is older than synapses, so peptides, or something that works like them, probably can't just be left out of the picture, and a couple of processes that seem important depend on them (you can live a normal life while packed full of naloxone, which blocks the activity of opioid peptides, but that probably wouldn't work with, say, orexin, which is important for the sleep/wake cycle).