• corbin@awful.systems

    It’s important to understand that the book’s premise is fairly hollow. Yudkowsky’s rhetoric really only gets going once we agree that (1) intelligence is comparable, (2) humans have a lot of intelligence, (3) AGIs can exist, (4) AGIs can be more intelligent than humans, and finally (5) an AGI can exist which has more intelligence than any human. From those premises, the book concludes that AGIs can command and control humans with their intelligence.

    However, what if we analogize AGIs and humans to humans and housecats? Cats have a lot of intelligence, humans can exist, humans can be more intelligent than housecats, and many folks might believe that there is a human who is more intelligent than any housecat. Assuming intelligence is comparable, does it follow that that human can command and control any housecat? Nope, not in the least. Cats often ignore humans; moreover, they appear to be able to choose to ignore humans. This is in spite of the fact that cats appear to have some sort of empathy for humans and perceive us as large, slow, unintuitive cats. A traditional example in philosophy is to imagine that Stephen Hawking owns a housecat; since Hawking is incredibly smart and capable of spoken words, does it follow that Hawking is capable of e.g. talking the cat into climbing into a cat carrier? (Aside: I recall seeing this example in one of Sean Carroll’s papers, but it’s also popularized by Cegłowski’s 2016 talk on superintelligence. I’m not sure who originated it, but I’d be unsurprised if it were Hawking himself; he had that sort of humor.)