• FrictionFiction@lemmy.world · 17 hours ago

    yeah, I read the article and I’m looking forward to reading the book when it comes out in a week. I guess the article makes some decent points, but I find it so reductive and simplistic to boil it down to “why are you even making these arguments when we have climate change to deal with now?” It didn’t seem like a cohesive argument against the book, but I’ll know more in a week or two.

    • corbin@awful.systems · 4 hours ago

      It’s important to understand that the book’s premise is fairly hollow. Yudkowsky’s rhetoric really only gets going once we agree that (1) intelligence is comparable, (2) humans have a lot of intelligence, (3) AGIs can exist, (4) AGIs can be more intelligent than humans, and finally (5) an AGI can exist which has more intelligence than any human. From those premises, the authors conclude that AGIs can command and control humans with their intelligence.

      However, what if we analogize AGIs and humans to humans and housecats? Cats have a lot of intelligence, humans can exist, humans can be more intelligent than housecats, and many folks might believe that there is a human who is more intelligent than any housecat. Assuming intelligence is comparable, does it follow that that human can command and control any housecat? Nope, not in the least. Cats often ignore humans; moreover, they appear to be able to choose to ignore humans. This is in spite of the fact that cats appear to have some sort of empathy for humans and perceive us as large, slow, unintuitive cats. A traditional example in philosophy is to imagine that Stephen Hawking owns a housecat; since Hawking is incredibly smart and capable of spoken words, does it follow that Hawking is capable of, e.g., talking the cat into climbing into a cat carrier? (Aside: I recall seeing this example in one of Sean Carroll’s papers, but it’s also popularized by Cegłowski’s 2016 talk on superintelligence. I’m not sure who originated it, but I’d be unsurprised if it were Hawking himself; he had that sort of humor.)

    • Architeuthis@awful.systems · 9 hours ago (edited)

      The arguments the review actually makes against the book are that it doesn’t make the case for LLMs being capable of independent agency, that it reduces all material concerns of an AI takeover to broad claims of ASI being indistinguishable from magic, and that its proposed solutions are dumb and unenforceable (again with the global GPU prohibition and the unilateral bombing of rogue datacenters).

      The note towards the end, that the x-risk framing is a cognitive short-circuit which causes the faithful to ignore more pressing concerns like the impending climate catastrophe in favor of a mostly fictitious problem like AI doom, isn’t really part of the review’s core thesis against the book.

    • swlabr@awful.systems · 11 hours ago

      I find it so reductive and simplistic to boil it down to “why are you even making these arguments when we have climate change to deal with now?”

      reductio ad reductionem fallacy