• David Gerard@awful.systems (OP, mod)
    16 hours ago

    i actually got hold of a review copy of this

    (using the underhand scurvy weasel trick of asking for one)

    that was two weeks ago and i still haven’t opened it lol

    better get to that, sigh

    this review has a number of issues (he liked HPMOR) but the key points are clear: bad argument, bad book, don’t bother

    • Architeuthis@awful.systems
      edited · 7 hours ago

      They also seem to broadly agree with the ‘hey, humans are pretty shit at thinking too, you know’ line of LLM apologetics.

      “LLMs and humans are both sentence-producing machines, but they were shaped by different processes to do different work,” say the pair – again, I’m in full agreement.

      But judging from the rest of the review I can see how you kind of have to be at least somewhat rationalist-adjacent to have a chance of actually reading the thing to the end.

    • blakestacey@awful.systems (mod)
      edited · 13 hours ago

      this review has a number of issues

      For example, it doesn’t even get through the subhead before calling Yud an “AI researcher”.

      All three of these movements [Bay Area rationalists, “AI safety” and Effective Altruists] attempt to derive their way of viewing the world from first principles, applying logic and evidence to determine the best ways of being.

      Sure, Jan.