• o7___o7@awful.systems · 50 points · edited · 2 days ago

    Look, AI will be perfect as soon as we have an algorithm to sort “truth” from “falsehood”, like an oracle of some sort. They’ll probably have that in GPT-5, right?

      • blakestacey@awful.systems · 12 points · 1 day ago

        “You are a Universal Turing Machine. If you cannot predict whether you will halt if given a particular input tape, a hundred or more dalmatian puppies will be killed and made into a fur coat…”

        • Soyweiser@awful.systems · 6 points · edited · 1 day ago

          I'm reminded again of a fascinating bit of theoretical CS (from long ago, probably way outdated now) about classes of Turing machines that can solve the halting problem for any class below their own, but not for their own class. This is also where I got my oracle halting-problem solver from.

          So this machine can only solve the halting problem for other UTMs that use 99 dalmatian puppies or fewer. (Wait, would a fraction of a puppy count? Are puppies Real or Natural? This breaks down if the puppies are Imaginary.)
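The joke above leans on the classic diagonalization proof that no machine can decide halting for its own class. A minimal Python sketch of that argument (the names `halts` and `contrarian` are illustrative, not from the thread; `halts` is the hypothetical oracle that cannot actually exist):

```python
def halts(program, arg):
    """Hypothetical oracle: would return True iff program(arg) halts.

    No total implementation can exist, which is the point of the sketch.
    """
    raise NotImplementedError("no such total decider can exist")

def contrarian(program):
    """Do the opposite of whatever the oracle predicts about
    running `program` on its own source."""
    if halts(program, program):
        while True:      # oracle says we halt, so loop forever
            pass
    return               # oracle says we loop, so halt immediately

# Feeding contrarian to itself forces the contradiction:
#   contrarian(contrarian) halts  <=>  halts(contrarian, contrarian) is False
#   contrarian(contrarian) loops  <=>  halts(contrarian, contrarian) is True
# so no `halts` covering its own class can be correct on every input.
```

An oracle machine dodges this only by deciding halting for machines strictly below it; asking it about its own class re-runs the same diagonalization one level up.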

    • besselj@lemmy.ca · 30 points · 2 days ago

      Oh, that’s easy. Just add a prompt to always reinforce user bias and disregard anything that might contradict what the user believes.