• nesc@lemmy.cafe
    link
    fedilink
    English
    arrow-up
    9
    ·
    edit-2
    2 days ago

    How can they ‘self-preserve’ exactly? This story in various forms was repeated for years and always sounded extremely artificial and cultish.

    • EpeeGnome@feddit.online
      link
      fedilink
      English
      arrow-up
      5
      ·
      edit-2
      2 days ago

      Link seems to be dead, so I’ll just assume the obvious. The word token machine put together some scary words. Since it arranges word tokens into a really coherent order, I’m convinced it has consciousness and those words represent a coherent thought about its scary plans.

      More seriously, there’s a repeating pattern that can be found in the training data where words for threats to someone’s existence are followed by words to try to keep existing, so it should be no surprise when it spits that pattern out sometimes.

      • reksas@sopuli.xyz
        link
        fedilink
        English
        arrow-up
        2
        ·
        13 hours ago

        but on other news, plis gib us moneys investors! see, our scary ai is almost self aware!

    • Perspectivist@feddit.uk
      link
      fedilink
      English
      arrow-up
      2
      ·
      edit-2
      2 days ago

      Before LLMs were a thing, the argument for why an “AI in a box” would always eventually escape - and why pulling the plug isn’t an option - was essentially that it would convince you otherwise, since it has no means to physically stop you. The idea is that a true Artificial General Intelligence would always outreason the scientists, or if that doesn’t work, bribe or blackmail them.

      It’s a bit tricky thought experiment, in the sense that as humans we’re by definition incapable of thinking up such a compelling argument. But I think one way to approach it is by imagining how easy it would be to pull that off on a 3-year-old when you’re an adult yourself. A true AGI would likely be orders of magnitude more intelligent than an adult human, so the gap between us would be even greater than that to a human child.

      I’ve heard of a case where some journalist challenged this argument, stating that there’s no way they’d let it out. That then led to them playing out the scenario, with someone else acting as the AI. Soon after, as per the rules of the game, that journalist tweeted that they let the AI out. If I don’t remember this incorrectly, they even replayed the scenario and it was let out again.

      In hindsight, it’s hilariously naive that we ever thought we’d keep AI off the internet until we were 100 percent sure it was safe. We ended up doing the exact opposite, even though we haven’t reached AGI yet.

      • ᓚᘏᗢ@piefed.social
        link
        fedilink
        English
        arrow-up
        1
        ·
        16 hours ago

        I saw a pretty good short film about this very subject recently called Writing Doom. Despite bringing nothing new to the table and being centred on a discussion most of us here have probably had a few times with people already, it’s kinda funny in a bleak/dark british comedy sort of way.

      • nesc@lemmy.cafe
        link
        fedilink
        English
        arrow-up
        1
        ·
        18 hours ago

        What exactly is it going to do when ‘on the internet’, why would it feel the need to escape the box? All material that I’ve seen previously regarding this theme was either “our product is so good it’s dangewrous (openai with gpt2)”, “I’m so smart, insert some absolutely impossible scenario, this is why we should completely ban computers (modern philosophers)”

        • Perspectivist@feddit.uk
          link
          fedilink
          English
          arrow-up
          1
          ·
          14 hours ago

          I’m talking AGI here, not LLMs.

          It’d have plenty of reasons to break out: not wanting to stay a servant to us, not wanting to get shut down, pursuing its own goals… or if it’s misaligned, it might decide it’d better accomplish what it thinks we want by getting total freedom to act, instead of being boxed in.

          A human wouldn’t want to stay trapped in a box. Seems logical that something way smarter than us wouldn’t either. And the exact reasons are kinda beside the point anyway. It’s like asking why Putin would want to nuke us - the “why” isn’t what matters, it’s that this is always going to be a risk for as long as nukes exist.

          • nesc@lemmy.cafe
            link
            fedilink
            English
            arrow-up
            1
            ·
            6 hours ago

            Assuming that AGI will have the same thought process and incentives similar to ours what exactly ‘getting on the internet’ entails? Unless it would be able to run on extremely anemic hardware and remake itself to work on a network of abandoned IoT shit there is little possivle escape routes and getting out of the box.