Statement on Superintelligence taken from https://superintelligence-statement.org/

Context: Innovative AI tools may bring unprecedented health and prosperity. However, alongside tools, many leading AI companies have the stated goal of building superintelligence in the coming decade that can significantly outperform all humans on essentially all cognitive tasks. This has raised concerns, ranging from human economic obsolescence and disempowerment, losses of freedom, civil liberties, dignity, and control, to national security risks and even potential human extinction. The succinct statement below aims to create common knowledge of the growing number of experts and public figures who oppose a rush to superintelligence.

Statement: We call for a prohibition on the development of superintelligence, not lifted before there is

  1. broad scientific consensus that it will be done safely and controllably, and
  2. strong public buy-in.
  • brucethemoose@lemmy.world · 5 hours ago

    I have respect for some of those researchers from Tsinghua University. Can’t speak to most others without looking up their citations.

    Also funny to see Steve Bannon, Susan Rice, and other folks all in the same pool.

  • Twongo [she/her]@lemmy.ml · 9 hours ago

    lol, lmao even.

    AI developers themselves don’t know what their creations are doing, so they just give them “guardrails,” and the whole process of advancing the technology runs on vibes or on throwing more computing power at it. The latter plateaued very quickly.

    This article looks like it’s part of the illusory AGI hype. The only realistic outcome I see is that researchers hit a brick wall, the technology plateaus or even degrades due to AI cannibalization, and a multi-trillion-dollar industry collapses, leaving the economy in shambles.

    • Perspectivist@feddit.uk · 5 hours ago

      You’re completely missing the point. It honestly sounds like you want them to keep pursuing AGI, because I can’t see any other reason why you’d be mocking the people arguing that we shouldn’t.

      How close to the nuclear bomb do researchers need to get before it’s time to hit the brakes? Is it really that unreasonable to suggest that maybe we shouldn’t even be trying in the first place? I don’t understand where this cynicism comes from. From my perspective, these people are actually on the same side as the anti-AI sentiment I see here every day - yet they’re still being ridiculed just for having the audacity to even consider that we might actually stumble upon AGI, and that doing so could be the last mistake humanity ever makes.

    • brucethemoose@lemmy.world · edited · 4 hours ago

      There’s a distinction between Tech Bro transformers scaling hype and legit AGI research, which existed well before the former.

      Things are advancing extremely rapidly, and an AGI “cap” would be a good idea if it could somehow materialize. But it honestly has nothing to do with Sam Altman and that tech circle, at least not the tech they currently have.

  • justsomeguy@lemmy.world · 13 hours ago

    Always sounds like marketing to me, since all we really have so far are shitty LLMs. If they actually were close to AGI, there’s no way any single government would stop the arms race toward it anyway. We’re currently putting all our eggs into the AI basket, and headlines like these encourage it. They imply a breakthrough is just around the corner that will justify the current state of the market, while the reality will most likely be a massive disappointment.

    • Goodman@discuss.tchncs.de (OP) · 13 hours ago

      I am expecting disappointment, and maybe a few cool technologies that will stick around. I don’t think we’re going to get AGI soon, but even if we did, no one asked for it anyway.

  • itkovian@lemmy.world · 11 hours ago

    We DON’T HAVE the ability or know-how to create Intelligence of any kind. We really don’t understand intelligence.

    • Perspectivist@feddit.uk · 2 hours ago

      Intelligence is a human-made term to describe an abstract phenomenon - it’s not a concrete thing but a spectrum. On one end of that spectrum is a rock: it doesn’t do anything, it just is. On the opposite end lies what we call superintelligence. Somewhere in between are things like a mouse, a sunflower, a human, a large language model, and a dolphin. All of these display some degree of intelligent behavior. It’s not that one is intelligent and another isn’t - it’s that some are more intelligent than others.

      While it’s true that we don’t fully understand how intelligence works, it’s false to say we don’t understand it at all or that we’re incapable of creating intelligent systems. The chess opponent on the Atari is an intelligent system. It can acquire, understand, and use information - which fits one of the common definitions of intelligence. It’s what we call a narrow intelligence, because its intelligence is limited to a single domain. It can play chess - and nothing else.

      Humans, on the other hand, are generally intelligent. Our intelligence spans multiple independent domains, and we can learn, reason, and adapt across them. We are, by our own definition, the benchmark for general intelligence. Once a system reaches human-level intelligence, it qualifies as generally intelligent. That’s not some cosmic law - it’s an arbitrary threshold we invented.

      The reason this benchmark matters is that once something becomes as intelligent as we are, it no longer needs us to improve itself. By definition, we have nothing more to offer it. We’ve raised the tiger cub to adulthood - and now it no longer needs us to feed it. It’s free to feed on us if it so desires.

    • givesomefucks@lemmy.world · 10 hours ago

      Yep.

      They’ve raised billions, maybe trillions, on the promise they can do it, hoping they’d figure it out in time to keep investors happy.

      But they can’t, so this is the proverbial “hold me back bro”.

      Their mouths wrote a check their asses can’t cash, so they’re desperate to save face by making it look like something is preventing them from following through.

  • TheLeadenSea@sh.itjust.works · 13 hours ago

    As if this will stop it being made, even if it is put into law. All it will do is make it happen without legal and public oversight.