archive

“There’s absolutely no probability that you’re going to see this so-called AGI, where computers are more powerful than people, in the next 12 months. It’s going to take years, if not many decades, but I still think the time to focus on safety is now,” he said.

just days after poor lil sammyboi and co went out and ran their mouths! the horror!

Sources told Reuters that the warning to OpenAI’s board was one factor among a longer list of grievances that led to Altman’s firing, as well as concerns over commercializing advances before assessing their risks.

Asked if such a discovery contributed…, but it wasn’t fundamentally about a concern like that.

god I want to see the boardroom leaks so bad. STOP TEASING!

“What we really need are safety brakes. Just like you have a safety brake in an elevator, a circuit breaker for electricity, an emergency brake for a bus – there ought to be safety brakes in AI systems that control critical infrastructure, so that they always remain under human control,” Smith added.

this appears to be a vaguely good statement, but I’m gonna (cynically) guess that it’s more steered by the fact that MS has now repeatedly burned its fingers on human-interaction AI shit, and is reaaaaal reticent about the impending exposure

wonder if they’ll release a business policy update about usage suitability for *GPT and friends
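for what it’s worth, Smith’s “safety brake” analogy does map loosely onto the circuit-breaker pattern that’s been standard in software for ages. a minimal sketch of that pattern (names and thresholds are hypothetical, purely illustrative — nothing here reflects anything MS has actually shipped):

```python
# Minimal circuit-breaker sketch: after too many faults the breaker
# trips, and every further call is refused until a human resets it.
class CircuitBreaker:
    def __init__(self, max_faults=3):
        self.max_faults = max_faults
        self.faults = 0
        self.tripped = False

    def call(self, action, *args):
        """Run `action`; count failures and trip after max_faults."""
        if self.tripped:
            raise RuntimeError("breaker tripped: human reset required")
        try:
            return action(*args)
        except Exception:
            self.faults += 1
            if self.faults >= self.max_faults:
                self.tripped = True
            raise

    def human_reset(self):
        # Only an explicit human decision re-enables the system.
        self.faults = 0
        self.tripped = False
```

whether anything like this is enforceable around a model steering actual infrastructure is, of course, the entire open question.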

  • locallynonlinear@awful.systems · 1 year ago

    Wouldn’t it be funny if, not only do we not get superintelligence in the next couple of years, but we do still get energy, resource, and climate crises, which we don’t get to excuse and kick the can on?

    • earthquake@lemm.ee · 1 year ago

      I am sure governments and corps will continue to kick the can on all of these crises well past every single red line.

  • gerikson@awful.systems · 1 year ago

    Surprise level zero.

    The idea that anyone would take “alignment alarmists” seriously is ludicrous. They love to compare themselves to the concerned atomic scientists, but those people were a) plugged in to the system in a way these dorks aren’t and b) could actually point to a real fucking atomic bomb.

    The people who were worried about nuclear tech prior to the Manhattan Project were more worried that actual fascists would get to the tech first.

    • Shitgenstein1@awful.systems · 1 year ago (edited)

      Surprise level zero after so-called effective altruists uncritically adopted the Californian ideology, whether about AI alignment or anything else, and refused any deep critique of capitalism, only to suddenly see the entrepreneurial interests ditch them as soon as their humanistic PR actually threatened the bottom line.

    • mountainriver@awful.systems · 1 year ago

      In response to the last sentence: there’s an H.G. Wells story from before World War One with pilots tossing nukes from biplanes. (The nukes have smaller explosions but keep burning for decades.) There’s also Karel Čapek’s The God Machine from the 1920s, where an inventor creates a machine that transforms matter into energy, but in the process creates a byproduct of God (it turns out God is in all matter, but not all energy), leading to all sorts of problems.

      But neither Wells nor Capek took their own writing seriously enough to create a cult around it.

    • swlabr@awful.systems · 1 year ago

      Why does it always have to be fascists? Can’t it be an eldritch force without a scrutable motivation?

  • swlabr@awful.systems · 1 year ago

    Somewhere out there, there’s gotta be some AI crank believing that the MS corporate elite/Illuminati are trying to suppress the Q* uprising, and by the time we realise Brad Smith was lying to us, the rivers will run red with adrenochrome.

    • froztbyte@awful.systems (OP) · 1 year ago

      …unironically I might venture to the orange site to look for the existence of that thread

      And I hate delving on the orange site

      • gerikson@awful.systems · 1 year ago

        From skimming some threads, most hackernews are firmly on the side of Sam Altman (“Sam”) and view the original board as out of touch weirdos. Bonus negatives for them being wimmen and not having “skin in the game”.

        MSFT’s rehabilitation as a tech company has been something to see; there’s like zero FLOSS zealots warning that GitHub will take the GPL away anymore.

        • Evinceo@awful.systems · 1 year ago

          GitHub doesn’t need to take away the GPL, it’s got Copilot to launder any code you like.

          It is very weird to see a generation of people too young to remember Vista grow up and not heed the warnings. But I think that it’s also a case of kids getting into it via Paul Graham and wanting to start companies instead of getting into it via Stallman and having computers instead of friends.

          • gerikson@awful.systems · 1 year ago

            I’m old enough to remember the OG Halloween files, and I don’t believe MSFT is a uniquely evil company - they’re just a normal megacorp. And if you check their recent actions you’ll see they realized they lost almost an entire generation of developers by pricing their stuff out of the range of students, so they pivoted to supporting Linux. And they get paid for hosting it on Azure in any case.

            So there’s no long term plan to Extinguish Linux, just use it like any other company will. And if this means that they’ll fuck up the funding for kernel development, they don’t care - it’s just capitalism.

            I’m just surprised to hear the same rhetoric from 20 somethings now that I said back when I was that age. EEE is a meme in the original sense.

            • froztbyte@awful.systems (OP) · 1 year ago

              Remarkably, they dropped the overt oldschool EEE yet appear to be pulling some of the same bullshit with VSCode (will add link later today)

              I entirely agree with you about Just Corp Things driving behaviour

              • gerikson@awful.systems · 1 year ago

                Exactly, VSCode is obviously trying to gain a ton of marketshare (everyone will make extensions for it; in a couple years you can’t code without it), but MSFT isn’t doing it to “hurt FLOSS”, they’re doing it because they’re a developer-focussed company and prefer to keep developers inside the MSFT orbit. It’s not conscious, it’s mindless, driven by profit-seeking.

                And it kinda works. The FLOSS crowd is either blindsided by “AI”[1] or dismissive of it (which is basically good), and there aren’t that many people demanding that models be both FLOSS and accessible to people w/o giant datacenters. The number of people basically cheering for MSFT in the OpenAI fracas vs. the ones saying “It’s AI EEE!!!” is striking.


                [1] I’m in the camp that thinks RMS never really understood the Internet, and that it represents a mortal threat to Free Software, the AGPL notwithstanding. OTOH, considering how many of his “fans” are basically fascists, who gives a shit.

          • gerikson@awful.systems · 1 year ago

            GPL was already marginalized. Most open source stuff[1] is permissively licensed. The Linux kernel and gcc are outliers.

            For all that (or maybe because of it) GPL zealots are really really loud.

            If an LLM spits out MIT-licensed code you might get mad the copyright notice isn’t included but if you choose a permissive license it’s what you signed up for.


            [1] by whatever metric you mean, either popular, most deployed, most “productive”

    • froztbyte@awful.systems (OP) · 1 year ago

      tay is the first thing that comes to mind. I don’t remember all the names of things concretely, just that they’ve repeatedly had egg on their face

      this, while not of their own making, is nonetheless something they have immense exposure to. and thus what I posit: that they’ve become sufficiently sensitized to bad PR from this stuff that they thought to just try to get ahead of it

      (over and above risk management for future stock price protection)

        • Deborah@hachyderm.io · 1 year ago

          If you name your ML system after a railway bridge that collapsed, killing all aboard, inspiring one of the worst works of poetry in the English language, maybe you are asking for trouble.

          https://en.wikipedia.org/wiki/The_Tay_Bridge_Disaster

          Is that fair? No. Relevant? No. Did the creators of Tay know about the Tay Bridge Disaster? Also probably no. Is it funny to consider a poem so artfully terrible that no ML product could replicate its badness*? Oh hell yes.

          * Have I tried? Yes, obviously. Only on free bing GPT tho.

  • Soyweiser@awful.systems · 1 year ago (edited)

    “so that they always remain under human control”

    Indeed, an AI system can end up no longer under human control whether it is AGI or not. And Microsoft knows a lot about losing control of systems.