• Pohl@lemmy.world · +53/−2 · 11 months ago

    If you ever needed a lesson in the difference between power and authority, this is a good one.

    The leaders of this coup read the rules and saw that they could use the board to remove Altman, they had the authority to make the move and “win” the game.

    It seems that they, like many fools, mistook authority for power. The “rules” said they could do it! Alas, they did not have the power to execute the coup. All the rules in the world cannot make the organization follow you.

    Power comes from people who grant it to you. Authority comes from paper. Authority is the guideline for the use of power; without power, it is pointless.

  • ribboo@lemm.ee · +51/−3 · 11 months ago

    It’s rather interesting that the board, which has a fairly strong scientific presence and not much of a commercial one, is getting such hate.

    People are quick to jump on for-profit companies that do everything in their power to earn a buck. Well, here you have a company that fires its CEO for going too far in the direction of earning money.

    Yet everyone is up in arms over it. We can’t have our cake and eat it too, folks.

    • PersnickityPenguin@lemm.ee · +8/−1 · 11 months ago

      Sounds like the workers all want to end up with highly valued stocks when it goes IPO. Which is, and I’m just guessing here, the only reason anyone is doing AI right now.

    • archomrade [he/him]@midwest.social · +5 · 11 months ago

      Well, here you have a company that fires their CEO for going too much in the direction of earning money.

      I think this is very much in question by the people who are up in arms.

      • ribboo@lemm.ee · +6/−1 · 11 months ago

        Altman went to Microsoft within 48 hours; does anything else really need to be said? Add to that the fact that basically every news outlet has reported, with different sources, that he was pushing in exactly that way. There’s very little to suggest that reality is any different.

    • theneverfox@pawb.social · +5 · 11 months ago

      This was my first thought… But then why are the employees taking a stand against it?

      There’s got to be more to this story

    • knotthatone@lemmy.one · +3 · 11 months ago

      What we have here is a company that fired its CEO for vague and cryptic reasons, and a whole lot of speculation about what the real issue was. These are their own words:

      https://openai.com/blog/openai-announces-leadership-transition

      I’m not trying to defend Altman or the altruism of Microsoft, but I would like to understand why this firing happened and why it was done in such an abrupt and dramatic manner.

    • Rooskie91@discuss.online · +1 · 11 months ago

      I’m sure some amount of the negative press is propaganda from corporations who would like to profit from using AI and are somehow prevented from doing so by OpenAI’s model.

    • TurtleJoe@lemmy.world · +0 · 11 months ago

      It’s my opinion that every single person in the upper levels of this organization is a maniac. They are all a bunch of so-called “rationalist” tech-right AnCaps who justify their immense incomes through the lens of Effective Altruism, the same ideology Sam Bankman-Fried used to justify his theft of billions from his customers.

      Anybody with the urge to pick a “side” here ought to think about taking a step back and reconsider; they are all bad people.

  • Even_Adder@lemmy.dbzer0.com · +44 · 11 months ago

    You’re not going to develop AI for the benefit of humanity at Microsoft. If they go there, we’ll know "Open"AI’s mission was all a lie.

    • Gork@lemm.ee · +7 · 11 months ago

      Yeah Microsoft is definitely not going to be benevolent. But I saw this as a foregone conclusion since AI is so disruptive that heavy commercialization is inevitable.

      We likely won’t have free access like we do now; it will be enshittified like everything else, and we’ll need to pay yet another subscription just to access it.

      • dustyData@lemmy.world · +8/−1 · 11 months ago

        You don’t have free access. The best models have always been safeguarded behind paywalls; you have access to parlor tricks and demo shows. This product was born enshittified. It’s crap that only has passable uses for megacorporations.

        • Gork@lemm.ee · +1 · 11 months ago

          For a while we did, with ChatGPT 3.5 before 4.0 came out. I’m not sure what to make of Bing’s AI, since Microsoft has ulterior motives and it’s likely a demo of their ultimate form.

      • Even_Adder@lemmy.dbzer0.com · +1 · 11 months ago

        The way I understand it, Microsoft gave OpenAI $10 billion but didn’t get any votes. They had no say in OpenAI’s affairs.

        • Alto@kbin.social · +2 · 11 months ago

          On paper, sure. But they gave them $10B; they absolutely have some sort of voice here.

    • sab@kbin.social · +2 · 11 months ago

      And if they don’t, we’re supposed to keep on believing all of this is somehow benefiting us?

  • Sanyanov@lemmy.world · +36 · 11 months ago

    Ain’t this simply curtain drama for a de facto acquisition of OpenAI by Microsoft, circumventing potential legal issues?

    This started months ago.

  • SeaJ@lemm.ee · +25/−3 · 11 months ago

    505 employees will put money over ethics.

    • GreenM@lemmy.world · +5 · 11 months ago

      Or they’ve made enough, and got the same or a better offer, to be able to risk it at MS.

    • 4L3moNemo@programming.dev · +2 · 11 months ago

      An odd error for the company, indeed. • 505 HTTP Version Not Supported

      Just one vote short of • 506 Variant Also Negotiates

      Guess they’re stuck now. :D

        • reksas@sopuli.xyz · +5 · edited · 11 months ago

          This is actually extremely critical work if the results are going to be used by AIs that see wide use. It essentially determines the “moral compass” of the AI.

          Imagine if some big corporation did the labeling, trained some huge AI with that data, and it became widely used. Then years pass, and eventually AI develops to such an extent that it can reliably replace entire upper management. Suddenly, becoming a slave to an “evil” AI overlord starts to move from a crazy idea to a plausible one (years and years in the future, not now, obviously).

          • ColdFenix@discuss.tchncs.de · +4 · 11 months ago

            Extremely critical, but mostly done by underpaid workers in poor countries who have to look at the most horrific stuff imaginable and develop lifelong trauma, because it’s the only job available and otherwise they and their families might starve (source). This is one of the main reasons I have little hope that, if OpenAI actually manages to create an AGI, it will operate in an ethical way. How could it, if the people trying to instill morality into it are so lacking in it themselves?

            • reksas@sopuli.xyz · +1 · 11 months ago

              True. Though it’s horrible for those people, they might be doing more important work than they, or we, even realize. I also somewhat trust the moral judgement of the oppressed more than the oppressor’s (since they are the ones doing the work). Though I’m definitely not condoning the exploitation of those people.

              It’s quite awful that this seems to be the best we can hope for here. I doubt Google or Microsoft, when they do their own labeling, are going to give very positive guidance on whether it’s OK for people to suffer if it leads to more money for investors.

        • smooth_tea@lemmy.world · +0 · 11 months ago

          I really find this a bit alarmist and exaggerated. Consider the motive and the alternative: do you really think companies like that have any option other than to deal with those things?

        • SacrificedBeans@lemmy.world · +0 · 11 months ago

          I’m sure there’s some loophole there, maybe between countries’ laws. And if there isn’t, Hey! We’ll make one!

        • Clbull@lemmy.world · +0 · 11 months ago

          Isn’t CSAM classed as images and videos which depict child sexual abuse? Last time I checked, written descriptions alone did not count, unless they were being forced to look at AI-generated images from prompts describing such acts?

          • Strawberry@lemmy.blahaj.zone · +0 · 11 months ago

            That month, Sama began pilot work for a separate project for OpenAI: collecting sexual and violent images—some of them illegal under U.S. law—to deliver to OpenAI. The work of labeling images appears to be unrelated to ChatGPT.

            This is the quote in question. They’re talking about images.

    • Clbull@lemmy.world · +0 · edited · 11 months ago

      So they paid Kenyan workers $2 an hour to sift through some of the darkest shit on the internet.

      Ugh.

    • GenesisJones@lemmy.world · +0 · 11 months ago

      This reminds me of an NPR podcast from 5 or 6 years ago about the people who get paid by Facebook to moderate the worst of the worst. They had a former employee giving an interview about the manual review of images that were CP- and rape-related, iirc. Terrible stuff.

  • conditional_soup@lemm.ee · +16 · 11 months ago

    I’d like to know why exactly the board fired Altman before I pass judgment one way or the other, especially given the mad rush by the investor class to re-instate him. It makes me especially curious that the employees are sticking up for him. My initial intuition was that MSFT convinced Altman to cross bridges that he shouldn’t have (for $$$$), but I doubt that a little more now that the employees are sticking up for him. Something fucking weird is going on, and I’m dying to know what it is.

    • los_chill@programming.dev · +10/−1 · edited · 11 months ago

      Altman wanted profit. The board prioritized (rightly, and true to their mission) responsible, non-profit stewardship of AI. Employees now side with Altman out of greed and view the board as denying them their mega payday. Microsoft is dangling jobs for employees wanting to jump ship and make as much money as possible. This whole thing seems pretty simple: greed (Altman, Microsoft, employees) vs. the original non-profit mission (the board).

      Edit: spelling

      • CoderKat@lemm.ee · +4 · 11 months ago

        That’s what I thought it was at first too. But regular employees aren’t usually all that interested in their company being profit driven. Especially AI researchers. Most of those that I know are extremely passionate about ethics in AI.

        But do they know things we don’t know? They certainly might. Or it might just be bandwagoning or the likes.

        • los_chill@programming.dev · +4/−1 · 11 months ago

          But regular employees aren’t usually all that interested in their company being profit driven. Especially AI researchers. Most of those that I know are extremely passionate about ethics in AI.

          I would have thought so too of the employees, but threatening a move to Microsoft kinda says the opposite. That or they are just all-in on Altman as a person.

    • Ullallulloo@civilloquy.com · +2 · 11 months ago

      The only explanation I can come up with is that the workers and Altman both agree on monetizing AI as much as possible. They’re worried that if the board doesn’t resign, the company will remain a non-profit that is more conservative about selling its products, so they won’t get their share of the money that could be made.

      • Melt@lemm.ee · +1 · 11 months ago

        The tone of the blog post is so amateurish I feel like I’m reading a reddit post on r/Cryptocurrency

      • conditional_soup@lemm.ee · +0 · 11 months ago

        Thanks for sharing. That is… weird in ways I didn’t anticipate. “Weird cult of pseudointellectuals upending the biggest name in Silicon Valley” wasn’t on my bingo board.

        • FaceDeer@kbin.social · +0 · 11 months ago

          IMO there are some good reasons to be concerned about AI, but those reasons are along the lines of “it’s going to be massively disruptive to the economy and we need to prepare for that to ensure it’s a net positive”, not “it’s going to take over our minds and turn us into paperclips.”

          • diablexical@lemm.ee · +1 · 11 months ago

            The author did a poor job of explaining that. He’s referencing the thought experiment of a businessman instructing a super-effective AI to make paperclips. Given a terse enough objective and an effective enough AI, one can imagine a scenario in which the businessman, and in fact the whole world, is turned into paperclips. That is obviously not the businessman’s goal, but it was the instruction he gave the AI. The implication of the thought experiment is that AI needs guardrails, perhaps even ethics, or it can unintentionally bring about a doomsday scenario.
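The thought experiment above can be sketched in a few lines of toy code; everything here (the function name, the resource counts, the "guardrail") is purely illustrative and not taken from any real AI system:

```python
# Toy sketch of the paperclip-maximizer idea: an optimizer given only
# "maximize paperclips" consumes every available resource, while the
# same optimizer with a simple guardrail (a resource floor it may not
# cross) stops before everything is converted.

def run_optimizer(resources: int, reserve_floor: int = 0) -> tuple[int, int]:
    """Greedily turn resources into paperclips until the floor is reached."""
    paperclips = 0
    while resources > reserve_floor:  # the guardrail is the ONLY restraint
        resources -= 1                # consume one unit of "the world"
        paperclips += 1
    return paperclips, resources

# Unconstrained objective: the whole "world" becomes paperclips.
clips, left = run_optimizer(resources=1000)           # -> (1000, 0)

# Same objective plus a guardrail: most of the world survives.
clips_safe, left_safe = run_optimizer(1000, reserve_floor=900)  # -> (100, 900)
```

The point of the sketch is that nothing in the objective itself tells the optimizer to stop; the restraint has to be added from outside, which is what the comment means by "guardrails."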

  • Eager Eagle@lemmy.world · +10 · edited · 11 months ago

    Wasn’t Ilya the one who gave Altman the news that he was fired? I read it as him siding with the board at first.

    Edit:

    Ilya posted this on Twitter:

    “I deeply regret my participation in the board’s actions. I never intended to harm OpenAI. I love everything we’ve built together and I will do everything I can to reunite the company.”

  • PatFusty@lemm.ee · +17/−7 · edited · 11 months ago

    Wow, this is the biggest show of dick-ridership I have probably ever seen. Why do they want this CEO at the helm so badly?

    • Mereo@lemmy.ca · +8 · 11 months ago

      On the contrary, AI specialists are in high demand and will be hired by Google, Microsoft and other companies within minutes.

        • uphillbothways@kbin.social · +2 · 11 months ago

          Dude, it literally says that right in the letter…

          Microsoft has assured us that there are positions for all OpenAI employees at this new subsidiary should we choose to join.

          Did you not even read it?

    • tbird83ii@kbin.social · +1 · 11 months ago

      And into the open arms of Microsoft’s new division… Which has, not surprisingly, 505 new open positions…

  • CorneliusTalmadge@lemmy.world · +4 · 11 months ago

    Image Text:

    To the Board of Directors at OpenAI,

    OpenAI is the world’s leading AI company. We, the employees of OpenAI, have developed the best models and pushed the field to new frontiers. Our work on AI safety and governance shapes global norms. The products we built are used by millions of people around the world. Until now, the company we work for and cherish has never been in a stronger position.

    The process through which you terminated Sam Altman and removed Greg Brockman from the board has jeopardized all of this work and undermined our mission and company. Your conduct has made it clear you did not have the competence to oversee OpenAI.

    When we all unexpectedly learned of your decision, the leadership team of OpenAI acted swiftly to stabilize the company. They carefully listened to your concerns and tried to cooperate with you on all grounds. Despite many requests for specific facts for your allegations, you have never provided any written evidence. They also increasingly realized you were not capable of carrying out your duties, and were negotiating in bad faith.

    The leadership team suggested that the most stabilizing path forward - the one that would best serve our mission, company, stakeholders, employees and the public - would be for you to resign and put in place a qualified board that could lead the company forward in stability. Leadership worked with you around the clock to find a mutually agreeable outcome. Yet within two days of your initial decision, you again replaced interim CEO Mira Murati against the best interests of the company. You also informed the leadership team that allowing the company to be destroyed “would be consistent with the mission.”

    Your actions have made it obvious that you are incapable of overseeing OpenAI. We are unable to work for or with people that lack competence, judgement and care for our mission and employees. We, the undersigned, may choose to resign from OpenAI and join the newly announced Microsoft subsidiary run by Sam Altman and Greg Brockman. Microsoft has assured us that there are positions for all OpenAI employees at this new subsidiary should we choose to join. We will take this step imminently, unless all current board members resign, and the board appoints two new lead independent directors, such as Bret Taylor and Will Hurd, and reinstates Sam Altman and Greg Brockman.

    1. Mira Murati
    2. Brad Lightcap
    3. Jason Kwon
    4. Wojciech Zaremba
    5. Alec Radford
    6. Anna Makanju
    7. Bob McGrew
    8. Srinivas Narayanan
    9. Che Chang
    10. Lilian Weng
    11. Mark Chen
    12. Ilya Sutskever