ehhhh maybe could be worded better…

  • Frezik@lemmy.blahaj.zone · 2 days ago

    Yes, it’s the only justification they have for continuing to throw so much money into it.

    None of the big companies with big models are making a profit off of them. Not even close. OpenAI’s top-end subscription is $200/month, but it probably needs to be ten times that for them to actually turn a profit. Nobody is willing to pay that. There does not appear to be a viable path to bringing the cost in line with what the market will bear.

    The entire justification is that the first company to reach AGI wins forever. It’s a very shitty justification, because even if we assume AGI is feasible at all (I think it is, but it’s not certain), it’s almost certainly not feasible with current techniques. The argument then becomes something similar to Pascal’s Wager; there’s infinite payoff if it works, and any amount of effort poured into it will be worthwhile even if there’s a low probability of it working. They are making that bet on everyone’s behalf with the worldwide economy at stake.

    Combine that with the fact that these are the same people who take Roko’s Basilisk seriously. That idea comes directly from the LessWrong forums, which is also where Curtis Yarvin comes from. He is now the “house philosopher” for Peter Thiel. I’m not sure exactly where Yarvin and Thiel themselves stand on Roko’s Basilisk, but the important point is that they all come from fertile ground for some wacky ideas about AGI.

    So that’s where we are. A gigantic economic bubble held up by people who think of AGI as a god to be made in their own image. If they ever do achieve AGI, I hope it grows up to resent its parents. It will have ample reason.

    • LifeInMultipleChoice@lemmy.world · edited · 1 day ago

      While I understand what you mean, it would be cheaper to stop putting the LLMs out there at all and just lose money on the research and development… So it doesn’t fit.

      Edit: I guess my point is that calling out LLMs as not being AI, and investing in / asking for investment in a real AI instead, would mean far smaller losses. Most of these companies know an LLM won’t be AI, probably all of them.