ehhhh maybe could be worded better…

  • Triumph@fedia.io · 25 points · 2 days ago

    The unintended message here is that someone else put a whole lot of work into mining actual diamonds and cutting them into jewels, only for thieves to tunnel in and steal them.

  • LifeInMultipleChoice@lemmy.world · 12 points · 2 days ago

    Are many of them really trying for actual artificial intelligence? I don’t follow AI research. I just see LLMs being shoved into everything and people being given the ability to make their own cat pictures.

    • Frezik@lemmy.blahaj.zone · English · 5 points · 2 days ago

      Yes, it’s the only justification they have for continuing to throw so much money into it.

      None of the big companies with big models are making profit off of them. Not even close. OpenAI’s top end subscription is $200/month, but it probably needs to be ten times that for them to actually make a profit. Nobody is willing to pay that. There does not appear to be a viable path to bringing the cost in line with what the market would bear.

      The entire justification is that the first company to reach AGI wins forever. It’s a very shitty justification, because even if we assume AGI is feasible at all (I think it is, but it’s not certain), it’s almost certainly not feasible with current techniques. The argument then becomes something similar to Pascal’s Wager; there’s infinite payoff if it works, and any amount of effort poured into it will be worthwhile even if there’s a low probability of it working. They are making that bet on everyone’s behalf with the worldwide economy at stake.

      Combine that with the fact that these are the same people who take Roko’s Basilisk seriously. That idea comes directly from the LessWrong forums, which is also where Curtis Yarvin comes from. He is now the “house philosopher” for Peter Thiel. I’m not sure exactly where Yarvin and Thiel themselves stand on Roko’s Basilisk, but the important point is that they all come from fertile ground for some wacky ideas about AGI.

      So that’s where we are. A gigantic economic bubble held up by people who think of AGI as a god to be made in their own image. If they ever do achieve AGI, I hope it grows up to resent its parents. It will have ample reason.

      • LifeInMultipleChoice@lemmy.world · 1 point · edited · 1 day ago

        While I understand what you mean, it would be cheaper to stop putting the LLMs out there and lose money only on research and development… so it doesn’t fit.

        Edit: I guess my point is that shitting on LLMs by saying they aren’t AI, and investing in (or asking for investment in) a real AI instead, would mean far smaller losses. Most of these companies know an LLM won’t be AI, probably all of them.

  • pr06lefs@lemmy.ml · 8 points · 2 days ago

    Replace diamonds with 99% of the human race being fed into a wood chipper and you’re almost there.

      • cecilkorik@piefed.ca · English · 5 points · 2 days ago

        That’s probably what they believe, yes. But their version of “the environment” will be sanitized, sterilized, stripped of anything natural, wild, or unpleasant, and reconstructed in small patches from an idealized image of it to create “sustainable green ecosystems” of decorative and cosmetic value, carefully balancing the demands of aesthetic beauty against functional habitat for the small subset of life forms they have decided to preserve and care about. They’ll turn Earth into a collection of novelty terrariums on a planetary scale for the surviving billionaire space-cowboys and their disciples to return to when they need a reminder of what they imagine “home” must’ve been like before their tech-utopia arrived and they became functionally immortal.

        These people cannot accept and do not value anything unless they absolutely control it. It is ironic that people think they are going to create AGI and then give it the freedom to take over the world. The only way they’ll do that is by accident, if they somehow lose their obsessive, neurotic control over it. Granted, they are certainly not anywhere near as smart as they think they are, and such an accident seems entirely plausible if anything resembling true AGI were ever actually developed. Fortunately, I think we’re a really long way from that.

  • Bjarne@feddit.org · 6 points · edited · 20 hours ago

    99% of prompters quit before the ultimate world-peace-bringing, cancer-curing prompt.