Not something I believe full stop, but IMO there are signs that, should there be a bubble, it will pop later than we might think. A few things for consideration.

Big tech continues to invest. They are greedy. They aren’t stupid. They have access to better economic forecasting than we do. I believe they are aware of markets for the /application/ of AI which will continue to be profitable in the future. Think of how many things are pOwErEd By ArTiFiCiAl InTeLlIgEnCe. That’s really shorthand for “we pay for API tokens.”

Along these lines comes the stupid. Many of us have bosses who insist, if not demand, that we use AI. The US Secretary of Defense had his own obnoxious version of this earlier this week. If the stupid want it, the demand will remain, if not increase.

Artificial intelligence is self-replicating, meaning if we feed it with whatever stupid queries we make, it will “get better” at the specifics and “create more versions”. This creates further reliance on, and demand for, those products that “do exactly what we want”. It’s an opiate. Like that one TNG episode with the headsets (weak allusion and shameless pandering, I know).

IMO generative AI is a dead end which will only exacerbate existing inequity. That doesn’t mean there won’t continue to be tremendous buy-in, which will warp our collective culture to maintain its profitability. If the bubble bursts, I don’t think it will be for a while.

  • hendrik@palaver.p3x.de · 16 days ago

    Mmhh, I don’t think AI is self-replicating. We have papers detailing how it gets stupider after being fed its own output. So it needs external, human-written text to learn. And that’s in limited supply.

    Reinforcement learning with human feedback is certainly a thing, but I don’t think that feedback changes it substantially. There’s a bit of fine-tuning that happens with user feedback, but not much.

    And I mean Altman, Zuckerberg, etc. just say whatever gets them investor money. It’s not like they have a crystal ball and can tell whether there is going to be some scientific breakthrough in 2029 which is going to solve the scaling problem. They’re just charismatic salesmen, and people like to throw money on top of huge piles of money… And we have some plain crazy people like Musk and Peter Thiel. But I really don’t think there’s some advanced forecasting involved here. It’s good old hype. And maybe the technology really has some potential.

    Agree on the whole, will pop later than we think.

    • nymnympseudonym@piefed.social · 16 days ago

      papers detailing how it gets stupider after being fed its own output

      “Model collapse”

      Turns out to be NBD; you just have to be careful with both the generated outputs and the mathematics. All major models are pretrained on synthetic (AI-generated) data these days.
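
      A rough sketch of what “being careful” can look like in practice (names like quality_score are hypothetical illustrations, not from any particular paper): filter the generated samples by some quality signal and cap their share of the training mix, rather than training on raw model output alone.

      # Hypothetical sketch: mix quality-filtered synthetic text with real,
      # human-written text at a fixed ratio, instead of training on raw
      # model output alone.
      import random

      def build_training_mix(real_texts, synthetic_texts, quality_score,
                             min_quality=0.7, synthetic_fraction=0.3):
          """Return a shuffled corpus of real + filtered synthetic text.

          quality_score: callable giving a float in [0, 1] for one text,
          e.g. a classifier or reward model (assumed to exist here).
          """
          # Keep only synthetic samples that pass the quality filter.
          kept = [t for t in synthetic_texts if quality_score(t) >= min_quality]

          # Cap synthetic data so it never exceeds the target fraction
          # of the combined corpus.
          cap = int(len(real_texts) * synthetic_fraction / (1 - synthetic_fraction))
          kept = kept[:cap]

          corpus = real_texts + kept
          random.shuffle(corpus)
          return corpus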

      • hendrik@palaver.p3x.de · 16 days ago

        I think it’s a bit more complicated than that, and I’m not sure I’d call it no big deal… You’re certainly right that it’s impressive what they can do with synthetic data these days. But as far as I’m aware, that’s mostly used to train substantially smaller models from the output of bigger models. I think it’s called distillation? I haven’t read any paper revising the older findings with synthetic data.

        And to be honest, I think we want the big models to improve. And not just by a few percent each year, like what OpenAI manages these days… We’d need to make them something like 10x more intelligent and less likely to confabulate answers, so they start becoming reliable and usable for tasks like proper coding. And with the exponential need for more training data, we’d probably need many times the internet and all human-written books to go in just to make them two or five times better than they are today. So it would need to work with mostly synthetic data.

        And then I’m not sure that even works. Can we even make more intelligent newer models learn from the output of their stupider predecessors? With humans, we mostly learn from people who are more intelligent than us; it’s rarely the other way round. And I don’t see how language is like chess, where AI can just play a billion games and learn from that. That’s not really how LLMs work.
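
        For what it’s worth, “distillation” here usually means training a small student model to imitate a big teacher model’s output distribution. A minimal sketch of that idea in PyTorch, assuming placeholder student/teacher modules and data loading (not any lab’s actual recipe):

        # Minimal distillation sketch: the student learns to match the
        # teacher's softened output distribution (KL divergence).
        import torch
        import torch.nn.functional as F

        def distillation_loss(student_logits, teacher_logits, temperature=2.0):
            """KL divergence between softened teacher and student outputs."""
            student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
            teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
            # Scale by T^2 so gradients keep comparable magnitude across temperatures.
            return F.kl_div(student_log_probs, teacher_probs,
                            reduction="batchmean") * temperature ** 2

        def train_step(student, teacher, batch, optimizer):
            with torch.no_grad():
                teacher_logits = teacher(batch)   # the big model's "knowledge"
            student_logits = student(batch)       # the small model being trained
            loss = distillation_loss(student_logits, teacher_logits)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            return loss.item()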