Not something I believe full stop, but IMO there are signs that, should there be a bubble, it will pop later than we might think. A few things for consideration.

Big tech continues to invest. They are greedy. They aren’t stupid. They have access to better economic forecasting than we do. I believe they are aware of markets for the /application/ of AI that will continue to be profitable in the future. Think of how many things are pOwErEd By ArTiFiCiAl InTeLlIgEnCe. That’s really marketing-speak for “we pay for API tokens.”

Along these lines comes the stupid. Many of us have bosses who insist, if not demand, that we use AI. The US Secretary of Defense had his own obnoxious version of this earlier this week. If the stupid want it, the demand will remain, if not increase.

Artificial intelligence is self-reinforcing: feed it whatever stupid queries we make and it will “get better” at those specifics and “create more versions.” This creates further reliance on, and demand for, the products that “do exactly what we want.” It’s an opiate. Like that one TNG episode with the headsets (weak allusion and shameless pandering, I know).

IMO generative AI is a dead end that will only exacerbate existing inequity. That doesn’t mean there won’t continue to be tremendous buy-in, which will warp our collective culture to maintain its profitability. If the bubble bursts, I don’t think it will be for a while.

  • hendrik@palaver.p3x.de · 16 days ago

    I think it’s a bit more complicated than that, and I’m not sure I’d call it no big deal… You’re certainly right that it’s impressive what they can do with synthetic data these days. But as far as I’m aware, that’s mostly used to train substantially smaller models on the output of bigger models. I think it’s called distillation? I haven’t read any paper revising the older findings on synthetic data.

    And to be honest, I think we want the big models to improve, and not just by a few percent each year, like what OpenAI manages these days… We’d need to make them something like 10x more intelligent and less likely to confabulate answers, so they start becoming reliable and usable for tasks like proper coding. And with the exponential need for more training data, we’d probably need many times the internet plus all human-written books to go in, just to make them two or five times better than they are today. So it needs to work with mostly synthetic data. And I’m not sure that even works.

    Can we even make more intelligent newer models learn from the output of their stupider predecessors? With humans, we mostly learn from people who are more intelligent than us; it’s rarely the other way round. And I don’t see how language is like chess, where an AI can just play a billion games and learn from that. That’s not really how LLMs work.
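
For anyone unfamiliar with the distillation mentioned above: the usual recipe is to train a small “student” model to match the softened output distribution of a big, frozen “teacher” model. Below is a minimal sketch of that soft-label loss in PyTorch; the shapes, random logits, and temperature value are made-up placeholders, not a real training setup.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft-label distillation loss: KL divergence between the teacher's
    and the student's temperature-softened output distributions."""
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_student, soft_teacher,
                    reduction="batchmean") * temperature ** 2

# Toy usage: a batch of 4 positions over a 10-token vocabulary.
teacher_logits = torch.randn(4, 10)                      # frozen big model's output
student_logits = torch.randn(4, 10, requires_grad=True)  # small model's output
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()  # gradients flow only into the student
print(f"distillation loss: {loss.item():.4f}")
```

The point of the temperature is that the teacher’s near-zero probabilities still carry signal about which wrong answers are “less wrong,” which is part of why a small student can learn more from a big teacher’s outputs than from raw labels alone.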