As soon as a credible bad-news story breaks. For example: the LLM architecture shows its limits without a way forward, new laws make further AI research unfeasible, or multiple big players abandon LLM research.
Something like that from a trusted news source would burst the bubble.
I'd even say that most people in AI know it's a bubble; they're just gambling that AI reaches AGI or even ASI before the bubble bursts.
If? They've already shown their limits, and we don't have a way forward.