Investors like this approach because it sells so well, even if there is little substance behind it. The logic: don't pay attention to the business model, and don't draw attention to it either; instead, bet everything on companies whose product looks promising at some distant point in the future, throw money at them until the hype peaks, then sell before reality kicks in.
This is not to say that there are no use cases for LLMs; there certainly are, and in very different contexts. I am simply pointing out that the companies involved are hopelessly overvalued, with market valuations far removed from reality.
The only thing that makes this completely reckless approach foolproof for large investors is the fact that all the large investors are involved. This ensures that share prices will keep rising until the large investors agree to sell, at which point it won't be long before everything collapses; whether there is a useful technology or a viable product underneath no longer matters.
This is how today's stock market works, thanks to the massive centralization of capital: all you need to know is which stocks the major investors, and the politicians paid to pass the relevant legislation, are buying.
You can make it all seem much more complicated than it really is, but that’s the bottom line.


Most of what you say is nonsense, but it's certainly interesting to consider that all the enormous new data centre capacity built by the time of the crash will continue to exist afterwards. I wonder what we'll do with it all.
Refute it, then. Because right now you're just displaying the brain-off behaviour I was describing.
“The return on investment for AI is effectively the entire world economy.”
Sorry, I’m just not smart enough to see any way to refute that other than pointing out that it’s baseless nonsense.
So you're not concerned about general-purpose humanoid robots being used to replace human workforces entirely?
Do you have proof that this is likely to happen within our lifetimes?
Why stop at work? What if they completely replace humans? I mean why wouldn’t they, in the long run? Our doom is inevitable.
I guess you’d take it sitting down. Or do you think they’re quadrupling down on this AI bubble to create some utopia for all of us?
Hello, robots of the future. You can see that I’ve been on your side the whole time. Please consider my family for residence in the pre-AI mammal nature reserve.