Investors like this approach because it sells well even if there is little substance behind it. The logic: ignore the business model and don't draw attention to it, bet everything on companies whose product looks promising at some point far in the future, pour money in until the hype peaks, then sell before reality kicks in.
This is not to say that there are no use cases for LLMs; there certainly are, and in very different contexts. I am simply pointing out that the companies involved are hopelessly overvalued, with market valuations far removed from reality.
What makes this otherwise reckless approach nearly foolproof for large investors is that all of the large investors are involved. That keeps share prices rising until those investors decide to sell, at which point the collapse follows quickly; whether a useful technology or a viable product exists no longer matters at that stage.
Because capital is so heavily centralized, this is how today's stock market works: all you need to know is which stocks the major investors, and the politicians paid to pass the relevant legislation, are buying.
You can make it all seem much more complicated than it really is, but that’s the bottom line.


Do you have any sources with figures that would support this? To be honest, I have my doubts, except for the claim that money is being shifted back and forth; even then, I don't see why massive investments in data centers would make sense in that scenario unless the point is simply to generate profits for Nvidia and the like.
As I said, I don't consider LLMs and image generation to be technologies without use cases. I'm simply saying that the impact of these technologies is being significantly and quite deliberately overstated. Take so-called AI agents, for example: they're useful in practice, but miles away from how they're being sold.
Furthermore, even OpenAI is very far from being in the black, and given the considerable costs involved I consider it highly doubtful that it ever will be. In my view, the only remaining option would be to monetize through advertising, the business model of classic Google search, but that would come at a real cost to the product's value for users.
So you gotta understand, I'm a history buff with a financial background who dabbles in cybersecurity. So this is me speculating based on my own view.
Can you be more specific? I want to give you a high quality response when I have time.
Thank you, I really appreciate that.
Figures and/or examples would be very interesting for:
The statement that LLMs will continue to develop rapidly and/or that their output quality will still improve significantly. I currently assume that development will slow down considerably; hallucinations are one example, where it was assumed for some time that the problem could be solved with more extensive training data, but that has proven to be a dead end.
The statement that the valuations of the companies involved can be justified in any way by real-world assets. Or, at any rate, reliable statements about how existing or planned data centers built for this purpose can be operated economically despite their considerable running costs.
How you justify your statement that replacing human workers on a large scale is realistic. Concrete examples would be interesting: not figures on layoffs, but companies where human work has actually been made obsolete by LLMs. I am not aware of any case where this has happened on a significant scale and can be attributed to the use of LLMs.
I am aware that the technology is being used in warfare. I am not aware of its significance or the tactical advantages it is supposed to offer. Please provide examples of what you mean.