Not that I have sources to link, but last I read, the big two providers are making enough money to cover inference costs alone; it's the obscene spending on human talent and compute for training new models that keeps them from turning a profit. If they stopped developing new models, they would likely be making money.
And they are fleecing investors for billions, so big profit in that way lol
Midjourney reported a profit in 2022, and then never reported anything new.
Cursor recently had one month of mad profit: first they hiked the price of their product while basically holding users hostage, then they stopped offering their most successful product because they couldn't afford to sell it at that price. They annualized that month, and now they "make a profit".
Basically, Cursor let everyone drive a Ferrari for a hundred bucks a month. Then they said "sorry, it costs 500 a month". And then they said "actually, instead of a Ferrari, here's a Honda". Then they subtracted the cost of the Honda from the price of the Ferrari and called it a record profit.
This is legal somehow
The companies that were generating reasonable revenue compared to their costs (e.g. Cursor) were the ones buying inference from OpenAI and Anthropic at enterprise rates and selling it to users at retail rates. But OpenAI and Anthropic raised their rates, that cost was passed on to consumers, consumers stopped paying for Cursor and the like, and now those companies are haemorrhaging money.
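To make that wrapper economics point concrete, here is a toy back-of-the-envelope sketch in Python. Every number in it is invented for illustration; none of them are Cursor's, OpenAI's, or Anthropic's real prices or volumes.

```python
# Toy numbers only -- invented for illustration, not real Cursor/OpenAI/Anthropic pricing.
retail_price = 20.00          # what a subscriber pays the wrapper per month ($)
tokens_per_user = 15_000_000  # tokens a heavy user burns per month (assumed)

def monthly_margin(cost_per_million_tokens: float) -> float:
    """Per-user margin for a company reselling someone else's inference."""
    inference_cost = tokens_per_user / 1_000_000 * cost_per_million_tokens
    return retail_price - inference_cost

# While enterprise inference is cheap, the wrapper looks healthy...
margin_before = monthly_margin(cost_per_million_tokens=1.00)
print(f"margin before rate hike: ${margin_before:.2f}/user/month")
print(f"'annualized' profit per user: ${margin_before * 12:.2f}")  # one good month x 12

# ...but if the upstream provider raises its rates, the same subscriber
# becomes a loss, and the wrapper either hikes prices or bleeds money.
margin_after = monthly_margin(cost_per_million_tokens=2.50)
print(f"margin after rate hike:  ${margin_after:.2f}/user/month")
```

The "annualized" line is the same trick as the Ferrari/Honda story above: take one unusually good month and multiply it by twelve.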
The problem is that you do need to keep training models for this to make sense.
And you always need at least some human curation of what goes into the models; otherwise a model will just say whatever, learn from its own output, and degrade over time. That curation can't be done by other AIs, so for now you still need humans to make sure the models are actually getting useful information.
The problem with this, which many have already pointed out, is that it makes AIs just as unreliable as any traditional media. But if you don’t oversee their datasets at all and just allow them to learn from everything then they’re even more useless, basically just replicating social media bullshit, which nowadays is like at least 60% AI generated anyway.
So yeah, the current model is, not surprisingly, completely unsustainable.
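The "learn from itself and degrade" point can be shown on a toy model. The sketch below is not an LLM, just a Gaussian distribution repeatedly refit on its own samples; with no fresh, human-curated data entering the loop, its estimated spread typically collapses over the generations, which is the same failure mode in miniature.

```python
import numpy as np

rng = np.random.default_rng(42)

# "Human" data the first model is trained on.
real_data = rng.normal(loc=0.0, scale=1.0, size=20)
mu, sigma = real_data.mean(), real_data.std()

# Each generation is trained only on samples from the previous generation.
for generation in range(1, 201):
    synthetic = rng.normal(mu, sigma, size=20)  # the model's own output
    mu, sigma = synthetic.mean(), synthetic.std()
    if generation % 40 == 0:
        print(f"generation {generation:3d}: mean={mu:+.3f}, std={sigma:.3f}")

# The std drifts toward zero: the chain forgets the diversity of the
# original data because nothing human-curated is ever fed back in.
```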
The technology itself is great though. Imagine having an AI that you can easily train at home on hundreds of different academic papers, and then run specific analyses or find patterns that would be too large for humans to spot at first. Also imagine the impact on the medical field, with early cancer detection, virus-spread modelling, or even DNA analysis for certain diseases.
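As a rough idea of what "find patterns across hundreds of papers" can look like on a home machine today, here is a minimal sketch that embeds plain-text abstracts locally with the sentence-transformers library and clusters them. The folder name, model choice, and cluster count are placeholders, and this is retrieval plus clustering, not the training or medical applications imagined above.

```python
# Minimal sketch: embed paper abstracts locally and group them by topic.
# Assumes a folder of plain-text abstracts; model name and cluster count
# are arbitrary placeholders, not a validated analysis pipeline.
from pathlib import Path

from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

abstracts = {p.name: p.read_text() for p in Path("abstracts").glob("*.txt")}

model = SentenceTransformer("all-MiniLM-L6-v2")   # small model, runs on a laptop
embeddings = model.encode(list(abstracts.values()), normalize_embeddings=True)

labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(embeddings)

# Print which papers landed in which cluster -- a crude first pass at
# spotting groups of related work a human might then read closely.
for cluster in range(5):
    names = [name for name, label in zip(abstracts, labels) if label == cluster]
    print(f"cluster {cluster}: {names}")
```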
It's also super good if used for creative purposes (not just for generating pictures or music). For example, AI makes it possible for you to sing a song, then sing the melody for every member of a choir, and fine-tune all the voices to make them unique. You can be your own choir, which makes a lot of cool production techniques more accessible.
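The voice-cloning workflow described there needs a trained voice model, which is more than a snippet can show, but the basic "be your own choir" layering idea can be faked without any AI at all: pitch-shift copies of one recorded take and mix them. A crude non-AI stand-in using librosa, with the file name and intervals as placeholders:

```python
# Non-AI stand-in for the "be your own choir" idea: layer pitch-shifted
# copies of a single recorded vocal take. "my_take.wav" is a placeholder.
import librosa
import numpy as np
import soundfile as sf

voice, sr = librosa.load("my_take.wav", sr=None, mono=True)

# Shift the take by a few musical intervals (in semitones) to fake extra singers.
intervals = [0, 4, 7, -5]   # unison, major third, fifth, fourth below
layers = [librosa.effects.pitch_shift(voice, sr=sr, n_steps=n) for n in intervals]

choir = np.sum(layers, axis=0)
choir /= np.max(np.abs(choir))   # normalize so the mix doesn't clip
sf.write("choir.wav", choir, sr)
```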
I believe that once the initial hype dies down, AI stops being used as a cheap marketing tactic, and the bubble bursts, the real benefits of AI will become apparent, and hopefully we will learn to live with it without destroying each other lol.
"Imagine" is the key word. I've actually tried to use LLMs to perform literature analyses in my field, and they're total crap. They produce something that sounds true to someone not familiar with the field, but if you actually have expert knowledge, the LLM just completely falls apart. Imagining is all you can do, because LLMs cannot perform basic literature review and project planning, let alone find patterns in papers that human scientists can't. The emperor has no clothes.
But I don't think that's necessarily a problem that can't be solved. LLMs and the like are ultimately just statistical analysis, and refined and trained enough, they can already summarise at least a single paper. Google's Notebook LM is capable of that today; I just don't think it can quite pull off many papers at once yet. But the current state of LLMs is not that far off.
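For reference, "summarise a single paper" with current tooling is roughly the sketch below: chunk the text, summarise each chunk through an OpenAI-style chat API, then summarise the summaries. The model name, chunk size, and file name are placeholders, and nothing here addresses whether the output holds up to expert scrutiny, which is exactly the objection above.

```python
# Minimal map-reduce summarization sketch over one paper's plain text.
# Model name, chunk size, and file name are placeholders; needs an API key in the env.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder model name

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

paper_text = open("paper.txt").read()   # placeholder file
chunks = [paper_text[i:i + 8000] for i in range(0, len(paper_text), 8000)]

# Map: summarise each chunk. Reduce: summarise the summaries.
chunk_summaries = [ask(f"Summarise this excerpt of a paper:\n\n{c}") for c in chunks]
combined = "\n\n".join(chunk_summaries)
print(ask("Combine these into one summary of the paper:\n\n" + combined))
```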
I agree that AI is way overhyped, and I also have a general dislike for it because of the way it's being used, the people who gush over it, and the surrounding culture. But I don't think that means we should simply ignore reality altogether. The LLMs from two or even one year ago are not even comparable to the ones today, and that trend will probably keep going for a while. The main issues lie with the ethics of training, copyright, and of course the replacement of labor in exchange for what amounts to simply a cool tool.