Ah they’re learning from the “unlimited” mobile carriers.
“Unlimited” until you meet your limit, then throttled.
Imagine the price hikes when they need to get that return on hundreds of billions they’ve poured into these models, datacenters and electricity.
Someone just got the AWS bill.
That’s got to be it. Cloud compute is expensive when you’re not being funded in Azure credits. Once the dust settles from the AI bubble bursting, most of the AI we’ll see will probably be specialized agents running small models locally.
I’m still running Qwen32b-coder on a Mac mini. Works great, a little slow, but fine.
I’m somewhat tech savvy; how do I run an LLM locally? Any suggestions? And how do I know my local data is safe?
Check out LM Studio (https://lmstudio.ai/), and you can pair it with the Continue extension for VS Code (https://docs.continue.dev/getting-started/overview).
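On the data-safety question: LM Studio runs an OpenAI-compatible server on localhost (port 1234 by default, I believe), so nothing has to leave your machine. Rough sketch of talking to it with the standard openai Python client; the model name here is just a placeholder for whatever you’ve actually loaded:

```python
# Minimal sketch: query LM Studio's local OpenAI-compatible server.
# Assumes you've loaded a model in LM Studio and started its local
# server (default http://localhost:1234/v1 -- adjust if you changed it).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",
    api_key="lm-studio",  # LM Studio ignores the key; any string works
)

response = client.chat.completions.create(
    model="qwen2.5-coder-32b-instruct",  # placeholder: use the model you loaded
    messages=[
        {"role": "user", "content": "Write a Python function that reverses a string."}
    ],
)

print(response.choices[0].message.content)
```

Since it all goes over localhost, you can verify with a network monitor that no traffic leaves the box if you’re paranoid.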
Sounds like chargeback territory.
Hopefully (?) this is the start of a trend, and people might begin to realize that all those products are not worth their price and that AI is an overhyped mess made to hook users before exploiting them…
The whole industry is projecting something like negative $200B over the coming years. They know it’s not worth the price.
Common People
Well shit, I’ve been on vacation, and I signed up with Cursor a month ago. Not allowed at work, but for side projects at home in an effort to “see what all the fuss is about”.
So far, the experience has been rock solid, but I assume when I get home I’ll be unpleasantly surprised.
Has anyone here had rate limiting hit them?
I’ve primarily used claude-4-sonnet in Cursor and was surprised to see a message telling me it would start costing extra above and beyond my subscription. This was prolly after 100 queries or so. However, switching to “auto” instead of a specific model continues to cost nothing, and it still uses claude-4-sonnet when it thinks it needs to. The main difference I’ve noticed is that it’s actually faster, because it’ll sometimes hit cheaper/dumber APIs to address simple code changes.
It’s a nice toy that does improve my productivity quite a bit and the $20/month is the right price for me, but I have no loyalty and will drop them without delay if it becomes unusable. That hasn’t happened yet.
“Prolly”
In the English language, specifically North American dialects, it’s a common colloquialism.
I mean yeah? I wasn’t counting in detail, it’s an estimate.
Previously you got 500 requests a month and then it’d start charging you, even on “auto.” So the current charging scheme seems to be encouraging auto use so they can use cheaper LLMs when they make sense (honestly a good thing).
I was questioning the use of the word “prolly”
Nah, you should find a new bone to pick.
it means “probably” 🤗