Happy Linux user over here. Free, open-source AI models are becoming much more powerful, and things like “Apple Intelligence” and “Copilot” will be looked back on the way we now look back on Netscape.
Getting a free older computer from my work soon because it’s too old to “upgrade” to Windows 11, so I’ll be turning it into a Linux machine. Pretty dang psyched, mostly for all the free software!
Gotta be honest, though: a locally hosted 70B model with basic RAG functionality isn’t exactly playing in the same league as the market leaders, which can be bigger by two to three orders of magnitude. And a model that size is already around the limit of what a beefy gaming PC can run at reasonable speed. We’re unlikely to ever beat the big players on quality with local models.
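For what it’s worth, “basic RAG functionality” on a local box usually just means: embed your documents, pull the closest few for a question, and stuff them into the prompt. Here’s a rough sketch of that, assuming llama-cpp-python and sentence-transformers; the model path, document list, and question are made-up placeholders, not a recommendation of any particular setup.

```python
# Minimal local-RAG sketch (assumptions: llama-cpp-python + sentence-transformers
# installed, a quantized GGUF model on disk; paths and docs are placeholders).
import numpy as np
from sentence_transformers import SentenceTransformer
from llama_cpp import Llama

docs = [
    "Note on upgrading old office PCs to Linux.",
    "Summary of local LLM quantization trade-offs.",
    "Household wiki page about the NAS backup schedule.",
]

# Embed the documents once; normalized vectors make cosine similarity a dot product.
embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = embedder.encode(docs, normalize_embeddings=True)

# Placeholder path to whatever quantized model actually fits on your hardware.
llm = Llama(model_path="local-70b-q4.gguf", n_ctx=4096)

def answer(question: str, k: int = 2) -> str:
    # Retrieve: rank documents by similarity to the question, keep the top k.
    q_vec = embedder.encode([question], normalize_embeddings=True)[0]
    top = np.argsort(doc_vecs @ q_vec)[::-1][:k]
    context = "\n".join(docs[i] for i in top)
    # Generate: prepend the retrieved snippets to the prompt.
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    out = llm(prompt, max_tokens=256)
    return out["choices"][0]["text"]

print(answer("When do the NAS backups run?"))
```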
What might happen is that the market collapses, the big players all go bankrupt, further LLM development ceases, and locally hosted Qwen3-80B will be the pinnacle of available text generation for the next thirty years.