I’d rather pay for Chinese GPUs than for cloud gaming. By focusing almost solely on Nvidia, the article acts as though nothing else exists, especially in China, where Moore Threads GPUs are starting to get reasonable performance for gaming. I don’t think Europe can produce something comparable as long as it stays neoliberal, but some weird stuff could happen with RISC-V.
Fact is, Nvidia has the vast majority of market share: https://www.techpowerup.com/337775/nvidia-grabs-market-share-amd-loses-ground-and-intel-disappears-in-latest-dgpu-update
It’s inexplicable to me. Why are folks ignoring Battlemage and AMD 9000 series when they’re so good?
Alas, for whatever reason they aren’t even picking existing alternatives to Nvidia, so Nvidia suddenly becoming unavailable would be a huge event.
For accelerated rendering and the like, CUDA is the standard, and because of that, so is Nvidia. It’s like that in a lot of other niche areas too. Accelerated encoding? Nvidia, and CUDA again. Yes, AMD and Intel can generally do all of that these days as well, but that’s a much more recent development. If all you want to do is game, sure, that’s not a big issue.
But if you want to do anything more than gaming, say video editing and rendering timelines, Nvidia has been the standard, with a lot of inertia behind it. To give an example of how badly AMD missed the boat: I have been using accelerated rendering on my GT 750 in Blender for a decade now. That card came out in early 2014; the first AMD card capable of the same thing came out a month before the end of 2020. Nearly a seven-year difference! I’m looking at a recent Intel Arc/Battlemage card or a 6xxx-series AMD at the moment, because I run BSD/Linux and Nvidia has traditionally been a necessary PITA there. But with both of them now supporting my workflows, Nvidia is much less necessary. Others will eventually leave too, as long as AMD and Intel don’t fuck it up for themselves.
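To make that concrete: Cycles now exposes the Nvidia, AMD, and Intel backends through the same preferences API, which is why the 6xxx/Arc cards finally work for this. A rough sketch (Blender 3.x+ Python console; treat the exact property names as approximate, from memory):

import bpy

# Pick the Cycles compute backend: 'CUDA'/'OPTIX' for Nvidia, 'HIP' for AMD 6xxx+, 'ONEAPI' for Intel Arc
prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "HIP"  # swap for "CUDA" or "ONEAPI" depending on the card

# Refresh the device list and enable everything the chosen backend can see
prefs.get_devices()
for dev in prefs.devices:
    dev.use = True

# Tell the current scene to actually render on the GPU
bpy.context.scene.cycles.device = "GPU"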
Yeah, I mean, you are preaching to the choir there. I picked up a used 3090 because ROCm on the 7900 was in such a poor state.
That being said, much of what you describe is just software obstinacy. AMD (for example) has had hardware encoding since early 2012, with the 7970. Intel Quick Sync has long been standard on laptops. It’s just that a few stupid proprietary bits of software never bothered to support them.
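To illustrate: all three vendors’ encoders have been sitting behind ffmpeg for years, and picking one is basically a single flag. A minimal sketch, assuming an ffmpeg build with the relevant encoders compiled in (the VAAPI device path is a typical Linux default, adjust as needed):

import subprocess

# The hardware has been there for years; it's mostly a question of which encoder the software asks for.
ENCODERS = {
    "nvidia": "h264_nvenc",   # NVENC, the path everything already supports
    "amd":    "h264_vaapi",   # VAAPI on Linux (AMF on Windows), back to GCN-era cards
    "intel":  "h264_qsv",     # Quick Sync, on most Intel iGPUs since Sandy Bridge
}

def encode(src: str, dst: str, vendor: str) -> None:
    cmd = ["ffmpeg", "-y", "-i", src]
    if vendor == "amd":
        # VAAPI wants the render device and an upload filter before the encoder sees frames
        cmd = ["ffmpeg", "-y", "-vaapi_device", "/dev/dri/renderD128", "-i", src,
               "-vf", "format=nv12,hwupload"]
    cmd += ["-c:v", ENCODERS[vendor], dst]
    subprocess.run(cmd, check=True)

encode("input.mkv", "output.mp4", "amd")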
CUDA is indeed extremely entrenched in some areas, like anything involving PyTorch or Blender’s engines. But there’s no reason (say) Plex shouldn’t support AMD, or older editing programs that use OpenGL anyway.
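And even on the PyTorch side the lock-in is partly habit: the ROCm build reuses the torch.cuda namespace, so a lot of “CUDA-only” scripts run unchanged on an AMD card. A quick sketch of how that looks in practice:

import torch

# On the ROCm build of PyTorch the "cuda" device name is an alias for the AMD GPU,
# so the same script runs on either vendor; torch.version.hip is set only on ROCm builds.
if torch.cuda.is_available():
    backend = "ROCm/HIP" if getattr(torch.version, "hip", None) else "CUDA"
    print(f"accelerator: {torch.cuda.get_device_name(0)} via {backend}")
    x = torch.randn(4096, 4096, device="cuda")
    y = x @ x  # runs on whichever GPU the build was compiled for
else:
    print("no supported GPU found; falling back to CPU")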
Yes, the software will get there long before many people’s hearts and minds. As you said, it already has in many cases. The inertia Nvidia gained by being early is why they’re so dominant now. But I think Nvidia’s focus on crypto and now data-center AI is set to hurt them long term. Only time will tell, and admittedly they’re swimming in money at the moment. But I’m getting out now.
CUDA is actually pretty cool, and it was especially so in the early days when there was nothing like it. And Intel’s and AMD’s attempts at alternatives have been as mixed as their corporate dysfunction.
And Nvidia has long had a focus on other spaces, like VR, AR, dataset generation, robotics, “virtual worlds” and such. If every single LLM thing disappeared overnight in a puff of smoke, they’d be fine; a lot of their efforts would transition to other spaces.
Not that I’m an apologist for them being total jerks, but I don’t want to act like CUDA isn’t useful, either.
Because the 7900 XTX, AMD’s 2022 flagship, has better raw performance than the 9000 series, and I’m pissed off they didn’t try to one-up themselves in the high-end market. I’m not buying a new Nvidia card, but I’m not buying a 9000 series either, because it feels like I’m paying for a sub-par GPU compared to what they’re capable of.
Well, same for me TBH. I’d buy a huge Battlemage card before you could blink, but the current cards aren’t really faster than an ancient 3090.
But most people I see online want something more affordable, right in the 9000 series or Arc range, not a 384-bit GPU.