We don’t have that many other processors, though. If you look at the desktop, there is AMD, and there is Apple silicon, which is restricted to Apple products. And then there is nothing. If Intel goes under, AMD might become the next Intel. It’s time (for the EU) to invest heavily into RISC-V, the entire stack.
> If you look at the desktop, there is AMD and there is Apple silicon
You can get workstations with Ampere Altra CPUs that use an ARM ISA. They're not significant in the market; it's more of a server CPU put in a desktop for developers, but it provides a starting point from which you could cut down the core count and try to boost the clocks.
There is also the Qualcomm Snapdragon X Plus with some laptops on the market from mainstream brands already (Asus Zenbook A14, Lenovo ThinkPad T14s Gen 6, Dell Inspiron 5441). That conversely could probably scale up to a desktop design fairly quickly.
You’re right that we’re not there, but I don’t think we’re that far off either. If Intel keeled over there would be a race to fill the gap and they wouldn’t leave the market to AMD alone.
ARM is more oriented towards servers and mobile devices for now. Sure, we saw Apple demonstrate desktop use, but not much is there for desktops yet. RISC-V is far away, and Chinese CPUs are not competitive. "It's coming" doesn't help in the short term and is questionable in the mid term. 🤷‍♂️ Yes, alternatives will come eventually, but it takes a lot of time and resources.
ARM is also found in Apple devices, the Raspberry Pi, and the Orange Pi, but those are SBCs (except Apple). They could always be turned into normal laptops and desktops and such.
The only problem with ARM is that it's a closed ISA, like x64.
The only problem with both ARM and RISC-V is that they are RISC, not CISC like x64: better power consumption with lower clock speeds, bad for desktops, great for laptops and such.
Thanks for coming to my TED talk.
RISC is perfectly good for desktops, as demonstrated by Apple. Microcontroller chips are suitable for light desktop tasks, but they are nowhere near modern x64 CPUs. For now.
It doesn't really make much of a difference on modern CPUs, as instructions are broken down into RISC-like micro-operations even on CISC CPUs before being processed, to make pipelining more effective.
From what I remember, one of the problems with CISC is that it has variable-length instructions, and these are harder to fetch and decode ahead, since you have to analyze all instructions up to the current one, whereas with RISC you know exactly where each instruction is in memory/cache.
> one of the problems with CISC is that it has variable-length instructions
RISC systems also have variable-length instructions; they're just a bit stricter with the implementation, which alleviates a lot of the issues (ARM instructions are always either 16 or 32 bits, while RISC-V instructions are always a multiple of 16 bits and self-describing, similar to UTF-8).
Edit: Oh, and ARM further restricts instruction length based on a CPU flag, so you can’t mix and match at an instruction level. It’s always one or the other, or it’s invalid.
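The "self-describing, similar to UTF-8" part can be sketched concretely: in RISC-V, the low bits of an instruction's first 16-bit halfword tell you its total length, so a fetcher never has to scan backwards. This is a rough Python sketch of the length rule described in the RISC-V spec (the function name is mine; the 48/64-bit encodings are reserved/draft, not widely implemented):

```python
def rv_instr_length(first_halfword: int) -> int:
    """Length in bytes of a RISC-V instruction, given its first 16 bits.

    The low bits of the first halfword are self-describing, much like
    UTF-8 lead bytes: you can tell the length without any prior context.
    """
    if first_halfword & 0b11 != 0b11:
        return 2  # compressed (C extension) 16-bit instruction
    if (first_halfword >> 2) & 0b111 != 0b111:
        return 4  # standard 32-bit instruction
    if (first_halfword >> 5) & 0b1 == 0:
        return 6  # reserved 48-bit encoding
    if (first_halfword >> 6) & 0b1 == 0:
        return 8  # reserved 64-bit encoding
    raise ValueError("longer/reserved encoding")

# addi x0, x0, 0 (a 32-bit nop) encodes as 0x00000013; first halfword 0x0013
print(rv_instr_length(0x0013))  # -> 4
print(rv_instr_length(0x0001))  # c.nop, a 16-bit compressed instruction -> 2
```

The point is that length determination needs only the current halfword, so instruction boundaries can be found in parallel, unlike x86, where the decoder must work through prefixes and opcode bytes sequentially.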
I was thinking about Apple's M CPUs, which have fixed-length instructions and benefit from it. It was explained on AnandTech years ago; here is a brief paragraph on the topic. Sadly, the AnandTech article(s) aren't available anymore.
> Since this type of chip has a fixed instruction length, it becomes simple to load a large number of instructions and explore opportunities to execute operations in parallel. This is what’s called out-of-order execution, as explained by Anandtech in a highly technical analysis of the M1. Since complex CISC instructions can access memory before completing an operation, executing instructions in parallel becomes more difficult in contrast to the simpler RISC instructions.
Ahh, yep it turns out ARM actually removed Thumb support with their 64-bit transition, so their instruction length is fixed now, and Thumb never made it into the M* SoCs.
This isn’t completely true. Even a basic instruction like ADD has multiple implementations depending on the memory sources.
For example, if the memory operand is in RAM, then the ADD needs to be decoded to include a fetch before the actual addition. RISC doesn’t change that fact.
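That cracking step can be illustrated with a toy model. This is a minimal sketch, not any real CPU's decoder: the micro-op format, the `tmp0` register name, and the bracket syntax for memory operands are all invented for illustration. A CISC-style ADD with a memory source expands into a load micro-op followed by a register-register add, while a register-only ADD stays a single micro-op:

```python
def decode_to_uops(op: str, dst: str, src: str) -> list[tuple]:
    """Toy decoder: crack one CISC-style instruction into RISC-like
    micro-ops. Mnemonics and the micro-op tuple format are invented."""
    if src.startswith("[") and src.endswith("]"):  # memory operand
        addr = src[1:-1]
        return [
            ("LOAD", "tmp0", addr),    # fetch the operand from memory first
            (op, dst, dst, "tmp0"),    # then do the ALU operation on registers
        ]
    return [(op, dst, dst, src)]       # register operand: a single micro-op

# An x86-style `add eax, [rbx]` becomes two micro-ops:
print(decode_to_uops("ADD", "eax", "[rbx]"))
# A register-only `add eax, ecx` stays one:
print(decode_to_uops("ADD", "eax", "ecx"))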
Yes, but with RISC you know the exact position of each instruction in the cache, and how many instructions fit in the instruction cache or pipeline. Like you said, it doesn't help with the data cache.
Are you sure there’s a significant difference between RISC and CISC after instructions are decoded?
The assembly in RISC is just an abstraction of the machine code, as it also is in CISC. If the underlying CPU has the same capabilities then it doesn’t really matter what the assembly looks like?
Of course, the underlying CPUs aren’t the same and that’s the real point of differentiation.
ARM is coming. RISC-V is coming. Some Chinese brands have been seen, too.
Neither is commonly available in desktop form factors, and they usually require custom builds for each board to work.
And for many, x86 will remain an important architecture for a long time.
Yeah, if you build a RISC processor directly you can just save the die area needed for instruction decode.
This is the correct answer. Modern x86 (x64) is essentially a RISC core with a decoder that can decode a CISC ISA.
See my other reply
alr thanks for the info