rbanffy · 2 days ago
Don't blame the ISA - blame the silicon implementations AND the software with no architecture-specific optimisations.

RISC-V will get there, eventually.

I remember that ARM started as a speed demon with power-conscious design, was then surpassed on desktops by x86 and PPC and moved to embedded, where it shone by being very frugal with power, only to now be leaving the embedded space with implementations optimised for speed more than power.

newpavlov · 2 days ago
In some cases RISC-V ISA spec is definitely the one to blame:

1) https://github.com/llvm/llvm-project/issues/150263

2) https://github.com/llvm/llvm-project/issues/141488

Another example is the hard-coded 4 KiB page size, which effectively kneecaps the ISA compared against ARM.

weebull · 2 days ago
All of those things are solved with modern extensions. It's like comparing pre-MMX x86 code with modern x86. Misaligned loads and stores are Zicclsm, bit manipulation is Zb[abcs], atomic memory operations are made mandatory in Ziccamoa.

All of these extensions are mandatory in the RVA22 and RVA23 profiles and so will be implemented on any up to date RISC-V core. It's definitely worth setting your compiler target appropriately before making comparisons.

tosti · 2 days ago
Regarding misaligned reads, IIRC only x86 hides non-aligned memory access. It's still slower than aligned reads. Other processors just fault, so it would make sense to do the same on riscv.

The problem is decades of software being written on a chip that from the outside appears not to care.

adastra22 · 2 days ago
Also, the bit manipulation extension wasn't part of the core ISA. So things like bit rotation are slow for no good reason, if you want portable code. Why? Who knows.
torginus · 2 days ago
Unaligned load/store is a horrible feature to implement.

Page size can be easily extended down the line without breaking changes.

GoblinSlayer · a day ago
> 1) https://github.com/llvm/llvm-project/issues/150263

Huh? They have no idea what they are doing. If data is unaligned, the solution is memcpy, not compiler optimizations. Also, their hack of 17 loads is a buffer overflow. Also not an ISA spec problem.

direwolf20 · 2 days ago
The first one is common across many architectures, including ARM, and the second is just LLVM developers not understanding how cmpxchg works
fidotron · 2 days ago
> RISC-V will get there, eventually.

Not trolling: I legitimately don't see why this is assumed to be true. It is one of those things that is true only once it has been achieved. Otherwise we would be able to create super high performance Sparc or SuperH processors, and we don't.

As you note, Arm once was fast, then slow, then fast. RISC-V has never actually been fast. It has enabled surprisingly good implementations by small numbers of people, but competing at the high end (mobile, desktop or server) it is not.

lizknope · 2 days ago
I think the bigger question is does RISC-V need to be fast? Who wants to make it fast?

I'm a chip designer and I see people using RISC-V as small processor cores for things like PCIe link training or various bookkeeping tasks. These don't need to be fast; they need to be small and low power, which means they will be relatively slow.

Most people on tech review sites only care about desktop / laptop / server performance. They may know about some of the ARM Cortex A series CPUs that have MMUs and can run desktop or smartphone Linux versions.

They generally don't care about the ARM Cortex M or R versions for embedded and real time use. Those are the areas where you don't need high performance and where RISC-V is already replacing ARM.

EDIT:

I'll add that there are companies that COULD make a fast RISC-V implementation.

Intel, AMD, Apple, Qualcomm, or Nvidia could redirect their existing teams to design a high performance RISC-V CPU. But why should they? They are heavily invested in their existing x86 and ARM CPU lines. Amazon and Google are using licensed ARM cores in their server CPUs.

What is the incentive for any of them to make a high performance RISC-V CPU? The only reason I can think of is that Softbank keeps raising ARM licensing costs and it gets high enough that it is more profitable to hire a team and design your own RISC-V CPU.

rwmj · 2 days ago
RISC-V doesn't have the pitfalls of Sparc (register windows, branch delay slots), largely because we learned from them. It's in fact a very "boring" architecture. No one expects it'll be hard to optimize for. There are at least 2 designs that have taped out in small runs and have high-end performance.
Findecanor · 2 days ago
Because today, getting a fast CPU out isn't as much an engineering issue as it is about getting the investment for a world-class fab.

The most promising RISC-V companies today have not set out to compete directly with Intel, AMD, Apple or Samsung, but are targeting a niche such as AI, HPC and/or high-end embedded such as automotive.

And you can bet that Qualcomm has RISC-V designs in-house, but only making ARM chips right now because ARM is where the market for smartphone and desktop SoCs is. Once Google starts allowing RVA23 on Android / ChromeOS, the flood gates will open.

gt0 · 2 days ago
I don't think anybody suggests Oracle couldn't make faster SPARC processors, it's just that development of SPARC ended almost 10 years ago. At the time SPARC was abandoned, it was very competitive.
snvzz · 2 days ago
Fast, RVA23-compatible microarchitectures already exist. Everything high performance seems to be based on RVA23, which is the current application profile and comparable to ARMv9 and x86-64v4.

However, it takes time from microarchitecture to chips, and from chips to products on shelves.

The very first RVA23-compatible chip to show up will likely be the SpacemiT K3 SoC, due in development boards in April (i.e. next month).

More of them, and more performant, are coming this summer, such as a development board with the Tenstorrent Ascalon CPU in the form of the Atlantis SoC, which was taped out recently.

It is even possible such designs will show up in products aimed at the general public within the present year.

bsder · 2 days ago
> Don't blame the ISA - blame the silicon implementations

That's true, but tautological.

The issue is that the RISC-V core is the easy part of the problem, and nobody seems to even be able to generate a chip that gets that right without weirdness and quirks.

The more fundamental technical problem is that things like the cache organization and DDR interface and PCI interface and ... cannot just be synthesized. They require analog/RF VLSI designers doing things like clock forwarding and signal integrity analysis. If you get them wrong, your performance tanks, and, so far, everybody has gotten them wrong in various ways.

The business problem is the fact that everybody wants to be the "performance" RISC-V vendor, but nobody wants to be the "embedded" RISC-V vendor. This is a problem because practically anybody who is willing to cough up for a "performance" processor is almost completely insensitive to any cost premium that ARM demands. The embedded space is hugely sensitive to cost, but nobody is willing to step into it because that requires that you do icky ecosystem things like marketing, software, debugging tools, inventory distribution, etc.

This leads to the US business problem which is the fact that everybody wants to be an IP vendor and nobody wants to ship a damn chip. Consequently, if I want actual RISC-V hardware, I'm stuck dealing with Chinese vendors of various levels of dodginess.

Dwedit · 2 days ago
There's the ARM video from LowSpecGamer, where they talk about how they forgot to connect power to the chip, and it was still executing code anyway. According to Steve Furber, the chip was accidentally being powered from the protection diodes alone. So ARM was incredibly power efficient from the very beginning.
api · 2 days ago
A pattern I've noticed for a very long time:

A lot of times the path to the highest performing CPU seems to be to optimize for power first, then speed, then repeat. That's because power and heat are a major design constraint that limits speed.

I first noticed this way back with the Pentium 4 "Netburst" architecture vs. the smaller x86 cores that became the ancestor of the Core architecture. Intel eventually ran into a wall with P4 and then branched high performance cores off those lower-power ones and that's what gave us the venerable Core architecture that made Intel the dominant CPU maker for over a decade.

ARM's history is another example.

cpgxiii · 2 days ago
I think the story is a bit more complicated. Core succeeded precisely because Intel had both the low-power experience with Pentium-M and the high-power experience with Netburst. The P4 architecture told them a lot about what was and wasn't viable and at what complexity. When you look at the successor generations from Core, what you see are a lot of more complex P4-like features being re-added, but with the benefits of improved microarch and fab processes. Obviously we will never know, but I don't think you would get to Haswell or Skylake in the form they were without the learning experience of the P4.

In comparison, I think Arm is actually a very strong cautionary tale that focusing on power will not get you to performance. Arm processors remained pretty poor performance until designers from other CPU families entirely (PowerPC and Intel) took it on at Apple and basically dragged Arm to the performance level they are today.

userbinator · 2 days ago
NetBurst was supposed to be the application of RISC principles to x86 taken to its extreme (ultra-long pipelines to reduce clock-to-clock delay, highest clock speed possible --- basically reducing work-per-clock and hoping that reduces complexity enough to increase clock speed to compensate.) The ALU was 16 bits, "double pumped" with the carry split between the two, which led to 32-bit ALU operations that don't carry between the lower and upper halves actually finishing a clock cycle faster than those with a carry.

https://stackoverflow.com/questions/45066299/was-there-a-p4-...

cptskippy · 2 days ago
Core evolved from the Banias (Centrino) CPU core, which was based on P3, not P4. Banias used the front-side bus from P4 but not the cores.

Banias was hyper optimized for power, the mantra was to get done quickly and go to sleep to save power. Somewhere along the line someone said "hey what happens if we don't go to sleep?" and Core was born.

jnovek · 2 days ago
I don’t have a micro architecture background so I apologize if this is obvious — What do power and speed mean in this context?
jauntywundrkind · 2 days ago
Parallels to code design, where optimizing data or code size can end up having fantastic performance benefits (sometimes).
rwmj · 2 days ago
Marcin is working with us on RISC-V enablement for Fedora and RHEL, he's well aware of the problem with current implementations. We're hopeful that this'll be pretty much resolved by the end of the year.
LeFantome · 2 days ago
If he expects it to be resolved by the end of the year (and I agree it likely will be), why is he writing a post like this?

Is this because Fedora 44 is going to beta?

cogman10 · 2 days ago
> AND the software with no architecture-specific optimisations

The optimizations that'd be applied to ARM and MIPS would be equally applicable to RISC-V. I do not believe this is a lack of software optimization issue.

We are well past the days where hand written assembly gives much benefit, and modern compilers like gcc and llvm do nearly identical work right up until it comes to instruction emissions (including determining where SIMD instructions could be placed).

Unless these chips have very very weird performance characteristics (like the weirdness around x86's lea instruction being used for arithmetic) there's just not going to be a lot of missed heuristics.

hrmtst93837 · 2 days ago
One thing compilers still struggle with is exploiting weird microarchitectural quirks or timing behaviors that aren't obvious from the ISA spec, especially with memory, cache and pipeline tuning. If a new RISC-V core doesn't expose the same prefetching tricks or has odd branch prediction you won't get parity just by porting the same backend. If you want peak numbers sometimes you do still need to tune libraries or even sprinkle in a bit of inline asm despite all the "let the compiler handle it" dogma.
bobmcnamara · 2 days ago
> The optimizations that'd be applied to ARM and MIPS would be equally applicable to RISC-V.

There's no carry bit, and no widening multiply (or MAC).

dmitrygr · 2 days ago
If you care to read the article, it indeed does not blame the architecture but the available silicon implementations.
rbanffy · 2 days ago
I did read it. A Banana Pi is not the fastest developer platform. The title is misleading.

BTW, it's quite impressive how the s390x is so fast per core compared to the others. I mean, of course it's fast - we all knew that.

And don't let IBM legal see this; it could be considered a published benchmark, and they are very shy about s390x performance numbers.

topspin · 2 days ago
I keep checking in on Tenstorrent every few months thinking Keller is going to rock our world... losing hope.

At this point the most likely place for truly competitive RISC-V to appear is China.

tromp · 2 days ago
But they didn't reflect that in a title like "current RISC-V silicon Is Sloooow" ...
spiderice · 2 days ago
Then how do you justify the title?
izacus · a day ago
If you make a spec that the wider industry cannot effectively implement into quality products, it's the spec that's wrong. And that's true for anything - whether it's RISC-V, ipv6, Matter, USB-C and so on.

That's what makes writing specs hard - you need people who understand implementation challenges at the table, not dreaming architects and academics.

crest · 2 days ago
RISC-V lacks a bunch of really useful relatively easy to implement instructions and most extensions are truly optional so you can't rely on them. That's the problem if you let a bunch of academics turn your ISA into a paper mill.

In theory you can spend a lot of effort to make a flawed ISA perform, but it will be neither easy nor pretty e.g. real world Linux distros can't distribute optimised packages for every uarch from dual-issue in-order RV64GC to 8-wide OoO RV64 with all the bells and whistles. Only in (deeply) embedded systems can you retarget the toolchain and optimise for each damn architecture subset you encounter.

userbinator · 2 days ago
ARM was never a "speed demon"; it started out as a low power small-area core and clearly had more complexity and thought put into it than MIPS or RISC-V.

Over a decade ago: https://news.ycombinator.com/item?id=8235120

> RISC-V will get there, eventually.

Strong doubt. Those of us who were around in the 90s might remember how much hype there was with MIPS.

rbanffy · 2 days ago
I don't think you remember, but the first Archimedes smoked the just-launched Compaq 386s with a dedicated 387 coprocessor.

It was not designed to be one, but it ended up being surprisingly fast.

kashyapc · 2 days ago
A couple of corrections (the blog-post is by a colleague, but I'm not speaking for Marcin! :))

First, we do have a recent 'binutils' build[1] with test-suites completing in 67 minutes (on a Milk-V "Megrez") in the Fedora RISC-V build system. This is a non-trivial improvement over the 143-minute build time reported in the blog.

Second, the current fastest development machine is not the Banana Pi BPI-F3. Considering what is reasonably accessible today, it is the SiFive "HiFive P550" (P550 for short) and the upcoming UltraRISC "DP1000" (we have access to an eval board). And as noted elsewhere in this thread, in "several months" some RVA23-based machines should be available. (RVA23 == the latest ISA spec).

FWIW, our FOSDEM talk from earlier this year, "Fedora on RISC-V: state of the arch"[2], gives an overview of the hardware situation. It also has a couple of related poor man's benchmarks (an 'xz' compression test and a 'binutils' build without the test-suite on the above two boards -- that's what I could manage with the time I had).

Edit: Marcin's RISC-V test was done on the StarFive "VisionFive 2". This small board has its strengths (upstreamed drivers), but it is not known for its speed!

[1] https://riscv-koji.fedoraproject.org/koji/taskinfo?taskID=91...

[2] Slides: https://fosdem.org/2026/events/attachments/SQGLW7-fedora-on-...

brucehoult · a day ago
> VisionFive 2

It's a good solid reliable board, but over three years old at this point (in a fast-moving industry) and the maximum 8 GB RAM is quite challenging for some builds.

Binutils is fine, but recent versions of gcc want to link four binaries at the same time, with each link using 4 GB RAM. I've found this fails on my 16 GB P550 Megrez with swap disabled, but works quickly and uses maybe 50 or 100 MB of swap if I enable it.

On the VisionFive 2 you'd need to use `-j1` (or `-j2` with swap enabled) which will nearly double or quadruple the build time.

Or use a better linker than `ld`.

At least the LLVM build system lets you set the number of parallel link jobs separately to the number of C/C++ jobs.

kashyapc · 4 hours ago
> I've found this fails on my 16 GB P550 Megrez with swap disabled but works quickly and uses maybe 50 or 100 MB of swap if I enable it.

I see, I don't have a Megrez at my desk, only in the build system. I only have P550 as my "workhorse".

PS: I made a typo above - the P550 I was referring to was the SiFive "HiFive Premier P550". But based on your HN profile text, you must've guessed as much :)

kashyapc · 2 days ago
Arm had 40 years to be where it is today. RISC-V is 15 years old. Some more patience is warranted.

Assuming they will keep their word, later this year Tenstorrent is supposed to ship their RVA23-based server development platform[1]. They announced[2] it at the last year's NA RISC-V Summit. Let's see.

The ball is in the court of hardware vendors to cook some high-end silicon.

[1] https://tenstorrent.com/ip/risc-v-cpu

[2] https://static.sched.com/hosted_files/riscvsummit2025/e2/Unl...

userbinator · 2 days ago
MIPS, which RISC-V is closely modeled after, is also roughly 4 decades old and was massively hyped in the early 90s as well.
kashyapc · a day ago
Great point; I only vaguely know about the MIPS legacy. As you imply, don't listen to the "hype-sters" but pay attention to what silicon is being produced.
saati · a day ago
AArch64 is just 15 years old, and shares pretty much nothing with 32-bit ARM apart from the name.
Levitating · 2 days ago
This is why felix has been building the risc-v archlinux repositories[1] using the Milk-V Pioneer.

I think the ban on SOPHGO is partly to blame for the slow development.[2] They had the most performant and interesting SoCs. I had a bunch of pre-orders for the Milk-V Oasis before it was cancelled. It was supposed to come out a while ago, using the SG2380, supposedly much more performant than the Milk-V Titan mentioned in the article (which still isn't out).

It was also SOPHGO's SoCs that powered the crazy cheap/performant/versatile Milk-V Duo boards, which have the ability to switch between ARM and RISC-V architectures.

[1]: https://archriscv.felixc.at/

[2]: https://www.tomshardware.com/tech-industry/artificial-intell...

15155 · 2 days ago
Can you articulate why you think this ban impacted anything and what you think the ban applies to?
Levitating · 2 days ago
I won't pretend to understand the geo-politics or rulings.

What I do know is that since the ban, all ongoing products featuring SOPHGO SoCs were cancelled, and I haven't seen any products featuring them since. The SOPHGO forums have also closed down.

The Milk-V Oasis would have had 16 cores (SG2380 w/ SiFive P670); it was replaced by the Milk-V Megrez with just 4 cores (SiFive P550) for around the same price. The new Milk-V Titan has only 8. We're slowly catching up, but the performance is now one or two years behind what it could've been.

The SG2380 would've been the first desktop-ready RISC-V SoC at an affordable price. I think it's still the only SoC made that used the SiFive P670 core.

LeFantome · 7 hours ago
I am going to make a wild guess here.

The reason he does not tell us what hardware he is using is that none of these times are for a single system building binutils. I think he is using a mix of systems and then doing some kind of averaging to tell us what an individual system would look like.

For some kind of hardware, all the systems they have would be the fastest that architecture offers, like with i686 I expect. While others are going to be a mix of old and new, like x86-64.

For RISC-V, the latest gen hardware is about as fast as the numbers he quotes for Aarch64. To be clear, the fastest ARM is still faster than the fastest RISC-V. But the numbers he quotes make no sense for something like a SpacemiT K3.

But if you are using RISC-V systems from two years ago in your build cluster, they will as he says be "Sloooow". But that shows how fast RISC-V is improving. It makes no sense to publish this article now.

At least, he should reveal what hardware he is talking about. His chart makes no sense (for most of the platforms).

echoangle · 2 days ago
Is there a simple explanation why RISC-V software has to be built on a RISC-V system? Why is it so hard for compilers to compile for a different architecture? The general structure of the target architecture lives inside the compiler code and isn’t generated by introspecting the current system, right?
haerwu · a day ago
Cross-compiling an entire distribution requires the distribution to be prepared for it. That is not a problem when you use OpenEmbedded/Yocto or Buildroot to build it, but it gets complicated with distributions that are built natively.

Fedora does not have a way to cross-compile packages. The only cross-compiler available in the repositories is a bare-metal one. You can use it to build firmware (EDK2, U-Boot) or the Linux kernel, but nothing more.

Then there is the other problem: testing. What is the point of a successful build if it does not work on target systems? Part of each Fedora build is running the test suite (if the packaged software has one). You should not run it in QEMU, so each cross-build would need to connect to a target system, upload build artifacts and run tests. Overcomplicated.

Native builds let you test whether the distribution is ready for any kind of use. I have used an AArch64 desktop daily for almost a year now. It is not a "4-core/16 GB RAM SBC" but rather the "server-as-a-desktop" kind (80 cores, 128 GB RAM, plenty of PCI Express lanes). I build software on it, write blog posts, watch movies etc., and can emulate other Fedora architectures to do test builds.

A hardware architecture that is slow today can be fast in the future. In 2013, building Qt4 for Fedora/AArch64 took days (we used software emulators). Now it takes 18 minutes.

boredatoms · 2 days ago
Under-specified build dependencies that use libraries/config from your host OS rather than the target system.

You can solve this on a per language basis, but the C/C++ ecosystem is messy. So people use VMs or real hardware of the target arch to not have to think about it

flowerthoughts · 2 days ago
Old compilers tended to make which backends were included a compile-time switch, probably because backends were "huge", so most were left out. (The insn lookup table in GCC took ages to generate and compile.) And of course all development environments running on Windows assumed x86 was the only architecture.

With LLVM existing, cross-compiling is not a problem anymore, but it means you can't run tests without an emulator. So it might just be easier to do it all on the target machine.

anarazel · 2 days ago
Cross building is possible, but it's rather useful to be able to test the software you just built... And often enough, tests take more resources than the build.
AnssiH · 2 days ago
The cross-compiler part itself is easy, but getting all the build scripting of tens of thousands of Fedora packages to work perfectly for cross-compiling would be a lot of work.

There are lots of small issues (libraries or headers not being found, wrong libraries or headers being found, build scripts trying to run the binaries they just built, wrong compiler being used, wrong flags being used, etc.) when trying to cross-compile arbitrary software.

All fixable (cross-compiling entire distributions is a thing), but a lot of work and an extra maintenance burden.

aa-jv · a day ago
Native builds are always a safer/more reliable path to take than cross-compiling, which usually requires solid native builds to be operational before the cross environment can be reliably trusted.

It's a bootstrapping chain of priority. Once a native build regime is set in stone, cross-compiling harnesses can be built to exploit the beachhead.

I have saved many a failing project's budget and deadline by just putting the compiler on board and obviating the hacky scaffolding usually required for reliable cross-compiling in the beginning stages of a new architecture project, and I suspect this is the case here too.

LeFantome · 7 hours ago
This article is being discussed on another forum, where kernel build times are being compared for different RISC-V hardware. The conclusion there was that, if a Banana Pi BPI-F3 takes 143 minutes to compile binutils, the SpacemiT K3 will build it in 36 minutes using its X100 cores (half its cores).

That is the same as the time he quotes for the unidentified Aarch64 hardware.

Which makes this a pretty funny article.

I do not have a K3 to confirm. I am hoping to pick one up when it becomes more widely available next month.

lifis · 2 days ago
Or they could fix cross compilation and then compile it on a normal x86_64 server
mort96 · 2 days ago
Fixing cross compilation is a huge undertaking. So much software needs to be patched to be properly cross-compilable.