brucehoult · 2 days ago
Out of interest I tried running my Primes benchmark [1] on the x86_64 and x86 Alpine images and the riscv64 Buildroot image, all in Chrome on an M1 Mac Mini. All timings are from a 2nd run, so that all needed code is already cached locally.

x86_64:

    localhost:~# time gcc -O primes.c -o primes
    real    0m 3.18s
    user    0m 1.30s
    sys     0m 1.47s
    localhost:~# time ./primes
    Starting run
    3713160 primes found in 456995 ms
    245 bytes of code in countPrimes()
    real    7m 37.97s
    user    7m 36.98s
    sys     0m 0.00s
    localhost:~# uname -a
    Linux localhost 6.19.3 #17 PREEMPT_DYNAMIC Mon Mar  9 17:12:35 CET 2026 x86_64 Linux
x86 (i.e. 32 bit):

    localhost:~# time gcc -O primes.c -o primes
    real    0m 2.08s
    user    0m 1.43s
    sys     0m 0.64s
    localhost:~# time ./primes
    Starting run
    3713160 primes found in 348424 ms
    301 bytes of code in countPrimes()
    real    5m 48.46s
    user    5m 37.55s
    sys     0m 10.86s
    localhost:~# uname -a
    Linux localhost 4.12.0-rc6-g48ec1f0-dirty #21 Fri Aug 4 21:02:28 CEST 2017 i586 Linux

riscv64:

    [root@localhost ~]# time gcc -O primes.c -o primes
    real    0m 2.08s
    user    0m 1.13s
    sys     0m 0.93s
    [root@localhost ~]# time ./primes
    Starting run
    3713160 primes found in 180893 ms
    216 bytes of code in countPrimes()
    real    3m 0.90s
    user    3m 0.89s
    sys     0m 0.00s
    [root@localhost ~]# uname -a
    Linux localhost 4.15.0-00049-ga3b1e7a-dirty #11 Thu Nov 8 20:30:26 CET 2018 riscv64 GNU/Linux

Conclusion: as also seen in QEMU (likewise started by Bellard!), RISC-V is a *lot* easier to emulate than x86. If you're building code specifically to run in emulation, use RISC-V: it builds faster, produces smaller code, and runs faster.

Note: quite different gcc versions, with x86_64 being 15.2.0, x86 9.3.0, and riscv64 7.3.0.

[1] http://hoult.org/primes.txt

dmitrygr · 2 days ago
MIPS (the arch of which RISC-V is mostly a copy) is even easier to emulate: unlike RV, it does not scatter immediate bits all over the instruction word, which makes it easier for an emulator to extract immediates. If you need emulated perf, MIPS is the easiest of all.
brucehoult · 2 days ago
That's a very small effect in the overall decoding of an instruction even in a pure interpretive emulator, and undetectable in a JIT.

Also MIPS code is much larger.

saagarjha · 2 days ago
> If you're building code specifically to run in emulation, use RISC-V: builds faster, smaller code, runs faster.

I don't really think this bears out in practice. RISC-V is easy to emulate, but that does not make it fast to emulate. Emulation performance is largely determined by other factors, in which RISC-V has no unique advantage.

lxgr · 2 days ago
Do you have an explanation for GP's benchmark results then?
camel-cdr · 2 days ago
x86 is a lot easier to JIT to Arm or RISC-V though, because it has fewer registers.
vexnull · 2 days ago
Interesting to see the gcc version gap between the targets. The x86_64 image shipping gcc 15.2.0 vs 7.3.0 on riscv64 makes the performance comparison less apples-to-apples than it looks - newer gcc versions have significantly better optimization passes, especially for register allocation.
brucehoult · 2 days ago
The RISC-V one has just never been touched since it was created in 2018.

> newer gcc versions have significantly better optimization passes

So what you're saying is that with a modern compiler RISC-V would win by even more?

TBH I doubt much has changed with register allocation on register-rich RISC ISAs since 2018. On i386, yeah, quite possible.


maxloh · 3 days ago
Unfortunately, he didn't attach the source code for the 64-bit x86 emulation layer, or the config used to compile the hosted image.

For a more open-source version, check out container2wasm (which supports x86_64, riscv64, and AArch64 architectures): https://github.com/container2wasm/container2wasm

zamadatix · 3 days ago
https://github.com/copy/v86 might be a more 1:1 fully open sourced alternative.
maxloh · 3 days ago
Not really. x86_64 is not supported yet: https://github.com/copy/v86/issues/133
zoobab · 2 days ago
"he didn't attach the source code for the 64-bit x86 emulation layer"

It's not open source? If that's the case, it should be in his FAQ.

simonw · 3 days ago
The thing I most want to use this (or some other WASM Linux engine) for is running a coding agent against a virtual operating system directly in my browser.

Claude Code / Codex CLI / etc are all great because they know how to drive Bash and other Linux tools.

The browser is probably the best sandbox we have. Being able to run an agent loop against a WebAssembly Linux would be a very cool trick.

I had a play with v86 a few months ago but didn't quite get to the point where I hooked up the agent to it - here's my WIP: https://tools.simonwillison.net/v86 - it has a text input you can use to send commands to the Linux machine, which is pretty much what you'd need to wire in an agent too.

In that demo try running "cat test.lua" and then "lua test.lua".

the_mitsuhiko · 2 days ago
> The thing I most want to use this (or some other WASM Linux engine) for is running a coding agent against a virtual operating system directly in my browser.

That exists: https://github.com/container2wasm/container2wasm

Unfortunately I found the performance to be enough of an issue that I did not look much further into it.

stingraycharles · 2 days ago
Did anyone expect anything different though, when running a full-blown OS in JavaScript?
d_philla · 2 days ago
Check out Jeff Lindsay's Apptron (https://github.com/tractordev/apptron), comes very close to this, and is some great tech all on its own.
progrium · 2 days ago
It's getting there. Among other things, it's probably the quickest way to author a Linux environment to embed on the web: https://www.youtube.com/watch?v=aGOHvWArOOE

Apptron uses v86 because it's fast. I'd love for somebody to add 64-bit support to v86. However, Apptron is not tied to v86: we could add Bochs like c2w, or even JSLinux, for 64-bit; I just don't think it would be fast enough to be useful for most.

Apptron is built on Wanix, which is sort of like a Plan9-inspired ... micro hypervisor? Looking forward to a future where it ties different environments/OS's together. https://www.youtube.com/watch?v=kGBeT8lwbo0

apignotti · 2 days ago
We are working on exactly this: https://browserpod.io

For a full-stack demo see: https://vitedemo.browserpod.io/

To get an idea of our previous work: https://webvm.io

otterley · 2 days ago
How’s performance relative to bare metal or hardware virtualization?
andai · 2 days ago
I run agents as a separate Linux user. So they can blow up their own home directory, but not mine. I think that's what most people are actually trying to solve with sandboxing.

(I assume this works on Macs too, both being Unixes, roughly speaking :)

johnhenry · 2 days ago
Are you describing bolt.new? (Unfortunately, it looks like their open source project is lagging behind https://github.com/stackblitz-labs/bolt.diy)
zitterbewegung · 2 days ago
While this may be a better sandbox, having a separate computer dedicated to the task still seems like a better solution, and you will get better performance.

Besides, prompt injection and simpler exploits should be addressed before building a virtual computer in a browser, and if you are simulating a whole computer you take a huge performance hit as another trade-off.

On the other hand using the browser sandbox that also offers a UI / UX that the foundation models have in their apps would ease their own development time and be an easy win for them.

repstosb · 2 days ago
> The thing I most want to use this (or some other WASM Linux engine) for is running a coding agent against a virtual operating system directly in my browser.

Well, there it is, the dumbest thing I'll read on the internet all week.

Most of the engineering in Linux revolves around efficiently managing hardware interfaces to build up higher-level primitives, upon which your browser builds even higher-level primitives, that you want to use to simulate an x86 and attached devices, so you can start the process again? Somewhere (everywhere), hardware engineers are weeping. I'll bet you can't name a single advantage such a system would have over cloud hosting or a local Docker instance.

Even worse, you want this so your cloud-hosted imaginary friend can boil a medium-sized pond while taking the joyful bits of software development away from you, all for the enrichment of some of the most ethically-challenged members of the human race, and the fawning investors who keep tossing other people's capital at them? Our species has perhaps jumped the shark.

thepasch · 2 days ago
> while taking the joyful bits of software development away from you

Quick question: by "joyful bits of software development," do you mean the bit where you design robust architectures, services, and their communication/data concepts to solve specific problems, or the part where you have to assault a keyboard for extended periods of time _after_ all that interesting work so that it all actually does anything?

Because I sure know which of these has been "taken from me," and it's certainly not the joyful one.

simonw · 2 days ago
> Well, there it is, the dumbest thing I'll read on the internet all week.

Rude.

In case you're open to learning, here's why I think this is useful.

The big lesson we've learned from Claude Code, Codex CLI et al over the past twelve months is that the most useful tool you can provide to an LLM is Bash.

Last year there was enormous buzz around MCP - Model Context Protocol. The idea was to provide a standard for wiring tools into LLMs, then thousands of such tools could bloom.

Claude Code demonstrated that a single tool - Bash - is actually much more interesting than dozens of specialized tools.

Want to edit files without rewriting the whole thing every time? Tell the agent to use sed or perl -e or python -c.

Look at the whole Skills idea. The way Skills work is you tell the LLM "if you need to create an Excel spreadsheet, go read this markdown file first and it will tell you how to run some extra scripts for Excel generation in the same folder". Example here: https://github.com/anthropics/skills/tree/main/skills/xlsx

That only works if you have a filesystem and Bash style tools for navigating it and reading and executing the files.

This is why I want Linux in WebAssembly. I'd like to be able to build LLM systems that can edit files, execute skills and generally do useful things without needing an entire locked down VM in cloud hosting somewhere just to run that application.

Here's an alternative swipe at this problem: Vercel have been reimplementing Bash and dozens of other common Unix tools in TypeScript purely to have an environment agents know how to use: https://github.com/vercel-labs/just-bash

I'd rather run a 10MB WASM bundle with a full existing Linux build in it than reimplement it all in TypeScript, personally.

yjftsjthsd-h · 2 days ago
> I'll bet you can't name a single advantage such a system would have over cloud hosting or a local Docker instance.

Cheaper than renting a server, more isolated than a container.

ZeWaka · 2 days ago
It's relatively easy to spin up a busybox WASM v86 solution
kantord · 2 days ago
This is not the technical solution you want, but I think it provides the result that you want: https://github.com/devcontainers

tldr; devcontainers let you completely containerize your development environment. You can run them on Linux natively, or you can run them on rented computers (there are some providers, such as GitHub Codespaces) or you can also run them in a VM (which is what you will be stuck with on a Mac anyways - but reportedly performance is still great).

All CLI dev tools (including things like Neovim) work out of the box, and many/most GUI IDEs also support working with devcontainers. (In this case the GUI is usually not containerized, or at least does not live in the same container, although on Linux you can also do that with Flatpak. GitHub Codespaces, for instance, runs VS Code fully in the browser, which is another way to sandbox it on both ends.)

stavros · 2 days ago
This is interesting (and I've seen it mentioned in some editors), but how do I use it? It would be great if it had bubblewrap support, so I don't have to use Docker.

Do you know if there's a cli or something that would make this easier? The GitHub org seems to be more focused on the spec.

jraph · 3 days ago
Simon, this HN post didn't need to be about Gen AI.

This thing is really inescapable these days.

dang · 2 days ago
It's normal for HN to be preoccupied with the major technical trend of the moment, and this is unquestionably the biggest technical trend in many years.

People can argue about where to insert it in the list, but it is certainly in the top 5 of many decades (smartphones, web, PCs, etc.) That's why it's inescapable.

Your complaint isn't really about simonw's comment, but rather the fact that it was heavily upvoted - in other words, you were dissenting from the community reaction to the comment. That's understandable; in fact it's a fundamental problem with forums and upvoting systems: the same few massive topics suck in all the smaller ones until we get one big ball of topic mud: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que....

simonw · 2 days ago
Parallel thread: https://news.ycombinator.com/item?id=47311484#47312829 - "I've always been fascinated by this, but I have never known what it would be useful for."

I should have replied there instead, my mistake.

yokoprime · 2 days ago
What topics are allowed in your opinion? I very much enjoyed Simon’s comment as it is a use case I also was thinking of.
brumar · 2 days ago
Why not let upvotes do their thing? I enjoyed this comment.
grimgrin · 2 days ago
a bit cute that you interacted with the 1 AI thread. there are other threads!
bakugo · 2 days ago
[flagged]
dang · 2 days ago
Please don't cross into personal attack on this site. We ban accounts that do that, and you've unfortunately done it repeatedly in this thread. Current comment was the worst case of this by far, but https://news.ycombinator.com/item?id=47317411, for example, is also on the wrong side of the line.

https://news.ycombinator.com/newsguidelines.html

iamjackg · 2 days ago
Nobody is promoting a product. Simon is just sharing an experiment he attempted. No products being sold here.
westurner · 3 days ago
How do TinyEmu and JSLinux compare to linux-wasm?

From "Show HN: Amla Sandbox – WASM bash shell sandbox for AI agents" (2026) https://news.ycombinator.com/item?id=46825119 :

>>> How to run vscode-container-wasm-gcc-example with c2w, with joelseverin/linux-wasm?

>> linux-wasm is apparently faster than c2w

From "Ghostty compiled to WASM with xterm.js API compatibility" https://news.ycombinator.com/item?id=46118267 :

> From joelseverin/linux-wasm: https://github.com/joelseverin/linux-wasm :

>> Hint: Wasm lacks an MMU, meaning that Linux needs to be built in a NOMMU configuration

From https://news.ycombinator.com/item?id=46229385 :

>> There's a pypi:SystemdUnitParser.

hashkitly · 2 days ago
Amazing work by Fabrice Bellard as always. The x86_64 support opens up so many possibilities for running modern Linux distributions in the browser.
bonzini · 2 days ago
Wow, with AVX512 too?? Now I really want to add it to QEMU. :)

(For APX I have patches at https://lore.kernel.org/qemu-devel/20260301144218.458140-1-p... but I have never tested them on system emulation).

lxgr · 2 days ago
Is JSLinux still an interpreter, or does it JIT compile these days?

Or are modern JS JITs so good that this is no longer a relevant distinction, i.e. is the performance of a JITted x86 interpreter effectively equivalent to an x86-to-JavaScript translator whose output is then itself JIT-compiled?

AlecMurphy · 3 days ago
If anyone is interested, I made some modifications last month to get TempleOS running on the x86_64 JSLinux: https://ring0.holyc.xyz/
zb3 · 2 days ago
Wow, thanks for this, this is exactly what v86 was missing! Runs faster than my demo: https://zb3.me/qemu-wasm-test/jspi-noffi/

Even though it has no JIT. Truly magic :)