We use WASM quite a bit for embedding a lot of Rust code with very company-specific domain logic into our web frontend. Pretty cool, because now your backend and frontend can share all kinds of logic without endless network calls.
But it’s safe to say that the interaction layer between the two is extremely painful. We have nicely modeled type-safe code in both the Rust and TypeScript world and an extremely janky layer in between. You need a lot of inherently slow and unsafe glue code to make anything work. Part of it is WASM itself, part of it is wasm-bindgen. What were they thinking?
I’ve read that WASM isn’t designed with this purpose in mind, going back and forth over the boundary often. That it's more suited to having longer-running compute happen in the background and bringing over some chunk of data at the end. Why create a generic bytecode execution platform and limit the use case so much? Not everyone is building an in-browser crypto miner.
The whole WASM story is confusing to me.
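For a sense of why that glue layer feels janky, here is a minimal sketch of the kind of hand-written glue involved in passing a string across the boundary. The module interface (`domain_logic.wasm`, `alloc`, `dealloc`, `process`, the packed i64 return) is made up for illustration; real wasm-bindgen output differs in detail, but the copy/encode/decode dance is the same in spirit.

```ts
// Hedged sketch: passing a string into a WASM module and getting one back.
const { instance } = await WebAssembly.instantiateStreaming(fetch("domain_logic.wasm"));
const { memory, alloc, dealloc, process } = instance.exports as any;

function callProcess(input: string): string {
  const bytes = new TextEncoder().encode(input);                  // JS string -> UTF-8 copy
  const inPtr = alloc(bytes.length);                              // ask the module for linear memory
  new Uint8Array(memory.buffer, inPtr, bytes.length).set(bytes);  // copy the bytes in

  // assume the module returns (outPtr << 32 | outLen) as an i64 (a BigInt on the JS side)
  const packed: bigint = process(inPtr, bytes.length);
  const outPtr = Number(packed >> 32n);
  const outLen = Number(packed & 0xffffffffn);

  const out = new TextDecoder().decode(new Uint8Array(memory.buffer, outPtr, outLen));
  dealloc(inPtr, bytes.length);                                   // manual cleanup on both ends
  dealloc(outPtr, outLen);
  return out;                                                     // UTF-8 -> new JS string, copied again
}
```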
My reading of it is that the people furthering WASM aren't really associated with just browsers anymore and they are building a whole new VM ecosystem that the browser people aren't interested in. This is just my take since I am not internal to those organizations. But you have the whole web assembly component model and browsers just do not seem interested in picking that up at all.
So on the one side you have organizations that definitely don't want to easily give network/filesystem/etc. access to code and on the other side you have people wanting it to be easier to get this access. The browser is the main driving force for WASM, as I see it, because outside of the browser the need for sandboxing is limited to plugins (where Lua often gets used) since otherwise you can run a binary or a docker container. So WASM doesn't really have much impetus to improve beyond compute.
> So on the one side you have organizations that definitely don't want to easily give network/filesystem/etc. access to code and on the other side you have people wanting it to be easier to get this access
I don't think this is entirely fair or accurate. This isn't how Wasm runtimes work. Making it possible for the sandbox to explicitly request specific resource access is not quite the same thing as what you're implying here.
> The browser is the main driving force for WASM, as I see it
This hasn't been the case for a while. In your first paragraph you yourself say that 'the people furthering WASM are [...] building a whole new VM ecosystem that the browser people aren't interested in' - if that's the case, how can the browser be the main driving force for Wasm? It's true, though, that there's very little revenue in browser-based Wasm. There is revenue in enterprise compute.
> because outside of the browser the need for sandboxing is limited to plugins (where Lua often gets used) since otherwise you can run a binary or a docker container
Not exactly true when you consider that docker containers are orders of magnitude bigger, slower to mirror and start up, require architecture specific binaries, and are not great at actually 'containing' fallout from insecure code, supply chain vulns, etc. The potential benefits to enterprise orgs that ship thousands of multi-gig docker containers a week with microservices architectures that just run simple business logic are very substantial. They just rarely make it to the hn frontpage, because they really are boring.
However, the Wasm push in enterprise compute is real, and the value is real. But you're right that the ecosystem and its sponsorship is still struggling - in some part due to lack of support for the component model by the browser people. The component model support introduced in Go 1.25 has been huge though, at least for the (imho bigger) enterprise compute use case, and the upcoming update to the component model (WASI p3) should make a ton of this stuff way more usable. So it's a really interesting time for Wasm.
Meanwhile the people using already established VM ecosystems don't see value in dropping several decades of IDEs, libraries and tools for yet another VM redoing more or less the same, e.g. application servers in Kubernetes with WASM containers.
WASM as it is, is good enough for non-trivial graphics and geometry workloads - visibility culling (given octree/frustum), data de-serialization (pointclouds, meshes), and actual BREP modeling. All of these a) are non-trivial to implement b) would be a pain to rewrite and maintain c) run pretty swell in the wasm.
I agree WASM has its drawbacks but the execution model is mostly fine for these types of tasks, where you offload the task to a worker and are fine waiting a millisecond or two for the response.
The main benefit for complex tasks like the above is that when a product needs to support an isomorphic web and native experience (quite a few use cases actually in CAD, graphics & GIS) based on complex computation you maintain, the implementation and maintenance load drops by half. I.e. these _could_ be e.g. TypeScript, but then maintaining feature parity becomes _much_ more burdensome.
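A minimal sketch of that offload-to-a-worker pattern, assuming a hypothetical `compute.wasm` that exports `memory`, `alloc` and `solve`; the point is that a round trip of a millisecond or two is irrelevant when the call itself is heavy.

```ts
// worker.ts: instantiate once, then run heavy WASM calls off the main thread.
let exports: any;
self.onmessage = async (e: MessageEvent<Float64Array>) => {
  if (!exports) {
    const { instance } = await WebAssembly.instantiateStreaming(fetch("compute.wasm"));
    exports = instance.exports;
  }
  const input = e.data;
  const ptr = exports.alloc(input.byteLength);
  new Float64Array(exports.memory.buffer, ptr, input.length).set(input);
  const result = exports.solve(ptr, input.length);   // the long-running part stays in WASM
  (self as any).postMessage(result);
};

// main.ts: fire the request and await the answer.
const worker = new Worker("worker.js", { type: "module" });
function solveInWorker(data: Float64Array): Promise<number> {
  return new Promise((resolve) => {
    worker.onmessage = (e) => resolve(e.data as number);
    worker.postMessage(data, [data.buffer]);         // transfer the buffer, don't copy it
  });
}
```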
> I’ve read that WASM isn’t designed with this purpose in mind to go back and forth over the boundary often.
It's fine and fast enough as long as you don't need to pass complex data types back and forth. For instance WebGL and WebGPU WASM applications may call into JS thousands of times per frame. The actual WASM-to-JS call overhead itself is negligible (in any case, much less than the time spent inside the native WebGL or WebGPU implementation), but you really need to restrict yourself to directly passing integers and floats for 'high frequency calls'.
Those problems are quite similar to any FFI scenario though (e.g. calling from any high level language into restricted C APIs).
https://github.com/ealmloff/sledgehammer_bindgen
How would you make such a thing without limiting it in some such way?
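To make the "integers and floats only" point concrete, here's a minimal sketch of the handle-table convention such modules typically use for high-frequency WebGL calls. `app.wasm` and the import names are placeholders.

```ts
// JS objects (buffers, textures, ...) never cross the boundary; the JS side
// keeps them in a handle table and WASM only ever sees integer handles.
const canvas = document.querySelector("canvas") as HTMLCanvasElement;
const gl = canvas.getContext("webgl2")!;
const handles: (WebGLBuffer | null)[] = [];
let memory: WebAssembly.Memory;

const imports = {
  env: {
    gl_create_buffer(): number {
      handles.push(gl.createBuffer());
      return handles.length - 1;                 // integer handle back to WASM
    },
    gl_bind_buffer(target: number, handle: number): void {
      gl.bindBuffer(target, handles[handle]);
    },
    gl_buffer_data(target: number, ptr: number, len: number, usage: number): void {
      // bulk data is passed as a (pointer, length) pair into WASM's own linear memory
      gl.bufferData(target, new Uint8Array(memory.buffer, ptr, len), usage);
    },
  },
};

const { instance } = await WebAssembly.instantiateStreaming(fetch("app.wasm"), imports);
memory = instance.exports.memory as WebAssembly.Memory;
```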
The entire DOM API is tightly coupled to JS; it's all designed with JS in mind, and any new and future proposed changes are thought about solely through the lens of JS.
If they introduced a WASM API it would perpetually be a few months/years behind the JS one, any new features would have to be implemented in both etc.
I can see why it's not happened
(edit) And yes, I think the intention of WASM was either heavy processing, or UI elements more along the lines of what used to be done with Java applets etc., potentially using canvas and bypassing the DOM entirely, not as an alternative to JS for doing `document.createElement`
Is it though? I thought it was all specified in WebIDL and all the browser vendors generate C++ headers from it too.
The confusion is perhaps due to your usage focus versus the security constraints browser and compiler makers face.
First off, remember that initially all we had was JS, then Asm.JS was forced down Apple's throat by being "just" a JS compatible performance hack (remember that Google had tried to introduce NaCl beforehand but it never got traction). You can still see the Asm.JS lineage in how Wasm branching opcodes work (you can always easily decompose them into while loops together with break and continue instructions).
The target market for NaCl, Asm.JS and Wasm seems to have been focused on enabling porting of C/C++ games, even if other usages were always of interest, so while interop times can be painful it's usually not a major factor.
Secondly, as a compiler maker (and when looking at performance profiles), I usually place languages into 3 categories.
Category 1: Plain-memory-accessors. Objects are usually a pointer number + offsets for members, with more or less manually managed memory. Cache friendliness is your own worry; CPU instructions are always simple.
C, C++, Rust, Zig, Wasm/Asm.JS, etc. go here.
Category 2: GC'd offset languages. While we still have pointers (now called references), they're usually restricted from being directly mutated, instead going through specialized access instructions. However, as with category 1, the actual value can often be accessed with pointer+offset and object layouts are _fixed_, so less freedom vs JS but higher perf.
Also there can often be GC-specific instructions like read/write barriers associated with object accesses. Performance for actual instructions is still usually good, but GCs can affect access patterns to increase costs, and collection adds some unpredictability.
Java, C#, Lisps, high perf functional languages, etc. usually belong here (with exceptions).
Category 3: GC'd free-prop languages. Objects are no longer of fixed size (you can add properties after creation); runtimes like V8 try their best to optimize this away to approach Category 2 languages, but abuse things enough and you'll run off a performance cliff. Every runtime optimization requires _very careful_ design of fallbacks that can affect practically any other part of the runtime (these manifest as type-confusion vulnerabilities if you look at bug reports) as well as how native bindings are handled.
JS, Python, Lua, Ruby, etc. go here.
Naturally some languages/runtimes can straddle these lines (.NET/CIL has always been able to run C as well as later JS, Ruby and Python in addition to C# and today C# itself is gaining many category 1 features), I'm mostly putting the languages into the categories where the majority of user created code runs.
To get back to the "troubles" of Wasm<->JS: as you noticed, they are of category 1 and 3. Since Wasm is "wrapped" by JS you can usually reach into Wasm memory from JS since it's "just a buffer"; the end-user security implications are fairly low since the JS side has well defined bounds checking (outside of performance costs).
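To make the "just a buffer" point concrete, a small sketch of JS poking at a module's linear memory; `module.wasm` and its `sum` export are placeholders.

```ts
// WASM linear memory is an ordinary ArrayBuffer, so reads and writes from JS
// go through typed-array views with normal JS bounds checking.
const { instance } = await WebAssembly.instantiateStreaming(fetch("module.wasm"));
const memory = instance.exports.memory as WebAssembly.Memory;

const ptr = 1024;                                         // some scratch region inside linear memory
const view = new Int32Array(memory.buffer, ptr, 4);
view.set([1, 2, 3, 4]);                                   // write from JS...
const total = (instance.exports.sum as Function)(ptr, 4); // ...and let the module read it

// Out-of-range indexing on the view just yields undefined instead of touching
// anything outside the buffer, which is why this direction is the easy one
// security-wise.
```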
The other direction is a pure clusterf from a compiler writer's point of view. Remember that most of these optimizations of Cat 3 languages have security implications? Allowing access would require every precondition check to be replicated on the Wasm side as well as in the main JS runtime (or you'd build a unified runtime, but optimization strategies are often different).
The new Wasm-GC (finally usable with Safari since late last year) allows GC'd Category 2 languages to be built directly to Wasm (rather than shipping their own GC via Cat 1 emulation like C#/Blazor, or being compiled to JS), and even here they punted on any access to category 3 (JS) objects, basically marking them as opaque objects that can be referred to and passed back to JS (an improvement over previous WASM since there is no extra GC syncing as one GC handles it all, but still no direct access standardized iirc).
So, security has so far taken center stage over usability. They fix things as people complain but it's not a fast process.
That describes much of modern computing.
Think of it as a backend and not as a library and it clicks.
WASM is just a hotfix for JavaScript so people can use any language they want.
It's all about JavaScript being popular and being the standard language. JS is not a great language, but it's standard across every computer, and that dwarfs anything else that can be said about it.
Adjusting browsers so they can run WASM was easy to do, but getting browser vendors to make the DOM work from WASM was obviously more difficult, because they might each handle the DOM in different ways.
Not to mention js engines are very complicated.
Trying to shoehorn Rust as a web scripting language was your second mistake.
Your first mistake was to mix Rust, TypeScript and JavaScript just to add logic to your HTML buttons.
I swear, things get worse every day on this planet.
WASM enables things like running a 20 year old CAD engine written in C++ in the browser. It isn’t a scripting language, it’s a way to get high-performing native code into web apps with a sensible bridge to the JS engine. It gets us closer to the web as the universal platform.
It's not just the DOM, it's also all other APIs like WebGL2.
I ended up having to rewrite the entire interfacing layer of my mobile application (which used to be WebAssembly running in WebKit/Safari on iOS) because I was getting horrible performance losses each time I crossed that barrier. For graphics applications where you have to allocate and pass buffers or in general piping commands, you take a horrible hit. Firefox and Chrome on Windows/macOS/Linux did quite well, but Safari...
Everything has to pass the JavaScript barrier before it hits the browser. It's so annoying!
The web is a platform that has so much unrealized potential that is absolutely wasted.
Wasm is the perfect example of this - it has the potential to revolutionize web (and desktop GUI) development but it hasn't progressed beyond niche single threaded use cases in basically 10 years.
I’ve personally felt like it has been progressing, but I’m hoping you can expand my understanding!
It should never have been web assembly. WASM is the fulfillment of the dream that started with the Java VM in the 90’s but never got realized. A performant, truly universal virtual machine for write-once, run anywhere deployment. The web part is a distraction IMHO.
Generalized Assembly? GASM?
High-performance web-based applications are pretty high on my list.
Low memory usage and low CPU demand may not be a requirement for all websites because most are simple, but there are plenty of cases where JavaScript/TypeScript is objectively the wrong language to be using.
Banking apps, social network sites, chat apps, spreadsheets, word processors, image processors, jira, youtube, etc
Something as simple as multithreading is enough to take an experience from "treading water" to "runs flawlessly on an 8 year old mobile device". Accurate data types are also very valuable for finance applications.
Another use case is sharing types between the front and back end.
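For context on the multithreading point: WASM threads in the browser are built from Web Workers plus a shared linear memory (and require cross-origin isolation headers). A rough sketch, with `compute.wasm` and `thread.js` as placeholder names:

```ts
// Main thread: create one shared memory and hand it to a pool of workers.
const memory = new WebAssembly.Memory({ initial: 256, maximum: 512, shared: true });

const pool = Array.from({ length: navigator.hardwareConcurrency }, () => {
  const w = new Worker("thread.js", { type: "module" });
  w.postMessage({ memory });   // a shared memory can be posted; all workers see the same bytes
  return w;
});

// thread.js would then do roughly:
//   self.onmessage = async ({ data }) => {
//     await WebAssembly.instantiateStreaming(fetch("compute.wasm"), { env: { memory: data.memory } });
//     // ... pull work items off a queue kept in the shared memory ...
//   };
```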
I use it for a web version of some robotics simulation & visualization software I wrote in C++. It normally runs as an app on Mac or Linux, but compiling to WASM lets me show public interactive demos.
Before WASM, the options were:
- require everyone to install an app to see visualizations
- just show canned videos of visualizations
- write and maintain a parallel Javascript version
Demo at https://throbol.com/sheet/examples/humanoid_walking.tb
With access to DOM it could run with no (or just very little) js, no ts-to-js transpiler, no web-framework-of-the-month wobbly frontends perpetually reinventing the wheel. One could use a sane language for the frontend. That would be quite the revolution.
People already appreciate the access to low-level native libraries like duckdb, sqlite, imagemagick, ffmpeg… that wasm allows. Or high performance games/canvas based applications (figma).
But CRUD developers don’t know/care about those, I guess.
Law of question marks on headlines holds here: no / never seems to be the answer.
The article also discussed ref types, which do exist and do provide... something. Some ability to at least refer to host objects. It's not clear what that enables or what its limitations are.
Definitely some feeling of being rug-pulled in the shift here. It felt like there was a plan for good integration, but fast forward half a decade+ and there's been so, so much progress and integration, yet it's still so unclear how WebAssembly is going to alloy with the web; it seems like we have reams of generated glue code doing so much work to bridge systems.
Very happy that Dan at least checked in here, with a state-of-wasm-for-web-people type post. It's been years of waiting and wondering, and I've been keeping my own tabs somewhat through twists and turns, but having some historical artifact, some point-in-time recap to go look at like this: it's really crucial for the health of a community to have some check-ins with the world, to let people know what to expect. Particularly for the web, wasm has really needed a State of WebAssembly on the Web update.
I wish I felt a little better though! Jco is amazing but running a js engine in wasm to be able to use wasm-components is gnarly as hell. Maybe by 2030 wasm & wasm-components will be doing well enough that browsers will finally rejoin the party & start implementing again.
"Definitely some feeling of being rug-pulled in the shift here."
Definitely feeling rug-pulled.
What I think all the people that harp on the "Don't worry, going through JS is good enough for you." line are missing is the subtext of their message. They might objectively be right, but in the end what they are saying is that they are content with WASM being a second class citizen in the web world.
This might be fine for everyone needing a quick and dirty solution now, but it is not the kind of narrative that draws in smart people to support an ecosystem in the long run. When you bet, you bet on the rider and not the domestique.
> that they are content with WASM being a second class citizen in the web world
Tbh, most of the ideas so far to enable more direct access to JavaScript APIs from WASM have a good chance of ruining WASM with pointless complexity.
Keeping those two worlds separate, but making sure that 'raw' calls between WASM and JS are as fast as they can be (which they are) is really the best longterm solution.
I think what people need to understand is that the idea of having 'pure' WASM browser applications which don't involve a single line of Javascript is a pipe dream. There will always be some sort of JS glue code, it might be generated and you don't need to directly deal with it, but it will still be there, and that's simply because web APIs are first and foremost designed for usage from Javascript.
Some web APIs have started to 'appease' WASM more by adding 'garbage free' function overloads, which IMHO is a good thing because it may help to reduce overhead on the JS side, but this takes time and effort to be implemented in all browsers (and most importantly, a will by mostly "JS-centric" web people to add such helper functions which mostly only benefit WASM).
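One concrete example of that kind of addition (as far as I know, partly motivated by exactly this use case): `TextEncoder.encodeInto`, which writes UTF-8 straight into an existing buffer such as WASM linear memory instead of allocating a fresh `Uint8Array` on every call.

```ts
// Write a JS string directly into WASM linear memory without an intermediate allocation.
const encoder = new TextEncoder();

function writeString(memory: WebAssembly.Memory, ptr: number, capacity: number, s: string): number {
  const dest = new Uint8Array(memory.buffer, ptr, capacity);
  const { written } = encoder.encodeInto(s, dest);   // garbage-free: reuses the destination view
  return written ?? 0;                               // number of bytes placed at ptr
}
```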
I'm always baffled by the crowd that suggests "Just use Javascript to interface it to the DOM!". If that's the outcome of using WASM, couldn't I just write Javascript?
One could say second class, another could say that's a good separation of concerns. Having direct access would lead to additional security issues and considerations.
I wish it was possible to disable WASM in browsers.
Reference types make wasm/js interoperability way cleaner and easier. wasm-gc added a way to test a function pointer for whether it will trap or not.
And JSPI has been a standard since April and is available in Chrome >= 137. I think JSPI is the greatest step forward for webassembly in the browser ever. Just need Firefox and Safari to implement it...
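For anyone who hasn't looked at it: JSPI lets a WASM export call an async JS import and be suspended until the promise settles, so the module doesn't have to be rewritten in async style. A rough sketch, assuming the API shape from the JSPI proposal (`WebAssembly.Suspending` / `WebAssembly.promising`) and a made-up `app.wasm`:

```ts
const imports = {
  env: {
    // an ordinary async JS function, wrapped so that calling it from WASM
    // suspends the WASM stack instead of blocking or needing callbacks
    fetch_price: new (WebAssembly as any).Suspending(async (id: number) => {
      const res = await fetch(`/api/price/${id}`);    // hypothetical endpoint
      return (await res.json()).price as number;
    }),
  },
};

const { instance } = await WebAssembly.instantiateStreaming(fetch("app.wasm"), imports);

// the synchronous-looking WASM export is exposed to JS as a Promise-returning function
const computeTotal = (WebAssembly as any).promising(instance.exports.compute_total);
const total = await computeTotal(42);
```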
I'd really love a deep dive on what reference types enable and what limitations they have. Why are reference types not an end-all be-all "When is WebAssembly Going to Get DOM Support?" 'we have them now' answer?
Is there any data on the performance cost of JS/WASM context switches? The way the architecture is described, it sounds as if the costs could be substantial, but the approaches described in the article basically hand them out like candy.
This would sort of defeat the point that WASM is supposed to be for the "performance critical" parts of the application only. It doesn't seem very useful if your business logic runs fast, but requires so many switching steps that all performance benefits are undone again.
Yeah, it's very unfortunate for WebGL/WebGPU apps, where every call has to pass/convert typed arrays and issue a js gl call. It pretty much kills any advantage of using WASM. Hope that changes.
How can you reconcile this with all of the AAA games that have been shown to work well on Wasm+WebGL? What is different between your usage and theirs?
https://playgama.com/blog/general/boost-html5-game-performan...
It is still the same JIT calling itself; there is no reason it should be far slower than js-to-js.
Not entirely sure, but C#'s Blazor is amazing. I can stick to purely C# code, front-end and back-end; we rarely call out to JS unless it's for something like file upload dialogs. I don't want to ever touch JavaScript again after this workflow.
Edit:
And if you don't want to do "WebAssembly" you can have it do it all server rendered, think of a SPA on steroids.
https://www.youtube.com/watch?v=4KtotxNAwME
https://www.youtube.com/watch?v=V1cqQRmVAK0
This problem is how you spot people that have tried to do it vs those that just talk about it. Everyone ends up with batching calls back and forth because the cost is so high.
Separately the conceptual mismatch when the js has to allocate/deallocate things on the wasm side is also tedious to deal with.
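A sketch of what that batching typically ends up looking like: the module writes a command stream into its own linear memory and the JS side decodes the whole batch in one call. The opcodes and record layout here are invented for illustration.

```ts
const OP_SET_TEXT = 1, OP_ADD_CLASS = 2;
const nodeTable: HTMLElement[] = [document.body];        // JS-side handle table; 0 = <body>

// called (e.g. as a WASM import) once per frame/update with a pointer to
// `count` records of three u32s each: [opcode, nodeHandle, argPtr]
function flushCommands(memory: WebAssembly.Memory, ptr: number, count: number): void {
  const words = new Uint32Array(memory.buffer, ptr, count * 3);
  const bytes = new Uint8Array(memory.buffer);
  const dv = new DataView(memory.buffer);
  const decoder = new TextDecoder();

  for (let i = 0; i < count; i++) {
    const op = words[i * 3], handle = words[i * 3 + 1], argPtr = words[i * 3 + 2];
    const len = dv.getUint32(argPtr, true);               // length-prefixed UTF-8 argument
    const text = decoder.decode(bytes.subarray(argPtr + 4, argPtr + 4 + len));
    const node = nodeTable[handle];

    if (op === OP_SET_TEXT) node.textContent = text;
    else if (op === OP_ADD_CLASS) node.classList.add(text);
  }
}
```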
I want DOM access from WASM, but I don't want WASM to have to rely on UTF-16 to do it (DOMString is a 16-bit encoding). We already have the js-string-builtins proposal which ties WASM a little closer to 16-bit string encodings and I'd rather not see any more moves in that direction. So I'd prefer to see an additional DOM interface of DOMString8 (8-bit encoding) before providing WASM access to DOM apis. But I suspect the interest in that development is low.
Tbh I would be surprised if converting between UTF-8 and JS strings is the performance bottleneck when calling into JS code snippets which manipulate the DOM.
In any case, I would probably define a system which doesn't simply map the DOM API (objects and properties) into a granular set of functions on the WASM side (e.g. granular setters and getters for each DOM object property).
Instead I'd move one level up and build a UI framework where the DOM is abstracted away (quite similar to all those JS frameworks), and where most of the actual DOM work happens in sufficiently "juicy" JS functions (e.g. not just one line of code to set a property).
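In other words, something like this on the JS side, where one boundary crossing does a meaningful chunk of DOM work; the import name, target element and row encoding are made up for illustration.

```ts
let memory: WebAssembly.Memory;                       // assigned after instantiation
const grid = document.querySelector("table#grid")!;   // hypothetical target element

const uiImports = {
  ui: {
    // one call from WASM appends a whole row, instead of N calls to create and fill N cells
    append_row(cellsPtr: number, cellsLen: number): void {
      const packed = new TextDecoder().decode(new Uint8Array(memory.buffer, cellsPtr, cellsLen));
      const tr = document.createElement("tr");
      for (const cell of packed.split("\u0000")) {    // cells packed as NUL-separated UTF-8
        const td = document.createElement("td");
        td.textContent = cell;
        tr.appendChild(td);
      }
      grid.appendChild(tr);
    },
  },
};

const { instance } = await WebAssembly.instantiateStreaming(fetch("app.wasm"), uiImports);
memory = instance.exports.memory as WebAssembly.Memory;
```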
I'm worried that wide use of WASM is going to reduce the abilities extensions have. Currently a lot of websites are basically source-available by default due to JS.
With minimisers and obfuscators I don't see wasm adding to the problem.
I felt something was really lost once css classes became randomised garbage on major sites. I used to be able to fix/tune a website layout to my needs but now it's pretty much a one-time effort before the ids all change.
I’ve been trying to fix UI bugs in Grafana and “randomized garbage” is real. Is that a general React thing or just something the crazy people do? Jesus fucking Christ.
> Currently a lot of websites are basically source-available by default due to JS.
By default maybe, but JS obfuscators exist so not really. Many websites have totally incomprehensible JS even without obfuscators due to extensive use of bundlers and compile-to-JS frameworks.
I expect if WASM gets really popular for the frontend we'll start seeing better tooling - decompilers etc.
I am confused by this. If WASM is a VM then why would it understand the DOM? To me it's akin to asking "When will Arm get DOM support?" Seems like the answer is "When someone writes the code that runs on WASM that interacts with the DOM." Am I missing something? (not a web dev.)
The WASM VM doesn't have any (direct) access to the DOM, so there's no code you can write in it that would affect the DOM.
There's a way to make JS functions callable by WASM, and that's how people build a bridge from WASM to the DOM, but it involves extra overhead versus some theoretical direct access.
That's like saying WASM doesn't have a direct way to allocate memory or print to the console. Of course it doesn't, it doesn't have access to anything, that's the whole point.
Thanks for the clarification. So if I understand correctly - when using WASM you interface to web things through JS forcing the user to always need JS in the stack when e.g. they may want to just use Rust or Go. My first thought would be modules that are akin to a syscall interface to a DOM "device" exposed by the VM.
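Concretely, that bridge is just an import object of JS functions that the module links against; it's the closest existing thing to the "syscall interface to a DOM device" idea. A minimal hedged sketch, with invented import names and a made-up `app.wasm`:

```ts
let memory: WebAssembly.Memory;

const domImports = {
  dom: {
    // WASM can only pass numbers, so strings arrive as (pointer, length) pairs
    // into the module's own linear memory; the DOM work itself still runs in JS.
    set_title(ptr: number, len: number): void {
      document.title = new TextDecoder().decode(new Uint8Array(memory.buffer, ptr, len));
    },
    log_value(x: number): void {
      console.log("from wasm:", x);
    },
  },
};

const { instance } = await WebAssembly.instantiateStreaming(fetch("app.wasm"), domImports);
memory = instance.exports.memory as WebAssembly.Memory;
// Rust/C/Go code compiled to app.wasm would declare dom.set_title as an extern
// import and call it like a syscall; without such an import there is simply no
// path from the module to the DOM.
```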