I'm curious if Coi will also suffer from the drawbacks of compile-time reactivity. Or as its own language that doesn't have to "fit" into JS, can Coi side-step the disadvantages via syntax?
Runes allowed simpler Svelte syntax while making it more clear, consistent, flexible, and composable.
Support for signals may help in terms of interop with native browser JS and other frameworks: https://github.com/tc39/proposal-signals
Coi avoids this ambiguity because the compiler can definitively trace usage patterns. Since `mut` variables are explicitly declared, the compiler essentially just looks at where they are used in the view {} block to establish dependencies at compile time. This static analysis is precise and doesn't require the compiler to "guess" intent, effectively preserving the benefits of compile-time reactivity without the fragility found in dynamic languages.
Most practical approach: AI-assisted conversion. Feed an LLM the Coi docs + your Vue code and let it transform components. For migrating existing codebases, that's likely the most efficient path.
For new code, writing Coi directly is simpler :)
Fatal error: 'stdint.h' file not found
Yet it exists within /usr/include.

Not a rant, but developers, please include testing on FreeBSD. Git issue raised.
The fix was adding freestanding stdint.h and stddef.h to webcc's compat layer using compiler built-ins (__SIZE_TYPE__, etc.). This makes webcc work consistently across all platforms without relying on platform-specific clang configurations.
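For anyone curious, such a freestanding shim boils down to typedefs built on macros that clang and GCC predefine for every target (a minimal sketch; webcc's actual compat headers cover more types and the limit macros):

```cpp
// Minimal freestanding stddef.h/stdint.h replacement using
// compiler-predefined type macros, so no platform headers are needed.
typedef __SIZE_TYPE__    size_t;
typedef __PTRDIFF_TYPE__ ptrdiff_t;
typedef __INT8_TYPE__    int8_t;
typedef __UINT8_TYPE__   uint8_t;
typedef __INT16_TYPE__   int16_t;
typedef __UINT16_TYPE__  uint16_t;
typedef __INT32_TYPE__   int32_t;
typedef __UINT32_TYPE__  uint32_t;
typedef __INT64_TYPE__   int64_t;
typedef __UINT64_TYPE__  uint64_t;

// Sanity checks: these hold on any conforming clang/GCC target.
static_assert(sizeof(int8_t)  == 1, "int8_t must be 1 byte");
static_assert(sizeof(int32_t) == 4, "int32_t must be 4 bytes");
static_assert(sizeof(uint64_t) == 8, "uint64_t must be 8 bytes");
```

Since the macros come from the compiler itself, the same header works on Linux, macOS, FreeBSD, and wasm targets alike.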
I hope it works now for you - hit me up if there are still problems!
JSX-like view syntax – Embedding HTML with expressions, conditionals (<if>), and loops (<for>) requires parser support. Doing this with C++ macros would be unmaintainable.
Scoped CSS – The compiler rewrites selectors and injects scope attributes automatically. In WebCC, you write all styling imperatively in C++.
Component lifecycle – init{}, mount{}, tick{}, view{} blocks integrate with the reactive system. WebCC requires manual event loop setup and state management.
Efficient array rendering – Array loops track elements by key, so adding/removing/reordering items only updates the affected DOM nodes. The compiler generates the diffing and patching logic automatically.
Fine-grained reactivity – The compiler analyzes which DOM nodes depend on which state variables, generating minimal update code that only touches affected elements.
From a DX perspective: Coi lets you write <button onclick={increment}>{count}</button> with automatic reactivity. WebCC is a low-level toolkit – Coi is a high-level language that compiles to it, handling the reactive updates and DOM boilerplate automatically.
These features require a new language because they need compiler-level integration – reactive tracking, CSS scoping, JSX-like templates, and efficient array updates can't be retrofitted into C++ without creating an unmaintainable mess of macros and preprocessors. A component-based declarative language is fundamentally better suited for building UIs than imperative C++.
From the product perspective, it occupies a different market than Emscripten, and I don't see it as a good comparison. Your product is optimized for running C++ code on the web (and Coi is a cherry on top of that), whereas Emscripten is made to let native C++ applications run on the web without significant changes to the original source.
Now, about `webcc::flush()` - what are your thoughts on the scalability of the opcode parsing? Right now it's switch/case based.
The flushing part can be tricky. I see cases where the main logic doesn't care about an immediate response or shared data, so a single flush at the end of the frame would be good; other times you'd like to pass data from C++ while it's still in scope. On top of that, I wouldn't be surprised if control over what flushes gets lost.
(I'm speaking from a game developer perspective; some of the issues I'm thinking aloud about might be exaggerated.)
Lastly, one suggestion that would make developers happier: provide a way to change the wasm compilation flags. As a C++ developer I'd love to compile debug wasm code with DWARF so I can debug against the C++ sources.
To wrap up - I'm very impressed by the idea and execution. Phenomenal work!
On the opcode parsing - the switch/case approach is intentionally simple and surprisingly fast. Modern compilers turn dense switch statements into jump tables, so it's essentially O(1) dispatch.
Your flush timing concern is understandable, but the architecture actually handles this cleanly. Buffered commands accumulate, and anything that returns a value auto-flushes first to guarantee correct ordering. For game loops, the natural pattern is batch everything during your frame logic, single flush at the end. You don't lose control, the auto-flush on sync calls ensures execution order is always maintained.
DWARF debug support is a great call.
This itself is quite cool. I know of a project in ClojureScript that also avoids virtual DOM and analyzes changes at compile-time by using sophisticated macros in that language. No doubt with your own language it can be made even more powerful. How do you feel about creating yet another language? I suppose you think the performance benefits are worthwhile to have a new language?
Just curious, what would the FPS be using native plain pure JavaScript for the same exact test?
The real advantage comes when you have compute-intensive operations, data processing, image manipulation, numerical algorithms, etc. The batched command buffer lets you do those operations in WASM, then batch all the rendering commands and flush once, minimizing the interop tax.
For pure "draw 10k rectangles with no logic," JS is probably fastest since there's no bridge to cross. But add real computation and the architecture pays off :)
> It’s way more efficient, I ran a benchmark rendering 10k rectangles on a canvas and the difference was huge: Emscripten hit around 40 FPS, while my setup hit 100 FPS.
This sounds a bit suspicious tbh.
For instance in this Emscripten WebGL2 sample I can move the instance slider to about 25000 before the frame rate drops below 120 fps on my 2021 M1 MBP in Chrome:
https://floooh.github.io/sokol-html5/drawcallperf-sapp.html
Each 'instance draw' is doing one glUniform4iv and one glDrawElements call via Emscripten's regular WebGL2 shim, e.g. 50k calls across the WASM/JS boundary per 120Hz frame, and I'm pretty sure the vast bulk of the execution time is inside the WebGL2 implementation and the actual call overhead from WASM to JS is negligible (also see: https://hacks.mozilla.org/2018/10/calls-between-javascript-a... - e.g. wasm-to-js call overhead is in the nanoseconds area since around 2018).
Still, very cool idea to have this command batching, but I doubt that the performance improvement can be explained by the WASM-JS call overhead alone; there must be something else going on. Maybe it's as simple as the command buffer approach being more cache-friendly, or the tight decoding loop on the JS side allowing more JIT optimizations by the JS engine. But the differences you saw are still baffling, because IME 10k fairly simple operations in 25 or 10 milliseconds (e.g. 40 or 100 fps) are not enough to see much of a difference from CPU caches or inefficient JIT code generation, and by far most of the time should be spent inside the WebGL2 implementation, which should be the same whether a traditional shim or the command buffer approach is used.
Just to clarify, my benchmark was using Canvas2D, not WebGL, that's why the numbers are much lower than your WebGL2 example. Based on your comment I actually removed the command batching to test the difference, and yeah, the batching optimization is smaller than I initially thought. WebCC with batched commands hits ~100 FPS, without batching it's ~86 FPS, and Emscripten is ~40 FPS. So the batching itself only contributes about ~14 FPS.
The bigger performance difference compared to Emscripten seems to come from how Canvas2D operations are handled. Emscripten uses their val class for JS interop which wraps each canvas call in their abstraction layer. WebCC writes raw commands (opcode + arguments) directly into a buffer that the JS side decodes with a tight switch statement. The JS decoder already has direct references to canvas objects and can call methods immediately without property lookups or wrapper overhead. With 10k draw calls per frame, these small per-call differences (property access, type boxing/unboxing, generic dispatch) compound significantly.