Readit News
koenschipper commented on Making WebAssembly a first-class language on the Web   hacks.mozilla.org/2026/02... · Posted by u/mikece
koenschipper · 15 hours ago
This article perfectly captures the frustration of the "WebAssembly wall." Writing and maintaining the JS glue code—or relying on opaque generation tools—feels like a massive step backward when you just want to ship a performant module.

The 45% overhead reduction in the Dodrio experiment by skipping the JS glue is massive. But I'm curious about the memory management implications of the WebAssembly Component Model when interacting directly with Web APIs like the DOM.

If a Wasm Component bypasses JS entirely to manipulate the DOM, how does the garbage collection boundary work? Does the Component Model rely on the recently added Wasm GC proposal to keep DOM references alive, or does it still implicitly trigger the JS engine's garbage collector under the hood?

Really excited to see this standardize so we can finally treat Wasm as a true first-class citizen.

koenschipper commented on Show HN: I built a real-time OSINT dashboard pulling 15 live global feeds   github.com/BigBodyCobain/... · Posted by u/vancecookcobxin
koenschipper · 20 hours ago
That live GPS jamming calculation using commercial flight NAC-P degradation is honestly brilliant. Such a clever use of existing public telemetry.

You mentioned compressing the FastAPI payloads by 90% to keep the browser from melting. I'm really curious about your approach there: did you just crank up gzip/brotli on the JSON responses, or did you have to switch to something like MessagePack, Protobuf, or a custom binary format to handle that volume of moving GeoJSON features?
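For context on why plain gzip alone can get close to that figure: GeoJSON feature collections repeat the same keys on every feature, which compresses extremely well. A minimal sketch (the feature fields here are hypothetical stand-ins, not the project's actual schema):

```python
import gzip
import json

# Hypothetical payload: 1,000 GeoJSON-style point features with
# highly repetitive keys, similar in shape to a live aircraft feed.
features = [
    {
        "type": "Feature",
        "geometry": {"type": "Point",
                     "coordinates": [round(i * 0.01, 2), round(i * 0.02, 2)]},
        "properties": {"id": i, "callsign": f"FLT{i:04d}", "nacp": i % 12},
    }
    for i in range(1000)
]
raw = json.dumps({"type": "FeatureCollection", "features": features}).encode()
packed = gzip.compress(raw, compresslevel=9)
ratio = 1 - len(packed) / len(raw)
print(f"raw: {len(raw)} bytes, gzipped: {len(packed)} bytes ({ratio:.0%} smaller)")
```

A binary format like MessagePack or Protobuf mostly wins on top of this by dropping the repeated keys and string-encoded numbers before compression even runs, which matters more as the feature count grows.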

Also, never apologize for the "movie hacker" UI. A project like this absolutely deserves that aesthetic. Awesome work!

koenschipper commented on Show HN: How I topped the HuggingFace open LLM leaderboard on two gaming GPUs   dnhkng.github.io/posts/ry... · Posted by u/dnhkng
koenschipper · 20 hours ago
This is an incredibly elegant hack. The finding that it only works with "circuit-sized" blocks of ~7 layers is fascinating. It really makes you wonder how much of a model's depth is just routing versus actual discrete processing units.

I spend a lot of time wrestling with smaller LLMs for strict data extraction and JSON formatting. Have you noticed if duplicating these specific middle layers boosts a particular type of capability?

For example, does the model become more obedient to system prompts/strict formatting, or is the performance bump purely in general reasoning and knowledge retrieval?
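Structurally, the duplication trick the post describes amounts to repeating a contiguous block of the layer stack in place. A framework-agnostic sketch, using a plain list to stand in for the transformer layers (the block position and ~7-layer size are taken from the post; the function name is mine):

```python
def duplicate_block(layers, start, size):
    """Repeat a contiguous 'circuit-sized' block of layers.

    layers: ordered list of layer objects (here, just integer labels)
    start:  index of the first layer in the block
    size:   block length (~7 layers per the post)
    """
    block = layers[start:start + size]
    # Splice the copied block in immediately after the original.
    return layers[:start + size] + block + layers[start + size:]

# A 32-layer stack labeled 0..31; duplicate the middle layers 12..18.
stack = list(range(32))
expanded = duplicate_block(stack, 12, 7)
print(len(expanded))  # 39 layers after duplication
```

In a real model the copied layers share weights with the originals, so the parameter count stays fixed while the forward pass gets deeper.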

Amazing work doing this on a basement 4090 rig!

u/koenschipper

Karma: 9 · Cake day: July 6, 2024