We Ditched Rust WASM for TypeScript — And It Was 4x Faster
Here's a story that will make you reconsider everything you think you know about Rust, WASM, and JavaScript performance.
The OpenUI team built their parser in Rust and compiled it to WASM. The logic seemed bulletproof: Rust is fast, WASM gives you near-native speed in the browser, and their parser is a "reasonably complex multi-stage pipeline." Why wouldn't you want that in Rust?
Turns out, they were optimizing the wrong thing entirely.
The Hidden Cost of the WASM Boundary
Every call to their WASM parser paid a mandatory overhead, regardless of how fast the Rust code ran.
The Rust parsing itself was never the slow part. The overhead was entirely in the boundary: copy string in, serialize to JSON, copy JSON out, then V8 deserializes it back into a JS object.
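Those boundary steps can be sketched in TypeScript. This is a minimal illustration with made-up names (`wasmParse` and `parseViaWasm` are not the OpenUI API); the stand-in function simulates what the real wasm export does so the per-call costs are visible:

```typescript
// Stand-in for a wasm export: takes UTF-8 bytes, returns a JSON string.
// In the real pipeline the parse happens in compiled Rust; here it is
// simulated so the boundary steps around it can be shown explicitly.
function wasmParse(inputBytes: Uint8Array): string {
  const source = new TextDecoder().decode(inputBytes); // (inside wasm: the actual parse)
  return JSON.stringify({ kind: "document", source }); // serialize the AST to JSON
}

function parseViaWasm(source: string): unknown {
  // 1. Copy the JS string into wasm linear memory as UTF-8 bytes.
  const inputBytes = new TextEncoder().encode(source);
  // 2. Run the (fast) Rust parse; 3. serialize the result to a JSON string.
  const json = wasmParse(inputBytes);
  // 4. Copy the JSON string back out and have V8 deserialize it into a JS object.
  return JSON.parse(json);
}
```

Steps 1, 3, and 4 run on every call and scale with input and output size, no matter how fast step 2 is.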
They tried serde-wasm-bindgen to return a JS object directly, skipping the JSON serialization. It was 30% slower than the JSON round-trip.
JS cannot read a Rust struct's bytes from WASM linear memory as a native JS object — the two runtimes use completely different memory layouts. To construct a JS object from Rust data, serde-wasm-bindgen must recursively materialize Rust data into real JS arrays and objects, which involves many fine-grained conversions across the runtime boundary.
Fewer, larger, and more optimized operations win over many small ones.
The Results: Pure TypeScript Won
They ported the full parser pipeline to TypeScript. Same architecture, same output — no WASM, no boundary, runs entirely in the V8 heap.
| Fixture | TypeScript | WASM | Speedup |
|---|---|---|---|
| simple-table | 9.3µs | 20.5µs | 2.2x |
| contact-form | 13.4µs | 61.4µs | 4.6x |
| dashboard | 19.4µs | 57.9µs | 3.0x |
But they didn't stop there. The real win came from fixing the streaming architecture.
The O(N²) Problem Nobody Talks About
The parser runs on every LLM chunk, and the naïve approach re-parses the entire string from scratch each time. Over an N-chunk stream, that is O(N²) total work.
The fix: statement-level incremental caching. Statements terminated by a depth-0 newline are immutable — the LLM will never modify them. Cache completed statement ASTs and only re-parse the trailing in-progress statement.
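A minimal sketch of that caching strategy, under assumed shapes (the real OpenUI parser is more involved, and `parseStatement` here is a stand-in): statements end at a newline while bracket depth is 0, completed statements are parsed once and cached, and only the trailing in-progress statement is re-parsed per chunk.

```typescript
type Ast = { statement: string };

// Stand-in for the real per-statement parse.
const parseStatement = (s: string): Ast => ({ statement: s });

class IncrementalParser {
  private cache: Ast[] = []; // ASTs of completed (immutable) statements
  private consumed = 0;      // characters already covered by the cache

  parse(full: string): Ast[] {
    let depth = 0;
    let start = this.consumed;
    // Scan only the text not yet covered by cached statements.
    for (let i = this.consumed; i < full.length; i++) {
      const c = full[i];
      if (c === "{" || c === "(" || c === "[") depth++;
      else if (c === "}" || c === ")" || c === "]") depth--;
      else if (c === "\n" && depth === 0) {
        // Statement completed at a depth-0 newline: parse once, cache forever.
        this.cache.push(parseStatement(full.slice(start, i)));
        start = i + 1;
      }
    }
    this.consumed = start;
    // Only the trailing, in-progress statement is re-parsed each chunk.
    const tail = full.slice(start);
    return tail ? [...this.cache, parseStatement(tail)] : [...this.cache];
  }
}
```

Each chunk now costs time proportional to the new text plus one trailing statement, not the whole accumulated string.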
| Fixture | Naïve TS (re-parse) | Incremental TS (cached) | Speedup |
|---|---|---|---|
| contact-form | 316µs | 12µs | 26x |
| dashboard | 840µs | 255µs | 3.3x |
When WASM Actually Makes Sense
Here's the framework the OpenUI team now uses:
✅ Good WASM use cases:
- Compute-bound with minimal interop: image/video processing, cryptography, physics simulations, audio codecs. Large input → scalar output. The boundary is crossed rarely.
- Portable native libraries: shipping C/C++ libraries (SQLite, OpenCV, libpng) to the browser without a full JS rewrite.
❌ Bad WASM use cases:
- Parsing structured text into JS objects: You pay the serialization cost either way. The parsing computation is fast enough that V8's JIT eliminates any Rust advantage.
- Frequently-called functions on small inputs: If the function is called 50 times per stream and the computation takes 5µs, you cannot amortize the boundary cost.
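A back-of-the-envelope check makes the amortization point concrete. The numbers here are illustrative assumptions, not measurements from the article, but the shape of the arithmetic is the argument:

```typescript
// Total cost of a stream: every call pays compute plus any boundary tax.
function totalMicros(calls: number, computeUs: number, boundaryUs: number): number {
  return calls * (computeUs + boundaryUs);
}

// 50 calls per stream, 5µs of real work, an assumed 15µs boundary tax per call:
const wasmTotal = totalMicros(50, 5, 15); // 1000µs total: the boundary dominates
const tsTotal = totalMicros(50, 5, 0);    // 250µs total: no boundary to pay
```

Even if wasm made the 5µs compute free, the stream would still cost 750µs in boundary crossings, three times the all-TypeScript total.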
The Lessons That Matter
1. Profile before you optimize
The cost was never in the computation — it was always in data transfer across the WASM-JS boundary.
2. "Direct object passing" is not cheaper
Constructing a JS object field-by-field from Rust involves more boundary crossings than a single JSON string transfer.
3. Algorithmic improvements beat language optimizations
Going from O(N²) to O(N) in the streaming case had a larger practical impact than switching from WASM to TypeScript.
4. WASM and JS do not share a heap
WASM has flat linear memory that JS can read as raw bytes — but those bytes are Rust's internal layout (pointers, enum discriminants, alignment padding). Conversion is always required and always costs something.
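To see why conversion is unavoidable, here is a sketch using a plain `ArrayBuffer` as a stand-in for wasm linear memory. The struct layout (`{ id: u32, score: f32 }` at offset 0) is assumed for illustration; the point is that JS must know exact offsets, sizes, and endianness, and must then build a fresh JS object field by field:

```typescript
const memory = new ArrayBuffer(8);  // stand-in for wasm linear memory
const view = new DataView(memory);

// Pretend Rust wrote a struct { id: u32, score: f32 } here (wasm is little-endian).
view.setUint32(0, 42, true);
view.setFloat32(4, 0.5, true);

// The bytes are visible to JS, but they are not a JS object. Materializing one
// means reading each field at its known offset and constructing a new object.
const record = {
  id: view.getUint32(0, true),
  score: view.getFloat32(4, true),
};
```

This per-field materialization is exactly the work serde-wasm-bindgen does recursively for nested data, which is why "direct object passing" was not free.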
What This Means for You
If you're building web apps and reaching for Rust/WASM to speed things up, pause and ask: What's actually slow?
For most web developers working with structured data, V8's JIT-compiled JavaScript is incredibly fast. The boundary between languages is expensive. And algorithmic improvements almost always beat language-level optimizations.
Before you reach for WASM, profile. Before you optimize, measure. And remember: the fastest code is often the code you don't have to run.
Want more practical builder insights?
I curate the best stories from Hacker News every day — ones that actually help you build better software. No fluff, no hype. Just actionable engineering lessons.
Check Out z3n.iwnl