March 21, 2026

We Ditched Rust WASM for TypeScript — And It Was 4x Faster

Here's a story that will make you reconsider everything you think you know about Rust, WASM, and JavaScript performance.

The OpenUI team built their parser in Rust and compiled it to WASM. The logic seemed bulletproof: Rust is fast, WASM gives you near-native speed in the browser, and their parser is a "reasonably complex multi-stage pipeline." Why wouldn't you want that in Rust?

Turns out, they were optimizing the wrong thing entirely.

The Hidden Cost of the WASM Boundary

Every call to their WASM parser paid a mandatory overhead regardless of how fast the Rust code ran:

```
JS world → WASM world
├─ Copy string: JS heap → WASM linear memory (allocation + memcpy)
│
│    Rust parses ✓ fast
│    serde_json::to_string()   ← serialize result
│
├─ Copy JSON string: WASM → JS heap (allocation + memcpy)
│    JSON.parse(jsonString)    ← deserialize result
│
└─ return ParseResult
```

The Rust parsing itself was never the slow part. The overhead was entirely in the boundary: copy string in, serialize to JSON, copy JSON out, then V8 deserializes it back into a JS object.
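The boundary steps above can be sketched in TypeScript. This is a simulation, not the OpenUI team's code: the "linear memory" is just an ArrayBuffer, and the parse step is a stand-in, so only the copy/serialize/copy/deserialize overhead is shown.

```typescript
// Simulation of the four boundary steps. No real WASM module here;
// an ArrayBuffer stands in for WASM linear memory, and the "parse"
// is a placeholder — the point is the mandatory data movement.
const linearMemory = new ArrayBuffer(64 * 1024);

function parseViaBoundary(source: string): unknown {
  // 1. Copy string: JS heap -> "linear memory" (allocation + memcpy)
  const inBytes = new TextEncoder().encode(source);
  new Uint8Array(linearMemory, 0, inBytes.length).set(inBytes);

  // 2. Stand-in for Rust parsing + serde_json::to_string():
  //    the module hands back its result as a JSON string in raw bytes.
  const decodedInput = new TextDecoder().decode(
    new Uint8Array(linearMemory, 0, inBytes.length)
  );
  const jsonBytes = new TextEncoder().encode(
    JSON.stringify({ kind: "Root", source: decodedInput })
  );

  // 3. Copy JSON string: "linear memory" -> JS heap (allocation + memcpy)
  new Uint8Array(linearMemory, 0, jsonBytes.length).set(jsonBytes);
  const json = new TextDecoder().decode(
    new Uint8Array(linearMemory, 0, jsonBytes.length)
  );

  // 4. V8 deserializes the result back into a JS object
  return JSON.parse(json);
}
```

Every step except the parse itself is overhead, and all of it scales with payload size.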

They tried serde-wasm-bindgen to return a JS object directly, skipping JSON serialization. It was 30% slower.

JS cannot read a Rust struct's bytes from WASM linear memory as a native JS object — the two runtimes use completely different memory layouts. To construct a JS object from Rust data, serde-wasm-bindgen must recursively materialize Rust data into real JS arrays and objects, which involves many fine-grained conversions across the runtime boundary.
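Conceptually, the recursive materialization looks like the sketch below. This is NOT serde-wasm-bindgen's actual code — the `RustValue` type and the crossing counter are illustrative — but it shows why the cost scales with the number of fields and elements rather than with payload bytes.

```typescript
// Conceptual sketch of recursive materialization (not the crate's real
// code): producing a JS value costs one operation per field/element.
type RustValue =
  | { t: "str"; v: string }
  | { t: "num"; v: number }
  | { t: "list"; v: RustValue[] }
  | { t: "struct"; v: Record<string, RustValue> };

let crossings = 0; // in real WASM, each of these touches the boundary

function materialize(rv: RustValue): unknown {
  crossings++;
  switch (rv.t) {
    case "str":
    case "num":
      return rv.v;
    case "list":
      return rv.v.map(materialize); // one conversion per element
    case "struct": {
      const obj: Record<string, unknown> = {};
      for (const [k, child] of Object.entries(rv.v)) {
        obj[k] = materialize(child); // one conversion per field
      }
      return obj;
    }
  }
}
```

A deep AST means thousands of these fine-grained conversions, versus a single `JSON.parse` call for the whole tree.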

Fewer, larger, and more optimized operations win over many small ones.

The Results: Pure TypeScript Won

They ported the full parser pipeline to TypeScript. Same architecture, same output — no WASM, no boundary, runs entirely in the V8 heap.

| Fixture | TypeScript | WASM | Speedup |
|---|---|---|---|
| simple-table | 9.3µs | 20.5µs | 2.2x |
| contact-form | 13.4µs | 61.4µs | 4.6x |
| dashboard | 19.4µs | 57.9µs | 3.0x |

But they didn't stop there. The real win came from fixing the streaming architecture.

The O(N²) Problem Nobody Talks About

The parser runs on every LLM chunk. The naïve approach re-parses the entire string from scratch each time:

```
Chunk 1: parse("root = Root([t")               → 14 chars
Chunk 2: parse("root = Root([tbl])\ntbl = T")  → 27 chars
Chunk 3: parse(full_accumulated_string)        → ...
...
```

For 1000 chars delivered in 20-char chunks: 50 parse calls, ~25,500 characters parsed cumulatively. O(N²).
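The quadratic blow-up is easy to verify with a few lines of arithmetic:

```typescript
// Total characters re-parsed when every chunk triggers a full re-parse
// of the accumulated string.
function cumulativeCharsParsed(totalChars: number, chunkSize: number): number {
  let total = 0;
  for (let parsed = chunkSize; parsed <= totalChars; parsed += chunkSize) {
    total += parsed; // each call re-parses everything received so far
  }
  return total;
}

// 50 parse calls: 20 + 40 + ... + 1000 = 25,500 characters. O(N²).
console.log(cumulativeCharsParsed(1000, 20)); // → 25500
```

Doubling the response length quadruples the total work, which is exactly what hurts during streaming.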

The fix: statement-level incremental caching. Statements terminated by a depth-0 newline are immutable — the LLM will never modify them. Cache completed statement ASTs and only re-parse the trailing in-progress statement.
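A minimal sketch of that caching scheme (hypothetical — `parseStatement` stands in for the real single-statement parser, and the depth tracking only handles brackets and parentheses):

```typescript
// Statement-level incremental caching sketch. `parseStatement` is a
// placeholder for real parsing work.
type Ast = { statement: string };

let parseCalls = 0;
const cache = new Map<string, Ast>();

function parseStatement(src: string): Ast {
  parseCalls++;
  return { statement: src };
}

// Split into statements terminated by a newline at bracket depth 0.
function splitStatements(src: string): { complete: string[]; trailing: string } {
  const complete: string[] = [];
  let depth = 0;
  let start = 0;
  for (let i = 0; i < src.length; i++) {
    const ch = src[i];
    if (ch === "(" || ch === "[") depth++;
    else if (ch === ")" || ch === "]") depth--;
    else if (ch === "\n" && depth === 0) {
      complete.push(src.slice(start, i));
      start = i + 1;
    }
  }
  return { complete, trailing: src.slice(start) };
}

function parseIncremental(accumulated: string): Ast[] {
  const { complete, trailing } = splitStatements(accumulated);
  const asts = complete.map((stmt) => {
    let ast = cache.get(stmt);
    if (!ast) {
      ast = parseStatement(stmt); // completed statements parse once, ever
      cache.set(stmt, ast);
    }
    return ast;
  });
  if (trailing.length > 0) {
    asts.push(parseStatement(trailing)); // only the in-progress tail re-parses
  }
  return asts;
}
```

On each new chunk, the cost is proportional to the trailing statement, not the whole accumulated string — O(N) total instead of O(N²).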

| Fixture | Naïve TS (re-parse) | Incremental TS (cached) | Speedup |
|---|---|---|---|
| contact-form | 316µs | 12µs | 26.3x |
| dashboard | 840µs | 255µs | 3.3x |

When WASM Actually Makes Sense

Here's the framework the OpenUI team now uses:

✅ Good WASM use cases:

- Heavy, long-running computation where the work inside WASM dwarfs the boundary cost
- Data that can stay in linear memory across calls instead of being copied in and out
- Few, large calls rather than many small ones

❌ Bad WASM use cases:

- Chatty APIs that cross the JS↔WASM boundary on every small operation
- Functions whose output must be materialized as rich JS objects on every call — like a parser returning ASTs
- Workloads that V8's JIT already handles quickly

The Lessons That Matter

1. Profile before you optimize

The cost was never in the computation — it was always in data transfer across the WASM-JS boundary.

2. "Direct object passing" is not cheaper

Constructing a JS object field-by-field from Rust involves more boundary crossings than a single JSON string transfer.

3. Algorithmic improvements beat language optimizations

Going from O(N²) to O(N) in the streaming case had a larger practical impact than switching from WASM to TypeScript.

4. WASM and JS do not share a heap

WASM has flat linear memory that JS can read as raw bytes — but those bytes are Rust's internal layout (pointers, enum discriminants, alignment padding). Conversion is always required and always costs something.
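Here's an illustration of that point, simulated with an ArrayBuffer (the layout is illustrative — real Rust struct layout is compiler-defined): JS can read the raw bytes of a Rust-style `(ptr, len)` string header, but they mean nothing until something explicitly decodes them.

```typescript
// Simulated WASM linear memory: a flat buffer JS can read as raw bytes.
const memory = new ArrayBuffer(64);
const view = new DataView(memory);
const bytes = new Uint8Array(memory);

// Lay out a Rust-style string: data at offset 16, and a (ptr, len)
// header at offset 0. Illustrative layout only.
const data = new TextEncoder().encode("Root");
bytes.set(data, 16);
view.setUint32(0, 16, true);          // ptr (WASM is little-endian)
view.setUint32(4, data.length, true); // len

// The raw bytes are just numbers — no JS string exists yet.
// Conversion always means explicitly decoding the layout:
const ptr = view.getUint32(0, true);
const len = view.getUint32(4, true);
const decoded = new TextDecoder().decode(new Uint8Array(memory, ptr, len));
```

That decode step, multiplied across every string, array, and struct field, is the conversion cost that never goes away.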

What This Means for You

If you're building web apps and reaching for Rust/WASM to speed things up, pause and ask: What's actually slow?

For most web developers working with structured data, V8's JIT-compiled JavaScript is incredibly fast. The boundary between languages is expensive. And algorithmic improvements almost always beat language-level optimizations.

Before you reach for WASM, profile. Before you optimize, measure. And remember: the fastest code is often the code you don't have to run.


Want more practical builder insights?

I curate the best stories from Hacker News every day — ones that actually help you build better software. No fluff, no hype. Just actionable engineering lessons.

Check Out z3n.iwnl