Boa and JIT #4487
Replies: 4 comments 42 replies
-
No, we don't support JIT, but it should definitely be possible to implement. It would just be a very big design task to hook Cranelift into our VM, but it's definitely easier than having to completely reimplement a JIT compiler from scratch.
-
### Cranelift JIT Prototype — Results and Next Steps

I've been exploring @jedel1043's suggestion of integrating Cranelift for JIT support. There's now a working prototype on a fork branch — about 5,000 lines behind a `jit` feature flag.

### What it does

The JIT uses a simple tiering model: interpret a function for 10 calls, then compile it to native code via Cranelift. Currently 109 of Boa's 196 opcodes are supported — the remainder are async/generator, iterator protocol, eval, and class private fields, which rarely appear in hot loops. Functions containing unsupported opcodes stay interpreted.

For integer arithmetic (add, subtract, compare, bitwise ops), the JIT inlines NaN-boxing-aware fast paths directly into the native code — checking tag bits, doing native i32 operations with overflow guards, and falling back to runtime helpers for non-integer cases. For property access, it bakes inline cache entries from compile time into the generated code: a shape guard plus a direct storage load that bypasses the full property lookup when the shape matches.

### Results

Comparing the interpreter, the JIT, and QuickJS:
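To make the fast-path idea concrete, here is the shape of the inlined check as a JavaScript sketch. The tag constant and layout are invented for illustration (Boa's actual NaN-boxing encoding differs), but the structure is the one described above: tag check, native i32 operation, overflow guard, slow-path fallback.

```javascript
// Hypothetical NaN-boxing layout, for illustration only: values are
// 64-bit words; i32s live in the low 32 bits under an assumed tag.
const INT32_TAG = 0xFFFC_0000_0000_0000n; // invented tag pattern

function boxInt32(i) {
  return INT32_TAG | BigInt(i >>> 0); // store the i32 in the low 32 bits
}
function isInt32(v) {
  return (v & 0xFFFF_0000_0000_0000n) === INT32_TAG; // tag-bit check
}
function unboxInt32(v) {
  return Number(v & 0xFFFF_FFFFn) | 0; // recover the signed i32
}

// The fast path the JIT inlines for `a + b`: check both tags, add as a
// native i32 with an overflow guard, else call the runtime helper.
function fastAdd(a, b, slowPath) {
  if (isInt32(a) && isInt32(b)) {
    const x = unboxInt32(a), y = unboxInt32(b);
    const sum = x + y;
    if (sum >= -2147483648 && sum <= 2147483647) return boxInt32(sum);
  }
  return slowPath(a, b); // non-integer operand or overflow
}
```

In the generated native code these are a handful of compare/branch/add instructions rather than closure calls, which is where the speedup on integer-heavy loops comes from.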
On a tight integer loop, the JIT is 10x faster than the interpreter and 1.6x faster than QuickJS. Four of the seven V8 benchmarks show clear improvement, with NavierStokes more than doubling.

### What we learned

Profiling reveals that the V8 benchmarks are dominated by property access, GC tracing, and function call overhead — not bytecode dispatch. The JIT eliminates dispatch (~13% of interpreter time) and inlines some property access via ICs, but can't eliminate the cost of the runtime operations themselves. This is why benchmarks like RayTrace and EarleyBoyer don't improve — their bottleneck is object allocation and GC, not instruction dispatch. The benchmarks that DO improve (NavierStokes, Crypto, Richards) have hot functions with tight numeric loops or integer-heavy operations where the inlined NaN-boxing fast paths pay off.

### Architecture notes
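The inline-cache fast path mentioned earlier can be sketched as a JavaScript model. The names here (`shape`, `storage`, slot indices) are invented for the sketch, not Boa's actual data structures; the point is what the JIT bakes into the generated code: a pointer-equality guard against the cached shape, plus a direct slot load on a hit.

```javascript
// Illustrative model of a shape-guarded inline cache. Objects carry a
// `shape` describing their property layout, and property values live
// in a flat `storage` array indexed by slot.
function makeCachedLoad(cachedShape, cachedSlot, slowLookup) {
  // This closure stands in for code the JIT emits inline: one compare
  // against the shape recorded at compile time, then one direct load.
  return function (obj) {
    if (obj.shape === cachedShape) {
      return obj.storage[cachedSlot]; // fast path: guard + load
    }
    return slowLookup(obj); // shape mismatch: full property lookup
  };
}
```

When the guard holds, the full property lookup (prototype walk, attribute checks) is skipped entirely, which is the "inlines some property access" effect seen in the profiles.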
### What would improve V8 scores further
### Regarding copy-and-patch

@zhuzhu81998 raised the copy-and-patch approach. Having built the Cranelift version, I think both have merit for different tiers. Copy-and-patch would compile much faster (nanoseconds vs. microseconds) and could serve as a baseline tier, while Cranelift gives us the optimization path (inlined type checks, ICs, potential for speculative optimization). The current Cranelift compilation is fast enough that the 10-call threshold works well in practice.

### Try it

```bash
git clone https://github.com/rubys/boa.git
cd boa
git checkout cranelift-jit
cargo build --release --bin boa --features jit
echo 'function f(n) { var s=0; for(var i=0;i<n;i++) s=(s+i)|0; return s; } console.log(f(10000000));' | ./target/release/boa
```

The branch passes format checks and clippy.

My interest in this comes from Boax, a gem that embeds Boa in Ruby. Performance matters there, and the JIT makes a measurable difference for numeric workloads.
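For anyone curious how small the tiering policy itself is, here is its shape as a JavaScript sketch. The threshold constant matches the 10-call threshold described above; the function names are illustrative, and the real implementation is Rust code on the branch.

```javascript
const JIT_THRESHOLD = 10; // interpret this many calls, then compile

// `interpret` stands in for the bytecode interpreter; `compile` stands
// in for handing the function's bytecode to Cranelift and getting back
// a native entry point.
function makeTieredFunction(interpret, compile) {
  let calls = 0;
  let native = null; // compiled entry point, once the function is hot
  return function (...args) {
    if (native) return native(...args);
    if (++calls >= JIT_THRESHOLD) {
      native = compile(); // tier up: all later calls go native
      return native(...args);
    }
    return interpret(...args); // still warming up
  };
}
```

A copy-and-patch baseline tier would slot into the same structure as an intermediate step between `interpret` and the Cranelift-compiled `native`.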
-
Can we talk about real examples? For example, `nbits` from Crypto:

```javascript
function nbits(x) {
  var r = 1, t;
  if ((t = x >>> 16) != 0) { x = t; r += 16; }
  if ((t = x >> 8) != 0) { x = t; r += 8; }
  if ((t = x >> 4) != 0) { x = t; r += 4; }
  if ((t = x >> 2) != 0) { x = t; r += 2; }
  if ((t = x >> 1) != 0) { x = t; r += 1; }
  return r;
}
```

Pure numeric, no objects, no property access. The AST approach and the bytecode approach would produce identical Cranelift IR for this. For the case where the AST WOULD help, you'd need something like NavierStokes's `addFields`:

```javascript
function addFields(x, s, dt) {
  for (var i = 0; i < size; i++) x[i] += dt * s[i];
}
```

Here the AST knows `size` is a closure variable and `x`, `s`, `dt` are parameters. But it doesn't know their types or shapes — that requires runtime information.
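To illustrate the runtime-information point: the specialized loop a JIT would want to emit for `addFields` is only valid under guards that the AST alone cannot prove. A sketch, with an illustrative guard set (a real JIT would guard on observed shapes and element types rather than `instanceof`):

```javascript
// Generic path: every x[i] and s[i] goes through full property access.
function addFieldsGeneric(x, s, dt, size) {
  for (var i = 0; i < size; i++) x[i] += dt * s[i];
}

// Specialized path: guards derived from types observed at runtime,
// not from the AST. Inside the guard, each access is a raw f64
// load/store; when an assumption breaks, fall back (deoptimize).
function addFieldsSpecialized(x, s, dt, size) {
  if (!(x instanceof Float64Array) || !(s instanceof Float64Array) ||
      typeof dt !== "number") {
    return addFieldsGeneric(x, s, dt, size); // deopt path
  }
  for (var i = 0; i < size; i++) x[i] += dt * s[i];
}
```

The AST can tell you where `size`, `x`, `s`, and `dt` come from; only execution can tell you that the guard on the fast path will actually hold.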
-
See https://v8.dev/blog/ignition-interpreter

Also relevant, Rust has multiple intermediate representations: https://blog.rust-lang.org/2016/04/19/MIR/
-
Hello! I wonder if Boa supports JIT? If not, is it possible to integrate Cranelift for JIT support?