Add v8::Locker, v8::Unlocker, and UnenteredIsolate bindings #1896
max-lt wants to merge 5 commits into denoland:main
Conversation
To make this actually safe, […] Also, v8::Locker internals don't belong in […] Also, these AI-generated comments and PR descriptions are really fucking annoying. You need to take responsibility for what you are submitting, rather than just assuming Claude will be correct.
Will rework, thanks for the review.
Applied requested changes. Let me know if I missed anything.
I would also like to help with this MR. Do you need any help, or can I contribute to speed up merging into main?
Thanks @RainyPixel for offering to help! I'd really appreciate it. Honestly, what I need most right now is a thorough code review. I believe the current implementation addresses @devsnek's safety concerns, but fresh eyes on it would really help move this forward.

In the meantime, we maintain a working fork at https://github.com/openworkers/rusty-v8 (currently at v145 with the ptrcomp & sandbox feature flags) that we use in production for https://github.com/openworkers/openworkers-runner. It's also published on crates.io as https://crates.io/crates/openworkers-v8. The isolate pooling works well for our use case. Feel free to check it out if you want to see the implementation in action.
- Add IsolateExitGuard to ensure v8::Isolate::Exit is called if a panic occurs during Locker::new() after Enter has been called
- Add comprehensive documentation for Locker and UnenteredIsolate, including thread-safety guarantees and usage examples
- Add debug_assert to verify no Locker is held when dropping UnenteredIsolate
- Add UnenteredIsolate::as_raw() method for low-level access
- Expand test coverage:
  - locker_state_preserved_across_locks
  - locker_drop_releases_lock
  - unentered_isolate_as_raw
  - locker_send_isolate_between_threads (verifies the Send trait works)

This improves safety guarantees for multi-threaded isolate usage.
Add compile_fail tests to ensure Rust's type system prevents misuse:
- locker_double_borrow: prevents creating two Lockers simultaneously
- locker_not_send: Locker cannot be sent to another thread
- locker_scope_outlives: HandleScope cannot outlive its Locker

Add comprehensive unsafe documentation to raw::Locker:
- Document memory layout and size requirements
- Document all safety invariants for initialization
- Document thread-affinity requirements

Improve runtime tests:
- Fix lifetime issues in locker_send_isolate_between_threads
- Simplify the locker_state_preserved_across_locks test

All 9 runtime tests and 15 compile_fail tests pass.
@RainyPixel's follow-up commit adds panic safety to the Locker::new() initialization path, plus some compile-time safety tests. Ready for review when you have time.
V8's Isolate::Enter()/Exit() manage a per-isolate entry_stack_ that stores the "previous isolate" at each nesting level. When multiple Lockers coexist on the same thread (cooperative scheduling via spawn_local), non-LIFO drop ordering causes Exit() to restore a stale pointer, leading to a NULL dereference on the next Enter().

Fix: Locker now acquires only the V8 mutex (pure mutex). It temporarily enters the isolate during construction so the C++ Locker sees IsCurrent() == true and sets top_level_ = false, then immediately exits. The new IsolateScope type manages Enter/Exit at fine-grained boundaries around each synchronous V8 work block, ensuring proper nesting.

API change: callers must now call locker.enter() to get an IsolateScope before performing V8 operations. Existing tests have been updated accordingly.
Commit de789a7 makes Locker a pure mutex and adds IsolateScope for Enter/Exit.

Previous changes introduced Locker + unentered isolates to avoid OwnedIsolate's LIFO drop constraint. But the Locker itself still called Enter()/Exit() internally, which reintroduced the same LIFO constraint at the Locker level: when multiple Lockers for different isolates coexist on the same thread (async pool), dropping them out of order still corrupts the entry stack. This last fix completes the decoupling.

Tests: 10 safety tests (non-LIFO drop, concurrent lockers, stress) + 6 IsolateScope tests (interleaved isolates, round-robin).
The Enter+Exit calls before Locker::Initialize() were a no-op: the C++ Locker only checks IsLockedByCurrentThread() and calls RestoreThread(), neither of which depends on the entry stack, and top_level_ is always true in our usage (no Locker/Unlocker nesting). Also fix incorrect doc comments claiming the isolate must be in the "entered" state for raw::Locker::init().
de789a7 called Enter()/Exit() during Locker construction, attempting to influence top_level_ in the C++ Locker. This was unnecessary: Locker::Initialize() determines top_level_ via RestoreThread() (archived thread state from Unlocker pairs), not via the entry stack. Since we never use Unlocker, top_level_ is always true, which correctly calls FreeThreadResources() on drop. 5b42bdd removes the dead Enter/Exit calls and the IsolateExitGuard. No behavioral change.
Adds bindings for multi-threaded isolate pooling, enabling architectures like Cloudflare Workers.
Changes

- UnenteredIsolate: new isolate type without auto-enter; removes the LIFO drop constraint
- v8::Locker: acquires exclusive thread access to an isolate
- v8::Unlocker: temporarily releases a lock
- unsafe impl Send for UnenteredIsolate (safe with Locker)

Key Features

- Arc<Mutex<UnenteredIsolate>>

Tests
Note: Each thread must call unsafe { isolate.enter() } before use to set up V8's thread-local state.

No breaking changes: all APIs are new.