This module is about correctness under concurrency pressure, not toy async syntax.
Scope:
- What JavaScript's "single-threaded" claim means and where it misleads
- Worker Threads and message passing
- SharedArrayBuffer + Atomics for shared-memory coordination
- Deterministic race patterns
- Thread-safe abstraction design
A single JS event loop (one thread) executes one callback at a time in that context. A Node.js process can still run multiple threads:
- libuv thread pool (for some I/O and native tasks)
- worker threads (node:worker_threads)
- OS/network work outside the JS thread
- Concurrency: multiple tasks in progress, interleaved.
- Parallelism: tasks executing simultaneously on different cores/threads.
Event loop gives concurrency but usually not CPU parallelism for JS code on one thread. Worker threads provide actual JS parallelism.
Races can happen in two major forms:
- Logical races on one thread (async interleavings, stale reads).
- Data races with shared memory (SharedArrayBuffer) if non-atomic operations are used.
Workers do not share normal JS objects by default.
They communicate through postMessage and structured clone.
'use strict';
const { Worker } = require('node:worker_threads');
const worker = new Worker(`
const { parentPort } = require('node:worker_threads');
parentPort.on('message', (n) => parentPort.postMessage(n * 2));
`, { eval: true });
worker.once('message', (result) => {
console.log(result); // 42
});
worker.postMessage(21);
- postMessage clones data (structured clone).
- Transferables move ownership (e.g. ArrayBuffer) instead of cloning.
- SharedArrayBuffer is shared, not cloned.
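The transfer semantics can be observed without spawning a worker. A minimal sketch (assuming a Node version where MessageChannel is global, i.e. Node 15+): passing an ArrayBuffer in the transfer list detaches it from the sender instead of copying it.

```javascript
'use strict';
// Sketch: transferring an ArrayBuffer moves ownership and detaches
// the sender's copy. A MessageChannel makes this visible in one thread.
const { port1, port2 } = new MessageChannel();

const buf = new ArrayBuffer(8);
console.log(buf.byteLength); // 8

// Put buf in the transfer list: ownership moves, no clone is made.
port1.postMessage(buf, [buf]);

// The sender now holds a detached buffer.
console.log(buf.byteLength); // 0

port1.close();
port2.close();
```

The same transfer-list argument works with worker.postMessage; a SharedArrayBuffer, by contrast, stays usable on both sides without transfer.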
Worker creation is not free:
- startup latency
- memory overhead
- serialization cost for messages
Use workers when:
- CPU-bound tasks are heavy enough
- partitioning work is clear
Avoid workers when:
- tasks are tiny
- overhead dominates
- shared-state complexity risks correctness
SharedArrayBuffer lets multiple threads see the same raw bytes.
Use typed arrays for views:
'use strict';
const sab = new SharedArrayBuffer(Int32Array.BYTES_PER_ELEMENT * 2);
const view = new Int32Array(sab);

Non-atomic read/modify/write is race-prone:

// Racy pattern (do not use across threads):
view[0] = view[0] + 1;

Atomic increment:

Atomics.add(view, 0, 1);

Core Atomics operations:
- Atomics.load / Atomics.store
- Atomics.add / sub / and / or / xor
- Atomics.compareExchange
- Atomics.wait / Atomics.notify (Int32Array only)
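To show compareExchange in action, here is a minimal sketch of a CAS (compare-and-swap) increment loop; the function name casIncrement is illustrative, not a standard API.

```javascript
'use strict';
// Sketch: incrementing a shared slot with Atomics.compareExchange.
// Safe even if another thread modifies the slot between read and write.
const sab = new SharedArrayBuffer(Int32Array.BYTES_PER_ELEMENT);
const view = new Int32Array(sab);

function casIncrement(arr, index) {
  while (true) {
    const current = Atomics.load(arr, index);
    // Only write if nobody changed the slot since we read it;
    // compareExchange returns the value it found there.
    if (Atomics.compareExchange(arr, index, current, current + 1) === current) {
      return current + 1;
    }
    // Another thread won the race; loop and retry with a fresh read.
  }
}

Atomics.add(view, 0, 1);            // atomic increment: 0 -> 1
casIncrement(view, 0);              // CAS increment:    1 -> 2
console.log(Atomics.load(view, 0)); // 2
```

Atomics.add is the simpler choice for plain counters; the CAS loop generalizes to any read-modify-write where the new value depends on the old one.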
Atomics provide synchronization guarantees so writes become visible in coordinated order. Without Atomics, concurrent visibility/order assumptions are unsafe.
Spinlock:
- busy loop with compareExchange
- burns CPU while waiting

Wait/notify:
- block with Atomics.wait
- wake with Atomics.notify
- avoids hot spinning under contention
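The two patterns above can be sketched together. This is a single-threaded illustration (the acquire/release helpers are made up for this sketch); under real contention, the commented Atomics.wait variant is what avoids hot spinning.

```javascript
'use strict';
// Sketch of a spinlock over a shared Int32 slot: 0 = unlocked, 1 = locked.
const sab = new SharedArrayBuffer(Int32Array.BYTES_PER_ELEMENT);
const lock = new Int32Array(sab);

function acquire(lock) {
  // Spin until we atomically flip 0 -> 1.
  while (Atomics.compareExchange(lock, 0, 0, 1) !== 0) {
    // Wait/notify variant: Atomics.wait(lock, 0, 1) would sleep here
    // until notified, instead of burning CPU (blocks the calling agent).
  }
}

function release(lock) {
  Atomics.store(lock, 0, 0);
  Atomics.notify(lock, 0, 1); // wake at most one waiter, if any
}

acquire(lock);
console.log(Atomics.load(lock, 0)); // 1 (held)
release(lock);
console.log(Atomics.load(lock, 0)); // 0 (free)
```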
Logical race on one thread (lost update):

'use strict';
let count = 0;
async function inc() {
const snapshot = count;
await Promise.resolve(); // interleaving point
count = snapshot + 1;
}
await Promise.all([inc(), inc()]); // top-level await: run as an ES module
// count can be 1, not 2

Check-then-act race (cache stampede):

if (!cache.has(key)) {
  const value = await load(key);
  cache.set(key, value);
}

Two callers can both miss and both load.
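One common fix for the check-then-act race is single-flight caching: store the in-flight promise on the first miss, so concurrent callers share one load. A minimal sketch, with a hypothetical load function standing in for the real loader:

```javascript
'use strict';
// Sketch: single-flight caching. The *promise* goes into the cache
// synchronously, before any await, so a second caller sees a hit.
const cache = new Map();
let loads = 0;

async function load(key) { // hypothetical slow loader
  loads += 1;
  return `value:${key}`;
}

function getOrLoad(key) {
  if (!cache.has(key)) {
    cache.set(key, load(key)); // cache the promise, not the value
  }
  return cache.get(key);
}

const done = (async () => {
  const [a, b] = await Promise.all([getOrLoad('k'), getOrLoad('k')]);
  console.log(a === b, loads); // true 1
})();
```

Because the cache check and the cache.set happen in the same synchronous step, there is no interleaving point between them for a second caller to exploit.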
Thread T1 reads A. Thread T2 changes A -> B -> A. T1 sees A again and assumes no change. Value equality alone misses intermediate mutation; version tagging fixes this.
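Version tagging can be sketched with a second shared slot holding a counter. This is a single-threaded illustration of the idea (the read/tryWrite helpers are made up); a production variant would pack value and version into one atomic word so they cannot be observed half-updated.

```javascript
'use strict';
// Sketch: ABA detection with a version tag. Slot 0 holds the value,
// slot 1 a version counter bumped on every successful write.
const sab = new SharedArrayBuffer(Int32Array.BYTES_PER_ELEMENT * 2);
const cell = new Int32Array(sab); // [value, version]

function read(cell) {
  return { value: Atomics.load(cell, 0), version: Atomics.load(cell, 1) };
}

function tryWrite(cell, expected, next) {
  // CAS on the version, not the value: an A -> B -> A sequence still
  // bumps the version twice, so a stale writer is rejected.
  if (Atomics.compareExchange(cell, 1, expected.version, expected.version + 1)
      === expected.version) {
    Atomics.store(cell, 0, next);
    return true;
  }
  return false;
}

const snap = read(cell);              // { value: 0, version: 0 }
tryWrite(cell, snap, 7);              // succeeds, version -> 1
console.log(tryWrite(cell, snap, 9)); // false: stale snapshot detected
```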
Most robust default:
- workers send immutable messages
- single owner mutates state
- critical sections with mutex/lock
- clear ownership boundaries
- ensure unlock in finally-like paths
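For logical races on one event loop, a mutex can be built from a promise chain. A minimal sketch (the Mutex class and runExclusive are illustrative, not a built-in API); the internal catch plays the "unlock in finally" role, releasing the next waiter even if the critical section throws:

```javascript
'use strict';
// Sketch: a promise-chain mutex for a single event loop.
class Mutex {
  constructor() {
    this._tail = Promise.resolve();
  }
  runExclusive(fn) {
    const run = this._tail.then(() => fn());
    // Keep the chain alive even if fn throws, so the lock is released.
    this._tail = run.catch(() => {});
    return run;
  }
}

const mutex = new Mutex();
let count = 0;

async function inc() {
  await mutex.runExclusive(async () => {
    const snapshot = count;
    await Promise.resolve(); // interleaving point, now harmless
    count = snapshot + 1;
  });
}

const done = (async () => {
  await Promise.all([inc(), inc()]);
  console.log(count); // 2: the lost update cannot happen
})();
```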
Lock-free alternatives:
- CAS loops (compareExchange)
- version/tag fields to avoid ABA
- harder to reason about; require strict invariants
In distributed/concurrent systems, operations that can be retried or reordered safely reduce race impact.
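A small sketch of idempotency via operation ids (the names applyDeposit and applied are made up for illustration): keying each operation lets a retried or duplicated message apply at most once.

```javascript
'use strict';
// Sketch: an idempotent operation. Retries (a common response to
// races and timeouts) cannot double-apply it.
const applied = new Set();
let balance = 0;

function applyDeposit(opId, amount) {
  if (applied.has(opId)) return balance; // already applied: no-op
  applied.add(opId);
  balance += amount;
  return balance;
}

applyDeposit('op-1', 50);
applyDeposit('op-1', 50); // retried message; no double credit
console.log(balance);     // 50
```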
If producers outrun consumers, buffers grow and memory pressure rises.
Bounded queues + explicit refusal (push -> false) + a retry policy are mandatory.
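The push -> false contract can be sketched as a small class (BoundedQueue is an illustrative name, not a library type): the queue refuses work when full, and the producer must decide whether to retry, drop, or slow down.

```javascript
'use strict';
// Sketch: a bounded queue that applies backpressure by refusing pushes.
class BoundedQueue {
  constructor(capacity) {
    this.capacity = capacity;
    this.items = [];
  }
  push(item) {
    if (this.items.length >= this.capacity) return false; // refuse: full
    this.items.push(item);
    return true;
  }
  shift() {
    return this.items.shift();
  }
}

const q = new BoundedQueue(2);
console.log(q.push('a'), q.push('b'), q.push('c')); // true true false
q.shift();                // consumer drains one slot
console.log(q.push('c')); // true: retry succeeds after the drain
```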
How to explain JS concurrency clearly:
- One event loop is single-threaded.
- Process/runtime still uses parallel subsystems.
- Worker threads run JS in parallel with explicit communication.
- Shared memory requires Atomics for correctness.
What Atomics guarantee:
- atomicity for specific operations
- synchronization/visibility guarantees across threads
What Atomics do not guarantee:
- automatic fairness
- deadlock freedom
- correctness of your protocol
Why "JS is single-threaded so no race conditions" is wrong:
- async interleavings cause logical races on one thread
- worker + shared memory causes true data races without Atomics
Common misconceptions:
- "Await makes operations sequential globally."
- "Workers can mutate outer-scope variables directly."
- "SharedArrayBuffer reads/writes are safe without Atomics if values are small."
- "If code passes tests once, race is solved."
- "Spinlocks are fine in JS because loops are cheap."
For interview-style exercises, avoid timing randomness. Use:
- explicit barriers (promises, Atomics gates)
- fixed operation counts
- deterministic orchestration of interleavings
- invariant checks (counts, no overlap, no missing items)
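A sketch of deterministic orchestration: instead of timers, an explicit promise gate forces both tasks to read before either writes, so the lost update from earlier reproduces on every run and the invariant check is meaningful.

```javascript
'use strict';
// Sketch: forcing a specific interleaving with an explicit gate,
// so the race is deterministic rather than timing-dependent.
let count = 0;
let release;
const gate = new Promise((resolve) => { release = resolve; });

async function inc() {
  const snapshot = count; // both tasks read before either writes
  await gate;             // deterministic interleaving point
  count = snapshot + 1;   // both write a stale snapshot + 1
}

const done = (async () => {
  const p = Promise.all([inc(), inc()]);
  release();              // open the gate: both perform stale writes
  await p;
  console.log(count);     // 1: lost update, reproduced on demand
})();
```

The same gate technique verifies a fix: with proper synchronization in place, the forced interleaving must yield 2 instead of 1.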
- Browsers vs Node: SharedArrayBuffer is available in browsers only under cross-origin isolation (COOP/COEP). In Node.js it is available without that requirement.
- Atomics.wait / Atomics.notify: Atomics.wait only works on Int32Array (and BigInt64Array in newer runtimes) and blocks the calling agent (thread). In Node, it is usable inside Worker Threads.
- Determinism: Real data races are nondeterministic. Many exercises here use deterministic interleavings to prove correctness of synchronization logic without relying on timing.