High-performance message queue for Node.js — powered by Valkey/Redis Streams and a Rust-native NAPI client.
If you find this useful, give it a ⭐ on GitHub — it helps the project reach more developers.
```sh
npm install glide-mq
```

- 1 RTT per job — `completeAndFetchNext` finishes the current job and fetches the next one in a single round-trip
- Rust core, not ioredis — built on Valkey GLIDE's native NAPI bindings for lower latency and less GC pressure
- 1 function library, not 53 scripts — all queue logic runs as a single Valkey Server Function (no EVAL overhead)
- Cluster-native — hash-tagged keys work out of the box; no manual `{braces}` needed
- Cloud-ready — AZ-affinity routing and IAM auth built in
- Queues & Workers — producer/consumer with configurable concurrency
- Delayed & priority jobs — schedule jobs for later or run high-priority work first (example below)
- Workflows — `FlowProducer` parent-child trees, `chain`, `group`, `chord` pipelines with result aggregation (example below)
- Schedulers — cron and interval repeatable jobs, persisted across restarts
- Per-key ordering — sequential processing per key while staying parallel across keys
- Rate limiting — token-bucket (cost-based), per-group, and global rate limiting
- Retries & DLQ — exponential/fixed/custom backoff with dead-letter queues (example below)
- Deduplication — simple, throttle, and debounce modes with configurable TTL
- Job revocation — cooperative cancellation via AbortSignal for active jobs
- Stalled job recovery — auto-reclaim jobs from crashed workers via XAUTOCLAIM
- Global concurrency — cross-worker active job cap for the entire queue
- Pause & resume — pause/resume at queue level or per-worker, with force option
- Real-time events — `QueueEvents` stream for added, completed, failed, stalled, revoked, and more (example below)
- Job search — query by state, name, and data filters
- Progress tracking — real-time numeric or object progress updates
- Batch API — `addBulk` for high-throughput ingestion (12.7× faster than serial; example below)
- Compression — transparent gzip (up to 98% size reduction)
- Graceful shutdown — one-liner `gracefulShutdown()` for SIGTERM/SIGINT handling
- Connection sharing — reuse a single client across components to reduce TCP connections
- Observability — OpenTelemetry tracing, per-job logs, `@glidemq/dashboard` web UI
- In-memory testing — `TestQueue` & `TestWorker` with zero dependencies via `glide-mq/testing` (example below)
```ts
import { Queue, Worker } from 'glide-mq';

const connection = { addresses: [{ host: 'localhost', port: 6379 }] };

// Producer
const queue = new Queue('tasks', { connection });
await queue.add('send-email', { to: 'user@example.com', subject: 'Hello' });

// Consumer
const worker = new Worker('tasks', async (job) => {
  console.log(`Processing ${job.name}:`, job.data);
  return { sent: true };
}, { connection, concurrency: 10 });

worker.on('completed', (job) => console.log(`Job ${job.id} done`));
worker.on('failed', (job, err) => console.error(`Job ${job.id} failed:`, err.message));
```

Requires Node.js 20+ and a running Valkey 7.0+ or Redis 7.0+ instance.
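A few of the feature bullets above, sketched as code. For delayed and priority jobs, a minimal sketch assuming BullMQ-style per-job options (`delay` in milliseconds, lower `priority` values assumed to run first); see the Usage guide for the exact names:

```ts
// Sketch only: the `delay` and `priority` option names are assumptions
// (BullMQ-style); consult the Usage guide for the exact API.
await queue.add('daily-report', { date: '2025-01-01' }, { delay: 60_000 }); // run in ~60 s
await queue.add('page-oncall', { severity: 'high' }, { priority: 1 });      // assumed: lower = sooner
```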
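Retries with backoff follow the same per-job options pattern. Again a sketch with assumed BullMQ-style names (`attempts`, `backoff`); the dead-letter queue wiring itself is covered in the Advanced guide:

```ts
// Assumed option shape; names may differ in glide-mq.
await queue.add('charge-card', { orderId: 42 }, {
  attempts: 5,                                    // give up after 5 tries
  backoff: { type: 'exponential', delay: 1_000 }, // 1s, 2s, 4s, ...
});
```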
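For workflows, `FlowProducer` is the entry point named above; the node shape in this sketch (`{ name, queueName, data, children }`) is an assumption borrowed from the BullMQ-style API this project mirrors:

```ts
import { FlowProducer } from 'glide-mq';

// Sketch: the node shape ({ name, queueName, data, children }) is assumed.
// The parent job runs after all of its children complete.
const flows = new FlowProducer({ connection });
await flows.add({
  name: 'assemble-video',
  queueName: 'render',
  children: [
    { name: 'encode-audio', queueName: 'encode', data: { track: 1 } },
    { name: 'encode-frames', queueName: 'encode', data: { fps: 30 } },
  ],
});
```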
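For real-time events, `QueueEvents` consumes the event stream without running a worker. A sketch assuming the constructor mirrors `Queue` and that event payloads include a `jobId`:

```ts
import { QueueEvents } from 'glide-mq';

// Assumed constructor and payload shapes, mirroring the Queue/Worker API above.
const events = new QueueEvents('tasks', { connection });
events.on('completed', ({ jobId }) => console.log(`completed: ${jobId}`));
events.on('failed', ({ jobId }) => console.warn(`failed: ${jobId}`));
```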
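And for tests, `glide-mq/testing` ships `TestQueue` and `TestWorker`. A sketch assuming they mirror the `Queue`/`Worker` constructors minus the connection; the Testing guide documents the real setup:

```ts
import { TestQueue, TestWorker } from 'glide-mq/testing';

// Assumed to mirror Queue/Worker without a Valkey connection.
const queue = new TestQueue('tasks');
const worker = new TestWorker('tasks', async (job) => ({ ok: true, name: job.name }));

await queue.add('send-email', { to: 'user@example.com' });
```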
| Concurrency | Throughput |
|---|---|
| c=1 | 4,376 jobs/s |
| c=5 | 14,925 jobs/s |
| c=10 | 15,504 jobs/s |
| c=50 | 48,077 jobs/s |
`addBulk` batch API: 1,000 jobs in 18 ms (12.7× faster than serial).
Gzip compression: 98% payload reduction on 15 KB payloads.
Benchmarked on Valkey 8.0, single node, no-op processor. Run `npm run bench` to reproduce.
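The `addBulk` number above comes from enqueueing one batch instead of 1,000 serial `add` calls. A sketch assuming `addBulk` accepts an array of `{ name, data }` entries, as in BullMQ:

```ts
// Assumed entry shape; see the Usage guide for the exact type.
await queue.addBulk(
  Array.from({ length: 1_000 }, (_, i) => ({ name: 'ingest', data: { row: i } }))
);
```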
| Guide | What you'll learn |
|---|---|
| Usage | Queue & Worker basics, graceful shutdown, cluster mode |
| Advanced | Schedulers, rate limiting, dedup, compression, retries & DLQ |
| Workflows | FlowProducer, chain, group, chord pipelines |
| Observability | OpenTelemetry, job logs, @glidemq/dashboard |
| Testing | In-memory TestQueue & TestWorker — no Valkey needed |
| Migration | Coming from BullMQ? API mapping & workarounds |
- ⭐ Star on GitHub — helps others find the project
- 🐛 Open an issue — bug reports & feature requests welcome
- 💬 Discussions — questions, ideas, show & tell
Apache-2.0