Redis Serialization Protocol version 3 (RESP3) is the newer protocol used for communication between Redis servers and clients. It offers more data types and richer semantics compared to RESP2.
To use RESP3, set the `RESP` option to `3` when creating a client:
```javascript
const { createClient } = require('redis');

const client = createClient({
  RESP: 3
});
```
Some RESP types can be mapped to more than one JavaScript type. For example, a Blob String can be mapped to either `string` or `Buffer`. You can override the default type mapping using the `withTypeMapping` function:
```javascript
await client.get('key'); // `string | null`

const proxyClient = client.withTypeMapping({
  [TYPES.BLOB_STRING]: Buffer
});

await proxyClient.get('key'); // `Buffer | null`
```
Some Redis modules (particularly the Search module) have responses that might change in future RESP3 implementations. These commands are marked with `unstableResp3: true` in the codebase.
To use these commands with RESP3, you must explicitly enable unstable RESP3 support:
```javascript
const client = createClient({
  RESP: 3,
  unstableResp3: true
});
```
If you attempt to use these commands with RESP3 without enabling the `unstableResp3` flag, the client will throw an error with a message like:

```
Some RESP3 results for Redis Query Engine responses may change. Refer to the readme for guidance
```
The following Redis commands and modules use the `unstableResp3` flag:
- Many Search module commands (`FT.*`)
- Stream commands such as `XREAD` and `XREADGROUP`
- Other modules with complex response structures
If you're working with these commands and want to use RESP3, make sure to enable unstable RESP3 support in your client configuration.
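For example, here is a hedged sketch of running a Search query over RESP3. The index name `idx:docs` is a placeholder, and the Search module (Redis Query Engine) is assumed to be loaded on the server:

```javascript
const { createClient } = require('redis');

const client = createClient({
  RESP: 3,
  unstableResp3: true // FT.* commands throw under RESP3 without this flag
});
await client.connect();

// 'idx:docs' is a placeholder index name that is assumed to already exist.
const results = await client.ft.search('idx:docs', '*');
console.log(results);
```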
Redis 6.0 introduced client-side caching, which allows clients to locally cache command results and receive invalidation notifications from the server. This significantly reduces network roundtrips and latency for frequently accessed data.
- When a cacheable command is executed, the client checks if the result is already in the cache
- If found and valid, it returns the cached result (no Redis server roundtrip)
- If not found, it executes the command and caches the result
- When a key is modified, the Redis server sends invalidation messages to the clients tracking it
- The client removes the invalidated entries from its cache
This mechanism ensures data consistency while providing significant performance benefits for read-heavy workloads.
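A minimal sketch of this behavior, using the `BasicClientSideCache` provider described below to observe hits and misses (the key name `greeting` is just an illustration):

```javascript
const { createClient, BasicClientSideCache } = require('redis');

const cache = new BasicClientSideCache({ maxEntries: 100 });
const client = createClient({ RESP: 3, clientSideCache: cache });
await client.connect();

await client.set('greeting', 'hello');
await client.get('greeting'); // fetched from the server and stored in the local cache
await client.get('greeting'); // served from the local cache, no server roundtrip

console.log(cache.cacheMisses(), cache.cacheHits()); // expect one miss and one hit
```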
Client-side caching in node-redis:
- Requires the RESP3 protocol (`RESP: 3` in the client configuration)
- Uses the Redis server's invalidation mechanism to keep the cache in sync
- Is completely disabled when using RESP2
Currently, node-redis implements client-side caching only in "default" tracking mode. The implementation does not yet support the following Redis client-side caching modes:
- **Opt-In Mode**: clients explicitly indicate which keys they want to cache by sending the `CLIENT CACHING YES` command before each cacheable command.
- **Opt-Out Mode**: all keys are cached by default, and clients exclude individual commands from tracking by sending `CLIENT CACHING NO` before them.
- **Broadcasting Mode**: clients subscribe to invalidation messages for specific key prefixes, without the server tracking individual client-key relationships.
These advanced caching modes offer more fine-grained control over caching behavior and may be supported in future node-redis releases. While node-redis doesn't implement these modes natively yet, the underlying Redis commands (`CLIENT TRACKING` and `CLIENT CACHING`) are available if you need to implement these tracking modes yourself.
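For instance, here is a hedged sketch of switching a connection into broadcasting mode by issuing the raw command yourself; reacting to the resulting invalidation push messages is entirely up to your own code:

```javascript
// Using an already connected RESP3 client:
// ask the server to broadcast invalidation messages for keys starting with 'user:'.
// node-redis will not populate or invalidate its own cache from these messages.
await client.sendCommand(['CLIENT', 'TRACKING', 'ON', 'BCAST', 'PREFIX', 'user:']);

// Turn tracking off again for this connection when you are done.
await client.sendCommand(['CLIENT', 'TRACKING', 'OFF']);
```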
To enable client-side caching with default settings:
```javascript
const client = createClient({
  RESP: 3,
  clientSideCache: {
    // Cache configuration options
    maxEntries: 1000,  // Maximum number of entries in the cache (0 = unlimited)
    ttl: 60000,        // Time-to-live in milliseconds (0 = never expire)
    evictPolicy: "LRU" // Eviction policy (LRU or FIFO)
  }
});
```
You can also create and control the cache instance directly:
```javascript
// Import the client factory and the cache provider
const { createClient, BasicClientSideCache } = require('redis');

// Create a configurable cache instance
const cache = new BasicClientSideCache({
  maxEntries: 5000,
  ttl: 30000
});

// Create a client that uses this cache
const client = createClient({
  RESP: 3,
  clientSideCache: cache
});

// Later you can:

// Get cache statistics
const hits = cache.cacheHits();
const misses = cache.cacheMisses();

// Manually invalidate specific keys
cache.invalidate('my-key');

// Clear the entire cache
cache.clear();

// Listen for cache events
cache.on('invalidate', (key) => {
  console.log(`Cache key invalidated: ${key}`);
});
```
Client-side caching also works with connection pools:
```javascript
const pool = createClientPool({
  RESP: 3
}, {
  clientSideCache: {
    maxEntries: 10000,
    ttl: 60000
  },
  minimum: 5
});
```
For pools, you can use specialized cache providers such as `BasicPooledClientSideCache` or `PooledNoRedirectClientSideCache`, which handle connection events appropriately.
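Here is a hedged sketch of passing such a provider explicitly, assuming `BasicPooledClientSideCache` is exported from `redis` alongside `BasicClientSideCache` and accepts the same options:

```javascript
const { createClientPool, BasicPooledClientSideCache } = require('redis');

// Assumption: BasicPooledClientSideCache takes the same options as BasicClientSideCache.
const pooledCache = new BasicPooledClientSideCache({
  maxEntries: 10000,
  ttl: 60000
});

const pool = createClientPool({ RESP: 3 }, {
  clientSideCache: pooledCache,
  minimum: 5
});
```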
When tuning client-side caching:
- Configure appropriate `maxEntries` and `ttl` values based on your application's needs
- Monitor cache hit/miss rates to optimize settings (see the sketch after this list)
- Consider memory usage on the client side when using large caches
- Client-side caching works best for frequently accessed, relatively static data
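For example, a simple way to monitor the hit rate with the statistics methods shown earlier (the interval and logging are just an illustration):

```javascript
// Using the `cache` instance from the earlier example:
// periodically log the hit rate so tuning decisions are based on real numbers.
setInterval(() => {
  const hits = cache.cacheHits();
  const misses = cache.cacheMisses();
  const total = hits + misses;
  const hitRate = total === 0 ? 0 : hits / total;
  console.log(`Client-side cache hit rate: ${(hitRate * 100).toFixed(1)}%`);
}, 60000);
```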
We have introduced the ability to perform a "typed" `MULTI`/`EXEC` transaction. Rather than returning `Array<ReplyUnion>`, a transaction invoked with `.exec<'typed'>` will return types appropriate to its commands where possible:
```typescript
const multi = client.multi().ping();
await multi.exec();          // Array<ReplyUnion>
await multi.exec<'typed'>(); // [string]
await multi.execTyped();     // [string]
```
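For example, a transaction that mixes commands yields a tuple with per-command reply types (a sketch; the exact types depend on the command options and your type mapping):

```javascript
// A hedged sketch; the exact tuple types depend on the commands, their options,
// and the active type mapping.
const [pong, value] = await client
  .multi()
  .ping()
  .get('key')
  .execTyped(); // approximately [string, string | null]
```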