The Redis backends in their current state do not work with AWS ElastiCache, due to certain limitations the latter imposes on the provided Redis/Valkey interfaces. This issue tracks the changes necessary to make the two cooperate happily.
This is in no way about AWS-specific hacks, but rather about finding and using the lowest common denominator between "normal" Redis and the one provided by AWS. Internally, an ElastiCache deployment (whether "serverful" or Serverless) is a Redis Cluster, so AFAIU most restrictions actually come from the Cluster deployment model. Serverless has some further limitations on top of that, so I would propose to target compatibility with Serverless to cover everything.
AWS docs:
- Supported and restricted commands
- Best practices for Lua scripts
Broker
Citing Redis docs:
> Important: to ensure the correct execution of scripts, both in standalone and clustered deployments, all names of keys that a script accesses must be explicitly provided as input key arguments. The script should only access keys whose names are given as input arguments. Scripts should never access keys with programmatically-generated names or based on the contents of data structures stored in the database.
AWS takes the recommendation seriously and outright disallows execution of non-conformant scripts.
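To make the restriction concrete, here is a minimal sketch (using redis-py; the key names and scripts are made up for illustration) of the difference between a script that declares its keys up front and one that builds them itself:

```python
import redis

client = redis.Redis()

# Compliant: the script only touches keys it received via KEYS.
compliant = client.register_script("""
return redis.call('LLEN', KEYS[1])
""")
compliant(keys=["dramatiq:default.msgs"])  # illustrative key name

# Non-compliant: the key name is assembled inside Lua from an argument,
# so it cannot be declared up front; ElastiCache refuses such scripts.
non_compliant = client.register_script("""
local queue = 'dramatiq:' .. ARGV[1] .. '.msgs'
return redis.call('LLEN', queue)
""")
```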
- Most of the dynamic key names in `dispatch.lua` are not that dynamic: we just build new keys like queue names from known static values (`namespace` etc.). I believe this could be done in Python instead (see the sketch after this list).
- `KEYS` is not supported on Serverless.
- Lua scripts without any input keys are not supported. I think this is only the case with `maxstack.lua` and is easily fixed by providing some key (also shown in the sketch below).
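A rough sketch of what moving the key construction to Python could look like, under the assumption that the derived names can indeed be computed ahead of time (the helper name and key layout here are invented, not the broker's actual ones):

```python
import redis

client = redis.Redis()
NAMESPACE = "dramatiq"  # assumption: whatever namespace the broker is configured with

def queue_key(queue_name: str) -> str:
    # Build the derived key name in Python instead of concatenating it in Lua.
    return f"{NAMESPACE}:{queue_name}.msgs"  # illustrative layout

enqueue = client.register_script("""
-- The script now only touches the key it was explicitly given.
redis.call('LPUSH', KEYS[1], ARGV[1])
return redis.call('LLEN', KEYS[1])
""")
enqueue(keys=[queue_key("default")], args=["<message payload>"])

# For a script that logically needs no keys (the maxstack.lua case), passing
# some well-known key anyway is enough to satisfy the "at least one key" rule.
keyless = client.register_script("return 1")
keyless(keys=[f"{NAMESPACE}:__dummy__"], args=[])
```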
Rate Limiter backend
`WATCH` is not supported on Serverless, so the `incr`/`decr`/`incr_and_sum` methods won't work.
This is solved by rewriting the logic of `incr`/`decr`/`incr_and_sum` as Lua scripts instead: the atomic execution would achieve the same effect without the need for `WATCH` on the client.
In fact, I've already implemented this and it's been used in production for quite a while. I can submit a PR after I tidy the code up a bit.
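Purely for illustration (this is not the actual PR code, and the key name, limit and TTL semantics are assumptions), a bounded increment of the kind `incr` performs could look like this as a single Lua script; the atomicity of script execution replaces the `WATCH`-based optimistic locking:

```python
import redis

client = redis.Redis()

# Increment KEYS[1] only if the result stays within ARGV[1]; refresh its TTL
# to ARGV[2] milliseconds. Returns 1 on success, 0 if the limit would be hit.
# The whole script runs atomically, so no client-side WATCH is needed.
bounded_incr = client.register_script("""
local current = tonumber(redis.call('GET', KEYS[1]) or '0')
local maximum = tonumber(ARGV[1])
if current + 1 > maximum then
    return 0
end
redis.call('INCR', KEYS[1])
redis.call('PEXPIRE', KEYS[1], ARGV[2])
return 1
""")

acquired = bounded_incr(keys=["dramatiq-rl:my-mutex"], args=[1, 30_000])  # made-up names
```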
Result backend
I haven't tried using the result backend yet, but the code looks harmless enough and I expect no changes to be necessary.