It can be extremely useful for mlcache to return stale items while the underlying L3 callback is being run by a worker. Unlike `resurrect_ttl`, which resurrects an item and marks it as stale when the L3 callback returns `nil, err` (e.g. on a database lookup error), support for stale-while-revalidate (swrv) items would mean that an item is marked as stale by mlcache (and callers of `get()` can be made aware of the stale state of the fetched item) while an L3 callback is being run in the background by a given worker.
Currently, implementing swrv requires scheduling background timers to fetch fresh data. Once it is fetched, users call `cache:set()` to update the data in the L2 cache and the calling worker's L1 cache. This also forces users to call `cache:update()` in every worker to ensure that all workers' L1 caches are up-to-date.
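For reference, here is a rough sketch of what this manual pattern looks like today, assuming an OpenResty context with an existing mlcache instance `cache`; `fetch_from_db` and the zero-delay timer are illustrative, not part of mlcache:

```lua
-- user-defined L3 lookup (illustrative)
local function fetch_from_db(key)
    -- ... query the database for `key` ...
end

local function revalidate(premature, key)
    if premature then
        return
    end

    local fresh, err = fetch_from_db(key)
    if err then
        ngx.log(ngx.ERR, "revalidation failed: ", err)
        return
    end

    -- updates L2 and this worker's L1 only
    local ok, set_err = cache:set(key, nil, fresh)
    if not ok then
        ngx.log(ngx.ERR, "could not set fresh value: ", set_err)
    end
end

-- serve the (possibly stale) cached value immediately...
local value, err = cache:get("key", nil, fetch_from_db, "key")

-- ...and schedule a background refresh
local ok, timer_err = ngx.timer.at(0, revalidate, "key")

-- meanwhile, every other worker must periodically call cache:update()
-- (e.g. from a recurring timer) so its L1 picks up the cache:set() above
```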
We should try to implement swrv items natively, without relying on any IPC. Items could be marked as stale and scheduled for revalidation in a number of ways:
- Via a new `stale_ttl` option: when reached, the item is not removed from the cache like with `ttl`, but is instead marked as stale (reflected in the `hit_lvl` return value of `cache:get()`) and its associated L3 callback is scheduled in a timer. This approach requires that the callback argument be stored inside of mlcache for later use when `stale_ttl` is reached.
- When returning from the L3 callback, a 4th return value could be: `item, err, ttl, stale`. When `stale` is truthy, the item is marked as stale and only stored in L2 (never promoted to L1). Subsequent calls to `get()` return the L2 data and an appropriate `hit_lvl` indicating the staleness. Eventually, the fresh data is fetched by the user (outside of mlcache), who calls `cache:set_fresh()`, which only updates the L2 data and removes the stale marker, causing the data to be promoted to L1 again.
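To make the discussion concrete, here is how the two options could look from the caller's perspective. None of these APIs exist yet: `stale_ttl`, the 4th callback return value, `cache:set_fresh()`, and the staleness `hit_lvl` are all proposals, and `fetch_from_db`/`db_lookup` are placeholder user functions:

```lua
-- Option 1: a `stale_ttl` option; mlcache schedules revalidation itself
local value, err, hit_lvl = cache:get("key", {
    ttl       = 60,
    stale_ttl = 30, -- proposed: after ttl expires, serve the stale item
                    -- while a timer re-runs the L3 callback in the
                    -- background (callback args stored by mlcache)
}, fetch_from_db)
-- hit_lvl would carry a dedicated value indicating staleness

-- Option 2: the L3 callback flags the item as stale itself
local function fetch_from_db()
    local row, err = db_lookup()
    if err then
        return nil, err
    end
    -- proposed 4th return value: mark the item stale, store it in L2
    -- only (never promoted to L1)
    return row, nil, 60, true
end

-- later, once the user has fetched fresh data out-of-band:
local ok, err = cache:set_fresh("key", nil, fresh_value) -- proposed API
```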
Each approach comes with its own pros and cons, as well as opportunities for new APIs (e.g. custom swrv and retry strategies implemented in an OOP fashion), which I won't detail right now...
Let's talk about implementation here before working on any PR for this!