Update serialization extension to support custom cache-size metric #52
base: master
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

@@ Coverage Diff @@
##           master      #52      +/-   ##
==========================================
+ Coverage   81.92%   81.97%   +0.05%
==========================================
  Files           3        3
  Lines         354      355       +1
==========================================
+ Hits          290      291       +1
  Misses         64       64
ext/SerializationExt.jl
@@ -29,20 +29,21 @@ function Serialization.deserialize(s::AbstractSerializer, ::Type{LRU{K, V}}) where
    lock = Serialization.deserialize(s)
    by = Serialization.deserialize(s)
    finalizer = Serialization.deserialize(s)
    n_items = Serialization.deserialize(s)
Is there a way to detect that there are no bytes left to deserialize and fall back to n_items = currentsize here? That would let us keep supporting deserialization of old serialized files.
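One way this could work is to probe the stream for remaining data. A minimal sketch with made-up payloads, assuming the serializer's underlying stream reports eof() reliably once all fields are consumed (true for files and IOBuffers, not necessarily for sockets):

```julia
using Serialization

# A "new-format" buffer: a required field plus a trailing optional field.
buf = IOBuffer()
serialize(buf, "payload")   # field present in both old and new formats
serialize(buf, 42)          # trailing n_items field, new format only
seekstart(buf)

payload = deserialize(buf)
# Old-format data ends here, so eof() distinguishes the two layouts.
n_items = eof(buf) ? nothing : deserialize(buf)

# An "old-format" buffer with only the required field takes the fallback.
old = IOBuffer()
serialize(old, "payload")
seekstart(old)
deserialize(old)
fallback = eof(old) ? :fallback : :trailing_field
```

In the actual extension the stream would be the serializer's underlying IO rather than a bare IOBuffer, and whether eof() is safe to call there mid-deserialization is exactly the open question.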
(if not, I'm not sure this should be a patch release, it seems like it could unexpectedly break stuff. Not quite sure what folks use LRU serialization for though, is it for IPC only or for persistence?)
That's a good point; this will break deserializing old serialized data. I think we can make this backward compatible, and that would be nicer in case there are some persisted LRUs out there.
Any existing file would have n_items in the currentsize position, so we can continue storing n_items there and always recompute currentsize.
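The layout described above could be sketched as follows; this is an illustration with made-up data, not the actual SerializationExt.jl code, using sizeof as a stand-in custom by function:

```julia
# Simulated deserialized state: key => value pairs read back from the stream.
entries = ["a" => "hello", "b" => "worlds!"]

by = sizeof   # hypothetical custom size metric (bytes per value)

# The slot that used to hold currentsize now holds n_items, so old files
# (where the two values coincide) still round-trip correctly.
n_items = length(entries)

# currentsize is never trusted from the stream; it is always recomputed
# from the deserialized entries with the cache's `by` function.
currentsize = sum(by(v) for (_, v) in entries)
```

Recomputing rather than storing currentsize also means a stale or corrupted size field in an old file can never leak into the reconstructed cache.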
I updated the PR. Thanks for pointing this out!
good solution!
This seems good to me as well. Thanks for the great changes!
Also looks like a great improvement to me; I don't think I can spot any issues here.
The current Serialization extension assumes the cache size is always the number of items in the cache. This PR allows serializing LRUs with custom by functions, where lru.currentsize != length(lru). See #51.
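To illustrate the distinction without depending on LRUCache.jl itself: with a custom by such as sizeof, the tracked size is the sum of by(value) over the entries rather than the entry count, so the two can diverge. A plain-Dict sketch:

```julia
# Stand-in for an LRU with by = sizeof: currentsize counts bytes,
# length counts entries.
by = sizeof
cache = Dict("greeting" => "hello", "id" => "xy")

currentsize = sum(by, values(cache))   # 5 + 2 = 7 bytes
n_entries = length(cache)              # 2 entries
```

With the default by (every entry counts as 1) the two quantities always agree, which is why the old serialization format could get away with storing only one of them.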