Distributed size for concurrent ordered containers #1803
base: master
Conversation
key_compare my_compare;
random_level_generator_type my_rng;
atomic_node_ptr my_head_ptr;
std::atomic<size_type> my_size;
There is also an option to keep my_size: reserve a flag bit in it and use it to optimize consecutive calls to size() by saving the result of the combine into my_size and marking it as "changed" on insertion.
I did not investigate the potential performance effects of CAS operations on my_size; I just wanted to record the idea.
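A minimal sketch of that flag-bit idea (not code from this PR; the cached_size class and the combine_local_sizes callback are hypothetical names): the top bit of my_size marks the cached value as stale, mutations set it, and size() recombines only when it is set.

#include <atomic>
#include <cstddef>

class cached_size {
    using size_type = std::size_t;
    // Reserve the top bit of my_size as the "changed"/dirty flag.
    static constexpr size_type dirty_bit = size_type(1) << (sizeof(size_type) * 8 - 1);
    std::atomic<size_type> my_size{dirty_bit}; // start dirty: force the first combine

public:
    // Called from insert()/erase(): mark the cached value stale.
    void mark_dirty() {
        my_size.fetch_or(dirty_bit, std::memory_order_relaxed);
    }

    // Called from size(): reuse the cached value if it is still clean,
    // otherwise recombine the per-thread counters and try to cache the result.
    template <typename Combine>
    size_type size(Combine combine_local_sizes) {
        size_type cached = my_size.load(std::memory_order_relaxed);
        if (!(cached & dirty_bit))
            return cached; // consecutive size() calls hit this fast path
        size_type combined = combine_local_sizes() & ~dirty_bit;
        // CAS rather than a plain store, so a concurrent recombine is not
        // clobbered. Note a mark_dirty() racing between the combine and this
        // CAS can still be lost, so a real implementation would need e.g. a
        // generation counter as well.
        my_size.compare_exchange_strong(cached, combined, std::memory_order_relaxed);
        return combined;
    }
};

This is also where the CAS cost mentioned above shows up: every size() that finds the flag set pays one compare_exchange on a shared cache line.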
void set_size(size_type size) { my_local_size.store(size, std::memory_order_relaxed); }
void increment_size() {
    my_local_size.store(local_size() + 1, std::memory_order_relaxed);
}
Why not use std::atomic<>::fetch_add instead of a load/store pair?
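For reference, a sketch of the two variants on a hypothetical per-thread slot: with relaxed ordering, fetch_add is a single atomic read-modify-write, while the load/store pair is only correct if each my_local_size is ever written by one thread.

#include <atomic>
#include <cstddef>

std::atomic<std::size_t> my_local_size{0};

// Single atomic read-modify-write: safe even with concurrent writers.
void increment_size_rmw() {
    my_local_size.fetch_add(1, std::memory_order_relaxed);
}

// Load + store is not atomic as a whole: two concurrent callers can both
// read N and both store N + 1, losing an increment. It only works when
// exactly one thread writes this slot (the distributed-counter case).
void increment_size_load_store() {
    my_local_size.store(my_local_size.load(std::memory_order_relaxed) + 1,
                        std::memory_order_relaxed);
}

On most ISAs the plain load/store pair is cheaper than a locked read-modify-write, which is presumably why a per-thread slot would avoid fetch_add.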
Description
Add a comprehensive description of proposed changes
Fixes # - issue number(s) if exists
Type of change
Choose one or multiple; leave empty if none of the other choices apply.
Add respective label(s) to the PR if you have permission.
Tests
Documentation
Breaks backward compatibility
Notify the following users
List users with @ to send notifications
Other information