smp: add a function that barriers memory prefault work #2608
base: master
Conversation
Did you observe latency impact from the prefault threads? It was written carefully not to have latency impact, but it's of course possible that some workloads suffer.
As you described in #1702, page faults can cause deviation, and following up on the example there, latency can be variably higher for 25 seconds.
I said nothing about latency being higher there. We typically run large machines with a few vcpus not assigned to any shards, and the prefault threads run with low priority. |
There are 2 aspects:
In the previous comment, I meant page fault latency. The page faults can cause unpredictably high latency until the prefaulter finishes. Regarding page fault measurement, it seems I cannot measure it reliably in my environment.
I tried to unscientifically isolate the wall-time overhead of the prefault threads: I have a test app that performs file I/O and processes memory buffers repeatedly. I used an Ubuntu OrbStack VM with 1 NUMA node, 10 cores, and --memory=14G (effectively a small NUMA node), and a small input to make the overhead most visible.
By default seastar uses all vcpus, which makes sense for resource efficiency. Also, do you free specific vcpus, like one per NUMA node, the granularity of the prefault threads?
1 in 8, with NUMA awareness. They're allocated for kernel network processing. See perftune.py.
Nice. Let me know if this change makes sense to you.
@avikivity ping
I also tried to simulate perftune with 1 free vcpu:
I don't understand what this 1600ms overhead is.
I mean it's the wall time of the work that I observe, with 1 free vcpu:
Okay. But what's the problem with that time? Anyway, if we add
The problem is that it is slower and not consistent/predictable. After memory initialization it is faster and consistent. Regarding the implementation, the problem is that |
How is pthread_join relevant? |
Currently, the logical barrier waits for pthread_join on all the threads that perform the prefault work. It will block the reactor thread. |
Ah, you're referring to the patch while I was referring to the current state. Don't use join then, instead figure out something else that can satisfy a seastar::promise. Maybe it's as simple as |
Ah, I see. So if I understand correctly, you have just restated the clarified motivation for the patch (please correct me if I'm wrong). I'll work on a non-blocking method next week.
I don't completely see that it's useful but can't deny that it might be. I'd be happier with an example of a real application requiring it. |
I have only a test application. How about scylladb? |
I'm not aware of reports of problems during the prefault stage. It takes some time for a node to join the cluster, and by that time enough memory has been prefaulted for it to work well.
Sorry for the delay and the confusion. I saw you added a join method: #2679. I'll try to rebase my changes.
Force-pushed from d681d1e to 8ece4fc
v2: Rebase on master and rewrite as a non-blocking seastar::join_memory_prefault.
@avikivity hey, please review
src/core/reactor.cc (outdated)

```cpp
future<> join_memory_prefault() {
    auto& r = engine();
    if (!r._smp->memory_prefault_initialized()) {
        seastar_logger.warn("Memory prefaulter is not initialized but joined");
```
This warning isn't helpful to users; what can they do?
They should fix the configuration to make Seastar actually prefault memory, or remove the redundant join if it's not intended. But if you prefer otherwise, I'll remove it.
The application doesn't know if the user wants to prefault memory or not. It's common in local testing not to prefault, and in production to prefault.
src/core/smp.cc (outdated)

```cpp
bool
smp::memory_prefault_initialized() {
    return _prefaulter != nullptr;
}
```
Instead of this, you can promise::set_value() on the promise if you don't initialize the prefaulter.
I agree, this way the promise is not left unresolved. I will call it in smp::configure.
src/core/smp.cc (outdated)

```cpp
internal::memory_prefaulter::alien_on_complete(smp& smp_context) {
    run_on(smp_context._alien, 0, [this, &smp_context] () noexcept {
        join_threads();
        run_in_background(smp_context.broadcast_memory_prefault_completion());
```
Alternatively, we can document that the join() should only be run on shard 0. I expect that most applications run initialization code on shard 0 and don't need it to be available anywhere else.
(I want to deprecate and remove run_in_background, I think it's dangerous)
Why not let it work on any shard? We need the new promise member on the reactor class anyway.
Also, maybe make run_in_background internal instead? It's needed for such use cases.
wdyt?
@avikivity ping
It seems an unnecessary complication. Applications typically have a main thread that runs on shard 0 that coordinates the startup process.
Force-pushed from 8ece4fc to 8b3dc8c
Currently, memory prefault logic is internal and Seastar doesn't provide much control to users. To improve the situation, I suggest providing a barrier for the prefault work. This allows users to:

* Prefer predictable low latency and high throughput from the start of request serving, at the cost of a startup delay that depends on machine characteristics and application-specific requirements. For example, a fixed-capacity on-prem DB setup, where slower startup can be tolerated. From the users' perspective, they generally cannot tolerate inconsistency (such as latency spikes during startup).
* Similarly, improve user scheduling decisions, such as running less critical tasks while the prefault work runs.
* Reliably test the prefault logic, improving reliability and users' trust in Seastar.

This patch adds the memory_prefaulter class as a friend of the smp class, and passes an smp context to the prefaulter. The prefaulter calls into the smp context upon completion using a new broadcast method, which sends a completion event to all the reactor threads. A new promise member on the reactor class makes it possible to return a per-reactor future that represents the prefault completion state. This way, the mechanism is eventually consistent across all the reactors. The interface is a free function in the seastar namespace.
Force-pushed from 8b3dc8c to 9156e3f
@avikivity v3:
Currently, memory prefault logic is internal and Seastar doesn't provide much control to users. To improve the situation, I suggest providing a barrier for the prefault threads. This allows users to:
I tested locally. If you approve, next I will try to submit a prefault test.