Add total memory to job info 878 #879
base: master
Conversation
This is so strange. Locally I have to add […] and yet, in the CI I get different failures. So we have divergence in the tests between local and remote, which is horrible from a code velocity perspective. Here's the other issue: […] Anyway, when I get the tests to somehow work, I'll finish this extremely simple ticket, which has now taken weeks because of testing.
# Slurm uses per CPU memory if --mem-per-cpu with 'Mc' output
# or uses per node if --mem with 'M' output
if v[:min_memory].end_with?('c')
  # memory per CPU
  memory_per = :cpu
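For context, here is a minimal sketch of how the 'Mc' / 'M' suffix could be turned into a total byte count. The helper name, the `v[:cpus]` / `v[:nodes]` fields, and the assumption that the figure is in megabytes are illustrative only, not the adapter's actual code:

```ruby
# Illustrative only: convert a Slurm MinMemory value such as "4000Mc"
# (per CPU) or "8000M" (per node) into a total number of bytes.
def total_memory_bytes(v)
  raw = v[:min_memory].to_s
  return nil if raw.empty?

  megabytes  = raw.to_i                                    # "4000Mc" -> 4000
  multiplier = raw.end_with?('c') ? v[:cpus].to_i : v[:nodes].to_i
  megabytes * multiplier * 1024 * 1024
end
```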
Where did you get this from? I'm unable to replicate this or see it in any job at OSC.
@@ -116,6 +121,7 @@ def initialize(id:, status:, allocated_nodes: [], submit_host: nil,
      @status = job_array_aggregate_status unless @tasks.empty?

      @native = native
      @total_memory = total_memory
This should likely keep the other behavior where we check for nil and cast with to_i when it's not nil, like gpus below it.
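A sketch of the nil-preserving cast described here, mirroring how gpus is said to be handled below it; the surrounding code is not shown in this hunk, so this is illustrative:

```ruby
# Keep nil when no value was given, otherwise coerce to an Integer.
@total_memory = total_memory.nil? ? nil : total_memory.to_i
```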
Addresses #878.

- Adds `total_memory` to `OodCore::Job::Info` to ensure we pass back memory in a consistent manner using bytes; it defaults to `nil`.
- The Slurm adapter reports the memory whether the job requested it with `--mem` or `--mem-per-cpu`.
- Updated `slurm_spec.rb` as well to ensure the adapter changes are correct, and `info_spec.rb` to ensure correctness.
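A hypothetical usage sketch once `total_memory` is exposed; the config path and job id below are placeholders, and the call signatures should be checked against the installed ood_core version:

```ruby
require 'ood_core'

# Load a cluster definition and ask its job adapter for a job's info.
cluster = OodCore::Clusters.load_file('/etc/ood/config/clusters.d').first
info    = cluster.job_adapter.info('1234567')

# Expected to be a byte count, or nil if the scheduler reported no memory.
puts info.total_memory
```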