[iris] Remove 8 GB Docker memory cap on build containers #4625
Conversation
uv sync on a large workspace exceeds the 8 GB _BUILD_MEMORY_LIMIT_BYTES ceiling. The host OOM killer fires against the container cgroup, not host memory (confirmed: CONSTRAINT_MEMCG in dmesg on a worker with 1.4 TB free). Remove the artificial limit so builds use whatever the host has available.
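The CONSTRAINT_MEMCG check described above can be scripted when triaging OOM kills. A minimal sketch (the classify_oom helper and the sample log line are illustrative, not from the Iris codebase; only the constraint= field values come from the kernel's oom-kill output):

```python
def classify_oom(dmesg_line: str) -> str:
    """Classify an oom-kill dmesg line by its constraint field.

    CONSTRAINT_MEMCG -> a cgroup limit (e.g. docker --memory) was hit
    CONSTRAINT_NONE  -> the host itself ran out of memory
    """
    if "CONSTRAINT_MEMCG" in dmesg_line:
        return "cgroup limit (container memory cap)"
    if "CONSTRAINT_NONE" in dmesg_line:
        return "host out of memory"
    return "unknown"


# Illustrative line in the kernel's oom-kill format:
line = ("oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),"
        "cpuset=docker-abc123,mems_allowed=0,task=uv,pid=4242")
print(classify_oom(line))  # cgroup limit (container memory cap)
```

Here the kill is attributed to the container cgroup cap, matching the 8 GB limit being hit while the host had 1.4 TB free.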
|
Claude finished @rjpower's task in 1m 34s (Review of PR #4625)
LGTM — the change is correct and well-motivated. The 8 GB fixed ceiling was clearly too low for large-workspace builds.

One subtlety to flag: passing memory_limit_mb=None still flows through effective_memory_mb = memory_limit_mb or config.get_memory_mb() (line 683), so the build container falls back to the task's own memory request. The notable case is a mild regression: if a task requests less than 8 GB, the build container now gets a lower cap than it used to. In practice this is probably fine — Iris tasks on beefy hosts almost certainly request more than 8 GB — but it's worth being aware of. If this could bite anyone, an easy fix is memory_limit_mb=None, # → skip the docker --memory flag entirely for builds, with branching inside _docker_create() so builds actually omit the flag. If the intent is "builds should never have a Docker memory cap", the cleanest approach would be to make that skip explicit.

Summary: Ship it. 🚢
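The "skip the flag entirely" idea can be sketched with a sentinel (UNLIMITED_MB and docker_memory_args are hypothetical names, not from the Iris codebase):

```python
# Hypothetical sentinel meaning "never pass --memory to docker create".
UNLIMITED_MB = -1


def docker_memory_args(memory_limit_mb: int) -> list:
    """Build docker's --memory argument, with an explicit "no cap" case.

    Branching on a sentinel keeps "uncapped" distinct from None,
    which would otherwise fall back to the task's own memory request.
    """
    if memory_limit_mb == UNLIMITED_MB:
        return []  # skip the --memory flag entirely
    return ["--memory={}m".format(memory_limit_mb)]


print(docker_memory_args(8192))          # ['--memory=8192m']
print(docker_memory_args(UNLIMITED_MB))  # []
```

With this shape, "builds should never have a Docker memory cap" becomes a distinct, explicit code path rather than a fallback.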
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: a2e2805e65
      command=["bash", "/app/_setup_env.sh"],
      label_suffix="_build",
-     memory_limit_mb=build_memory_mb,
+     memory_limit_mb=None,
Skip memory fallback when removing build limits
Passing memory_limit_mb=None here does not actually remove the Docker memory cap: _docker_create() computes effective_memory_mb = memory_limit_mb or config.get_memory_mb() and still sets --memory whenever the task specifies resources.memory_bytes. That means build containers for memory-scoped tasks remain capped (and tasks below 8 GB now get a tighter limit than before), so uv sync can still OOM despite this change’s stated goal of running build containers without an artificial limit.
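The fallback Codex quotes can be reproduced in isolation to see why None is not "uncapped" (effective_memory and the 4 GiB task request below are illustrative):

```python
def effective_memory(memory_limit_mb, task_memory_mb):
    # The fallback shape quoted from _docker_create(): `or` treats
    # None (and 0) as "unset" and falls through to the task's request.
    return memory_limit_mb or task_memory_mb


# memory_limit_mb=None does not remove the cap for memory-scoped tasks:
print(effective_memory(None, 4096))  # 4096, so --memory=4096m is still set
# A task requesting under 8 GB is now capped tighter than the old
# 8192 MB build ceiling:
print(effective_memory(None, 4096) < 8192)  # True
```

Only a task with no memory request at all would end up uncapped, which is why the stated goal needs an explicit skip rather than None.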
@claude let's just bump from 8G -> 32G instead
Claude finished @rjpower's task in 2m 37s (Bumped build memory limit from 8 GB → 32 GB)
Replaced the "remove limit entirely" approach with a simple bump: restored the max(limit, task request) logic with a 32 GB floor instead of 8 GB.
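A minimal sketch of the bumped logic (the constant value is from the thread; the helper name and byte/MB conversion details are illustrative):

```python
_BUILD_MEMORY_LIMIT_BYTES = 32 * 1024**3  # floor bumped from 8 GiB


def build_memory_mb(task_memory_bytes):
    """Build containers get at least the 32 GiB floor; a task that
    requests more than the floor keeps its larger limit."""
    floor_mb = _BUILD_MEMORY_LIMIT_BYTES // (1024 * 1024)
    if task_memory_bytes is None:
        return floor_mb
    return max(floor_mb, task_memory_bytes // (1024 * 1024))


print(build_memory_mb(None))          # 32768 (the new floor)
print(build_memory_mb(64 * 1024**3))  # 65536 (larger request wins)
print(build_memory_mb(4 * 1024**3))   # 32768 (no regression for small tasks)
```

Unlike the memory_limit_mb=None version, this keeps small tasks at the floor instead of tightening their cap below the old 8 GB ceiling.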
Restore the max(limit, task_request) logic but with a 32 GB floor instead of 8 GB. The old ceiling caused uv sync OOMs on large workspaces (confirmed via CONSTRAINT_MEMCG in dmesg).

Co-authored-by: Russell Power <rjpower@users.noreply.github.com>
uv sync on a large workspace OOMs against the fixed 8 GB _BUILD_MEMORY_LIMIT_BYTES ceiling. Confirmed via dmesg on the worker: CONSTRAINT_MEMCG killed uv at exactly 8 GB while the host had 1.4 TB free. Remove the artificial limit so build containers are bounded only by host memory.