[LEADS-199] Add CPU-only PyTorch support to reduce package size #185
asamal4 merged 1 commit into lightspeed-core:main
Conversation
Walkthrough
Adds UV-based CPU and GPU installation flows to the README for local HuggingFace embedding models and replaces the pip-only example.

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Pre-merge checks: ✅ 3 of 3 passed
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pyproject.toml`:
- Around lines 134-140: The guidance suggesting `uv lock --no-sources -o uv-gpu.lock` is invalid: `uv lock` only writes `uv.lock` and does not support `-o` for renaming the output. Remove the incorrect multi-lockfile guidance and the reference to `uv-gpu.lock`, leaving only the correct regenerate command (`uv lock`) and a short note that alternatively named lockfiles are not supported by `uv lock` (or replace it with correct steps if separate GPU lock generation is wanted via a different workflow or tool). Update the comment block that mentions `uv-gpu.lock` and the example command so it no longer instructs using `-o`/`--output-file` with `uv lock`.
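A minimal sketch of the corrected guidance, assuming the standard uv CLI (the invalid command is shown only as a comment):

```shell
# Regenerate the project lockfile; uv always writes to uv.lock in the project root.
uv lock

# Not supported -- uv lock has no -o/--output-file option:
#   uv lock --no-sources -o uv-gpu.lock
```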
- Around lines 48-50: The extra `local-embeddings-gpu` is ineffective because torch is globally mapped to the CPU index via `[tool.uv.sources]`, so sentence-transformers' transitive torch dependency resolves from `pytorch-cpu` regardless of the extra chosen. Fix by adding an extra-scoped source and a conflict declaration: key the torch entry in `[tool.uv.sources]` on the extra name and declare an explicit torch dependency under the `local-embeddings-gpu` extra, so that extra resolves torch from the CUDA index while the default extra keeps the CPU index.
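One way to express this fix, sketched in uv's extra-scoped source syntax (the index names, CUDA build, and explicit torch pins below are assumptions for illustration, not taken from the actual pyproject.toml):

```toml
[project.optional-dependencies]
local-embeddings = ["sentence-transformers", "torch"]
local-embeddings-gpu = ["sentence-transformers", "torch"]

[tool.uv.sources]
# Scope each index to one extra so the GPU extra stops inheriting pytorch-cpu.
torch = [
  { index = "pytorch-cpu", extra = "local-embeddings" },
  { index = "pytorch-cuda", extra = "local-embeddings-gpu" },
]

[tool.uv]
# The two extras pin torch to different indexes, so declare them as conflicting.
conflicts = [
  [{ extra = "local-embeddings" }, { extra = "local-embeddings-gpu" }],
]

[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true

[[tool.uv.index]]
name = "pytorch-cuda"
url = "https://download.pytorch.org/whl/cu128"  # assumed CUDA build
explicit = true
```

With `explicit = true`, each PyTorch index is consulted only for the packages pinned to it, so the rest of the dependency tree still resolves from PyPI.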
ℹ️ Review info
Configuration used: .coderabbit.yaml | Review profile: CHILL | Plan: Pro
Run ID: c5cd4676-9547-4dde-a690-1e0e4ec69618
⛔ Files ignored due to path filters (2): uv-gpu.lock, uv.lock (excluded by !**/*.lock)
📒 Files selected for processing (2): README.md, pyproject.toml
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@README.md`:
- Around lines 49-53: Update the README pip example to guarantee CPU-only PyTorch wheels: modify the shown command (`pip install 'lightspeed-evaluation[local-embeddings]'`) to include PyTorch's CPU wheel index URL (`--index-url https://download.pytorch.org/whl/cpu`), or alternatively add an explicit note next to that command clarifying that the plain command does not guarantee CPU-only installs and that the index URL is needed for a CPU-only installation. Also keep the existing mention of `uv.lock` vs `uv-gpu.lock` to guide users choosing the smaller CPU-only environment.
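The suggested CPU-only pip command could look like the sketch below; note that `--index-url` alone replaces PyPI for the whole install, so `--extra-index-url` (which adds the CPU index alongside PyPI) is often the safer variant:

```shell
# CPU-only PyTorch wheels via PyTorch's CPU index (sketch):
pip install 'lightspeed-evaluation[local-embeddings]' \
    --extra-index-url https://download.pytorch.org/whl/cpu
```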
- Around lines 46-47: The README shows an invalid command that passes a positional lockfile after `--locked`. Update the instructions to explain that `uv sync` does not accept a lockfile path and that `--locked` is a boolean flag requiring the project lockfile to be named `uv.lock` adjacent to `pyproject.toml`. Provide the correct steps: either rename or symlink `uv-gpu.lock` to `uv.lock` before running `uv sync --locked`, or run `uv sync --frozen` to use the existing lockfile without freshness checks.
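The two supported flows named above, sketched as commands run from the project root:

```shell
# Option A: make the GPU lockfile the active uv.lock, then sync strictly.
cp uv-gpu.lock uv.lock        # or: ln -sf uv-gpu.lock uv.lock
uv sync --locked              # --locked is a boolean flag; it takes no path

# Option B: install from the existing uv.lock without freshness checks.
uv sync --frozen
```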
ℹ️ Review info
Configuration used: .coderabbit.yaml | Review profile: CHILL | Plan: Pro
Run ID: 9373b58c-0e69-4d13-ab42-8be1dc5353c9
⛔ Files ignored due to path filters (2): uv-gpu.lock, uv.lock (excluded by !**/*.lock)
📒 Files selected for processing (2): README.md, pyproject.toml
🚧 Files skipped from review as they are similar to previous changes (1): pyproject.toml
VladimirKadlec
left a comment
LGTM, thank you. Please rebase to the current main to pass the e2e tests.
Thanks @VladimirKadlec for reviewing it. I have rebased on the current main, and now it passes.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pyproject.toml`:
- Around lines 38-40: The documented GPU install command `uv sync --extra local-embeddings --locked uv-gpu.lock` is invalid: `--locked` is a boolean flag, and uv uses the fixed filename `uv.lock`. Update the examples in pyproject.toml and README.md to remove the filename after `--locked` and offer supported alternatives: the single-lock approach using a dependency group (e.g., `uv sync --group gpu`, with the `gpu` group declared in pyproject.toml), or a separate-lock flow via pip compile and sync (generate a requirements-gpu.txt with `uv pip compile ... -o requirements-gpu.txt`, then install with `uv pip sync requirements-gpu.txt`). Replace every occurrence of the invalid command string with one of these supported variants, and ensure the docs mention that `--locked` takes no file path.
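The separate-lock flow could be sketched as follows (the extra name passed to `uv pip compile` is an assumption standing in for the elided arguments):

```shell
# Pin a GPU-specific requirements file, then install exactly those pins.
uv pip compile pyproject.toml --extra local-embeddings-gpu -o requirements-gpu.txt
uv pip sync requirements-gpu.txt
```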
ℹ️ Review info
Configuration used: .coderabbit.yaml | Review profile: CHILL | Plan: Pro
Run ID: fd28ce25-9d43-4b25-818b-a54b6780e238
⛔ Files ignored due to path filters (2): uv-gpu.lock, uv.lock (excluded by !**/*.lock)
📒 Files selected for processing (2): README.md, pyproject.toml
🚧 Files skipped from review as they are similar to previous changes (1): README.md
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@README.md`:
- Around lines 43-54: Update the README so the uv-related instructions explicitly state that the `uv sync` commands require a checked-out repository containing `pyproject.toml`, `uv.lock`, and `uv-gpu.lock` (i.e., a cloned repo), and that running `pip install git+...` alone will not provide those files. Mention that users must clone the project before running `uv sync` or copying `uv-gpu.lock` to `uv.lock`, and reference `uv sync`, `uv.lock`, `uv-gpu.lock`, and `pyproject.toml` in the clarification so readers know exactly which files are needed and the correct setup flow.
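A sketch of the clone-first flow this clarification asks for (the repository URL is an assumption based on the lightspeed-core org and package name):

```shell
# uv sync needs pyproject.toml plus the lockfiles, so clone the repo first.
git clone https://github.com/lightspeed-core/lightspeed-evaluation.git
cd lightspeed-evaluation
cp uv-gpu.lock uv.lock   # optional: pick the GPU lockfile
uv sync --extra local-embeddings --locked
```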
ℹ️ Review info
Configuration used: .coderabbit.yaml | Review profile: CHILL | Plan: Pro
Run ID: 8964382c-43c1-4aa7-b77d-773253a0e1c0
⛔ Files ignored due to path filters (2): uv-gpu.lock, uv.lock (excluded by !**/*.lock)
📒 Files selected for processing (2): README.md, pyproject.toml
🚧 Files skipped from review as they are similar to previous changes (1): pyproject.toml
Made-with: Cursor
[LEADS-199] Add CPU-only PyTorch support to reduce package size
Description
Type of change
Tools used to create PR
Identify any AI code assistants used in this PR (for transparency and review context)
Related Tickets & Documents
Checklist before requesting a review
Testing
Summary by CodeRabbit