chore: add llm finetuning benchmark #1095

Merged
jfrery merged 31 commits into main from chore/add_llama_finetuning_benchmark
Oct 3, 2025

Conversation

Contributor

@jfrery jfrery commented May 28, 2025

No description provided.

@jfrery jfrery requested a review from a team as a code owner May 28, 2025 12:38
@cla-bot cla-bot bot added the cla-signed label May 28, 2025
@jfrery jfrery force-pushed the chore/add_llama_finetuning_benchmark branch from abb4b2e to b04bad6 on May 28, 2025 14:24
@jfrery jfrery force-pushed the chore/add_llama_finetuning_benchmark branch from 6595bad to 188f773 on May 28, 2025 22:00
@jfrery jfrery force-pushed the chore/add_llama_finetuning_benchmark branch from 3018276 to 28c1410 on May 30, 2025 09:13
@jfrery jfrery force-pushed the chore/add_llama_finetuning_benchmark branch from ba90eb8 to 147c7fd on June 5, 2025 06:47

# Install PyTorch with appropriate CUDA support
if [ "${{ matrix.device }}" == "gpu" ]; then
  pip install "torch>=2.0.0" --extra-index-url https://download.pytorch.org/whl/cu118
Contributor

@kcelia kcelia Jun 6, 2025

On this machine `nvidia-smi` already works; you just need to enable Concrete GPU:

      - name: Install GPU concrete-python
        id: install-gpu-concrete-python
        run: |
          poetry run pip show concrete-python || echo "concrete-python not installed"
          CONCRETE_WITH_VERSION=$(poetry run pip freeze | grep concrete-python)
          poetry run pip uninstall -y concrete-python
          poetry run pip install --extra-index-url https://pypi.zama.ai/gpu $CONCRETE_WITH_VERSION
          poetry run pip show concrete-python || echo "concrete-python not installed"
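
The pin-and-reinstall trick in the step above can be sketched in isolation. The `pip freeze` listing below is a hypothetical stand-in for a real environment, and `2.7.0` is an illustrative version, not the one used in CI:

```shell
# Capture the exact concrete-python pin from a (hypothetical) pip freeze
# listing, so the very same version can be reinstalled from the GPU index.
freeze_output="numpy==1.26.4
concrete-python==2.7.0"
CONCRETE_WITH_VERSION=$(printf '%s\n' "$freeze_output" | grep concrete-python)
echo "would run: pip install --extra-index-url https://pypi.zama.ai/gpu $CONCRETE_WITH_VERSION"
```

Reinstalling the identical version from `https://pypi.zama.ai/gpu` swaps the CPU wheel for the GPU build without changing any other resolved dependency.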

- name: Set up Python
  uses: actions/setup-python@42375524e23c412d93fb67b49958b491fce71c38
  with:
    python-version: "3.10"
Contributor

If we follow the logic of our CI, you should pick Python 3.12.


# Install any additional requirements from the project
if [ -f "requirements.txt" ]; then
  pip install -r requirements.txt
Contributor

Where is this file?

fi

# Install accelerate BEFORE other dependencies to ensure correct version
pip install 'accelerate>=1.1.0'
Contributor

Why don't you gather all the package dependencies in one place?

info["swap"] = get_size(psutil.swap_memory().total)

# Check for GPU information
if torch.cuda.is_available():
Contributor

If the model runs on GPU, shouldn't you check for Concrete GPU as well?
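
A minimal sketch of what such a combined check could look like. `collect_gpu_info` is an illustrative name, `get_size` mirrors the helper used in the snippet above, and detecting concrete-python by importability alone is an assumption — a stricter check for the GPU build might parse `pip show concrete-python` instead:

```python
import importlib.util


def get_size(num_bytes, suffix="B"):
    # Render a byte count with binary-prefix units, e.g. 16384 -> "16.00KB"
    for unit in ["", "K", "M", "G", "T", "P"]:
        if num_bytes < 1024:
            return f"{num_bytes:.2f}{unit}{suffix}"
        num_bytes /= 1024
    return f"{num_bytes:.2f}E{suffix}"


def collect_gpu_info():
    # Record both the CUDA device torch sees and whether concrete-python is
    # importable, as the review suggests checking both when running on GPU.
    info = {}
    if importlib.util.find_spec("torch") is not None:
        import torch

        info["cuda_available"] = torch.cuda.is_available()
    info["concrete_installed"] = importlib.util.find_spec("concrete") is not None
    return info
```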

if force_cpu or device_type == "cpu":
return "cpu"

if device_type == "gpu":
Contributor

Add a check for Concrete GPU?
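
One hedged way to fold such a check into the device-selection branches shown above. `resolve_device` and its `gpu_available` parameter are hypothetical; the parameter stands in for a runtime probe such as `torch.cuda.is_available()` (plus a Concrete GPU build check) so the logic stays testable:

```python
def resolve_device(device_type, force_cpu=False, gpu_available=False):
    # Mirror the snippet's branches: an explicit CPU request always wins,
    # and a GPU request falls back to CPU when no usable GPU (driver plus
    # Concrete GPU build) was actually detected.
    if force_cpu or device_type == "cpu":
        return "cpu"
    if device_type == "gpu":
        return "cuda" if gpu_available else "cpu"
    return "cpu"
```

Falling back rather than raising keeps the benchmark runnable on CPU-only runners, at the cost of silently losing the speedup; raising instead is a defensible alternative.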

@github-actions

Coverage passed ✅

Coverage details

---------- coverage: platform linux, python 3.8.18-final-0 -----------
Name    Stmts   Miss  Cover   Missing
-------------------------------------
TOTAL    8885      0   100%

63 files skipped due to complete coverage.

@github-actions

⚠️ Known flaky tests have been rerun ⚠️

One or several tests initially failed but were identified as known flaky tests. Therefore, they were rerun and passed. See below for more details.

Failed tests details

Known flaky tests that initially failed:

  • tests/torch/test_compile_torch.py::test_compile_torch_or_onnx_networks[get_and_compile--FHE_simulation-TorchDivide-input_output_feature9-relu]

Contributor

@kcelia kcelia left a comment

In my opinion, it's better to use the pre-installed CUDA of the image and just install Concrete GPU.

@jfrery jfrery merged commit 1c356a9 into main Oct 3, 2025
28 checks passed
@jfrery jfrery deleted the chore/add_llama_finetuning_benchmark branch October 3, 2025 07:36

3 participants