
Convert python samples to use UV with uv.lock and .toml files also remove conda dependency #2610


Open · wants to merge 6 commits into base: development

Conversation


@eleemhui eleemhui commented Mar 6, 2025

Existing Sample Changes

Description

ONSAM 1917
These changes use uv to manage the Python dependencies, kernels, and virtual environments.
Each test is isolated from all other tests, so packages added, changed, or removed will not affect later tests.
A pyproject.toml file has been added to each sample test listing all dependencies and their required versions (taking the place of requirements.txt).
A uv.lock file is added to keep the installed packages consistent across any environment.
All calls to 'conda' in the sample.json files have been removed, along with the dependence on the AI Tools Offline Installer.
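As a sketch of what this migration looks like in practice (the file contents and names below are illustrative, not copied from the PR), a sample's pyproject.toml might resemble:

```toml
# Hypothetical pyproject.toml for one sample; the package names and
# version pins are examples, not the PR's actual dependencies.
[project]
name = "intel-sample-example"
version = "0.1.0"
requires-python = ">=3.9"
dependencies = [
    "numpy>=1.26",
    "jupyter",
]
```

Running `uv sync` in the sample directory then resolves these dependencies against the committed uv.lock into an isolated .venv, and `uv run python sample.py` executes the sample inside it, replacing the previous conda-based environment setup.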

The following test cases are completed and tested:

Features-and-Functionality:
IntelPyTorch_GPU_InferenceOptimization_with_AMP
IntelPyTorch_TrainingOptimizations_AMX_BF16
IntelPython_GPU_dpnp_Genetic_Algorithm
IntelPython_Numpy_Numba_dpnp_kNN
IntelPython_XGBoost_Performance
IntelTensorFlow_AMX_BF16_Inference
IntelTensorFlow_AMX_BF16_Training
IntelTensorFlow_InferenceOptimization
IntelTensorFlow_Transformer_AMX_bfloat16_MixedPrecision
IntelTransformers_Quantization

Getting-Started-Samples:
INC-Quantization-Sample-for-PyTorch
INC-Sample-for-Tensorflow
Intel_Extension_For_SKLearn_GettingStarted
Intel_Extension_For_TensorFlow_GettingStarted

External Dependencies

none.

Type of change

Please delete options that are not relevant. Add an 'X' to the one that is applicable.

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Implement fixes for ONSAM Jiras
    ONSAM 1917

How Has This Been Tested?

Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details of your test configuration.

  • Command Line
  • oneapi-cli
  • Visual Studio
  • Eclipse IDE
  • VSCode
• When compiling, the compiler flags "-Wall -Wformat-security -Werror=format-security" were used

aemorabr and others added 3 commits March 5, 2025 11:23
* uv for getting_started_with_intel_neural_compressor_for_quantization

* install uv on getting_started_with_intel_neural_compressor_for_quantization

* uv for intel_neural_compressor_accelerate_inference_with_intel_optimization_for_tensorflow

* uv for intel_extension_for_scikit_learn_getting_started

* Adding toml for intel_extension_for_tensorflow_getting_started

* uv for intel_extension_for_tensorflow_getting_started
* Migration to uv for intel_pytorch_gpu_inference_optimization_with_amp
* uv  for genetic_algorithms_on_gpu_using_intel_distribution_of_python_dpnp
* uv for quantizing_transformer_model_using_intel_extension_for_transformers_(itrex)
Signed-off-by: Mora Jimenez, Kevin <[email protected]>

* working tests with uv: tensorflow-amx-bf16-inference and training
* uv for IntelPyTorch_TrainingOptimization_AMX_BF16
* uv set up for IntelPython_Numpy_Numba_dpnp_kNN
* uv for tensorflow_transformer_with_advanced_matrix_extensions_bfloat16_mixed_precision_learning.
Signed-off-by: Mora Jimenez, Kevin <[email protected]>

* add uv to IntelPython_XGBoost_Performance
* Fix: add pip install uv to samples.json AMX_BF16 and Numpy_Numba_dpnp
Signed-off-by: Mora Jimenez, Kevin <[email protected]>
Co-authored-by: Mora Brenes, Allan <[email protected]>
Co-authored-by: Mora Jimenez, Kevin <[email protected]>
Co-authored-by: Edgar Parra <[email protected]>
@eleemhui eleemhui changed the title Convert python samples to use UV took with uv.lock and .toml files also remove conda Convert python samples to use UV with uv.lock and .toml files also remove conda dependency Mar 6, 2025
@Ankur-singh (Contributor) commented Mar 7, 2025

Hi @eleemhui, thanks for the amazing PR. It looks really good. I would love to know how you tested the samples. I use a combination of Makefile and Docker; TBH it's quite clumsy. Would love to know your setup.

We can merge the PR after testing.

@eleemhui (Author) commented Mar 10, 2025

@Ankur-singh, we tested these using a Sapphire Rapids server we had access to. All these samples are Python, so we did not need a Makefile for any of them; we just added the correct Python dependencies to each pyproject.toml file.
While working on them we checked that the notebooks ran correctly in VSCode, and once they worked we tested each one by running the commands listed in sample.json and verifying the correct output (for both the .py and the .ipynb).
Some of the samples were for GPU, and we tested those on Intel Tiber Cloud, which was more clumsy.

@Ankur-singh (Contributor) commented:
Can relate to that. Since we have to do it very frequently, I wrote a Python script to parse the sample.json file, dynamically create a bash script based on the commands from sample.json, and run it inside Docker. Still quite clumsy, and it breaks every now and then.

I have the test environment set up and ready. I will be testing this PR over the weekend; I have a few things to take care of tomorrow. Hope that's fine.
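A rough sketch of a runner like the one described above (the sample.json schema used here is a simplified assumption, not the exact oneAPI-samples format, and the Docker execution step is left out):

```python
import json

def build_test_script(sample_json_path):
    """Read a sample.json file and emit one bash script for its test steps.

    Assumes a simplified schema: {"ciTests": {"linux": [{"env": [...],
    "steps": [...]}]}}, where "env" and "steps" are lists of shell
    commands. The real sample.json layout may differ.
    """
    with open(sample_json_path) as f:
        spec = json.load(f)

    commands = []
    for test in spec.get("ciTests", {}).get("linux", []):
        commands.extend(test.get("env", []))    # environment setup lines
        commands.extend(test.get("steps", []))  # actual test commands

    # Fail fast on the first broken command, as a CI job would.
    return "#!/bin/bash\nset -euo pipefail\n" + "\n".join(commands) + "\n"
```

In a real setup the returned script would be written to a file and executed inside a Docker container; here it is returned as a string so the parsing step stays self-contained.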

@aemorabr commented:
@Ankur-singh, just wanted to add that there is a second-part PR for some other Python samples: #2623. It is the same solution, applied to the rest of the samples left out of this one.

eleemhui and others added 2 commits April 2, 2025 16:45
* fix Intel_Extension_For_SKLearn_GettingStarted
* fix INC-Sample-for-Tensorflow
* fixed INC-Quantization-Sample-for-PyTorch
* fix IntelTransformers_Quantization
* fix IntelTensorFlow_Transformer_AMX_bfloat16_MixedPrecision
* fixed IntelTensorFlow_InferenceOptimization
* fixed IntelTensorFlow_AMX_BF16_Inference
* fixed IntelTensorFlow_AMX_BF16_Training
* fix IntelPyTorch_TrainingOptimizations_AMX_BF16
* fix IntelPyTorch_GPU_InferenceOptimization_with_AMP