Convert Python samples to use uv with uv.lock and pyproject.toml files; also remove conda dependency #2610
base: development
Conversation
* uv for getting_started_with_intel_neural_compressor_for_quantization
* install uv on getting_started_with_intel_neural_compressor_for_quantization
* uv for intel_neural_compressor_accelerate_inference_with_intel_optimization_for_tensorflow
* uv for intel_extension_for_scikit_learn_getting_started
* Adding toml for intel_extension_for_tensorflow_getting_started
* uv for intel_extension_for_tensorflow_getting_started
* Migration to uv for intel_pytorch_gpu_inference_optimization_with_amp
* uv for genetic_algorithms_on_gpu_using_intel_distribution_of_python_dpnp
* uv for quantizing_transformer_model_using_intel_extension_for_transformers_(itrex)
* working tests with uv: tensorflow-amx-bf16-inference and training
* uv for IntelPyTorch_TrainingOptimization_AMX_BF16
* uv set up for IntelPython_Numpy_Numba_dpnp_kNN
* uv for tensorflow_transformer_with_advanced_matrix_extensions_bfloat16_mixed_precision_learning
* add uv to IntelPython_XGBoost_Performance
* Fix: add pip install uv to samples.json AMX_BF16 and Numpy_Numba_dpnp

Signed-off-by: Mora Jimenez, Kevin <[email protected]>
Co-authored-by: Mora Brenes, Allan <[email protected]>
Co-authored-by: Mora Jimenez, Kevin <[email protected]>
Co-authored-by: Edgar Parra <[email protected]>
Hi @eleemhui, thanks for the amazing PR. It looks really good. I would love to know how you tested the samples. I use a combination of Makefile and Docker; TBH it's quite clumsy. Would love to know your setup. We can merge the PR after testing.
@Ankur-singh We tested these using a Sapphire Rapids server we had access to. All these samples are Python, so we did not need to use a Makefile for any of them; we just added all the correct Python dependencies to each sample's pyproject.toml file.
Can relate to that. Since we have to do it very frequently, I wrote a Python script to parse the … I have the setup environment ready and will be testing this PR over the weekend; I have a few things to take care of tomorrow. Hope that's fine.
@Ankur-singh Just wanted to add that there is a second-part PR for some other Python samples: #2623. It applies the same solution to the rest of the samples left out of this one.
* fix Intel_Extension_For_SKLearn_GettingStarted
* fix INC-Sample-for-Tensorflow
* fixed INC-Quantization-Sample-for-PyTorch
* fix IntelTransformers_Quantization
* fix IntelTensorFlow_Transformer_AMX_bfloat16_MixedPrecision
* fixed IntelTensorFlow_InferenceOptimization
* fixed IntelTensorFlow_AMX_BF16_Inference
* fixed IntelTensorFlow_AMX_BF16_Training
* fix IntelPyTorch_TrainingOptimizations_AMX_BF16
* fix IntelPyTorch_GPU_InferenceOptimization_with_AMP
Existing Sample Changes
Description
ONSAM 1917
These changes use uv to manage the Python dependencies, kernels, and virtual environments.
Each test is isolated from all other tests; packages added, changed, or removed in one test will not affect later tests.
A pyproject.toml file has been added to each sample, listing all dependencies and their required versions (replacing requirements.txt).
Adds a uv.lock file so that the same package versions are installed in any environment.
Removes all calls to conda in the sample.json files, as well as the dependence on the AI Tools Offline Installer.
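To make the mechanics concrete, a converted sample's pyproject.toml might look like this (a hypothetical sketch; the project name, package names, and version pins are illustrative, not taken from this PR):

```toml
# Hypothetical pyproject.toml for one sample; names and versions
# are illustrative, not copied from the repository.
[project]
name = "inteltensorflow-amx-bf16-inference"
version = "0.1.0"
requires-python = ">=3.9"
dependencies = [
    "tensorflow>=2.14",
    "notebook",
]
```

With this file in place, `uv lock` resolves the dependencies and pins exact versions into uv.lock, and `uv sync` or `uv run` then reproduces that exact environment on any machine, which is what gives each test its isolation and consistency.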
The following test cases are completed and tested:
Features-and-Functionality:
IntelPyTorch_GPU_InferenceOptimization_with_AMP
IntelPyTorch_TrainingOptimizations_AMX_BF16
IntelPython_GPU_dpnp_Genetic_Algorithm
IntelPython_Numpy_Numba_dpnp_kNN
IntelPython_XGBoost_Performance
IntelTensorFlow_AMX_BF16_Inference
IntelTensorFlow_AMX_BF16_Training
IntelTensorFlow_InferenceOptimization
IntelTensorFlow_Transformer_AMX_bfloat16_MixedPrecision
IntelTransformers_Quantization
Getting-Started-Samples:
INC-Quantization-Sample-for-PyTorch
INC-Sample-for-Tensorflow
Intel_Extension_For_SKLearn_GettingStarted
Intel_Extension_For_TensorFlow_GettingStarted
External Dependencies
None.
Type of change
Please delete options that are not relevant. Mark the applicable option with an 'X'.
How Has This Been Tested?
Please describe the tests that you ran to verify your changes, and provide instructions so we can reproduce them. Please also list any relevant details of your test configuration.