Describe the bug
I am following the example doc at
https://learn.microsoft.com/en-us/azure/ai-foundry/foundry-local/how-to/how-to-compile-hugging-face-models?view=foundry-classic&tabs=PowerShell&source=docs
Unfortunately, the tutorial appears to be missing an installation dependency. When I invoke olive, I get the following error:
ModuleNotFoundError: No module named 'onnxruntime'
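Presumably the missing step is installing ONNX Runtime into the same venv (e.g. `pip install onnxruntime`), though the docs should confirm which package/extra is intended. As a sketch, a preflight check like the hypothetical helper below (not part of Olive) makes the missing module obvious before olive is invoked:

```python
import importlib.util
import sys

def ensure_onnxruntime() -> bool:
    """Return True if onnxruntime is importable; print an install hint otherwise."""
    if importlib.util.find_spec("onnxruntime") is not None:
        return True
    print("onnxruntime is not installed in this environment.")
    # Assumption: the plain CPU wheel is what the tutorial expects.
    print(f"Try: {sys.executable} -m pip install onnxruntime")
    return False

if __name__ == "__main__":
    sys.exit(0 if ensure_onnxruntime() else 1)
```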
To Reproduce
(venv) PS C:\work\microsoft\Foundry-Local> olive auto-opt --model_name_or_path meta-llama/Llama-3.2-1B-Instruct --trust_remote_code --output_path models/llama --device cpu --provider CPUExecutionProvider --use_ort_genai --precision int4 --log_level 1
Loading HuggingFace model from meta-llama/Llama-3.2-1B-Instruct
[2025-12-15 22:31:06,135] [INFO] [run.py:99:run_engine] Running workflow default_workflow
[2025-12-15 22:31:06,139] [WARNING] [run.py:111:run_engine] ORT log severity level configuration ignored since the module isn't installed.
[2025-12-15 22:31:06,144] [INFO] [cache.py:138:__init__] Using cache directory: C:\work\microsoft\Foundry-Local\.olive-cache\default_workflow
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "C:\work\microsoft\Foundry-Local\venv\Scripts\olive.exe\__main__.py", line 7, in <module>
File "C:\work\microsoft\Foundry-Local\venv\Lib\site-packages\olive\cli\launcher.py", line 66, in main
service.run()
File "C:\work\microsoft\Foundry-Local\venv\Lib\site-packages\olive\cli\auto_opt.py", line 173, in run
return self._run_workflow()
^^^^^^^^^^^^^^^^^^^^
File "C:\work\microsoft\Foundry-Local\venv\Lib\site-packages\olive\cli\base.py", line 44, in _run_workflow
workflow_output = olive_run(run_config)
^^^^^^^^^^^^^^^^^^^^^
File "C:\work\microsoft\Foundry-Local\venv\Lib\site-packages\olive\workflows\run\run.py", line 178, in run
return run_engine(package_config, run_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\work\microsoft\Foundry-Local\venv\Lib\site-packages\olive\workflows\run\run.py", line 131, in run_engine
accelerator_spec = create_accelerator(
^^^^^^^^^^^^^^^^^^^
File "C:\work\microsoft\Foundry-Local\venv\Lib\site-packages\olive\systems\accelerator_creator.py", line 174, in create_accelerator
system_config = normalizer.normalize()
^^^^^^^^^^^^^^^^^^^^^^
File "C:\work\microsoft\Foundry-Local\venv\Lib\site-packages\olive\systems\accelerator_creator.py", line 39, in normalize
self.system_supported_eps = target.get_supported_execution_providers()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\work\microsoft\Foundry-Local\venv\Lib\site-packages\olive\systems\local.py", line 66, in get_supported_execution_providers
return get_ort_available_providers()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\work\microsoft\Foundry-Local\venv\Lib\site-packages\olive\common\ort_inference.py", line 72, in get_ort_available_providers
import onnxruntime as ort
ModuleNotFoundError: No module named 'onnxruntime'
Expected behavior
Following the instructions, the olive auto-opt command should run without a missing-module error.