Trying to open a Gemma 3n model results in an error:
from optimum.intel import OVModelForCausalLM

model = OVModelForCausalLM.from_pretrained("google/gemma-3n-e4b-it", device_map="auto")
No OpenVINO files were found for google/gemma-3n-e4b-it, setting `export=True` to convert the model to the OpenVINO IR. Don't forget to save the resulting model with `.save_pretrained()`
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/paris/.pyenv/versions/npu/lib/python3.12/site-packages/optimum/intel/openvino/modeling_base.py", line 505, in from_pretrained
    return super().from_pretrained(
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/paris/.pyenv/versions/npu/lib/python3.12/site-packages/optimum/modeling_base.py", line 419, in from_pretrained
    return from_pretrained_method(
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/paris/.pyenv/versions/npu/lib/python3.12/site-packages/optimum/intel/openvino/modeling_decoder.py", line 347, in _export
    main_export(
  File "/home/paris/.pyenv/versions/npu/lib/python3.12/site-packages/optimum/exporters/openvino/__main__.py", line 267, in main_export
    raise ValueError(
ValueError: Trying to export a gemma3n model, that is a custom or unsupported architecture, but no custom export configuration was passed as `custom_export_configs`. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum-intel/issues if you would like the model type gemma3n to be supported natively in the OpenVINO export.
The same error occurs when trying to export a Gemma 3n model with optimum-cli.
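For reference, a CLI repro would look something like the following (the output directory name here is arbitrary, chosen for illustration):

```shell
# Attempt to export Gemma 3n to OpenVINO IR via the CLI; this hits the same
# ValueError because no export config is registered for the gemma3n model type.
optimum-cli export openvino --model google/gemma-3n-e4b-it gemma-3n-e4b-it-ov
```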
spew8712