
Conversation

@nikita-savelyevv (Collaborator) commented Dec 15, 2025

What does this PR do?

  • Added a configuration dict _DEFAULT_IGNORED_SCOPE_CONFIGS for storing a default quantization ignored scope per model id. Such an ignored scope is applied irrespective of the quantization method (see the sketch after this list).
  • Added an export_model_id argument to OVBaseModel._from_pretrained(), which is passed from OVBaseModel._export(). This is needed for more robust matching of the default quantization configuration / ignored scope when exporting a model via OVModelForX.from_pretrained(model_id, export=True).
  • Reworked the test_quantization.py::OVWeightCompressionTest.test_ovmodel_4bit_auto_compression test to cover more model types besides OVModelForCausalLM.
  • Added test_quantization.py::OVWeightCompressionTest.test_ovmodel_default_ignored_scope and test_exporters_cli.py::OVCLIExportTestCase.test_exporters_cli_with_default_ignored_scope to exercise the new default ignored scope matching logic.
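
A rough sketch of what such a default ignored scope mapping and its lookup could look like (the key structure, model id, and helper name below are illustrative assumptions, not the PR's actual code):

# Illustrative sketch only: key structure, model id, and helper name are assumptions.
from typing import Optional

# Maps a Hub model id to an ignored scope applied regardless of quantization method.
_DEFAULT_IGNORED_SCOPE_CONFIGS = {
    "example-org/example-model": {
        "patterns": [".*lm_head.*"],  # layers matching these patterns are left unquantized
    },
}


def _get_default_ignored_scope(model_id: str) -> Optional[dict]:
    """Return the default ignored scope registered for `model_id`, if any."""
    return _DEFAULT_IGNORED_SCOPE_CONFIGS.get(model_id)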

Tickets CVS-175336, CVS-166099.

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you make sure to update the documentation with your changes?
  • Did you write any new necessary tests?

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

Copilot AI (Contributor) left a comment

Copilot encountered an error and was unable to review this pull request. You can try again by re-requesting a review.

@echarlaix (Collaborator) left a comment

Thanks for the PR!

model._apply_quantization(
    quantization_config, compile_only, compile_model, str(model_id), trust_remote_code
)
model_id = export_model_id or getattr(config, "name_or_path", model_id)

why use export_model_id instead of directly using the config? (also, it looks like export_model_id will be used later on to load the config anyway, to get config.name_or_path to add as a candidate)

Suggested change:
- model_id = export_model_id or getattr(config, "name_or_path", model_id)
+ model_id = getattr(config, "name_or_path", model_id)

likely related to:

This is needed for more robust matching of default quantization configuration / ignored scope when exporting a model via OVModelForX.from_pretrained(model_id, export=True)

would you mind elaborating?
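
A standalone toy illustration of the mismatch the export_model_id fallback appears intended to guard against (the ids, paths, and dummy class below are made up, not taken from the PR):

# Toy illustration only: ids, paths, and DummyConfig are hypothetical.
class DummyConfig:
    # After save_pretrained() to a local directory, name_or_path may hold that
    # local path rather than the original Hub id the export was requested with.
    name_or_path = "/tmp/local_copy_of_model"


config = DummyConfig()
export_model_id = "example-org/example-model"  # id passed through from _export()
fallback_model_id = "unknown"

# PR's variant: prefer the id the export was requested with.
model_id = export_model_id or getattr(config, "name_or_path", fallback_model_id)
print(model_id)  # example-org/example-model

# Suggested variant: rely on the config alone.
model_id = getattr(config, "name_or_path", fallback_model_id)
print(model_id)  # /tmp/local_copy_of_model, which would miss a Hub-id-keyed default entry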

        The quantization configuration to which the default ignored scope will be applied.
    Returns:
        Updated quantization configuration with the default ignored scope applied.
    """

we could add a check to make sure quantization_config is an instance of OVPipelineQuantizationConfig
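
A minimal sketch of that check (the enclosing function name, signature, and import path are assumed, not taken from the PR):

# Minimal sketch of the suggested isinstance check; the function name, signature,
# and import path are assumptions.
from optimum.intel import OVPipelineQuantizationConfig  # import path assumed


def _apply_default_ignored_scope(quantization_config, default_ignored_scope):
    if not isinstance(quantization_config, OVPipelineQuantizationConfig):
        raise TypeError(
            f"Expected an OVPipelineQuantizationConfig, got {type(quantization_config).__name__}"
        )
    # ... apply default_ignored_scope to the per-submodel configs here ...
    return quantization_config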

# 1. Create a local copy of the model so that we can override _name_or_path
pt_model = auto_model_cls.from_pretrained(MODEL_NAMES[test_model_id])
pt_model.save_pretrained(tmpdir)
try:
