I'd like to request support for the Llama 3.2 11B Vision Instruct model (the `mllama` architecture) so that it can be quantized with optimum-cli. Currently, attempting to download and quantize the model fails with the following error:
"ValueError: Trying to export a mllama model, that is a custom or unsupported architecture, but no custom export configuration was passed as custom_export_configs"