Description
I downloaded the model from Hugging Face, and when I tried to run inference.py, I got this error:
Traceback (most recent call last):
  File "/data1/xxx/vila-u/inference.py", line 45, in <module>
    model = vila_u.load(args.model_path)
  File "/data1/xxx/vila-u/vila_u/entry.py", line 34, in load
    model = load_pretrained_model(model_path, **kwargs)[1]
  File "/data1/xxx/vila-u/vila_u/model/builder.py", line 30, in load_pretrained_model
    model = VILAULlamaModel(
  File "/data1/xxx/vila-u/vila_u/model/language_model/vila_u_llama.py", line 30, in __init__
    return self.init_vlm(config=config, *args, **kwargs)
  File "/data1/xxx/vila-u/vila_u/model/vila_u_arch.py", line 51, in init_vlm
    self.vision_tower = build_vision_tower(vision_tower_cfg, config)
  File "/data1/xxx/vila-u/vila_u/model/multimodal_encoder/builder.py", line 25, in build_vision_tower
    vision_tower = RQVAESIGLIPTransformerVisionTower(model_name_or_path, config)
  File "/data1/xxx/vila-u/vila_u/model/multimodal_encoder/rqvaesigliptransformer_encoder.py", line 15, in __init__
    self.vision_tower = RQVAESIGLIPTransformer.from_pretrained(model_name_or_path, torch_dtype=eval(config.model_dtype))
  File "/data1/xxx/miniconda3/envs/vila-u/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3462, in from_pretrained
    model = cls(config, *model_args, **model_kwargs)
  File "/data1/xxx/vila-u/vila_u/model/multimodal_encoder/rqvaesigliptransformer/modeling_rqvaesigliptransformer.py", line 16, in __init__
    self.rqvaesiglip = RQVAESiglipModel._from_config(rqvaesiglip_config)
  File "/data1/xxx/miniconda3/envs/vila-u/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1249, in _from_config
    model = cls(config, **kwargs)
  File "/data1/xxx/vila-u/vila_u/model/multimodal_encoder/rqvaesigliptransformer/rqvaesiglip/modeling_rqvaesiglip.py", line 19, in __init__
    siglip_config = SiglipModel.config_class.from_pretrained(config.pretrained_model)
  File "/data1/xxx/miniconda3/envs/vila-u/lib/python3.10/site-packages/transformers/configuration_utils.py", line 615, in from_pretrained
    config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "/data1/xxx/miniconda3/envs/vila-u/lib/python3.10/site-packages/transformers/configuration_utils.py", line 644, in get_config_dict
    config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "/data1/xxx/miniconda3/envs/vila-u/lib/python3.10/site-packages/transformers/configuration_utils.py", line 699, in _get_config_dict
    resolved_config_file = cached_file(
  File "/data1/xxx/miniconda3/envs/vila-u/lib/python3.10/site-packages/transformers/utils/hub.py", line 429, in cached_file
    raise EnvironmentError(
OSError: We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like google/siglip-large-patch16-256 is not the path to a directory containing a file named config.json. Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.
This is my script:
CUDA_VISIBLE_DEVICES=3 python inference.py \
    --model_path /data2/xxx/Model/vila_u \
    --prompt "A snowy mountain." \
    --save_path generated_images/ \
    --generation_nums 1
My environment is an offline server. The error message seems to indicate that the weights of siglip-large-patch16-256 are missing: the traceback shows modeling_rqvaesiglip.py loading the SigLIP config from the "pretrained_model" field, so transformers tried to connect to huggingface.co to download google/siglip-large-patch16-256.
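In case it matters, my current plan is to pre-fetch the weights on a machine with internet access and copy them over. A minimal sketch of what I would run there (the local_dir below is just an example path):

```python
# Hypothetical pre-fetch on an online machine; the resulting folder would
# then be copied to the offline server.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="google/siglip-large-patch16-256",
    local_dir="/data2/xxx/Model/siglip-large-patch16-256",  # example destination
)
```

On the offline server I would then set HF_HUB_OFFLINE=1 (or TRANSFORMERS_OFFLINE=1), as suggested by the offline-mode link in the error message.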
I'm wondering whether I need to prepare the weights of siglip-large locally.
And do I need to change "pretrained_model": "google/siglip-large-patch16-256" in the vila-u/vision_tower/config.json file to point to my local siglip-large weights path (see the sketch below)?
Or is the error caused by something else?
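For clarity, this is the kind of edit I have in mind, assuming a local copy of the SigLIP weights (the local path below is hypothetical):

```python
# Hypothetical edit: point the vision tower config at a local SigLIP directory.
import json

cfg_path = "/data2/xxx/Model/vila_u/vision_tower/config.json"
with open(cfg_path) as f:
    cfg = json.load(f)

cfg["pretrained_model"] = "/data2/xxx/Model/siglip-large-patch16-256"  # example local path

with open(cfg_path, "w") as f:
    json.dump(cfg, f, indent=2)
```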
Looking forward to hearing back from you!