Encountering OSError: We couldn't connect to 'https://huggingface.co' to load this file
OSError: We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like facebook/mms-1b-all is not the path to a directory containing a file named config.json.
Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.
Failed to load pretrained models. Any solutions?
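For context, the call that triggers this is essentially the following (a minimal sketch of what I'm running, not my exact script):

```python
from transformers import AutoProcessor, Wav2Vec2ForCTC

# Minimal repro sketch; my actual script differs slightly
model_id = "facebook/mms-1b-all"
processor = AutoProcessor.from_pretrained(model_id)  # fails here with the OSError above
model = Wav2Vec2ForCTC.from_pretrained(model_id)
```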
OK, I first downloaded it via Colab and then copied it to my own env. Another problem is:

Traceback (most recent call last):
File "asr_infer_in_a_row.py", line 17, in
model = Wav2Vec2ForCTC.from_pretrained(model_path)
File "/home/zhangzheyu/miniconda3/envs/mms/lib/python3.8/site-packages/transformers/modeling_utils.py", line 2446, in from_pretrained
raise EnvironmentError(
OSError: Error no file named pytorch_model.bin, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory /home/zhangzheyu/fairseq/examples/mms/mms-1b-all.
It seems that a pytorch_model.bin file is needed, but the repo does not have that file.
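A quick diagnostic sketch to see what actually landed in that folder (path taken from the traceback; one of the weight files named in the error has to sit next to config.json):

```python
import os

# Path from the traceback above
model_dir = "/home/zhangzheyu/fairseq/examples/mms/mms-1b-all"

# from_pretrained needs one of these weight files next to config.json
expected = {"pytorch_model.bin", "tf_model.h5", "model.ckpt.index", "flax_model.msgpack"}

for name in sorted(os.listdir(model_dir)):
    size_mb = os.path.getsize(os.path.join(model_dir, name)) / 1e6
    marker = "  <-- weights" if name in expected else ""
    print(f"{name:30s} {size_mb:10.1f} MB{marker}")
```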
Are you connected to the internet @ZZ99? You'll need an internet connection to load the weights the first time. After that, you can load them from the cache and run offline!
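For example, once a first online download has succeeded, something like this works without a connection, because everything is read from the local cache (`local_files_only` just makes that explicit):

```python
from transformers import AutoProcessor, Wav2Vec2ForCTC

# After one successful download, the files sit in the Hugging Face cache
# (~/.cache/huggingface by default) and can be loaded without network access.
model_id = "facebook/mms-1b-all"
processor = AutoProcessor.from_pretrained(model_id, local_files_only=True)
model = Wav2Vec2ForCTC.from_pretrained(model_id, local_files_only=True)
```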
The model path should be set to the model id on the Hub (facebook/mms-1b-all) or a local folder where you've cloned the repo.
Here is an example of how one can use MMS models: https://huggingface.co/spaces/mms-meta/MMS/blob/main/asr.py#L37
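In short, the pattern from that script looks roughly like this (a condensed sketch: `model_path` can be the Hub id or your local clone, and the language code and audio array below are placeholders):

```python
import torch
from transformers import AutoProcessor, Wav2Vec2ForCTC

# Either the Hub id or a local clone of the repo works here
model_path = "facebook/mms-1b-all"  # or "/path/to/your/local/mms-1b-all"

processor = AutoProcessor.from_pretrained(model_path)
model = Wav2Vec2ForCTC.from_pretrained(model_path)

# Pick the language adapter you want to transcribe with (e.g. "eng")
processor.tokenizer.set_target_lang("eng")
model.load_adapter("eng")

# `audio` is a 1-D float array sampled at 16 kHz (placeholder here)
audio = torch.zeros(16_000)
inputs = processor(audio.numpy(), sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

ids = torch.argmax(logits, dim=-1)[0]
print(processor.decode(ids))
```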
Yes, I did connect to the internet. However, in China you cannot directly use from_pretrained, because huggingface.co is blocked, so you have to download the models manually via git lfs. The pytorch_model.bin is somehow missing from the git clone. I tried to use the pytorch_model.bin from facebook/mms-1b instead, but that failed. Do you know the right way to fix it? @sanchit-gandhi
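Two things I'm looking at, in case it helps anyone in the same situation: (1) if the clone only contains LFS pointer files, running `git lfs install` and then `git lfs pull` inside the clone should fetch the real pytorch_model.bin; (2) re-downloading through an endpoint that is reachable from here, roughly like this (a sketch assuming a recent huggingface_hub that supports `local_dir`; the mirror URL is only a placeholder for whatever mirror you actually use):

```python
import os

# Assumption: some Hub mirror is reachable from your network; the URL below
# is a placeholder -- point HF_ENDPOINT at the mirror you actually use,
# and set it before importing huggingface_hub.
os.environ["HF_ENDPOINT"] = "https://your-hub-mirror.example"

from huggingface_hub import snapshot_download

# Download the whole repo (config, tokenizer files and the LFS weight file)
# into a local folder that can then be passed directly to from_pretrained().
local_dir = snapshot_download(
    repo_id="facebook/mms-1b-all",
    local_dir="/home/zhangzheyu/fairseq/examples/mms/mms-1b-all",
)
print("Downloaded to", local_dir)
```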