404 Client Error: Not Found for url: https://huggingface.co/xlm-roberta-large-xnli/resolve/main/config.json

#7
by Vreins - opened

In trying to do this (pose the sequence as an NLI premise and the label as a hypothesis):

from transformers import AutoModelForSequenceClassification, AutoTokenizer
nli_model = AutoModelForSequenceClassification.from_pretrained("joeddav/xlm-roberta-large-xnli")
tokenizer = AutoTokenizer.from_pretrained("joeddav/xlm-roberta-large-xnli")
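As background for the snippet above: zero-shot classification pairs the input sequence (the premise) with each candidate label rendered as a hypothesis sentence. A minimal sketch of that label-to-hypothesis step, assuming the "This example is {}." template that the transformers zero-shot pipeline uses by default (the helper name here is illustrative, not library API):

```python
def as_hypothesis(label, template="This example is {}."):
    """Pose a candidate label as an NLI hypothesis string.

    Illustrative helper; the template mirrors the transformers
    zero-shot pipeline default, not a call this model requires.
    """
    return template.format(label)

print(as_hypothesis("politics"))  # -> This example is politics.
```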

I got this error:
404 Client Error: Not Found for url: https://huggingface.co/xlm-roberta-large-xnli/resolve/main/config.json
HTTPError Traceback (most recent call last)
/opt/conda/lib/python3.7/site-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs)
465 use_auth_token=use_auth_token,
--> 466 user_agent=user_agent,
467 )

/opt/conda/lib/python3.7/site-packages/transformers/file_utils.py in cached_path(url_or_filename, cache_dir, force_download, proxies, resume_download, user_agent, extract_compressed_file, force_extract, use_auth_token, local_files_only)
1172 use_auth_token=use_auth_token,
-> 1173 local_files_only=local_files_only,
1174 )

/opt/conda/lib/python3.7/site-packages/transformers/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, use_auth_token, local_files_only)
1335 r = requests.head(url, headers=headers, allow_redirects=False, proxies=proxies, timeout=etag_timeout)
-> 1336 r.raise_for_status()
1337 etag = r.headers.get("X-Linked-Etag") or r.headers.get("ETag")

/opt/conda/lib/python3.7/site-packages/requests/models.py in raise_for_status(self)
942 if http_error_msg:
--> 943 raise HTTPError(http_error_msg, response=self)
944

HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/xlm-roberta-large-xnli/resolve/main/config.json

During handling of the above exception, another exception occurred:

OSError Traceback (most recent call last)
/tmp/ipykernel_18/1132170543.py in
7 model_roBerta ="xlm-roberta-large-xnli"
8 # model_Bert = 'bert-base-multilingual-cased'
----> 9 tokenizer = AutoTokenizer.from_pretrained(model_roBerta, use_auth_token="hf_***")  # token redacted
10 model = TFAutoModel.from_pretrained(model_roBerta, use_auth_token="hf_***")  # token redacted

/opt/conda/lib/python3.7/site-packages/transformers/models/auto/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)
388 kwargs["_from_auto"] = True
389 if not isinstance(config, PretrainedConfig):
--> 390 config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)
391
392 use_fast = kwargs.pop("use_fast", True)

/opt/conda/lib/python3.7/site-packages/transformers/models/auto/configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
396 """
397 kwargs["_from_auto"] = True
--> 398 config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
399 if "model_type" in config_dict:
400 config_class = CONFIG_MAPPING[config_dict["model_type"]]

/opt/conda/lib/python3.7/site-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs)
476 f"- or '{pretrained_model_name_or_path}' is the correct path to a directory containing a {CONFIG_NAME} file\n\n"
477 )
--> 478 raise EnvironmentError(msg)
479
480 except json.JSONDecodeError:

OSError: Can't load config for 'xlm-roberta-large-xnli'. Make sure that:

  • 'xlm-roberta-large-xnli' is a correct model identifier listed on 'https://huggingface.co/models'

  • or 'xlm-roberta-large-xnli' is the correct path to a directory containing a config.json file

Should be fixed now.
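For context on why the 404 happened: the Hub fetches configs from https://huggingface.co/{repo_id}/resolve/main/config.json, so the bare id `xlm-roberta-large-xnli` used in the traceback points at a nonexistent repo, while the namespaced `joeddav/xlm-roberta-large-xnli` resolves to the real one. A small sketch of that URL pattern (`config_url` is a hypothetical helper mirroring the URL shown in the error, not a transformers function):

```python
def config_url(repo_id: str) -> str:
    # Mirrors the resolve URL visible in the 404 message above.
    return f"https://huggingface.co/{repo_id}/resolve/main/config.json"

# Bare id from the traceback -> the URL that returned 404:
print(config_url("xlm-roberta-large-xnli"))
# Namespaced id from the model card -> the existing repo:
print(config_url("joeddav/xlm-roberta-large-xnli"))
```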

joeddav changed discussion status to closed
