Repository Not Found for url: https://huggingface.co/CompVis/stable-diffusion-v1-4/resolve/main/tokenizer/vocab.json.

#1625
by ZeroCool22 - opened

Followed this guide: https://www.youtube.com/watch?v=w6PTviOCYQY

(diffusers) zerocool@DESKTOP-IFR8E96:~/github/diffusers/examples/dreambooth$ ./my_training.sh
The following values were not passed to accelerate launch and had defaults used instead:
--num_cpu_threads_per_process was set to 12 to improve out-of-box performance
To avoid this warning pass in values for each of the problematic parameters or run accelerate config.
Traceback (most recent call last):
File "/home/zerocool/anaconda3/envs/diffusers/lib/python3.9/site-packages/huggingface_hub/utils/_errors.py", line 213, in hf_raise_for_status
response.raise_for_status()
File "/home/zerocool/anaconda3/envs/diffusers/lib/python3.9/site-packages/requests/models.py", line 1021, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/CompVis/stable-diffusion-v1-4/resolve/main/tokenizer/vocab.json

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/home/zerocool/anaconda3/envs/diffusers/lib/python3.9/site-packages/transformers/utils/hub.py", line 408, in cached_file
resolved_file = hf_hub_download(
File "/home/zerocool/anaconda3/envs/diffusers/lib/python3.9/site-packages/huggingface_hub/file_download.py", line 1053, in hf_hub_download
metadata = get_hf_file_metadata(
File "/home/zerocool/anaconda3/envs/diffusers/lib/python3.9/site-packages/huggingface_hub/file_download.py", line 1359, in get_hf_file_metadata
hf_raise_for_status(r)
File "/home/zerocool/anaconda3/envs/diffusers/lib/python3.9/site-packages/huggingface_hub/utils/_errors.py", line 242, in hf_raise_for_status
raise RepositoryNotFoundError(message, response) from e
huggingface_hub.utils._errors.RepositoryNotFoundError: 401 Client Error. (Request ID: baPvU1nBim9S_79aRkpng)

Repository Not Found for url: https://huggingface.co/CompVis/stable-diffusion-v1-4/resolve/main/tokenizer/vocab.json.
Please make sure you specified the correct repo_id and repo_type.
If the repo is private, make sure you are authenticated.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/zerocool/github/diffusers/examples/dreambooth/train_dreambooth.py", line 657, in
main()
File "/home/zerocool/github/diffusers/examples/dreambooth/train_dreambooth.py", line 420, in main
tokenizer = CLIPTokenizer.from_pretrained(
File "/home/zerocool/anaconda3/envs/diffusers/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 1734, in from_pretrained
resolved_vocab_files[file_id] = cached_file(
File "/home/zerocool/anaconda3/envs/diffusers/lib/python3.9/site-packages/transformers/utils/hub.py", line 423, in cached_file
raise EnvironmentError(
OSError: CompVis/stable-diffusion-v1-4 is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo with use_auth_token or log in with huggingface-cli login and pass use_auth_token=True.
Traceback (most recent call last):
File "/home/zerocool/anaconda3/envs/diffusers/bin/accelerate", line 8, in
sys.exit(main())
File "/home/zerocool/anaconda3/envs/diffusers/lib/python3.9/site-packages/accelerate/commands/accelerate_cli.py", line 43, in main
args.func(args)
File "/home/zerocool/anaconda3/envs/diffusers/lib/python3.9/site-packages/accelerate/commands/launch.py", line 837, in launch_command
simple_launcher(args)
File "/home/zerocool/anaconda3/envs/diffusers/lib/python3.9/site-packages/accelerate/commands/launch.py", line 354, in simple_launcher
raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/home/zerocool/anaconda3/envs/diffusers/bin/python', 'train_dreambooth.py', '--pretrained_model_name_or_path=CompVis/stable-diffusion-v1-4', '--instance_data_dir=training', '--output_dir=my_model', '--instance_prompt=beaninstance', '--resolution=512', '--train_batch_size=1', '--gradient_accumulation_steps=1', '--learning_rate=5e-6', '--lr_scheduler=constant', '--lr_warmup_steps=0', '--max_train_steps=1000']' returned non-zero exit status 1.
./my_training.sh: line 17: --gradient_accumulation_steps=2: command not found

Hi, I have the same issue. Have you solved it yet?

As the error message says:
"If this is a private repository, make sure to pass a token having permission to this repo with use_auth_token or log in with huggingface-cli login and pass use_auth_token=True."
So you just need to add --use_auth_token to the command you are running.

Have you tried that?
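
Before re-running the training, you can also check that your authentication actually reaches the repo. A minimal sketch, assuming you have accepted the model license for CompVis/stable-diffusion-v1-4 on the Hub and logged in with huggingface-cli login:

# Minimal check that the stored token can access the gated repo.
# Assumes the license for CompVis/stable-diffusion-v1-4 was accepted on the Hub
# and a token was saved via `huggingface-cli login`.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="CompVis/stable-diffusion-v1-4",
    filename="tokenizer/vocab.json",
    use_auth_token=True,  # newer huggingface_hub versions use token=True instead
)
print(path)  # local cache path of vocab.json if authentication worked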

Where do I use this --use_auth_token?

I have the same problem when I train Textual Inversion.
The issue was solved by adding use_auth_token=True as an argument to the from_pretrained() function.

Taking the code in textual_inversion.py as an example,
change:
tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_model_name_or_path, subfolder="tokenizer")
to:
tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_model_name_or_path, subfolder="tokenizer", use_auth_token=True)
In my case, I added use_auth_token=True to all from_pretrained calls, since I wasn't sure which models specifically require it for downloading.
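
For reference, here is a sketch of what that looks like for the other loaders in the example scripts. The class names and subfolders follow the standard diffusers example scripts (textual_inversion.py / train_dreambooth.py); adjust them to whatever your local copy actually imports, and note that args is the script's own argparse namespace:

# Sketch: pass use_auth_token=True to every from_pretrained call so each
# component can be downloaded from the gated repo. `args` is the argparse
# namespace already defined in the training script.
from transformers import CLIPTextModel, CLIPTokenizer
from diffusers import AutoencoderKL, UNet2DConditionModel

tokenizer = CLIPTokenizer.from_pretrained(
    args.pretrained_model_name_or_path, subfolder="tokenizer", use_auth_token=True)
text_encoder = CLIPTextModel.from_pretrained(
    args.pretrained_model_name_or_path, subfolder="text_encoder", use_auth_token=True)
vae = AutoencoderKL.from_pretrained(
    args.pretrained_model_name_or_path, subfolder="vae", use_auth_token=True)
unet = UNet2DConditionModel.from_pretrained(
    args.pretrained_model_name_or_path, subfolder="unet", use_auth_token=True)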

Upgrading the transformers package worked for me: pip install --upgrade diffusers transformers scipy

This is caused by an outdated transformers version. You can upgrade it to a recent version; transformers 4.24.0 works well.
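
If you are unsure which versions you ended up with after upgrading, a quick check (nothing project-specific, just printing the installed versions) is:

# Print the installed versions to confirm the upgrade took effect.
import diffusers
import transformers
import huggingface_hub

print("diffusers:", diffusers.__version__)
print("transformers:", transformers.__version__)
print("huggingface_hub:", huggingface_hub.__version__)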
