Repo model google/gemma-2b is gated. You must be authenticated to access it.

#28
by IanKelly63 - opened

Hi all,

I'm trying to install Gemma-2b locally (running the model on a CPU), following the instructions on the model card.

I ran:

from transformers import AutoTokenizer, AutoModelForCausalLM

All good!

When I try running...

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")

I get:

Cannot access gated repo for url https://huggingface.co/google/gemma-2b/resolve/main/config.json. Repo model google/gemma-2b is gated. You must be authenticated to access it.

Looking at the Hugging Face web UI, I have access. Screenshot attached:
gemma-2b.PNG

Can anyone help me out?

Many Thanks

Ian

I had a similar issue, and it's now fixed after updating:

pip install -U transformers

Hi @IanKelly63
Make sure to log in to the HF Hub on your local machine by running huggingface-cli login and pasting your read token.
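After logging in, it can help to check which token your scripts will actually pick up. This is a stdlib-only sketch: the lookup order (env var first, then the token file) and the default path `~/.cache/huggingface/token` follow huggingface_hub's defaults, and the path will differ if `HF_HOME` or `HF_TOKEN_PATH` is set.

```python
import os
from pathlib import Path

def find_hf_token():
    """Return (token, source), roughly the way huggingface_hub resolves it."""
    token = os.environ.get("HF_TOKEN")  # env var takes precedence
    if token:
        return token, "HF_TOKEN environment variable"
    token_file = Path.home() / ".cache" / "huggingface" / "token"  # library default
    if token_file.is_file():
        return token_file.read_text().strip(), str(token_file)
    return None, None

token, source = find_hf_token()
print(f"token found: {token is not None} (source: {source})")
```

If this prints `token found: False`, the "gated repo" error is expected regardless of what the web UI shows.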

osanseviero changed discussion status to closed

I also had the same issue; it turns out I needed to create a .env file and put the access token there, instead of just using it in the .py file.
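If you go the .env route, a minimal loader looks like this. This is a stdlib-only sketch; in practice python-dotenv's load_dotenv() does the same job and is the usual choice.

```python
import os
from pathlib import Path

def load_env(path=".env"):
    """Read KEY=VALUE lines from a .env file into os.environ."""
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

# .env would contain a single line such as:  HF_TOKEN=hf_xxx
# After load_env(), pass os.environ["HF_TOKEN"] as token= to from_pretrained().
```

Keeping the token in .env (and out of version control) avoids accidentally committing it inside a .py file.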

This also worked for me:

  1. Create token: https://huggingface.co/settings/tokens
  2. Add the token to your tokenizer & model:

from transformers import AutoTokenizer, AutoModelForCausalLM

access_token = "hf_..."
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it", token=access_token)
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it", token=access_token)

Reference: https://huggingface.co/docs/hub/security-tokens

I have the same problem: huggingface-cli download google/gemma-2-9b-i does not work, even though I have made sure that I'm logged in with the right token.

Traceback (most recent call last):
  File "/mnt/vast/home/wendy/envs/mistral_env/bin/huggingface-cli", line 8, in <module>
    sys.exit(main())
  File "/mnt/vast/home/wendy/envs/mistral_env/lib/python3.10/site-packages/huggingface_hub/commands/huggingface_cli.py", line 51, in main
    service.run()
  File "/mnt/vast/home/wendy/envs/mistral_env/lib/python3.10/site-packages/huggingface_hub/commands/download.py", line 146, in run
    print(self._download())  # Print path to downloaded files
  File "/mnt/vast/home/wendy/envs/mistral_env/lib/python3.10/site-packages/huggingface_hub/commands/download.py", line 180, in _download
    return snapshot_download(
  File "/mnt/vast/home/wendy/envs/mistral_env/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "/mnt/vast/home/wendy/envs/mistral_env/lib/python3.10/site-packages/huggingface_hub/_snapshot_download.py", line 233, in snapshot_download
    raise api_call_error
  File "/mnt/vast/home/wendy/envs/mistral_env/lib/python3.10/site-packages/huggingface_hub/_snapshot_download.py", line 164, in snapshot_download
    repo_info = api.repo_info(repo_id=repo_id, repo_type=repo_type, revision=revision, token=token)
  File "/mnt/vast/home/wendy/envs/mistral_env/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "/mnt/vast/home/wendy/envs/mistral_env/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 2491, in repo_info
    return method(
  File "/mnt/vast/home/wendy/envs/mistral_env/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "/mnt/vast/home/wendy/envs/mistral_env/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 2301, in model_info
    hf_raise_for_status(r)
  File "/mnt/vast/home/wendy/envs/mistral_env/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 352, in hf_raise_for_status
    raise RepositoryNotFoundError(message, response) from e
huggingface_hub.utils._errors.RepositoryNotFoundError: 404 Client Error. (Request ID: Root=1-668d10b6-2a6202857b11034c4ef3c926;3ed0e486-a107-47e7-9a5c-5a49d4773f40)

Repository Not Found for url: https://huggingface.co/api/models/google/gemma-2-9b-i/revision/main.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated.

I have the same problem: huggingface-cli download google/gemma-2-9b-i does not work, even though I have made sure that I'm logged in with the right token.

Just a blind guess: there's a typo in the model's name. Try:
huggingface-cli download google/gemma-2-9b-it

It helped me to do it like this:

HF_TOKEN = "YOUR_TOKEN"
!huggingface-cli login --token {HF_TOKEN}

I tried to enter my Hugging Face token with the command G:\ComfyUI_windows_portable\python_embeded\Scripts> huggingface-cli login

I also tried G:\ComfyUI_windows_portable\python_embeded\Scripts> huggingface-cli download google/gemma-2b-it

but it didn't help: Access to model google/gemma-2b-it is restricted. You must have access to it and be authenticated to access it. Please log in.

I logged in to Hugging Face with Firefox and downloaded the two big files (model-00001 and model-00002-of-00002.safetensors, 4.8 GB and 235 MB).
I placed them in several places under G:\ComfyUI_windows_portable\ComfyUI\models\text_encoders and G:\ComfyUI_windows_portable\ComfyUI\models\text_encoders\models--unsloth--gemma-2-2b-it, and I succeeded in launching the Euler KSampler in the SANA workflow, but then I got another error!! I guess I did something wrong!

To use google/gemma-2b-it, how and where can I use these 3 lines?
access_token = "hf_******"
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it", token=access_token)
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it", token=access_token)
