Unable to use in Google Colab ... even after having gated access

#17
by rajivraghu21 - opened

!pip install --upgrade transformers

access_token = 'xxxx'

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Check CUDA availability
if torch.cuda.is_available():
    device = "cuda"
else:
    device = "cpu"
    print("Warning: CUDA not available, using CPU.")

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b-it", token=access_token)
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it", token=access_token)
model.to(device)

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to(device)

outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))

HTTPError: 403 Client Error: Forbidden for url: https://huggingface.co/google/gemma-2b-it/resolve/main/config.json

The above exception was the direct cause of the following exception:

HfHubHTTPError Traceback (most recent call last)
HfHubHTTPError: (Request ID: Root=1-66adc09d-7056f0b756079c2424dc8ff4;cee75437-8986-4890-beb0-90cecffdd9a3)

403 Forbidden: Please enable access to public gated repositories in your fine-grained token settings to view this repository..
Cannot access content at: https://huggingface.co/google/gemma-2b-it/resolve/main/config.json.
If you are trying to create or update content,make sure you have a token with the write role.

The above exception was the direct cause of the following exception:

LocalEntryNotFoundError Traceback (most recent call last)
LocalEntryNotFoundError: An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connection and try again or make sure your Internet connection is on.

The above exception was the direct cause of the following exception:

OSError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/transformers/utils/hub.py in cached_file(path_or_repo_id, filename, cache_dir, force_download, resume_download, proxies, token, revision, local_files_only, subfolder, repo_type, user_agent, _raise_exceptions_for_gated_repo, _raise_exceptions_for_missing_entries, _raise_exceptions_for_connection_errors, _commit_hash, **deprecated_kwargs)
443 ):
444 return resolved_file
--> 445 raise EnvironmentError(
446 f"We couldn't connect to '{HUGGINGFACE_CO_RESOLVE_ENDPOINT}' to load this file, couldn't find it in the"
447 f" cached files and it looks like {path_or_repo_id} is not the path to a directory containing a file named"

OSError: We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like google/gemma-2b-it is not the path to a directory containing a file named config.json.
Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.

Google org

Hi @rajivraghu21 ,

Could you please re-check the access token you have assigned and ensure that you are using an access token with access to the gemma-2b-it model?

Thank you.
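Before calling from_pretrained, it can help to fail fast if no token is configured, rather than hitting a 403 deep inside the download stack. A minimal sketch, assuming the conventional HF_TOKEN environment variable (which huggingface_hub also reads automatically); the helper name and error message are illustrative, not part of any library:

```python
import os

def get_hf_token():
    # Read the token from the environment (e.g. a Colab secret exported
    # as HF_TOKEN) instead of hardcoding it in the notebook.
    token = os.environ.get("HF_TOKEN")
    if not token:
        raise RuntimeError(
            "HF_TOKEN is not set. Create a token at "
            "huggingface.co/settings/tokens and, for fine-grained tokens, "
            "grant it read access to the gated repository you need."
        )
    return token
```

The returned value can then be passed as the token argument to AutoTokenizer.from_pretrained and AutoModelForCausalLM.from_pretrained.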

It worked, thank you! Under the access token's repository permissions, I selected the right model.

(screenshot: access token repository permission settings)

rajivraghu21 changed discussion status to closed
