Cannot load model after accepting new terms and using access token
When trying to load any of the Mistral 7B models, I get:
"Cannot access gated repo for url https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2/resolve/main/config.json."
I can reproduce the issue with the following code:
from transformers import AutoModelForCausalLM, AutoTokenizer, AutoConfig
import os
hf_token = "[hf_token]"
os.environ["HF_TOKEN"] = hf_token
config = AutoConfig.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
I can confirm 100% that the access token is working (the above code runs for Gemma, for example), and I have definitely accepted the agreement for Mistral 7B.
Interestingly, I do not get the issue for Mixtral-8x22B-Instruct-v0.1, only for the Mistral 7B models.
Hey ... try passing the token into AutoTokenizer as well, not only into AutoModelForCausalLM:
AutoTokenizer.from_pretrained(model_id, token="<your token>")
AutoModelForCausalLM.from_pretrained(model_id, token="<your token>")
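Put together, a minimal self-contained version of that suggestion looks like the sketch below. The token string is a placeholder, and this assumes a recent transformers version that accepts the token argument (older versions used use_auth_token):

# Minimal sketch: pass the token explicitly to every from_pretrained call
# instead of relying on the HF_TOKEN environment variable.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"
hf_token = "<your token>"  # placeholder: a read-access token from your HF settings

tokenizer = AutoTokenizer.from_pretrained(model_id, token=hf_token)
model = AutoModelForCausalLM.from_pretrained(model_id, token=hf_token)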
In Bash:
$ huggingface-cli login
Then the prompt will ask you to enter your token (input will not be visible);
paste your token from Settings => Access Tokens.
This worked for me.
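If you'd rather do the equivalent from Python (for example inside a notebook where the CLI is awkward), huggingface_hub has a login helper; a minimal sketch, with the token value as a placeholder:

# Programmatic equivalent of `huggingface-cli login`.
from huggingface_hub import login

login(token="<your token>")  # placeholder; stores the token so transformers picks it up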
Hey, thanks for the advice; sadly, neither of the above worked for me.
huggingface-cli whoami
is returning the correct user name, so the token is definitely working. I am able to load the tokenizer but not the model or config.
I have replicated this in several environments: one using vLLM on Vertex AI, a local notebook, and a Colab Enterprise notebook.
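One way to narrow this down is to request the exact file from the error URL directly through huggingface_hub, bypassing transformers entirely; a diagnostic sketch (the token value is a placeholder):

# Diagnostic sketch: fetch the file from the failing URL via huggingface_hub.
# A GatedRepoError / 401 here points at the Hub token or the agreement,
# not at transformers.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mistralai/Mistral-7B-Instruct-v0.2",
    filename="config.json",
    token="<your token>",  # placeholder
)
print(path)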
OK, weird ... can you post the full error message?
@GustavoBMG where do I pass these?
AutoTokenizer.from_pretrained(model_id, token = '')
AutoModelForCausalLM.from_pretrained(model_id, token = '')
I'm trying to do it in the privateGPT repository via Docker.
Pass it as the token param, something like this:
AutoTokenizer.from_pretrained(model_id, token='bfksdhbfksdbgkdsbgfhd')
AutoModelForCausalLM.from_pretrained(model_id, token='fdsbfndksjfdsnfkds')
Sorry, I wrote it wrong. I know where to put the token; I just don't know where to put the AutoTokenizer.from_pretrained and AutoModelForCausalLM calls. When I say where, I'm talking about which file.
@mav1814 ... hey, I'm not following what you're saying ...
What I mean is: copy the example code snippet from the model card and add the token parameter there.
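For example, the usage snippet on the Mistral-7B-Instruct-v0.2 model card looks roughly like the sketch below; the token argument is the part you add (the token value is a placeholder, and the exact snippet on the card may differ):

# Sketch based on a typical model-card usage snippet, with the token argument added.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"
hf_token = "<your token>"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id, token=hf_token)
model = AutoModelForCausalLM.from_pretrained(model_id, token=hf_token)

inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

For privateGPT, the place to add it would be wherever the repository calls from_pretrained; searching the codebase for from_pretrained should find it.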