How to save the tokenizer_config.json and config.json files locally

#2
by nicoleds

Hi, I have loaded the model onto my local machine using:
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("lmsys/vicuna-13b-v1.5")
model = AutoModelForCausalLM.from_pretrained("lmsys/vicuna-13b-v1.5")

but every time I load the model it still tries to connect to huggingface.co to fetch the tokenizer_config.json and config.json files, and it fails with the following errors:

'HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /lmsys/vicuna-13b-v1.5/resolve/main/tokenizer_config.json (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7fff1f86f0a0>, 'Connection to huggingface.co timed out. (connect timeout=10)'))' thrown while requesting HEAD https://huggingface.co/lmsys/vicuna-13b-v1.5/resolve/main/tokenizer_config.json
'HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /lmsys/vicuna-13b-v1.5/resolve/main/config.json (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7fff1f86ff40>, 'Connection to huggingface.co timed out. (connect timeout=10)'))' thrown while requesting HEAD https://huggingface.co/lmsys/vicuna-13b-v1.5/resolve/main/config.json
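
From the errors it looks like from_pretrained sends a HEAD request to check the Hub for newer files even though a copy is already in the local cache. If I understand the docs correctly, passing local_files_only=True (or setting the TRANSFORMERS_OFFLINE=1 environment variable) should make it reuse the cached files without going online:

# Assumed workaround: reuse the cached files and skip the Hub check entirely
tokenizer = AutoTokenizer.from_pretrained("lmsys/vicuna-13b-v1.5", local_files_only=True)
model = AutoModelForCausalLM.from_pretrained("lmsys/vicuna-13b-v1.5", local_files_only=True)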

Is it possible for me to download the tokenizer_config.json and config.json files to my local machine? And if so, where should they be saved? Thanks!
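
To make the question concrete, what I'm hoping works is a sketch along these lines, assuming save_pretrained writes tokenizer_config.json and config.json alongside the weights (the directory path below is just an example):

# Example only: save everything (tokenizer_config.json, config.json, weights, ...) to a local folder
local_dir = "./vicuna-13b-v1.5-local"  # any writable path
tokenizer.save_pretrained(local_dir)
model.save_pretrained(local_dir)

# ...and later load from that folder instead of the Hub id, with no network access
tokenizer = AutoTokenizer.from_pretrained(local_dir)
model = AutoModelForCausalLM.from_pretrained(local_dir)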
