OSError: <ModelName> is not a local folder

#3
by mullerse - opened

Hey there :)

Big thank you for all the good documentation and the really helpful models! But there is a problem we can't solve :(

We downloaded and installed the oobabooga text-generation-webui and it runs as expected.

What we want to do next is run, for example, this model with Python (https://huggingface.co/TheBloke/wizardLM-7B-GPTQ).
As written in the instructions, we installed AutoGPTQ etc.

But if we run the example code from your section "How to use this GPTQ model from Python code", we always get this error:
OSError: TheBloke_vicuna-13b-v1.3.0-GPTQ is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo with use_auth_token or log in with huggingface-cli login and pass use_auth_token=True.

But the local folder TheBloke_vicuna-13b-v1.3.0-GPTQ exists at /text-generation-webui/models/ :(
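For reference, this is roughly the code we are running (a sketch following the README example; the model_basename value is a placeholder for whatever the README specifies):

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

# This string is resolved first as a local folder (relative to the working
# directory), then as a Hugging Face Hub repo ID. The name
# "TheBloke_vicuna-13b-v1.3.0-GPTQ" is neither, hence the OSError.
model_name_or_path = "TheBloke_vicuna-13b-v1.3.0-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
    model_name_or_path,
    model_basename="gptq_model-4bit-128g",  # placeholder; copied from the README
    use_safetensors=True,
    device="cuda:0",
)
```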

Where do we have to place example_code.py? Currently it is in the main oobabooga folder, but the same error appears when the .py is at /text-generation-webui/ as well as when it is at /text-generation-webui/models/.

If I use the complete path, model_name_or_path = "C:/Users/user/oobabooga/text-generation-webui/models/TheBloke_vicuna-13b-v1.3.0-GPTQ", I get: "FileNotFoundError: Could not find model in ..."
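(A quick check we sketched, on the assumption that from_quantized looks for a <model_basename>.safetensors file inside the folder, is to list what is actually in there:)

```python
import os

model_dir = "C:/Users/user/oobabooga/text-generation-webui/models/TheBloke_vicuna-13b-v1.3.0-GPTQ"
# from_quantized searches this folder for a weights file matching the
# basename it was given; if nothing matches, it raises
# "FileNotFoundError: Could not find model in ..."
print(os.listdir(model_dir))
```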

I also tried use_auth_token=True in the tokenizer and added my Hugging Face token to the model, but then a different error appears :/

Do you have any idea?

Thank you very much :)

Please show your full code. If you're passing model_basename = "gptq-model...." then remove that, as that's the problem. I recently renamed the models so they're now called model.safetensors and model_basename is no longer required.
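Roughly, the loading call should now look like this (a sketch; the repo ID is taken from the folder name in this thread, and a full local path to the downloaded folder works the same way):

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

# Hub repo ID, or alternatively the full path to the local model folder.
model_name_or_path = "TheBloke/vicuna-13b-v1.3.0-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
# No model_basename: the weights file is now named model.safetensors,
# which AutoGPTQ can locate on its own.
model = AutoGPTQForCausalLM.from_quantized(
    model_name_or_path,
    use_safetensors=True,
    device="cuda:0",
)
```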

It might be easier if you start again and do GPTQ directly from Transformers, which is now supported since Transformers 4.32.0. There's a blog post with explanations and links to Google Colab code, here: https://huggingface.co/blog/gptq-integration
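A minimal sketch of that Transformers-native route (assuming transformers>=4.32.0 plus optimum, auto-gptq, and accelerate are installed):

```python
# Requires: pip install "transformers>=4.32.0" optimum auto-gptq accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/vicuna-13b-v1.3.0-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Transformers detects the GPTQ quantization config in the repo and loads
# the quantized weights directly; no AutoGPTQ-specific loading code needed.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Tell me about AI", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```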

I will be updating my GPTQ documentation to reflect that new method shortly, this weekend.

Hey TheBloke,

thank you very much for your time and advice :)
Yes, I used model_basename.
So then I tried your Wizard-Vicuna-7B-Uncensored GPTQ and it runs perfectly :)
