It won't load

#9
by GaymerDanny - opened

I have oobabooga installed on my GPU, so I downloaded this model and tried to run it. I ran into so many problems and tried many solutions before ending up here.

When I try to load it, I get this error:

Traceback (most recent call last):
  File "C:\Users\Dan\Desktop\AI\oobabooga_windows\text-generation-webui\server.py", line 71, in load_model_wrapper
    shared.model, shared.tokenizer = load_model(shared.model_name)
  File "C:\Users\Dan\Desktop\AI\oobabooga_windows\text-generation-webui\modules\models.py", line 97, in load_model
    output = load_func(model_name)
  File "C:\Users\Dan\Desktop\AI\oobabooga_windows\text-generation-webui\modules\models.py", line 155, in huggingface_loader
    model = LoaderClass.from_pretrained(Path(f"{shared.args.model_dir}/{model_name}"), low_cpu_mem_usage=True, torch_dtype=torch.bfloat16 if shared.args.bf16 else torch.float16, trust_remote_code=shared.args.trust_remote_code)
  File "C:\Users\Dan\Desktop\AI\oobabooga_windows\installer_files\env\lib\site-packages\transformers\models\auto\auto_factory.py", line 472, in from_pretrained
    return model_class.from_pretrained(
  File "C:\Users\Dan\Desktop\AI\oobabooga_windows\installer_files\env\lib\site-packages\transformers\modeling_utils.py", line 2406, in from_pretrained
    raise EnvironmentError(
OSError: Error no file named pytorch_model.bin, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory models\TheBloke_Wizard-Vicuna-13B-Uncensored-GPTQ.
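For anyone who hits the same OSError: it usually means the model folder holds GPTQ-quantized weights (`*.safetensors` plus `quantize_config.json`) rather than a full-precision checkpoint, so the plain transformers loader finds none of the files it knows about and a GPTQ-aware loader is needed instead. A minimal sketch of that check; the helper name `explain_missing_weights` is illustrative, not part of any library:

```python
from pathlib import Path

def explain_missing_weights(model_dir: str) -> str:
    """Heuristic sketch: report why transformers' AutoModel loader might
    refuse a model folder. It only looks for full-precision checkpoint
    files; GPTQ repos ship quantized *.safetensors instead."""
    d = Path(model_dir)
    # The exact filenames the OSError above lists as acceptable.
    plain_names = ["pytorch_model.bin", "tf_model.h5",
                   "model.ckpt.index", "flax_model.msgpack"]
    if any((d / name).exists() for name in plain_names):
        return "full-precision checkpoint: plain transformers loader should work"
    if (d / "quantize_config.json").exists() or list(d.glob("*.safetensors")):
        return "GPTQ-quantized checkpoint: needs a GPTQ-aware loader (e.g. AutoGPTQ)"
    return "no recognizable weights found"
```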

After I changed the transformers gpu-memory setting it loaded successfully, but now when I type something it gives back nothing.

OK, I changed pre_layer from 0 to 7 and now it is generating very, very slow responses, even though I have an NVIDIA RTX 2060 MAX.
I think all these problems come mainly from my settings or something. I have no idea how to work with oobabooga. What am I missing?
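In case it helps anyone else: as far as I understand, pre_layer is the number of layers placed on the GPU, with the remaining layers run on CPU, so a low value like 7 keeps almost the whole 13B model on CPU and that is why generation crawls. A hedged launch sketch for an older text-generation-webui CLI; flag names and defaults vary between versions:

```shell
# Assumed flags for an older text-generation-webui release; check
# `python server.py --help` for your version before relying on these.
python server.py \
  --model TheBloke_Wizard-Vicuna-13B-Uncensored-GPTQ \
  --wbits 4 --groupsize 128 --model_type llama \
  --pre_layer 30   # layers on the GPU; raise until VRAM runs out
```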
