#29 · Error running the code · 1 comment · opened about 1 year ago by Andyrasika
#28 · How to run qlora with stable vicuna? · 1 comment · opened about 1 year ago by Andyrasika
#27 · Could not find model in TheBloke/stable-vicuna-13B-GPTQ · 3 comments · opened over 1 year ago by AB00k
#24 · HF/bitsandbytes load_in_4bit is now apparently live apparently! (in peft) · 3 comments · opened over 1 year ago by 2themaxx
#23 · Issue starting the webui · 7 comments · opened over 1 year ago by Depsi
#22 · Updated special_tokens_map.json · 1 comment · opened over 1 year ago by brandonglockaby
#21 · What are the hardware requirements for this? I am running out of memory on my RTX3060 Ti :o · 2 comments · opened over 1 year ago by yramshev
#17 · AttributeError: 'Offload_LlamaModel' object has no attribute 'preload' · 13 comments · opened over 1 year ago by yramshev
#16 · Attentions are all None · 2 comments · opened over 1 year ago by joshlevy89
#14 · OSError: Error no file named pytorch_model.bin, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory models\TheBlokestable-vicuna-13B-GPTQ. · 13 comments · opened over 1 year ago by Squeezitgirdle
#13 · some errors occured when I use the webui who can tell me why? · 4 comments · opened over 1 year ago by yogurt111
#11 · using with transformers · 2 comments · opened over 1 year ago by thefaheem
#10 · Model's VRAM Utilisation · 3 comments · opened over 1 year ago by thefaheem
#5 · Unable to load it using --pre_layer flag · 5 comments · opened over 1 year ago by boricuapab
#4 · the model thinks its chatgpt, lobotomy is advised · 2 comments · opened over 1 year ago by eucdee
#2 · No such file or directory: ‘models\TheBloke_stable-vicuna-13B-GPTQ\pytorch_model-00001-of-00003.bin’ · 9 comments · opened over 1 year ago by Blue-Devil
#1 · stable-vicuna-13B-GPTQ-4bit.compat.no-act-order.safetensors not compatible with "standard" settings · 25 comments · opened over 1 year ago by vmajor