Issue of sha256sum of file gpt4-x-alpaca-30b-128g-4bit.safetensors
#16 opened about 1 year ago by mamsds
Size mismatch error no matter what .json or .config files or startup parameters I use
#13 opened over 1 year ago by Anphex · 1 reply
Best model I tested, but seems to have an issue on some tokens
#12 opened over 1 year ago by kbrkbr · 2 replies
llama.cpp breaks quantized ggml file format
#11 opened over 1 year ago by Waldschrat · 4 replies
Which model_type to set, "None" or "llama"? And what prompt style to use? There are plenty of the latter in Ooba's TextGenWebUI as of now and I'm pretty much lost (see pics for clarification)
#10 opened over 1 year ago by sneedingface · 4 replies
sha256sum not matching for one file: gpt4-x-alpaca-30b-128g-4bit.safetensors
#9 opened over 1 year ago by spaceman7777 · 1 reply
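For anyone hitting the checksum mismatches reported in #16 and #9, here is a minimal verification sketch. The filename comes from the thread titles; the expected digest is a placeholder and should be copied from the value shown on the repository's file listing.

```python
import hashlib

# Filename taken from the discussion titles; EXPECTED_SHA256 is a placeholder,
# not the real digest -- copy the value from the repository's file page.
FILE_PATH = "gpt4-x-alpaca-30b-128g-4bit.safetensors"
EXPECTED_SHA256 = "<digest from the repo file listing>"

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so multi-gigabyte weights fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

actual = sha256_of(FILE_PATH)
print("computed:", actual)
print("match" if actual == EXPECTED_SHA256.lower() else "MISMATCH -- re-download the file")
```

A mismatch usually just means an interrupted or corrupted download, so re-fetching the single affected file is the first thing to try.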
Error: Internal: src/sentencepiece_processor.cc in Ooba and KAI 4bit
#8 opened over 1 year ago by Co0ode · 4 replies
Please, help :<
#7 opened over 1 year ago by ANGIPO · 17 replies
Loaded the model, but it won't respond and is stuck on "typing" while GPU usage sits at 100%
#6 opened over 1 year ago by barncroft · 1 reply
Error when launching
#5 opened over 1 year ago by pupdike · 4 replies
Filenames of shards in pytorch_model.bin.index.json
#4 opened over 1 year ago by h3ndrik · 2 replies
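Related to #4 above: a small sketch, assuming the standard Hugging Face sharded-checkpoint layout in which pytorch_model.bin.index.json maps each tensor name to its shard file under the "weight_map" key, that lists which shard filenames the index expects and flags any that are missing locally.

```python
import json
import os

# Assumes the standard Hugging Face sharded-checkpoint layout: the index file
# contains a "weight_map" dict of tensor name -> shard filename.
INDEX_PATH = "pytorch_model.bin.index.json"

with open(INDEX_PATH) as f:
    index = json.load(f)

expected_shards = sorted(set(index["weight_map"].values()))
print("shards referenced by the index:")
for name in expected_shards:
    status = "present" if os.path.exists(name) else "MISSING"
    print(f"  {name}  [{status}]")
```

If the printed names do not match the files actually shipped with the model, loading will fail until either the index or the shard filenames are brought back into agreement.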
Model size for int4 fine-tuning on RTX 3090
#2 opened over 1 year ago by KnutJaegersberg · 3 replies