Fine-tuning?
#14 opened about 1 year ago by OSK-Creative-Tech

The model is not responding.
#13 opened about 1 year ago by PhelixZhen

model responses not good.
1 reply · #12 opened about 1 year ago by muneerhanif7

how to quant llama2 70b model with AutoGPTQ
4 replies · #11 opened over 1 year ago by tonycloud

Wrong shape when loading with Peft-AutoGPTQ
2 replies · #10 opened over 1 year ago by tridungduong16

Long waiting time
14 replies · #9 opened over 1 year ago by wempoo

Context Length Differences
#7 opened over 1 year ago by zacharyrs

Problems with temperature when using with python code.
3 replies · #6 opened over 1 year ago by matchaslime

Should we expect GGML soon?
3 replies · #5 opened over 1 year ago by yehiaserag

Issue with 64g version?
#4 opened over 1 year ago by AARon99

The `main` branch for TheBloke/Llama-2-70B-GPTQ appears borked
11 replies · #3 opened over 1 year ago by Aivean

I found an fp16 model if it helps
1 reply · #2 opened over 1 year ago by rombodawg

❤️❤️❤️❤️
1 reply · #1 opened over 1 year ago by SinanAkkoyun