#19 · Can we have more bit and group size combinations, like in other models? · opened 27 days ago by teefan
#18 · Prompt formats and referencing the user or model by a specific name? · 2 replies · opened 3 months ago by Mincookie
#15 · Speed up inference · 3 replies · opened 3 months ago by mr96
#14 · RuntimeError: CUDA error: an illegal memory access was encountered · opened 4 months ago by Nafnlaus
#13 · LoRA upload dtype error - solved · opened 4 months ago by Jaragua
#11 · Unable to load GPTQ model, despite using AutoGPTQ · 2 replies · opened 4 months ago by DarvinDelray
#9 · It won't load · 2 replies · opened 4 months ago by GaymerDanny
#8 · Error: no file named... · 1 reply · opened 4 months ago by solotrek
#7 · Can't see it doing better than the censored one. Any advice? · 1 reply · opened 4 months ago by anon7463435254
#5 · Model doesn't load · 1 reply · opened 4 months ago by agonzalez
#4 · TheBloke is the GOAT · 2 replies · opened 5 months ago by hussainwali1
#3 · Error when loading the model in ooba's UI (Colab version) · 14 replies · opened 5 months ago by PopGa
#2 · Gibberish on 'latest', with recent qwopqwop GPTQ/Triton and ooba? · 5 replies · opened 5 months ago by andysalerno
#1 · DefaultCPUAllocator: not enough memory · 5 replies · opened 5 months ago by VladCorvi