- #19: Fine-Tuning Request (opened 3 months ago by nikhiljais)
- #18: Candle cannot run inference on madlad for gguf (1 comment, opened 4 months ago by thanasismem)
- #17: Running out of memory with 12GB VRAM (1 comment, opened 8 months ago by redstar6486)
- #16: Conserve formatting across translation... (opened 8 months ago by WCDR)
- #15: Number alteration issue (1 comment, opened 9 months ago by sarahai)
- #14: Make an 8-bit quantized model version (3 comments, opened 10 months ago by nonetrix)
- #13: Translation of longer texts (3 comments, opened 10 months ago by hanshupe)
- #11: Quantized version as app (4 comments, opened 10 months ago by sarahai)