Frankie G (spanielrassler)
AI & ML interests: none yet
Organizations: none yet
spanielrassler's activity
- Garbled output in llama.cpp (2) · #13 opened about 1 month ago by spanielrassler
- Oh my gosh StoryTelling indeed (12) · #2 opened 12 months ago by remowylliams
- Any chance of a 4_2 or 4_0 ggml quantization? (4) · #4 opened about 1 year ago by spanielrassler
- Thanks! (1) · #1 opened about 1 year ago by spanielrassler
- Converting to ggml and quantizing with llama.cpp (5) · #2 opened about 1 year ago by akiselev
- Is this supposed to be usable with llama.cpp? (2) · #1 opened about 1 year ago by spanielrassler
- Is there a model based on this dataset yet? (78) · #1 opened about 1 year ago by spanielrassler
- Thank you (12) · #3 opened about 1 year ago by anon8231489123
- How to merge the 2 bin files into the pytorch_model.bin file for usage? (1) · #2 opened about 1 year ago by spanielrassler