Robert Sinclair (ZeroWw)

AI & ML interests: LLMs
Organizations: none yet

ZeroWw's activity
Please post f16 quantization. (4 replies) · #1 opened about 1 month ago by ZeroWw
Please check these quantizations. (4 replies) · #40 opened 5 days ago by ZeroWw
Check out an alternate quantization... (3 replies) · #7 opened 1 day ago by ZeroWw
Testing experimental quants (14 replies) · #2 opened 6 days ago by bartowski
Quantization suggestion (25 replies) · #3 opened 10 days ago by ZeroWw
Try my quantizations... · #3 opened 2 days ago by ZeroWw
How did you convert it? (5 replies) · #1 opened 7 days ago by ZeroWw
lots of INS.... (2 replies) · #1 opened 6 days ago by ZeroWw
Don't download, Google scuttled this model (10 replies) · #77 opened 3 months ago by Tom-Neverwinter
Can't find a way to make it work with llama.cpp · #102 opened 7 days ago by ZeroWw
What do you think about this method (or derivatives)? · #15 opened 8 days ago by ZeroWw
Please do v03! (1 reply) · #3 opened 24 days ago by ZeroWw
Don't forget to post your models trained with this dataset here! · #1 opened 30 days ago by ZeroWw
Please upload f16 too. · #4 opened about 1 month ago by ZeroWw
--leave-output-tensor! (2 replies) · #13 opened about 1 month ago by ZeroWw
Colab notebook. · #10 opened about 1 month ago by ZeroWw
New activity in NikolayKozloff/Meta-Llama-3-8B-Instruct-bf16-correct-pre-tokenizer-and-EOS-token-Q8_0-Q6_k-Q4_K_M-GGUF (about 1 month ago):
What tokenizer did you use? · #1 opened about 1 month ago by ZeroWw