Sam McLeod (smcleod)
AI & ML interests: cool things
Organizations: none yet

smcleod's activity
Mobile App? · 3 replies · #16 opened 4 days ago by Yhyu13
Still seems heavily censored · 3 replies · #2 opened 5 days ago by smcleod
IQ2_XS quants? · #7 opened 4 days ago by smcleod
Chat template - use with Ollama? · 2 replies · #1 opened 8 days ago by smcleod
Have these quants had their pre-tokenizer fixed? · 2 replies · #8 opened 27 days ago by smcleod
GGUF Quantisations · 3 replies · #3 opened 3 months ago by smcleod
Are there plans for a longer-context version of this model? · 5 replies · #1 opened 16 days ago by MarinaraSpaghetti
Has this model had its pre-tokenizer fixed? · 1 reply · #10 opened 27 days ago by smcleod
Have these quants had their pre-tokenizer fixed? · 3 replies · #2 opened 27 days ago by smcleod
Have these quants had their pre-tokenizer fixed? · #1 opened 27 days ago by smcleod
Phi 3 tokenizer_config has been updated upstream · #6 opened 29 days ago by smcleod
Doesn't work for Phi-3 models · 2 replies · #47 opened about 1 month ago by smcleod
Is it possible to convert these to a single GGUF? · 1 reply · #2 opened about 2 months ago by smcleod
Quantized models (4-bit) request · 3 replies · #4 opened about 2 months ago by terminator33
I made a little GUI app for this · 1 reply · #4 opened 2 months ago by smcleod
Why not in .safetensors format? · 1 reply · #1 opened 3 months ago by Wubbbi
Let's Quantize · 8 replies · #1 opened 2 months ago by simsim314
Quantisation parameters + Q5_K_M version? · 2 replies · #1 opened 3 months ago by smcleod
Any chance of providing an iMatrix? · 2 replies · #2 opened 3 months ago by smcleod
[FEEDBACK] Notifications · 66 replies · #6 opened almost 2 years ago by victor
Update README.md · 2 replies · #1 opened 4 months ago by Gordorak
Hey bloke, you should make a new version of that model · 3 replies · #1 opened 8 months ago by mirek190
Any chance of a 13B-20B version? · 7 replies · #7 opened 9 months ago by smcleod
Phind-CodeLlama-13b model? · 2 replies · #3 opened 9 months ago by rombodawg