xldistance
AI & ML interests
None yet
Recent Activity
new activity
3 days ago
pipilok/phi-4-unsloth-exl2-8bpw-hb8:Model loading failure
liked
a model
3 days ago
unsloth/phi-4
liked
a model
3 days ago
pipilok/phi-4-unsloth-exl2-8bpw-hb8
Organizations
None yet
xldistance's activity
Model loading failure
2
#1 opened 3 days ago
by
xldistance
Can you produce a 2.4bpw quantization of this model?
3
#1 opened about 1 month ago
by
xldistance
Phi-4 = gpt-4o-mini
6
#4 opened 28 days ago
by
maxbn
Can you produce a 2.4bpw quantization of this model?
2
#1 opened 30 days ago
by
xldistance
How to reduce the problem of 2.25bpw quantized models often responding incoherently
1
#2 opened about 2 months ago
by
xldistance
Can you make a 2.25bpw quantization for this model?
#4 opened about 1 month ago
by
xldistance
Can you use the same method to train the qwen2.5 32b model?
8
#24 opened about 2 months ago
by
xldistance
The model can go off on tangents
1
#4 opened about 2 months ago
by
spanspek
The censorship is so excessive that the model refuses to answer many non-sensitive questions as well
8
#3 opened about 2 months ago
by
xldistance
Your trained model frequently becomes unresponsive when called through the ollama API; ollama must be restarted before it replies again.
1
#3 opened about 2 months ago
by
xldistance
The most powerful open-source code model!
3
#1 opened 2 months ago
by
xldistance
gguf model not loading properly in ollama
3
#1 opened 6 months ago
by
xldistance
Can you quantize this model in exl2?
1
#7 opened 7 months ago
by
xldistance
Can you provide an EXL2 quantized model?
1
#1 opened 10 months ago
by
xldistance
Create GGUF for this please
8
#2 opened 10 months ago
by
ishanparihar
Can you produce a 2.4bpw exl2 quantisation of this model?
1
#2 opened 11 months ago
by
xldistance
Can you quantize the model?
5
#1 opened 12 months ago
by
xldistance
Can you make a 2.4bpw exl2 quantisation for this model?
4
#1 opened 12 months ago
by
xldistance
GGUF Version?
20
#1 opened 12 months ago
by
johnnnna