Async0x42 (async0x42)
AI & ML interests: None yet
Recent Activity
updated a model about 9 hours ago: async0x42/cogito-v1-preview-qwen-32B-exl2_4.65bpw
published a model about 9 hours ago: async0x42/cogito-v1-preview-qwen-32B-exl2_4.65bpw
updated a model about 10 hours ago: async0x42/DeepScaleR-1.5B-Preview-exl2_4.65bpw
Organizations: None yet
async0x42's activity
QwQ updated their tokenizer, model update needed?
1 · #4 opened 26 days ago by async0x42

Bartowski! 0.0!!!! You are on double-secret probation for this jinja error!
7 · #4 opened about 1 month ago by rkh661

Can you produce a quantized 2.4bpw model of this model?
3 · #1 opened 4 months ago by xldistance

Added to the UGI leaderboard
1 · #1 opened 5 months ago by async0x42

So far, very good model
#1 opened 7 months ago by async0x42

Garbled output on this model
1 · #3 opened 7 months ago by async0x42

Getting run-on sentences at 6k context
4 · #1 opened 9 months ago by St33lMouse

Thanks, this takes more VRAM, is a 3.5bpw possible?
#1 opened 10 months ago by async0x42

Thanks for the 4bpw!
#1 opened 10 months ago by async0x42

Thanks for the exl2! Can you do 4bpw?
#1 opened 10 months ago by async0x42
