Rotating
AI & ML interests: None yet
Recent Activity
New activity 8 days ago in bartowski/Qwen_QwQ-32B-GGUF: "Something wrong"
New activity 20 days ago in unsloth/DeepSeek-R1-GGUF: "is it uncensored?"
Organizations: None yet
Rotating's activity
Something wrong
12 · #3 opened 9 days ago by wcde
Q2_K_XL model is the best? IQ2_XXS is better than Q2_K_XL in mmlu-pro benchmark
11 · #36 opened 19 days ago by albertchow
is it uncensored?
5 · #33 opened 25 days ago by Morrigan-Ship
when using with ollama, does it support kv_cache_type=q4_0 and flash_attention=1?
3 · #28 opened about 1 month ago by leonzy04
Accuracy of the dynamic quants compared to usual quants?
19 · #21 opened about 1 month ago by inputout

Over 2 tok/sec agg backed by NVMe SSD on 96GB RAM + 24GB VRAM AM5 rig with llama.cpp
9 · #13 opened about 1 month ago by ubergarm
R1 32b is much worse than QwQ ...
22 · #6 opened about 2 months ago by mirek190
Prompt format
2 · #1 opened about 2 months ago by Rotating
More gemma 2 llama.cpp merges, do they require GGUF regen again?
4 · #6 opened 9 months ago by IHadToMakeAccount

Prompt format <bos> not needed?
16 · #3 opened 9 months ago by eamag
'LlamaCppModel' object has no attribute 'model'
10 · #2 opened 9 months ago by DrNicefellow

Is there any information on which prompt template to use?
2 · #1 opened 9 months ago by Debich

Please upload the full model first
88 · #1 opened about 1 year ago by ChuckMcSneed

Even this excellent high-end model doesn't follow my instructions
5 · #8 opened over 1 year ago by alexcardo