Orimo W
imoc
AI & ML interests
None yet
Recent Activity
liked
a model
10 days ago
SakuraLLM/Sakura-14B-Qwen2.5-v1.0
new activity
12 days ago
Qwen/Qwen2.5-VL-32B-Instruct:You guys ROCK! I ❤️ Qwen!
new activity
12 days ago
Qwen/Qwen2.5-VL-32B-Instruct: Suggestion to improve vertical text recognition
Organizations
None yet
imoc's activity
You guys ROCK! I ❤️ Qwen!
3
#1 opened 12 days ago
by
Mdubbya
Suggestion to improve vertical text recognition
6
#3 opened 12 days ago
by
liuqjox
Can't run the model in tabbyAPI
4
#2 opened 19 days ago
by
minyor25

U da man
1
#1 opened 22 days ago
by
jth01
Nice work... Can't-believe-it's-just-32B performance, even with system prompts in various different tones.
#37 opened 30 days ago
by
imoc
Hmm, still weird refusals, same as QwQ
#5 opened 3 months ago
by
imoc
Minimal GPU requirements
3
#3 opened 6 months ago
by
tmk12
Which one?
1
#1 opened 4 months ago
by
imoc
too big to run
4
#320 opened 4 months ago
by
karan963
Why FP32?
2
#10 opened 4 months ago
by
imoc
The training data is not in ChatML format and it won't stop correctly.
3
#3 opened 4 months ago
by
imoc
Difference between this and the other (100 steps) model?
7
#1 opened 8 months ago
by
lemon07r
Nice work!
7
#1 opened 6 months ago
by
DeFactOfficial

32 B coding model please
3
#4 opened 5 months ago
by
gopi87
vllm reply garbled
3
#29 opened 4 months ago
by
SongXiaoMao

This is way too much... USB? Yes. U SB.
3
#21 opened 4 months ago
by
imoc
Very good 7B, good job
#1 opened 4 months ago
by
imoc
Adds Chinese characters to responses
8
#16 opened 4 months ago
by
maxbenk
Nice name QAQ. I'll later upload a 4.7bpw quantized model if it works.
#12 opened 4 months ago
by
imoc
Nice work. Best 32B model (quantized to 4.7bpw) so far; more people should try it.
1
#1 opened 5 months ago
by
imoc