Junyang Lin
JustinLin610
AI & ML interests
Pretraining, NLP, CV, etc.
Recent Activity
authored a paper 2 days ago: Qwen2.5 Technical Report
authored a paper 11 days ago: Evaluating and Aligning CodeLLMs on Human Preference
authored a paper 12 days ago: ProcessBench: Identifying Process Errors in Mathematical Reasoning
JustinLin610's activity
Independent evaluation results (2) · #1 opened 3 months ago by yaronr
Have you deleted your GitHub page? (7) · #10 opened 4 months ago by xwzy6
The sample code could not run... (1) · #16 opened 6 months ago by zhiminy
fine-tuning (4) · #16 opened 8 months ago by SaghirAya
Maybe a silly question... (2) · #18 opened 8 months ago by urtuuuu
This model is awesome (5) · #20 opened 7 months ago by areumtecnologia
Update tokenizer_config.json · #3 opened 8 months ago by JustinLin610
How does this version's 28 GB GPU memory usage compare with the 14B model? (7) · #7 opened 9 months ago by william0014
Fine-tuning this model with proprietary code (2) · #6 opened 8 months ago by vtraghu
What are the differences between this and Qwen/CodeQwen1.5-7B? (6) · #5 opened 8 months ago by Kalemnor
Adding Evaluation Results · #14 opened 8 months ago by leaderboard-pr-bot
Is qwen1.5-7b-chat much faster at inference than qwen1.5-7b? (3) · #9 opened 10 months ago by endNone
tie_word_embeddings=true? (1) · #6 opened 8 months ago by salmitta
Why does the 72B model have a different vocab size from the other models? (7) · #1 opened 11 months ago by Mikasaka
Using llama.cpp server, responses always end with <|im_end|> (1) · #2 opened 8 months ago by gilankpam
The LLM output is incomplete (1) · #11 opened 8 months ago by lijianqiang
GGUF models (1) · #1 opened 8 months ago by MaziyarPanahi
Is 14B coming? (4) · #3 opened 8 months ago by rombodawg
(Rebutted: this claim was proven false) "Fake coding scores" 0.73 at best (8) · #4 opened 8 months ago by rombodawg