
Trong Vu

tattrongvu

AI & ML interests

LLM, Reinforcement Learning, Robotics, Self-driving car, Computer Vision

Recent Activity

Organizations

T-Systems International

tattrongvu's activity

New activity in tsystems/colqwen2-7b-v1.0 about 18 hours ago

Update metadata with huggingface_hub

#1 opened about 24 hours ago by merve
New activity in tsystems/colqwen2-2b-v1.0-merged about 18 hours ago

Update metadata with huggingface_hub

#1 opened about 24 hours ago by merve
New activity in tsystems/colqwen2-2b-v1.0 about 18 hours ago

Update metadata with huggingface_hub

#1 opened about 24 hours ago by merve
New activity in tsystems/colqwen2-7b-v1.0-merged about 18 hours ago

Update metadata with huggingface_hub

#1 opened about 24 hours ago by merve
upvoted an article 1 day ago

Merge Large Language Models with mergekit

By mlabonne • 91

Great explanation ;)
I'm deciding between SLERP and Passthrough to create a smaller version of a big model and use it as a speculative draft model.

With Passthrough, would it make sense to pick layers evenly spaced, so that no two kept layers are too far apart (as mentioned in the SOLAR paper)? E.g. with original layers 1 to 10, pick 1, 4, 7, 10 to create a model about half the size.

Which method would you recommend?
Thanks in advance!
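The even layer-picking idea above can be sketched in a few lines. This is a minimal illustration only; `pick_layers` is a hypothetical helper, not part of mergekit's API:

```python
def pick_layers(num_layers: int, keep: int) -> list[int]:
    """Pick `keep` evenly spaced layer indices out of `num_layers`,
    always including the first and last layer, so adjacent kept
    layers are never too far apart."""
    if keep < 2 or keep > num_layers:
        raise ValueError("need 2 <= keep <= num_layers")
    step = (num_layers - 1) / (keep - 1)
    return [round(i * step) for i in range(keep)]

# 10 layers, keep 4 -> the 1,4,7,10 example above (0-indexed here)
print(pick_layers(10, 4))  # -> [0, 3, 6, 9]
```

In practice the chosen indices would then be expressed as layer ranges in the merge configuration rather than picked one by one.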

reacted to lewtun's post with 🔥 11 days ago
Post • 9875
We are reproducing the full DeepSeek R1 data and training pipeline so everybody can use their recipe. Instead of doing it in secret we can do it together in the open!

🧪 Step 1: replicate the R1-Distill models by distilling a high-quality reasoning corpus from DeepSeek-R1.

🧠 Step 2: replicate the pure RL pipeline that DeepSeek used to create R1-Zero. This will involve curating new, large-scale datasets for math, reasoning, and code.

🔥 Step 3: show we can go from base model -> SFT -> RL via multi-stage training.

Follow along: https://github.com/huggingface/open-r1