Junyang Lin

JustinLin610

AI & ML interests

Pretraining, NLP, CV, etc.

Posts 3

We have just released a small MoE model, Qwen1.5-MoE-A2.7B: a 14B-parameter model with only 2.7B activated parameters per token. Hype aside, I would love to share more details here on HF. If you don't know much about it yet, check our blog for more info: https://qwenlm.github.io/blog/qwen-moe/
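To see why the activated-parameter count matters, here is a toy back-of-the-envelope calculation. It assumes per-token decoder compute scales roughly with activated parameters (a common rule of thumb, not an official benchmark), and uses only the two numbers from the post:

```python
# Toy arithmetic based on the numbers in the post (illustrative only):
total_params = 14e9    # ~14B total parameters
active_params = 2.7e9  # 2.7B activated per token

# Per-token FLOPs of a decoder scale roughly with the parameters that
# actually run, so the MoE decodes like a much smaller dense model.
ratio = active_params / total_params
print(f"{ratio:.0%} of parameters active per token")  # prints "19% ..."
```

So at inference time the model costs roughly as much per token as a ~2.7B dense model, while retaining the capacity of its full 14B parameters.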

At the beginning, we were experimenting with MoE, getting Megatron to work well with MegaBlocks. As always, we started with small models first, but we struggled with a lot of details along the way.

With MegaBlocks and the many tricks that make MoE training work, it is almost impossible to fail at training; the real challenge is how good the resulting model is. Then things became more complex than I had expected. Fine-grained experts were a pain to get right, but damn, they work at this model scale. However, they add complexity to the architecture, which is partly why our code is not yet merged into llama.cpp: it really does cause problems. Shared experts look promising too, but more engineering effort is needed to really unleash their benefits for inference acceleration.
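For readers new to these two ideas, here is a minimal dependency-free sketch of an MoE layer combining fine-grained routed experts with always-on shared experts. All the sizes (`DIM`, `N_EXPERTS`, `TOP_K`, `N_SHARED`) are toy values I made up for illustration, not the real Qwen1.5-MoE configuration, and the "experts" are just random linear maps:

```python
import math
import random

random.seed(0)

# Toy sizes (illustrative assumptions, not the real Qwen1.5-MoE config):
DIM = 8         # hidden size
N_EXPERTS = 16  # "fine-grained": many small experts...
TOP_K = 4       # ...with several of them activated per token
N_SHARED = 2    # shared experts that every token always passes through

def make_linear(rows, cols):
    """A random matrix standing in for an expert / router weight."""
    return [[random.gauss(0.0, 0.1) for _ in range(cols)] for _ in range(rows)]

def matvec(mat, x):
    return [sum(w * v for w, v in zip(row, x)) for row in mat]

def softmax(xs):
    m = max(xs)
    es = [math.exp(v - m) for v in xs]
    s = sum(es)
    return [e / s for e in es]

experts = [make_linear(DIM, DIM) for _ in range(N_EXPERTS)]
shared = [make_linear(DIM, DIM) for _ in range(N_SHARED)]
router = make_linear(N_EXPERTS, DIM)

def moe_layer(x):
    # Route: score every fine-grained expert, keep the top-k,
    # and renormalize the kept gate weights so they sum to 1.
    probs = softmax(matvec(router, x))
    topk = sorted(range(N_EXPERTS), key=lambda e: -probs[e])[:TOP_K]
    norm = sum(probs[e] for e in topk)

    out = [0.0] * DIM
    for e in topk:
        y = matvec(experts[e], x)
        gate = probs[e] / norm
        out = [o + gate * yi for o, yi in zip(out, y)]

    # Shared experts: applied unconditionally, no routing involved.
    for s in shared:
        y = matvec(s, x)
        out = [o + yi for o, yi in zip(out, y)]
    return out, topk

x = [random.gauss(0.0, 1.0) for _ in range(DIM)]
out, chosen = moe_layer(x)
print(len(out), sorted(chosen))
```

The sketch also shows why inference engines need extra work to support this: the set of active experts changes per token, while the shared experts run for every token and could in principle be fused into the dense path.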

This is actually our first time releasing an MoE model to the community. We don't know how it will be received, and we are prepared for complaints. I just hope we can make things clear and provide a good recipe for playing with our MoE model, just as people do with Mixtral.
https://qwen.readthedocs.io/ 🔥 The official documentation for Qwen1.5 is here! It is bilingual (English and Chinese; it will become multilingual if I find the time), and covers simple inference, running locally with GGUF, Ollama, and more, as well as quantization, finetuning, and deployment. We will keep adding more content. Stay tuned!
