arxiv:2406.06563

Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models

Published on Jun 3, 2024
Submitted by akhaliq on Jun 12, 2024
Abstract

In this technical report, we introduce the training methodologies implemented in the development of Skywork-MoE, a high-performance mixture-of-experts (MoE) large language model (LLM) with 146 billion parameters and 16 experts. It is initialized from the pre-existing dense checkpoints of our Skywork-13B model. We explore the comparative effectiveness of upcycling versus training from scratch initializations. Our findings suggest that the choice between these two approaches should consider both the performance of the existing dense checkpoints and the MoE training budget. We highlight two innovative techniques: gating logit normalization, which improves expert diversification, and adaptive auxiliary loss coefficients, allowing for layer-specific adjustment of auxiliary loss coefficients. Our experimental results validate the effectiveness of these methods. Leveraging these techniques and insights, we trained our upcycled Skywork-MoE on a condensed subset of our SkyPile corpus. The evaluation results demonstrate that our model delivers strong performance across a wide range of benchmarks.
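
The abstract names two techniques without spelling out their mechanics. Below is a minimal PyTorch sketch of how they might look in a router: the `scale` hyperparameter, the Switch/GShard-style load-balancing loss, and the drop-rate-driven coefficient update are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of (1) gating logit normalization and (2) a layer-wise
# adaptive auxiliary loss coefficient. Shapes, the scale factor, and the
# update rule are assumptions for illustration, not taken from the paper.
import torch
import torch.nn.functional as F


def normalized_gating(logits: torch.Tensor, scale: float = 1.0, eps: float = 1e-6):
    """Standardize gating logits per token before the softmax.

    logits: (num_tokens, num_experts) raw router outputs.
    Standardizing to zero mean / unit variance and rescaling sharpens the
    routing distribution, which is the stated goal of gating logit
    normalization: encouraging experts to diversify.
    """
    mean = logits.mean(dim=-1, keepdim=True)
    std = logits.std(dim=-1, keepdim=True)
    normed = (logits - mean) / (std + eps)
    return F.softmax(scale * normed, dim=-1)


def load_balance_loss(gate_probs: torch.Tensor, expert_mask: torch.Tensor):
    """Standard Switch-style auxiliary load-balancing loss (assumed baseline).

    gate_probs:  (num_tokens, num_experts) softmax router probabilities.
    expert_mask: (num_tokens, num_experts) one-hot top-1 routing decisions.
    """
    num_experts = gate_probs.size(-1)
    fraction_routed = expert_mask.float().mean(dim=0)  # share of tokens per expert
    mean_gate_prob = gate_probs.mean(dim=0)            # average router confidence
    return num_experts * torch.sum(fraction_routed * mean_gate_prob)


class AdaptiveAuxCoefficient:
    """Per-layer auxiliary-loss coefficient, adjusted from observed routing.

    The abstract only says the coefficients are layer-specific; the rule here
    (raise alpha when the layer drops many tokens, decay it when routing is
    balanced) is a hypothetical stand-in for the paper's actual schedule.
    """

    def __init__(self, init_alpha: float = 0.01, lr: float = 0.1,
                 target_drop_rate: float = 0.0):
        self.alpha = init_alpha
        self.lr = lr
        self.target = target_drop_rate

    def update(self, observed_drop_rate: float) -> float:
        # Increase alpha if this layer drops more tokens than desired,
        # decrease it otherwise; clamp to keep the loss term active.
        self.alpha *= (1.0 + self.lr * (observed_drop_rate - self.target))
        self.alpha = max(self.alpha, 1e-4)
        return self.alpha
```

In this sketch, `normalized_gating` would replace a plain `softmax(logits)` in each MoE router, and each layer would hold its own `AdaptiveAuxCoefficient` whose `alpha` scales that layer's `load_balance_loss` term in the total training loss.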

Community

Congrats on the release and paper! Really cool to see a new MoE from the Chinese community!🔥


What I would do to be as on top of the Chinese community research as you lol

great job.

In Section 3, "Upcycling vs. From Scratch", training the MoE model from scratch is reported to be better than upcycling.

But in Section 5, "Skywork-MoE", the model is initialized from the in-house pre-trained Skywork-13B.

Could you explain this in more detail?


Models citing this paper 2

Datasets citing this paper 0


Spaces citing this paper 0


Collections including this paper 6