lsw825 committed
Commit bb39c64 · verified · 1 Parent(s): 391e7a8

Update README.md

Files changed (1)
  1. README.md +145 -141
README.md CHANGED

---
license: mit
library_name: transformers
---
<div align="center">
<a href="https://github.com/MoonshotAI/dummy.pdf"><img width="80%" src="figures/banner.png"></a>
</div>

<!-- # Muon is Scalable For LLM Training -->

<div align="center">
<a href="https://github.com/MoonshotAI/dummy.pdf"><img src="figures/logo.png" height="16" width="16" style="vertical-align:middle"><b> Tech Report</b></a> |
<a href="https://huggingface.co/moonshotai/Moonlight"><img src="https://huggingface.co/front/assets/huggingface_logo-noborder.svg" height="16" width="16" style="vertical-align:middle"><b> HuggingFace</b></a> |
<a href="#"><img src="figures/megatron.png" height="16" width="16" style="vertical-align:middle"><b> Megatron (coming soon)</b></a>
</div>

## Abstract
Recently, the [Muon optimizer](https://github.com/KellerJordan/Muon) has demonstrated strong results in training small-scale language models, but its scalability to larger models has not been proven. We identify two crucial techniques for scaling up Muon:

- **Weight Decay**: Critical for scaling to larger models
- **Consistent RMS Updates**: Enforcing a consistent root mean square (RMS) on model updates

These techniques allow Muon to work out-of-the-box on large-scale training without the need for hyper-parameter tuning. Scaling law experiments indicate that Muon is $\sim2\times$ more sample efficient than Adam under compute-optimal training.

Based on these improvements, we introduce **Moonlight**, a 3B/16B-parameter Mixture-of-Experts (MoE) model trained with 5.7T tokens using Muon. Our model improves the current Pareto frontier, achieving better performance with far fewer training FLOPs than prior models.

We open-source our Muon implementation, which is memory-optimal and communication-efficient. We also release the pretrained, instruction-tuned, and intermediate checkpoints to support future research.

Our code is available at [MoonshotAI/Moonlight](https://github.com/MoonshotAI/Moonlight).

## Key Ingredients

Our work builds upon Muon while systematically identifying and resolving its limitations in large-scale training scenarios. Our technical contributions include:

- **Analysis for Effective Scaling of Muon**: Through extensive analysis, we identify that weight decay plays a crucial role in Muon's scalability. In addition, we propose keeping a consistent update root mean square (RMS) across matrix and non-matrix parameters through parameter-wise update scale adjustments. These adjustments significantly enhance training stability (see the sketch after this list).

- **Efficient Distributed Implementation**: We develop a distributed version of Muon with ZeRO-1 style optimization, achieving optimal memory efficiency and reduced communication overhead while preserving the mathematical properties of the algorithm.

- **Scaling Law Validation**: We performed scaling law experiments comparing Muon with strong AdamW baselines and showed the superior performance of Muon (see the figure below). Based on the scaling law results, Muon achieves performance comparable to AdamW-trained counterparts while requiring only approximately 52% of the training FLOPs; at a fixed model size, training FLOPs scale linearly with tokens, so this is consistent with the roughly 2x sample efficiency noted above.
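
To make the weight-decay and RMS-consistency adjustments concrete, here is a minimal, illustrative sketch of a single Muon step on one 2-D weight matrix. It is not our released optimizer: the Newton-Schulz coefficients follow the public Muon reference code, the `0.2 * sqrt(max(m, n))` rescaling is one simple way to keep the update RMS consistent across matrix shapes, and the function names are illustrative. Please refer to [MoonshotAI/Moonlight](https://github.com/MoonshotAI/Moonlight) for the exact (distributed) implementation.

```python
import torch

def newton_schulz_orthogonalize(G, steps=5, eps=1e-7):
    # Approximately map G onto the nearest (semi-)orthogonal matrix using the
    # quintic Newton-Schulz iteration from the public Muon reference code.
    a, b, c = 3.4445, -4.7750, 2.0315
    X = G / (G.norm() + eps)
    transposed = G.size(0) > G.size(1)
    if transposed:  # iterate on the wide orientation
        X = X.T
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * A @ A) @ X
    return X.T if transposed else X

@torch.no_grad()
def muon_step(weight, grad, momentum, lr=2e-2, mu=0.95, weight_decay=0.1):
    # One illustrative Muon update for a single 2-D weight matrix.
    momentum.mul_(mu).add_(grad)                    # momentum accumulation
    update = newton_schulz_orthogonalize(momentum)  # orthogonalized update direction
    m, n = weight.shape
    update = update * (0.2 * max(m, n) ** 0.5)      # keep update RMS consistent across shapes
    weight.mul_(1 - lr * weight_decay)              # decoupled weight decay
    weight.add_(update, alpha=-lr)                  # apply the update
```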

<div align="center">
<img width="90%" src="figures/scaling.png">
<p><em>Scaling up with Muon. <b>(a)</b> Scaling law experiments comparing Muon and Adam: Muon is about 2 times more sample efficient than Adam. <b>(b)</b> The MMLU performance of our Moonlight model (optimized with Muon) and other comparable models: Moonlight advances the Pareto frontier of performance vs. training FLOPs.</em></p>
</div>

## Performance

We compared Moonlight with SOTA public models at a similar scale:

- **Llama3.2-3B** is a 3B-parameter dense model trained with 9T tokens
- **Qwen2.5-3B** is a 3B-parameter dense model trained with 18T tokens
- **DeepSeek-V2-Lite** is a 2.4B/16B-parameter MoE model trained with 5.7T tokens

| | **Benchmark (Metric)** | **Llama3.2-3B** | **Qwen2.5-3B** | **DSV2-Lite** | **Moonlight** |
|---|---|---|---|---|---|
| | Activated Params† | 2.81B | 2.77B | 2.24B | 2.24B |
| | Total Params† | 2.81B | 2.77B | 15.29B | 15.29B |
| | Training Tokens | 9T | 18T | 5.7T | 5.7T |
| | Optimizer | AdamW | * | AdamW | Muon |
| **English** | MMLU | 54.75 | 65.6 | 58.3 | **70.0** |
| | MMLU-pro | 25.0 | 34.6 | 25.5 | **42.4** |
| | BBH | 46.8 | 56.3 | 44.1 | **65.2** |
| | TriviaQA‡ | 59.6 | 51.1 | 65.1 | **66.3** |
| **Code** | HumanEval | 28.0 | 42.1 | 29.9 | **48.1** |
| | MBPP | 48.7 | 57.1 | 43.2 | **63.8** |
| **Math** | GSM8K | 34.0 | **79.1** | 41.1 | 77.4 |
| | MATH | 8.5 | 42.6 | 17.1 | **45.3** |
| | CMath | - | 80.0 | 58.4 | **81.1** |
| **Chinese** | C-Eval | - | 75.0 | 60.3 | **77.2** |
| | CMMLU | - | 75.0 | 64.3 | **78.2** |

*The Qwen 2 & 2.5 reports did not disclose optimizer information. †The reported parameter counts exclude the embedding parameters. ‡We test all listed models with the full set of TriviaQA.*

## Example usage

### Model Download

<div align="center">

| **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download Link** |
| :------------: | :------------: | :------------: | :------------: | :------------: |
| Moonlight | 16B | 3B | 8K | [🤗 Hugging Face](https://huggingface.co/moonshotai/Moonlight) |
| Moonlight-Instruct | 16B | 3B | 8K | [🤗 Hugging Face](https://huggingface.co/moonshotai/Moonlight-Instruct) |

</div>
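
You can also fetch the weights programmatically. A minimal sketch using the `huggingface_hub` package (the local directory path below is just an example):

```python
from huggingface_hub import snapshot_download

# Download the Moonlight weights from the Hugging Face Hub into a local folder.
snapshot_download(repo_id="moonshotai/Moonlight", local_dir="./Moonlight")
```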

### Inference with Hugging Face Transformers

Below we show how to run inference with our models using the `transformers` library. We recommend Python 3.10, `torch>=2.1.0`, and the latest version of `transformers` for the development environment.

For our pretrained model (Moonlight):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the pretrained (base) model and its tokenizer.
model_path = "path-to-your-checkpoint"
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Plain text completion: the base model simply continues the prompt.
prompt = "1+1=2, 1+2="
inputs = tokenizer(prompt, return_tensors="pt", padding=True, truncation=True).to(model.device)
generated_ids = model.generate(**inputs, max_new_tokens=100)
response = tokenizer.batch_decode(generated_ids)[0]
print(response)
```

For our instruct model (Moonlight-Instruct):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the instruction-tuned model and its tokenizer.
model_path = "path-to-your-checkpoint"
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Build a chat-formatted prompt with the model's chat template.
messages = [
    {"role": "system", "content": "You are a helpful assistant provided by Moonshot-AI."},
    {"role": "user", "content": "Is 123 a prime?"}
]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
generated_ids = model.generate(inputs=input_ids, max_new_tokens=500)
response = tokenizer.batch_decode(generated_ids)[0]
print(response)
```
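
If you only want the assistant's reply without the echoed prompt, you can slice off the prompt tokens before decoding. This is an optional variation that continues from the variables defined in the snippet above:

```python
# Decode only the newly generated tokens, skipping the prompt portion.
new_tokens = generated_ids[:, input_ids.shape[1]:]
reply = tokenizer.batch_decode(new_tokens, skip_special_tokens=True)[0]
print(reply)
```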

Moonlight has the same architecture as DeepSeek-V3, which is supported by many popular inference engines such as vLLM and SGLang, so our model can also be deployed easily with these tools. A minimal offline-inference sketch with vLLM is shown below.
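
The snippet below is an illustrative sketch rather than an official recipe; it assumes a vLLM release recent enough to support the DeepSeek-V3 architecture, and the sampling settings are placeholders:

```python
from vllm import LLM, SamplingParams

# Load the base Moonlight model with vLLM (requires a vLLM version that
# supports the DeepSeek-V3 architecture).
llm = LLM(model="moonshotai/Moonlight", trust_remote_code=True)

# Simple text completion, mirroring the transformers example above.
sampling_params = SamplingParams(temperature=0.7, max_tokens=100)
outputs = llm.generate(["1+1=2, 1+2="], sampling_params)
print(outputs[0].outputs[0].text)
```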

## Citation
If you find Moonlight useful or want to use it in your projects, please kindly cite our paper:
```bibtex
@article{MoonshotAI,
  author = {Kimi Team},
  title = {Muon is Scalable For LLM Training},
  year = {2025},
}
```