Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


MobileLLM-600M - GGUF
- Model creator: https://huggingface.co/facebook/
- Original model: https://huggingface.co/facebook/MobileLLM-600M/


| Name | Quant method | Size |
| ---- | ---- | ---- |
| [MobileLLM-600M.Q2_K.gguf](https://huggingface.co/RichardErkhov/facebook_-_MobileLLM-600M-gguf/blob/main/MobileLLM-600M.Q2_K.gguf) | Q2_K | 0.34GB |
| [MobileLLM-600M.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/facebook_-_MobileLLM-600M-gguf/blob/main/MobileLLM-600M.Q3_K_S.gguf) | Q3_K_S | 0.34GB |
| [MobileLLM-600M.Q3_K.gguf](https://huggingface.co/RichardErkhov/facebook_-_MobileLLM-600M-gguf/blob/main/MobileLLM-600M.Q3_K.gguf) | Q3_K | 0.36GB |
| [MobileLLM-600M.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/facebook_-_MobileLLM-600M-gguf/blob/main/MobileLLM-600M.Q3_K_M.gguf) | Q3_K_M | 0.36GB |
| [MobileLLM-600M.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/facebook_-_MobileLLM-600M-gguf/blob/main/MobileLLM-600M.Q3_K_L.gguf) | Q3_K_L | 0.38GB |
| [MobileLLM-600M.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/facebook_-_MobileLLM-600M-gguf/blob/main/MobileLLM-600M.IQ4_XS.gguf) | IQ4_XS | 0.35GB |
| [MobileLLM-600M.Q4_0.gguf](https://huggingface.co/RichardErkhov/facebook_-_MobileLLM-600M-gguf/blob/main/MobileLLM-600M.Q4_0.gguf) | Q4_0 | 0.35GB |
| [MobileLLM-600M.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/facebook_-_MobileLLM-600M-gguf/blob/main/MobileLLM-600M.IQ4_NL.gguf) | IQ4_NL | 0.36GB |
| [MobileLLM-600M.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/facebook_-_MobileLLM-600M-gguf/blob/main/MobileLLM-600M.Q4_K_S.gguf) | Q4_K_S | 0.41GB |
| [MobileLLM-600M.Q4_K.gguf](https://huggingface.co/RichardErkhov/facebook_-_MobileLLM-600M-gguf/blob/main/MobileLLM-600M.Q4_K.gguf) | Q4_K | 0.43GB |
| [MobileLLM-600M.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/facebook_-_MobileLLM-600M-gguf/blob/main/MobileLLM-600M.Q4_K_M.gguf) | Q4_K_M | 0.43GB |
| [MobileLLM-600M.Q4_1.gguf](https://huggingface.co/RichardErkhov/facebook_-_MobileLLM-600M-gguf/blob/main/MobileLLM-600M.Q4_1.gguf) | Q4_1 | 0.39GB |
| [MobileLLM-600M.Q5_0.gguf](https://huggingface.co/RichardErkhov/facebook_-_MobileLLM-600M-gguf/blob/main/MobileLLM-600M.Q5_0.gguf) | Q5_0 | 0.42GB |
| [MobileLLM-600M.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/facebook_-_MobileLLM-600M-gguf/blob/main/MobileLLM-600M.Q5_K_S.gguf) | Q5_K_S | 0.45GB |
| [MobileLLM-600M.Q5_K.gguf](https://huggingface.co/RichardErkhov/facebook_-_MobileLLM-600M-gguf/blob/main/MobileLLM-600M.Q5_K.gguf) | Q5_K | 0.46GB |
| [MobileLLM-600M.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/facebook_-_MobileLLM-600M-gguf/blob/main/MobileLLM-600M.Q5_K_M.gguf) | Q5_K_M | 0.46GB |
| [MobileLLM-600M.Q5_1.gguf](https://huggingface.co/RichardErkhov/facebook_-_MobileLLM-600M-gguf/blob/main/MobileLLM-600M.Q5_1.gguf) | Q5_1 | 0.46GB |
| [MobileLLM-600M.Q6_K.gguf](https://huggingface.co/RichardErkhov/facebook_-_MobileLLM-600M-gguf/blob/main/MobileLLM-600M.Q6_K.gguf) | Q6_K | 0.6GB |
| [MobileLLM-600M.Q8_0.gguf](https://huggingface.co/RichardErkhov/facebook_-_MobileLLM-600M-gguf/blob/main/MobileLLM-600M.Q8_0.gguf) | Q8_0 | 0.63GB |
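As a rough guide to choosing a file, larger quants generally preserve more quality at the cost of memory. A minimal sketch of filtering the table above by a size budget (the sizes are hard-coded from the table; the helper name is ours):

```python
# File sizes in GB, copied from the quant table above.
QUANTS = {
    "Q2_K": 0.34, "Q3_K_S": 0.34, "Q3_K": 0.36, "Q3_K_M": 0.36, "Q3_K_L": 0.38,
    "IQ4_XS": 0.35, "Q4_0": 0.35, "IQ4_NL": 0.36, "Q4_K_S": 0.41, "Q4_K": 0.43,
    "Q4_K_M": 0.43, "Q4_1": 0.39, "Q5_0": 0.42, "Q5_K_S": 0.45, "Q5_K": 0.46,
    "Q5_K_M": 0.46, "Q5_1": 0.46, "Q6_K": 0.6, "Q8_0": 0.63,
}

def largest_quant_under(budget_gb: float) -> str:
    """Pick the largest file (a rough proxy for quality) within the budget."""
    candidates = [(size, name) for name, size in QUANTS.items() if size <= budget_gb]
    if not candidates:
        raise ValueError("no quant fits the given budget")
    return max(candidates)[1]
```

Note the file size is not the full memory footprint at runtime; the KV cache and compute buffers add overhead on top.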



Original model description:
---
license: cc-by-nc-4.0
library_name: transformers
---
# Model Details

MobileLLM was introduced in "[MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases](https://arxiv.org/abs/2402.14905)", published at ICML 2024.

**Model Developer**: Meta

**Model Architecture**: MobileLLM is an auto-regressive language model leveraging an optimized transformer architecture, specifically engineered for on-device applications with constrained resources. MobileLLM integrates several key techniques: (1) SwiGLU activation function, (2) deep and thin architectures, (3) embedding sharing, and (4) grouped-query attention. MobileLLM-125M/350M attains a remarkable 2.7%/4.3% accuracy boost over the preceding 125M/350M SoTA models on zero-shot commonsense reasoning tasks. In our updated version, we further demonstrate that our design philosophy scales effectively to larger models, with SoTA results for MobileLLM-600M/1B/1.5B.

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/660f893bae89429c07a32cdb/ahtsJXC5vBVIdmsMQDNHv.jpeg)

| | # Layers | # Attention Heads | # KV Heads | Token Dimension | Params |
| --- | --- | --- | --- | --- | --- |
| MobileLLM-125M | 30 | 9 | 3 | 576 | 124.6M |
| MobileLLM-350M | 32 | 15 | 5 | 960 | 345.3M |
| MobileLLM-600M | 40 | 18 | 6 | 1152 | 603.1M |
| MobileLLM-1B | 54 | 20 | 5 | 1280 | 1.01B |
| MobileLLM-1.5B | 54 | 25 | 5 | 1600 | 1.51B |
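The parameter counts in the table can be roughly sanity-checked from the architecture columns. A sketch for MobileLLM-600M, under our own assumptions (a LLaMA-style 32k vocabulary and a SwiGLU hidden size of 3072, neither of which is stated in the table; norm weights are ignored):

```python
# Rough parameter estimate for MobileLLM-600M from the architecture table above.
layers, heads, kv_heads, dim = 40, 18, 6, 1152
vocab = 32000    # assumption: LLaMA-style tokenizer vocabulary
ffn_dim = 3072   # assumption: SwiGLU hidden size, not given in the table
head_dim = dim // heads  # 64

# Embedding table; with "embedding sharing" the input and output embeddings
# are one matrix, so it is counted once.
emb = vocab * dim
# Grouped-query attention: full-width Q and O projections, narrower K and V.
attn = 2 * dim * dim + 2 * dim * (kv_heads * head_dim)
# SwiGLU uses three weight matrices (gate, up, down).
ffn = 3 * dim * ffn_dim

total = emb + layers * (attn + ffn)
print(f"{total / 1e6:.1f}M")  # lands close to the 603.1M reported above
```

The estimate is only meant to show how the architecture columns relate to the Params column; the exact figure depends on details the table does not give.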

| | Training Data | Input modalities | Output modalities | Context Length | GQA | Shared Embeddings | Token count |
| --- | --- | --- | --- | --- | --- | --- | --- |
| MobileLLM-125M | Publicly available online data. | Text | Text | 2k | Yes | Yes | 1T tokens |
| MobileLLM-350M | Publicly available online data. | Text | Text | 2k | Yes | Yes | 1T tokens |
| MobileLLM-600M | Publicly available online data. | Text | Text | 2k | Yes | Yes | 1T tokens |
| MobileLLM-1B | Publicly available online data. | Text | Text | 2k | Yes | Yes | 1T tokens |
| MobileLLM-1.5B | Publicly available online data. | Text | Text | 2k | Yes | Yes | 1T tokens |


# How to use
We provide two ways to run the model:

[HuggingFace](#huggingface)

[MobileLLM codebase](#mobilellm-codebase)

## HuggingFace
To load the pretrained model for further finetuning or evaluation:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/MobileLLM-600M", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("facebook/MobileLLM-600M", trust_remote_code=True)
```
Note that the default tokenizer does not contain special tokens. You can add them, for example:
```python
tokenizer.add_special_tokens(
    {
        "eos_token": "</s>",
        "bos_token": "<s>",
        "unk_token": "<unk>",
    }
)
```
## MobileLLM codebase
We provide the pretraining code at https://github.com/facebookresearch/MobileLLM:

```bash
git clone https://github.com/facebookresearch/MobileLLM
pip install -r requirement.txt

# pre-process the data and specify the data path in pretrain.sh
# run pretraining
bash pretrain.sh
```
We also provide an evaluation script for calculating the perplexity on the wikitext-2 test split:
```bash
bash eval.sh
```

You can find more details in the GitHub repo.

# Training cost
Training MobileLLM on 1T tokens with 32 NVIDIA A100 80G GPUs takes the following number of days.
| 125M | 350M | 600M | 1B | 1.5B |
| --- | --- | --- | --- | --- |
| ~3 days | ~6 days | ~8 days | ~12 days | ~18 days |
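In aggregate GPU-hours, the figures above work out roughly as follows (a back-of-the-envelope sketch using the approximate day counts from the table):

```python
# Approximate A100 GPU-hours: days from the table x 24 hours x 32 GPUs.
days = {"125M": 3, "350M": 6, "600M": 8, "1B": 12, "1.5B": 18}
gpu_hours = {model: d * 24 * 32 for model, d in days.items()}
print(gpu_hours["600M"])  # 6144
```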


# Evaluation
We evaluate the pretrained MobileLLM models on zero-shot common sense reasoning tasks.

## MobileLLM-125M

| model | boolq | piqa | siqa | hellaswag | winogrande | arc_easy | arc_challenge | obqa | avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| OPT-125M | 41.3 | 25.2 | 57.5 | 62.0 | 41.9 | 31.1 | 31.2 | 50.8 | 42.6 |
| GPT-neo-125M | 40.7 | 24.8 | 61.3 | 62.5 | 41.9 | 29.7 | 31.6 | 50.7 | 42.9 |
| Pythia-160M | 40.0 | 25.3 | 59.5 | 62.0 | 41.5 | 29.9 | 31.2 | 50.9 | 42.5 |
| **MobileLLM-125M** | 43.9 | 27.1 | 60.2 | 65.3 | 42.4 | 38.9 | 39.5 | 53.1 | **46.3** |
| **MobileLLM-LS-125M** | 45.8 | 28.7 | 60.4 | 65.7 | 42.9 | 39.5 | 41.1 | 52.1 | **47.0** |

## MobileLLM-350M

| model | boolq | piqa | siqa | hellaswag | winogrande | arc_easy | arc_challenge | obqa | avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| OPT-350M | 41.9 | 25.7 | 54.0 | 64.8 | 42.6 | 36.2 | 33.3 | 52.4 | 43.9 |
| Pythia-410M | 47.1 | 30.3 | 55.3 | 67.2 | 43.1 | 40.1 | 36.2 | 53.4 | 46.6 |
| **MobileLLM-350M** | 53.8 | 33.5 | 62.4 | 68.6 | 44.7 | 49.6 | 40.0 | 57.6 | **51.3** |
| **MobileLLM-LS-350M** | 54.4 | 32.5 | 62.8 | 69.8 | 44.1 | 50.6 | 45.8 | 57.2 | **52.1** |

## MobileLLM-600M

| model | boolq | piqa | siqa | hellaswag | winogrande | arc_easy | arc_challenge | obqa | avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Qwen1.5-500M | 54.7 | 32.1 | 46.9 | 68.9 | 46.0 | 48.8 | 37.7 | 55.0 | 48.8 |
| BLOOM-560M | 43.7 | 27.5 | 53.7 | 65.1 | 42.5 | 36.5 | 32.6 | 52.2 | 44.2 |
| MobiLlama-800M | 52.0 | 31.7 | 54.6 | 73.0 | 43.3 | 52.3 | 42.5 | 56.3 | 50.7 |
| **MobileLLM-600M** | 58.1 | 35.8 | 61.0 | 72.3 | 44.9 | 55.9 | 47.9 | 58.6 | **54.3** |
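The avg. column is simply the mean of the eight task scores; for instance, recomputing it for the MobileLLM-600M row:

```python
# Recompute the reported average for the MobileLLM-600M row above.
scores = [58.1, 35.8, 61.0, 72.3, 44.9, 55.9, 47.9, 58.6]
avg = sum(scores) / len(scores)
print(round(avg, 1))  # 54.3
```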

## MobileLLM-1B

| model | boolq | piqa | siqa | hellaswag | winogrande | arc_easy | arc_challenge | obqa | avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Pythia-1B | 49.9 | 30.4 | 58.7 | 69.2 | 43.3 | 47.4 | 38.6 | 52.2 | 48.7 |
| MobiLlama-1B | 59.7 | 38.4 | 59.2 | 74.5 | 44.9 | 62.0 | 43.7 | 59.0 | 55.2 |
| Falcon-1B | 59.5 | 38.4 | 63.9 | 74.6 | 44.6 | 62.9 | 45.6 | 60.9 | 56.3 |
| BLOOM-1.1B | 47.6 | 27.3 | 58.6 | 67.0 | 42.4 | 42.2 | 36.6 | 53.8 | 46.9 |
| TinyLlama-1.1B | 59.2 | 37.1 | 58.1 | 72.9 | 43.9 | 59.1 | 44.7 | 58.8 | 54.2 |
| **MobileLLM-1B** | 63.0 | 39.0 | 66.7 | 74.4 | 45.0 | 61.4 | 46.8 | 62.3 | **57.3** |

## MobileLLM-1.5B

| model | boolq | piqa | siqa | hellaswag | winogrande | arc_easy | arc_challenge | obqa | avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GPT-neo-1.3B | 51.3 | 33.0 | 61.8 | 70.9 | 43.7 | 48.6 | 41.2 | 54.5 | 50.6 |
| OPT-1.3B | 54.4 | 31.7 | 58.4 | 71.5 | 44.7 | 53.7 | 44.6 | 59.1 | 52.3 |
| BLOOM-1.7B | 50.9 | 31.2 | 61.7 | 70.0 | 43.2 | 47.2 | 36.2 | 56.1 | 49.6 |
| Qwen1.5-1.8B | 61.1 | 36.5 | 68.3 | 74.1 | 47.2 | 60.4 | 42.9 | 61.2 | 56.5 |
| GPT-neo-2.7B | 55.8 | 34.3 | 62.4 | 72.9 | 43.6 | 55.6 | 40.0 | 57.9 | 52.8 |
| OPT-2.7B | 56.6 | 34.6 | 61.8 | 74.5 | 45.6 | 60.2 | 48.2 | 59.6 | 55.1 |
| Pythia-2.8B | 59.4 | 38.9 | 66.1 | 73.8 | 44.5 | 59.6 | 45.0 | 59.4 | 55.8 |
| BLOOM-3B | 55.1 | 33.6 | 62.1 | 70.5 | 43.2 | 53.9 | 41.6 | 58.2 | 52.3 |
| **MobileLLM-1.5B** | 67.5 | 40.9 | 65.7 | 74.8 | 46.4 | 64.5 | 50.5 | 64.7 | **59.4** |

# Citation

If you find our code useful for your research, please consider citing:

    @article{liu2024mobilellm,
      title={MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases},
      author={Liu, Zechun and Zhao, Changsheng and Iandola, Forrest and Lai, Chen and Tian, Yuandong and Fedorov, Igor and Xiong, Yunyang and Chang, Ernie and Shi, Yangyang and Krishnamoorthi, Raghuraman and others},
      journal={arXiv preprint arXiv:2402.14905},
      year={2024}
    }

# License

MobileLLM is currently licensed under CC-BY-NC 4.0.