YikangS committed
Commit
64210c3
1 Parent(s): 2d4540e

update readme

Files changed (1)
  1. README.md +11 -10
README.md CHANGED
@@ -1,11 +1,8 @@
  ---
  license: apache-2.0
  ---
- ---
- license: apache-2.0
- ---
  # **MoLM**
- MoLM is a collection of MoE-based language models ranging in scale from 4 billion to 8 billion parameters. This is the repository for the 8B pretrained model, converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
+ MoLM is a collection of MoE-based language models ranging in scale from 4 billion to 8 billion parameters. This is the repository for the MoLM-700M-8B pretrained model, converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.

  **Model Usage**
  To load the model, you need to install the [ModuleFormer package](https://github.com/IBM/ModuleFormer). Then you can load the model with the following code:
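The loading snippet itself (new-file lines 9 through 17) is collapsed in this diff; only the `model = AutoModelForCausalLM.from_pretrained('ibm/MoLM-350M-4B')` line is visible in the next hunk header. A minimal sketch of that usage, assuming the standard Transformers auto classes, is shown below; the tokenizer call and the generation example are illustrative additions, not lines from the README.

```python
# Sketch only: the README's actual snippet is collapsed in this diff.
# The ModuleFormer package (https://github.com/IBM/ModuleFormer) must be
# installed first; it provides the custom model classes these checkpoints use.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained('ibm/MoLM-350M-4B')
model = AutoModelForCausalLM.from_pretrained('ibm/MoLM-350M-4B')  # call visible in the hunk header below

# Illustrative generation call (an assumption, not shown on this page).
inputs = tokenizer("Mixture-of-experts language models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Swap in `ibm/MoLM-700M-4B` or `ibm/MoLM-700M-8B` to load the other checkpoints listed in the index at the bottom of the README.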
@@ -21,30 +18,31 @@ model = AutoModelForCausalLM.from_pretrained('ibm/MoLM-350M-4B')
  ```

  **Model Details**
- MoLM-350M-4B is a MoE-based language models. It has 4 billion parameters, but each input token will only use 350M parameteres during its inference. Thus, it's computationally equivelant to a 350M dense model.
+ MoLM-350M-4B is a MoE-based language model. It has 4 billion parameters, but each input token only uses 350M parameters during inference. Thus, it is computationally equivalent to a 350M dense model.
+ MoLM-700M-4B has 4 billion parameters and is computationally equivalent to a 700M dense model.
  MoLM-700M-8B has 8 billion parameters and is computationally equivalent to a 700M dense model.
  All models are trained on 300 billion tokens from publicly available sources, with a learning rate of 3.0 x 10<sup>-4</sup> and a global batch size of 3M tokens.

  **Model Developers** IBM

- **Variations** MoLM comes in two different parameter sizes — 4B and 8B.
+ **Variations** MoLM comes in two parameter sizes, 4B and 8B. The 4B size has two variants with different computation costs: 350M and 700M.

  **Input** Models input text only.

  **Output** Models generate text only.

- **Model Architecture** MoLM is an auto-regressive language model that uses the ModuleFormer architecture. It has 16 attention modules in each attention layer and 32 MLP modules in each MLP layer. During inference, the model activate 2 modules in each layer for each token condition on the inputs. MoLM-350M-4B has 24 blocks and MoLM-700M-8B has 48 blocks.
+ **Model Architecture** MoLM is an auto-regressive language model that uses the ModuleFormer architecture. It has 16 attention modules in each attention layer and 32 MLP modules in each MLP layer. During inference, in each layer, MoLM-350M-4B and MoLM-700M-8B activate 2 modules per token, while MoLM-700M-4B activates 4 modules. MoLM-350M-4B and MoLM-700M-4B have 24 blocks, and MoLM-700M-8B has 48 blocks.

  **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.

  **Research Paper** ["ModuleFormer: Modularity Emerges from Mixture-of-Experts"](https://arxiv.org/abs/2306.04640)

  ## Training Data
- MoLM was pretrained on 300 billion tokens of data from publicly available sources.
+ MoLM models are pretrained on 300 billion tokens of data from publicly available sources.

  ## Evaluation Results

- In this section, we report the results for the MoLM-350M-4B and MoLM-700M-8B models on standard academic benchmarks.For all the evaluations, we use our internal evaluations library.
+ In this section, we report results for the MoLM models on standard academic benchmarks. For all evaluations, we use the [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness).

  |Model|Latency|Memory|Throughput|Hellaswag|PIQA|ARC-e|ARC-c|OBQA|
  |---|---|---|---|---|---|---|---|---|
@@ -55,6 +53,7 @@ In this section, we report the results for the MoLM-350M-4B and MoLM-700M-8B mod
  |MoLM-350M-4B|497|27|71017|39.21|70.13|56.44|23.55|20.8|
  |GPT-Neo 2.7B|1737|35|18788|42.71|72.2|61.07|27.47|23.2|
  |Pythia 2.8B|2111|70|15522|45.34|73.99|64.35|29.35|23.8|
+ |MoLM-700M-4B|863|27|39931|42.20|73.01|60.82|25.94|22.6|
  |MoLM-700M-8B|939|38|37419|43.33|72.91|62.46|27.90|23.8|

  |Model| |TriviaQA| | | HumanEval| |Wikitext|
@@ -65,7 +64,8 @@ In this section, we report the results for the MoLM-350M-4B and MoLM-700M-8B mod
  |Pythia 1.4B |5.30 |9.87 |12.84 |2.19 |7.31 |14.33 |14.71|
  |MoLM-350M-4B |5.40 |11.12 |13.70 |3.04 |6.99 |13.79 |15.15 |
  |GPT-Neo 2.7B |4.82 |11.23 |13.67 |4.89 |9.54 |17.90 |13.93 |
- |Pythia 2.8B |7.38 |15.58 |18.98 |4.91 |11.76 |21.54 |12.68|
+ |Pythia 2.8B |7.38 |15.58 |18.98 |4.91 |11.76 |21.54 |12.68|
+ |MoLM-700M-4B|9.07|14.24|16.49|5.50|10.65|20.27|13.20|
  |MoLM-700M-8B |11.47 |16.73 |20.75 |5.51 |12.58 |20.40 |12.97 |

  ## Ethical Considerations and Limitations
@@ -75,4 +75,5 @@ MoLM is a new technology that carries risks with use. Testing conducted to date
  |Model|MoLM|
  |---|---|
  |350M-4B| [Link](https://huggingface.co/ibm/MoLM-350M-4B) |
+ |700M-4B| [Link](https://huggingface.co/ibm/MoLM-700M-4B) |
  |700M-8B| [Link](https://huggingface.co/ibm/MoLM-700M-8B) |
 
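The **Model Architecture** entry above describes sparse module activation: 16 attention modules and 32 MLP modules per layer, of which only 2 (or 4 for MoLM-700M-4B) are used for each token, which is how a 4-billion-parameter checkpoint ends up with roughly the compute cost of a 350M dense model. Below is a generic top-k mixture-of-experts MLP layer in PyTorch, included only to illustrate that routing pattern; it is not the ModuleFormer implementation, and every name in it is hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoEMLP(nn.Module):
    """Generic top-k mixture-of-experts MLP, for illustration only (not the
    ModuleFormer code): each token is routed to k of n experts, so only
    roughly k/n of the MLP parameters are touched per token."""

    def __init__(self, d_model: int, d_hidden: int, n_experts: int = 32, k: int = 2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model) -> flatten tokens for routing
        tokens = x.reshape(-1, x.size(-1))
        scores = self.router(tokens)                # (n_tokens, n_experts)
        weights, idx = scores.topk(self.k, dim=-1)  # pick k experts per token
        weights = F.softmax(weights, dim=-1)

        out = torch.zeros_like(tokens)
        for slot in range(self.k):
            for e in range(len(self.experts)):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * self.experts[e](tokens[mask])
        return out.reshape_as(x)

# Toy usage: 32 experts with top-2 routing, mirroring the MoLM MLP layers.
layer = TopKMoEMLP(d_model=64, d_hidden=256, n_experts=32, k=2)
y = layer(torch.randn(1, 8, 64))
print(y.shape)  # torch.Size([1, 8, 64])
```

The per-expert loop is written for clarity; practical implementations batch tokens by expert for efficiency.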
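The updated evaluation section states that results come from the [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness). As a rough guide to reproducing the zero-shot accuracy columns (HellaSwag, PIQA, ARC-e, ARC-c, OBQA), recent harness releases expose a `simple_evaluate` entry point; the snippet below is a sketch of that usage with an assumed batch size, since the exact harness version and settings behind the table are not given on this page.

```python
# Sketch: reproducing the zero-shot accuracy columns with EleutherAI's
# lm-evaluation-harness (recent 0.4.x releases). The ModuleFormer package
# must also be installed so the checkpoint's custom classes can be loaded.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=ibm/MoLM-350M-4B",
    tasks=["hellaswag", "piqa", "arc_easy", "arc_challenge", "openbookqa"],
    num_fewshot=0,
    batch_size=8,  # assumed; not specified in the README
)
for task, metrics in results["results"].items():
    print(task, metrics)
```

The latency, memory, and throughput columns come from a separate profiling setup that this page does not describe, so they are not covered by this sketch.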