---
language:
- en

tags:
- text2text-generation

widget:
- text: "The <extra_id_0> walks in <extra_id_1> park"
  example_title: "Masked Language Modeling"

datasets:
- c4
- xsum

license: apache-2.0
---

# Model Card for Switch Transformers Large - 128 experts

![model image](https://s3.amazonaws.com/moonup/production/uploads/1666966931908-62441d1d9fdefb55a0b7d12c.png)

# Table of Contents

0. [TL;DR](#tldr)
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Uses](#uses)
4. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
5. [Training Details](#training-details)
6. [Evaluation](#evaluation)
7. [Environmental Impact](#environmental-impact)
8. [Citation](#citation)
9. [Model Card Authors](#model-card-authors)

# TL;DR

Switch Transformers is a Mixture of Experts (MoE) model trained on a Masked Language Modeling (MLM) task. The model architecture is similar to the classic T5, but with the feed-forward layers replaced by sparse MLP layers containing "expert" MLPs. According to the [original paper](https://arxiv.org/pdf/2101.03961.pdf), the model enables faster training (better scaling properties) while outperforming T5 on fine-tuned tasks.
As mentioned in the first few lines of the abstract:
> we advance the current scale of language models by pre-training up to trillion parameter models on the “Colossal Clean Crawled Corpus”, and achieve a 4x speedup over the T5-XXL model.

**Disclaimer**: Content from **this** model card has been written by the Hugging Face team, and parts of it were copy-pasted from the [original paper](https://arxiv.org/pdf/2101.03961.pdf).
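
To make the routing idea above concrete, here is a minimal PyTorch sketch of Switch-style top-1 expert routing. This is an illustrative assumption, not the actual `transformers` implementation: the `SwitchFFN` class, its dimensions, and the omission of the capacity factor and load-balancing loss are all simplifications.

```python
# Minimal sketch of Switch-style top-1 routing (illustrative only; the real
# implementation adds a capacity factor and an auxiliary load-balancing loss).
import torch
import torch.nn as nn


class SwitchFFN(nn.Module):
    def __init__(self, d_model: int, d_ff: int, num_experts: int):
        super().__init__()
        # The router produces one logit per expert for every token.
        self.router = nn.Linear(d_model, num_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        tokens = x.reshape(-1, x.size(-1))           # (num_tokens, d_model)
        probs = self.router(tokens).softmax(dim=-1)  # (num_tokens, num_experts)
        gate, expert_idx = probs.max(dim=-1)         # top-1: one expert per token
        out = torch.zeros_like(tokens)
        for i, expert in enumerate(self.experts):
            mask = expert_idx == i
            if mask.any():
                # Each token is processed by exactly one expert and scaled
                # by its router probability, as in the Switch formulation.
                out[mask] = gate[mask, None] * expert(tokens[mask])
        return out.reshape_as(x)


layer = SwitchFFN(d_model=64, d_ff=256, num_experts=4)
print(layer(torch.randn(2, 10, 64)).shape)  # torch.Size([2, 10, 64])
```

The key property is that each token is processed by exactly one expert, so the compute per token stays roughly constant as the number of experts (and hence the parameter count) grows.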

# Model Details

## Model Description

- **Model type:** Language model
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Related Models:** [All Switch Transformers Checkpoints](https://huggingface.co/models?search=switch)
- **Original Checkpoints:** [All Original Switch Transformers Checkpoints](https://github.com/google-research/t5x/blob/main/docs/models.md#mixture-of-experts-moe-checkpoints)
- **Resources for more information:**
  - [Research paper](https://arxiv.org/pdf/2101.03961.pdf)
  - [GitHub Repo](https://github.com/google-research/t5x)
  - [Hugging Face Switch Transformers Docs (Similar to T5)](https://huggingface.co/docs/transformers/model_doc/switch_transformers)

# Usage

Note that these checkpoints have been trained on a Masked Language Modeling (MLM) task. Therefore the checkpoints are not "ready-to-use" for downstream tasks. You may want to check `FLAN-T5` for running fine-tuned weights, or fine-tune your own MoE following [this notebook](https://colab.research.google.com/drive/1aGGVHZmtKmcNBbAwa9hbu58DDpIuB5O4?usp=sharing). A minimal fine-tuning sketch is also included after the inference examples below.

Find below some example scripts on how to use the model in `transformers`:

## Using the PyTorch model

### Running the model on a CPU

<details>
<summary> Click to expand </summary>

```python
from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/switch-large-128")
model = SwitchTransformersForConditionalGeneration.from_pretrained("google/switch-large-128")

input_text = "A <extra_id_0> walks into a bar and orders a <extra_id_1> with <extra_id_2> pinch of <extra_id_3>."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids

outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
>>> <pad> <extra_id_0> man<extra_id_1> beer<extra_id_2> a<extra_id_3> salt<extra_id_4>.</s>
```

</details>

### Running the model on a GPU

<details>
<summary> Click to expand </summary>

```python
# pip install accelerate
from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/switch-large-128")
model = SwitchTransformersForConditionalGeneration.from_pretrained("google/switch-large-128", device_map="auto")

input_text = "A <extra_id_0> walks into a bar and orders a <extra_id_1> with <extra_id_2> pinch of <extra_id_3>."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(0)

outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
>>> <pad> <extra_id_0> man<extra_id_1> beer<extra_id_2> a<extra_id_3> salt<extra_id_4>.</s>
```

</details>

### Running the model on a GPU using different precisions

#### FP16

<details>
<summary> Click to expand </summary>

```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/switch-large-128")
model = SwitchTransformersForConditionalGeneration.from_pretrained("google/switch-large-128", device_map="auto", torch_dtype=torch.float16)

input_text = "A <extra_id_0> walks into a bar and orders a <extra_id_1> with <extra_id_2> pinch of <extra_id_3>."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(0)

outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
>>> <pad> <extra_id_0> man<extra_id_1> beer<extra_id_2> a<extra_id_3> salt<extra_id_4>.</s>
```

</details>

#### INT8

<details>
<summary> Click to expand </summary>

```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/switch-large-128")
model = SwitchTransformersForConditionalGeneration.from_pretrained("google/switch-large-128", device_map="auto", load_in_8bit=True)

input_text = "A <extra_id_0> walks into a bar and orders a <extra_id_1> with <extra_id_2> pinch of <extra_id_3>."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(0)

outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
>>> <pad> <extra_id_0> man<extra_id_1> beer<extra_id_2> a<extra_id_3> salt<extra_id_4>.</s>
```

</details>
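
### Fine-tuning sketch

As noted above, these weights are only MLM-pretrained and need fine-tuning for downstream use. Below is a minimal, hypothetical sketch of a single fine-tuning step with the standard `transformers` seq2seq API; the hard-coded summarization pair, learning rate, and lack of batching are illustrative assumptions, not the recipe from the linked notebook.

```python
# Minimal fine-tuning sketch: a single optimizer step on one hard-coded
# summarization pair. A real run would iterate over a dataset (e.g. xsum),
# batch inputs, and tune hyperparameters -- this only shows the API shape.
import torch
from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/switch-large-128")
model = SwitchTransformersForConditionalGeneration.from_pretrained("google/switch-large-128")

inputs = tokenizer(
    "summarize: The quick brown fox jumped over the lazy dog near the river.",
    return_tensors="pt",
)
labels = tokenizer("A fox jumped over a dog.", return_tensors="pt").input_ids

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss = model(**inputs, labels=labels).loss  # training loss returned by the model
loss.backward()
optimizer.step()
print(float(loss))
```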

# Uses

## Direct Use and Downstream Use

The authors write in [the original paper's model card](https://arxiv.org/pdf/2210.11416.pdf) that:

> The primary use is research on language models, including: research on zero-shot NLP tasks and in-context few-shot learning NLP tasks, such as reasoning, and question answering; advancing fairness and safety research, and understanding limitations of current large language models

See the [research paper](https://arxiv.org/pdf/2210.11416.pdf) for further details.

## Out-of-Scope Use

More information needed.

# Bias, Risks, and Limitations

More information needed.

## Ethical considerations and risks

More information needed.

## Known Limitations

More information needed.

## Sensitive Use

> SwitchTransformers should not be applied for any unacceptable use cases, e.g., generation of abusive speech.

# Training Details

## Training Data

The model was trained on a Masked Language Modeling task, on the Colossal Clean Crawled Corpus (C4) dataset, following the same procedure as `T5`.
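
For illustration, here is a sketch of what a single T5-style span-corruption training pair looks like; the sentence and the chosen noise spans mirror the example in the T5 paper, and the span selection here is for demonstration rather than a sample from the actual preprocessing pipeline.

```python
# Illustrative T5-style span corruption: sampled spans in the input are
# replaced by sentinel tokens, and the target reconstructs only those spans.
original = "Thank you for inviting me to your party last week ."

# Suppose the sampled noise spans are "for inviting" and "last" (assumption):
corrupted_input = "Thank you <extra_id_0> me to your party <extra_id_1> week ."
target = "<extra_id_0> for inviting <extra_id_1> last <extra_id_2>"

print(corrupted_input, "->", target)
```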

## Training Procedure

According to the model card from the [original paper](https://arxiv.org/pdf/2210.11416.pdf):

> These models are based on pretrained SwitchTransformers and are not fine-tuned. It is normal if they do not perform well on zero-shot tasks.

The model has been trained on TPU v3 or TPU v4 pods, using the [`t5x`](https://github.com/google-research/t5x) codebase together with [`jax`](https://github.com/google/jax).

# Evaluation

## Testing Data, Factors & Metrics

The authors evaluated the model on various tasks and compared the results against T5. See the table below for some quantitative evaluation:
![image.png](https://s3.amazonaws.com/moonup/production/uploads/1666967660372-62441d1d9fdefb55a0b7d12c.png)
For full details, please check the [research paper](https://arxiv.org/pdf/2101.03961.pdf).

## Results

For full results for Switch Transformers, see the [research paper](https://arxiv.org/pdf/2101.03961.pdf), Table 5.

# Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** Google Cloud TPU Pods - TPU v3 or TPU v4 | Number of chips ≥ 4.
- **Hours used:** More information needed
- **Cloud Provider:** GCP
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed

# Citation

**BibTeX:**

```bibtex
@misc{https://doi.org/10.48550/arxiv.2101.03961,
  doi       = {10.48550/ARXIV.2101.03961},
  url       = {https://arxiv.org/abs/2101.03961},
  author    = {Fedus, William and Zoph, Barret and Shazeer, Noam},
  keywords  = {Machine Learning (cs.LG), Artificial Intelligence (cs.AI), FOS: Computer and information sciences},
  title     = {Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity},
  publisher = {arXiv},
  year      = {2021},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```