---
language:
- en

tags:
- text2text-generation

widget:
- text: "The <extra_id_0> walks in <extra_id_1> park"
  example_title: "Masked Language Modeling"

datasets:
- c4

license: apache-2.0
---

# Model Card for Switch Transformers XXL - 128 experts

![model image](https://s3.amazonaws.com/moonup/production/uploads/1666966931908-62441d1d9fdefb55a0b7d12c.png)

# Table of Contents

0. [TL;DR](#tldr)
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Uses](#uses)
4. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
5. [Training Details](#training-details)
6. [Evaluation](#evaluation)
7. [Environmental Impact](#environmental-impact)
8. [Citation](#citation)
9. [Model Card Authors](#model-card-authors)

# TL;DR

Switch Transformers is a Mixture of Experts (MoE) model trained on a masked language modeling (MLM) task. The architecture is similar to the classic T5, but its feed-forward layers are replaced by sparse MLP layers containing "expert" MLPs. According to the [original paper](https://arxiv.org/pdf/2101.03961.pdf), the model enables faster training (better scaling properties) while outperforming T5 on fine-tuned tasks.
As mentioned in the first few lines of the abstract:
> we advance the current scale of language models by pre-training up to trillion parameter models on the “Colossal Clean Crawled Corpus”, and achieve a 4x speedup over the T5-XXL model.
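
The routing idea can be sketched in plain Python: a learned router scores every expert for each token, and the token is dispatched to its single highest-scoring expert (top-1, or "switch", routing), so per-token compute stays constant as the number of experts grows. This is an illustrative sketch only, not the `transformers` implementation; the `switch_route` helper and the toy logits below are hypothetical.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def switch_route(router_logits):
    """Top-1 (switch) routing for one token.

    router_logits: one score per expert. Returns (expert_index,
    gate_value); the gate value scales the chosen expert's output.
    """
    probs = softmax(router_logits)
    expert = max(range(len(probs)), key=lambda i: probs[i])
    return expert, probs[expert]

# Two tokens, three experts: each token runs through exactly one expert MLP.
tokens = [[0.1, 2.0, -1.0], [1.5, 0.2, 0.3]]  # hypothetical router logits
routes = [switch_route(t) for t in tokens]
```

In the full model, an auxiliary load-balancing loss additionally encourages tokens to spread evenly across experts.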

**Disclaimer**: Content from **this** model card has been written by the Hugging Face team, and parts of it were copy-pasted from the [original paper](https://arxiv.org/pdf/2101.03961.pdf).

# Model Details

## Model Description

- **Model type:** Language model
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Related Models:** [All Switch Transformers Checkpoints](https://huggingface.co/models?search=switch)
- **Original Checkpoints:** [All Original Switch Transformers Checkpoints](https://github.com/google-research/t5x/blob/main/docs/models.md#mixture-of-experts-moe-checkpoints)
- **Resources for more information:**
  - [Research paper](https://arxiv.org/pdf/2101.03961.pdf)
  - [GitHub Repo](https://github.com/google-research/t5x)
  - [Hugging Face Switch Transformers Docs (similar to T5)](https://huggingface.co/docs/transformers/model_doc/switch_transformers)

# Usage

Note that these checkpoints have been trained on a masked language modeling (MLM) task, so they are not "ready to use" for downstream tasks. You may want to check `FLAN-T5` for running fine-tuned weights, or fine-tune your own MoE following [this notebook](https://colab.research.google.com/drive/1aGGVHZmtKmcNBbAwa9hbu58DDpIuB5O4?usp=sharing).
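
The MLM objective uses T5-style span corruption: contiguous spans of the input are replaced by sentinel tokens (`<extra_id_0>`, `<extra_id_1>`, ...) and the target reconstructs the masked spans. A minimal sketch of how such an input/target pair is built; the `span_corrupt` helper below is illustrative, not part of `transformers`:

```python
def span_corrupt(words, spans):
    """Build a T5-style (input, target) pair.

    words: list of tokens. spans: sorted, non-overlapping (start, end)
    index ranges to mask. Each masked span becomes a sentinel in the
    input; the target lists each sentinel followed by the hidden words.
    """
    inp, tgt, cursor = [], [], 0
    for n, (start, end) in enumerate(spans):
        sentinel = f"<extra_id_{n}>"
        inp += words[cursor:start] + [sentinel]
        tgt += [sentinel] + words[start:end]
        cursor = end
    inp += words[cursor:]
    return " ".join(inp), " ".join(tgt)

# Masking "cute dog" and "the green" reproduces the widget example above.
words = "The cute dog walks in the green park".split()
pair = span_corrupt(words, [(1, 3), (5, 7)])
```

The model is trained to emit the target sequence given the corrupted input, which is why raw generations start with sentinel tokens.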

Find below some example scripts on how to use the model in `transformers`:

## Using the PyTorch model

### Running the model on a CPU

<details>
<summary> Click to expand </summary>

```python
from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/switch-xxl-128")
model = SwitchTransformersForConditionalGeneration.from_pretrained("google/switch-xxl-128")

input_text = "A <extra_id_0> walks into a bar and orders a <extra_id_1> with <extra_id_2> pinch of <extra_id_3>."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids

outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
>>> <pad> <extra_id_0> man<extra_id_1> beer<extra_id_2> a<extra_id_3> salt<extra_id_4>.</s>
```

</details>

### Running the model on a GPU

<details>
<summary> Click to expand </summary>

```python
# pip install accelerate
from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/switch-xxl-128")
model = SwitchTransformersForConditionalGeneration.from_pretrained("google/switch-xxl-128", device_map="auto")

input_text = "A <extra_id_0> walks into a bar and orders a <extra_id_1> with <extra_id_2> pinch of <extra_id_3>."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(0)

outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
>>> <pad> <extra_id_0> man<extra_id_1> beer<extra_id_2> a<extra_id_3> salt<extra_id_4>.</s>
```

</details>

### Running the model on a GPU using different precisions

#### FP16

<details>
<summary> Click to expand </summary>

```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/switch-xxl-128")
model = SwitchTransformersForConditionalGeneration.from_pretrained("google/switch-xxl-128", device_map="auto", torch_dtype=torch.float16)

input_text = "A <extra_id_0> walks into a bar and orders a <extra_id_1> with <extra_id_2> pinch of <extra_id_3>."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(0)

outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
>>> <pad> <extra_id_0> man<extra_id_1> beer<extra_id_2> a<extra_id_3> salt<extra_id_4>.</s>
```

</details>

#### INT8

<details>
<summary> Click to expand </summary>

```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/switch-xxl-128")
model = SwitchTransformersForConditionalGeneration.from_pretrained("google/switch-xxl-128", device_map="auto", load_in_8bit=True)

input_text = "A <extra_id_0> walks into a bar and orders a <extra_id_1> with <extra_id_2> pinch of <extra_id_3>."
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(0)

outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
>>> <pad> <extra_id_0> man<extra_id_1> beer<extra_id_2> a<extra_id_3> salt<extra_id_4>.</s>
```

</details>

# Uses

## Direct Use and Downstream Use

The authors write in [the original paper's model card](https://arxiv.org/pdf/2210.11416.pdf) that:

> The primary use is research on language models, including: research on zero-shot NLP tasks and in-context few-shot learning NLP tasks, such as reasoning, and question answering; advancing fairness and safety research, and understanding limitations of current large language models

See the [research paper](https://arxiv.org/pdf/2210.11416.pdf) for further details.

## Out-of-Scope Use

More information needed.

# Bias, Risks, and Limitations

More information needed.

## Ethical considerations and risks

More information needed.

## Known Limitations

More information needed.

## Sensitive Use

> SwitchTransformers should not be applied for any unacceptable use cases, e.g., generation of abusive speech.

# Training Details

## Training Data

The model was trained on a masked language modeling task on the Colossal Clean Crawled Corpus (C4) dataset, following the same procedure as `T5`.

## Training Procedure

According to the model card from the [original paper](https://arxiv.org/pdf/2210.11416.pdf):

> These models are based on pretrained SwitchTransformers and are not fine-tuned. It is normal if they do not perform well on zero-shot tasks.

The model has been trained on TPU v3 or TPU v4 pods, using the [`t5x`](https://github.com/google-research/t5x) codebase together with [`jax`](https://github.com/google/jax).

# Evaluation

## Testing Data, Factors & Metrics

The authors evaluated the model on various tasks and compared the results against T5. See the table below for some quantitative evaluation:

![image.png](https://s3.amazonaws.com/moonup/production/uploads/1666967660372-62441d1d9fdefb55a0b7d12c.png)

For full details, please check the [research paper](https://arxiv.org/pdf/2101.03961.pdf).

## Results

For full results for Switch Transformers, see the [research paper](https://arxiv.org/pdf/2101.03961.pdf), Table 5.

# Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** Google Cloud TPU Pods - TPU v3 or TPU v4 | Number of chips ≥ 4
- **Hours used:** More information needed
- **Cloud Provider:** GCP
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
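
The calculator's estimate boils down to energy drawn by the accelerators, scaled by datacenter overhead, times the grid's carbon intensity. A back-of-the-envelope sketch, where every number below (hours, chip count, chip power, PUE, grid intensity) is an assumed placeholder, not a reported figure:

```python
def estimate_co2_kg(hours, chip_count, chip_watts, pue, grid_kgco2_per_kwh):
    """Rough CO2 estimate: kWh drawn by the chips, scaled by the
    datacenter PUE, multiplied by the grid's carbon intensity."""
    kwh = hours * chip_count * chip_watts / 1000.0 * pue
    return kwh * grid_kgco2_per_kwh

# All inputs are illustrative placeholders, not measured values.
co2 = estimate_co2_kg(hours=100, chip_count=4, chip_watts=200,
                      pue=1.1, grid_kgco2_per_kwh=0.4)
```

Filling in the missing fields above (hours used, compute region) would let this formula produce an actual estimate.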

# Citation

**BibTeX:**

```bibtex
@misc{https://doi.org/10.48550/arxiv.2101.03961,
  doi = {10.48550/ARXIV.2101.03961},
  url = {https://arxiv.org/abs/2101.03961},
  author = {Fedus, William and Zoph, Barret and Shazeer, Noam},
  keywords = {Machine Learning (cs.LG), Artificial Intelligence (cs.AI), FOS: Computer and information sciences},
  title = {Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity},
  publisher = {arXiv},
  year = {2021},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```