sgugger and Marissa committed on
Commit 41f839d
1 Parent(s): 952e4a1

Update model card (#1)

- Update model card (a3bb583a6e7543f746c1c018972f0b4432ef4f3f)

Co-authored-by: Marissa Gerchick <Marissa@users.noreply.huggingface.co>

Files changed (1):
  1. README.md +118 -20
README.md CHANGED
@@ -14,22 +14,70 @@ license: apache-2.0
  inference: false
  ---
 
- ## Disclaimer
-
- **Before `transformers` v3.5.0**, due to its immense size, `t5-11b` required some special treatment.
- If you're using transformers `<= v3.4.0`, `t5-11b` should be loaded with the flag `use_cdn` set to `False` as follows:
-
- ```python
- t5 = transformers.T5ForConditionalGeneration.from_pretrained('t5-11b', use_cdn=False)
- ```
-
- Secondly, a single GPU will most likely not have enough memory even to load the model, as the weights alone amount to over 40 GB.
- - Model parallelism has to be used here to overcome this problem, as explained in this [PR](https://github.com/huggingface/transformers/pull/3578).
- - DeepSpeed's ZeRO-Offload is another approach, as explained in this [post](https://github.com/huggingface/transformers/issues/9996).
 
- [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)
 
- ## PreTraining
 
  The model was pre-trained on a **multi-task mixture of unsupervised (1.) and supervised tasks (2.)**.
  The following datasets were used for (1.) and (2.):
@@ -64,22 +112,72 @@ Thereby, the following datasets were being used for (1.) and (2.):
  - ReCoRD [Zhang et al., 2018](https://arxiv.org/abs/1810.12885)
  - BoolQ [Clark et al., 2019](https://arxiv.org/abs/1905.10044)
 
- ## All T5 checkpoints
-
- Other Community Checkpoints: [here](https://huggingface.co/models?search=t5)
-
- ## Paper
-
- For more information, please take a look at the original paper.
-
- Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)
-
- Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*
-
- **Abstract**
-
- Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.
-
- ![model image](https://camo.githubusercontent.com/623b4dea0b653f2ad3f36c71ebfe749a677ac0a1/68747470733a2f2f6d69726f2e6d656469756d2e636f6d2f6d61782f343030362f312a44304a31674e51663876727255704b657944387750412e706e67)
  inference: false
  ---
 
+ # Model Card for T5 11B
+
+ ![model image](https://camo.githubusercontent.com/623b4dea0b653f2ad3f36c71ebfe749a677ac0a1/68747470733a2f2f6d69726f2e6d656469756d2e636f6d2f6d61782f343030362f312a44304a31674e51663876727255704b657944387750412e706e67)
+
+ # Table of Contents
+
+ 1. [Model Details](#model-details)
+ 2. [Uses](#uses)
+ 3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
+ 4. [Training Details](#training-details)
+ 5. [Evaluation](#evaluation)
+ 6. [Environmental Impact](#environmental-impact)
+ 7. [Citation](#citation)
+ 8. [Model Card Authors](#model-card-authors)
+ 9. [How to Get Started with the Model](#how-to-get-started-with-the-model)
+
+ # Model Details
+
+ ## Model Description
+
+ The developers of the Text-To-Text Transfer Transformer (T5) [write](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html):
+
+ > With T5, we propose reframing all NLP tasks into a unified text-to-text format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input. Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task.
+
+ T5-11B is the checkpoint with 11 billion parameters.
+
+ - **Developed by:** Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. See the [associated paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) and [GitHub repo](https://github.com/google-research/text-to-text-transfer-transformer#released-model-checkpoints)
+ - **Model type:** Language model
+ - **Language(s) (NLP):** English, French, Romanian, German
+ - **License:** Apache 2.0
+ - **Related Models:** [All T5 Checkpoints](https://huggingface.co/models?search=t5)
+ - **Resources for more information:**
+   - [Research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf)
+   - [Google's T5 Blog Post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)
+   - [GitHub Repo](https://github.com/google-research/text-to-text-transfer-transformer)
+   - [Hugging Face T5 Docs](https://huggingface.co/docs/transformers/model_doc/t5)
+
+ # Uses
+
+ ## Direct Use and Downstream Use
+
+ The developers write in a [blog post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html):
+
+ > Our text-to-text framework allows us to use the same model, loss function, and hyperparameters on any NLP task, including machine translation, document summarization, question answering, and classification tasks (e.g., sentiment analysis). We can even apply T5 to regression tasks by training it to predict the string representation of a number instead of the number itself.
+
+ See the [blog post](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for further details.
+
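In the text-to-text framework quoted above, every task is expressed as a plain input string carrying a task prefix, and the answer comes back as a string. A minimal sketch of that input format (the `build_t5_input` helper is hypothetical, not part of `transformers`; the prefixes are the ones used in the T5 paper):

```python
# Illustrative only: T5 consumes plain strings with a task prefix.
# build_t5_input is a hypothetical helper, not a transformers API.

def build_t5_input(task_prefix: str, text: str) -> str:
    """Prepend a T5-style task prefix to the raw input text."""
    return f"{task_prefix}: {text}"

# Example prefixes from the T5 paper:
translation = build_t5_input("translate English to German", "That is good.")
acceptability = build_t5_input("cola sentence", "The course is jumping well.")
summary = build_t5_input("summarize", "state authorities dispatched emergency crews tuesday ...")

print(translation)  # translate English to German: That is good.
```

The same model and loss then handle all three strings; only the prefix tells the model which task it is solving.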
+ ## Out-of-Scope Use
+
+ More information needed.
+
+ # Bias, Risks, and Limitations
+
+ More information needed.
+
+ ## Recommendations
+
+ More information needed.
+
+ # Training Details
+
+ ## Training Data
+
+ The model is pre-trained on the [Colossal Clean Crawled Corpus (C4)](https://www.tensorflow.org/datasets/catalog/c4), which was developed and released in the context of the same [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) as T5.
+
  The model was pre-trained on a **multi-task mixture of unsupervised (1.) and supervised tasks (2.)**.
  The following datasets were used for (1.) and (2.):
  - ReCoRD [Zhang et al., 2018](https://arxiv.org/abs/1810.12885)
  - BoolQ [Clark et al., 2019](https://arxiv.org/abs/1905.10044)
 
+ ## Training Procedure
+
+ In the [abstract](https://jmlr.org/papers/volume21/20-074/20-074.pdf), the model developers write:
+
+ > In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks.
+
+ The T5 framework introduced in the paper brings these approaches together into a single training procedure. See the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for further details.
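The unsupervised part of that procedure is a span-corruption denoising objective: contiguous spans of the input are replaced by sentinel tokens (`<extra_id_0>`, `<extra_id_1>`, …), and the target is the sequence of sentinels followed by the dropped-out spans. A toy sketch of the idea, working on whole words for readability (real T5 corrupts SentencePiece tokens, and `corrupt_spans` is an illustrative helper, not library code):

```python
# Toy sketch of T5's span-corruption pretraining objective.
# Real T5 operates on SentencePiece tokens; whole words are used here for clarity.

def corrupt_spans(words, spans):
    """Replace the given (start, end) word spans with sentinel tokens.

    Returns (corrupted_input, target) as strings.
    """
    inp, tgt = [], []
    pos = 0
    for i, (start, end) in enumerate(spans):
        sentinel = f"<extra_id_{i}>"
        inp.extend(words[pos:start])   # keep uncorrupted words
        inp.append(sentinel)           # mask the span in the input
        tgt.append(sentinel)           # announce the span in the target...
        tgt.extend(words[start:end])   # ...followed by the dropped words
        pos = end
    inp.extend(words[pos:])
    tgt.append(f"<extra_id_{len(spans)}>")  # final sentinel terminates the target
    return " ".join(inp), " ".join(tgt)

words = "Thank you for inviting me to your party last week".split()
inp, tgt = corrupt_spans(words, [(2, 4), (7, 8)])
print(inp)  # Thank you <extra_id_0> me to your <extra_id_1> last week
print(tgt)  # <extra_id_0> for inviting <extra_id_1> party <extra_id_2>
```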
 
+ # Evaluation
+
+ ## Testing Data, Factors & Metrics
+
+ The developers evaluated the model on 24 tasks; see the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf) for full details.
+
+ ## Results
+
+ For full results for T5-11B, see the [research paper](https://jmlr.org/papers/volume21/20-074/20-074.pdf), Table 14.
+
+ # Environmental Impact
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** Google Cloud TPU Pods
+ - **Hours used:** More information needed
+ - **Cloud Provider:** GCP
+ - **Compute Region:** More information needed
+ - **Carbon Emitted:** More information needed
+
+ # Citation
+
+ **BibTeX:**
+
+ ```bibtex
+ @article{2020t5,
+   author  = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
+   title   = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
+   journal = {Journal of Machine Learning Research},
+   year    = {2020},
+   volume  = {21},
+   number  = {140},
+   pages   = {1-67},
+   url     = {http://jmlr.org/papers/v21/20-074.html}
+ }
+ ```
+
+ **APA:**
+
+ - Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., ... & Liu, P. J. (2020). Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140), 1-67.
+
+ # Model Card Authors
+
+ This model card was written by the team at Hugging Face.
+
+ # How to Get Started with the Model
+
+ ## Disclaimer
+
+ **Before `transformers` v3.5.0**, due to its immense size, `t5-11b` required some special treatment.
+ If you're using transformers `<= v3.4.0`, `t5-11b` should be loaded with the flag `use_cdn` set to `False` as follows:
+
+ ```python
+ t5 = transformers.T5ForConditionalGeneration.from_pretrained('t5-11b', use_cdn=False)
+ ```
+
+ Secondly, a single GPU will most likely not have enough memory even to load the model, as the weights alone amount to over 40 GB.
+ - Model parallelism has to be used here to overcome this problem, as explained in this [PR](https://github.com/huggingface/transformers/pull/3578).
+ - DeepSpeed's ZeRO-Offload is another approach, as explained in this [post](https://github.com/huggingface/transformers/issues/9996).
+
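The ZeRO-Offload route listed above is driven by a DeepSpeed JSON configuration that moves optimizer state off the GPU into CPU RAM. A minimal sketch of such a config, written here as a Python dict; the field names follow the DeepSpeed config schema as I understand it, so verify them against the current DeepSpeed configuration documentation before use:

```python
import json

# Illustrative sketch of a DeepSpeed ZeRO-Offload configuration.
# Check field names against the current DeepSpeed docs before relying on this.
ds_config = {
    "train_micro_batch_size_per_gpu": 1,
    "zero_optimization": {
        "stage": 2,                              # ZeRO stage 2 partitioning
        "offload_optimizer": {"device": "cpu"},  # optimizer state -> CPU RAM
    },
    "fp16": {"enabled": True},                   # halve the weight footprint
}

print(json.dumps(ds_config, indent=2))
```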
+ See the [Hugging Face T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Model) docs and a [Colab Notebook](https://colab.research.google.com/github/google-research/text-to-text-transfer-transformer/blob/main/notebooks/t5-trivia.ipynb) created by the model developers for more context.
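The "over 40 GB" figure in the disclaimer follows directly from the parameter count. A back-of-the-envelope check, assuming 32-bit floats (4 bytes per parameter):

```python
# Rough memory footprint of the t5-11b weights alone (fp32).
params = 11_000_000_000   # 11 billion parameters
bytes_per_param = 4       # float32
gib = params * bytes_per_param / 1024**3
print(f"{gib:.1f} GiB")   # ~41.0 GiB, before activations, gradients, or optimizer state
```

Gradients and optimizer state multiply this further during fine-tuning, which is why model parallelism or offloading is needed even on large GPUs.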
183