patrickvonplaten committed
Commit a1ea5cf
1 Parent(s): b8fabb3

Update README.md

Files changed (1)
  1. README.md +58 -3
README.md CHANGED
@@ -27,17 +27,72 @@ Secondly, a single GPU will most likely not have enough memory to even load the
  - Model parallelism has to be used here to overcome this problem as is explained in this [PR](https://github.com/huggingface/transformers/pull/3578).
  - DeepSpeed's ZeRO-Offload is another approach as explained in this [post](https://github.com/huggingface/transformers/issues/9996).
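As a rough illustration of the model-parallel approach referenced above, here is a minimal, hedged sketch. It uses the (since-deprecated) `parallelize()` helper that `transformers` later shipped for T5; the two-GPU setup, the `t5-3b` checkpoint, and the layer split are illustrative assumptions, not details taken from the PR.

```python
# Sketch: spreading a large T5 checkpoint across two GPUs.
# Assumes 2 CUDA devices; "t5-3b" (24 transformer blocks) is illustrative.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-3b")
model = T5ForConditionalGeneration.from_pretrained("t5-3b")

# Put the first 12 blocks on GPU 0 and the remaining 12 on GPU 1.
device_map = {
    0: list(range(0, 12)),
    1: list(range(12, 24)),
}
model.parallelize(device_map)

# Inputs start on the encoder's first device.
inputs = tokenizer("translate English to German: That is good.",
                   return_tensors="pt").to("cuda:0")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

model.deparallelize()  # move the model back to CPU when finished
```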
- ## [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)
- Pretraining Dataset: [C4](https://huggingface.co/datasets/c4)
+ ---
+ language:
+ - en
+ - fr
+ - ro
+ - de
+ datasets:
+ - c4
+ tags:
+ - summarization
+ - translation
+ license: apache-2.0
+ ---
+
+ [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)
+
+ ## Pre-Training
+
+ The model was pre-trained on a **multi-task mixture of unsupervised (1.) and supervised tasks (2.)**.
+ The following datasets were used for (1.) and (2.); a short usage sketch follows the list below.
+
+ 1. **Datasets used for the unsupervised denoising objective**:
+
+    - [C4](https://huggingface.co/datasets/c4)
+    - [Wiki-DPR](https://huggingface.co/datasets/wiki_dpr)
+
+ 2. **Datasets used for the supervised text-to-text language modeling objective**:
+
+    - Sentence acceptability judgment
+      - CoLA [Warstadt et al., 2018](https://arxiv.org/abs/1805.12471)
+    - Sentiment analysis
+      - SST-2 [Socher et al., 2013](https://nlp.stanford.edu/~socherr/EMNLP2013_RNTN.pdf)
+    - Paraphrasing/sentence similarity
+      - MRPC [Dolan and Brockett, 2005](https://aclanthology.org/I05-5002)
+      - STS-B [Cer et al., 2017](https://arxiv.org/abs/1708.00055)
+      - QQP [Iyer et al., 2017](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs)
+    - Natural language inference
+      - MNLI [Williams et al., 2017](https://arxiv.org/abs/1704.05426)
+      - QNLI [Rajpurkar et al., 2016](https://arxiv.org/abs/1606.05250)
+      - RTE [Dagan et al., 2005](https://link.springer.com/chapter/10.1007/11736790_9)
+      - CB [De Marneffe et al., 2019](https://semanticsarchive.net/Archive/Tg3ZGI2M/Marneffe.pdf)
+    - Sentence completion
+      - COPA [Roemmele et al., 2011](https://www.researchgate.net/publication/221251392_Choice_of_Plausible_Alternatives_An_Evaluation_of_Commonsense_Causal_Reasoning)
+    - Word sense disambiguation
+      - WIC [Pilehvar and Camacho-Collados, 2018](https://arxiv.org/abs/1808.09121)
+    - Question answering
+      - MultiRC [Khashabi et al., 2018](https://aclanthology.org/N18-1023)
+      - ReCoRD [Zhang et al., 2018](https://arxiv.org/abs/1810.12885)
+      - BoolQ [Clark et al., 2019](https://arxiv.org/abs/1905.10044)
+
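As noted above, here is a minimal, hedged sketch of the text-to-text format behind both objectives: every task becomes "text in, text out" via a task prefix. The `t5-base` checkpoint and the example sentences are illustrative; the prefixes follow the paper's conventions (e.g. `cola sentence:` for CoLA).

```python
# Sketch: T5 casts every task above as text-to-text via task prefixes.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

examples = [
    # Supervised tasks use a prefix naming the dataset, e.g. CoLA acceptability ...
    "cola sentence: The course is jumping well.",
    # ... or SST-2 sentiment classification.
    "sst2 sentence: it confirms fincher's status as a film maker",
    # The unsupervised denoising objective masks spans with sentinel tokens
    # (<extra_id_0>, <extra_id_1>, ...) that the model learns to reconstruct.
    "The <extra_id_0> walks in <extra_id_1> park",
]
for text in examples:
    input_ids = tokenizer(text, return_tensors="pt").input_ids
    output_ids = model.generate(input_ids)
    # Keep special tokens visible so the reconstructed sentinels can be seen.
    print(tokenizer.decode(output_ids[0], skip_special_tokens=False))
```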
+ ## All T5 checkpoints

  Other Community Checkpoints: [here](https://huggingface.co/models?search=t5)
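The same search can also be reproduced programmatically. Below is a hedged sketch using today's `huggingface_hub` client, which postdates this card and is an assumption rather than part of it.

```python
# Sketch: list T5 checkpoints on the Hub, mirroring the search link above.
from huggingface_hub import HfApi

api = HfApi()
for checkpoint in api.list_models(search="t5", limit=10):
    print(checkpoint.id)  # e.g. "t5-base", "t5-small", ...
```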

+ ## Paper
+
+ For more information, please take a look at the original paper.
+
  Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)

  Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*

- ## Abstract
+
+ **Abstract**

  Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.