---
language:
- en
datasets:
- c4
tags:
- deep-narrow
license: apache-2.0
---

T5-Efficient-XL is a checkpoint of the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
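
A minimal sketch of loading it with the `transformers` library is shown below (the repository id `google/t5-efficient-xl` is an assumption here, and a pretrained-only checkpoint like this one typically still has to be fine-tuned before it is useful on a downstream task):

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

# Assumed repository id for this checkpoint; adjust if it lives elsewhere on the Hub.
model_id = "google/t5-efficient-xl"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

print(model.num_parameters())  # sanity-check the parameter count
```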

The checkpoint was released with the paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.

In a nutshell, the paper indicates that a **DeepNarrow** model architecture is favorable for **downstream** performance compared to other model architectures
of similar parameter count.

To quote the paper:

> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased
> before considering any other forms of uniform scaling across other dimensions. This is largely due to
> how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a
> tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise,
> a tall base model might also generally [be] more efficient compared to a large model. We generally find
> that, regardless of size, even if absolute performance might increase as we continue to stack layers,
> the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36
> layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e.,
> params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params,
> FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to
> consider.

To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially.
A sequence of word embeddings is therefore processed sequentially by each transformer block.

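As a toy illustration of this notion of depth (not the actual T5 code; the dimensions are made up for the example), a stack of generic transformer blocks processes the embedding sequence one block after another:

```python
import torch
import torch.nn as nn

# "Depth" is simply how many transformer blocks are stacked; the embedding
# sequence is passed through them one after another.
d_model, n_heads, depth = 512, 8, 6  # e.g. a Small-like encoder stacks 6 blocks

blocks = nn.ModuleList(
    nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                               dim_feedforward=2048, batch_first=True)
    for _ in range(depth)
)

x = torch.randn(1, 10, d_model)  # a sequence of 10 word embeddings
for block in blocks:             # processed sequentially, block by block
    x = block(x)
```
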
## Details of the model architecture

The *conventional* T5 architectures are:

| Model | NL | dff | dmodel | dkv | NH | #Params|
| ----| ---- | ---- | ---- | ---- | ---- | ----|
| Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M|
| Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M|
| Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
| Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
| Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
| XL | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
| XXL | 24/24 | 65536 | 1024 | 128 | 128 | 11B|

Here, *NL* is the number of transformer blocks in the encoder/decoder, *dff* the inner feed-forward dimension, *dmodel* the embedding/hidden dimension, *dkv* the key/value projection dimension per attention head, and *NH* the number of attention heads.

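These columns map onto configuration fields in `transformers`. A minimal sketch of inspecting them for this checkpoint (again assuming the repository id `google/t5-efficient-xl`):

```python
from transformers import T5Config

# Assumed repository id; any T5 checkpoint on the Hub exposes the same fields.
config = T5Config.from_pretrained("google/t5-efficient-xl")

print(config.num_layers, config.num_decoder_layers)  # NL (encoder/decoder blocks)
print(config.d_ff)                                    # dff
print(config.d_model)                                 # dmodel
print(config.d_kv)                                    # dkv
print(config.num_heads)                               # NH
```
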
## Pre-Training

The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524,288 steps using
the span-based masked language modeling (MLM) objective.

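To illustrate what span-based MLM (span corruption) inputs and targets look like, here is a small hand-crafted sketch (during actual pretraining, spans are dropped at random over tokens; this example hard-codes one possible corruption of a sentence, and the repository id is again an assumption):

```python
from transformers import AutoTokenizer

# T5 tokenizers reserve <extra_id_0>, <extra_id_1>, ... as sentinel tokens.
tokenizer = AutoTokenizer.from_pretrained("google/t5-efficient-xl")

# Span corruption: contiguous spans are dropped from the input and replaced by
# sentinel tokens; the target spells out the dropped spans after each sentinel.
corrupted_input = "The <extra_id_0> walks in <extra_id_1> park"
target = "<extra_id_0> cute dog <extra_id_1> the <extra_id_2>"

input_ids = tokenizer(corrupted_input, return_tensors="pt").input_ids
labels = tokenizer(target, return_tensors="pt").input_ids
# Passing input_ids and labels to the model would yield the denoising loss.
```
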
## Downstream Performance

TODO:

Pretraining Dataset: [C4](https://huggingface.co/datasets/c4)