patrickvonplaten committed
Commit 214a06c
1 Parent(s): 1734a1e

Upload README.md

Files changed (1)
  1. README.md +15 -15
README.md CHANGED
@@ -1,5 +1,5 @@
  ---
- language:
+ language:
  - en
  datasets:
  - c4
@@ -13,11 +13,11 @@ license: apache-2.0
  # T5-Efficient-XL (Deep-Narrow version)

  T5-Efficient-XL is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
- It is a *pretrained-only* checkpoint and was released with the
- paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
+ It is a *pretrained-only* checkpoint and was released with the
+ paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)**
  by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.

- In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
+ In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures
  of similar parameter count.

  To quote the paper:
@@ -39,11 +39,11 @@ A sequence of word embeddings is therefore processed sequentially by each transf

  ## Details model architecture

- This model checkpoint - **t5-efficient-xl** - is of model type **XL** with **no** variations.
- It has **2852** million parameters and thus requires **11406 MB** of memory in full precision (*fp32*)
- or **5703 MB** of memory in half precision (*fp16* or *bf16*).
+ This model checkpoint - **t5-efficient-xl** - is of model type **Xl** with no variations.
+ It has **2851.66** million parameters and thus requires *ca.* **11406.62 MB** of memory in full precision (*fp32*)
+ or **5703.31 MB** of memory in half precision (*fp16* or *bf16*).

- The *conventional* T5 architectures are summarized as follows:
+ A summary of the *original* T5 model architectures can be seen here:

  | Model | nl (el/dl) | ff | dm | kv | nh | #Params|
  | ----| ---- | ---- | ---- | ---- | ---- | ----|
@@ -52,24 +52,24 @@ The *conventional* T5 architectures are summarized as follows:
  | Small | 6/6 | 2048 | 512 | 32 | 8 | 60M|
  | Base | 12/12 | 3072 | 768 | 64 | 12 | 220M|
  | Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M|
- | **XL** | **24/24** | **16384** | **1024** | **128** | **32** | **3B**|
- | XXL | 24/24 | 65536 | 1024 | 128 | 128 | 11B|
+ | Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B|
+ | XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B|

  whereas the following abbreviations are used:

  | Abbreviation | Definition |
  | ----| ---- |
- | nl | Number of transformer blocks (depth) |
- | dm | Dimension of embedding vector (output vector of transformers block) |
- | kv | Dimension of key/value projection matrix |
- | nh | Number of attention heads |
+ | nl | Number of transformer blocks (depth) |
+ | dm | Dimension of embedding vector (output vector of transformers block) |
+ | kv | Dimension of key/value projection matrix |
+ | nh | Number of attention heads |
  | ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) |
  | el | Number of transformer blocks in the encoder (encoder depth) |
  | dl | Number of transformer blocks in the decoder (decoder depth) |
  | sh | Signifies that attention heads are shared |
  | skv | Signifies that key-values projection matrices are tied |

- If a model checkpoint has no specific, *el* or *dl* than both the number of encoder- and decoder layers correspond no *nl*.
+ If a model checkpoint has no specific *el* or *dl*, then the number of encoder and decoder layers both correspond to *nl*.

  ## Pre-Training
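As a side note on the figures in the updated card: the memory numbers follow directly from the parameter count, assuming 4 bytes per parameter in full precision, 2 bytes in half precision, and 1 MB = 10^6 bytes. A minimal sketch of that arithmetic; the last decimals differ slightly from the card because the parameter count is rounded:

```python
# Weights-only memory estimate for t5-efficient-xl (no activations or optimizer state).
# Assumes 1 MB = 10**6 bytes, matching the convention used in the card.
n_params = 2_851_660_000            # ~2851.66 million parameters (rounded figure)

fp32_mb = n_params * 4 / 10**6      # 4 bytes per parameter in fp32
fp16_mb = n_params * 2 / 10**6      # 2 bytes per parameter in fp16/bf16

print(f"fp32:      {fp32_mb:.2f} MB")   # ~11406.64 MB (card: 11406.62 MB)
print(f"fp16/bf16: {fp16_mb:.2f} MB")   # ~5703.32 MB  (card: 5703.31 MB)
```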
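The abbreviations in the table map onto the hyperparameter names used by `T5Config` in `transformers` (*el*/*dl* → `num_layers`/`num_decoder_layers`, *dm* → `d_model`, *kv* → `d_kv`, *nh* → `num_heads`, *ff* → `d_ff`). A minimal sketch of the XL row expressed that way; the vocabulary size of 32128 is the standard T5 value and is an assumption, since it does not appear in the table:

```python
from transformers import T5Config

# XL row of the table above, spelled out as a T5Config.
xl_config = T5Config(
    vocab_size=32128,        # standard T5 vocabulary size (assumed, not in the table)
    d_model=1024,            # dm: dimension of the embedding / hidden vector
    d_kv=128,                # kv: key/value projection dimension per head
    d_ff=16384,              # ff: feed-forward (intermediate) dimension
    num_layers=24,           # el: encoder depth
    num_decoder_layers=24,   # dl: decoder depth
    num_heads=32,            # nh: number of attention heads
)
print(xl_config)
```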
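Finally, since the card describes a *pretrained-only* checkpoint, it needs to be fine-tuned before it is useful on a downstream task. A minimal loading sketch, assuming the checkpoint is published on the Hub as `google/t5-efficient-xl` (the repo id is an assumption; this section only names the checkpoint *t5-efficient-xl*) and that `transformers` and PyTorch are installed:

```python
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

repo_id = "google/t5-efficient-xl"  # assumed Hub repo id for this checkpoint

tokenizer = AutoTokenizer.from_pretrained(repo_id)
# Loading in half precision keeps the weights at roughly the ~5.7 GB quoted above.
model = T5ForConditionalGeneration.from_pretrained(repo_id, torch_dtype=torch.float16)

# Pretrained-only: the weights come from span-mask denoising on C4, so the model
# should be fine-tuned on a downstream task before its outputs are useful.
```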