fgaim committed on
Commit f822d94
1 Parent(s): 8eb3c17

Update README

Files changed (1)
  1. README.md +23 -1
README.md CHANGED
@@ -17,6 +17,28 @@ The hyperparameters corresponding to model sizes mentioned above are as follows:
 
 | Model Size | L | AH | HS | FFN | P | Seq |
 |------------|----|----|-----|------|------|------|
-| BASE | 12 | 4 | 256 | 1024 | 14M | 512 |
+| SMALL | 12 | 4 | 256 | 1024 | 14M | 512 |
 
 (L = number of layers; AH = number of attention heads; HS = hidden size; FFN = feedforward network dimension; P = number of parameters; Seq = maximum sequence length.)
+
+
+### Framework versions
+
+- Transformers 4.12.0.dev0
+- Pytorch 1.9.0+cu111
+- Datasets 1.13.3
+- Tokenizers 0.10.3
+
+
+## Citation
+
+If you use this model in your product or research, please cite as follows:
+
+```
+@article{Fitsum2021TiPLMs,
+  author={Fitsum Gaim and Wonsuk Yang and Jong C. Park},
+  title={Monolingual Pre-trained Language Models for Tigrinya},
+  year=2021,
+  publisher={WiNLP 2021 at EMNLP 2021}
+}
+```
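
For context, the SMALL row in the updated table maps directly onto a Hugging Face `transformers` model configuration. The snippet below is a minimal sketch of that mapping; the ELECTRA architecture family, `vocab_size`, and `embedding_size` are assumptions for illustration and are not stated in this diff.

```python
# Minimal sketch (not part of the commit): instantiating a model with the
# SMALL hyperparameters from the README table. The ELECTRA architecture,
# vocab_size, and embedding_size below are assumptions, not taken from the diff.
from transformers import ElectraConfig, ElectraModel

config = ElectraConfig(
    num_hidden_layers=12,         # L   = number of layers
    num_attention_heads=4,        # AH  = number of attention heads
    hidden_size=256,              # HS  = hidden size
    intermediate_size=1024,       # FFN = feedforward network dimension
    max_position_embeddings=512,  # Seq = maximum sequence length
    embedding_size=128,           # assumption: factorized input embeddings
    vocab_size=30522,             # assumption: actual vocabulary size not given here
)

model = ElectraModel(config)
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.1f}M parameters")  # roughly the ~14M listed under P
```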