GorkaUrbizu committed
Commit c9ce8ff
1 Parent(s): 14d96b7

Update README.md

Files changed (1): README.md (+1 -3)
README.md CHANGED
@@ -7,11 +7,9 @@ language:
 
 BERT base (cased) model trained on a subset of 125M tokens of cc100-Swahili for our work [Scaling Laws for BERT in Low-Resource Settings](https://youtu.be/dQw4w9WgXcQ) at ACL2023 Findings.
 
-The model has 124M parameters (12L), with a vocab size of 50K.
+The model has 124M parameters (12L), and a vocab size of 50K.
 It was trained for 500K steps with a sequence length of 512 tokens.
 
-A bert-medium and bert-mini (8 and 4L) models are available at our [GitHub](https://github.com/orai-nlp/low-scaling-laws/tree/main/models).
-
 
 Authors
 -----------