ptaszynski committed on
Commit 258cdec
1 Parent(s): cf9f8db

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -18,9 +18,9 @@ The corpus was tokenized for pretraining with [MeCab](https://taku910.github.io/
 
 ## Model architecture
 
-This model uses the original ELECTRA Small model; 12 layers, 128 dimensions of hidden states, and 12 attention heads.
+This model uses ELECTRA Small model settings: 12 layers, 128 dimensions of hidden states, and 12 attention heads.
 
-Vocabulary size was 32,000 tokens.
+Vocabulary size was set to 32,000 tokens.
 
 ## Licenses
 
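The architecture figures in the diff above (12 layers, hidden size 128, a 32,000-token vocabulary) can be turned into a rough parameter count with the standard Transformer-encoder formulas. This is a back-of-the-envelope sketch, not the authors' code: the 4x feed-forward width and the 512-position limit are assumptions not stated in the README, and biases and LayerNorm weights are omitted, so the released checkpoint's exact count will differ.

```python
def encoder_params(vocab_size: int, hidden: int, layers: int,
                   ffn_mult: int = 4, max_pos: int = 512) -> int:
    """Approximate weight count for a BERT/ELECTRA-style encoder.

    The attention-head count does not affect the parameter count
    (heads split the same projection matrices), so it is not an argument.
    ffn_mult and max_pos are assumed defaults, not values from the README.
    """
    # Token + position + segment (2-type) embedding tables.
    emb = (vocab_size + max_pos + 2) * hidden
    attn = 4 * hidden * hidden              # Q, K, V and output projections
    ffn = 2 * hidden * (ffn_mult * hidden)  # the two feed-forward projections
    return emb + layers * (attn + ffn)

print(f"{encoder_params(32_000, 128, 12):,} weight parameters")
```

Under these assumptions the configuration works out to roughly 6.5M weights, with the embedding table accounting for the majority of them, which is typical for small models with large vocabularies.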