Jón Daðason committed on
Commit 35c6d2a
1 Parent(s): 3dafc01

Adding model

README.md CHANGED
@@ -1,3 +1,27 @@
- ---
- license: cc-by-4.0
- ---
+ ---
+ language:
+ - is
+ - no
+ - sv
+ - da
+ license: cc-by-4.0
+ datasets:
+ - igc
+ - ic3
+ - icc
+ - mc4
+ ---
+
+ # Nordic ELECTRA-Small
+ This model was pretrained on the following corpora:
+ * The [Icelandic Gigaword Corpus](http://igc.arnastofnun.is/) (IGC)
+ * The [Icelandic Common Crawl Corpus](https://arxiv.org/abs/2201.05601) (IC3)
+ * The [Icelandic Crawled Corpus](https://huggingface.co/datasets/jonfd/ICC) (ICC)
+ * The [Multilingual Colossal Clean Crawled Corpus](https://huggingface.co/datasets/mc4) (mC4) - Icelandic, Norwegian, Swedish and Danish text obtained from .is, .no, .se and .dk domains, respectively
+
+ The total size of the corpus after document-level deduplication and filtering was 14.82B tokens, split equally between the four languages. The model was trained using a WordPiece tokenizer with a vocabulary size of 96,105 for one million steps with a batch size of 256, and otherwise with default settings.
+
+ # Acknowledgments
+ This research was supported with Cloud TPUs from Google's TPU Research Cloud (TRC).
+
+ This project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by [Almannarómur](https://almannaromur.is/), is funded by the Icelandic Ministry of Education, Science and Culture.
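As a usage sketch (not part of this commit), the checkpoint can be loaded as an ELECTRA discriminator with the Transformers library. The repository id below is an assumption inferred from the ICC dataset link above, not something this commit confirms; substitute the actual model id.

```python
# Hedged sketch: load the discriminator and score tokens for replaced-token detection.
# "jonfd/electra-small-nordic" is an assumed repository id.
import torch
from transformers import AutoTokenizer, ElectraForPreTraining

model_id = "jonfd/electra-small-nordic"  # assumption; replace with the real repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = ElectraForPreTraining.from_pretrained(model_id)

# ElectraForPreTraining is the replaced-token-detection head: it emits one
# logit per token, where a high logit means "this token looks replaced".
inputs = tokenizer("Reykjavík er höfuðborg Íslands.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
for token, score in zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]), logits[0]):
    print(f"{token}\t{score.item():+.2f}")
```

For downstream tasks, the same weights would typically be loaded through a task head such as `ElectraForSequenceClassification` instead.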
config.json ADDED
@@ -0,0 +1,25 @@
+ {
+   "architectures": [
+     "ElectraForPreTraining"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "embedding_size": 128,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 256,
+   "initializer_range": 0.02,
+   "intermediate_size": 1024,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "electra",
+   "num_attention_heads": 4,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "summary_activation": "gelu",
+   "summary_last_dropout": 0.1,
+   "summary_type": "first",
+   "summary_use_proj": true,
+   "type_vocab_size": 2,
+   "vocab_size": 96105
+ }
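Two details of this config are worth noting: `embedding_size` (128) is smaller than `hidden_size` (256), so the model uses ELECTRA's factorized embeddings with a learned projection, and the 96,105-entry vocabulary dominates the parameter count. A minimal sketch to cross-check the size against the ~88 MB float32 checkpoint below:

```python
# Sketch: rebuild the config above and count parameters of a randomly
# initialized model; no trained weights are downloaded.
from transformers import ElectraConfig, ElectraForPreTraining

config = ElectraConfig(
    embedding_size=128,        # factorized: embeddings are projected 128 -> 256
    hidden_size=256,
    intermediate_size=1024,
    num_attention_heads=4,
    num_hidden_layers=12,
    max_position_embeddings=512,
    type_vocab_size=2,
    vocab_size=96105,
)
model = ElectraForPreTraining(config)
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.1f}M parameters")  # roughly 22M; at 4 bytes each ≈ the 87.9 MB pytorch_model.bin
```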
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b4d798fe5b5519e5e028f86b83c7fde9c64ffb666294f847b9db8c3fe869f845
+ size 87857961
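This file (and tf_model.h5 below) is stored through Git LFS, so the repository itself holds only a three-line pointer with the object's hash and byte size. A minimal sketch of reading such a pointer, assuming the file is checked out without LFS smudging:

```python
# Sketch: parse a Git LFS pointer file into its "key value" fields.
from pathlib import Path

def parse_lfs_pointer(text: str) -> dict[str, str]:
    """Each pointer line is 'key value'; split on the first space."""
    return dict(line.split(" ", 1) for line in text.strip().splitlines())

pointer = parse_lfs_pointer(Path("pytorch_model.bin").read_text())
print(pointer["oid"])   # sha256:b4d798fe...
print(pointer["size"])  # 87857961 (bytes)
```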
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]"}
tf_model.h5 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5f0417af944d5d0902971d79995396de2e99d24f0dbae4e9a64944a075371188
+ size 88064316
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"do_lower_case": false, "strip_accents": false, "model_max_length": 128}
vocab.txt ADDED
The diff for this file is too large to render. See raw diff