Jón Daðason committed on
Commit 2fafb1d
1 Parent(s): 3d92a9a

Adding model

README.md CHANGED
@@ -1,3 +1,25 @@
- ---
- license: cc-by-4.0
- ---
+ ---
+ language:
+ - is
+ - no
+ license: cc-by-4.0
+ datasets:
+ - igc
+ - ic3
+ - icc
+ - mc4
+ ---
+
+ # Icelandic-Norwegian ELECTRA-Small
+ This model was pretrained on the following corpora:
+ * The [Icelandic Gigaword Corpus](http://igc.arnastofnun.is/) (IGC)
+ * The [Icelandic Common Crawl Corpus](https://arxiv.org/abs/2201.05601) (IC3)
+ * The [Icelandic Crawled Corpus](https://huggingface.co/datasets/jonfd/ICC) (ICC)
+ * The [Multilingual Colossal Clean Crawled Corpus](https://huggingface.co/datasets/mc4) (mC4) - Icelandic and Norwegian text obtained from the .is and .no domains, respectively
+
+ After document-level deduplication and filtering, the combined corpus contained 7.41B tokens, split equally between the two languages. The model was trained for 1.1 million steps using a WordPiece tokenizer with a vocabulary size of 64,105; all other settings were left at their defaults.
+
+ # Acknowledgments
+ This research was supported with Cloud TPUs from Google's TPU Research Cloud (TRC).
+
+ This project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by [Almannarómur](https://almannaromur.is/), is funded by the Icelandic Ministry of Education, Science and Culture.
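
The model card added above stops short of a usage example. Below is a minimal sketch of loading the discriminator with Hugging Face Transformers; the repository id used is an assumption, so substitute the id this model is actually published under.

```python
# A minimal usage sketch, not part of this commit. The repository id is
# an assumption; substitute the id the model is actually published under.
import torch
from transformers import AutoTokenizer, ElectraForPreTraining

model_id = "jonfd/electra-small-nordic"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = ElectraForPreTraining.from_pretrained(model_id)

# ELECTRA's pretraining head is a discriminator: for each input token it
# emits a logit, positive when the token is judged to be a replacement.
inputs = tokenizer("Reykjavík er höfuðborg Íslands.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits.squeeze(0)

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, logit in zip(tokens, logits):
    print(f"{token:>12}  {'replaced' if logit > 0 else 'original'}")
```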
config.json ADDED
@@ -0,0 +1,25 @@
+ {
+   "architectures": [
+     "ElectraForPreTraining"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "embedding_size": 128,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 256,
+   "initializer_range": 0.02,
+   "intermediate_size": 1024,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "electra",
+   "num_attention_heads": 4,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "summary_activation": "gelu",
+   "summary_last_dropout": 0.1,
+   "summary_type": "first",
+   "summary_use_proj": true,
+   "type_vocab_size": 2,
+   "vocab_size": 64105
+ }
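
These hyperparameters are the standard ELECTRA-Small discriminator layout: 128-dimensional embeddings projected into a 12-layer, 256-dimensional encoder with 4 attention heads. As a sanity check, the config can be rebuilt with Transformers' `ElectraConfig`; the resulting parameter count (roughly 17.9M) is consistent with the ~71 MB fp32 checkpoint added below.

```python
# A hedged sketch: rebuilding config.json with ElectraConfig and
# instantiating a randomly initialized model to verify its size.
# Several of these values coincide with ElectraConfig's defaults for
# the small architecture; they are spelled out here for clarity.
from transformers import ElectraConfig, ElectraForPreTraining

config = ElectraConfig(
    vocab_size=64105,
    embedding_size=128,
    hidden_size=256,
    num_hidden_layers=12,
    num_attention_heads=4,
    intermediate_size=1024,
    max_position_embeddings=512,
)
model = ElectraForPreTraining(config)
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params:,} parameters")  # ~17.9M; at 4 bytes each, ~71 MB on disk
```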
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9dcdeb27911d09d0ef50694f1bb2e5ab7cdc34338b8ffcc00cbf08b532bd4b21
+ size 71473961
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]"}
tf_model.h5 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fa8d9cdd988655045bde83c838870bdc785e7d8a16d5a7598bde3df4e469f51d
+ size 71680316
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"do_lower_case": false, "strip_accents": false, "model_max_length": 128}
vocab.txt ADDED
The diff for this file is too large to render.