hjb committed on
Commit
7fe192b
1 Parent(s): f815645
README.md CHANGED
@@ -13,7 +13,7 @@ metrics:
 - f1
 ---
 
-# Ælæctra - Finetuned for Named Entity Recognition on the [DaNE dataset](https://danlp.alexandra.dk/304bd159d5de/datasets/ddt.zip) (Hvingelby et al., 2020).
+# Ælæctra - Finetuned for Named Entity Recognition on the [DaNE dataset](https://danlp.alexandra.dk/304bd159d5de/datasets/ddt.zip) (Hvingelby et al., 2020) by Malte Højmark-Bertelsen.
 **Ælæctra** is a Danish Transformer-based language model created to enhance the variety of Danish NLP resources with a more efficient model compared to previous state-of-the-art (SOTA) models.
 
 Ælæctra was pretrained with the ELECTRA-Small (Clark et al., 2020) pretraining approach by using the Danish Gigaword Corpus (Strømberg-Derczynski et al., 2020) and evaluated on Named Entity Recognition (NER) tasks. Since NER only presents a limited picture of Ælæctra's capabilities I am very interested in further evaluations. Therefore, if you employ it for any task, feel free to hit me up with your findings!
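The README's NER framing follows the usual BIO tagging convention (`B-`/`I-` prefixes plus `O`). As a minimal, model-independent sketch of how token-level BIO tags are grouped into entity spans, here is an illustrative helper; the function name and the Danish example sentence are my own, not from the repository:

```python
def bio_to_spans(tokens, tags):
    """Group BIO tags (e.g. B-PER, I-PER, O) into (entity_type, text) spans."""
    spans, current = [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            # A B- tag always opens a new span, closing any open one.
            if current:
                spans.append(current)
            current = (tag[2:], [token])
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            # An I- tag continues the open span only if the type matches.
            current[1].append(token)
        else:
            # O (or an inconsistent I-) closes any open span.
            if current:
                spans.append(current)
            current = None
    if current:
        spans.append(current)
    return [(etype, " ".join(words)) for etype, words in spans]

print(bio_to_spans(
    ["Malte", "bor", "i", "København"],
    ["B-PER", "O", "O", "B-LOC"],
))  # → [('PER', 'Malte'), ('LOC', 'København')]
```

This kind of span aggregation is what the f1 metric listed in the model card metadata is typically computed over.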
config.json CHANGED
@@ -9,30 +9,30 @@
   "hidden_dropout_prob": 0.1,
   "hidden_size": 256,
   "id2label": {
-    "0": "LABEL_0",
-    "1": "LABEL_1",
-    "2": "LABEL_2",
-    "3": "LABEL_3",
-    "4": "LABEL_4",
-    "5": "LABEL_5",
-    "6": "LABEL_6",
-    "7": "LABEL_7",
-    "8": "LABEL_8",
-    "9": "LABEL_9"
+    "0": "B-PER",
+    "1": "I-PER",
+    "2": "B-LOC",
+    "3": "I-LOC",
+    "4": "B-ORG",
+    "5": "I-ORG",
+    "6": "O",
+    "7": "[PAD]",
+    "8": "[CLS]",
+    "9": "[SEP]"
   },
   "initializer_range": 0.02,
   "intermediate_size": 1024,
   "label2id": {
-    "LABEL_0": 0,
-    "LABEL_1": 1,
-    "LABEL_2": 2,
-    "LABEL_3": 3,
-    "LABEL_4": 4,
-    "LABEL_5": 5,
-    "LABEL_6": 6,
-    "LABEL_7": 7,
-    "LABEL_8": 8,
-    "LABEL_9": 9
+    "B-PER": 0,
+    "I-PER": 1,
+    "B-LOC": 2,
+    "I-LOC": 3,
+    "B-ORG": 4,
+    "I-ORG": 5,
+    "O": 6,
+    "[PAD]": 7,
+    "[CLS]": 8,
+    "[SEP]": 9
   },
   "layer_norm_eps": 1e-12,
   "max_position_embeddings": 512,
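This config change replaces the placeholder `LABEL_*` names with the actual DaNE-style tag set, so per-token label ids decode directly to readable tags. A small sketch of that decoding, using the exact `id2label` mapping from the updated `config.json` (the `decode` helper itself is illustrative, not part of the repository):

```python
# id2label exactly as set in the updated config.json
id2label = {
    0: "B-PER", 1: "I-PER", 2: "B-LOC", 3: "I-LOC",
    4: "B-ORG", 5: "I-ORG", 6: "O", 7: "[PAD]", 8: "[CLS]", 9: "[SEP]",
}
# The inverse mapping, matching the label2id block in the config.
label2id = {label: i for i, label in id2label.items()}

SPECIAL = frozenset({"[PAD]", "[CLS]", "[SEP]"})

def decode(pred_ids):
    """Map predicted label ids to tags, dropping special-token labels."""
    return [id2label[i] for i in pred_ids if id2label[i] not in SPECIAL]

print(decode([8, 0, 6, 2, 9]))  # → ['B-PER', 'O', 'B-LOC']
```

With the old placeholder names, downstream consumers (e.g. a `transformers` token-classification pipeline reading `config.id2label`) would have emitted `LABEL_0` … `LABEL_9` instead of meaningful NER tags, which is what this commit fixes.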
pytorch_model.bin → test_pytorch_model.bin RENAMED
File without changes