ophelielacroix committed on
Commit 5d6032b
1 Parent(s): 22ac89b

First version of the da-bert-emotion-binary model and tokenizer

README.md ADDED
@@ -0,0 +1,37 @@
+ ---
+ language:
+ - da
+ tags:
+ - bert
+ - pytorch
+ - emotion
+ license: cc-by-4.0
+ datasets:
+ - social media
+ metrics:
+ - f1
+ widget:
+ - text: "Der er et træ i haven."
+ ---
+
+ # Danish BERT for emotion detection
+
+ The BERT Emotion model detects whether a Danish text is emotional or not.
+ It is based on the pretrained [Danish BERT](https://github.com/certainlyio/nordic_bert) model by BotXO, which has been fine-tuned on social media data.
+
+ See the [DaNLP documentation](https://danlp-alexandra.readthedocs.io/en/latest/docs/tasks/sentiment_analysis.html#bert-emotion) for more details.
+
+ Here is how to use the model:
+
+ ```python
+ from transformers import BertTokenizer, BertForSequenceClassification
+
+ # Load the fine-tuned model and its matching tokenizer from the Hugging Face Hub.
+ model = BertForSequenceClassification.from_pretrained("DaNLP/da-bert-emotion-binary")
+ tokenizer = BertTokenizer.from_pretrained("DaNLP/da-bert-emotion-binary")
+ ```
+
+ ## Training data
+
+ The data used for training has not been made publicly available. It consists of social media data manually annotated in collaboration with Danmarks Radio.
+
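The README snippet above stops after loading the model; the sketch below is one illustrative way to run inference with it, not part of the original model card. The example sentence is the widget text from the YAML header, the `truncation`/`max_length` settings are assumptions based on the 512-token limit in config.json, and the label lookup relies on the `id2label` mapping defined in that same config.

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

# Load the model and tokenizer exactly as in the README above.
model = BertForSequenceClassification.from_pretrained("DaNLP/da-bert-emotion-binary")
tokenizer = BertTokenizer.from_pretrained("DaNLP/da-bert-emotion-binary")
model.eval()

# The widget example from the model card's YAML header.
text = "Der er et træ i haven."

# Tokenize and run a forward pass without gradient tracking.
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    logits = model(**inputs).logits

# config.json maps class index 0 to "emotional" and 1 to "no emotion".
predicted_class = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_class])
```

The printed label is either `emotional` or `no emotion`, matching the binary task the model card describes.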
added_tokens.json ADDED
@@ -0,0 +1 @@
+ {}
config.json ADDED
@@ -0,0 +1,37 @@
+ {
+   "_name_or_path": ".",
+   "architectures": [
+     "BertForSequenceClassification"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "directionality": "bidi",
+   "finetuning_task": "emo",
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "id2label": {
+     "0": "emotional",
+     "1": "no emotion"
+   },
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "label2id": {
+     "emotional": 0,
+     "no emotion": 1
+   },
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "output_past": true,
+   "pad_token_id": 0,
+   "pooler_fc_size": 768,
+   "pooler_num_attention_heads": 12,
+   "pooler_num_fc_layers": 3,
+   "pooler_size_per_head": 128,
+   "pooler_type": "first_token_transform",
+   "type_vocab_size": 2,
+   "vocab_size": 32000
+ }
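As a small illustration of the mapping above, the following sketch loads this configuration with transformers' `AutoConfig` and prints the label mapping. The repository id is taken from the README; the snippet is an inspection aid, not something this commit provides.

```python
from transformers import AutoConfig

# Fetch the config.json shown above from the Hub.
config = AutoConfig.from_pretrained("DaNLP/da-bert-emotion-binary")

# transformers converts the JSON's string keys to integers, so index 0
# is "emotional" and index 1 is "no emotion".
print(config.id2label)    # {0: 'emotional', 1: 'no emotion'}
print(config.num_labels)  # 2
```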
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8acc1e4572e4a15ef594dc707ecdcda17917a1b9e14d3d4d146a836dc1af612c
+ size 442562057
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]"}
tf_model.h5 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:201b82ba337775fdd0971918eeeb61bdf1d9fb0e786d0a792fc2af46f960fab3
+ size 442746216
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"do_lower_case": true, "init_inputs": []}
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:57ab674f7dc043d961d6bb19a24ebf338aec2c2f7378a1057830c9984d5c05fc
+ size 1257
vocab.txt ADDED
The diff for this file is too large to render. See raw diff