---
language:
- 'no'
- nb
- nn
license: cc-by-4.0
pipeline_tag: token-classification
---
# Targeted Sentiment Analysis model for Norwegian text
This model is a fine-tuned version of [ltg/norbert3-large](https://huggingface.co/ltg/norbert3-large) for Targeted Sentiment Analysis (TSA) on Norwegian text. The fine-tuning script is available [on GitHub](https://github.com/egilron/seq-label.git).
In TSA, we identify sentiment targets: the things that are spoken about positively or negatively in each sentence. Our model performs the task through sequence labeling, also known as token classification.
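To make the labeling scheme concrete, here is a small illustrative sketch of what sequence labels for one sentence can look like. The `targ-Positive`/`targ-Negative` tag names come from this card; the BIO-style `B-`/`I-` prefixes are an assumption about how the dataset encodes spans.
```python
# Illustrative only: token-level TSA labels for an English gloss of the
# example sentence used below. "targ-Positive"/"targ-Negative" are the label
# names from this card; the B-/I- prefixes are an assumed BIO encoding.
tokens = ["His", "hoarse", "voice", "suits", "the", "blues"]
labels = ["B-targ-Positive", "I-targ-Positive", "I-targ-Positive", "O", "O", "O"]
```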

The dataset used for fine-tuning is [ltg/norec_tsa](https://huggingface.co/datasets/ltg/norec_tsa) at its default settings, where sentiment targets are labeled as either "targ-Positive" or "targ-Negative". The norec_tsa dataset is derived from the [NoReC_fine dataset](https://github.com/ltgoslo/norec_fine).

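If you want to inspect the fine-tuning data yourself, here is a minimal sketch using the `datasets` library (the split and column names are not documented in this card, so check the dataset card for the actual schema):
```python
from datasets import load_dataset

# Load the default configuration of the TSA dataset
ds = load_dataset("ltg/norec_tsa")
print(ds)              # shows the available splits and column names
print(ds["train"][0])  # one example; assumed to hold tokens and their TSA tags
```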
## Quick start
You can use this model in your scripts as follows:
```python
>>> from transformers import pipeline, AutoTokenizer
>>> origin = "ltg/norbert3-large_TSA"
>>> # NorBERT 3 models ship their own model code, so remote code must be trusted
>>> trust_remote = "norbert3" in origin.lower()
>>> # "His hoarse, slightly sore voice suits the blues, but this record will
>>> # hardly become one of his biggest commercial successes."
>>> text = "Hans hese , litt såre stemme kler bluesen , men denne platen kommer neppe til å bli blant hans største kommersielle suksesser ."
>>> pipe = pipeline(
...     "token-classification",
...     model=origin,
...     tokenizer=AutoTokenizer.from_pretrained(origin),
...     aggregation_strategy="first",
...     trust_remote_code=trust_remote,  # downloads the norbert3 configuration
... )
>>> preds = pipe(text)
>>> for p in preds:
...     print(p)
{'entity_group': 'targ-Positive', 'score': 0.6990814, 'word': ' Hans hese , litt såre stemme', 'start': 0, 'end': 28}
{'entity_group': 'targ-Negative', 'score': 0.5721016, 'word': ' platen', 'start': 53, 'end': 60}
```
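Note: `aggregation_strategy="first"` makes the pipeline merge subword tokens back into whole words, taking each word's label from its first subtoken, which is why the predictions above come back as word-level spans rather than individual subtokens.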

## Training hyperparameters
- per_device_train_batch_size: 64
- per_device_eval_batch_size: 8
- learning_rate: 1e-05
- gradient_accumulation_steps: 1
- num_train_epochs: 24 (best epoch: 18)
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
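As a sketch, these settings map onto `transformers.TrainingArguments` roughly as follows. The values come from the list above; everything else (output directory, evaluation and save strategy, best-model selection) is an illustrative assumption, and the actual script is on GitHub.
```python
from transformers import TrainingArguments

# The hyperparameters above expressed as TrainingArguments. Settings not
# listed in this card are marked as assumptions.
args = TrainingArguments(
    output_dir="norbert3-large_TSA",  # assumed
    per_device_train_batch_size=64,
    per_device_eval_batch_size=8,
    learning_rate=1e-5,
    gradient_accumulation_steps=1,
    num_train_epochs=24,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",      # assumed; best epoch was 18
    save_strategy="epoch",            # assumed
    load_best_model_at_end=True,      # assumed
)
```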

## Evaluation
Precision, recall, and F1 for each target label, with micro, macro, and weighted averages:
```
               precision    recall  f1-score   support

targ-Negative     0.4648    0.3143    0.3750       210
targ-Positive     0.5097    0.6019    0.5520       525

    micro avg     0.5013    0.5197    0.5104       735
    macro avg     0.4872    0.4581    0.4635       735
 weighted avg     0.4969    0.5197    0.5014       735
```
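The report above follows the layout of `seqeval`'s classification report for BIO-tagged sequences (that `seqeval` produced it is an assumption based on the format). A minimal, illustrative sketch of producing such a report, with made-up gold and predicted tag sequences:
```python
from seqeval.metrics import classification_report

# Illustrative gold and predicted BIO tag sequences for two short sentences.
# In practice these come from running the model over the evaluation split.
y_true = [["B-targ-Positive", "I-targ-Positive", "O"], ["B-targ-Negative", "O"]]
y_pred = [["B-targ-Positive", "I-targ-Positive", "O"], ["O", "O"]]

print(classification_report(y_true, y_pred, digits=4))
```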