larkkin committed Commit a55f579 · 1 Parent(s): af67bab

Update model card

Files changed (1): README.md (+15 -15)

README.md CHANGED
@@ -11,7 +11,7 @@ model-index:
 - name: SSA-Perin
   results:
   - task:
-      type: structured sentiment analysis
+      type: token-classification
     dataset:
       name: NoReC
       type: NoReC
@@ -29,8 +29,19 @@ model-index:
 
 
 
-This repository contains a pretrained model (and an easy-to-run wrapper for it) for structured sentiment analysis in the Norwegian language, pre-trained on the [NoReC dataset](https://huggingface.co/datasets/norec).
-This is an implementation of the method described in "Direct parsing to sentiment graphs" (Samuel _et al._, ACL 2022). The main repository, which also contains the scripts for training the model, can be found on the project [github](https://github.com/jerbarnes/direct_parsing_to_sent_graph).
+This repository contains a pretrained model (and an easy-to-run wrapper for it) for structured sentiment analysis in the Norwegian language, pre-trained on the [NoReC_fine dataset](https://github.com/ltgoslo/norec_fine).
+This is an implementation of the method described in
+```bibtex
+@misc{samuel2022direct,
+  title={Direct parsing to sentiment graphs},
+  author={David Samuel and Jeremy Barnes and Robin Kurtz and Stephan Oepen and Lilja Øvrelid and Erik Velldal},
+  year={2022},
+  eprint={2203.13209},
+  archivePrefix={arXiv},
+  primaryClass={cs.CL}
+}
+```
+The main repository, which also contains the scripts for training the model, can be found on the project [github](https://github.com/jerbarnes/direct_parsing_to_sent_graph).
 The model is also available in the form of a [HF space](https://huggingface.co/spaces/ltg/ssa-perin).
 
 
@@ -40,24 +51,13 @@ The current model
 - uses "labeled-edge" graph encoding
 - does not use character-level embedding
 - all other hyperparameters are set to [default values](https://github.com/jerbarnes/direct_parsing_to_sent_graph/blob/main/perin/config/edge_norec.yaml)
-, and it achieves the following results on the held-out set of the NoReC dataset:
+, and it achieves the following results on the held-out set of the dataset:
 
 | Unlabeled sentiment tuple F1 | Target F1 | Relative polarity precision |
 |:----------------------------:|:----------:|:---------------------------:|
 | 0.434 | 0.541 | 0.926 |
 
 
-In "Word Substitution with Masked Language Models as Data Augmentation for Sentiment Analysis", we analyzed data augmentation strategies for improving the performance of the model. Using masked-language modeling (MLM), we augmented the sentences with MLM-substituted words inside, outside, or inside+outside the actual sentiment tuples. The results below show that augmentation may improve the model performance. This space, however, runs the original model trained without augmentation.
-
-|                | Augmentation rate | Unlabeled sentiment tuple F1 | Target F1 | Relative polarity precision |
-|----------------|-------------------|------------------------------|-----------|-----------------------------|
-| Baseline       | 0%                | 43.39                        | 54.13     | 92.59                       |
-| Outside        | 59%               | **45.08**                    | 56.18     | 92.95                       |
-| Inside         | 9%                | 43.38                        | 55.62     | 92.49                       |
-| Inside+Outside | 27%               | 44.12                        | **56.44** | **93.19**                   |
-
-
-
 The model can be easily used for predicting sentiment tuples as follows:
 
 ```python