Model Card for SSA-PERIN for Norwegian

Model Details

Here we release a pretrained model (and an easy-to-run wrapper) for structured sentiment analysis (SSA) of Norwegian text, trained on the NoReC_fine dataset. It implements the method described in the paper Direct parsing to sentiment graphs by Samuel et al. (2022), which demonstrated how a graph-based semantic parser (PERIN) can be applied to the task of structured sentiment analysis, directly predicting sentiment graphs from text.

Model Description

  • Developed by: The SANT project (Sentiment Analysis for Norwegian Text) at the Language Technology Group (LTG) at the University of Oslo.
  • Funded by: SANT is funded by the Research Council of Norway.
  • Language(s): Norwegian (Bokmål/Nynorsk)
  • License: Apache 2.0

Model Sources

  • Paper: Direct parsing to sentiment graphs by Samuel et al. published at ACL 2022
  • Repository: The scripts used for training can be found in the GitHub repository accompanying the paper by Samuel et al. (2022) above.
  • Demo: To see a demo of how it works, you can try the model in our Hugging Face Space.
  • Limitations: The training data is based on professional reviews covering multiple domains, but the model may not generalize to other text types or domains.

How to Get Started with the Model

The model will attempt to identify the following components for a given sentence it deems to be sentiment-bearing: source expressions (the opinion holder), target expressions (what the opinion is directed towards), polar expressions (the part of the text indicating that an opinion is expressed), and finally the polarity (positive or negative). For more information about how these categories are defined in the training data, please see the paper A Fine-grained Sentiment Dataset for Norwegian by Øvrelid et al. 2020. For each identified expression, the character offsets in the text are also provided.

Here is an example showing how to use the model for predicting such sentiment tuples:

>>> import model_wrapper
>>> model = model_wrapper.PredictionModel()
>>> model.predict(['vi liker svart kaffe'])
[{'sent_id': '0',
  'text': 'vi liker svart kaffe',
  'opinions': [{'Source': [['vi'], ['0:2']],
    'Target': [['svart', 'kaffe'], ['9:14', '15:20']],
    'Polar_expression': [['liker'], ['3:8']],
    'Polarity': 'Positive'}]}]
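The character offsets returned for each expression can be used to slice the corresponding spans directly out of the input text. A minimal sketch, assuming only the output format shown above (the `extract_spans` helper is illustrative, not part of the wrapper):

```python
def extract_spans(prediction):
    """Slice each annotated expression out of the original text
    using the 'start:end' character offsets in the model output."""
    text = prediction["text"]
    spans = {}
    for opinion in prediction["opinions"]:
        for role in ("Source", "Target", "Polar_expression"):
            tokens, offsets = opinion[role]
            spans[role] = [text[int(s):int(e)]
                           for s, e in (o.split(":") for o in offsets)]
        spans["Polarity"] = opinion["Polarity"]
    return spans

# The prediction for the example sentence shown above:
pred = {"sent_id": "0",
        "text": "vi liker svart kaffe",
        "opinions": [{"Source": [["vi"], ["0:2"]],
                      "Target": [["svart", "kaffe"], ["9:14", "15:20"]],
                      "Polar_expression": [["liker"], ["3:8"]],
                      "Polarity": "Positive"}]}
print(extract_spans(pred))
# {'Source': ['vi'], 'Target': ['svart', 'kaffe'],
#  'Polar_expression': ['liker'], 'Polarity': 'Positive'}
```

Note that the offsets index into the raw sentence string, so the sliced spans match the token lists already present in the output.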

Training Details

Training Data

The model is trained on NoReC_fine, a dataset for fine-grained sentiment analysis in Norwegian. It is based on a subset of documents from the Norwegian Review Corpus (NoReC), which consists of professionally authored reviews from multiple news sources, spanning a wide variety of domains including literature, games, music, products, movies and more.

Model Configuration and Training Hyperparameters

The method proposed by Samuel et al. (2022) suggests three different ways to encode sentiment graphs: "node-centric", "labeled-edge", and "opinion-tuple". The model released here uses the following configuration:

  • "labeled-edge" graph encoding,
  • no character-level embeddings,
  • all other hyperparameters are set to default values,
  • trained on top of underlying masked language model NorBERT 2.

Evaluation

The model achieves the following results on the held-out test set of NoReC_fine (see the paper for a description of the metrics):

  • Unlabeled sentiment tuple F1: 0.434
  • Target F1: 0.541
  • Relative polarity precision: 0.926
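For intuition about the tuple-level metric, here is a deliberately simplified sketch of an exact-match sentiment tuple F1 (the paper's metric additionally weights matches by token-span overlap, so this is not the evaluation code used for the numbers above):

```python
def tuple_f1(gold, predicted):
    """Exact-match F1 over sets of sentiment tuples.

    Simplified illustration: each tuple must match exactly,
    whereas the paper's metric credits partial span overlap.
    """
    gold, predicted = set(gold), set(predicted)
    tp = len(gold & predicted)          # true positives
    if tp == 0:
        return 0.0
    precision = tp / len(predicted)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)

# One gold tuple; the system predicts it plus one spurious tuple.
gold = {("vi", "svart kaffe", "liker")}
pred = {("vi", "svart kaffe", "liker"), ("", "kaffe", "god")}
print(round(tuple_f1(gold, pred), 3))  # 0.667
```

With one correct and one spurious prediction, precision is 0.5 and recall is 1.0, giving an F1 of 2/3.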

Citation

If you use this model in your academic work, please cite the following paper:

@inproceedings{samuel2022,
    title = "Direct parsing to sentiment graphs",
    author = "David Samuel and Jeremy Barnes and Robin Kurtz and
              Stephan Oepen and Lilja Øvrelid and Erik Velldal",
    year = "2022",
    booktitle = "Proceedings of the 60th Annual Meeting of
                 the Association for Computational Linguistics",
    address = "Dublin, Ireland"
}

Model Card Authors

Erik Velldal and Larisa Kolesnichenko
