Model Description

FinABSA-Longer is a T5-Large model trained for Aspect-Based Sentiment Analysis (ABSA) on SEntFiN 1.0 with additional augmentation techniques. Compared to the earlier FinABSA model, it remains robust on longer input sequences. The target aspect in the input is replaced with a [TGT] token, and the model predicts the sentiment for that aspect. See the GitHub repo for details.
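For illustration, aspect masking can be sketched as a simple string replacement. The `mask_aspect` helper below is an assumption for demonstration, not part of the released code, and a naive replace may need refinement for aspects that occur multiple times:

```python
def mask_aspect(sentence: str, aspect: str) -> str:
    """Replace the target aspect mention with the [TGT] token."""
    return sentence.replace(aspect, "[TGT]")

print(mask_aspect("Tesla stocks dropped 42% while Samsung rallied.", "Samsung"))
# → Tesla stocks dropped 42% while [TGT] rallied.
```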

How to use

You can use this model directly using the AutoModelForSeq2SeqLM class.

>>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("amphora/FinABSA")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("amphora/FinABSA")

>>> input_str = "[TGT] stocks dropped 42% while Samsung rallied."
>>> inputs = tokenizer(input_str, return_tensors="pt")
>>> output = model.generate(**inputs, max_length=20)
>>> print(tokenizer.decode(output[0], skip_special_tokens=True))
The sentiment for [TGT] in the given sentence is NEGATIVE.
>>> input_str = "Tesla stocks dropped 42% while [TGT] rallied."
>>> inputs = tokenizer(input_str, return_tensors="pt")
>>> output = model.generate(**inputs, max_length=20)
>>> print(tokenizer.decode(output[0], skip_special_tokens=True))
The sentiment for [TGT] in the given sentence is POSITIVE.
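Since the model emits a full template sentence rather than a bare label, downstream code typically needs to extract the label. A minimal sketch, assuming the output always follows the template above (the `parse_sentiment` helper and the NEUTRAL/UNKNOWN cases are assumptions, not documented behavior):

```python
import re

def parse_sentiment(decoded: str) -> str:
    """Pull POSITIVE/NEGATIVE/NEUTRAL out of the generated template sentence."""
    match = re.search(r"\b(POSITIVE|NEGATIVE|NEUTRAL)\b", decoded)
    return match.group(1) if match else "UNKNOWN"

print(parse_sentiment("The sentiment for [TGT] in the given sentence is NEGATIVE."))
# → NEGATIVE
```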

Evaluation Results

On a test split arbitrarily sampled from SEntFiN 1.0, the model achieves an average accuracy of 87%.
