
Aspect-Based Sentiment Analysis

You can test the model at aspect-based-sentiment-analysis.
If you want to find out more information, please contact us at sg-nlp@aisingapore.org.


Model Details

Model Name: Sentic-GCN

  • Description: This is a neural network that utilises an LSTM and a GCN to detect the sentiment polarities of different aspects in the same sentence. The models used correspond to the associated models described in the paper.
  • Paper: Aspect-based sentiment analysis via affective knowledge enhanced graph convolutional networks, 2021: 107643.
  • Author(s): Bin Liang, Hang Su, Lin Gui, Erik Cambria, Ruifeng Xu. (2021).
  • URL: https://github.com/BinLiang-NLP/Sentic-GCN

How to Get Started With the Model

Install Python package

SGnlp is an initiative by AI Singapore's NLP Hub. It aims to bridge the gap between research and industry, promote translational research, and encourage adoption of NLP techniques in industry.

Various NLP models, other than aspect-based sentiment analysis, are available in the Python package. You can try them out at NLP Hub - Demo.

pip install sgnlp

Examples

For the full code guide (such as for SenticGCN), please refer to this documentation.
Alternatively, you can try out the demo for SenticGCN-Bert.

Example of SenticGCN-Bert model (with embedding):

from sgnlp.models.sentic_gcn import (
    SenticGCNBertConfig,
    SenticGCNBertModel,
    SenticGCNBertEmbeddingConfig,
    SenticGCNBertEmbeddingModel,
    SenticGCNBertTokenizer,
    SenticGCNBertPreprocessor,
    SenticGCNBertPostprocessor
)

tokenizer = SenticGCNBertTokenizer.from_pretrained("bert-base-uncased")

# Load Model
config = SenticGCNBertConfig.from_pretrained("./senticgcn_bert/config.json")
model = SenticGCNBertModel.from_pretrained("./senticgcn_bert/pytorch_model.bin", config=config)

# Load Embedding Model
embed_config = SenticGCNBertEmbeddingConfig.from_pretrained("bert-base-uncased")
embed_model = SenticGCNBertEmbeddingModel.from_pretrained("bert-base-uncased", config=embed_config)

preprocessor = SenticGCNBertPreprocessor(
    tokenizer=tokenizer, embedding_model=embed_model,
    senticnet="./senticgcn_bert/senticnet.pickle",
    device="cpu")

postprocessor = SenticGCNBertPostprocessor()

inputs = [
    {  # Single word aspect
        "aspects": ["service"],
        "sentence": "To sum it up : service varies from good to mediorce , \
        depending on which waiter you get ; generally it is just average ok .",
    },
    {  # Single-word, multiple aspects
        "aspects": ["service", "decor"],
        "sentence": "Everything is always cooked to perfection , the service \
        is excellent, the decor cool and understated.",
    },
    {  # Multi-word aspect
        "aspects": ["grilled chicken", "chicken"],
        "sentence": "the only chicken i moderately enjoyed was their grilled chicken \
        special with edamame puree .",
    },
]

processed_inputs, processed_indices = preprocessor(inputs)
raw_outputs = model(processed_indices)

post_outputs = postprocessor(processed_inputs=processed_inputs, model_outputs=raw_outputs)

print(post_outputs[0])
# {'sentence': ['To', 'sum', 'it', 'up', ':', 'service', 'varies', 'from', 'good', 'to', 'mediorce', ',', 'depending', 'on', 'which',
#               'waiter', 'you', 'get', ';', 'generally', 'it', 'is', 'just', 'average', 'ok', '.'],
#  'aspects': [[5]],
#  'labels': [0]}

print(post_outputs[1])
# {'sentence': ['Everything', 'is', 'always', 'cooked', 'to', 'perfection', ',', 'the', 'service',
#               'is', 'excellent,', 'the', 'decor', 'cool', 'and', 'understated.'],
#  'aspects': [[8], [12]],
#  'labels': [1, 1]}

print(post_outputs[2])
# {'sentence': ['the', 'only', 'chicken', 'i', 'moderately', 'enjoyed', 'was', 'their', 'grilled',
#               'chicken', 'special', 'with', 'edamame', 'puree', '.'],
#  'aspects': [[8, 9], [2], [9]],
#  'labels': [1, 1, 1]}
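Each post-processed output pairs token-index spans in `aspects` with sentiment codes in `labels`. A minimal helper (illustrative only, not part of the sgnlp API) can turn one of these dicts into readable (aspect phrase, sentiment) pairs, assuming the -1/0/1 coding listed under Model Parameters below:

```python
# Hypothetical helper: convert a post-processed output dict (format shown above)
# into (aspect phrase, sentiment name) pairs.
LABEL_NAMES = {-1: "negative", 0: "neutral", 1: "positive"}

def summarise(output):
    pairs = []
    for span, label in zip(output["aspects"], output["labels"]):
        # Join the sentence tokens covered by this aspect's index span.
        phrase = " ".join(output["sentence"][i] for i in span)
        pairs.append((phrase, LABEL_NAMES[label]))
    return pairs

sample = {
    "sentence": ["the", "service", "is", "excellent"],
    "aspects": [[1]],
    "labels": [1],
}
print(summarise(sample))  # [('service', 'positive')]
```

Multi-word aspects work the same way, since each span lists every token index of the phrase.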

Training

The training datasets can be retrieved from the Sentic-GCN (GitHub) repository.

Training Results - For Sentic-GCN

  • Training Time: ~10 mins for ~35 epochs (early stopping)
  • Datasets: SemEval14-Laptop/ SemEval14-Restaurant/ SemEval15-Restaurant/ SemEval16-Restaurant

Training Results - For Sentic-GCN Bert

  • Training Time: ~1 hr for ~40 epochs (early stopping)
  • Datasets: SemEval14-Laptop/ SemEval14-Restaurant/ SemEval15-Restaurant/ SemEval16-Restaurant

Model Parameters

  • Model Weights: senticgcn | senticgcn-bert
  • Model Config: senticgcn | senticgcn-bert
  • Model Inputs: Aspect (word), sentence containing the aspect
  • Model Outputs: Sentiment of aspect: -1 (negative), 0 (neutral), 1 (positive)
  • Model Inference Info: ~1 sec on an Intel(R) i7 quad-core @ 1.7GHz.
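The model inputs above imply a simple contract: each record supplies a sentence plus the aspect words or phrases it contains. A quick sanity check along those lines (the function name and the exact-substring assumption are illustrative, not part of the sgnlp API) might look like:

```python
# Hedged sketch: check a record against the documented input contract,
# assuming each aspect phrase appears verbatim (case-insensitively) in the sentence.
def validate_record(record):
    sentence = record["sentence"].lower()
    return all(aspect.lower() in sentence for aspect in record["aspects"])

print(validate_record({"aspects": ["decor"], "sentence": "The decor is cool."}))   # True
print(validate_record({"aspects": ["food"], "sentence": "The decor is cool."}))    # False
```

Multi-word aspects such as "grilled chicken" pass the same check, since the whole phrase is matched as a substring.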