---
language: en
tags:
- aspect-term-sentiment-analysis
- pytorch
- ATSA
datasets:
- semeval2014
widget:
- text: "[CLS] The appearance is very nice, but the battery life is poor. [SEP] appearance [SEP] "
---

# Note

`Aspect term sentiment analysis`

A BERT-LSTM baseline for aspect term sentiment analysis, built on the *BERT LSTM* implementation from https://github.com/avinashsai/BERT-Aspect. The model was trained on the SemEval-2014 Task 4 laptop and restaurant datasets.

Our GitHub repo: https://github.com/tezignlab/BERT-LSTM-based-ABSA

The code follows the paper "Utilizing BERT Intermediate Layers for Aspect Based Sentiment Analysis and Natural Language Inference": https://arxiv.org/pdf/2002.04815.pdf.
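The core idea from the paper is to pool the `[CLS]` representations of BERT's intermediate layers with an LSTM instead of using only the final layer. A minimal PyTorch sketch of that pooling head is below; the class name `BertLSTMPooler` and the sizes (12 layers, hidden size 768, 3 sentiment labels) are illustrative assumptions based on `bert-base`, not the exact module from this repo.

```python
import torch
import torch.nn as nn

class BertLSTMPooler(nn.Module):
    """Sketch: LSTM over the [CLS] vectors of BERT's intermediate layers.

    Illustrative only -- sizes assume bert-base (12 layers, hidden 768)
    and 3 sentiment labels (positive / negative / neutral).
    """
    def __init__(self, hidden_size=768, num_labels=3):
        super().__init__()
        self.lstm = nn.LSTM(hidden_size, hidden_size, batch_first=True)
        self.classifier = nn.Linear(hidden_size, num_labels)

    def forward(self, hidden_states):
        # hidden_states: tuple of per-layer tensors, each (batch, seq_len, hidden),
        # e.g. from a transformers model called with output_hidden_states=True.
        # Take the [CLS] vector (position 0) from each layer and stack them
        # into a "sequence" over layers: (batch, num_layers, hidden).
        cls_per_layer = torch.stack([h[:, 0] for h in hidden_states], dim=1)
        # Run the LSTM over the layer dimension and classify its final state.
        _, (h_n, _) = self.lstm(cls_per_layer)
        return self.classifier(h_n[-1])  # (batch, num_labels)

# Toy check with random tensors standing in for 12 layers of hidden states.
layers = tuple(torch.randn(2, 16, 768) for _ in range(12))
logits = BertLSTMPooler()(layers)
print(logits.shape)  # torch.Size([2, 3])
```

In the pipeline usage below this pooling is bundled inside the remote model code, which is why `trust_remote_code=True` is required when loading it.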

# Usage

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, TextClassificationPipeline

MODEL = "tezign/BERT-LSTM-based-ABSA"

tokenizer = AutoTokenizer.from_pretrained(MODEL)

model = AutoModelForSequenceClassification.from_pretrained(MODEL, trust_remote_code=True)

classifier = TextClassificationPipeline(model=model, tokenizer=tokenizer)

result = classifier([
    {"text": "The appearance is very nice, but the battery life is poor", "text_pair": "appearance"},
    {"text": "The appearance is very nice, but the battery life is poor", "text_pair": "battery"}
],
    function_to_apply="softmax")

print(result)

"""
print result
>> [{'label': 'positive', 'score': 0.9129462838172913}, {'label': 'negative', 'score': 0.8834680914878845}]
"""

```