
XLM-RoBERTa-Large for Aspect-Based Sentiment Analysis

This is an XLM-RoBERTa-Large model fine-tuned for Aspect-Based Sentiment Analysis in Thai. It was trained on a dataset built specifically for identifying the sentiments expressed towards particular aspects within a sentence.

This model won the Aspect-Based Sentiment Analysis competition of Super AI Engineer Season 4 - Hackathon Online, achieving the best performance among all participating models.

Model Description

XLM-RoBERTa is a large multilingual language model pretrained on text in 100 languages. This checkpoint fine-tunes it for Aspect-Based Sentiment Analysis framed as a sequence-tagging (token classification) task: each token is labelled with the aspect it refers to and the polarity of the sentiment expressed towards it. This makes the model suitable for applications that need to understand sentiments about specific aspects within a text.
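
Under the hood this is ordinary token classification: the model emits one logit vector per subword token, and the argmax index is mapped to an aspect-sentiment label. Below is a minimal sketch of that mechanic without the pipeline helper; the example sentence is purely illustrative, and the model name is the one used in the Usage section further down:

import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

name = "Keetawan/xlm-roberta-large-aspect-based-sentiment"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForTokenClassification.from_pretrained(name)

# Illustrative review snippet: "the delivery was very slow"
inputs = tokenizer("ส่งของช้ามาก", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, num_labels)

# Map each subword token to its highest-scoring label
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
for token, label_id in zip(tokens, logits[0].argmax(-1).tolist()):
    print(token, model.config.id2label[label_id])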

Classes

The model can predict the following classes:

Aspect                                   Positive label    Negative label
Product quality                          Quality           NEG-Quality
Delivery time                            DeliveryTime      NEG-DeliveryTime
Store service                            StoreService      NEG-StoreService
Product appearance                       Appearance        NEG-Appearance
Product packaging                        Packaging         NEG-Packaging
Product price                            Price             NEG-Price
Product size                             Size              NEG-Size
Not related to any aspect of interest    O                 -
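
If you want to confirm the label inventory programmatically, it is stored in the checkpoint's config. A minimal sketch, assuming the id2label mapping saved with the checkpoint uses the names listed above:

from transformers import AutoConfig

config = AutoConfig.from_pretrained("Keetawan/xlm-roberta-large-aspect-based-sentiment")

# Print the id-to-label mapping saved with the fine-tuned model
for idx, label in sorted(config.id2label.items()):
    print(idx, label)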

Usage

You can use this model for sequence tagging and aspect-based sentiment analysis in the Thai language. Here is a quick example of how to use it:

from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline

# Load the fine-tuned tokenizer and token-classification model
tokenizer = AutoTokenizer.from_pretrained("Keetawan/xlm-roberta-large-aspect-based-sentiment")
model = AutoModelForTokenClassification.from_pretrained("Keetawan/xlm-roberta-large-aspect-based-sentiment")

# Wrap both in a token-classification pipeline
nlp = pipeline("token-classification", model=model, tokenizer=tokenizer)

text = "ใส่ประโยคภาษาไทยที่ต้องการวิเคราะห์ที่นี่"  # placeholder: "insert the Thai sentence to analyse here"
result = nlp(text)

# Each item is one token with its predicted aspect-sentiment label and score
for item in result:
    print(item)
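
The pipeline above reports one prediction per subword token. If you prefer contiguous spans per aspect, the pipeline's aggregation_strategy argument can merge neighbouring tokens. A sketch: whether the checkpoint's labels use an IOB scheme is an assumption here, so the "simple" strategy, which just groups consecutive tokens with the same label, is used:

nlp_grouped = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",  # merge consecutive tokens sharing a label
)

# Each span covers one grouped mention with its label and confidence
for span in nlp_grouped(text):
    print(span["entity_group"], span["word"], round(span["score"], 3))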