widget:
- text: >-
Gapapa kalian gak tahu band Indo ini. Tapi jangan becanda. Karena mereka
berani menyanyikan dengan lantang bagaimana aktivis ditikam, diracun,
dikursilitrikkan, dan dibunuh di udara. Orang-orang yang berkorban nyawa
supaya kalian menikmati hari ini sambil ngetwit tanpa khawatir
example_title: Example 1
output:
- label: Negative
score: 0.2964
- label: Neutral
score: 0.0067
- label: Positive
score: 0.6969
- text: >-
Selama ada kelompok yg ingin jd mesias, selama itu jg govt punya
justifikasi but bikin banyak aturan = celah korup/power abuse. Keadilan
adalah deregulasi.
example_title: Example 2
output:
- label: Negative
score: 0.971
- label: Neutral
score: 0.0165
- label: Positive
score: 0.0126
- text: >-
saat pendukungmu oke😹 gas ✌🏽oke😹 gas ✌🏽tapi kamu malah ketawa 🤣 itu
ga respek 😠banget wok jangan lupa makan siang 😁geratisnya wok😋😹✌🏽
example_title: Example 3
output:
- label: Negative
score: 0.6457
- label: Neutral
score: 0.048
- label: Positive
score: 0.3063
- text: >-
Infoin loker wfh/freelance untuk mahasiswa dong, pengin bangget buat
tambahan uang jajan di kos
example_title: Example 4
output:
- label: Negative
score: 0.0544
- label: Neutral
score: 0.6973
- label: Positive
score: 0.2482
- text: >-
Cari kerja sekarang tuh susah. Anaknya Presiden aja mesti dicariin kerjaan
sama bapaknya
example_title: Example 5
output:
- label: Negative
score: 0.9852
- label: Neutral
score: 0.0116
- label: Positive
score: 0.0032
library_name: transformers
license: mit
language:
- id
---

# Model Card for indobertweet-base-Indonesian-sentiment-analysis
## Model Details

### Model Description
This model is a fine-tuned version of IndoBERTweet-base-uncased for Indonesian sentiment analysis. It classifies text into three sentiment categories: negative, neutral, and positive. The model was trained on a diverse dataset of reactions from Twitter and other social media platforms, covering topics such as politics, disasters, and education. Hyperparameters were tuned with Optuna, and the model was evaluated using accuracy, F1-score, precision, and recall.
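For reference, below is a minimal inference sketch using the `transformers` text-classification pipeline; the model ID matches this repository, and the label names follow the widget examples above.

```python
from transformers import pipeline

# Load the fine-tuned Indonesian sentiment classifier from the Hugging Face Hub.
classifier = pipeline(
    "text-classification",
    model="Aardiiiiy/indobertweet-base-Indonesian-sentiment-analysis",
)

# Top prediction for a single text.
print(classifier("Cari kerja sekarang tuh susah."))
# e.g. [{'label': 'Negative', 'score': 0.98}]

# Scores for all three classes, as in the widget examples above.
print(classifier("Cari kerja sekarang tuh susah.", top_k=None))
```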
## Bias and Limitations
This model was trained on a specific dataset, which may introduce bias into the sentiment classification process. It may inherit socio-cultural biases from its training data and may be less accurate on recent events that the data does not cover. The three-category scheme also cannot fully capture the complexity of emotions, particularly in nuanced contexts. Keep these biases and limitations in mind when using the model.
## Evaluation Results
The training process used hyperparameter optimization with Optuna. The model was trained for a maximum of 10 epochs with a batch size of 16, using the tuned learning rate and weight decay. Evaluation was performed every 100 steps, and the best checkpoint was kept based on accuracy. Early stopping with a patience of 3 was applied to prevent overfitting. The results at each evaluation step are shown below, followed by a configuration sketch.
Step | Training Loss | Validation Loss | Accuracy | F1 | Precision | Recall |
---|---|---|---|---|---|---|
100 | 1.052800 | 0.995017 | 0.482368 | 0.348356 | 0.580544 | 0.482368 |
200 | 0.893700 | 0.807756 | 0.730479 | 0.703134 | 0.756189 | 0.730479 |
300 | 0.583400 | 0.476157 | 0.850126 | 0.847161 | 0.849467 | 0.850126 |
400 | 0.413600 | 0.385942 | 0.867758 | 0.867614 | 0.870417 | 0.867758 |
500 | 0.345700 | 0.362191 | 0.885390 | 0.883918 | 0.886880 | 0.885390 |
600 | 0.245400 | 0.330090 | 0.897985 | 0.897466 | 0.897541 | 0.897985 |
700 | 0.485000 | 0.308807 | 0.899244 | 0.898736 | 0.898761 | 0.899244 |
800 | 0.363700 | 0.328786 | 0.896725 | 0.895167 | 0.898695 | 0.896725 |
900 | 0.369800 | 0.329429 | 0.892947 | 0.893138 | 0.898281 | 0.892947 |
1000 | 0.273300 | 0.305412 | 0.910579 | 0.910355 | 0.910519 | 0.910579 |
1100 | 0.272800 | 0.388976 | 0.891688 | 0.893113 | 0.896606 | 0.891688 |
1200 | 0.259900 | 0.305771 | 0.913098 | 0.913123 | 0.913669 | 0.913098 |
1300 | 0.293500 | 0.317654 | 0.908060 | 0.908654 | 0.909939 | 0.908060 |
1400 | 0.255200 | 0.331161 | 0.915617 | 0.915708 | 0.916149 | 0.915617 |
1500 | 0.139800 | 0.352545 | 0.909320 | 0.909768 | 0.911014 | 0.909320 |
1600 | 0.194400 | 0.372482 | 0.904282 | 0.904296 | 0.906285 | 0.904282 |
1700 | 0.134200 | 0.340576 | 0.906801 | 0.907110 | 0.907780 | 0.906801 |
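Below is a sketch of this training setup, assuming the Hugging Face `Trainer` with Optuna as the hyperparameter-search backend. The base checkpoint, the placeholder dataset, the search-space ranges, and the number of trials are illustrative assumptions; the batch size, evaluation interval, best-model selection by accuracy, and early-stopping patience follow the description above.

```python
import numpy as np
from datasets import Dataset
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          EarlyStoppingCallback, Trainer, TrainingArguments)

checkpoint = "indolem/indobertweet-base-uncased"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# Placeholder data: the actual training set consists of labelled social-media reactions.
raw = Dataset.from_dict({"text": ["contoh teks"], "label": [2]})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

train_ds = eval_ds = raw.map(tokenize, batched=True)

def model_init():
    # Fresh model per Optuna trial, with the three sentiment labels.
    return AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=3)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted", zero_division=0)
    return {"accuracy": accuracy_score(labels, preds),
            "f1": f1, "precision": precision, "recall": recall}

args = TrainingArguments(
    output_dir="indobertweet-sentiment",
    num_train_epochs=10,                 # maximum of 10 epochs
    per_device_train_batch_size=16,      # batch size 16
    eval_strategy="steps",
    eval_steps=100,                      # evaluate every 100 steps
    save_strategy="steps",
    save_steps=100,
    load_best_model_at_end=True,
    metric_for_best_model="accuracy",    # keep the best checkpoint by accuracy
)

trainer = Trainer(
    model_init=model_init,
    args=args,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
)

# Optuna search over learning rate and weight decay (illustrative ranges).
def hp_space(trial):
    return {"learning_rate": trial.suggest_float("learning_rate", 1e-5, 5e-5, log=True),
            "weight_decay": trial.suggest_float("weight_decay", 0.0, 0.3)}

best_run = trainer.hyperparameter_search(
    direction="maximize", backend="optuna", hp_space=hp_space, n_trials=10)
print(best_run.hyperparameters)
```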
## Citation
@misc{Ardiyanto_Mikhael_2024,
author = {Mikhael Ardiyanto},
title = {Aardiiiiy/indobertweet-base-Indonesian-sentiment-analysis},
year = {2024},
URL = {https://huggingface.co/Aardiiiiy/indobertweet-base-Indonesian-sentiment-analysis},
publisher = {Hugging Face}
}