w11wo committed
Commit b359aed
Parent: a47daa7

Update README.md

Files changed (1)
  README.md +6 -6
README.md CHANGED
@@ -13,7 +13,7 @@ widget:
 
 Indonesian RoBERTa Base Sentiment Classifier is a sentiment-text-classification model based on the [RoBERTa](https://arxiv.org/abs/1907.11692) model. The model was originally the pre-trained [Indonesian RoBERTa Base](https://hf.co/flax-community/indonesian-roberta-base) model, which is then fine-tuned on [`indonlu`](https://hf.co/datasets/indonlu)'s `SmSA` dataset consisting of Indonesian comments and reviews.
 
-After training, the model achieved an evaluation accuracy of 93.88% and F1-macro of 91.57%. On the benchmark test set, the model achieved an accuracy of 90.00% and F1-macro of 85.97%.
+After training, the model achieved an evaluation accuracy of 94.36% and F1-macro of 92.42%. On the benchmark test set, the model achieved an accuracy of 93.2% and F1-macro of 91.02%.
 
 Hugging Face's `Trainer` class from the [Transformers](https://huggingface.co/transformers) library was used to train the model. PyTorch was used as the backend framework during training, but the model remains compatible with other frameworks nonetheless.
 
@@ -29,11 +29,11 @@ The model was trained for 5 epochs and the best model was loaded at the end.
 
 | Epoch | Training Loss | Validation Loss | Accuracy | F1       | Precision | Recall   |
 | ----- | ------------- | --------------- | -------- | -------- | --------- | -------- |
-| 1     | 0.346100      | 0.263456        | 0.915079 | 0.888680 | 0.877023  | 0.903502 |
-| 2     | 0.175200      | 0.215166        | 0.930952 | 0.908246 | 0.918557  | 0.898842 |
-| 3     | 0.111700      | 0.227525        | 0.932540 | 0.901823 | 0.916049  | 0.891263 |
-| 4     | 0.071800      | 0.244867        | 0.938889 | 0.915714 | 0.923105  | 0.909921 |
-| 5     | 0.055000      | 0.262004        | 0.935714 | 0.906755 | 0.918607  | 0.898044 |
+| 1     | 0.342600      | 0.213551        | 0.928571 | 0.898539 | 0.909803  | 0.890694 |
+| 2     | 0.190700      | 0.213466        | 0.934127 | 0.901135 | 0.925297  | 0.882757 |
+| 3     | 0.125500      | 0.219539        | 0.942857 | 0.920901 | 0.927511  | 0.915193 |
+| 4     | 0.083600      | 0.235232        | 0.943651 | 0.924227 | 0.926494  | 0.922048 |
+| 5     | 0.059200      | 0.262473        | 0.942063 | 0.920583 | 0.924084  | 0.917351 |
 
 ## How to Use
 
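The card states that the classifier was fine-tuned from `flax-community/indonesian-roberta-base` on `indonlu`'s `SmSA` subset with Hugging Face's `Trainer`, running 5 epochs and loading the best checkpoint at the end. The sketch below shows how such a run could be set up; it is not the author's actual training script, and the dataset column names, batch size, and learning rate are assumptions.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# SmSA subset of IndoNLU; the "text"/"label" column names are assumed.
dataset = load_dataset("indonlu", "smsa")

base = "flax-community/indonesian-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(base)
# SmSA is a three-class task (positive / neutral / negative).
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=3)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True)

encoded = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="indonesian-roberta-base-sentiment-classifier",
    num_train_epochs=5,              # stated on the card
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,     # "best model was loaded at the end"
    per_device_train_batch_size=16,  # assumed
    learning_rate=2e-5,              # assumed
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
    tokenizer=tokenizer,             # enables dynamic padding via the default collator
)
trainer.train()
```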
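The diff ends at the card's `## How to Use` heading, whose body is not shown here. As a quick illustration, a fine-tuned classifier like this one can typically be queried through a Transformers `pipeline`; the repository identifier below is assumed from the committer's namespace and is not given in this diff.

```python
from transformers import pipeline

# Assumed repository id -- not stated in this diff.
model_id = "w11wo/indonesian-roberta-base-sentiment-classifier"

classifier = pipeline("sentiment-analysis", model=model_id, tokenizer=model_id)

# SmSA uses three labels: positive, neutral, negative.
print(classifier("Pelayanannya ramah dan makanannya enak sekali!"))
```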