---
language:
- id
license: mit
base_model: indolem/indobert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: sentiment-lora-r8a1d0.15-0
  results: []
---

# sentiment-lora-r8a1d0.15-0

This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3217
- Accuracy: 0.8622
- Precision: 0.8326
- Recall: 0.8375
- F1: 0.8349

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 30
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.5593        | 1.0   | 122  | 0.5026          | 0.7268   | 0.6658    | 0.6542 | 0.6589 |
| 0.4995        | 2.0   | 244  | 0.4797          | 0.7544   | 0.7149    | 0.7412 | 0.7226 |
| 0.4612        | 3.0   | 366  | 0.4282          | 0.7644   | 0.7199    | 0.7358 | 0.7262 |
| 0.4019        | 4.0   | 488  | 0.3934          | 0.8296   | 0.7949    | 0.7919 | 0.7934 |
| 0.3665        | 5.0   | 610  | 0.4234          | 0.7970   | 0.7618    | 0.7964 | 0.7720 |
| 0.334         | 6.0   | 732  | 0.3723          | 0.8195   | 0.7817    | 0.7973 | 0.7884 |
| 0.3263        | 7.0   | 854  | 0.3704          | 0.8346   | 0.7990    | 0.8230 | 0.8086 |
| 0.3076        | 8.0   | 976  | 0.3521          | 0.8471   | 0.8153    | 0.8168 | 0.8160 |
| 0.298         | 9.0   | 1098 | 0.3522          | 0.8471   | 0.8138    | 0.8243 | 0.8187 |
| 0.2923        | 10.0  | 1220 | 0.3375          | 0.8571   | 0.8289    | 0.8239 | 0.8264 |
| 0.2689        | 11.0  | 1342 | 0.3392          | 0.8622   | 0.8319    | 0.8400 | 0.8357 |
| 0.2686        | 12.0  | 1464 | 0.3484          | 0.8622   | 0.8309    | 0.8450 | 0.8373 |
| 0.2726        | 13.0  | 1586 | 0.3258          | 0.8596   | 0.8316    | 0.8282 | 0.8298 |
| 0.2713        | 14.0  | 1708 | 0.3246          | 0.8622   | 0.8333    | 0.8350 | 0.8341 |
| 0.2577        | 15.0  | 1830 | 0.3307          | 0.8596   | 0.8293    | 0.8357 | 0.8324 |
| 0.2519        | 16.0  | 1952 | 0.3305          | 0.8622   | 0.8314    | 0.8425 | 0.8365 |
| 0.2488        | 17.0  | 2074 | 0.3234          | 0.8546   | 0.8246    | 0.8246 | 0.8246 |
| 0.2546        | 18.0  | 2196 | 0.3247          | 0.8647   | 0.8346    | 0.8442 | 0.8391 |
| 0.2463        | 19.0  | 2318 | 0.3204          | 0.8596   | 0.8307    | 0.8307 | 0.8307 |
| 0.2458        | 20.0  | 2440 | 0.3217          | 0.8622   | 0.8326    | 0.8375 | 0.8349 |

### Framework versions

- Transformers 4.39.3
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2
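
## How to use

The card does not include usage instructions, so the snippet below is only a minimal inference sketch. It assumes the checkpoint is published as a standard Transformers sequence-classification model; the repo id, the example Indonesian sentence, and the label interpretation are illustrative assumptions, not part of this card. If the weights were instead exported as a PEFT/LoRA adapter (as the `lora-r8` naming suggests), load the base model first and attach the adapter with `peft.PeftModel.from_pretrained`.

```python
# Minimal inference sketch (not part of the original card).
# Assumes a full sequence-classification checkpoint; adjust if it is a LoRA adapter.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "sentiment-lora-r8a1d0.15-0"  # hypothetical: replace with the actual Hub repo id or local path
tokenizer = AutoTokenizer.from_pretrained("indolem/indobert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Illustrative Indonesian input; the label-to-id mapping is not documented here,
# so inspect model.config.id2label before interpreting predictions.
text = "Pelayanannya cepat dan makanannya enak."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(model.config.id2label[int(probs.argmax())], probs.tolist())
```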