---
license: mit
base_model: indobenchmark/indobert-base-p2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: general_model
  results: []
---

# general_model

This model is a fine-tuned version of [indobenchmark/indobert-base-p2](https://huggingface.co/indobenchmark/indobert-base-p2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2986
- Accuracy: 0.9119
- F1: 0.8872
- Precision: 0.8921
- Recall: 0.8827

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log        | 0.06  | 50   | 0.3626          | 0.8748   | 0.8410 | 0.8423    | 0.8398 |
| No log        | 0.13  | 100  | 0.3231          | 0.8962   | 0.8699 | 0.8666    | 0.8734 |
| No log        | 0.19  | 150  | 0.4256          | 0.8974   | 0.8626 | 0.8892    | 0.8437 |
| No log        | 0.25  | 200  | 0.3339          | 0.9031   | 0.8744 | 0.8845    | 0.8658 |
| No log        | 0.31  | 250  | 0.3043          | 0.8823   | 0.8587 | 0.8446    | 0.8792 |
| No log        | 0.38  | 300  | 0.3125          | 0.9056   | 0.8808 | 0.8802    | 0.8813 |
| No log        | 0.44  | 350  | 0.2946          | 0.9075   | 0.8838 | 0.8813    | 0.8863 |
| No log        | 0.5   | 400  | 0.2924          | 0.9125   | 0.8898 | 0.8884    | 0.8912 |
| No log        | 0.57  | 450  | 0.2991          | 0.8855   | 0.8632 | 0.8480    | 0.8865 |
| 0.3562        | 0.63  | 500  | 0.2986          | 0.9119   | 0.8872 | 0.8921    | 0.8827 |
| 0.3562        | 0.69  | 550  | 0.2851          | 0.8779   | 0.8564 | 0.8395    | 0.8864 |
| 0.3562        | 0.75  | 600  | 0.3272          | 0.9125   | 0.8868 | 0.8968    | 0.8781 |
| 0.3562        | 0.82  | 650  | 0.3438          | 0.8987   | 0.8636 | 0.8933    | 0.8431 |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
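
### Configuration sketch

The hyperparameters listed above map onto Hugging Face `TrainingArguments` roughly as follows. This is a sketch, not the original training script: the `output_dir` and the 50-step evaluation cadence (inferred from the results table) are assumptions, and the Adam betas and epsilon shown in the card are the Transformers defaults, so they need no explicit arguments.

```python
from transformers import TrainingArguments

# Sketch of the reported hyperparameters as TrainingArguments.
# output_dir and the 50-step eval cadence are assumptions;
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the library defaults.
training_args = TrainingArguments(
    output_dir="general_model",   # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    evaluation_strategy="steps",  # inferred from the step-based results table
    eval_steps=50,
)
```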
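
## How to use

The training data is undocumented, but the reported metrics (accuracy, F1, precision, recall) point to a sequence classification fine-tune. A minimal inference sketch follows; the repo id `username/general_model` is a placeholder for the model's actual Hub path, and the Indonesian example sentence is illustrative only.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder repo id; replace with the model's actual Hub path.
model_id = "username/general_model"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Illustrative Indonesian input: "The service at this restaurant is very satisfying."
text = "Pelayanan di restoran ini sangat memuaskan."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Label names are not documented in this card, so id2label may show
# generic LABEL_0/LABEL_1-style names.
predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])
```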