---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- smsa
metrics:
- accuracy
- f1
model-index:
- name: scenario-normal-finetune-clf-data-smsa-model-xlm-roberta-base
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: smsa
      type: smsa
      config: smsa_nusantara_text
      split: validation
      args: smsa_nusantara_text
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9222222222222223
    - name: F1
      type: f1
      value: 0.9010725836501758
---

# scenario-normal-finetune-clf-data-smsa-model-xlm-roberta-base

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the smsa dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3511
- Accuracy: 0.9222
- F1: 0.9011

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6969

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 0.29  | 100  | 0.4204          | 0.8397   | 0.6487 |
| No log        | 0.58  | 200  | 0.3298          | 0.9095   | 0.8696 |
| No log        | 0.87  | 300  | 0.2664          | 0.9214   | 0.8843 |
| No log        | 1.16  | 400  | 0.2882          | 0.9151   | 0.8849 |
| 0.3642        | 1.45  | 500  | 0.2531          | 0.9175   | 0.8808 |
| 0.3642        | 1.74  | 600  | 0.2847          | 0.9175   | 0.8820 |
| 0.3642        | 2.03  | 700  | 0.2889          | 0.9294   | 0.9060 |
| 0.3642        | 2.33  | 800  | 0.3066          | 0.9270   | 0.8996 |
| 0.3642        | 2.62  | 900  | 0.3736          | 0.9190   | 0.8914 |
| 0.2064        | 2.91  | 1000 | 0.2706          | 0.9214   | 0.8853 |
| 0.2064        | 3.2   | 1100 | 0.3201          | 0.9190   | 0.8878 |
| 0.2064        | 3.49  | 1200 | 0.2372          | 0.9254   | 0.9007 |
| 0.2064        | 3.78  | 1300 | 0.2534          | 0.9190   | 0.8904 |
| 0.2064        | 4.07  | 1400 | 0.3266          | 0.9214   | 0.8939 |
| 0.1543        | 4.36  | 1500 | 0.3405          | 0.9135   | 0.8815 |
| 0.1543        | 4.65  | 1600 | 0.3485          | 0.9238   | 0.8988 |
| 0.1543        | 4.94  | 1700 | 0.3287          | 0.9270   | 0.9011 |
| 0.1543        | 5.23  | 1800 | 0.3631          | 0.9167   | 0.8866 |
| 0.1543        | 5.52  | 1900 | 0.3714          | 0.9167   | 0.8922 |
| 0.1227        | 5.81  | 2000 | 0.3030          | 0.9119   | 0.8794 |
| 0.1227        | 6.1   | 2100 | 0.3363          | 0.9286   | 0.9046 |
| 0.1227        | 6.4   | 2200 | 0.3511          | 0.9222   | 0.9011 |

### Framework versions

- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.14.5
- Tokenizers 0.13.3
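
## How to use

The card does not include a usage snippet; below is a minimal inference sketch assuming the model and tokenizer were pushed to the Hub under the name above. The `<user>` namespace is a placeholder, and the returned label names depend on the `id2label` mapping saved with the model.

```python
from transformers import pipeline

# Placeholder repository id: replace <user> with the actual namespace.
model_id = "<user>/scenario-normal-finetune-clf-data-smsa-model-xlm-roberta-base"

# The text-classification pipeline loads the fine-tuned model and its tokenizer.
classifier = pipeline("text-classification", model=model_id)

# SMSA is an Indonesian sentiment analysis dataset, so inputs are Indonesian text.
# "Pelayanan restoran ini sangat memuaskan" = "This restaurant's service is very satisfying."
print(classifier("Pelayanan restoran ini sangat memuaskan"))
# -> [{'label': ..., 'score': ...}]
```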
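
## Reproducing the training setup

The hyperparameters listed above map onto `transformers.TrainingArguments` roughly as follows. This is a sketch, not the original training script: `output_dir`, the evaluation cadence, and the logging cadence are assumptions inferred from the results table.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="scenario-normal-finetune-clf-data-smsa-model-xlm-roberta-base",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,               # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=6969,        # as listed; the log stops near epoch 6.4, so training evidently ended early
    evaluation_strategy="steps",  # assumed: the table shows evaluation every 100 steps
    eval_steps=100,
    logging_steps=500,            # assumed: training loss first appears at step 500 ("No log" before that)
)
```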