distilroberta-base-finetuned-resume2
This model is a fine-tuned version of distilbert/distilroberta-base on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.2864
- Accuracy: 0.9306
- Precision (macro): 0.9303
- Recall (macro): 0.9313
- F1 (macro): 0.9307
- Precision (weighted): 0.9308
- Recall (weighted): 0.9306
- F1 (weighted): 0.9306
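The macro and weighted averages reported above follow the usual definitions: macro averaging treats every class equally, while weighted averaging scales each class by its support. A minimal sketch of how such metrics can be computed with scikit-learn is below; the label arrays are hypothetical placeholders, not the actual evaluation data.

```python
# Minimal sketch of the reported metrics using scikit-learn.
# y_true / y_pred are hypothetical placeholders, not the real evaluation set.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

y_true = [0, 1, 2, 2, 1]  # hypothetical gold labels
y_pred = [0, 1, 2, 1, 1]  # hypothetical model predictions

accuracy = accuracy_score(y_true, y_pred)

# Macro averaging: every class contributes equally.
p_macro, r_macro, f1_macro, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0
)

# Weighted averaging: each class is weighted by its number of true samples.
p_weighted, r_weighted, f1_weighted, _ = precision_recall_fscore_support(
    y_true, y_pred, average="weighted", zero_division=0
)

print(accuracy, p_macro, r_macro, f1_macro, p_weighted, r_weighted, f1_weighted)
```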
Model description
More information needed
Intended uses & limitations
More information needed
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
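These settings correspond to a standard Hugging Face Trainer run. A minimal sketch of the equivalent TrainingArguments is shown below; the output directory and any settings not listed above (evaluation and logging behaviour, for example) are assumptions, not taken from this card.

```python
# Minimal sketch of TrainingArguments matching the hyperparameters listed above.
# output_dir is an assumption; all other values mirror the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilroberta-base-finetuned-resume2",  # assumed output directory
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```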
Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision (macro) | Recall (macro) | F1 (macro) | Precision (weighted) | Recall (weighted) | F1 (weighted) |
|---|---|---|---|---|---|---|---|---|---|---|
| No log | 0.27 | 200 | 0.3616 | 0.8974 | 0.9000 | 0.8997 | 0.8987 | 0.9001 | 0.8974 | 0.8978 |
| No log | 0.54 | 400 | 0.3542 | 0.9061 | 0.9091 | 0.9056 | 0.9071 | 0.9070 | 0.9061 | 0.9063 |
| 0.2488 | 0.81 | 600 | 0.2900 | 0.9176 | 0.9195 | 0.9154 | 0.9170 | 0.9179 | 0.9176 | 0.9174 |
| 0.2488 | 1.08 | 800 | 0.3177 | 0.9193 | 0.9176 | 0.9214 | 0.9192 | 0.9199 | 0.9193 | 0.9194 |
| 0.2006 | 1.35 | 1000 | 0.3002 | 0.9246 | 0.9247 | 0.9254 | 0.9249 | 0.9248 | 0.9246 | 0.9246 |
| 0.2006 | 1.62 | 1200 | 0.3050 | 0.9224 | 0.9217 | 0.9250 | 0.9227 | 0.9235 | 0.9224 | 0.9224 |
| 0.2006 | 1.89 | 1400 | 0.3084 | 0.9251 | 0.9246 | 0.9260 | 0.9252 | 0.9253 | 0.9251 | 0.9251 |
| 0.15 | 2.16 | 1600 | 0.3294 | 0.9226 | 0.9239 | 0.9222 | 0.9230 | 0.9229 | 0.9226 | 0.9227 |
| 0.15 | 2.43 | 1800 | 0.3102 | 0.9264 | 0.9288 | 0.9256 | 0.9268 | 0.9270 | 0.9264 | 0.9263 |
| 0.0959 | 2.7 | 2000 | 0.2930 | 0.9286 | 0.9287 | 0.9277 | 0.9281 | 0.9288 | 0.9286 | 0.9286 |
| 0.0959 | 2.97 | 2200 | 0.2864 | 0.9306 | 0.9303 | 0.9313 | 0.9307 | 0.9308 | 0.9306 | 0.9306 |
Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1
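For inference with these library versions, a minimal sketch is below; the Hub repository id is a placeholder for wherever this checkpoint is actually hosted, and the input text is hypothetical.

```python
# Minimal inference sketch. The model id is a placeholder repo id, not the real one,
# and the example input is hypothetical.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="your-username/distilroberta-base-finetuned-resume2",  # placeholder repo id
)
print(classifier("Experienced data engineer with a background in Python and Spark."))
```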