# bart-noised-with-babylon-kaggle-dist
This model is a fine-tuned version of [gayanin/bart-noised-with-babylon-dist](https://huggingface.co/gayanin/bart-noised-with-babylon-dist) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2232
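
A minimal usage sketch, assuming the checkpoint is published on the Hugging Face Hub under the repository id `gayanin/bart-noised-with-babylon-kaggle-dist` (taken from the card title); the input text and generation settings below are illustrative only, since the exact task of this checkpoint is not documented here.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Repository id assumed from the card title; adjust if the model lives elsewhere.
model_id = "gayanin/bart-noised-with-babylon-kaggle-dist"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Illustrative input; the noising/denoising task of this checkpoint is not described in the card.
text = "Example noisy input sentence."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```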
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `Seq2SeqTrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
- mixed_precision_training: Native AMP
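
These values correspond to a fairly standard `Seq2SeqTrainer` setup. Below is a rough, hedged sketch of how they might be expressed with `Seq2SeqTrainingArguments`; the output directory, dataset variables, and evaluation interval (500 steps, inferred from the results table) are assumptions, not taken from an actual training script.

```python
from transformers import Seq2SeqTrainingArguments, Seq2SeqTrainer

# Placeholder output directory; the actual training script is not part of this card.
training_args = Seq2SeqTrainingArguments(
    output_dir="bart-noised-with-babylon-kaggle-dist",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=10,
    num_train_epochs=3,
    fp16=True,                    # "Native AMP" mixed precision
    evaluation_strategy="steps",  # assumption: evaluate every 500 steps, matching the results table
    eval_steps=500,
    logging_steps=500,
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the Trainer defaults, so no extra arguments are needed.

# trainer = Seq2SeqTrainer(
#     model=model,
#     args=training_args,
#     train_dataset=train_dataset,  # placeholder: the dataset is not documented in this card
#     eval_dataset=eval_dataset,
#     tokenizer=tokenizer,
# )
# trainer.train()
```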
### Training results
| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.256         | 0.11  | 500   | 0.2499          |
| 0.2325        | 0.21  | 1000  | 0.2487          |
| 0.2694        | 0.32  | 1500  | 0.2387          |
| 0.2936        | 0.43  | 2000  | 0.2389          |
| 0.2341        | 0.54  | 2500  | 0.2452          |
| 0.2204        | 0.64  | 3000  | 0.2349          |
| 0.2162        | 0.75  | 3500  | 0.2395          |
| 0.2299        | 0.86  | 4000  | 0.2291          |
| 0.2975        | 0.96  | 4500  | 0.2258          |
| 0.2064        | 1.07  | 5000  | 0.2344          |
| 0.1681        | 1.18  | 5500  | 0.2324          |
| 0.1915        | 1.28  | 6000  | 0.2364          |
| 0.159         | 1.39  | 6500  | 0.2332          |
| 0.2176        | 1.5   | 7000  | 0.2278          |
| 0.2139        | 1.61  | 7500  | 0.2264          |
| 0.1988        | 1.71  | 8000  | 0.2263          |
| 0.1744        | 1.82  | 8500  | 0.2236          |
| 0.1848        | 1.93  | 9000  | 0.2207          |
| 0.1652        | 2.03  | 9500  | 0.2298          |
| 0.1571        | 2.14  | 10000 | 0.2278          |
| 0.1241        | 2.25  | 10500 | 0.2257          |
| 0.1409        | 2.35  | 11000 | 0.2278          |
| 0.125         | 2.46  | 11500 | 0.2258          |
| 0.1373        | 2.57  | 12000 | 0.2253          |
| 0.1371        | 2.68  | 12500 | 0.2237          |
| 0.1088        | 2.78  | 13000 | 0.2249          |
| 0.1464        | 2.89  | 13500 | 0.2231          |
| 0.121         | 3.0   | 14000 | 0.2232          |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1