# amBART_261
This model is a fine-tuned version of [Samuael/amBART_1000](https://huggingface.co/Samuael/amBART_1000) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.9604
- Wer: 2.7857
- Cer: 3.6889
- Bleu: 0.0
- Lr: 0.02
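
Usage is not documented below, so here is a minimal inference sketch. It assumes the checkpoint is published on the Hub under `Samuael/amBART_261` (inferred from this card's title, not confirmed by the card) and loads as a standard BART-style sequence-to-sequence model:

```python
# Minimal inference sketch; the repo id "Samuael/amBART_261" is assumed
# from the card title, and the input text is a placeholder.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Samuael/amBART_261"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "..."  # replace with an Amharic input sentence
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```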
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the configuration sketch after this list):
- learning_rate: 0.02
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
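
As a sketch, these hyperparameters map onto `Seq2SeqTrainingArguments` from the Trainer API roughly as follows; the output directory, evaluation strategy, and `predict_with_generate` flag are assumptions (the card does not record them), and the model/dataset setup is omitted:

```python
# Hypothetical reconstruction of the training configuration; only the values
# listed above come from the card, everything else is an assumption.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="amBART_261",      # assumed output directory
    learning_rate=0.02,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    evaluation_strategy="epoch",  # assumed: the results table logs one eval per epoch
    predict_with_generate=True,   # assumed: needed for WER/CER/BLEU during eval
)
```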
### Training results
Training Loss | Epoch | Step | Validation Loss | Wer | Cer | Bleu | Lr |
---|---|---|---|---|---|---|---|
No log | 1.0 | 1 | 3.9328 | 1.0 | 4.6333 | 0.0 | 0.02 |
No log | 2.0 | 2 | 4.1008 | 1.0 | 6.1778 | 0.0 | 0.02 |
No log | 3.0 | 3 | 3.8971 | 1.0714 | 3.7556 | 0.0 | 0.02 |
No log | 4.0 | 4 | 3.5169 | 1.5714 | 6.2889 | 0.0 | 0.02 |
No log | 5.0 | 5 | 3.4597 | 10.0714 | 6.1889 | 0.0 | 0.02 |
No log | 6.0 | 6 | 3.4714 | 1.0 | 6.3222 | 0.0 | 0.02 |
No log | 7.0 | 7 | 3.1601 | 1.0 | 6.0667 | 0.0 | 0.02 |
No log | 8.0 | 8 | 2.5631 | 1.0 | 0.7667 | 0.0 | 0.02 |
No log | 9.0 | 9 | 2.6357 | 2.0 | 6.3667 | 0.0 | 0.02 |
No log | 10.0 | 10 | 3.1707 | 2.3571 | 6.5111 | 0.0 | 0.02 |
No log | 11.0 | 11 | 2.9462 | 1.1429 | 0.7 | 0.0 | 0.02 |
No log | 12.0 | 12 | 3.0437 | 1.0 | 6.2111 | 0.0 | 0.02 |
No log | 13.0 | 13 | 2.6371 | 19.2143 | 8.8667 | 0.0 | 0.02 |
No log | 14.0 | 14 | 2.4126 | 7.7143 | 7.1 | 0.0 | 0.02 |
No log | 15.0 | 15 | 2.6156 | 19.1429 | 6.1 | 0.0 | 0.02 |
No log | 16.0 | 16 | 2.7927 | 19.5714 | 6.1778 | 0.0 | 0.02 |
No log | 17.0 | 17 | 2.6685 | 1.0 | 3.3333 | 0.0 | 0.02 |
No log | 18.0 | 18 | 2.9460 | 1.0 | 0.8111 | 0.0 | 0.02 |
No log | 19.0 | 19 | 3.3183 | 1.0714 | 3.4556 | 0.0 | 0.02 |
No log | 20.0 | 20 | 3.7492 | 1.2143 | 3.5222 | 0.0 | 0.02 |
No log | 21.0 | 21 | 3.8371 | 9.1429 | 6.6111 | 0.0 | 0.02 |
No log | 22.0 | 22 | 3.7951 | 13.9286 | 6.3333 | 0.0 | 0.02 |
No log | 23.0 | 23 | 3.4253 | 12.0714 | 6.1556 | 0.0 | 0.02 |
No log | 24.0 | 24 | 3.4148 | 1.0714 | 0.7333 | 0.0 | 0.02 |
No log | 25.0 | 25 | 3.0110 | 8.7143 | 5.9889 | 0.2910 | 0.02 |
No log | 26.0 | 26 | 2.7432 | 1.0 | 1.1444 | 0.0 | 0.02 |
No log | 27.0 | 27 | 2.5661 | 1.4286 | 0.9333 | 0.0 | 0.02 |
No log | 28.0 | 28 | 2.6703 | 1.0 | 3.4889 | 0.0 | 0.02 |
No log | 29.0 | 29 | 2.9169 | 18.7143 | 6.1111 | 0.0 | 0.02 |
No log | 30.0 | 30 | 3.1300 | 4.0 | 4.3667 | 0.0 | 0.02 |
No log | 31.0 | 31 | 3.2927 | 6.0 | 5.6222 | 0.0 | 0.02 |
No log | 32.0 | 32 | 3.0442 | 6.5714 | 6.0444 | 0.0 | 0.02 |
No log | 33.0 | 33 | 2.7768 | 1.7143 | 3.5222 | 0.0 | 0.02 |
No log | 34.0 | 34 | 2.6387 | 1.2857 | 3.4778 | 0.0 | 0.02 |
No log | 35.0 | 35 | 2.4790 | 1.2143 | 3.4444 | 0.0 | 0.02 |
No log | 36.0 | 36 | 2.3595 | 5.9286 | 4.8111 | 0.0 | 0.02 |
No log | 37.0 | 37 | 2.2934 | 7.6429 | 5.3 | 0.0 | 0.02 |
No log | 38.0 | 38 | 2.2778 | 1.6429 | 3.7556 | 1.6467 | 0.02 |
No log | 39.0 | 39 | 2.2839 | 6.0714 | 4.7333 | 0.0 | 0.02 |
No log | 40.0 | 40 | 2.2559 | 1.2857 | 0.8111 | 0.0 | 0.02 |
No log | 41.0 | 41 | 2.2032 | 2.5714 | 4.2333 | 0.0 | 0.02 |
No log | 42.0 | 42 | 2.1507 | 1.1429 | 3.4444 | 0.0 | 0.02 |
No log | 43.0 | 43 | 2.1281 | 1.0 | 0.7556 | 0.0 | 0.02 |
No log | 44.0 | 44 | 2.1175 | 1.5714 | 3.4556 | 0.0 | 0.02 |
No log | 45.0 | 45 | 2.0781 | 4.5714 | 4.3444 | 0.5569 | 0.02 |
No log | 46.0 | 46 | 2.0383 | 1.4286 | 3.3889 | 1.8161 | 0.02 |
No log | 47.0 | 47 | 2.0069 | 1.4286 | 3.3889 | 1.8161 | 0.02 |
No log | 48.0 | 48 | 1.9878 | 1.3571 | 3.3667 | 0.0 | 0.02 |
No log | 49.0 | 49 | 1.9714 | 3.6429 | 3.9556 | 0.0 | 0.02 |
No log | 50.0 | 50 | 1.9604 | 2.7857 | 3.6889 | 0.0 | 0.02 |
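
The card does not include the evaluation code behind the Wer, Cer, and Bleu columns; a plausible sketch using the `evaluate` library (with placeholder strings, not card data) would look like this:

```python
# Sketch of computing the table's metrics with the `evaluate` library;
# the predictions/references below are placeholders.
import evaluate

wer_metric = evaluate.load("wer")   # word error rate (backed by jiwer)
cer_metric = evaluate.load("cer")   # character error rate
bleu_metric = evaluate.load("bleu")

predictions = ["this is a decoded model output"]
references = ["this is the reference transcription"]

print("WER:", wer_metric.compute(predictions=predictions, references=references))
print("CER:", cer_metric.compute(predictions=predictions, references=references))
print("BLEU:", bleu_metric.compute(
    predictions=predictions,
    references=[[r] for r in references],  # BLEU expects a list of reference lists
)["bleu"])
```

Note that WER and CER computed this way can exceed 1.0 when the hypothesis contains many insertions relative to the reference, which would be consistent with the large values in the table above.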
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2