---
tags:
- summarization
- generated_from_trainer
model-index:
- name: led-risalah_data_v2
results: []
---
# led-risalah_data_v2
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set (a short ROUGE computation sketch follows the list):
- Loss: 0.7850
- Rouge1 Precision: 0.816
- Rouge1 Recall: 0.2149
- Rouge1 Fmeasure: 0.3393
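The ROUGE-1 scores above report precision, recall, and F-measure separately. As a rough illustration (not the actual evaluation script), they can be computed with the `rouge_score` package; the texts below are hypothetical placeholders:
```python
# Illustrative sketch: ROUGE-1 precision/recall/F-measure with the rouge_score
# package. The reference and prediction strings are hypothetical placeholders,
# not examples from the evaluation set.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1"], use_stemmer=True)

reference = "The council approved the annual budget and set the next meeting date."
prediction = "The council approved the annual budget."

score = scorer.score(reference, prediction)["rouge1"]
print(f"rouge1_precision: {score.precision:.4f}")
print(f"rouge1_recall:    {score.recall:.4f}")
print(f"rouge1_fmeasure:  {score.fmeasure:.4f}")
```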
## Model description
More information needed
## Intended uses & limitations
More information needed
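The `summarization` tag suggests the checkpoint is intended for abstractive summarization. A minimal usage sketch is shown below; the repository id and generation settings are assumptions and may need to be adjusted to the actual Hub path and task requirements:
```python
# Minimal usage sketch, assuming this is a seq2seq summarization checkpoint
# published under a Hub id similar to the model name; adjust as needed.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="led-risalah_data_v2",  # assumed repo id (placeholder)
)

document = "..."  # long input document to summarize
result = summarizer(document, max_length=256, min_length=64, truncation=True)
print(result[0]["summary_text"])
```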
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training; a hedged `Seq2SeqTrainingArguments` sketch follows the list:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
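For reference, the list above corresponds roughly to the `Seq2SeqTrainingArguments` sketch below. This is a hedged reconstruction, not the original training script; the `output_dir` is a placeholder and the Adam settings shown are the library defaults:
```python
# Hedged reconstruction of the hyperparameters listed above; output_dir is a
# placeholder and this is not the original training script.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="led-risalah_data_v2",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=8,     # effective train batch size of 8
    num_train_epochs=20,
    lr_scheduler_type="linear",
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    fp16=True,                         # native AMP mixed precision
)
```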
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 Fmeasure | Rouge1 Precision | Rouge1 Recall |
|:-------------:|:-------:|:----:|:---------------:|:---------------:|:----------------:|:-------------:|
| 2.4163 | 0.9143 | 8 | 1.9482 | 0.2001 | 0.4982 | 0.1254 |
| 1.6578 | 1.9429 | 17 | 1.8076 | 0.2489 | 0.6295 | 0.1554 |
| 1.656 | 2.9143 | 24 | 1.4664 | 0.2459 | 0.6118 | 0.154 |
| 1.5142 | 3.9429 | 33 | 1.4191 | 0.2546 | 0.646 | 0.159 |
| 1.4169 | 4.9714 | 42 | 1.4162 | 0.27 | 0.6675 | 0.1698 |
| 1.4123 | 6.9143 | 56 | 1.3197 | 0.2807 | 0.7054 | 0.1755 |
| 1.3398 | 7.9429 | 65 | 1.3156 | 0.2797 | 0.6912 | 0.1759 |
| 1.146 | 8.9714 | 74 | 1.3247 | 0.2925 | 0.728 | 0.1834 |
| 1.1481 | 10.0 | 83 | 1.3366 | 0.2739 | 0.6799 | 0.1718 |
| 1.2033 | 10.9143 | 91 | 1.3387 | 0.2789 | 0.69 | 0.1752 |
| 1.0855 | 11.9429 | 100 | 1.3375 | 0.2888 | 0.7146 | 0.1814 |
| 0.999 | 12.9714 | 109 | 1.3589 | 0.2922 | 0.7265 | 0.1831 |
| 1.0034 | 14.0 | 118 | 1.3601 | 0.2872 | 0.7157 | 0.1801 |
| 0.9831 | 14.9143 | 126 | 1.3762 | 0.2851 | 0.7024 | 0.1792 |
| 0.9347 | 15.9429 | 135 | 1.3743 | 0.2769 | 0.6841 | 0.174 |
| 0.9018 | 16.9714 | 144 | 1.3820 | 0.2862 | 0.7139 | 0.1797 |
| 0.8939 | 18.0 | 153 | 1.3841 | 0.2879 | 0.7134 | 0.1806 |
### Framework versions
- Transformers 4.42.3
- PyTorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1