
bert2bert-canonicalcleandata-extremecleandata-dropout-0.3-lr-5e-05

  • Dev set: Canonical clean data
  • Further trained for 5 epochs on the extreme clean dev set data
  • Encoder max length (input): 512
  • Decoder max length (output): 512

This model was trained from scratch on the id_liputan6 dataset. It achieves the following results on the evaluation set:

  • Loss: 2.4528
  • Rouge2 Precision: 0.1519
  • Rouge2 Recall: 0.1497
  • Rouge2 Fmeasure: 0.1496

Model description

More information needed

Intended uses & limitations

More information needed
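
For reference, a minimal inference sketch follows. It assumes the checkpoint is published on the Hugging Face Hub under the repository ID Alfahluzi/bert2bert-model0 (the ID associated with this card) and that it loads as a standard EncoderDecoderModel with a BERT tokenizer; the decoding settings are illustrative and not taken from the card.

```python
from transformers import AutoTokenizer, EncoderDecoderModel

model_id = "Alfahluzi/bert2bert-model0"  # assumed Hub repo ID; adjust if needed
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = EncoderDecoderModel.from_pretrained(model_id)

article = "..."  # an Indonesian news article (id_liputan6-style input)

# The card lists 512-token limits for both the encoder input and the decoder output.
inputs = tokenizer(article, max_length=512, truncation=True, return_tensors="pt")
summary_ids = model.generate(
    inputs.input_ids,
    attention_mask=inputs.attention_mask,
    max_length=512,
    num_beams=4,          # illustrative; the card does not specify decoding settings
    early_stopping=True,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```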

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 5e-05
  • dropout: 0.3
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 5
  • mixed_precision_training: Native AMP
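
A sketch of how these values could map onto a standard Transformers Seq2SeqTrainingArguments setup is shown below. The output_dir, evaluation strategy, and the way dropout is applied to the model config are assumptions; the remaining values mirror the list above (the Adam betas/epsilon and linear scheduler are the Transformers defaults).

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="bert2bert-canonicalcleandata-extremecleandata-dropout-0.3-lr-5e-05",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    fp16=True,                    # "Native AMP" mixed-precision training
    evaluation_strategy="epoch",  # assumption: the results table has one row per epoch
    predict_with_generate=True,   # assumption: needed to compute ROUGE during evaluation
)

# Dropout (0.3) is not a TrainingArguments field; for a bert2bert model it would
# typically be set on the encoder/decoder configs before training, e.g.:
# model.config.encoder.hidden_dropout_prob = 0.3
# model.config.decoder.hidden_dropout_prob = 0.3
```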

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|---------------|-------|------|-----------------|------------------|---------------|-----------------|
| 2.6029        | 1.0   | 619  | 2.2821          | 0.1605           | 0.1573        | 0.1576          |
| 2.2629        | 2.0   | 1238 | 2.3146          | 0.1566           | 0.1588        | 0.1565          |
| 2.028         | 3.0   | 1857 | 2.3690          | 0.1548           | 0.1513        | 0.1519          |
| 1.8267        | 4.0   | 2476 | 2.4174          | 0.1527           | 0.1491        | 0.1497          |
| 1.5451        | 5.0   | 3095 | 2.4528          | 0.1519           | 0.1497        | 0.1496          |
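
The Rouge2 precision, recall, and F-measure columns above can be reproduced per example with the rouge_score package (an assumption; the card does not state which ROUGE implementation was used), since its scorer returns all three components:

```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge2"], use_stemmer=False)

# Placeholder strings; in practice these are the gold summary and the model output.
score = scorer.score("reference summary text", "generated summary text")["rouge2"]
print(score.precision, score.recall, score.fmeasure)
```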

Framework versions

  • Transformers 4.37.2
  • Pytorch 2.1.0+cu121
  • Datasets 2.17.1
  • Tokenizers 0.15.2