metadata
tags:
  - generated_from_trainer
datasets:
  - mlsum
metrics:
  - rouge
base_model: facebook/mbart-large-50
model-index:
  - name: mbart-large-turkish-sum
    results:
      - task:
          type: summarization
          name: Summarization
        dataset:
          name: mlsum tu
          type: mlsum
          args: tu
        metrics:
          - type: rouge
            value: 46.7011
            name: Rouge1

Mukayese: Turkish NLP Strikes Back

Summarization: mukayese/mbart-large-turkish-sum

This model is a fine-tuned version of facebook/mbart-large-50 on the mlsum/tu dataset.

It achieves the following results on the evaluation set:

  • Rouge1: 46.7011
  • Rouge2: 34.0087
  • RougeL: 41.5475
  • RougeLsum: 43.2108

Check the Mukayese paper (arXiv:2203.01215, cited below) for more details on the model and the dataset.
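
For reference, a minimal inference sketch using the transformers Auto classes is shown below. The checkpoint name comes from this card; the generation settings (beam size, maximum summary length) and the placeholder input are illustrative assumptions, not values reported by the authors.

# Minimal inference sketch; settings are illustrative, not from the card.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "mukayese/mbart-large-turkish-sum"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

tokenizer.src_lang = "tr_TR"  # mBART-50 language code for Turkish

article = "..."  # a Turkish news article, e.g. a sample from mlsum/tu

# Truncate long inputs to the mBART encoder limit, generate with beam search,
# and decode the summary.
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=1024)
summary_ids = model.generate(**inputs, num_beams=4, max_length=128)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))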

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 2
  • eval_batch_size: 4
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 8
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 64
  • total_eval_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10.0
  • mixed_precision_training: Native AMP
  • label_smoothing_factor: 0.1
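
The training script itself is not part of this card. As a rough sketch, the settings above could be expressed with transformers Seq2SeqTrainingArguments along these lines; output_dir is a hypothetical path, and the distributed launch across 8 GPUs is handled outside these arguments (e.g. by torch.distributed).

# Sketch only: one way to express the hyperparameters listed above.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="mbart-large-turkish-sum",  # hypothetical output path
    learning_rate=5e-5,
    per_device_train_batch_size=2,   # 2 per device x 8 GPUs x 4 accumulation steps = 64 total
    per_device_eval_batch_size=4,    # 4 per device x 8 GPUs = 32 total
    gradient_accumulation_steps=4,
    num_train_epochs=10.0,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                       # Native AMP mixed-precision training
    label_smoothing_factor=0.1,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)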

Framework versions

  • Transformers 4.11.3
  • Pytorch 1.8.2+cu111
  • Datasets 1.14.0
  • Tokenizers 0.10.3

Citation

@misc{safaya-etal-2022-mukayese,
    title={Mukayese: Turkish NLP Strikes Back},
    author={Ali Safaya and Emirhan Kurtuluş and Arda Göktoğan and Deniz Yuret},
    year={2022},
    eprint={2203.01215},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}