Mukayese: Turkish NLP Strikes Back
Summarization: mukayese/mbart-large-turkish-sum
This model is a fine-tuned version of facebook/mbart-large-50 on the Turkish split of the MLSUM dataset (mlsum/tu).
It achieves the following results on the evaluation set:
- ROUGE-1: 46.7011
- ROUGE-2: 34.0087
- ROUGE-L: 41.5475
- ROUGE-Lsum: 43.2108
See the Mukayese paper (arXiv:2203.01215, cited below) for more details on the model and the dataset.
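For quick experimentation, the checkpoint can be loaded with the Transformers library. The snippet below is a minimal inference sketch rather than an official example: the checkpoint name follows the heading above (adjust it if the Hub repository is named differently), and the `tr_TR` language code, beam size, and length limits are assumptions that may need tuning.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Checkpoint name as in the heading above; adjust if the Hub repository uses a different name.
model_name = "mukayese/mbart-large-turkish-sum"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Assumption: source and target language are Turkish ("tr_TR" in mBART-50's language codes).
tokenizer.src_lang = "tr_TR"

article = "..."  # a Turkish news article goes here

inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=1024)
summary_ids = model.generate(
    **inputs,
    num_beams=4,                # assumed decoding settings, not from the card
    max_length=128,
    forced_bos_token_id=tokenizer.lang_code_to_id["tr_TR"],
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```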
Training hyperparameters
The following hyperparameters were used during training (a matching `Seq2SeqTrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
- mixed_precision_training: Native AMP
- label_smoothing_factor: 0.1
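The list above maps roughly onto the following `Seq2SeqTrainingArguments` configuration. This is a hedged reconstruction, not the exact training command: the output directory is a placeholder, and the multi-GPU setup (8 devices) is handled by the distributed launcher rather than by these arguments.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of training arguments mirroring the hyperparameter list above.
training_args = Seq2SeqTrainingArguments(
    output_dir="mbart-large-turkish-sum",  # placeholder output directory
    learning_rate=5e-5,
    per_device_train_batch_size=2,   # 8 GPUs x 2 x grad accumulation 4 = 64 effective
    per_device_eval_batch_size=4,    # 8 GPUs x 4 = 32 effective
    gradient_accumulation_steps=4,
    num_train_epochs=10.0,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                       # Native AMP mixed precision
    label_smoothing_factor=0.1,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```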
Framework versions
- Transformers 4.11.3
- Pytorch 1.8.2+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
Citation
@misc{safaya-etal-2022-mukayese,
  title={Mukayese: Turkish NLP Strikes Back},
  author={Ali Safaya and Emirhan Kurtuluş and Arda Göktoğan and Deniz Yuret},
  year={2022},
  eprint={2203.01215},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}