---
license: apache-2.0
datasets:
- ai4bharat/samanantar
language:
- en
- hi
metrics:
- bleu
pipeline_tag: translation
---

# Finetuning

This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the samanantar dataset.

- source group: English
- target group: Hindi
- model: transformer

## Model description

facebook/mbart-large-50-many-to-many-mmt fine-tuned for English-to-Hindi translation.

## Training and evaluation data

ai4bharat/samanantar

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-5
- total_train_batch_size: 8
- num_epochs: 1

## Benchmark Evaluation

- BLEU score on Tatoeba: 11.208466750961147
- BLEU score on IN-22: 26.069430553765887

## Framework versions

- Transformers 4.42.3
- Pytorch 2.1.2
- Datasets 2.20.0
- Tokenizers 0.19.1
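
## Usage

A minimal inference sketch using the standard mBART-50 API from Transformers. The checkpoint name below is the base model as a stand-in; substitute this repository's model id to use the fine-tuned weights.

```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

# Stand-in checkpoint: replace with this repository's model id
# to load the fine-tuned English-to-Hindi weights.
checkpoint = "facebook/mbart-large-50-many-to-many-mmt"

model = MBartForConditionalGeneration.from_pretrained(checkpoint)
tokenizer = MBart50TokenizerFast.from_pretrained(checkpoint)

# mBART-50 requires explicit language codes for source and target.
tokenizer.src_lang = "en_XX"
inputs = tokenizer("How are you?", return_tensors="pt")

# Force the decoder to begin with the Hindi language token.
generated = model.generate(
    **inputs, forced_bos_token_id=tokenizer.lang_code_to_id["hi_IN"]
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```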