---
license: mit
datasets:
  - opus100
  - un_multi
language:
  - en
  - ar
---

# M2M100 418M

M2M100 is a multilingual encoder-decoder transformer model trained for many-to-many multilingual translation. The model, originally introduced by researchers at Facebook, demonstrates strong performance on cross-lingual translation tasks. For more details on M2M100, see the paper and the associated repository.

To further enhance the capabilities of M2M100, we fine-tuned it on English-to-Arabic parallel text. The fine-tuning run trained the model for 1000K steps with a batch size of 8.