Domain-Adapted NMT
- Source language: French (fr)
- Target language: English (en)
- Training data: WMT20, Cochrane bilingual parallel corpus, TAUS Corona Crisis corpus, MLIA COVID corpus
- Development sets: Medline18, Medline19
- Test set: Medline20
- Model: Transformer
- Pre-processing: SentencePiece (sketched below)
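As a quick illustration of the SentencePiece step, the sketch below encodes a French sentence into sub-word pieces. The model file name `spm.model` and the example sentence are placeholders, not files or data from this repository; point the processor at the SentencePiece model shipped with the checkpoint.

```python
import sentencepiece as spm

# Placeholder path: use the SentencePiece model distributed with the checkpoint.
sp = spm.SentencePieceProcessor(model_file='Biomedical_French_to_English/spm.model')

pieces = sp.encode('Le patient présente une toux sèche.', out_type=str)
print(pieces)             # sub-word pieces fed to the Transformer
print(sp.decode(pieces))  # detokenization restores the original sentence
```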
| Test set  | BLEU |
|-----------|------|
| Medline20 | 35.8 |
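Scores of this kind are commonly computed with sacrebleu. The sketch below is a minimal example, not necessarily the exact setup behind the number above; the file names `medline20.hyp.en` and `medline20.ref.en` are hypothetical stand-ins for your decoded output and the Medline20 English references.

```python
import sacrebleu

# Hypothetical file names: one sentence per line, hypotheses aligned with references.
with open('medline20.hyp.en', encoding='utf-8') as f:
    hypotheses = [line.strip() for line in f]
with open('medline20.ref.en', encoding='utf-8') as f:
    references = [line.strip() for line in f]

bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f'BLEU = {bleu.score:.1f}')
```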
```bash
git clone https://huggingface.co/SLPG/Biomedical_French_to_English
```
```python
from fairseq.models.transformer import TransformerModel

# File names below are placeholders; adjust them to the checkpoint and
# SentencePiece model shipped in the cloned repository.
model = TransformerModel.from_pretrained('Biomedical_French_to_English',
                                         checkpoint_file='checkpoint_best.pt',
                                         bpe='sentencepiece',
                                         sentencepiece_model='Biomedical_French_to_English/spm.model')
model.eval()

input_text = 'Bonjour, comment allez-vous ?'  # source side is French
output_text = model.translate(input_text)     # returns the English translation
print(output_text)
```
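In recent fairseq versions, `translate()` also accepts a list of source sentences, which is convenient for decoding a whole file at once; a small sketch (the example sentences are illustrative only):

```python
# Batch decoding: pass a list of French sentences; a list of English translations is returned.
sentences = ['Le patient présente une toux sèche.',
             "Aucun effet indésirable grave n'a été observé."]
translations = model.translate(sentences, beam=5)
for src, hyp in zip(sentences, translations):
    print(f'{src} -> {hyp}')
```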
If you use our model, kindly cite our paper:
```bibtex
@inproceedings{xu2021lisn,
  title     = {LISN @ WMT 2021},
  author    = {Xu, Jitao and Rauf, Sadaf Abdul and Pham, Minh Quang and Yvon, Fran{\c{c}}ois},
  booktitle = {Proceedings of the Sixth Conference on Machine Translation},
  year      = {2021}
}
```