fin-eng

Benchmarks

| testset | BLEU | chr-F |
|---------|------|-------|
| newsdev2015-enfi-fineng.fin.eng | 25.3 | 0.536 |
| newstest2015-enfi-fineng.fin.eng | 26.9 | 0.547 |
| newstest2016-enfi-fineng.fin.eng | 29.0 | 0.571 |
| newstest2017-enfi-fineng.fin.eng | 32.3 | 0.594 |
| newstest2018-enfi-fineng.fin.eng | 23.8 | 0.517 |
| newstest2019-fien-fineng.fin.eng | 29.0 | 0.565 |
| newstestB2016-enfi-fineng.fin.eng | 24.5 | 0.527 |
| newstestB2017-enfi-fineng.fin.eng | 27.4 | 0.557 |
| newstestB2017-fien-fineng.fin.eng | 27.4 | 0.557 |
| Tatoeba-test.fin.eng | 53.4 | 0.697 |
