Metric: comet

Description

Crosslingual Optimized Metric for Evaluation of Translation (COMET) is an open-source framework for training machine translation evaluation metrics that achieve high correlation with different types of human judgments (HTER, DA, or MQM). Alongside the framework, the authors released fully trained models that were used to compete in the WMT20 Metrics Shared Task, achieving state-of-the-art results in that year's competition. See the model listing at https://unbabel.github.io/COMET/html/models.html for more information.

How to load this metric directly with the datasets library:

from datasets import load_metric
metric = load_metric("comet")

Citation

@inproceedings{rei-EtAl:2020:WMT,
  author    = {Rei, Ricardo and Stewart, Craig and Farinha, Ana C and Lavie, Alon},
  title     = {Unbabel's Participation in the WMT20 Metrics Shared Task},
  booktitle = {Proceedings of the Fifth Conference on Machine Translation},
  month     = {November},
  year      = {2020},
  publisher = {Association for Computational Linguistics},
  pages     = {909--918},
}
@inproceedings{rei-etal-2020-comet,
  title     = {{COMET}: A Neural Framework for {MT} Evaluation},
  author    = {Rei, Ricardo and Stewart, Craig and Farinha, Ana C and Lavie, Alon},
  booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
  month     = {November},
  year      = {2020},
}