
🛡️ Guardians of the Machine Translation Meta-Evaluation: Sentinel Metrics Fall In!

This repository contains the SENTINEL_CAND metric model, pre-trained on Direct Assessments (DA) annotations and further fine-tuned on Multidimensional Quality Metrics (MQM) data. For details on how to use our sentinel metric models, check out our GitHub repository.

Usage

After installing our repository package, you can use this model within Python as follows:

from sentinel_metric import download_model, load_from_checkpoint

# Download the model checkpoint from the Hugging Face Hub and load it
model_path = download_model("sapienzanlp/sentinel-cand-mqm")
model = load_from_checkpoint(model_path)

# This candidate-only sentinel metric scores translations from the "mt" field alone
data = [
    {"mt": "There's no place like home."},
    {"mt": "Toto, I've a feeling we're not in Kansas anymore."}
]

output = model.predict(data, batch_size=8, gpus=1)

Output:

# Segment scores
>>> output.scores
[0.5421186089515686, 0.29804396629333496]

# System score
>>> output.system_score
0.4200812876224518
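
As the values above suggest, the system-level score corresponds to the average of the segment-level scores. The following quick check, reusing the output object from the snippet above, illustrates this:

# The system-level score equals the mean of the segment-level scores
mean_score = sum(output.scores) / len(output.scores)
print(mean_score)           # 0.4200812876224518
print(output.system_score)  # 0.4200812876224518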

Cite this work

This work has been published at ACL 2024 (main conference). If you use any part of it, please consider citing our paper as follows:

@inproceedings{perrella-etal-2024-guardians,
    title     = "Guardians of the Machine Translation Meta-Evaluation: Sentinel Metrics Fall In!",
    author    = "Perrella, Stefano and
      Proietti, Lorenzo and
      Scirè, Alessandro and
      Barba, Edoardo and
      Navigli, Roberto",
    booktitle = "Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL 2024)",
    year      = "2024",
    address   = "Bangkok, Thailand",
    publisher = "Association for Computational Linguistics",
}

License

This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0).
