---
pipeline_tag: translation
language:
- multilingual
- af
- am
- ar
- en
- fr
- ha
- ig
- mg
- ny
- om
- pcm
- rn
- rw
- sn
- so
- st
- sw
- xh
- yo
- zu
license: apache-2.0
---
This is an [AfriCOMET-QE-STL (quality estimation, single task)](https://github.com/masakhane-io/africomet) evaluation model: it receives a source sentence and a translation, and returns a score that reflects the quality of the translation compared to the source.
# Paper
[AfriMTE and AfriCOMET: Empowering COMET to Embrace Under-resourced African Languages](https://arxiv.org/abs/2311.09828) (Wang et al., arXiv 2023)
# License
Apache-2.0
# Usage (AfriCOMET)
Using this model requires unbabel-comet to be installed:
```bash
pip install --upgrade pip # ensures that pip is current
pip install unbabel-comet
```
Then you can use it through the comet CLI:
```bash
comet-score -s {source-inputs}.txt -t {translation-outputs}.txt --model masakhane/africomet-qe-stl
```
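The source and translation files are expected to be line-aligned, one segment per line. A minimal sketch of an invocation, assuming a recent unbabel-comet CLI; the file names are placeholders:
```bash
# Illustrative only: src.yo.txt and hyp.en.txt are placeholder, line-aligned
# files (one segment per line, same number of lines in both).
comet-score -s src.yo.txt -t hyp.en.txt --model masakhane/africomet-qe-stl --gpus 1
```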
Or using Python:
```python
from comet import download_model, load_from_checkpoint
model_path = download_model("masakhane/africomet-qe-stl")
model = load_from_checkpoint(model_path)
data = [
    {
        "src": "Nadal sàkọọ́lẹ̀ ìforígbárí o ní àmì méje sóódo pẹ̀lú ilẹ̀ Canada.",
        "mt": "Nadal's head to head record against the Canadian is 7–2.",
    },
    {
        "src": "Laipe yi o padanu si Raoniki ni ere Sisi Brisbeni.",
        "mt": "He recently lost against Raonic in the Brisbane Open.",
    },
]
model_output = model.predict(data, batch_size=8, gpus=1)
print(model_output)
```
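The returned object bundles per-segment scores and a corpus-level average. A minimal sketch of reading them, assuming a recent unbabel-comet release where `predict()` returns a `Prediction` object exposing `scores` and `system_score`:
```python
# Continues the snippet above: `data` and `model_output` are defined there.
# `scores` holds one float per input dict; `system_score` is their average.
for example, score in zip(data, model_output.scores):
    print(f"{score:.4f}\t{example['mt']}")
print("System-level score:", model_output.system_score)
```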
# Intended uses
Our model is intended to be used for **MT quality estimation**.
Given a source sentence and a translation, it outputs a single score between 0 and 1, where 1 represents a perfect translation.
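For example, the score can be used to filter machine-translated data before further use. A minimal sketch, reusing the `data` and `model_output` variables from the usage snippet above; the threshold is purely illustrative and not a recommendation:
```python
# Hypothetical quality filter: keep only segments whose estimated quality
# clears a chosen cut-off. 0.5 is an arbitrary example value.
QUALITY_THRESHOLD = 0.5

kept = [ex for ex, score in zip(data, model_output.scores) if score >= QUALITY_THRESHOLD]
print(f"Kept {len(kept)} of {len(data)} translations.")
```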
# Languages covered
This model builds on top of AfroXLMR, which covers the following languages:
Afrikaans, Arabic, Amharic, English, French, Hausa, Igbo, Malagasy, Chichewa, Oromo, Nigerian-Pidgin, Kinyarwanda, Kirundi, Shona, Somali, Sesotho, Swahili, isiXhosa, Yoruba, and isiZulu.
Thus, results for language pairs containing uncovered languages are unreliable! |