---
language:
- en
- de
- es
- ru
- zh
base_model:
- microsoft/mdeberta-v3-base
- Unbabel/XCOMET-XXL
---

# XCOMET-lite

**Links:** [EMNLP 2024](https://aclanthology.org/2024.emnlp-main.1223/) | [arXiv](https://arxiv.org/abs/2406.14553) | [GitHub repository](https://github.com/NL2G/xCOMET-lite)

`XCOMET-lite` is a distilled version of [`Unbabel/XCOMET-XXL`](https://huggingface.co/Unbabel/XCOMET-XXL) — a machine translation evaluation model trained to provide an overall quality score between 0 and 1, where 1 represents a perfect translation.

This model uses [`microsoft/mdeberta-v3-base`](https://huggingface.co/microsoft/mdeberta-v3-base) as its backbone and has 278 million parameters, making it roughly 38 times smaller than the 10.7-billion-parameter `XCOMET-XXL`.

## Quick Start

1. Clone the [GitHub repository](https://github.com/NL2G/xCOMET-lite).
2. Create a conda environment as instructed in the README.

Then, run the following code:

```python
from xcomet.deberta_encoder import XCOMETLite

# Load the distilled model weights from the Hugging Face Hub
model = XCOMETLite().from_pretrained("myyycroft/XCOMET-lite")

# Each sample contains the source sentence (src), the machine
# translation to score (mt), and a human reference translation (ref)
data = [
    {
        "src": "Elon Musk has acquired Twitter and plans significant changes.",
        "mt": "Илон Маск приобрел Twitter и планировал значительные искажения.",
        "ref": "Илон Маск приобрел Twitter и планирует значительные изменения."
    },
    {
        "src": "Elon Musk has acquired Twitter and plans significant changes.",
        "mt": "Илон Маск приобрел Twitter.",
        "ref": "Илон Маск приобрел Twitter и планирует значительные изменения."
    }
]

# Predict a quality score in [0, 1] for each segment
model_output = model.predict(data, batch_size=2, gpus=1)

print("Segment-level scores:", model_output.scores)
```
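If you need a single corpus-level number rather than per-segment scores, COMET-style metrics conventionally report the mean of the segment scores. The sketch below computes that average directly from `model_output.scores` and shows a simple quality-gating pattern on top of it; the 0.5 threshold is an arbitrary illustration, not a recommended value.

```python
# Corpus-level quality as the mean of segment scores (COMET convention).
system_score = sum(model_output.scores) / len(model_output.scores)
print("System-level score:", system_score)

# Segment scores can also drive simple quality gating: flag translations
# whose score falls below a chosen threshold (0.5 here is illustrative).
THRESHOLD = 0.5
flagged = [
    sample for sample, score in zip(data, model_output.scores)
    if score < THRESHOLD
]
print(f"{len(flagged)} of {len(data)} translations flagged for review")
```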