---
extra_gated_heading: Acknowledge license to accept the repository
extra_gated_button_content: Acknowledge license
pipeline_tag: translation
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- 'no'
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
license: cc-by-nc-sa-4.0
library_name: transformers
---

This is a [COMET](https://github.com/Unbabel/COMET) quality estimation model: it receives a source sentence and its translation and returns a score that reflects the quality of the translation.

# Paper

[CometKiwi: IST-Unbabel 2022 Submission for the Quality Estimation Shared Task](https://aclanthology.org/2022.wmt-1.60) (Rei et al., WMT 2022)

# License

cc-by-nc-sa-4.0

# Usage (unbabel-comet)

Using this model requires unbabel-comet to be installed:

```bash
pip install --upgrade pip  # ensures that pip is current
pip install "unbabel-comet>=2.0.0"
```

Make sure you acknowledge its license and log in to the Hugging Face Hub before using it:

```bash
huggingface-cli login
# or using an environment variable
huggingface-cli login --token $HUGGINGFACE_TOKEN
```

Then you can use it through the comet CLI:

```bash
comet-score -s {source-input}.txt -t {translation-output}.txt --model Unbabel/wmt22-cometkiwi-da
```

Or using Python:

```python
from comet import download_model, load_from_checkpoint

# Download the checkpoint from the Hugging Face Hub and load it
model_path = download_model("Unbabel/wmt22-cometkiwi-da")
model = load_from_checkpoint(model_path)

# Each sample is a dict with the source ("src") and its machine translation ("mt")
data = [
    {
        "src": "The output signal provides constant sync so the display never glitches.",
        "mt": "Das Ausgangssignal bietet eine konstante Synchronisation, so dass die Anzeige nie stört."
    },
    {
        "src": "Kroužek ilustrace je určen všem milovníkům umění ve věku od 10 do 15 let.",
        "mt": "Кільце ілюстрації призначене для всіх любителів мистецтва у віці від 10 до 15 років."
    },
    {
        "src": "Mandela then became South Africa's first black president after his African National Congress party won the 1994 election.",
        "mt": "その後、1994年の選挙でアフリカ国民会議派が勝利し、南アフリカ初の黒人大統領となった。"
    }
]
model_output = model.predict(data, batch_size=8, gpus=1)
print(model_output)
```
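The `data` argument above is simply a list of `{"src": …, "mt": …}` dicts. A minimal helper for building that list from parallel lists of sentences, sketched here as a convenience (the `build_qe_input` function is illustrative, not part of the comet package):

```python
def build_qe_input(sources, translations):
    """Pair each source sentence with its machine translation in the
    {"src": ..., "mt": ...} format that model.predict expects."""
    if len(sources) != len(translations):
        raise ValueError("sources and translations must align line by line")
    return [{"src": s, "mt": t} for s, t in zip(sources, translations)]


pairs = build_qe_input(
    ["The output signal provides constant sync."],
    ["Das Ausgangssignal bietet eine konstante Synchronisation."],
)
print(pairs)
# [{'src': 'The output signal provides constant sync.',
#   'mt': 'Das Ausgangssignal bietet eine konstante Synchronisation.'}]
```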

# Intended uses

Our model is intended to be used for **reference-free MT evaluation**.

Given a source text and its translation, it outputs a single score between 0 and 1, where 1 represents a perfect translation.

# Languages Covered

This model builds on top of InfoXLM, which covers the following languages:

Afrikaans, Albanian, Amharic, Arabic, Armenian, Assamese, Azerbaijani, Basque, Belarusian, Bengali, Bengali Romanized, Bosnian, Breton, Bulgarian, Burmese, Catalan, Chinese (Simplified), Chinese (Traditional), Croatian, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Hausa, Hebrew, Hindi, Hindi Romanized, Hungarian, Icelandic, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish (Kurmanji), Kyrgyz, Lao, Latin, Latvian, Lithuanian, Macedonian, Malagasy, Malay, Malayalam, Marathi, Mongolian, Nepali, Norwegian, Oriya, Oromo, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Sanskrit, Scottish Gaelic, Serbian, Sindhi, Sinhala, Slovak, Slovenian, Somali, Spanish, Sundanese, Swahili, Swedish, Tamil, Tamil Romanized, Telugu, Telugu Romanized, Thai, Turkish, Ukrainian, Urdu, Urdu Romanized, Uyghur, Uzbek, Vietnamese, Welsh, Western Frisian, Xhosa, Yiddish.

Thus, results for language pairs containing uncovered languages are unreliable!
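Because scores for uncovered languages are unreliable, one defensive pattern is to check a pair's language codes before trusting the score. The sketch below is illustrative, not part of the comet API; the `COVERED` set is taken from the `language` list in this card's metadata:

```python
# ISO 639-1 codes from the "language" list in this model card's metadata.
COVERED = {
    "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs",
    "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr",
    "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id",
    "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la",
    "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl",
    "no", "om", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sa", "sd", "si",
    "sk", "sl", "so", "sq", "sr", "su", "sv", "sw", "ta", "te", "th", "tl",
    "tr", "ug", "uk", "ur", "uz", "vi", "xh", "yi", "zh",
}

def pair_is_covered(src_lang: str, tgt_lang: str) -> bool:
    """True only when both ISO 639-1 codes appear in the covered set."""
    return src_lang in COVERED and tgt_lang in COVERED

print(pair_is_covered("en", "de"))  # True
print(pair_is_covered("en", "zu"))  # False: Zulu is not covered
```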