---
language: en-lv
tags:
- Quality Estimation
- microtransquest
license: apache-2.0
---

# TransQuest: Translation Quality Estimation with Cross-lingual Transformers
The goal of quality estimation (QE) is to evaluate the quality of a translation without access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows, and it has numerous potential uses: QE can be employed to select the best translation when several translation engines are available, or to inform the end user about the reliability of automatically translated content. QE systems can also be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing or translation from scratch. Quality estimation can be done at different levels: document level, sentence level and word level.

With TransQuest, we have open-sourced our research in translation quality estimation, which also won the sentence-level direct assessment quality estimation shared task in [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest).

## Features
- Sentence-level translation quality estimation covering both aspects: predicting post-editing effort and direct assessment.
- Word-level translation quality estimation capable of predicting the quality of source words, target words and target gaps.
- Outperforms current state-of-the-art quality estimation methods such as DeepQuest and OpenKiwi in all the language pairs we experimented with.
- Pre-trained quality estimation models for fifteen language pairs are available on [HuggingFace](https://huggingface.co/TransQuest).

## Installation
### From pip

```bash
pip install transquest
```

### From Source

```bash
git clone https://github.com/TharinduDR/TransQuest.git
cd TransQuest
pip install -r requirements.txt
```

## Using Pre-trained Models

```python
from transquest.algo.word_level.microtransquest.run_model import MicroTransQuestModel
import torch

# Load the pre-trained en-lv word-level model; use the GPU when one is available.
model = MicroTransQuestModel("xlmroberta", "TransQuest/microtransquest-en_lv-pharmaceutical-smt", labels=["OK", "BAD"], use_cuda=torch.cuda.is_available())
# predict() takes a list of [source, target] pairs and returns OK/BAD tags
# for the source words and for the target words and gaps.
source_tags, target_tags = model.predict([["if not , you may not be protected against the diseases . ", "ja tā nav , Jūs varat nepasargāt no slimībām . "]])
```
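
The tag lists can then be lined up with the tokens for inspection. The snippet below is only a rough sketch, assuming that `predict` returns one list of OK/BAD tags per input pair and that the source tags align one-to-one with the whitespace-tokenised source words; see the word-level examples in the documentation for the exact output format.

```python
# Rough sketch, not an API reference: assumes predict() returned one tag list
# per input pair and that source tags align with whitespace-split source tokens.
source = "if not , you may not be protected against the diseases ."
for token, tag in zip(source.split(), source_tags[0]):
    print(f"{token}\t{tag}")
```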
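
TransQuest also provides sentence-level models (see the Documentation section below). As a minimal sketch only, assuming the sentence-level API follows the TransQuest documentation and that a multilingual direct assessment model such as `TransQuest/monotransquest-da-multilingual` is available, a quality score for a source/target pair could be obtained like this:

```python
from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel
import torch

# Sketch only: the model name below is an assumption; substitute any
# sentence-level model listed at https://huggingface.co/TransQuest.
model = MonoTransQuestModel(
    "xlmroberta",
    "TransQuest/monotransquest-da-multilingual",
    num_labels=1,
    use_cuda=torch.cuda.is_available(),
)
predictions, raw_outputs = model.predict(
    [["if not , you may not be protected against the diseases .",
      "ja tā nav , Jūs varat nepasargāt no slimībām ."]]
)
print(predictions)  # one predicted quality score per source/target pair
```
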
## Documentation
For more details, follow the documentation.

1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip.
2. **Architectures** - Check out the architectures implemented in TransQuest.
   1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures, MonoTransQuest and SiameseTransQuest, to perform sentence-level quality estimation.
   2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word-level quality estimation.
3. **Examples** - We have provided several examples of how to use TransQuest in recent WMT quality estimation shared tasks.
   1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/)
   2. [Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/)
4. **Pre-trained Models** - We have provided pre-trained quality estimation models for fifteen language pairs, covering both sentence-level and word-level quality estimation.
   1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/)
   2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/)
5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest.

## Citations
If you are using the word-level architecture, please consider citing this paper, which was accepted to [ACL 2021](https://2021.aclweb.org/).

```bibtex
@InProceedings{ranasinghe2021,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers},
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics},
year = {2021}
}
```

If you are using the sentence-level architectures, please consider citing these papers, which were presented at [COLING 2020](https://coling2020.org/) and at [WMT 2020](http://www.statmt.org/wmt20/), held alongside EMNLP 2020.

```bibtex
@InProceedings{transquest:2020a,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers},
booktitle = {Proceedings of the 28th International Conference on Computational Linguistics},
year = {2020}
}
```

```bibtex
@InProceedings{transquest:2020b,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest at WMT2020: Sentence-Level Direct Assessment},
booktitle = {Proceedings of the Fifth Conference on Machine Translation},
year = {2020}
}
```