Multilinguality: translation
Size Categories: 1K<n<10K
Language Creators: found
Source Datasets: original
gsarti committed
Commit 57b7e0b
Parent: df05f50

Update README.md

Files changed (1)
  1. README.md +2 -0
README.md CHANGED
@@ -95,6 +95,7 @@ The following fields are contained in the training set:
 |`tot_shifted_words` | Total amount of shifted words from all shifts present in the sentence. |
 |`tot_edits` | Total of all edit types for the sentence. |
 |`hter` | Human-mediated Translation Edit Rate score computed between the MT and post-edited outputs using the [tercom](https://github.com/jhclark/tercom) library. |
+|`cer` | Character-level HTER score computed between the MT and post-edited outputs using the [CharacTER](https://github.com/rwth-i6/CharacTER) library. |
 |`bleu` | Sentence-level BLEU score between MT and post-edited fields (empty for modality `ht`) computed using the [SacreBLEU](https://github.com/mjpost/sacrebleu) library with default parameters. |
 |`chrf` | Sentence-level chrF score between MT and post-edited fields (empty for modality `ht`) computed using the [SacreBLEU](https://github.com/mjpost/sacrebleu) library with default parameters. |
 |`lang_id` | Language identifier for the sentence |
@@ -162,6 +163,7 @@ The following is an example of the subject `t1` post-editing a machine translation
 'tot_shifted_words': 0.0,
 'tot_edits': 3.0,
 'hter': 20.0,
+'cer': 0.10,
 'bleu': 0.0,
 'chrf': 2.569999933242798,
 'lang_id': 'tur',
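
For context, the `bleu` and `chrf` fields described above can be reproduced at the sentence level with SacreBLEU's defaults. The sketch below is illustrative and not part of the dataset card: the `mt` and `pe` strings are placeholder values, not dataset entries, and the `cer` field would analogously be computed from the same string pair with the CharacTER library.

```python
# Minimal sketch: recomputing the sentence-level `bleu` and `chrf` fields
# with SacreBLEU default parameters, as the field table describes.
# The mt/pe strings are illustrative placeholders, not dataset values.
import sacrebleu

mt = "Makine çevirisi çıktısı"      # raw MT output (hypothesis)
pe = "Makine çevirisinin çıktısı"   # post-edited version (reference)

bleu = sacrebleu.sentence_bleu(mt, [pe]).score  # sentence-level BLEU
chrf = sacrebleu.sentence_chrf(mt, [pe]).score  # sentence-level chrF

print(f"bleu={bleu:.2f}, chrf={chrf:.2f}")
```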