lvwerra committed
Commit d852794
Parent: 1129cfa

Update Space (evaluate main: 828c6327)

Files changed (4)
  1. README.md +140 -5
  2. app.py +6 -0
  3. chrf.py +175 -0
  4. requirements.txt +4 -0
README.md CHANGED
@@ -1,12 +1,147 @@
  ---
- title: Chrf
- emoji:
- colorFrom: purple
- colorTo: indigo
  sdk: gradio
  sdk_version: 3.0.2
  app_file: app.py
  pinned: false
  ---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
  ---
+ title: chrF
+ emoji: 🤗
+ colorFrom: blue
+ colorTo: red
  sdk: gradio
  sdk_version: 3.0.2
  app_file: app.py
  pinned: false
+ tags:
+ - evaluate
+ - metric
  ---

+ # Metric Card for chrF(++)
+
+ ## Metric Description
+ ChrF and ChrF++ are two MT evaluation metrics that use the F-score statistic for character n-gram matches. ChrF++ additionally includes word n-grams, which correlate more strongly with direct assessment. We use the implementation that is already present in sacrebleu.
+
+ While this metric is included in sacreBLEU, the implementation here differs slightly in the required input format: the references and hypotheses lists must be the same length, so you may need to transpose your references relative to sacrebleu's required input format (see the sketch below). See https://github.com/huggingface/datasets/issues/3154#issuecomment-950746534
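+
+ For example, converting references from sacrebleu's layout (one sub-list per reference *set*) to the per-prediction layout used here is a one-line transpose; a minimal sketch with illustrative strings:
+ ```python
+ # sacrebleu-style: one sub-list per reference set, each as long as the predictions list
+ sacrebleu_refs = [
+     ["first reference for sentence 1", "first reference for sentence 2"],
+     ["second reference for sentence 1", "second reference for sentence 2"],
+ ]
+ # this metric: one sub-list of references per prediction
+ references = [list(refs) for refs in zip(*sacrebleu_refs)]
+ ```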
+
+ See the [sacreBLEU README.md](https://github.com/mjpost/sacreBLEU#chrf--chrf) for more information.
+
+ ## How to Use
+ At minimum, this metric requires a `list` of predictions and a `list` of `list`s of references:
+ ```python
+ >>> import evaluate
+ >>> prediction = ["The relationship between cats and dogs is not exactly friendly.", "a good bookshop is just a genteel black hole that knows how to read."]
+ >>> reference = [["The relationship between dogs and cats is not exactly friendly."], ["A good bookshop is just a genteel Black Hole that knows how to read."]]
+ >>> chrf = evaluate.load("chrf")
+ >>> results = chrf.compute(predictions=prediction, references=reference)
+ >>> print(results)
+ {'score': 84.64214891738334, 'char_order': 6, 'word_order': 0, 'beta': 2}
+ ```
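+
+ Each prediction may have multiple references, but every reference sub-list must contain the same number of entries; the module raises a `ValueError` otherwise. A minimal sketch (with made-up sentences) of a two-reference call:
+ ```python
+ import evaluate
+
+ predictions = ["the cat sat on the mat"]
+ # two references per prediction; every sub-list has the same length
+ references = [["the cat sat on the mat", "a cat was sitting on the mat"]]
+ chrf = evaluate.load("chrf")
+ results = chrf.compute(predictions=predictions, references=references)
+ print(results["score"])  # a float between 0.0 and 100.0
+ ```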
+
+ ### Inputs
+ - **`predictions`** (`list` of `str`): The predicted sentences.
+ - **`references`** (`list` of `list` of `str`): The references. There should be one reference sub-list for each prediction sentence.
+ - **`char_order`** (`int`): Character n-gram order. Defaults to `6`.
+ - **`word_order`** (`int`): Word n-gram order. If set to `2`, the metric is referred to as chrF++. Defaults to `0`.
+ - **`beta`** (`int`): Determines the importance of recall w.r.t. precision, as shown in the formula below. Defaults to `2`.
+ - **`lowercase`** (`bool`): If `True`, enables case-insensitivity. Defaults to `False`.
+ - **`whitespace`** (`bool`): If `True`, includes whitespace when extracting character n-grams. Defaults to `False`.
+ - **`eps_smoothing`** (`bool`): If `True`, applies epsilon smoothing similar to the reference chrF++.py, NLTK, and Moses implementations. If `False`, takes into account effective match order, as in sacreBLEU < 2.0.0. Defaults to `False`.
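+
+ For reference, `beta` enters through the standard F-beta combination of character n-gram precision (chrP) and recall (chrR) from Popović (2015); with the default `beta = 2`, recall is weighted twice as heavily as precision:
+ ```latex
+ \mathrm{chrF}_{\beta} = (1 + \beta^{2}) \cdot \frac{\mathrm{chrP} \cdot \mathrm{chrR}}{\beta^{2} \cdot \mathrm{chrP} + \mathrm{chrR}}
+ ```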
+
+ ### Output Values
+ The output is a dictionary containing the following fields:
+ - **`'score'`** (`float`): The chrF (chrF++) score.
+ - **`'char_order'`** (`int`): The character n-gram order.
+ - **`'word_order'`** (`int`): The word n-gram order. If set to `2`, the metric is referred to as chrF++.
+ - **`'beta'`** (`int`): Determines the importance of recall w.r.t. precision.
+
+ The output is formatted as below:
+ ```python
+ {'score': 61.576379378113785, 'char_order': 6, 'word_order': 0, 'beta': 2}
+ ```
+
+ The chrF(++) score can be any value between `0.0` and `100.0`, inclusive.
+
+ #### Values from Popular Papers
+
+ ### Examples
+ A simple example of calculating chrF:
+ ```python
+ >>> prediction = ["The relationship between cats and dogs is not exactly friendly.", "a good bookshop is just a genteel black hole that knows how to read."]
+ >>> reference = [["The relationship between dogs and cats is not exactly friendly."], ["A good bookshop is just a genteel Black Hole that knows how to read."]]
+ >>> chrf = evaluate.load("chrf")
+ >>> results = chrf.compute(predictions=prediction, references=reference)
+ >>> print(results)
+ {'score': 84.64214891738334, 'char_order': 6, 'word_order': 0, 'beta': 2}
+ ```
+
+ The same example, but with the argument `word_order=2`, to calculate chrF++ instead of chrF:
+ ```python
+ >>> prediction = ["The relationship between cats and dogs is not exactly friendly.", "a good bookshop is just a genteel black hole that knows how to read."]
+ >>> reference = [["The relationship between dogs and cats is not exactly friendly."], ["A good bookshop is just a genteel Black Hole that knows how to read."]]
+ >>> chrf = evaluate.load("chrf")
+ >>> results = chrf.compute(predictions=prediction,
+ ...                        references=reference,
+ ...                        word_order=2)
+ >>> print(results)
+ {'score': 82.87263732906315, 'char_order': 6, 'word_order': 2, 'beta': 2}
+ ```
+
+ The same chrF++ example as above, but with `lowercase=True` to normalize all case:
+ ```python
+ >>> prediction = ["The relationship between cats and dogs is not exactly friendly.", "a good bookshop is just a genteel black hole that knows how to read."]
+ >>> reference = [["The relationship between dogs and cats is not exactly friendly."], ["A good bookshop is just a genteel Black Hole that knows how to read."]]
+ >>> chrf = evaluate.load("chrf")
+ >>> results = chrf.compute(predictions=prediction,
+ ...                        references=reference,
+ ...                        word_order=2,
+ ...                        lowercase=True)
+ >>> print(results)
+ {'score': 92.12853119829202, 'char_order': 6, 'word_order': 2, 'beta': 2}
+ ```
+
+ ## Limitations and Bias
+ - According to [Popović 2017](https://www.statmt.org/wmt17/pdf/WMT70.pdf), chrF+ (where `word_order=1`) and chrF++ (where `word_order=2`) produce scores that correlate better with human judgements than chrF (where `word_order=0`) does.
+
+ ## Citation
+ ```bibtex
+ @inproceedings{popovic-2015-chrf,
+     title = "chr{F}: character n-gram {F}-score for automatic {MT} evaluation",
+     author = "Popovi{\'c}, Maja",
+     booktitle = "Proceedings of the Tenth Workshop on Statistical Machine Translation",
+     month = sep,
+     year = "2015",
+     address = "Lisbon, Portugal",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/W15-3049",
+     doi = "10.18653/v1/W15-3049",
+     pages = "392--395",
+ }
+ @inproceedings{popovic-2017-chrf,
+     title = "chr{F}++: words helping character n-grams",
+     author = "Popovi{\'c}, Maja",
+     booktitle = "Proceedings of the Second Conference on Machine Translation",
+     month = sep,
+     year = "2017",
+     address = "Copenhagen, Denmark",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/W17-4770",
+     doi = "10.18653/v1/W17-4770",
+     pages = "612--618",
+ }
+ @inproceedings{post-2018-call,
+     title = "A Call for Clarity in Reporting {BLEU} Scores",
+     author = "Post, Matt",
+     booktitle = "Proceedings of the Third Conference on Machine Translation: Research Papers",
+     month = oct,
+     year = "2018",
+     address = "Belgium, Brussels",
+     publisher = "Association for Computational Linguistics",
+     url = "https://www.aclweb.org/anthology/W18-6319",
+     pages = "186--191",
+ }
+ ```
+
+ ## Further References
+ - See the [sacreBLEU README.md](https://github.com/mjpost/sacreBLEU#chrf--chrf) for more information on this implementation.
app.py ADDED
@@ -0,0 +1,6 @@
+ import evaluate
+ from evaluate.utils import launch_gradio_widget
+
+
+ module = evaluate.load("chrf")
+ launch_gradio_widget(module)
chrf.py ADDED
@@ -0,0 +1,175 @@
+ # Copyright 2021 The HuggingFace Evaluate Authors.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """Chrf(++) metric as available in sacrebleu."""
+ import datasets
+ import sacrebleu as scb
+ from packaging import version
+ from sacrebleu import CHRF
+
+ import evaluate
+
+
+ _CITATION = """\
+ @inproceedings{popovic-2015-chrf,
+     title = "chr{F}: character n-gram {F}-score for automatic {MT} evaluation",
+     author = "Popovi{\'c}, Maja",
+     booktitle = "Proceedings of the Tenth Workshop on Statistical Machine Translation",
+     month = sep,
+     year = "2015",
+     address = "Lisbon, Portugal",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/W15-3049",
+     doi = "10.18653/v1/W15-3049",
+     pages = "392--395",
+ }
+ @inproceedings{popovic-2017-chrf,
+     title = "chr{F}++: words helping character n-grams",
+     author = "Popovi{\'c}, Maja",
+     booktitle = "Proceedings of the Second Conference on Machine Translation",
+     month = sep,
+     year = "2017",
+     address = "Copenhagen, Denmark",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/W17-4770",
+     doi = "10.18653/v1/W17-4770",
+     pages = "612--618",
+ }
+ @inproceedings{post-2018-call,
+     title = "A Call for Clarity in Reporting {BLEU} Scores",
+     author = "Post, Matt",
+     booktitle = "Proceedings of the Third Conference on Machine Translation: Research Papers",
+     month = oct,
+     year = "2018",
+     address = "Belgium, Brussels",
+     publisher = "Association for Computational Linguistics",
+     url = "https://www.aclweb.org/anthology/W18-6319",
+     pages = "186--191",
+ }
+ """
+
+ _DESCRIPTION = """\
+ ChrF and ChrF++ are two MT evaluation metrics. They both use the F-score statistic for character n-gram matches,
+ and ChrF++ adds word n-grams as well, which correlate more strongly with direct assessment. We use the implementation
+ that is already present in sacrebleu.
+
+ The implementation here is slightly different from sacrebleu in terms of the required input format. The references
+ and hypotheses lists need to be the same length, so you may need to transpose your references compared to
+ sacrebleu's required input format. See https://github.com/huggingface/datasets/issues/3154#issuecomment-950746534
+
+ See the README.md file at https://github.com/mjpost/sacreBLEU#chrf--chrf for more information.
+ """
+
+ _KWARGS_DESCRIPTION = """
+ Produces ChrF(++) scores for hypotheses given reference translations.
+
+ Args:
+     predictions (list of str): The predicted sentences.
+     references (list of list of str): The references. There should be one reference sub-list for each prediction sentence.
+     char_order (int): Character n-gram order. Defaults to `6`.
+     word_order (int): Word n-gram order. If set to `2`, the metric is referred to as chrF++. Defaults to `0`.
+     beta (int): Determines the importance of recall w.r.t. precision. Defaults to `2`.
+     lowercase (bool): If `True`, enables case-insensitivity. Defaults to `False`.
+     whitespace (bool): If `True`, includes whitespace when extracting character n-grams. Defaults to `False`.
+     eps_smoothing (bool): If `True`, applies epsilon smoothing similar
+         to the reference chrF++.py, NLTK and Moses implementations. If `False`,
+         it takes into account effective match order similar to sacreBLEU < 2.0.0. Defaults to `False`.
+
+ Returns:
+     'score' (float): The chrF (chrF++) score,
+     'char_order' (int): The character n-gram order,
+     'word_order' (int): The word n-gram order. If set to 2, the metric is referred to as chrF++,
+     'beta' (int): Determines the importance of recall w.r.t. precision
+
+ Examples:
+     Example 1--a simple example of calculating chrF:
+         >>> prediction = ["The relationship between cats and dogs is not exactly friendly.", "a good bookshop is just a genteel black hole that knows how to read."]
+         >>> reference = [["The relationship between dogs and cats is not exactly friendly."], ["A good bookshop is just a genteel Black Hole that knows how to read."]]
+         >>> chrf = evaluate.load("chrf")
+         >>> results = chrf.compute(predictions=prediction, references=reference)
+         >>> print(results)
+         {'score': 84.64214891738334, 'char_order': 6, 'word_order': 0, 'beta': 2}
+
+     Example 2--the same example, but with the argument word_order=2, to calculate chrF++ instead of chrF:
+         >>> prediction = ["The relationship between cats and dogs is not exactly friendly.", "a good bookshop is just a genteel black hole that knows how to read."]
+         >>> reference = [["The relationship between dogs and cats is not exactly friendly."], ["A good bookshop is just a genteel Black Hole that knows how to read."]]
+         >>> chrf = evaluate.load("chrf")
+         >>> results = chrf.compute(predictions=prediction,
+         ...                        references=reference,
+         ...                        word_order=2)
+         >>> print(results)
+         {'score': 82.87263732906315, 'char_order': 6, 'word_order': 2, 'beta': 2}
+
+     Example 3--the same chrF++ example as above, but with `lowercase=True` to normalize all case:
+         >>> prediction = ["The relationship between cats and dogs is not exactly friendly.", "a good bookshop is just a genteel black hole that knows how to read."]
+         >>> reference = [["The relationship between dogs and cats is not exactly friendly."], ["A good bookshop is just a genteel Black Hole that knows how to read."]]
+         >>> chrf = evaluate.load("chrf")
+         >>> results = chrf.compute(predictions=prediction,
+         ...                        references=reference,
+         ...                        word_order=2,
+         ...                        lowercase=True)
+         >>> print(results)
+         {'score': 92.12853119829202, 'char_order': 6, 'word_order': 2, 'beta': 2}
+ """
+
+
+ @evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION)
+ class ChrF(evaluate.EvaluationModule):
+     def _info(self):
+         if version.parse(scb.__version__) < version.parse("1.4.12"):
+             raise ImportWarning(
+                 "To use `sacrebleu`, the module `sacrebleu>=1.4.12` is required, and the current version of `sacrebleu` doesn't match this condition.\n"
+                 'You can install it with `pip install "sacrebleu>=1.4.12"`.'
+             )
+         return evaluate.EvaluationModuleInfo(
+             description=_DESCRIPTION,
+             citation=_CITATION,
+             homepage="https://github.com/mjpost/sacreBLEU#chrf--chrf",
+             inputs_description=_KWARGS_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "predictions": datasets.Value("string", id="sequence"),
+                     "references": datasets.Sequence(datasets.Value("string", id="sequence"), id="references"),
+                 }
+             ),
+             codebase_urls=["https://github.com/mjpost/sacreBLEU#chrf--chrf"],
+             reference_urls=[
+                 "https://github.com/m-popovic/chrF",
+             ],
+         )
+
+     def _compute(
+         self,
+         predictions,
+         references,
+         char_order: int = CHRF.CHAR_ORDER,
+         word_order: int = CHRF.WORD_ORDER,
+         beta: int = CHRF.BETA,
+         lowercase: bool = False,
+         whitespace: bool = False,
+         eps_smoothing: bool = False,
+     ):
+         # sacrebleu requires the same number of references for every prediction
+         references_per_prediction = len(references[0])
+         if any(len(refs) != references_per_prediction for refs in references):
+             raise ValueError("Sacrebleu requires the same number of references for each prediction")
+         # transpose from one sub-list per prediction to sacrebleu's one sub-list per reference set
+         transformed_references = [[refs[i] for refs in references] for i in range(references_per_prediction)]
+
+         sb_chrf = CHRF(char_order, word_order, beta, lowercase, whitespace, eps_smoothing)
+         output = sb_chrf.corpus_score(predictions, transformed_references)
+
+         return {
+             "score": output.score,
+             "char_order": output.char_order,
+             "word_order": output.word_order,
+             "beta": output.beta,
+         }
requirements.txt ADDED
@@ -0,0 +1,4 @@
+ # TODO: fix github to release
+ git+https://github.com/huggingface/evaluate.git@b6e6ed7f3e6844b297bff1b43a1b4be0709b9671
+ datasets~=2.0
+ sacrebleu