He-Xingwei committed
Commit c91d665
1 Parent(s): 630fe61

Add my new, shiny module.

Files changed (4)
  1. README.md +135 -1
  2. app.py +6 -0
  3. requirements.txt +3 -0
  4. sari_metric.py +302 -0
README.md CHANGED
@@ -7,6 +7,140 @@ sdk: gradio
 sdk_version: 3.28.3
 app_file: app.py
 pinned: false
+tags:
+- evaluate
+- metric
+description: >-
+  SARI is a metric used for evaluating automatic text simplification systems.
+  The metric compares the predicted simplified sentences against the reference
+  and the source sentences. It explicitly measures the goodness of words that are
+  added, deleted and kept by the system.
+  Sari = (F1_add + F1_keep + P_del) / 3
+  where
+  F1_add: n-gram F1 score for the add operation
+  F1_keep: n-gram F1 score for the keep operation
+  P_del: n-gram precision score for the delete operation
+  n = 4, as in the original paper.
+
+  This implementation is adapted from Tensorflow's tensor2tensor implementation [3].
+  It has two differences from the original GitHub implementation [1]:
+  (1) It defines 0/0=1 instead of 0 to give higher scores for predictions that match
+  a target exactly.
+  (2) It fixes an alleged bug [2] in the keep score computation.
+  [1] https://github.com/cocoxu/simplification/blob/master/SARI.py (commit 0210f15)
+  [2] https://github.com/cocoxu/simplification/issues/6
+  [3] https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/utils/sari_hook.py
 ---
 
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+# Metric Card for SARI
+
+## Metric description
+
+SARI (***s**ystem output **a**gainst **r**eferences and against the **i**nput sentence*) is a metric used for evaluating automatic text simplification systems.
+
+The metric compares the predicted simplified sentences against the reference and the source sentences. It explicitly measures the goodness of words that are added, deleted and kept by the system.
+
+SARI can be computed as:
+
+`sari = (F1_add + F1_keep + P_del) / 3`
+
+where
+
+`F1_add` is the n-gram F1 score for add operations,
+
+`F1_keep` is the n-gram F1 score for keep operations, and
+
+`P_del` is the n-gram precision score for delete operations.
+
+The number of n-grams, `n`, is equal to 4, as in the original paper.
+
+This implementation is adapted from [Tensorflow's tensor2tensor implementation](https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/utils/sari_hook.py).
+It has two differences from the [original GitHub implementation](https://github.com/cocoxu/simplification/blob/master/SARI.py):
+
+1) It defines 0/0=1 instead of 0 to give higher scores for predictions that match a target exactly.
+2) It fixes an [alleged bug](https://github.com/cocoxu/simplification/issues/6) in the keep score computation.
+
+## How to use
+
+The metric takes three inputs: sources (a list of source sentence strings), predictions (a list of predicted sentence strings) and references (a list of lists of reference sentence strings).
+
+```python
+from evaluate import load
+sari = load("sari")
+sources = ["About 95 species are currently accepted."]
+predictions = ["About 95 you now get in."]
+references = [["About 95 species are currently known.", "About 95 species are now accepted.", "95 species are now accepted."]]
+sari_score = sari.compute(sources=sources, predictions=predictions, references=references)
+```
+
+## Output values
+
+This metric outputs a dictionary with the SARI score:
+
+```
+print(sari_score)
+{'sari': 26.953601953601954}
+```
+
+The range of values for the SARI score is between 0 and 100 -- the higher the value, the better the performance of the model being evaluated, with a SARI of 100 being a perfect score.
+
+### Values from popular papers
+
+The [original paper that proposes the SARI metric](https://aclanthology.org/Q16-1029.pdf) reports scores ranging from 26 to 43 for different simplification systems and different datasets. The authors also find that the metric ranks all of the simplification systems and human references in the same order as the human assessment used as a comparison, and that it correlates reasonably with human judgments.
+
+More recent SARI scores for text simplification can be found on leaderboards for datasets such as [TurkCorpus](https://paperswithcode.com/sota/text-simplification-on-turkcorpus) and [Newsela](https://paperswithcode.com/sota/text-simplification-on-newsela).
+
+## Examples
+
+Perfect match between prediction and reference:
+
+```python
+from evaluate import load
+sari = load("sari")
+sources = ["About 95 species are currently accepted ."]
+predictions = ["About 95 species are currently accepted ."]
+references = [["About 95 species are currently accepted ."]]
+sari_score = sari.compute(sources=sources, predictions=predictions, references=references)
+print(sari_score)
+{'sari': 100.0}
+```
+
+Partial match between prediction and reference:
+
+```python
+from evaluate import load
+sari = load("sari")
+sources = ["About 95 species are currently accepted ."]
+predictions = ["About 95 you now get in ."]
+references = [["About 95 species are currently known .", "About 95 species are now accepted .", "95 species are now accepted ."]]
+sari_score = sari.compute(sources=sources, predictions=predictions, references=references)
+print(sari_score)
+{'sari': 26.953601953601954}
+```
+
+## Limitations and bias
+
+SARI is a valuable measure for comparing different text simplification systems as well as one that can assist the iterative development of a system.
+
+However, while the [original paper presenting SARI](https://aclanthology.org/Q16-1029.pdf) states that it captures "the notion of grammaticality and meaning preservation", this is a difficult claim to validate empirically.
+
+## Citation
+
+```bibtex
+@article{xu-etal-2016-optimizing,
+  title = {Optimizing Statistical Machine Translation for Text Simplification},
+  author = {Xu, Wei and Napoles, Courtney and Pavlick, Ellie and Chen, Quanze and Callison-Burch, Chris},
+  journal = {Transactions of the Association for Computational Linguistics},
+  volume = {4},
+  year = {2016},
+  url = {https://www.aclweb.org/anthology/Q16-1029},
+  pages = {401--415},
+}
+```
+
+## Further References
+
+- [NLP Progress -- Text Simplification](http://nlpprogress.com/english/simplification.html)
+- [Hugging Face Hub -- Text Simplification Models](https://huggingface.co/models?filter=task_ids:text-simplification)
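The aggregation described in the metric card can be sketched in a few lines. This is a standalone illustration, not part of the committed files; `combine_sari` is a hypothetical helper name, and it assumes the per-operation scores for each n-gram order (n = 1..4) have already been computed.

```python
# Minimal sketch of the aggregation in the metric card: average each
# operation's score over the four n-gram orders, then average the three
# operations and scale to 0-100. `combine_sari` is a hypothetical helper,
# not part of the committed module.
def combine_sari(f1_add, f1_keep, p_del):
    # each argument is a list of four per-n-gram-order scores in [0, 1]
    avg_add = sum(f1_add) / 4
    avg_keep = sum(f1_keep) / 4
    avg_del = sum(p_del) / 4
    return 100 * (avg_add + avg_keep + avg_del) / 3


# a perfect prediction scores 1.0 on every operation at every n-gram order
print(combine_sari([1.0] * 4, [1.0] * 4, [1.0] * 4))  # 100.0
```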
app.py ADDED
@@ -0,0 +1,6 @@
+import evaluate
+from evaluate.utils import launch_gradio_widget
+
+
+module = evaluate.load("sari_metric")
+launch_gradio_widget(module)
requirements.txt ADDED
@@ -0,0 +1,3 @@
+git+https://github.com/huggingface/evaluate@{COMMIT_PLACEHOLDER}
+sacrebleu
+sacremoses
sari_metric.py ADDED
@@ -0,0 +1,302 @@
+# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""SARI metric."""
+
+from collections import Counter
+
+import datasets
+import sacrebleu
+import sacremoses
+from packaging import version
+
+import evaluate
+
+
+_CITATION = """\
+@article{xu-etal-2016-optimizing,
+  title = {Optimizing Statistical Machine Translation for Text Simplification},
+  author = {Xu, Wei and Napoles, Courtney and Pavlick, Ellie and Chen, Quanze and Callison-Burch, Chris},
+  journal = {Transactions of the Association for Computational Linguistics},
+  volume = {4},
+  year = {2016},
+  url = {https://www.aclweb.org/anthology/Q16-1029},
+  pages = {401--415},
+}
+"""
+
+_DESCRIPTION = """\
+SARI is a metric used for evaluating automatic text simplification systems.
+The metric compares the predicted simplified sentences against the reference
+and the source sentences. It explicitly measures the goodness of words that are
+added, deleted and kept by the system.
+Sari = (F1_add + F1_keep + P_del) / 3
+where
+F1_add: n-gram F1 score for the add operation
+F1_keep: n-gram F1 score for the keep operation
+P_del: n-gram precision score for the delete operation
+n = 4, as in the original paper.
+
+This implementation is adapted from Tensorflow's tensor2tensor implementation [3].
+It has two differences from the original GitHub implementation [1]:
+(1) It defines 0/0=1 instead of 0 to give higher scores for predictions that match
+a target exactly.
+(2) It fixes an alleged bug [2] in the keep score computation.
+[1] https://github.com/cocoxu/simplification/blob/master/SARI.py (commit 0210f15)
+[2] https://github.com/cocoxu/simplification/issues/6
+[3] https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/utils/sari_hook.py
+"""
+
+
+_KWARGS_DESCRIPTION = """
+Calculates the SARI score (between 0 and 100) given a list of source and predicted
+sentences, and a list of lists of reference sentences.
+Args:
+    sources: list of source sentences where each sentence should be a string.
+    predictions: list of predicted sentences where each sentence should be a string.
+    references: list of lists of reference sentences where each sentence should be a string.
+Returns:
+    sari: the overall SARI score
+    keep: score for the keep operation
+    del: score for the delete operation
+    add: score for the add operation
+Examples:
+    >>> sources=["About 95 species are currently accepted ."]
+    >>> predictions=["About 95 you now get in ."]
+    >>> references=[["About 95 species are currently known .","About 95 species are now accepted .","95 species are now accepted ."]]
+    >>> sari = evaluate.load("sari_metric")
+    >>> results = sari.compute(sources=sources, predictions=predictions, references=references)
+    >>> print(round(results["sari"], 2))
+    26.95
+"""
+
+
+def SARIngram(sgrams, cgrams, rgramslist, numref):
+    rgramsall = [rgram for rgrams in rgramslist for rgram in rgrams]
+    rgramcounter = Counter(rgramsall)
+
+    sgramcounter = Counter(sgrams)
+    sgramcounter_rep = Counter()
+    for sgram, scount in sgramcounter.items():
+        sgramcounter_rep[sgram] = scount * numref
+
+    cgramcounter = Counter(cgrams)
+    cgramcounter_rep = Counter()
+    for cgram, ccount in cgramcounter.items():
+        cgramcounter_rep[cgram] = ccount * numref
+
+    # KEEP
+    keepgramcounter_rep = sgramcounter_rep & cgramcounter_rep
+    keepgramcountergood_rep = keepgramcounter_rep & rgramcounter
+    keepgramcounterall_rep = sgramcounter_rep & rgramcounter
+
+    keeptmpscore1 = 0
+    keeptmpscore2 = 0
+    for keepgram in keepgramcountergood_rep:
+        keeptmpscore1 += keepgramcountergood_rep[keepgram] / keepgramcounter_rep[keepgram]
+        # Fix an alleged bug [2] in the keep score computation.
+        # keeptmpscore2 += keepgramcountergood_rep[keepgram] / keepgramcounterall_rep[keepgram]
+        keeptmpscore2 += keepgramcountergood_rep[keepgram]
+    # Define 0/0=1 instead of 0 to give higher scores for predictions that match
+    # a target exactly.
+    keepscore_precision = 1
+    keepscore_recall = 1
+    if len(keepgramcounter_rep) > 0:
+        keepscore_precision = keeptmpscore1 / len(keepgramcounter_rep)
+    if len(keepgramcounterall_rep) > 0:
+        # Fix an alleged bug [2] in the keep score computation.
+        # keepscore_recall = keeptmpscore2 / len(keepgramcounterall_rep)
+        keepscore_recall = keeptmpscore2 / sum(keepgramcounterall_rep.values())
+    keepscore = 0
+    if keepscore_precision > 0 or keepscore_recall > 0:
+        keepscore = 2 * keepscore_precision * keepscore_recall / (keepscore_precision + keepscore_recall)
+
+    # DELETION
+    delgramcounter_rep = sgramcounter_rep - cgramcounter_rep
+    delgramcountergood_rep = delgramcounter_rep - rgramcounter
+    delgramcounterall_rep = sgramcounter_rep - rgramcounter
+    deltmpscore1 = 0
+    deltmpscore2 = 0
+    for delgram in delgramcountergood_rep:
+        deltmpscore1 += delgramcountergood_rep[delgram] / delgramcounter_rep[delgram]
+        deltmpscore2 += delgramcountergood_rep[delgram] / delgramcounterall_rep[delgram]
+    # Define 0/0=1 instead of 0 to give higher scores for predictions that match
+    # a target exactly.
+    delscore_precision = 1
+    if len(delgramcounter_rep) > 0:
+        delscore_precision = deltmpscore1 / len(delgramcounter_rep)
+
+    # ADDITION
+    addgramcounter = set(cgramcounter) - set(sgramcounter)
+    addgramcountergood = set(addgramcounter) & set(rgramcounter)
+    addgramcounterall = set(rgramcounter) - set(sgramcounter)
+
+    addtmpscore = 0
+    for addgram in addgramcountergood:
+        addtmpscore += 1
+
+    # Define 0/0=1 instead of 0 to give higher scores for predictions that match
+    # a target exactly.
+    addscore_precision = 1
+    addscore_recall = 1
+    if len(addgramcounter) > 0:
+        addscore_precision = addtmpscore / len(addgramcounter)
+    if len(addgramcounterall) > 0:
+        addscore_recall = addtmpscore / len(addgramcounterall)
+    addscore = 0
+    if addscore_precision > 0 or addscore_recall > 0:
+        addscore = 2 * addscore_precision * addscore_recall / (addscore_precision + addscore_recall)
+
+    return (keepscore, delscore_precision, addscore)
+
+
+def SARIsent(ssent, csent, rsents):
+    numref = len(rsents)
+
+    s1grams = ssent.split(" ")
+    c1grams = csent.split(" ")
+    s2grams = []
+    c2grams = []
+    s3grams = []
+    c3grams = []
+    s4grams = []
+    c4grams = []
+
+    r1gramslist = []
+    r2gramslist = []
+    r3gramslist = []
+    r4gramslist = []
+    for rsent in rsents:
+        r1grams = rsent.split(" ")
+        r2grams = []
+        r3grams = []
+        r4grams = []
+        r1gramslist.append(r1grams)
+        for i in range(0, len(r1grams) - 1):
+            if i < len(r1grams) - 1:
+                r2gram = r1grams[i] + " " + r1grams[i + 1]
+                r2grams.append(r2gram)
+            if i < len(r1grams) - 2:
+                r3gram = r1grams[i] + " " + r1grams[i + 1] + " " + r1grams[i + 2]
+                r3grams.append(r3gram)
+            if i < len(r1grams) - 3:
+                r4gram = r1grams[i] + " " + r1grams[i + 1] + " " + r1grams[i + 2] + " " + r1grams[i + 3]
+                r4grams.append(r4gram)
+        r2gramslist.append(r2grams)
+        r3gramslist.append(r3grams)
+        r4gramslist.append(r4grams)
+
+    for i in range(0, len(s1grams) - 1):
+        if i < len(s1grams) - 1:
+            s2gram = s1grams[i] + " " + s1grams[i + 1]
+            s2grams.append(s2gram)
+        if i < len(s1grams) - 2:
+            s3gram = s1grams[i] + " " + s1grams[i + 1] + " " + s1grams[i + 2]
+            s3grams.append(s3gram)
+        if i < len(s1grams) - 3:
+            s4gram = s1grams[i] + " " + s1grams[i + 1] + " " + s1grams[i + 2] + " " + s1grams[i + 3]
+            s4grams.append(s4gram)
+
+    for i in range(0, len(c1grams) - 1):
+        if i < len(c1grams) - 1:
+            c2gram = c1grams[i] + " " + c1grams[i + 1]
+            c2grams.append(c2gram)
+        if i < len(c1grams) - 2:
+            c3gram = c1grams[i] + " " + c1grams[i + 1] + " " + c1grams[i + 2]
+            c3grams.append(c3gram)
+        if i < len(c1grams) - 3:
+            c4gram = c1grams[i] + " " + c1grams[i + 1] + " " + c1grams[i + 2] + " " + c1grams[i + 3]
+            c4grams.append(c4gram)
+
+    (keep1score, del1score, add1score) = SARIngram(s1grams, c1grams, r1gramslist, numref)
+    (keep2score, del2score, add2score) = SARIngram(s2grams, c2grams, r2gramslist, numref)
+    (keep3score, del3score, add3score) = SARIngram(s3grams, c3grams, r3gramslist, numref)
+    (keep4score, del4score, add4score) = SARIngram(s4grams, c4grams, r4gramslist, numref)
+    avgkeepscore = sum([keep1score, keep2score, keep3score, keep4score]) / 4
+    avgdelscore = sum([del1score, del2score, del3score, del4score]) / 4
+    avgaddscore = sum([add1score, add2score, add3score, add4score]) / 4
+    finalscore = (avgkeepscore + avgdelscore + avgaddscore) / 3
+    return finalscore, avgkeepscore, avgdelscore, avgaddscore
+
+
+def normalize(sentence, lowercase: bool = True, tokenizer: str = "13a", return_str: bool = True):
+    # Normalization is required for the ASSET dataset (one of the primary
+    # datasets in sentence simplification) to allow using whitespace
+    # to split the sentence. Even though the Wiki-Auto and TURK datasets
+    # do not require normalization, we do it for consistency.
+    # Code adapted from the EASSE library [1] written by the authors of the ASSET dataset.
+    # [1] https://github.com/feralvam/easse/blob/580bba7e1378fc8289c663f864e0487188fe8067/easse/utils/preprocessing.py#L7
+
+    if lowercase:
+        sentence = sentence.lower()
+
+    if tokenizer in ["13a", "intl"]:
+        if version.parse(sacrebleu.__version__).major >= 2:
+            normalized_sent = sacrebleu.metrics.bleu._get_tokenizer(tokenizer)()(sentence)
+        else:
+            normalized_sent = sacrebleu.TOKENIZERS[tokenizer]()(sentence)
+    elif tokenizer == "moses":
+        normalized_sent = sacremoses.MosesTokenizer().tokenize(sentence, return_str=True, escape=False)
+    elif tokenizer == "penn":
+        normalized_sent = sacremoses.MosesTokenizer().penn_tokenize(sentence, return_str=True)
+    else:
+        normalized_sent = sentence
+
+    if not return_str:
+        normalized_sent = normalized_sent.split()
+
+    return normalized_sent
+
+
+@evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION)
+class Sari(evaluate.Metric):
+    def _info(self):
+        return evaluate.MetricInfo(
+            description=_DESCRIPTION,
+            citation=_CITATION,
+            inputs_description=_KWARGS_DESCRIPTION,
+            features=datasets.Features(
+                {
+                    "sources": datasets.Value("string", id="sequence"),
+                    "predictions": datasets.Value("string", id="sequence"),
+                    "references": datasets.Sequence(datasets.Value("string", id="sequence"), id="references"),
+                }
+            ),
+            codebase_urls=[
+                "https://github.com/cocoxu/simplification/blob/master/SARI.py",
+                "https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/utils/sari_hook.py",
+            ],
+            reference_urls=["https://www.aclweb.org/anthology/Q16-1029.pdf"],
+        )
+
+    def _compute(self, sources, predictions, references):
+        if not (len(sources) == len(predictions) == len(references)):
+            raise ValueError("Sources length must match predictions and references lengths.")
+        sari_score = 0
+        avgkeepscore = 0
+        avgdelscore = 0
+        avgaddscore = 0
+        for src, pred, refs in zip(sources, predictions, references):
+            _sari_score, _avgkeepscore, _avgdelscore, _avgaddscore = SARIsent(
+                normalize(src), normalize(pred), [normalize(sent) for sent in refs]
+            )
+            sari_score += _sari_score
+            avgkeepscore += _avgkeepscore
+            avgdelscore += _avgdelscore
+            avgaddscore += _avgaddscore
+
+        sari_score = sari_score / len(predictions)
+        avgkeepscore = avgkeepscore / len(predictions)
+        avgdelscore = avgdelscore / len(predictions)
+        avgaddscore = avgaddscore / len(predictions)
+        # Report all scores on the same 0-100 scale.
+        return {
+            "sari": 100 * sari_score,
+            "keep": 100 * avgkeepscore,
+            "del": 100 * avgdelscore,
+            "add": 100 * avgaddscore,
+        }
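The `SARIsent` function builds its 1- to 4-grams with four hand-written loops. The same token windows can be expressed more compactly; this is an illustration only, and `ngrams` is a hypothetical helper name, not part of the committed module.

```python
# Illustrative sketch: each of SARIsent's bigram/trigram/4-gram loops
# collects contiguous token windows joined by spaces. A single hypothetical
# helper expressing the same idea:
def ngrams(tokens, n):
    # all contiguous n-token windows, joined with spaces as in SARIsent
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]


tokens = "About 95 species are currently accepted .".split(" ")
print(ngrams(tokens, 2)[:2])  # ['About 95', '95 species']
```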