He-Xingwei committed
Commit 4d86c21
1 Parent(s): c91d665

Add my new, shiny module.

Files changed (2)
  1. README.md +12 -12
  2. sari_metric.py +4 -1
README.md CHANGED
@@ -69,19 +69,19 @@ The metric takes 3 inputs: sources (a list of source sentence strings), predicti
  
  ```python
  from evaluate import load
- sari = load("sari")
+ sari = load("hxw15/sari_metric")
  sources=["About 95 species are currently accepted."]
  predictions=["About 95 you now get in."]
  references=[["About 95 species are currently known.","About 95 species are now accepted.","95 species are now accepted."]]
- sari_score = sari.compute(sources=sources, predictions=predictions, references=references)
+ results = sari.compute(sources=sources, predictions=predictions, references=references)
  ```
  ## Output values
  
  This metric outputs a dictionary with the SARI score:
  
  ```
- print(sari_score)
- {'sari': 26.953601953601954}
+ print(results)
+ {'sari': 26.953601953601954, 'keep': 0.22527472527472525, 'del': 0.5, 'add': 0.08333333333333333}
  ```
  
  The range of values for the SARI score is between 0 and 100 -- the higher the value, the better the performance of the model being evaluated, with a SARI of 100 being a perfect score.
@@ -98,26 +98,26 @@ Perfect match between prediction and reference:
  
  ```python
  from evaluate import load
- sari = load("sari")
+ sari = load("hxw15/sari_metric")
  sources=["About 95 species are currently accepted ."]
  predictions=["About 95 species are currently accepted ."]
  references=[["About 95 species are currently accepted ."]]
- sari_score = sari.compute(sources=sources, predictions=predictions, references=references)
- print(sari_score)
- {'sari': 100.0}
+ results = sari.compute(sources=sources, predictions=predictions, references=references)
+ print(results)
+ {'sari': 100.0, 'keep': 1.0, 'del': 1.0, 'add': 1.0}
  ```
  
  Partial match between prediction and reference:
  
  ```python
  from evaluate import load
- sari = load("sari")
+ sari = load("hxw15/sari_metric")
  sources=["About 95 species are currently accepted ."]
  predictions=["About 95 you now get in ."]
  references=[["About 95 species are currently known .","About 95 species are now accepted .","95 species are now accepted ."]]
- sari_score = sari.compute(sources=sources, predictions=predictions, references=references)
- print(sari_score)
- {'sari': 26.953601953601954}
+ results = sari.compute(sources=sources, predictions=predictions, references=references)
+ print(results)
+ {'sari': 26.953601953601954, 'keep': 0.22527472527472525, 'del': 0.5, 'add': 0.08333333333333333}
  ```
  
  ## Limitations and bias
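
Note that the new example output is internally consistent: the aggregate `sari` value equals 100 times the mean of the `keep`, `del` and `add` scores, to within floating-point error. A small sanity-check sketch using the numbers copied from the README example above (no library call involved):

```python
# Sanity check on the values shown in the README diff above: the aggregate
# 'sari' score matches 100 * mean(keep, del, add) to within float error.
results = {
    "sari": 26.953601953601954,
    "keep": 0.22527472527472525,
    "del": 0.5,
    "add": 0.08333333333333333,
}
recomputed = 100 * (results["keep"] + results["del"] + results["add"]) / 3
assert abs(recomputed - results["sari"]) < 1e-9
print(recomputed)  # ~26.9536
```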
sari_metric.py CHANGED
@@ -68,6 +68,9 @@ Args:
  references: list of lists of reference sentences where each sentence should be a string.
  Returns:
  sari: sari score
+ avgkeepscore: F1_keep score
+ avgdelscore: P_del score
+ avgaddscore: F1_add score
  Examples:
  >>> sources=["About 95 species are currently accepted ."]
  >>> predictions=["About 95 you now get in ."]
@@ -75,7 +78,7 @@ Examples:
  >>> sari = evaluate.load("sari")
  >>> results = sari.compute(sources=sources, predictions=predictions, references=references)
  >>> print(results)
- {'sari': 26.953601953601954}
+ {'sari': 26.953601953601954, 'keep': 0.22527472527472525, 'del': 0.5, 'add': 0.08333333333333333}
  """
  
  
84