lvwerra HF staff committed on
Commit 30e0d5d
1 Parent(s): 5ce542d

Update Space (evaluate main: 828c6327)

Files changed (4)
  1. README.md +120 -4
  2. app.py +6 -0
  3. recall.py +135 -0
  4. requirements.txt +4 -0
README.md CHANGED
@@ -1,12 +1,128 @@
  ---
  title: Recall
- emoji: 🐢
- colorFrom: green
- colorTo: blue
+ emoji: 🤗
+ colorFrom: blue
+ colorTo: red
  sdk: gradio
  sdk_version: 3.0.2
  app_file: app.py
  pinned: false
+ tags:
+ - evaluate
+ - metric
  ---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
+ # Metric Card for Recall
+
+
+ ## Metric Description
+
+ Recall is the fraction of the positive examples that were correctly labeled by the model as positive. It can be computed with the equation:
+ Recall = TP / (TP + FN)
+ Where TP is the number of true positives and FN is the number of false negatives.
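+
+ As a quick illustrative check (not part of the module itself), the formula can be applied by hand to a toy set of labels:
+
+ ```python
+ references = [1, 1, 1, 0, 0]
+ predictions = [1, 0, 1, 0, 1]
+
+ tp = sum(p == 1 and r == 1 for p, r in zip(predictions, references))  # 2 true positives
+ fn = sum(p == 0 and r == 1 for p, r in zip(predictions, references))  # 1 false negative
+ print(tp / (tp + fn))  # 0.666..., the same value the recall module returns for these inputs
+ ```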
+
+
+ ## How to Use
+
+ At minimum, this metric takes as input two `list`s, each containing `int`s: predictions and references.
+
+ ```python
+ >>> recall_metric = evaluate.load('recall')
+ >>> results = recall_metric.compute(references=[0, 1], predictions=[0, 1])
+ >>> print(results)
+ {'recall': 1.0}
+ ```
+
+
+ ### Inputs
+ - **predictions** (`list` of `int`): The predicted labels.
+ - **references** (`list` of `int`): The ground truth labels.
+ - **labels** (`list` of `int`): The set of labels to include when `average` is not set to `binary`, and their order when `average` is `None`. Labels present in the data can be excluded in this input, for example to calculate a multiclass average ignoring a majority negative class, while labels not present in the data will result in 0 components in a macro average. For multilabel targets, labels are column indices. By default, all labels in y_true and y_pred are used in sorted order. Defaults to `None`. (See the sketch after this list for how `labels`, `average`, and `zero_division` interact.)
+ - **pos_label** (`int`): The class label to use as the 'positive class' when calculating the recall. Defaults to `1`.
+ - **average** (`string`): This parameter is required for multiclass/multilabel targets. If `None`, the scores for each class are returned. Otherwise, this determines the type of averaging performed on the data. Defaults to `'binary'`.
+     - `'binary'`: Only report results for the class specified by `pos_label`. This is applicable only if the target labels and predictions are binary.
+     - `'micro'`: Calculate metrics globally by counting the total true positives, false negatives, and false positives.
+     - `'macro'`: Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.
+     - `'weighted'`: Calculate metrics for each label, and find their average weighted by support (the number of true instances for each label). This alters `'macro'` to account for label imbalance. Note that it can result in an F-score that is not between precision and recall.
+     - `'samples'`: Calculate metrics for each instance, and find their average (only meaningful for multilabel classification).
+ - **sample_weight** (`list` of `float`): Sample weights. Defaults to `None`.
+ - **zero_division** (`'warn'`, `0`, or `1`): Sets the value to return when there is a zero division. Defaults to `'warn'`.
+     - `'warn'`: If there is a zero division, the return value is `0`, but warnings are also raised.
+     - `0`: If there is a zero division, the return value is `0`.
+     - `1`: If there is a zero division, the return value is `1`.
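+
+ Because the module forwards these arguments to `sklearn.metrics.recall_score` (see `recall.py`), their interaction can be sketched as follows; the label values are made up for illustration:
+
+ ```python
+ import evaluate
+
+ recall_metric = evaluate.load('recall')
+ results = recall_metric.compute(
+     references=[0, 0, 1],
+     predictions=[0, 1, 1],
+     labels=[0, 1, 2],   # include class 2 even though it never appears in the references
+     average=None,       # one recall value per label, in the order given by `labels`
+     zero_division=0,    # class 2 has no true examples, so it is scored 0 instead of raising a warning
+ )
+ # results['recall'] holds the per-class recall values:
+ # 0.5 for class 0, 1.0 for class 1, and 0.0 for class 2
+ ```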
+
+
+ ### Output Values
+ - **recall** (`float`, or `array` of `float`, for multiclass targets): Either the general recall score, or the recall scores for individual classes, depending on the values input to `labels` and `average`. Minimum possible value is 0. Maximum possible value is 1. A higher recall means that more of the positive examples have been labeled correctly. Therefore, a higher recall is generally considered better.
+
+ Output Example(s):
+ ```python
+ {'recall': 1.0}
+ ```
+ ```python
+ {'recall': array([1., 0., 0.])}
+ ```
+
+ This metric outputs a dictionary with one entry, `'recall'`.
+
+
+ #### Values from Popular Papers
+
+
+ ### Examples
+
+ Example 1 - A simple example with some errors
+ ```python
+ >>> recall_metric = evaluate.load('recall')
+ >>> results = recall_metric.compute(references=[0, 0, 1, 1, 1], predictions=[0, 1, 0, 1, 1])
+ >>> print(results)
+ {'recall': 0.6666666666666666}
+ ```
+
+ Example 2 - The same example as Example 1, but with `pos_label=0` instead of the default `pos_label=1`.
+ ```python
+ >>> recall_metric = evaluate.load('recall')
+ >>> results = recall_metric.compute(references=[0, 0, 1, 1, 1], predictions=[0, 1, 0, 1, 1], pos_label=0)
+ >>> print(results)
+ {'recall': 0.5}
+ ```
+
+ Example 3 - The same example as Example 1, but with `sample_weight` included.
+ ```python
+ >>> recall_metric = evaluate.load('recall')
+ >>> sample_weight = [0.9, 0.2, 0.9, 0.3, 0.8]
+ >>> results = recall_metric.compute(references=[0, 0, 1, 1, 1], predictions=[0, 1, 0, 1, 1], sample_weight=sample_weight)
+ >>> print(results)
+ {'recall': 0.55}
+ ```
+
+ Example 4 - A multiclass example, using different averages.
+ ```python
+ >>> recall_metric = evaluate.load('recall')
+ >>> predictions = [0, 2, 1, 0, 0, 1]
+ >>> references = [0, 1, 2, 0, 1, 2]
+ >>> results = recall_metric.compute(predictions=predictions, references=references, average='macro')
+ >>> print(results)
+ {'recall': 0.3333333333333333}
+ >>> results = recall_metric.compute(predictions=predictions, references=references, average='micro')
+ >>> print(results)
+ {'recall': 0.3333333333333333}
+ >>> results = recall_metric.compute(predictions=predictions, references=references, average='weighted')
+ >>> print(results)
+ {'recall': 0.3333333333333333}
+ >>> results = recall_metric.compute(predictions=predictions, references=references, average=None)
+ >>> print(results)
+ {'recall': array([1., 0., 0.])}
+ ```
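+
+ The module also declares a `multilabel` configuration (see the feature definition in `recall.py`). A hedged sketch of how it could be used, assuming the config name is passed as the second argument to `evaluate.load` and the indicator lists are forwarded straight to scikit-learn:
+ ```python
+ recall_metric = evaluate.load('recall', 'multilabel')
+ references = [[0, 1, 1], [1, 0, 1]]    # one binary indicator vector per example
+ predictions = [[0, 1, 0], [1, 0, 1]]
+ # `average='binary'` is not defined for multilabel input, so pick an average (or None):
+ results = recall_metric.compute(references=references, predictions=predictions, average='macro')
+ # per-label recalls are 1.0, 1.0 and 0.5, so the macro-averaged recall is 2.5 / 3 ≈ 0.83
+ ```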
+
+
+ ## Limitations and Bias
+
+
+ ## Citation(s)
+ ```bibtex
+ @article{scikit-learn, title={Scikit-learn: Machine Learning in {P}ython}, author={Pedregosa, F. and Varoquaux, G. and Gramfort, A. and Michel, V. and Thirion, B. and Grisel, O. and Blondel, M. and Prettenhofer, P. and Weiss, R. and Dubourg, V. and Vanderplas, J. and Passos, A. and Cournapeau, D. and Brucher, M. and Perrot, M. and Duchesnay, E.}, journal={Journal of Machine Learning Research}, volume={12}, pages={2825--2830}, year={2011}}
+ ```
+
+
+ ## Further References
app.py ADDED
@@ -0,0 +1,6 @@
+ import evaluate
+ from evaluate.utils import launch_gradio_widget
+
+
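+ # Load the recall metric module and launch the standard evaluate Gradio demo for it.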
+ module = evaluate.load("recall")
+ launch_gradio_widget(module)
recall.py ADDED
@@ -0,0 +1,135 @@
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """Recall metric."""
+
+ import datasets
+ from sklearn.metrics import recall_score
+
+ import evaluate
+
+
+ _DESCRIPTION = """
+ Recall is the fraction of the positive examples that were correctly labeled by the model as positive. It can be computed with the equation:
+ Recall = TP / (TP + FN)
+ Where TP is the true positives and FN is the false negatives.
+ """
+
+
+ _KWARGS_DESCRIPTION = """
+ Args:
+ - **predictions** (`list` of `int`): The predicted labels.
+ - **references** (`list` of `int`): The ground truth labels.
+ - **labels** (`list` of `int`): The set of labels to include when `average` is not set to `binary`, and their order when `average` is `None`. Labels present in the data can be excluded in this input, for example to calculate a multiclass average ignoring a majority negative class, while labels not present in the data will result in 0 components in a macro average. For multilabel targets, labels are column indices. By default, all labels in y_true and y_pred are used in sorted order. Defaults to None.
+ - **pos_label** (`int`): The class label to use as the 'positive class' when calculating the recall. Defaults to `1`.
+ - **average** (`string`): This parameter is required for multiclass/multilabel targets. If None, the scores for each class are returned. Otherwise, this determines the type of averaging performed on the data. Defaults to `'binary'`.
+     - `'binary'`: Only report results for the class specified by `pos_label`. This is applicable only if the target labels and predictions are binary.
+     - `'micro'`: Calculate metrics globally by counting the total true positives, false negatives, and false positives.
+     - `'macro'`: Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.
+     - `'weighted'`: Calculate metrics for each label, and find their average weighted by support (the number of true instances for each label). This alters `'macro'` to account for label imbalance. Note that it can result in an F-score that is not between precision and recall.
+     - `'samples'`: Calculate metrics for each instance, and find their average (only meaningful for multilabel classification).
+ - **sample_weight** (`list` of `float`): Sample weights. Defaults to `None`.
+ - **zero_division** (`'warn'`, `0`, or `1`): Sets the value to return when there is a zero division. Defaults to `'warn'`.
+     - `'warn'`: If there is a zero division, the return value is `0`, but warnings are also raised.
+     - `0`: If there is a zero division, the return value is `0`.
+     - `1`: If there is a zero division, the return value is `1`.
+
+ Returns:
+ - **recall** (`float`, or `array` of `float`): Either the general recall score, or the recall scores for individual classes, depending on the values input to `labels` and `average`. Minimum possible value is 0. Maximum possible value is 1. A higher recall means that more of the positive examples have been labeled correctly. Therefore, a higher recall is generally considered better.
+
+ Examples:
+
+ Example 1 - A simple example with some errors
+ >>> recall_metric = evaluate.load('recall')
+ >>> results = recall_metric.compute(references=[0, 0, 1, 1, 1], predictions=[0, 1, 0, 1, 1])
+ >>> print(results)
+ {'recall': 0.6666666666666666}
+
+ Example 2 - The same example as Example 1, but with `pos_label=0` instead of the default `pos_label=1`.
+ >>> recall_metric = evaluate.load('recall')
+ >>> results = recall_metric.compute(references=[0, 0, 1, 1, 1], predictions=[0, 1, 0, 1, 1], pos_label=0)
+ >>> print(results)
+ {'recall': 0.5}
+
+ Example 3 - The same example as Example 1, but with `sample_weight` included.
+ >>> recall_metric = evaluate.load('recall')
+ >>> sample_weight = [0.9, 0.2, 0.9, 0.3, 0.8]
+ >>> results = recall_metric.compute(references=[0, 0, 1, 1, 1], predictions=[0, 1, 0, 1, 1], sample_weight=sample_weight)
+ >>> print(results)
+ {'recall': 0.55}
+
+ Example 4 - A multiclass example, using different averages.
+ >>> recall_metric = evaluate.load('recall')
+ >>> predictions = [0, 2, 1, 0, 0, 1]
+ >>> references = [0, 1, 2, 0, 1, 2]
+ >>> results = recall_metric.compute(predictions=predictions, references=references, average='macro')
+ >>> print(results)
+ {'recall': 0.3333333333333333}
+ >>> results = recall_metric.compute(predictions=predictions, references=references, average='micro')
+ >>> print(results)
+ {'recall': 0.3333333333333333}
+ >>> results = recall_metric.compute(predictions=predictions, references=references, average='weighted')
+ >>> print(results)
+ {'recall': 0.3333333333333333}
+ >>> results = recall_metric.compute(predictions=predictions, references=references, average=None)
+ >>> print(results)
+ {'recall': array([1., 0., 0.])}
+ """
+
+
+ _CITATION = """
+ @article{scikit-learn, title={Scikit-learn: Machine Learning in {P}ython}, author={Pedregosa, F. and Varoquaux, G. and Gramfort, A. and Michel, V. and Thirion, B. and Grisel, O. and Blondel, M. and Prettenhofer, P. and Weiss, R. and Dubourg, V. and Vanderplas, J. and Passos, A. and Cournapeau, D. and Brucher, M. and Perrot, M. and Duchesnay, E.}, journal={Journal of Machine Learning Research}, volume={12}, pages={2825--2830}, year={2011}}
+ """
+
+
+ @evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION)
+ class Recall(evaluate.EvaluationModule):
+     def _info(self):
+         return evaluate.EvaluationModuleInfo(
+             description=_DESCRIPTION,
+             citation=_CITATION,
+             inputs_description=_KWARGS_DESCRIPTION,
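+             # The "multilabel" configuration expects a sequence of integer labels per example;
+             # the default configuration expects a single integer label per example.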
+             features=datasets.Features(
+                 {
+                     "predictions": datasets.Sequence(datasets.Value("int32")),
+                     "references": datasets.Sequence(datasets.Value("int32")),
+                 }
+                 if self.config_name == "multilabel"
+                 else {
+                     "predictions": datasets.Value("int32"),
+                     "references": datasets.Value("int32"),
+                 }
+             ),
+             reference_urls=["https://scikit-learn.org/stable/modules/generated/sklearn.metrics.recall_score.html"],
+         )
+
+     def _compute(
+         self,
+         predictions,
+         references,
+         labels=None,
+         pos_label=1,
+         average="binary",
+         sample_weight=None,
+         zero_division="warn",
+     ):
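+         # All arguments are forwarded unchanged to scikit-learn's recall_score.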
+         score = recall_score(
+             references,
+             predictions,
+             labels=labels,
+             pos_label=pos_label,
+             average=average,
+             sample_weight=sample_weight,
+             zero_division=zero_division,
+         )
+         return {"recall": float(score) if score.size == 1 else score}
requirements.txt ADDED
@@ -0,0 +1,4 @@
+ # TODO: fix github to release
+ git+https://github.com/huggingface/evaluate.git@b6e6ed7f3e6844b297bff1b43a1b4be0709b9671
+ datasets~=2.0
+ sklearn