lvwerra committed
Commit
e6bffd8
1 Parent(s): f0d835e

Update Space (evaluate main: 828c6327)

Files changed (4)
  1. README.md +130 -4
  2. app.py +6 -0
  3. precision.py +145 -0
  4. requirements.txt +4 -0
README.md CHANGED
@@ -1,12 +1,138 @@
  ---
  title: Precision
- emoji: 🐨
- colorFrom: purple
- colorTo: yellow
+ emoji: 🤗
+ colorFrom: blue
+ colorTo: red
  sdk: gradio
  sdk_version: 3.0.2
  app_file: app.py
  pinned: false
+ tags:
+ - evaluate
+ - metric
  ---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
+ # Metric Card for Precision
+
+
+ ## Metric Description
+
+ Precision is the fraction of correctly labeled positive examples out of all of the examples that were labeled as positive. It is computed via the equation:
+ Precision = TP / (TP + FP)
+ where TP is the number of true positives (i.e. the examples correctly labeled as positive) and FP is the number of false positives (i.e. the examples incorrectly labeled as positive).
+
+
+ ## How to Use
+
+ At minimum, precision takes as input a list of predicted labels, `predictions`, and a list of reference labels, `references`.
+
+ ```python
+ >>> precision_metric = evaluate.load("precision")
+ >>> results = precision_metric.compute(references=[0, 1], predictions=[0, 1])
+ >>> print(results)
+ {'precision': 1.0}
+ ```
+
+
+ ### Inputs
+ - **predictions** (`list` of `int`): Predicted class labels.
+ - **references** (`list` of `int`): Actual class labels.
+ - **labels** (`list` of `int`): The set of labels to include when `average` is not set to `'binary'`. If `average` is `None`, it should be the label order. Labels present in the data can be excluded, for example to calculate a multiclass average ignoring a majority negative class. Labels not present in the data will result in 0 components in a macro average. For multilabel targets, labels are column indices. By default, all labels in `predictions` and `references` are used in sorted order. Defaults to None.
+ - **pos_label** (`int`): The class to be considered the positive class, in the case where `average` is set to `'binary'`. Defaults to 1.
+ - **average** (`string`): This parameter is required for multiclass/multilabel targets. If set to `None`, the scores for each class are returned. Otherwise, this determines the type of averaging performed on the data. Defaults to `'binary'`.
+     - 'binary': Only report results for the class specified by `pos_label`. This is applicable only if the classes found in `predictions` and `references` are binary.
+     - 'micro': Calculate metrics globally by counting the total true positives, false negatives and false positives.
+     - 'macro': Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.
+     - 'weighted': Calculate metrics for each label, and find their average weighted by support (the number of true instances for each label). This alters `'macro'` to account for label imbalance. This option can result in an F-score that is not between precision and recall.
+     - 'samples': Calculate metrics for each instance, and find their average (only meaningful for multilabel classification).
+ - **sample_weight** (`list` of `float`): Sample weights. Defaults to None.
+ - **zero_division** (`int` or `string`): Sets the value to return when there is a zero division. Defaults to `'warn'`.
+     - 0: Returns 0 when there is a zero division.
+     - 1: Returns 1 when there is a zero division.
+     - 'warn': Raises warnings and then returns 0 when there is a zero division.
+
+
+ ### Output Values
+ - **precision** (`float` or `array` of `float`): Precision score or list of precision scores, depending on the value passed to `average`. Minimum possible value is 0. Maximum possible value is 1. Higher values indicate that fewer negative examples were incorrectly labeled as positive, which means that, generally, higher scores are better.
+
+ Output Example(s):
+ ```python
+ {'precision': 0.2222222222222222}
+ ```
+ ```python
+ {'precision': array([0.66666667, 0.0, 0.0])}
+ ```
+
+
+
+
+ #### Values from Popular Papers
+
+
+ ### Examples
+
+ Example 1: A simple binary example
+ ```python
+ >>> precision_metric = evaluate.load("precision")
+ >>> results = precision_metric.compute(references=[0, 1, 0, 1, 0], predictions=[0, 0, 1, 1, 0])
+ >>> print(results)
+ {'precision': 0.5}
+ ```
+
+ Example 2: The same simple binary example as in Example 1, but with `pos_label` set to `0`.
+ ```python
+ >>> precision_metric = evaluate.load("precision")
+ >>> results = precision_metric.compute(references=[0, 1, 0, 1, 0], predictions=[0, 0, 1, 1, 0], pos_label=0)
+ >>> print(round(results['precision'], 2))
+ 0.67
+ ```
+
+ Example 3: The same simple binary example as in Example 1, but with `sample_weight` included.
+ ```python
+ >>> precision_metric = evaluate.load("precision")
+ >>> results = precision_metric.compute(references=[0, 1, 0, 1, 0], predictions=[0, 0, 1, 1, 0], sample_weight=[0.9, 0.5, 3.9, 1.2, 0.3])
+ >>> print(results)
+ {'precision': 0.23529411764705882}
+ ```
+
+ Example 4: A multiclass example, with different values for the `average` input.
+ ```python
+ >>> predictions = [0, 2, 1, 0, 0, 1]
+ >>> references = [0, 1, 2, 0, 1, 2]
+ >>> results = precision_metric.compute(predictions=predictions, references=references, average='macro')
+ >>> print(results)
+ {'precision': 0.2222222222222222}
+ >>> results = precision_metric.compute(predictions=predictions, references=references, average='micro')
+ >>> print(results)
+ {'precision': 0.3333333333333333}
+ >>> results = precision_metric.compute(predictions=predictions, references=references, average='weighted')
+ >>> print(results)
+ {'precision': 0.2222222222222222}
+ >>> results = precision_metric.compute(predictions=predictions, references=references, average=None)
+ >>> print([round(res, 2) for res in results['precision']])
+ [0.67, 0.0, 0.0]
+ ```
+
+
+ ## Limitations and Bias
+
+ [Precision](https://huggingface.co/metrics/precision) and [recall](https://huggingface.co/metrics/recall) are complementary and can be used to measure different aspects of model performance. Using both of them (or an averaged measure such as the [F1 score](https://huggingface.co/metrics/F1)) gives a better picture of overall performance. See [Wikipedia](https://en.wikipedia.org/wiki/Precision_and_recall) for more information.
+
+ ## Citation(s)
+ ```bibtex
+ @article{scikit-learn,
+     title={Scikit-learn: Machine Learning in {P}ython},
+     author={Pedregosa, F. and Varoquaux, G. and Gramfort, A. and Michel, V.
+     and Thirion, B. and Grisel, O. and Blondel, M. and Prettenhofer, P.
+     and Weiss, R. and Dubourg, V. and Vanderplas, J. and Passos, A. and
+     Cournapeau, D. and Brucher, M. and Perrot, M. and Duchesnay, E.},
+     journal={Journal of Machine Learning Research},
+     volume={12},
+     pages={2825--2830},
+     year={2011}
+ }
+ ```
+
+
+ ## Further References
+ - [Wikipedia -- Precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall)
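The quick-start and examples above all reduce to the `Precision = TP / (TP + FP)` definition in the metric description. As a minimal sketch (not part of this commit), the snippet below counts true and false positives by hand for the data from Example 1 and checks the result against the loaded metric; it assumes an environment with the `evaluate` library installed, as pinned in `requirements.txt` below.

```python
import evaluate

references = [0, 1, 0, 1, 0]
predictions = [0, 0, 1, 1, 0]

# Count true positives and false positives for the positive class (label 1).
tp = sum(1 for ref, pred in zip(references, predictions) if pred == 1 and ref == 1)
fp = sum(1 for ref, pred in zip(references, predictions) if pred == 1 and ref == 0)
manual_precision = tp / (tp + fp)  # 1 / (1 + 1) = 0.5

precision_metric = evaluate.load("precision")
result = precision_metric.compute(references=references, predictions=predictions)

print(manual_precision)      # 0.5
print(result["precision"])   # 0.5, matching TP / (TP + FP)
```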
app.py ADDED
@@ -0,0 +1,6 @@
+ import evaluate
+ from evaluate.utils import launch_gradio_widget
+
+
+ module = evaluate.load("precision")
+ launch_gradio_widget(module)
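`launch_gradio_widget` wraps the loaded module in a simple web UI; the same module can also be used programmatically. A rough sketch of the incremental `add_batch`/`compute` pattern from the `evaluate` module API (assumed here to behave as in the library's documentation, with an illustrative two-batch toy dataset):

```python
import evaluate

precision = evaluate.load("precision")

# Accumulate (references, predictions) batch by batch, then compute once at the end.
batches = [
    ([0, 1, 0], [0, 1, 1]),
    ([1, 0, 1], [1, 0, 1]),
]
for refs, preds in batches:
    precision.add_batch(references=refs, predictions=preds)

print(precision.compute())  # {'precision': 0.75}: 3 true positives, 1 false positive
```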
precision.py ADDED
@@ -0,0 +1,145 @@
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """Precision metric."""
+
+ import datasets
+ from sklearn.metrics import precision_score
+
+ import evaluate
+
+
+ _DESCRIPTION = """
+ Precision is the fraction of correctly labeled positive examples out of all of the examples that were labeled as positive. It is computed via the equation:
+ Precision = TP / (TP + FP)
+ where TP is the number of true positives (i.e. the examples correctly labeled as positive) and FP is the number of false positives (i.e. the examples incorrectly labeled as positive).
+ """
+
+
29
+ _KWARGS_DESCRIPTION = """
30
+ Args:
31
+ predictions (`list` of `int`): Predicted class labels.
32
+ references (`list` of `int`): Actual class labels.
33
+ labels (`list` of `int`): The set of labels to include when `average` is not set to `'binary'`. If `average` is `None`, it should be the label order. Labels present in the data can be excluded, for example to calculate a multiclass average ignoring a majority negative class. Labels not present in the data will result in 0 components in a macro average. For multilabel targets, labels are column indices. By default, all labels in `predictions` and `references` are used in sorted order. Defaults to None.
34
+ pos_label (`int`): The class to be considered the positive class, in the case where `average` is set to `binary`. Defaults to 1.
35
+ average (`string`): This parameter is required for multiclass/multilabel targets. If set to `None`, the scores for each class are returned. Otherwise, this determines the type of averaging performed on the data. Defaults to `'binary'`.
36
+
37
+ - 'binary': Only report results for the class specified by `pos_label`. This is applicable only if the classes found in `predictions` and `references` are binary.
38
+ - 'micro': Calculate metrics globally by counting the total true positives, false negatives and false positives.
39
+ - 'macro': Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.
40
+ - 'weighted': Calculate metrics for each label, and find their average weighted by support (the number of true instances for each label). This alters `'macro'` to account for label imbalance. This option can result in an F-score that is not between precision and recall.
41
+ - 'samples': Calculate metrics for each instance, and find their average (only meaningful for multilabel classification).
42
+ sample_weight (`list` of `float`): Sample weights Defaults to None.
43
+ zero_division (`int` or `string`): Sets the value to return when there is a zero division. Defaults to 'warn'.
44
+
45
+ - 0: Returns 0 when there is a zero division.
46
+ - 1: Returns 1 when there is a zero division.
47
+ - 'warn': Raises warnings and then returns 0 when there is a zero division.
48
+
49
+ Returns:
50
+ precision (`float` or `array` of `float`): Precision score or list of precision scores, depending on the value passed to `average`. Minimum possible value is 0. Maximum possible value is 1. Higher values indicate that fewer negative examples were incorrectly labeled as positive, which means that, generally, higher scores are better.
51
+
52
+ Examples:
53
+
54
+ Example 1-A simple binary example
55
+ >>> precision_metric = evaluate.load("precision")
56
+ >>> results = precision_metric.compute(references=[0, 1, 0, 1, 0], predictions=[0, 0, 1, 1, 0])
57
+ >>> print(results)
58
+ {'precision': 0.5}
59
+
60
+ Example 2-The same simple binary example as in Example 1, but with `pos_label` set to `0`.
61
+ >>> precision_metric = evaluate.load("precision")
62
+ >>> results = precision_metric.compute(references=[0, 1, 0, 1, 0], predictions=[0, 0, 1, 1, 0], pos_label=0)
63
+ >>> print(round(results['precision'], 2))
64
+ 0.67
65
+
66
+ Example 3-The same simple binary example as in Example 1, but with `sample_weight` included.
67
+ >>> precision_metric = evaluate.load("precision")
68
+ >>> results = precision_metric.compute(references=[0, 1, 0, 1, 0], predictions=[0, 0, 1, 1, 0], sample_weight=[0.9, 0.5, 3.9, 1.2, 0.3])
69
+ >>> print(results)
70
+ {'precision': 0.23529411764705882}
71
+
72
+ Example 4-A multiclass example, with different values for the `average` input.
73
+ >>> predictions = [0, 2, 1, 0, 0, 1]
74
+ >>> references = [0, 1, 2, 0, 1, 2]
75
+ >>> results = precision_metric.compute(predictions=predictions, references=references, average='macro')
76
+ >>> print(results)
77
+ {'precision': 0.2222222222222222}
78
+ >>> results = precision_metric.compute(predictions=predictions, references=references, average='micro')
79
+ >>> print(results)
80
+ {'precision': 0.3333333333333333}
81
+ >>> results = precision_metric.compute(predictions=predictions, references=references, average='weighted')
82
+ >>> print(results)
83
+ {'precision': 0.2222222222222222}
84
+ >>> results = precision_metric.compute(predictions=predictions, references=references, average=None)
85
+ >>> print([round(res, 2) for res in results['precision']])
86
+ [0.67, 0.0, 0.0]
87
+ """
88
+
89
+
90
+ _CITATION = """
91
+ @article{scikit-learn,
92
+ title={Scikit-learn: Machine Learning in {P}ython},
93
+ author={Pedregosa, F. and Varoquaux, G. and Gramfort, A. and Michel, V.
94
+ and Thirion, B. and Grisel, O. and Blondel, M. and Prettenhofer, P.
95
+ and Weiss, R. and Dubourg, V. and Vanderplas, J. and Passos, A. and
96
+ Cournapeau, D. and Brucher, M. and Perrot, M. and Duchesnay, E.},
97
+ journal={Journal of Machine Learning Research},
98
+ volume={12},
99
+ pages={2825--2830},
100
+ year={2011}
101
+ }
102
+ """
103
+
104
+
+ @evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION)
+ class Precision(evaluate.EvaluationModule):
+     def _info(self):
+         return evaluate.EvaluationModuleInfo(
+             description=_DESCRIPTION,
+             citation=_CITATION,
+             inputs_description=_KWARGS_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "predictions": datasets.Sequence(datasets.Value("int32")),
+                     "references": datasets.Sequence(datasets.Value("int32")),
+                 }
+                 if self.config_name == "multilabel"
+                 else {
+                     "predictions": datasets.Value("int32"),
+                     "references": datasets.Value("int32"),
+                 }
+             ),
+             reference_urls=["https://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_score.html"],
+         )
+
+     def _compute(
+         self,
+         predictions,
+         references,
+         labels=None,
+         pos_label=1,
+         average="binary",
+         sample_weight=None,
+         zero_division="warn",
+     ):
+         score = precision_score(
+             references,
+             predictions,
+             labels=labels,
+             pos_label=pos_label,
+             average=average,
+             sample_weight=sample_weight,
+             zero_division=zero_division,
+         )
+         return {"precision": float(score) if score.size == 1 else score}
requirements.txt ADDED
@@ -0,0 +1,4 @@
+ # TODO: fix github to release
+ git+https://github.com/huggingface/evaluate.git@b6e6ed7f3e6844b297bff1b43a1b4be0709b9671
+ datasets~=2.0
+ sklearn
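A quick way to sanity-check an environment built from these pins (the fixed `evaluate` git revision noted in the TODO, plus `datasets` and `sklearn`) is to load the metric and run the metric card's one-line example; a small sketch:

```python
# Smoke test: confirm the pinned dependencies can load and run the metric.
import evaluate

precision = evaluate.load("precision")
result = precision.compute(references=[0, 1], predictions=[0, 1])
assert result == {"precision": 1.0}, result
print("precision metric OK:", result)
```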