
---
title: average_precision_score
emoji: 🌍
colorFrom: blue
colorTo: orange
tags:
  - evaluate
  - metric
  - sklearn
description: Average precision score.
sdk: gradio
sdk_version: 3.18.0
app_file: app.py
pinned: false
---

# Metric Card for `sklearn.metrics.average_precision_score`

## Input Convention

To be consistent with the `evaluate` input conventions, the scikit-learn inputs are renamed:

- `y_true`: `references`
- `y_score`: `prediction_scores`

## Usage

```python
import evaluate

metric = evaluate.load("yonting/average_precision_score")
results = metric.compute(references=references, prediction_scores=prediction_scores)
```
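
As a concrete sketch, the metric can be called with the binary labels and scores from the scikit-learn example further below. The exact key under which the score is returned is an assumption here, so inspect `results` for the actual output:

```python
import evaluate

metric = evaluate.load("yonting/average_precision_score")

references = [0, 0, 1, 1]                   # binary ground-truth labels
prediction_scores = [0.1, 0.4, 0.35, 0.8]   # scores for the positive class

results = metric.compute(references=references, prediction_scores=prediction_scores)
print(results)  # AP for this data is 0.8333...; the dict key name is an assumption
```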

## Description

Average precision score.

Compute average precision (AP) from prediction scores. AP summarizes a precision-recall curve as the weighted mean of precisions achieved at each threshold, with the increase in recall from the previous threshold used as the weight:

$$\text{AP} = \sum_n (R_n - R_{n-1}) P_n$$

where $P_n$ and $R_n$ are the precision and recall at the nth threshold [1]. This implementation is not interpolated and differs from computing the area under the precision-recall curve with the trapezoidal rule, which uses linear interpolation and can be too optimistic.

Note: this implementation is restricted to the binary classification task or the multilabel classification task.

Read more in the scikit-learn [User Guide](https://scikit-learn.org/stable/modules/model_evaluation.html#precision-recall-f-measure-metrics).
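
The sum above translates directly into a few lines of NumPy. The following is a minimal illustrative sketch, not the scikit-learn implementation; it assumes distinct scores (no tie handling) and at least one positive label:

```python
import numpy as np

def average_precision(y_true, y_score):
    """Non-interpolated AP: sum of (R_n - R_{n-1}) * P_n over thresholds."""
    order = np.argsort(y_score)[::-1]      # sort samples by descending score
    y_true = np.asarray(y_true)[order]
    tp = np.cumsum(y_true)                 # true positives at each cutoff
    precision = tp / np.arange(1, len(y_true) + 1)
    recall = tp / tp[-1]                   # assumes at least one positive label
    prev_recall = np.concatenate(([0.0], recall[:-1]))
    return np.sum((recall - prev_recall) * precision)

print(average_precision([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.8333...
```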
### Parameters

- `y_true` (ndarray of shape (n_samples,) or (n_samples, n_classes)): True binary labels or binary label indicators.
- `y_score` (ndarray of shape (n_samples,) or (n_samples, n_classes)): Target scores, which can be probability estimates of the positive class, confidence values, or non-thresholded measures of decisions (as returned by `decision_function` on some classifiers).
- `average` ({'micro', 'samples', 'weighted', 'macro'} or None, default='macro'): If `None`, the scores for each class are returned. Otherwise, this determines the type of averaging performed on the data:
  - `'micro'`: Calculate metrics globally by considering each element of the label indicator matrix as a label.
  - `'macro'`: Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.
  - `'weighted'`: Calculate metrics for each label, and find their average, weighted by support (the number of true instances for each label).
  - `'samples'`: Calculate metrics for each instance, and find their average.

  Ignored when `y_true` is binary.
- `pos_label` (int or str, default=1): The label of the positive class. Only applied to binary `y_true`. For multilabel-indicator `y_true`, `pos_label` is fixed to 1.
- `sample_weight` (array-like of shape (n_samples,), default=None): Sample weights.
### Returns

- `average_precision` (float): Average precision score.
### See Also

- `roc_auc_score`: Compute the area under the ROC curve.
- `precision_recall_curve`: Compute precision-recall pairs for different probability thresholds.
### Notes

Changed in version 0.19: Instead of linearly interpolating between operating points, precisions are weighted by the change in recall since the last operating point.
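
The behavioral difference described in this note can be checked directly by comparing the step-wise AP with a trapezoidal integration of the same precision-recall curve. The sketch below uses scikit-learn's `precision_recall_curve` and `auc`:

```python
import numpy as np
from sklearn.metrics import auc, average_precision_score, precision_recall_curve

y_true = np.array([0, 0, 1, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8])

ap = average_precision_score(y_true, y_score)         # step-wise weighted sum
precision, recall, _ = precision_recall_curve(y_true, y_score)
trapezoidal = auc(recall, precision)                  # linear interpolation

print(ap, trapezoidal)  # the two estimates generally differ
```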
### References

[1] [Wikipedia entry for the Average precision](https://en.wikipedia.org/w/index.php?title=Information_retrieval&oldid=793358396#Average_precision)
### Examples

```python
>>> import numpy as np
>>> from sklearn.metrics import average_precision_score
>>> y_true = np.array([0, 0, 1, 1])
>>> y_scores = np.array([0.1, 0.4, 0.35, 0.8])
>>> average_precision_score(y_true, y_scores)
0.83...
```
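
For multilabel indicator inputs, the `average` parameter selects how per-class scores are combined. A small sketch with made-up data (the label matrix and scores below are hypothetical):

```python
import numpy as np
from sklearn.metrics import average_precision_score

# Hypothetical 4-sample, 3-class multilabel indicator data.
y_true = np.array([[1, 0, 0],
                   [0, 1, 1],
                   [1, 1, 0],
                   [0, 0, 1]])
y_score = np.array([[0.9, 0.2, 0.1],
                    [0.1, 0.8, 0.7],
                    [0.7, 0.6, 0.2],
                    [0.2, 0.3, 0.9]])

print(average_precision_score(y_true, y_score, average=None))     # per-class AP
print(average_precision_score(y_true, y_score, average="macro"))  # unweighted mean
print(average_precision_score(y_true, y_score, average="micro"))  # global over all labels
```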

## Citation

```bibtex
@article{scikit-learn,
  title={Scikit-learn: Machine Learning in {P}ython},
  author={Pedregosa, F. and Varoquaux, G. and Gramfort, A. and Michel, V.
          and Thirion, B. and Grisel, O. and Blondel, M. and Prettenhofer, P.
          and Weiss, R. and Dubourg, V. and Vanderplas, J. and Passos, A. and
          Cournapeau, D. and Brucher, M. and Perrot, M. and Duchesnay, E.},
  journal={Journal of Machine Learning Research},
  volume={12},
  pages={2825--2830},
  year={2011}
}
```

## Further References