labeled as ``-1`` for "noise"). When chosen too large, it causes close
clusters to be merged into one cluster, and eventually the entire data set
to be returned as a single cluster. Some heuristics for choosing this
parameter have been discussed in the literature, for example based on a
knee in the nearest neighbor distances plot (as discussed in the references
below).

In the figure below, the color indicates cluster membership, with large
circles indicating core samples found by the algorithm. Smaller circles are
non-core samples that are still part of a cluster. Moreover, the outliers
are indicated by black points below.

.. |dbscan_results| image:: ../auto_examples/cluster/images/sphx_glr_plot_dbscan_002.png
   :target: ../auto_examples/cluster/plot_dbscan.html
   :scale: 50

.. centered:: |dbscan_results|

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_cluster_plot_dbscan.py`

.. dropdown:: Implementation

   The DBSCAN algorithm is deterministic, always generating the same
   clusters when given the same data in the same order. However, the results
   can differ when data is provided in a different order. First, even though
   the core samples will always be assigned to the same clusters, the labels
   of those clusters will depend on the order in which those samples are
   encountered in the data. Second and more importantly, the clusters to
   which non-core samples are assigned can differ depending on the data
   order. This would happen when a non-core sample has a distance lower than
   ``eps`` to two core samples in different clusters. By the triangle
   inequality, those two core samples must be more distant than ``eps`` from
   each other, or they would be in the same cluster. The non-core sample is
   assigned to whichever cluster is generated first in a pass through the
   data, and so the results will depend on the data ordering.
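As a minimal sketch of the behaviour described above (not from this page; synthetic data, and the ``eps``/``min_samples`` values are illustrative, not a recommendation):

```python
# Minimal DBSCAN sketch: noise points get the label -1, and core samples
# are exposed through core_sample_indices_.
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=200, centers=3, cluster_std=0.5, random_state=0)
db = DBSCAN(eps=0.5, min_samples=5).fit(X)

# Number of clusters, ignoring the noise label -1.
n_clusters = len(set(db.labels_)) - (1 if -1 in db.labels_ else 0)
n_noise = list(db.labels_).count(-1)
```

Refitting on the same data in the same order reproduces these labels exactly; permuting the rows of ``X`` may relabel the clusters and reassign some border points.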
The current implementation uses ball trees and kd-trees to determine the
neighborhood of points, which avoids calculating the full distance matrix
(as was done in scikit-learn versions before 0.14). The possibility to use
custom metrics is retained; for details, see :class:`NearestNeighbors`.

.. dropdown:: Memory consumption for large sample sizes

   This implementation is by default not memory efficient because it
   constructs a full pairwise similarity matrix in the case where kd-trees
   or ball-trees cannot be used (e.g., with sparse matrices). This matrix
   will consume :math:`n^2` floats. A couple of mechanisms for getting
   around this are:

   - Use :ref:`OPTICS <optics>` clustering in conjunction with the
     `extract_dbscan` method. OPTICS clustering also calculates the full
     pairwise matrix, but only keeps one row in memory at a time (memory
     complexity :math:`\mathcal{O}(n)`).

   - A sparse radius neighborhood graph (where missing entries are presumed
     to be out of eps) can be precomputed in a memory-efficient way and
     dbscan can be run over this with ``metric='precomputed'``. See
     :meth:`sklearn.neighbors.NearestNeighbors.radius_neighbors_graph`.

   - The dataset can be compressed, either by removing exact duplicates if
     these occur in your data, or by using BIRCH. Then you only have a
     relatively small number of representatives for a large number of
     points. You can then provide a ``sample_weight`` when fitting DBSCAN.

.. dropdown:: References

   * "A Density-Based Algorithm for Discovering Clusters in Large Spatial
     Databases with Noise" Ester, M., H. P. Kriegel, J. Sander, and X. Xu,
     In Proceedings of the 2nd International Conference on Knowledge
     Discovery and Data Mining, Portland, OR, AAAI Press, pp. 226-231. 1996.

   * :doi:`DBSCAN revisited, revisited: why and how you should (still) use
     DBSCAN. <10.1145/3068335>` Schubert, E., Sander, J., Ester, M.,
     Kriegel, H. P., & Xu, X. (2017). In ACM Transactions on Database
     Systems (TODS), 42(3), 19.
.. https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/clustering.rst
.. _hdbscan:

HDBSCAN
=======

The :class:`HDBSCAN` algorithm can be seen as an extension of
:class:`DBSCAN` and :class:`OPTICS`. Specifically, :class:`DBSCAN` assumes
that the clustering criterion (i.e. density requirement) is *globally
homogeneous*. In other words, :class:`DBSCAN` may struggle to successfully
capture clusters with different densities. :class:`HDBSCAN` alleviates this
assumption and explores all possible density scales by building an
alternative representation of the clustering problem.

.. note::

   This implementation is adapted from the original implementation of
   HDBSCAN, scikit-learn-contrib/hdbscan, based on [LJ2017]_.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_cluster_plot_hdbscan.py`

Mutual Reachability Graph
-------------------------

HDBSCAN first defines :math:`d_c(x_p)`, the *core distance* of a sample
:math:`x_p`, as the distance to its `min_samples` th-nearest neighbor,
counting itself. For example, if `min_samples=5` and :math:`x_*` is the
5th-nearest neighbor of :math:`x_p` then the core distance is:

.. math:: d_c(x_p) = d(x_p, x_*).

Next it defines :math:`d_m(x_p, x_q)`, the *mutual reachability distance*
of two points :math:`x_p, x_q`, as:

.. math:: d_m(x_p, x_q) = \max\{d_c(x_p), d_c(x_q), d(x_p, x_q)\}

These two notions allow us to construct the *mutual reachability graph*
:math:`G_{ms}` defined for a fixed choice of `min_samples` by associating
each sample :math:`x_p` with a vertex of the graph, and thus edges between
points :math:`x_p, x_q` are the mutual reachability distance
:math:`d_m(x_p, x_q)` between them. We may build subsets of this graph,
denoted as :math:`G_{ms,\varepsilon}`, by removing any edges with value
greater than :math:`\varepsilon` from the original graph. Any points whose
core distance is greater than :math:`\varepsilon` are at this stage marked
as noise.
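The core distance defined above can be computed directly with :class:`NearestNeighbors`; a sketch on random data, with ``min_samples=5`` as in the example (nothing here is part of the HDBSCAN API):

```python
# Core distance d_c(x_p): distance to the min_samples-th nearest neighbor,
# counting the point itself.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.RandomState(0)
X = rng.rand(100, 2)
min_samples = 5

# Querying the training data against itself returns the point itself as
# its own nearest neighbor (distance 0 in column 0), so n_neighbors =
# min_samples matches the "counting itself" convention.
nn = NearestNeighbors(n_neighbors=min_samples).fit(X)
dist, _ = nn.kneighbors(X)
core_distance = dist[:, -1]

# The mutual reachability distance between samples i and j would then be
# max(core_distance[i], core_distance[j], d(x_i, x_j)).
```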
The remaining points are then clustered by finding the connected components
of this trimmed graph.

.. note::

   Taking the connected components of a trimmed graph
   :math:`G_{ms,\varepsilon}` is equivalent to running DBSCAN* with
   `min_samples` and :math:`\varepsilon`. DBSCAN* is a slightly modified
   version of DBSCAN mentioned in [CM2013]_.

Hierarchical Clustering
-----------------------

HDBSCAN can be seen as an algorithm which performs DBSCAN* clustering
across all values of :math:`\varepsilon`. As mentioned previously, this is
equivalent to finding the connected components of the mutual reachability
graphs for all values of :math:`\varepsilon`. To do this efficiently,
HDBSCAN first extracts a minimum spanning tree (MST) from the
fully-connected mutual reachability graph, then greedily cuts the edges
with highest weight. An outline of the HDBSCAN algorithm is as follows:

1. Extract the MST of :math:`G_{ms}`.
2. Extend the MST by adding a "self edge" for each vertex, with weight
   equal to the core distance of the underlying sample.
3. Initialize a single cluster and label for the MST.
4. Remove the edge with the greatest weight from the MST (ties are removed
   simultaneously).
5. Assign cluster labels to the connected components which contain the end
   points of the now-removed edge. If the component does not have at least
   one edge it is instead assigned a "null" label marking it as noise.
6. Repeat 4-5 until there are no more connected components.

HDBSCAN is therefore able to obtain all possible partitions achievable by
DBSCAN* for a fixed choice of `min_samples` in a hierarchical fashion.
Indeed, this allows HDBSCAN to perform clustering across multiple densities
and as such it no longer needs :math:`\varepsilon` to be given as a
hyperparameter. Instead it relies solely on the choice of `min_samples`,
which tends to be a more robust hyperparameter.
.. |hdbscan_ground_truth| image:: ../auto_examples/cluster/images/sphx_glr_plot_hdbscan_005.png
   :target: ../auto_examples/cluster/plot_hdbscan.html
   :scale: 75

.. |hdbscan_results| image:: ../auto_examples/cluster/images/sphx_glr_plot_hdbscan_007.png
   :target: ../auto_examples/cluster/plot_hdbscan.html
   :scale: 75

.. centered:: |hdbscan_ground_truth|
.. centered:: |hdbscan_results|

HDBSCAN can be smoothed with an additional hyperparameter
`min_cluster_size` which specifies that during the hierarchical clustering,
components with fewer than `min_cluster_size` many samples are considered
noise. In practice, one can set `min_cluster_size = min_samples` to couple
the
parameters and simplify the hyperparameter space.

.. rubric:: References

.. [CM2013] Campello, R.J.G.B., Moulavi, D., Sander, J. (2013).
   Density-Based Clustering Based on Hierarchical Density Estimates. In:
   Pei, J., Tseng, V.S., Cao, L., Motoda, H., Xu, G. (eds) Advances in
   Knowledge Discovery and Data Mining. PAKDD 2013. Lecture Notes in
   Computer Science, vol 7819. Springer, Berlin, Heidelberg.
   :doi:`Density-Based Clustering Based on Hierarchical Density Estimates
   <10.1007/978-3-642-37456-2_14>`

.. [LJ2017] L. McInnes and J. Healy, (2017). Accelerated Hierarchical
   Density Based Clustering. In: IEEE International Conference on Data
   Mining Workshops (ICDMW), 2017, pp. 33-42.
   :doi:`Accelerated Hierarchical Density Based Clustering
   <10.1109/ICDMW.2017.12>`

.. _optics:

OPTICS
======

The :class:`OPTICS` algorithm shares many similarities with the
:class:`DBSCAN` algorithm, and can be considered a generalization of DBSCAN
that relaxes the ``eps`` requirement from a single value to a value range.
The key difference between DBSCAN and OPTICS is that the OPTICS algorithm
builds a *reachability* graph, which assigns each sample both a
``reachability_`` distance, and a spot within the cluster ``ordering_``
attribute; these two attributes are assigned when the model is fitted, and
are used to determine cluster membership.
If OPTICS is run with the default value of *inf* set for ``max_eps``, then
DBSCAN style cluster extraction can be performed repeatedly in linear time
for any given ``eps`` value using the ``cluster_optics_dbscan`` method.
Setting ``max_eps`` to a lower value will result in shorter run times, and
can be thought of as the maximum neighborhood radius from each point within
which to find other potentially reachable points.

.. |optics_results| image:: ../auto_examples/cluster/images/sphx_glr_plot_optics_001.png
   :target: ../auto_examples/cluster/plot_optics.html
   :scale: 50

.. centered:: |optics_results|

The *reachability* distances generated by OPTICS allow for variable density
extraction of clusters within a single data set. As shown in the above
plot, combining *reachability* distances and data set ``ordering_``
produces a *reachability plot*, where point density is represented on the
Y-axis, and points are ordered such that nearby points are adjacent.
'Cutting' the reachability plot at a single value produces DBSCAN-like
results; all points above the 'cut' are classified as noise, and each time
that there is a break when reading from left to right signifies a new
cluster. The default cluster extraction with OPTICS looks at the steep
slopes within the graph to find clusters, and the user can define what
counts as a steep slope using the parameter ``xi``. There are also other
possibilities for analysis on the graph itself, such as generating
hierarchical representations of the data through reachability-plot
dendrograms; the hierarchy of clusters detected by the algorithm can be
accessed through the ``cluster_hierarchy_`` attribute. The plot above has
been color-coded so that cluster colors in planar space match the linear
segment clusters of the reachability plot. Note that the blue and red
clusters are adjacent in the reachability plot, and can be hierarchically
represented as children of a larger parent cluster.
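A sketch of the workflow above: fit OPTICS once, inspect the reachability plot data, then extract DBSCAN-style labels at several ``eps`` values with ``cluster_optics_dbscan`` (synthetic data; all parameter values are illustrative):

```python
# One OPTICS fit yields reachability_ and ordering_; DBSCAN-style labels
# can then be extracted repeatedly at different eps values without refitting.
from sklearn.cluster import OPTICS, cluster_optics_dbscan
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.4, random_state=0)
opt = OPTICS(min_samples=10).fit(X)

# Reachability values in processing order: the Y-axis of the reachability
# plot. The first processed point has undefined (infinite) reachability.
reach_in_order = opt.reachability_[opt.ordering_]

# Cheap DBSCAN-style extraction for several eps values.
for eps in (0.5, 1.0, 2.0):
    labels = cluster_optics_dbscan(
        reachability=opt.reachability_,
        core_distances=opt.core_distances_,
        ordering=opt.ordering_,
        eps=eps,
    )
```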
.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_cluster_plot_optics.py`

.. dropdown:: Comparison with DBSCAN

   The results from OPTICS' ``cluster_optics_dbscan`` method and DBSCAN are
   very similar, but not always identical; specifically, in the labeling of
   periphery and noise points. This is in part because the first samples of
   each dense area processed by OPTICS have a large reachability value
   while being close to other points in their area, and will thus sometimes
   be marked as noise rather than periphery. This affects adjacent points
   when they are considered as candidates for being marked as
   either periphery or noise. Note that for any single value of ``eps``,
   DBSCAN will tend to have a shorter run time than OPTICS; however, for
   repeated runs at varying ``eps`` values, a single run of OPTICS may
   require less cumulative runtime than DBSCAN. It is also important to
   note that OPTICS' output is close to DBSCAN's only if ``eps`` and
   ``max_eps`` are close.

.. dropdown:: Computational Complexity

   Spatial indexing trees are used to avoid calculating the full distance
   matrix, and allow for efficient memory usage on large sets of samples.
   Different distance metrics can be supplied via the ``metric`` keyword.

   For large datasets, similar (but not identical) results can be obtained
   via :class:`HDBSCAN`. The HDBSCAN implementation is multithreaded, and
   has better algorithmic runtime complexity than OPTICS, at the cost of
   worse memory scaling. For extremely large datasets that exhaust system
   memory using HDBSCAN, OPTICS will maintain :math:`n` (as opposed to
   :math:`n^2`) memory scaling; however, tuning of the ``max_eps``
   parameter will likely need to be used to give a solution in a reasonable
   amount of wall time.

.. dropdown:: References

   * "OPTICS: ordering points to identify the clustering structure."
     Ankerst, Mihael, Markus M. Breunig, Hans-Peter Kriegel, and Jörg
     Sander. In ACM Sigmod Record, vol. 28, no. 2, pp. 49-60. ACM, 1999.

.. _birch:

BIRCH
=====

The :class:`Birch` algorithm builds a tree called the Clustering Feature
Tree (CFT) for the given data. The data is essentially lossy compressed to
a set of Clustering Feature nodes (CF Nodes).
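A minimal usage sketch on synthetic data (the ``threshold``, ``branching_factor`` and ``n_clusters`` parameters used here are the ones discussed in the paragraphs that follow; the values are illustrative):

```python
# Birch sketch: threshold and branching_factor shape the CF Tree;
# n_clusters controls the optional global clustering step.
from sklearn.cluster import Birch
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=500, centers=4, cluster_std=0.6, random_state=0)

# No global step: labels come straight from the leaf subclusters.
brc = Birch(threshold=0.5, branching_factor=50, n_clusters=None).fit(X)
n_subclusters = brc.subcluster_centers_.shape[0]

# With a global step, the subclusters are grouped into 4 final clusters.
labels = Birch(threshold=0.5, n_clusters=4).fit_predict(X)
```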
The CF Nodes have a number of subclusters called Clustering Feature
subclusters (CF Subclusters) and these CF Subclusters located in the
non-terminal CF Nodes can have CF Nodes as children.

The CF Subclusters hold the necessary information for clustering which
prevents the need to hold the entire input data in memory. This information
includes:

- Number of samples in a subcluster.
- Linear Sum - An n-dimensional vector holding the sum of all samples.
- Squared Sum - Sum of the squared L2 norm of all samples.
- Centroids - To avoid recalculation, ``linear sum / n_samples``.
- Squared norm of the centroids.

The BIRCH algorithm has two parameters, the threshold and the branching
factor. The branching factor limits the number of subclusters in a node and
the threshold limits the distance between the entering sample and the
existing subclusters.

This algorithm can be viewed as an instance of a data reduction method,
since it reduces the input data to a set of subclusters which are obtained
directly from the leaves of the CFT. This reduced data can be further
processed by feeding it into a global clusterer. This global clusterer can
be set by ``n_clusters``. If ``n_clusters`` is set to None, the subclusters
from the leaves are directly read off, otherwise a global clustering step
labels these subclusters into global clusters (labels) and the samples are
mapped to the global label of the nearest subcluster.

.. dropdown:: Algorithm description

   - A new sample is inserted into the root of the CF Tree which is a CF
     Node. It is then merged with the subcluster of the root that has the
     smallest radius after merging, constrained by the threshold and
     branching factor conditions. If the subcluster has any child node,
     then this is done repeatedly till it reaches a leaf. After finding the
     nearest subcluster in
     the leaf, the properties of this subcluster and the parent subclusters
     are recursively updated.

   - If the radius of the subcluster obtained by merging the new sample and
     the nearest subcluster is greater than the square of the threshold and
     if the number of subclusters is greater than the branching factor,
     then a space is temporarily allocated to this new sample. The two
     farthest subclusters are taken and the subclusters are divided into
     two groups on the basis of the distance between these subclusters.

   - If this split node has a parent subcluster and there is room for a new
     subcluster, then the parent is split into two. If there is no room,
     then this node is again split into two and the process is continued
     recursively, till it reaches the root.

.. dropdown:: BIRCH or MiniBatchKMeans?

   - BIRCH does not scale very well to high dimensional data. As a rule of
     thumb, if ``n_features`` is greater than twenty, it is generally
     better to use MiniBatchKMeans.

   - If the number of instances of data needs to be reduced, or if one
     wants a large number of subclusters, either as a preprocessing step or
     otherwise, BIRCH is more useful than MiniBatchKMeans.

.. image:: ../auto_examples/cluster/images/sphx_glr_plot_birch_vs_minibatchkmeans_001.png
   :target: ../auto_examples/cluster/plot_birch_vs_minibatchkmeans.html

.. dropdown:: How to use partial_fit?

   To avoid the computation of global clustering, for every call of
   ``partial_fit`` the user is advised:

   1. To set ``n_clusters=None`` initially.
   2. Train all data by multiple calls to ``partial_fit``.
   3. Set ``n_clusters`` to a required value using
      ``brc.set_params(n_clusters=n_clusters)``.
   4. Call ``partial_fit`` finally with no arguments, i.e.
      ``brc.partial_fit()``, which performs the global clustering.

.. dropdown:: References

   * Tian Zhang, Raghu Ramakrishnan, Miron Livny. BIRCH: An efficient data
     clustering method for large databases.
     https://www.cs.sfu.ca/CourseCentral/459/han/papers/zhang96.pdf

   * Roberto Perdisci. JBirch - Java implementation of BIRCH clustering
     algorithm. https://code.google.com/archive/p/jbirch

.. _clustering_evaluation:

Clustering performance evaluation
=================================

Evaluating the performance of a clustering algorithm is not as trivial as
counting the number of errors or the precision and recall of a supervised
classification algorithm. In particular, any evaluation metric should not
take the absolute values of the cluster labels into account, but rather
whether this clustering defines separations of the data similar to some
ground truth set of classes, or satisfies some assumption such that members
belonging to the same class are more similar than members of different
classes according to some similarity metric.

.. currentmodule:: sklearn.metrics

.. _rand_score:
.. _adjusted_rand_score:

Rand index
----------

Given the knowledge of the ground truth class assignments ``labels_true``
and our clustering algorithm assignments of the same samples
``labels_pred``, the **(adjusted or unadjusted) Rand index** is a function
that measures the **similarity** of the two assignments, ignoring
permutations::

   >>> from sklearn import metrics
   >>> labels_true = [0, 0, 0, 1, 1, 1]
   >>> labels_pred = [0, 0, 1, 1, 2, 2]

   >>> metrics.rand_score(labels_true, labels_pred)
   0.66

The Rand index does not ensure to obtain a value close to 0.0 for a random
labelling. The adjusted Rand index **corrects for chance** and will give
such a baseline::
   >>> metrics.adjusted_rand_score(labels_true, labels_pred)
   0.24

As with all clustering metrics, one can permute 0 and 1 in the predicted
labels, rename 2 to 3, and get the same score::

   >>> labels_pred = [1, 1, 0, 0, 3, 3]
   >>> metrics.rand_score(labels_true, labels_pred)
   0.66
   >>> metrics.adjusted_rand_score(labels_true, labels_pred)
   0.24

Furthermore, both :func:`rand_score` and :func:`adjusted_rand_score` are
**symmetric**: swapping the argument
does not change the scores. They can thus be used as **consensus
measures**::

   >>> metrics.rand_score(labels_pred, labels_true)
   0.66
   >>> metrics.adjusted_rand_score(labels_pred, labels_true)
   0.24

Perfect labeling is scored 1.0::

   >>> labels_pred = labels_true[:]
   >>> metrics.rand_score(labels_true, labels_pred)
   1.0
   >>> metrics.adjusted_rand_score(labels_true, labels_pred)
   1.0

Poorly agreeing labels (e.g. independent labelings) have lower scores, and
for the adjusted Rand index the score will be negative or close to zero.
However, for the unadjusted Rand index the score, while lower, will not
necessarily be close to zero::

   >>> labels_true = [0, 0, 0, 0, 0, 0, 1, 1]
   >>> labels_pred = [0, 1, 2, 3, 4, 5, 5, 6]
   >>> metrics.rand_score(labels_true, labels_pred)
   0.39
   >>> metrics.adjusted_rand_score(labels_true, labels_pred)
   -0.072

.. topic:: Advantages:

   - **Interpretability**: The unadjusted Rand index is proportional to the
     number of sample pairs whose labels are the same in both
     `labels_pred` and `labels_true`, or are different in both.

   - **Random (uniform) label assignments have an adjusted Rand index score
     close to 0.0** for any value of ``n_clusters`` and ``n_samples``
     (which is not the case for the unadjusted Rand index or the V-measure
     for instance).

   - **Bounded range**: Lower values indicate different labelings, similar
     clusterings have a high (adjusted or unadjusted) Rand index, 1.0 is
     the perfect match score. The score range is [0, 1] for the unadjusted
     Rand index and [-0.5, 1] for the adjusted Rand index.
   - **No assumption is made on the cluster structure**: The (adjusted or
     unadjusted) Rand index can be used to compare all kinds of clustering
     algorithms, and can be used to compare clustering algorithms such as
     k-means, which assumes isotropic blob shapes, with results of spectral
     clustering algorithms, which can find clusters with "folded" shapes.

.. topic:: Drawbacks:

   - Contrary to inertia, the **(adjusted or unadjusted) Rand index
     requires knowledge of the ground truth classes**, which is almost
     never available in practice or requires manual assignment by human
     annotators (as in the supervised learning setting).

     However, the (adjusted or unadjusted) Rand index can also be useful in
     a purely unsupervised setting as a building block for a Consensus
     Index that can be used for clustering model selection (TODO).

   - The **unadjusted Rand index is often close to 1.0** even if the
     clusterings themselves differ significantly. This can be understood
     when interpreting the Rand index as the accuracy of element pair
     labeling resulting from the clusterings: in practice there often is a
     majority of element pairs that are assigned the ``different`` pair
     label under both the predicted and the ground truth clustering,
     resulting in a high proportion of pair labels that agree, which leads
     subsequently to a high score.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_cluster_plot_adjusted_for_chance_measures.py`:
  Analysis of the impact of the dataset size on the value of clustering
  measures for random assignments.

.. dropdown:: Mathematical formulation

   If C is a ground truth class assignment and K the clustering, let us
   define :math:`a` and :math:`b` as:

   - :math:`a`, the number of pairs of elements that are in the same set in
     C and in the same set in K
   - :math:`b`, the number of pairs of elements that are in different sets
     in C and in different sets in K

   The unadjusted Rand index is then given by:
   .. math:: \text{RI} = \frac{a + b}{C_2^{n_{samples}}}

   where :math:`C_2^{n_{samples}}` is the total number of possible pairs in
   the dataset. It does not matter if the calculation is
   performed on ordered pairs or unordered pairs as long as the calculation
   is performed consistently.

   However, the Rand index does not guarantee that random label assignments
   will get a value close to zero (esp. if the number of clusters is in the
   same order of magnitude as the number of samples).

   To counter this effect we can discount the expected RI
   :math:`E[\text{RI}]` of random labelings by defining the adjusted Rand
   index as follows:

   .. math:: \text{ARI} = \frac{\text{RI} - E[\text{RI}]}{\max(\text{RI}) - E[\text{RI}]}

.. dropdown:: References

   * "Comparing Partitions" L. Hubert and P. Arabie, Journal of
     Classification 1985

   * "Properties of the Hubert-Arabie adjusted Rand index" D. Steinley,
     Psychological Methods 2004

   * Wikipedia entry for the Rand index

   * :doi:`Minimum adjusted Rand index for two clusterings of a given size,
     2022, J. E. Chacón and A. I. Rastrojo <10.1007/s11634-022-00491-w>`

.. _mutual_info_score:

Mutual Information based scores
-------------------------------

Given the knowledge of the ground truth class assignments ``labels_true``
and our clustering algorithm assignments of the same samples
``labels_pred``, the **Mutual Information** is a function that measures the
**agreement** of the two assignments, ignoring permutations. Two different
normalized versions of this measure are available, **Normalized Mutual
Information (NMI)** and **Adjusted Mutual Information (AMI)**.
NMI is often used in the literature, while AMI was proposed more recently
and is **normalized against chance**::

   >>> from sklearn import metrics
   >>> labels_true = [0, 0, 0, 1, 1, 1]
   >>> labels_pred = [0, 0, 1, 1, 2, 2]

   >>> metrics.adjusted_mutual_info_score(labels_true, labels_pred)  # doctest: +SKIP
   0.22504

One can permute 0 and 1 in the predicted labels, rename 2 to 3 and get the
same score::

   >>> labels_pred = [1, 1, 0, 0, 3, 3]
   >>> metrics.adjusted_mutual_info_score(labels_true, labels_pred)  # doctest: +SKIP
   0.22504

All of :func:`mutual_info_score`, :func:`adjusted_mutual_info_score` and
:func:`normalized_mutual_info_score` are symmetric: swapping the argument
does not change the score. Thus they can be used as a **consensus
measure**::

   >>> metrics.adjusted_mutual_info_score(labels_pred, labels_true)  # doctest: +SKIP
   0.22504

Perfect labeling is scored 1.0::

   >>> labels_pred = labels_true[:]
   >>> metrics.adjusted_mutual_info_score(labels_true, labels_pred)  # doctest: +SKIP
   1.0
   >>> metrics.normalized_mutual_info_score(labels_true, labels_pred)  # doctest: +SKIP
   1.0

This is not true for ``mutual_info_score``, which is therefore harder to
judge::

   >>> metrics.mutual_info_score(labels_true, labels_pred)  # doctest: +SKIP
   0.69

Bad (e.g. independent labelings) have non-positive scores::

   >>> labels_true = [0, 1, 2, 0, 3, 4, 5, 1]
   >>> labels_pred = [1, 1, 0, 0, 2, 2, 2, 2]
   >>> metrics.adjusted_mutual_info_score(labels_true, labels_pred)  # doctest: +SKIP
   -0.10526

.. topic:: Advantages:

   - **Random (uniform) label assignments have an AMI score close to 0.0**
     for any value of ``n_clusters`` and ``n_samples`` (which is not the
     case for raw Mutual Information or the V-measure for instance).

   - **Upper bound of 1**: Values close to zero indicate two label
     assignments that are largely independent, while values close to one
     indicate significant agreement.
     Further, an AMI of exactly 1 indicates that the two label assignments
     are equal (with or without permutation).

.. topic:: Drawbacks:

   - Contrary to inertia, **MI-based measures require the knowledge of the
     ground truth classes**, which is almost never available in practice or
     requires manual assignment by human annotators (as in the supervised
     learning setting).

     However, MI-based measures can also be useful in a purely unsupervised
     setting as a building block for a Consensus Index that can be used for
     clustering model selection.

   - NMI and MI are not adjusted against chance.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_cluster_plot_adjusted_for_chance_measures.py`:
.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_cluster_plot_adjusted_for_chance_measures.py`:
  Analysis of the impact of the dataset size on the value of clustering
  measures for random assignments. This example also includes the Adjusted
  Rand Index.

.. dropdown:: Mathematical formulation

  Assume two label assignments (of the same N objects), :math:`U` and
  :math:`V`. Their entropy is the amount of uncertainty for a partition set,
  defined by:

  .. math:: H(U) = - \sum_{i=1}^{|U|}P(i)\log(P(i))

  where :math:`P(i) = |U_i| / N` is the probability that an object picked at
  random from :math:`U` falls into class :math:`U_i`. Likewise for :math:`V`:

  .. math:: H(V) = - \sum_{j=1}^{|V|}P'(j)\log(P'(j))

  with :math:`P'(j) = |V_j| / N`. The mutual information (MI) between
  :math:`U` and :math:`V` is calculated by:

  .. math:: \text{MI}(U, V) = \sum_{i=1}^{|U|}\sum_{j=1}^{|V|}P(i, j)\log\left(\frac{P(i,j)}{P(i)P'(j)}\right)

  where :math:`P(i, j) = |U_i \cap V_j| / N` is the probability that an object
  picked at random falls into both classes :math:`U_i` and :math:`V_j`. It can
  also be expressed in set cardinality formulation:

  .. math:: \text{MI}(U, V) = \sum_{i=1}^{|U|} \sum_{j=1}^{|V|} \frac{|U_i \cap V_j|}{N}\log\left(\frac{N|U_i \cap V_j|}{|U_i||V_j|}\right)

  The normalized mutual information is defined as

  .. math:: \text{NMI}(U, V) = \frac{\text{MI}(U, V)}{\text{mean}(H(U), H(V))}

  The mutual information, and also its normalized variant, is not adjusted for
  chance and will tend to increase as the number of different labels
  (clusters) increases, regardless of the actual amount of "mutual
  information" between the label assignments.
  The expected value of the mutual information can be calculated using the
  following equation [VEB2009]_. In this equation, :math:`a_i = |U_i|` (the
  number of elements in :math:`U_i`) and :math:`b_j = |V_j|` (the number of
  elements in :math:`V_j`).

  .. math:: E[\text{MI}(U,V)] = \sum_{i=1}^{|U|} \sum_{j=1}^{|V|}
     \sum_{n_{ij}=(a_i+b_j-N)^+}^{\min(a_i, b_j)} \frac{n_{ij}}{N}
     \log \left( \frac{N n_{ij}}{a_i b_j}\right)
     \frac{a_i!b_j!(N-a_i)!(N-b_j)!}{N!n_{ij}!(a_i-n_{ij})!(b_j-n_{ij})!(N-a_i-b_j+n_{ij})!}

  Using the expected value, the adjusted mutual information can then be
  calculated using a similar form to that of the adjusted Rand index:

  .. math:: \text{AMI} = \frac{\text{MI} - E[\text{MI}]}{\text{mean}(H(U), H(V)) - E[\text{MI}]}

  For normalized mutual information and adjusted mutual information, the
  normalizing value is typically some *generalized* mean of the entropies of
  each clustering. Various generalized means exist, and no firm rules exist
  for preferring one over the others. The decision is largely made on a
  field-by-field basis; for instance, in community detection the arithmetic
  mean is most common. Each normalizing method provides "qualitatively similar
  behaviours" [YAT2016]_. In our implementation, this is controlled by the
  ``average_method`` parameter.

  Vinh et al. (2010) named variants of NMI and AMI by their averaging method
  [VEB2010]_. Their 'sqrt' and 'sum' averages are the geometric and arithmetic
  means; we use these more broadly common names.

  .. rubric:: References

  * Strehl, Alexander, and Joydeep Ghosh (2002). "Cluster ensembles - a
    knowledge reuse framework for combining multiple partitions". Journal of
    Machine Learning Research 3: 583-617. doi:10.1162/153244303321897735.

  * Wikipedia entry for the (normalized) Mutual Information

  * Wikipedia entry for the Adjusted Mutual Information

  .. [VEB2009] Vinh, Epps, and Bailey, (2009). "Information theoretic measures
     for clusterings comparison". Proceedings of the 26th Annual International
     Conference on Machine Learning - ICML '09.
     doi:10.1145/1553374.1553511. ISBN 9781605585161.

  .. [VEB2010] Vinh, Epps, and Bailey, (2010). "Information Theoretic Measures
     for Clusterings Comparison: Variants, Properties, Normalization and
     Correction for Chance". JMLR.

  .. [YAT2016] Yang, Algesheimer, and Tessone, (2016). "A comparative analysis
     of community detection algorithms on artificial networks". Scientific
     Reports 6: 30750. doi:10.1038/srep30750.

.. _homogeneity_completeness:

Homogeneity, completeness and V-measure
---------------------------------------

Given the knowledge of the ground truth class assignments of the samples, it
is possible
to define some intuitive metrics using conditional entropy analysis.

In particular Rosenberg and Hirschberg (2007) define the following two
desirable objectives for any cluster assignment:

- **homogeneity**: each cluster contains only members of a single class.

- **completeness**: all members of a given class are assigned to the same
  cluster.

We can turn those concepts into scores :func:`homogeneity_score` and
:func:`completeness_score`. Both are bounded below by 0.0 and above by 1.0
(higher is better)::

  >>> from sklearn import metrics
  >>> labels_true = [0, 0, 0, 1, 1, 1]
  >>> labels_pred = [0, 0, 1, 1, 2, 2]

  >>> metrics.homogeneity_score(labels_true, labels_pred)
  0.66

  >>> metrics.completeness_score(labels_true, labels_pred)
  0.42

Their harmonic mean, called **V-measure**, is computed by
:func:`v_measure_score`::

  >>> metrics.v_measure_score(labels_true, labels_pred)
  0.516

This function's formula is as follows:

.. math:: v = \frac{(1 + \beta) \times \text{homogeneity} \times \text{completeness}}{(\beta \times \text{homogeneity} + \text{completeness})}

``beta`` defaults to a value of 1.0. Using a value of beta less than 1::

  >>> metrics.v_measure_score(labels_true, labels_pred, beta=0.6)
  0.547

attributes more weight to homogeneity, while using a value greater than 1::

  >>> metrics.v_measure_score(labels_true, labels_pred, beta=1.8)
  0.48

attributes more weight to completeness.
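The effect of ``beta`` follows directly from the weighted-mean formula above. A minimal sketch (``v_measure`` is a hypothetical helper written here for illustration, not a scikit-learn function), using a homogeneous-but-incomplete pair of scores such as :math:`h = 1.0` and :math:`c = 0.68`:

```python
def v_measure(h, c, beta=1.0):
    """Weighted harmonic mean of homogeneity h and completeness c."""
    return (1 + beta) * h * c / (beta * h + c)

h, c = 1.0, 0.68  # homogeneous but incomplete

# beta -> 0 recovers h; beta -> infinity approaches c
low = v_measure(h, c, beta=0.5)   # favors homogeneity (the larger score here)
default = v_measure(h, c)         # plain harmonic mean
high = v_measure(h, c, beta=2.0)  # favors completeness
```

Since homogeneity exceeds completeness in this example, ``low > default > high``.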
The V-measure is actually equivalent to the normalized mutual information
(NMI) discussed above, with the aggregation function being the arithmetic
mean [B2011]_.

Homogeneity, completeness and V-measure can be computed at once using
:func:`homogeneity_completeness_v_measure` as follows::

  >>> metrics.homogeneity_completeness_v_measure(labels_true, labels_pred)
  (0.67, 0.42, 0.52)

The following clustering assignment is slightly better, since it is
homogeneous but not complete::

  >>> labels_pred = [0, 0, 0, 1, 2, 2]
  >>> metrics.homogeneity_completeness_v_measure(labels_true, labels_pred)
  (1.0, 0.68, 0.81)

.. note::

  :func:`v_measure_score` is **symmetric**: it can be used to evaluate the
  **agreement** of two independent assignments on the same dataset.

  This is not the case for :func:`completeness_score` and
  :func:`homogeneity_score`: both are bound by the relationship::

    homogeneity_score(a, b) == completeness_score(b, a)

.. topic:: Advantages:

   - **Bounded scores**: 0.0 is as bad as it can be, 1.0 is a perfect score.

   - Intuitive interpretation: clustering with bad V-measure can be
     **qualitatively analyzed in terms of homogeneity and completeness** to
     better feel what 'kind' of mistakes are made by the assignment.

   - **No assumption is made on the cluster structure**: can be used to
     compare clustering algorithms such as k-means, which assumes isotropic
     blob shapes, with results of spectral clustering algorithms, which can
     find clusters with "folded" shapes.

.. topic:: Drawbacks:

   - The previously introduced metrics are **not normalized with regards to
     random labeling**: this means that depending on the number of samples,
     clusters and ground truth classes, a completely random labeling will not
     always yield the same values for homogeneity, completeness and hence
     v-measure. In particular **random labeling won't yield zero scores,
     especially when the number of clusters is large**.

     This problem can safely be ignored when the number of samples is more
     than a thousand and the number of clusters is less than 10. **For
     smaller sample sizes or larger number of clusters it is safer to use an
     adjusted index such as the Adjusted Rand Index (ARI)**.

   .. figure:: ../auto_examples/cluster/images/sphx_glr_plot_adjusted_for_chance_measures_001.png
      :target: ../auto_examples/cluster/plot_adjusted_for_chance_measures.html
      :align: center
      :scale: 100

   - These metrics **require the knowledge of the ground truth classes**,
     which is almost never available in practice or requires manual
     assignment by human annotators (as in the supervised learning setting).

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_cluster_plot_adjusted_for_chance_measures.py`:
  Analysis of the impact of the dataset size on the value of clustering
  measures for random assignments.

.. dropdown:: Mathematical formulation

  Homogeneity and completeness scores are formally given by:

  .. math:: h = 1 - \frac{H(C|K)}{H(C)}

  .. math:: c = 1 - \frac{H(K|C)}{H(K)}

  where :math:`H(C|K)` is the **conditional entropy of the classes given the
  cluster assignments** and is given by:

  .. math:: H(C|K) = - \sum_{c=1}^{|C|} \sum_{k=1}^{|K|} \frac{n_{c,k}}{n}
     \cdot \log\left(\frac{n_{c,k}}{n_k}\right)

  and :math:`H(C)` is the **entropy of the classes** and is given by:

  .. math:: H(C) = - \sum_{c=1}^{|C|} \frac{n_c}{n} \cdot \log\left(\frac{n_c}{n}\right)

  with :math:`n` the total number of samples, :math:`n_c` and :math:`n_k` the
  number of samples respectively belonging to class :math:`c` and cluster
  :math:`k`, and finally :math:`n_{c,k}` the number of samples from class
  :math:`c` assigned to cluster :math:`k`.

  The **conditional entropy of clusters given class** :math:`H(K|C)` and the
  **entropy of clusters** :math:`H(K)` are defined in a symmetric manner.

  Rosenberg and Hirschberg further define **V-measure** as the **harmonic
  mean of homogeneity and completeness**:

  .. math:: v = 2 \cdot \frac{h \cdot c}{h + c}

.. rubric:: References

* "V-Measure: A conditional entropy-based external cluster evaluation
  measure", Andrew Rosenberg and Julia Hirschberg, 2007

.. [B2011] "Identification and Characterization of Events in Social Media",
   Hila Becker, PhD Thesis.

.. _fowlkes_mallows_scores:

Fowlkes-Mallows scores
----------------------

The original Fowlkes-Mallows index (FMI) was intended to measure the
similarity between two clustering results, which is inherently an
unsupervised comparison. The supervised adaptation of the Fowlkes-Mallows
index (as implemented in :func:`sklearn.metrics.fowlkes_mallows_score`) can
be used when the ground truth class assignments of the samples are known.
The FMI is defined as the geometric mean of the pairwise precision and
recall:

.. math:: \text{FMI} = \frac{\text{TP}}{\sqrt{(\text{TP} + \text{FP}) (\text{TP} + \text{FN})}}

In the above formula:

* ``TP`` (**True Positive**): The number of pairs of points that are
  clustered together both in the true labels and in the predicted labels.

* ``FP`` (**False Positive**): The number of pairs of points that are
  clustered together in the predicted labels but not in the true labels.

* ``FN`` (**False Negative**): The number of pairs of points that are
  clustered together in the true labels but not in the predicted labels.

The score ranges from 0 to 1. A high value indicates a good similarity
between two clusters::

  >>> from sklearn import metrics
  >>> labels_true = [0, 0, 0, 1, 1, 1]
  >>> labels_pred = [0, 0, 1, 1, 2, 2]

  >>> metrics.fowlkes_mallows_score(labels_true, labels_pred)
  0.47140

One can permute 0 and 1 in the predicted labels, rename 2 to 3 and get the
same score::

  >>> labels_pred = [1, 1, 0, 0, 3, 3]
  >>> metrics.fowlkes_mallows_score(labels_true, labels_pred)
  0.47140

Perfect labeling is scored 1.0::

  >>> labels_pred = labels_true[:]
  >>> metrics.fowlkes_mallows_score(labels_true, labels_pred)
  1.0

Bad (e.g. independent) labelings have zero scores::

  >>> labels_true = [0, 1, 2, 0, 3, 4, 5, 1]
  >>> labels_pred = [1, 1, 0, 0, 2, 2, 2, 2]
  >>> metrics.fowlkes_mallows_score(labels_true, labels_pred)
  0.0

.. topic:: Advantages:

   - **Random (uniform) label assignments have a FMI score close to 0.0**
     for any value of ``n_clusters`` and ``n_samples`` (which is not the case
     for raw Mutual Information or the V-measure for instance).

   - **Upper-bounded at 1**: Values close to zero indicate two label
     assignments that are largely independent, while values close to one
     indicate significant agreement. Further, values of exactly 0 indicate
     **purely** independent label assignments, and a FMI of exactly 1
     indicates that the two label assignments are equal
     (with or without permutation).

   - **No assumption is made on the cluster structure**: can be used to
     compare clustering algorithms such as k-means, which assumes isotropic
     blob shapes, with results of spectral clustering algorithms, which can
     find clusters with "folded" shapes.

.. topic:: Drawbacks:

   - Contrary to inertia, **FMI-based measures require the knowledge of the
     ground truth classes**, which is almost never available in practice or
     requires manual assignment by human annotators (as in the supervised
     learning setting).

.. dropdown:: References

  * E. B. Fowlkes and C. L. Mallows, 1983. "A method for comparing two
    hierarchical clusterings". Journal of the American Statistical
    Association.
    https://www.tandfonline.com/doi/abs/10.1080/01621459.1983.10478008

  * Wikipedia entry for the Fowlkes-Mallows Index

.. _silhouette_coefficient:

Silhouette Coefficient
----------------------

If the ground truth labels are not known, evaluation must be performed using
the model itself. The Silhouette Coefficient
(:func:`sklearn.metrics.silhouette_score`) is an example of such an
evaluation, where a higher Silhouette Coefficient score relates to a model
with better defined clusters. The Silhouette Coefficient is defined for each
sample and is composed of two scores:

- **a**: The mean distance between a sample and all other points in the same
  class.

- **b**: The mean distance between a sample and all other points in the
  *next nearest cluster*.

The Silhouette Coefficient *s* for a single sample is then given as:

.. math:: s = \frac{b - a}{\max(a, b)}

The Silhouette Coefficient for a set of samples is given as the mean of the
Silhouette Coefficient for each sample::

  >>> from sklearn import metrics
  >>> from sklearn.metrics import pairwise_distances
  >>> from sklearn import datasets
  >>> X, y = datasets.load_iris(return_X_y=True)

In normal usage, the Silhouette Coefficient is applied to the results of a
cluster analysis::

  >>> import numpy as np
  >>> from sklearn.cluster import KMeans
  >>> kmeans_model = KMeans(n_clusters=3, random_state=1).fit(X)
  >>> labels = kmeans_model.labels_
  >>> metrics.silhouette_score(X, labels, metric='euclidean')
  0.55

.. topic:: Advantages:

   - The score is bounded between -1 for incorrect clustering and +1 for
     highly dense clustering. Scores around zero indicate overlapping
     clusters.

   - The score is higher when clusters are dense and well separated, which
     relates to a standard concept of a cluster.

.. topic:: Drawbacks:

   - The Silhouette Coefficient is generally higher for convex clusters than
     other concepts of clusters, such as density based clusters like those
     obtained through DBSCAN.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_cluster_plot_kmeans_silhouette_analysis.py`:
  In this example the silhouette analysis is used to choose an optimal value
  for ``n_clusters``.

.. dropdown:: References

  * Peter J. Rousseeuw (1987). :doi:`"Silhouettes: a Graphical Aid to the
    Interpretation and Validation of Cluster Analysis"
    <10.1016/0377-0427(87)90125-7>`. Computational and Applied Mathematics
    20: 53-65.

.. _calinski_harabasz_index:

Calinski-Harabasz Index
-----------------------

If the ground truth labels are not known, the Calinski-Harabasz index
(:func:`sklearn.metrics.calinski_harabasz_score`) - also known as the
Variance Ratio Criterion - can be used to evaluate the model, where a higher
Calinski-Harabasz score relates to a model with better defined clusters.

The index is the ratio of the sum of between-clusters dispersion and of
within-cluster dispersion for all clusters (where dispersion is defined as
the sum of distances squared)::

  >>> from sklearn import metrics
  >>> from sklearn.metrics import pairwise_distances
  >>> from sklearn import datasets
  >>> X, y = datasets.load_iris(return_X_y=True)

In normal usage, the Calinski-Harabasz index is applied to the results of a
cluster analysis::
  >>> import numpy as np
  >>> from sklearn.cluster import KMeans
  >>> kmeans_model = KMeans(n_clusters=3, random_state=1).fit(X)
  >>> labels = kmeans_model.labels_
  >>> metrics.calinski_harabasz_score(X, labels)
  561.59

.. topic:: Advantages:

   - The score is higher when clusters are dense and well separated, which
     relates to a standard concept of a cluster.

   - The score is fast to compute.

.. topic:: Drawbacks:

   - The Calinski-Harabasz index is generally higher for convex clusters
     than other concepts of clusters, such as density based clusters like
     those obtained through DBSCAN.

.. dropdown:: Mathematical formulation

  For a set of data :math:`E` of size :math:`n_E` which has been clustered
  into :math:`k` clusters, the Calinski-Harabasz score :math:`s` is defined
  as the ratio of the between-clusters dispersion mean and the
  within-cluster dispersion:

  .. math:: s = \frac{\mathrm{tr}(B_k)}{\mathrm{tr}(W_k)} \times \frac{n_E - k}{k - 1}

  where :math:`\mathrm{tr}(B_k)` is the trace of the between group dispersion
  matrix and :math:`\mathrm{tr}(W_k)` is the trace of the within-cluster
  dispersion matrix defined by:

  .. math:: W_k = \sum_{q=1}^k \sum_{x \in C_q} (x - c_q) (x - c_q)^T

  .. math:: B_k = \sum_{q=1}^k n_q (c_q - c_E) (c_q - c_E)^T

  with :math:`C_q` the set of points in cluster :math:`q`, :math:`c_q` the
  center of cluster :math:`q`, :math:`c_E` the center of :math:`E`, and
  :math:`n_q` the number of points in cluster :math:`q`.

.. dropdown:: References

  * Caliński, T., & Harabasz, J. (1974). "A Dendrite Method for Cluster
    Analysis". :doi:`Communications in Statistics-theory and Methods 3: 1-27
    <10.1080/03610927408827101>`.

.. _davies-bouldin_index:

Davies-Bouldin Index
--------------------

If the ground truth labels are not known, the Davies-Bouldin index
(:func:`sklearn.metrics.davies_bouldin_score`) can be used to evaluate the
model, where a lower Davies-Bouldin index relates to a model with better
separation between the clusters.

This index signifies the average 'similarity' between clusters, where the
similarity is a measure that compares the distance between clusters with the
size of the clusters themselves.

Zero is the lowest possible score. Values closer to zero indicate a better
partition.

In normal usage, the Davies-Bouldin index is applied to the results of a
cluster analysis as follows::

  >>> from sklearn import datasets
  >>> iris = datasets.load_iris()
  >>> X = iris.data
  >>> from sklearn.cluster import KMeans
  >>> from sklearn.metrics import davies_bouldin_score
  >>> kmeans = KMeans(n_clusters=3, random_state=1).fit(X)
  >>> labels = kmeans.labels_
  >>> davies_bouldin_score(X, labels)
  0.666

.. topic:: Advantages:

   - The computation of Davies-Bouldin is simpler than that of Silhouette
     scores.

   - The index is solely based on quantities and features inherent to the
     dataset, as its computation only uses point-wise distances.

.. topic:: Drawbacks:

   - The Davies-Bouldin index is generally higher for convex clusters than
     other concepts of clusters, such as density-based clusters like those
     obtained from DBSCAN.

   - The usage of centroid distance limits the distance metric to Euclidean
     space.

.. dropdown:: Mathematical formulation

  The index is defined as the average similarity between each cluster
  :math:`C_i` for :math:`i=1, ..., k` and its most similar one :math:`C_j`.
  In the context of this index, similarity is defined as a measure
  :math:`R_{ij}` that trades off:

  - :math:`s_i`, the average distance between each point of cluster
    :math:`i` and the centroid of that cluster -- also known as cluster
    diameter.

  - :math:`d_{ij}`, the distance between cluster centroids :math:`i` and
    :math:`j`.

  A simple choice to construct :math:`R_{ij}` so that it is nonnegative and
  symmetric is:

  .. math:: R_{ij} = \frac{s_i + s_j}{d_{ij}}

  Then the Davies-Bouldin index is defined as:

  .. math:: DB = \frac{1}{k} \sum_{i=1}^k \max_{i \neq j} R_{ij}
.. dropdown:: References

  * Davies, David L.; Bouldin, Donald W. (1979). :doi:`"A Cluster Separation
    Measure" <10.1109/TPAMI.1979.4766909>`. IEEE Transactions on Pattern
    Analysis and Machine Intelligence. PAMI-1 (2): 224-227.

  * Halkidi, Maria; Batistakis, Yannis; Vazirgiannis, Michalis (2001).
    :doi:`"On Clustering Validation Techniques" <10.1023/A:1012801612483>`.
    Journal of Intelligent Information Systems, 17(2-3), 107-145.

  * Wikipedia entry for the Davies-Bouldin index

.. _contingency_matrix:

Contingency Matrix
------------------

The contingency matrix (:func:`sklearn.metrics.cluster.contingency_matrix`)
reports the intersection cardinality for every true/predicted cluster pair.
The contingency matrix provides sufficient statistics for all clustering
metrics where the samples are independent and identically distributed and
one doesn't need to account for some instances not being clustered.

Here is an example::

  >>> from sklearn.metrics.cluster import contingency_matrix
  >>> x = ["a", "a", "a", "b", "b", "b"]
  >>> y = [0, 0, 1, 1, 2, 2]
  >>> contingency_matrix(x, y)
  array([[2, 1, 0],
         [0, 1, 2]])

The first row of the output array indicates that there are three samples
whose true cluster is "a". Of them, two are in predicted cluster 0, one is
in 1, and none is in 2. And the second row indicates that there are three
samples whose true cluster is "b". Of them, none is in predicted cluster 0,
one is in 1 and two are in 2.

A confusion matrix for classification is a square contingency matrix where
the order of rows and columns corresponds to a list of classes.

.. topic:: Advantages:

   - Allows one to examine the spread of each true cluster across predicted
     clusters and vice versa.

   - The contingency table calculated is typically utilized in the
     calculation of a similarity statistic (like the others listed in this
     document) between the two clusterings.

.. topic:: Drawbacks:

   - The contingency matrix is easy to interpret for a small number of
     clusters, but becomes very hard to interpret for a large number of
     clusters.

   - It doesn't give a single metric to use as an objective for clustering
     optimisation.

.. dropdown:: References

  * Wikipedia entry for the contingency matrix

.. _pair_confusion_matrix:

Pair Confusion Matrix
---------------------

The pair confusion matrix
(:func:`sklearn.metrics.cluster.pair_confusion_matrix`) is a 2x2 similarity
matrix

.. math:: C = \left[\begin{matrix} C_{00} & C_{01} \\ C_{10} & C_{11} \end{matrix}\right]

between two clusterings, computed by considering all pairs of samples and
counting pairs that are assigned into the same or into different clusters
under the true and predicted clusterings.

It has the following entries:

:math:`C_{00}` : number of pairs with both clusterings having the samples
not clustered together

:math:`C_{10}` : number of pairs with the true label clustering having the
samples clustered together but the other clustering not having the samples
clustered together

:math:`C_{01}` : number of pairs with the true label clustering not having
the samples clustered together but the other clustering having the samples
clustered together

:math:`C_{11}` : number of pairs with both clusterings having the samples
clustered together

Considering a pair of samples that is clustered together a positive pair,
then as in binary classification the count of true negatives is
:math:`C_{00}`, false negatives is :math:`C_{10}`, true positives is
:math:`C_{11}` and false positives is :math:`C_{01}`.
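The four entries can be reproduced by brute force over all sample pairs. The sketch below is an illustration, not the scikit-learn implementation; note that each unordered pair is counted twice, matching the convention of :func:`~sklearn.metrics.cluster.pair_confusion_matrix` and the examples that follow:

```python
from itertools import combinations

def pair_confusion(labels_true, labels_pred):
    """Return [[C00, C01], [C10, C11]]; each unordered pair counts twice."""
    C = [[0, 0], [0, 0]]
    for (a1, b1), (a2, b2) in combinations(zip(labels_true, labels_pred), 2):
        # row: pair together in the true labels; column: together in predicted
        C[int(a1 == a2)][int(b1 == b2)] += 2
    return C
```

For a perfect match all counts sit on the diagonal, e.g. ``pair_confusion([0, 0, 1, 1], [0, 0, 1, 1])`` gives ``[[8, 0], [0, 4]]``.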
Perfectly matching labelings have all non-zero entries on the diagonal
regardless of actual label values::

  >>> from sklearn.metrics.cluster import pair_confusion_matrix
  >>> pair_confusion_matrix([0, 0, 1, 1], [0, 0, 1, 1])
  array([[8, 0],
         [0, 4]])

::

  >>> pair_confusion_matrix([0, 0, 1, 1], [1, 1, 0, 0])
  array([[8, 0],
         [0, 4]])

Labelings that assign all classes members to the same clusters are complete
but may not always be pure, hence penalized, and have some off-diagonal
non-zero entries::

  >>> pair_confusion_matrix([0, 0, 1, 2], [0, 0, 1, 1])
  array([[8, 2],
         [0, 2]])

The matrix is not symmetric::

  >>> pair_confusion_matrix([0, 0, 1, 1], [0, 0, 1, 2])
  array([[8, 0],
         [2, 2]])

If classes members are completely split across different clusters, the
assignment is totally incomplete, hence the matrix has all zero diagonal
entries::

  >>> pair_confusion_matrix([0, 0, 0, 0], [0, 1, 2, 3])
  array([[ 0,  0],
         [12,  0]])

.. dropdown:: References

  * :doi:`"Comparing Partitions" <10.1007/BF01908075>` L. Hubert and P.
    Arabie, Journal of Classification 1985
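These pair counts are the same quantities behind pair-counting indices such as the Fowlkes-Mallows score discussed earlier: up to the factor of two from ordered pairs, :math:`C_{11}` corresponds to TP, :math:`C_{01}` to FP and :math:`C_{10}` to FN. A brute-force sketch (illustrative, not the scikit-learn implementation):

```python
from itertools import combinations
from math import sqrt

def fowlkes_mallows(labels_true, labels_pred):
    """FMI = TP / sqrt((TP + FP) * (TP + FN)) over unordered sample pairs."""
    tp = fp = fn = 0
    for (a1, b1), (a2, b2) in combinations(zip(labels_true, labels_pred), 2):
        same_true, same_pred = a1 == a2, b1 == b2
        if same_true and same_pred:
            tp += 1          # together in both clusterings
        elif same_pred:
            fp += 1          # together only in the predicted labels
        elif same_true:
            fn += 1          # together only in the true labels
    if tp == 0:
        return 0.0
    return tp / sqrt((tp + fp) * (tp + fn))
```

On the example used throughout this section, ``fowlkes_mallows([0, 0, 0, 1, 1, 1], [0, 0, 1, 1, 2, 2])`` reproduces the documented value of about 0.4714.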
.. \_biclustering: ============ Biclustering ============ Biclustering algorithms simultaneously cluster rows and columns of a data matrix. These clusters of rows and columns are known as biclusters. Each determines a submatrix of the original data matrix with some desired properties. For instance, given a matrix of shape ``(10, 10)``, one possible bicluster with three rows and two columns induces a submatrix of shape ``(3, 2)``:: >>> import numpy as np >>> data = np.arange(100).reshape(10, 10) >>> rows = np.array([0, 2, 3])[:, np.newaxis] >>> columns = np.array([1, 2]) >>> data[rows, columns] array([[ 1, 2], [21, 22], [31, 32]]) For visualization purposes, given a bicluster, the rows and columns of the data matrix may be rearranged to make the bicluster contiguous. Algorithms differ in how they define biclusters. Some of the common types include: \* constant values, constant rows, or constant columns \* unusually high or low values \* submatrices with low variance \* correlated rows or columns Algorithms also differ in how rows and columns may be assigned to biclusters, which leads to different bicluster structures. Block diagonal or checkerboard structures occur when rows and columns are divided into partitions. If each row and each column belongs to exactly one bicluster, then rearranging the rows and columns of the data matrix reveals the biclusters on the diagonal. Here is an example of this structure where biclusters have higher average values than the other rows and columns: .. figure:: ../auto\_examples/bicluster/images/sphx\_glr\_plot\_spectral\_coclustering\_003.png :target: ../auto\_examples/bicluster/images/sphx\_glr\_plot\_spectral\_coclustering\_003.png :align: center :scale: 50 An example of biclusters formed by partitioning rows and columns. In the checkerboard case, each row belongs to all column clusters, and each column belongs to all row clusters. 
Here is an example of this structure where the variance of the values within
each bicluster is small:

.. figure:: ../auto_examples/bicluster/images/sphx_glr_plot_spectral_biclustering_003.png
   :target: ../auto_examples/bicluster/images/sphx_glr_plot_spectral_biclustering_003.png
   :align: center
   :scale: 50

   An example of checkerboard biclusters.

After fitting a model, row and column cluster membership can be found in the
``rows_`` and ``columns_`` attributes. ``rows_[i]`` is a binary vector with
nonzero entries corresponding to rows that belong to bicluster ``i``.
Similarly, ``columns_[i]`` indicates which columns belong to bicluster ``i``.

Some models also have ``row_labels_`` and ``column_labels_`` attributes.
These models partition the rows and columns, such as in the block diagonal
and checkerboard bicluster structures.

.. note::

   Biclustering has many other names in different fields, including
   co-clustering, two-mode clustering, two-way clustering, block clustering,
   coupled two-way clustering, etc. The names of some algorithms, such as the
   Spectral Co-Clustering algorithm, reflect these alternate names.

.. currentmodule:: sklearn.cluster

.. _spectral_coclustering:

Spectral Co-Clustering
======================

The :class:`SpectralCoclustering` algorithm finds biclusters with values
higher than those in the corresponding other rows and columns. Each row and
each column belongs to exactly one bicluster, so rearranging the rows and
columns to make partitions contiguous reveals these high values along the
diagonal:

.. note::

   The algorithm treats the input data matrix as a bipartite graph: the rows
   and columns of the matrix correspond to the two sets of vertices, and each
   entry corresponds to an edge between a row and a column. The algorithm
   approximates the normalized cut of this graph to find heavy subgraphs.
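The ``rows_``, ``columns_``, and ``row_labels_`` attributes described above
can be inspected after fitting; a minimal sketch using
:class:`SpectralCoclustering` on data generated with
:func:`~sklearn.datasets.make_biclusters` (the sizes below are illustrative):

```python
import numpy as np
from sklearn.cluster import SpectralCoclustering
from sklearn.datasets import make_biclusters

# Generate a 300x300 matrix with 5 implanted biclusters, then recover them.
data, true_rows, true_cols = make_biclusters(
    shape=(300, 300), n_clusters=5, noise=5, shuffle=True, random_state=0
)
model = SpectralCoclustering(n_clusters=5, random_state=0)
model.fit(data)

# rows_[i] / columns_[i] are boolean masks over rows / columns of bicluster i.
print(model.rows_.shape)     # (5, 300)
print(model.columns_.shape)  # (5, 300)
# row_labels_ gives the partition label of each row.
print(np.unique(model.row_labels_))
```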
Mathematical formulation
------------------------

An approximate solution to the optimal normalized cut may be found via the
generalized eigenvalue decomposition of the Laplacian of the graph. Usually
this would mean working directly with the Laplacian matrix. If
the original data matrix :math:`A` has shape :math:`m \times n`, the
Laplacian matrix for the corresponding bipartite graph has shape
:math:`(m + n) \times (m + n)`. However, in this case it is possible to work
directly with :math:`A`, which is smaller and more efficient.

The input matrix :math:`A` is preprocessed as follows:

.. math::
    A_n = R^{-1/2} A C^{-1/2}

where :math:`R` is the diagonal matrix with entry :math:`i` equal to
:math:`\sum_{j} A_{ij}` and :math:`C` is the diagonal matrix with entry
:math:`j` equal to :math:`\sum_{i} A_{ij}`.

The singular value decomposition, :math:`A_n = U \Sigma V^\top`, provides the
partitions of the rows and columns of :math:`A`. A subset of the left
singular vectors gives the row partitions, and a subset of the right singular
vectors gives the column partitions.

The :math:`\ell = \lceil \log_2 k \rceil` singular vectors, starting from the
second, provide the desired partitioning information. They are used to form
the matrix :math:`Z`:

.. math::
    Z = \begin{bmatrix} R^{-1/2} U \\
                        C^{-1/2} V
        \end{bmatrix}

where the columns of :math:`U` are :math:`u_2, \dots, u_{\ell + 1}`, and
similarly for :math:`V`.

Then the rows of :math:`Z` are clustered using :ref:`k-means <k_means>`. The
first ``n_rows`` labels provide the row partitioning, and the remaining
``n_columns`` labels provide the column partitioning.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_bicluster_plot_spectral_coclustering.py`: A
  simple example showing how to generate a data matrix with biclusters and
  apply this method to it.

* :ref:`sphx_glr_auto_examples_bicluster_plot_bicluster_newsgroups.py`: An
  example of finding biclusters in the twenty newsgroup dataset.

.. rubric:: References

* Dhillon, Inderjit S, 2001. :doi:`Co-clustering documents and words using
  bipartite spectral graph partitioning <10.1145/502512.502550>`
.. _spectral_biclustering:

Spectral Biclustering
=====================

The :class:`SpectralBiclustering` algorithm assumes that the input data
matrix has a hidden checkerboard structure. The rows and columns of a matrix
with this structure may be partitioned so that the entries of any bicluster
in the Cartesian product of row clusters and column clusters are
approximately constant. For instance, if there are two row partitions and
three column partitions, each row will belong to three biclusters, and each
column will belong to two biclusters.

The algorithm partitions the rows and columns of a matrix so that a
corresponding blockwise-constant checkerboard matrix provides a good
approximation to the original matrix.

Mathematical formulation
------------------------

The input matrix :math:`A` is first normalized to make the checkerboard
pattern more obvious. There are three possible methods:

1. *Independent row and column normalization*, as in Spectral Co-Clustering.
   This method makes the rows sum to a constant and the columns sum to a
   different constant.

2. *Bistochastization*: repeated row and column normalization until
   convergence. This method makes both rows and columns sum to the same
   constant.

3. *Log normalization*: the log of the data matrix is computed:
   :math:`L = \log A`. Then the column mean :math:`\overline{L_{i \cdot}}`,
   row mean :math:`\overline{L_{\cdot j}}`, and overall mean
   :math:`\overline{L_{\cdot \cdot}}` of :math:`L` are computed. The final
   matrix is computed according to the formula

   .. math::
       K_{ij} = L_{ij} - \overline{L_{i \cdot}} - \overline{L_{\cdot j}} +
       \overline{L_{\cdot \cdot}}

After normalizing, the first few singular vectors are computed, just as in
the Spectral Co-Clustering algorithm.

If log normalization was used, all the singular vectors are meaningful.
However, if independent normalization or bistochastization were used, the
first singular vectors, :math:`u_1` and :math:`v_1`, are discarded.
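As a sanity check, the log-normalization step (method 3 above) can be
sketched in a few lines of NumPy; the data and variable names below are
illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.uniform(10, 100, size=(6, 4))  # strictly positive data

L = np.log(A)
row_mean = L.mean(axis=1, keepdims=True)  # mean over j for fixed i
col_mean = L.mean(axis=0, keepdims=True)  # mean over i for fixed j
overall_mean = L.mean()

# K_ij = L_ij - mean_i - mean_j + overall mean (double-centering)
K = L - row_mean - col_mean + overall_mean

# After this step, every row and every column of K sums to ~0.
print(np.allclose(K.sum(axis=0), 0), np.allclose(K.sum(axis=1), 0))
```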
From now on, the "first" singular vectors refers to
:math:`u_2 \dots u_{p+1}` and :math:`v_2 \dots
v_{p+1}` except in the case of log normalization.

Given these singular vectors, they are ranked according to which can be best
approximated by a piecewise-constant vector. The approximations for each
vector are found using one-dimensional k-means and scored using the Euclidean
distance. Some subset of the best left and right singular vectors are
selected.

Next, the data is projected to this best subset of singular vectors and
clustered. For instance, if :math:`p` singular vectors were calculated, the
:math:`q` best are found as described, where :math:`q < p`.

.. _biclustering_evaluation:

.. currentmodule:: sklearn.metrics

Biclustering evaluation
=======================

There are two ways of evaluating a biclustering result: internal and
external. Internal measures, such as cluster stability, rely only on the data
and the result themselves. Currently there are no internal bicluster measures
in scikit-learn. External measures refer to an external source of
information, such as the true solution. When working with real data the true
solution is usually unknown, but biclustering artificial data may be useful
for evaluating algorithms precisely because the true solution is known.

To compare a set of found biclusters to the set of true biclusters, two
similarity measures are needed: a similarity measure for individual
biclusters, and a way to combine these individual similarities into an
overall score.

To compare individual biclusters, several measures have been used. For now,
only the Jaccard index is implemented:

.. math::
    J(A, B) = \frac{|A \cap B|}{|A| + |B| - |A \cap B|}

where :math:`A` and :math:`B` are biclusters, and :math:`|A \cap B|` is the
number of elements in their intersection. The Jaccard index achieves its
minimum of 0 when the biclusters do not overlap at all and its maximum of 1
when they are identical.

Several methods have been developed to compare two sets of biclusters. For
now, only :func:`consensus_score` (Hochreiter et. al., 2010) is available:
1. Compute bicluster similarities for pairs of biclusters, one in each set,
   using the Jaccard index or a similar measure.

2. Assign biclusters from one set to another in a one-to-one fashion to
   maximize the sum of their similarities. This step is performed using
   :func:`scipy.optimize.linear_sum_assignment`, which uses a modified
   Jonker-Volgenant algorithm.

3. The final sum of similarities is divided by the size of the larger set.

The minimum consensus score, 0, occurs when all pairs of biclusters are
totally dissimilar. The maximum score, 1, occurs when both sets are
identical.

.. rubric:: References

* Hochreiter, Bodenhofer, et. al., 2010. FABIA: factor analysis for
  bicluster acquisition.
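A minimal sketch of the evaluation loop described above, comparing the
biclusters recovered by :class:`~sklearn.cluster.SpectralCoclustering`
against the ground truth returned by
:func:`~sklearn.datasets.make_biclusters`:

```python
from sklearn.cluster import SpectralCoclustering
from sklearn.datasets import make_biclusters
from sklearn.metrics import consensus_score

# Ground-truth row/column indicators are returned alongside the data.
data, rows, columns = make_biclusters(
    shape=(300, 300), n_clusters=5, noise=5, shuffle=True, random_state=0
)
model = SpectralCoclustering(n_clusters=5, random_state=0).fit(data)

# Score is in [0, 1]; 1 means the found and true bicluster sets coincide.
score = consensus_score(model.biclusters_, (rows, columns))
print(round(score, 2))
```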
.. _naive_bayes:

===========
Naive Bayes
===========

.. currentmodule:: sklearn.naive_bayes

Naive Bayes methods are a set of supervised learning algorithms based on
applying Bayes' theorem with the "naive" assumption of conditional
independence between every pair of features given the value of the class
variable. Bayes' theorem states the following relationship, given class
variable :math:`y` and dependent feature vector :math:`x_1` through
:math:`x_n`:

.. math::
    P(y \mid x_1, \dots, x_n) = \frac{P(y) P(x_1, \dots, x_n \mid y)}
                                     {P(x_1, \dots, x_n)}

Using the naive conditional independence assumption that

.. math::
    P(x_i | y, x_1, \dots, x_{i-1}, x_{i+1}, \dots, x_n) = P(x_i | y),

for all :math:`i`, this relationship is simplified to

.. math::
    P(y \mid x_1, \dots, x_n) = \frac{P(y) \prod_{i=1}^{n} P(x_i \mid y)}
                                     {P(x_1, \dots, x_n)}

Since :math:`P(x_1, \dots, x_n)` is constant given the input, we can use the
following classification rule:

.. math::
    P(y \mid x_1, \dots, x_n) \propto P(y) \prod_{i=1}^{n} P(x_i \mid y)

    \Downarrow

    \hat{y} = \arg\max_y P(y) \prod_{i=1}^{n} P(x_i \mid y),

and we can use Maximum A Posteriori (MAP) estimation to estimate
:math:`P(y)` and :math:`P(x_i \mid y)`; the former is then the relative
frequency of class :math:`y` in the training set.

The different naive Bayes classifiers differ mainly by the assumptions they
make regarding the distribution of :math:`P(x_i \mid y)`.

In spite of their apparently over-simplified assumptions, naive Bayes
classifiers have worked quite well in many real-world situations, famously
document classification and spam filtering. They require a small amount of
training data to estimate the necessary parameters. (For theoretical reasons
why naive Bayes works well, and on which types of data it does, see the
references below.)

Naive Bayes learners and classifiers can be extremely fast compared to more
sophisticated methods.
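The MAP classification rule above can be evaluated by hand for a tiny
Bernoulli-feature model; every number below is illustrative only:

```python
import numpy as np

# Toy model: 2 classes, 3 binary features, hand-picked parameters.
prior = np.array([0.6, 0.4])              # P(y)
likelihood = np.array([[0.8, 0.1, 0.3],   # P(x_i = 1 | y = 0)
                       [0.2, 0.7, 0.9]])  # P(x_i = 1 | y = 1)

x = np.array([1, 0, 1])  # observed feature vector

# P(y) * prod_i P(x_i | y), plugging in x_i = 1 or x_i = 0.
per_feature = np.where(x == 1, likelihood, 1 - likelihood)
posterior = prior * per_feature.prod(axis=1)
y_hat = posterior.argmax()
print(y_hat)  # prints 0: class 0 scores 0.1296 vs 0.0216 for class 1
```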
The decoupling of the class conditional feature distributions means that each
distribution can be independently estimated as a one-dimensional
distribution. This in turn helps to alleviate problems stemming from the
curse of dimensionality.

On the flip side, although naive Bayes is known as a decent classifier, it is
known to be a bad estimator, so the probability outputs from
``predict_proba`` are not to be taken too seriously.

.. dropdown:: References

   * H. Zhang (2004). The optimality of Naive Bayes. Proc. FLAIRS.

.. _gaussian_naive_bayes:

Gaussian Naive Bayes
--------------------

:class:`GaussianNB` implements the Gaussian Naive Bayes algorithm for
classification. The likelihood of the features is assumed to be Gaussian:

.. math::
    P(x_i \mid y) = \frac{1}{\sqrt{2\pi\sigma^2_y}}
    \exp\left(-\frac{(x_i - \mu_y)^2}{2\sigma^2_y}\right)

The parameters :math:`\sigma_y` and :math:`\mu_y` are estimated using maximum
likelihood. ::

    >>> from sklearn.datasets import load_iris
    >>> from sklearn.model_selection import train_test_split
    >>> from sklearn.naive_bayes import GaussianNB
    >>> X, y = load_iris(return_X_y=True)
    >>> X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)
    >>> gnb = GaussianNB()
    >>> y_pred = gnb.fit(X_train, y_train).predict(X_test)
    >>> print("Number of mislabeled points out of a total %d points : %d"
    ...       % (X_test.shape[0], (y_test != y_pred).sum()))
    Number of mislabeled points out of a total 75 points : 4

.. _multinomial_naive_bayes:

Multinomial Naive Bayes
-----------------------

:class:`MultinomialNB` implements the naive Bayes algorithm for multinomially
distributed data, and is one of the two classic naive Bayes variants used in
text classification (where the data are typically represented as word vector
counts, although tf-idf vectors are also known to work well in practice).
The distribution is parametrized by vectors
:math:`\theta_y = (\theta_{y1}, \ldots, \theta_{yn})` for each class
:math:`y`, where :math:`n` is the number of
features (in text classification, the size of the vocabulary) and
:math:`\theta_{yi}` is the probability :math:`P(x_i \mid y)` of feature
:math:`i` appearing in a sample belonging to class :math:`y`.

The parameters :math:`\theta_y` are estimated by a smoothed version of
maximum likelihood, i.e. relative frequency counting:

.. math::
    \hat{\theta}_{yi} = \frac{N_{yi} + \alpha}{N_y + \alpha n}

where :math:`N_{yi} = \sum_{x \in T} x_i` is the number of times feature
:math:`i` appears in all samples of class :math:`y` in the training set
:math:`T`, and :math:`N_{y} = \sum_{i=1}^{n} N_{yi}` is the total count of
all features for class :math:`y`.

The smoothing priors :math:`\alpha \ge 0` account for features not present in
the learning samples and prevent zero probabilities in further computations.
Setting :math:`\alpha = 1` is called Laplace smoothing, while
:math:`\alpha < 1` is called Lidstone smoothing.

.. _complement_naive_bayes:

Complement Naive Bayes
----------------------

:class:`ComplementNB` implements the complement naive Bayes (CNB) algorithm.
CNB is an adaptation of the standard multinomial naive Bayes (MNB) algorithm
that is particularly suited for imbalanced data sets. Specifically, CNB uses
statistics from the *complement* of each class to compute the model's
weights. The inventors of CNB show empirically that the parameter estimates
for CNB are more stable than those for MNB. Further, CNB regularly
outperforms MNB (often by a considerable margin) on text classification
tasks.

.. dropdown:: Weights calculation

   The procedure for calculating the weights is as follows:
   .. math::
       \hat{\theta}_{ci} = \frac{\alpha_i + \sum_{j:y_j \neq c} d_{ij}}
                                {\alpha + \sum_{j:y_j \neq c} \sum_{k} d_{kj}}

       w_{ci} = \log \hat{\theta}_{ci}

       w_{ci} = \frac{w_{ci}}{\sum_{j} |w_{cj}|}

   where the summations are over all documents :math:`j` not in class
   :math:`c`, :math:`d_{ij}` is either the count or tf-idf value of term
   :math:`i` in document :math:`j`, :math:`\alpha_i` is a smoothing
   hyperparameter like that found in MNB, and
   :math:`\alpha = \sum_{i} \alpha_i`. The second normalization addresses the
   tendency for longer documents to dominate parameter estimates in MNB. The
   classification rule is:

   .. math::
       \hat{c} = \arg\min_c \sum_{i} t_i w_{ci}

   i.e., a document is assigned to the class that is the *poorest* complement
   match.

.. dropdown:: References

   * Rennie, J. D., Shih, L., Teevan, J., & Karger, D. R. (2003). Tackling
     the poor assumptions of naive bayes text classifiers. In ICML (Vol. 3,
     pp. 616-623).

.. _bernoulli_naive_bayes:

Bernoulli Naive Bayes
---------------------

:class:`BernoulliNB` implements the naive Bayes training and classification
algorithms for data that is distributed according to multivariate Bernoulli
distributions; i.e., there may be multiple features but each one is assumed
to be a binary-valued (Bernoulli, boolean) variable. Therefore, this class
requires samples to be represented as binary-valued feature vectors; if
handed any other kind of data, a :class:`BernoulliNB` instance may binarize
its input (depending on the ``binarize`` parameter).

The decision rule for Bernoulli naive Bayes is based on

.. math::
    P(x_i \mid y) = P(x_i = 1 \mid y) x_i + (1 - P(x_i = 1 \mid y)) (1 - x_i)

which differs from multinomial NB's rule in that it explicitly penalizes the
non-occurrence of a feature :math:`i` that is an indicator for class
:math:`y`, where the multinomial variant would simply ignore a non-occurring
feature.
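A minimal sketch contrasting :class:`BernoulliNB` with
:class:`MultinomialNB` on the same count data; note that ``binarize=0.0``
(the default) turns counts into presence/absence features before the
Bernoulli model is fit. The random data below is illustrative only:

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB, MultinomialNB

rng = np.random.RandomState(1)
X = rng.randint(5, size=(6, 100))  # small word-count matrix
y = np.array([1, 2, 3, 4, 4, 5])

# BernoulliNB thresholds the counts at `binarize` (default 0.0), so only
# feature presence/absence is modeled; MultinomialNB uses the raw counts.
bnb = BernoulliNB().fit(X, y)
mnb = MultinomialNB().fit(X, y)

print(bnb.predict(X[2:3]))  # [3]
print(mnb.predict(X[2:3]))  # [3]
```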
In the case of text classification, word occurrence vectors (rather than word
count vectors) may be used to train and use this classifier.
:class:`BernoulliNB` might perform better on some datasets, especially those
with shorter documents. It is advisable to evaluate both
models, if time permits.

.. dropdown:: References

   * C.D. Manning, P. Raghavan and H. Schütze (2008). Introduction to
     Information Retrieval. Cambridge University Press, pp. 234-265.

   * A. McCallum and K. Nigam (1998). A comparison of event models for Naive
     Bayes text classification. Proc. AAAI/ICML-98 Workshop on Learning for
     Text Categorization, pp. 41-48.

   * V. Metsis, I. Androutsopoulos and G. Paliouras (2006). Spam filtering
     with Naive Bayes -- Which Naive Bayes? 3rd Conf. on Email and Anti-Spam
     (CEAS).

.. _categorical_naive_bayes:

Categorical Naive Bayes
-----------------------

:class:`CategoricalNB` implements the categorical naive Bayes algorithm for
categorically distributed data. It assumes that each feature, which is
described by the index :math:`i`, has its own categorical distribution.

For each feature :math:`i` in the training set :math:`X`,
:class:`CategoricalNB` estimates a categorical distribution for each feature
:math:`i` of :math:`X` conditioned on the class :math:`y`. The index set of
the samples is defined as :math:`J = \{ 1, \dots, m \}`, with :math:`m` as
the number of samples.

.. dropdown:: Probability calculation

   The probability of category :math:`t` in feature :math:`i` given class
   :math:`c` is estimated as:

   .. math::
       P(x_i = t \mid y = c \: ;\, \alpha) =
       \frac{N_{tic} + \alpha}{N_{c} + \alpha n_i},

   where :math:`N_{tic} = |\{j \in J \mid x_{ij} = t, y_j = c\}|` is the
   number of times category :math:`t` appears in the samples :math:`x_{i}`,
   which belong to class :math:`c`,
   :math:`N_{c} = |\{ j \in J \mid y_j = c\}|` is the number of samples with
   class :math:`c`, :math:`\alpha` is a smoothing parameter and :math:`n_i`
   is the number of available categories of feature :math:`i`.
:class:`CategoricalNB` assumes that the sample matrix :math:`X` is encoded
(for instance with the help of
:class:`~sklearn.preprocessing.OrdinalEncoder`) such that all categories for
each feature :math:`i` are represented with numbers
:math:`0, ..., n_i - 1` where :math:`n_i` is the number of available
categories of feature :math:`i`.

Out-of-core naive Bayes model fitting
-------------------------------------

Naive Bayes models can be used to tackle large scale classification problems
for which the full training set might not fit in memory. To handle this case,
:class:`MultinomialNB`, :class:`BernoulliNB`, and :class:`GaussianNB` expose
a ``partial_fit`` method that can be used incrementally as done with other
classifiers as demonstrated in
:ref:`sphx_glr_auto_examples_applications_plot_out_of_core_classification.py`.
All naive Bayes classifiers support sample weighting.

Contrary to the ``fit`` method, the first call to ``partial_fit`` needs to be
passed the list of all the expected class labels.

For an overview of available strategies in scikit-learn, see also the
:ref:`out-of-core learning <scaling_strategies>` documentation.

.. note::

   The ``partial_fit`` method call of naive Bayes models introduces some
   computational overhead. It is recommended to use data chunk sizes that are
   as large as possible, that is as the available RAM allows.
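A minimal sketch of incremental fitting with ``partial_fit``; the streamed
chunks below are synthetic and only illustrate the calling convention. Note
that the first call must receive ``classes`` enumerating every label that
will ever appear:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.RandomState(0)
clf = GaussianNB()
classes = np.array([0, 1, 2])

# Stream the data in chunks instead of loading it all at once.
for chunk in range(5):
    y_chunk = rng.randint(0, 3, size=100)
    X_chunk = rng.randn(100, 4) + y_chunk[:, None]  # class-dependent mean
    if chunk == 0:
        # The first call must list all expected classes.
        clf.partial_fit(X_chunk, y_chunk, classes=classes)
    else:
        clf.partial_fit(X_chunk, y_chunk)

print(clf.class_count_)  # samples seen per class, summing to 500
```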
.. _cross_decomposition:

===================
Cross decomposition
===================

.. currentmodule:: sklearn.cross_decomposition

The cross decomposition module contains **supervised** estimators for
dimensionality reduction and regression, belonging to the "Partial Least
Squares" family.

.. figure:: ../auto_examples/cross_decomposition/images/sphx_glr_plot_compare_cross_decomposition_001.png
   :target: ../auto_examples/cross_decomposition/plot_compare_cross_decomposition.html
   :scale: 75%
   :align: center

Cross decomposition algorithms find the fundamental relations between two
matrices (X and Y). They are latent variable approaches to modeling the
covariance structures in these two spaces. They will try to find the
multidimensional direction in the X space that explains the maximum
multidimensional variance direction in the Y space. In other words, PLS
projects both `X` and `Y` into a lower-dimensional subspace such that the
covariance between `transformed(X)` and `transformed(Y)` is maximal.

PLS draws similarities with Principal Component Regression (PCR), where the
samples are first projected into a lower-dimensional subspace, and the
targets `y` are predicted using `transformed(X)`. One issue with PCR is that
the dimensionality reduction is unsupervised, and may lose some important
variables: PCR would keep the features with the most variance, but it's
possible that features with small variances are relevant for predicting the
target. In a way, PLS allows for the same kind of dimensionality reduction,
but by taking into account the targets `y`. An illustration of this fact is
given in the following example:
:ref:`sphx_glr_auto_examples_cross_decomposition_plot_pcr_vs_pls.py`.

Apart from CCA, the PLS estimators are particularly suited when the matrix of
predictors has more variables than observations, and when there is
multicollinearity among the features.
By contrast, standard linear regression would fail in these cases unless it
is regularized.

Classes included in this module are :class:`PLSRegression`,
:class:`PLSCanonical`, :class:`CCA` and :class:`PLSSVD`.

PLSCanonical
------------

We here describe the algorithm used in :class:`PLSCanonical`. The other
estimators use variants of this algorithm, and are detailed below. We
recommend section [1]_ for more details and comparisons between these
algorithms. In [1]_, :class:`PLSCanonical` corresponds to "PLSW2A".

Given two centered matrices :math:`X \in \mathbb{R}^{n \times d}` and
:math:`Y \in \mathbb{R}^{n \times t}`, and a number of components :math:`K`,
:class:`PLSCanonical` proceeds as follows:

Set :math:`X_1` to :math:`X` and :math:`Y_1` to :math:`Y`. Then, for each
:math:`k \in [1, K]`:

- a) compute :math:`u_k \in \mathbb{R}^d` and :math:`v_k \in \mathbb{R}^t`,
  the first left and right singular vectors of the cross-covariance matrix
  :math:`C = X_k^T Y_k`. :math:`u_k` and :math:`v_k` are called the
  *weights*. By definition, :math:`u_k` and :math:`v_k` are chosen so that
  they maximize the covariance between the projected :math:`X_k` and the
  projected target, that is :math:`\text{Cov}(X_k u_k, Y_k v_k)`.
- b) Project :math:`X_k` and :math:`Y_k` on the singular vectors to obtain
  *scores*: :math:`\xi_k = X_k u_k` and :math:`\omega_k = Y_k v_k`.
- c) Regress :math:`X_k` on :math:`\xi_k`, i.e. find a vector
  :math:`\gamma_k \in \mathbb{R}^d` such that the rank-1 matrix
  :math:`\xi_k \gamma_k^T` is as close as possible to :math:`X_k`. Do the
  same on :math:`Y_k` with :math:`\omega_k` to obtain :math:`\delta_k`. The
  vectors :math:`\gamma_k` and :math:`\delta_k` are called the *loadings*.
- d) *Deflate* :math:`X_k` and :math:`Y_k`, i.e. subtract the rank-1
  approximations: :math:`X_{k+1} = X_k - \xi_k \gamma_k^T`, and
  :math:`Y_{k+1} = Y_k - \omega_k \delta_k^T`.
At the end, we have approximated :math:`X` as a sum of rank-1 matrices:
:math:`X = \Xi \Gamma^T` where :math:`\Xi \in \mathbb{R}^{n \times K}`
contains the scores in its columns, and
:math:`\Gamma^T \in \mathbb{R}^{K \times d}` contains the loadings in its
rows. Similarly for :math:`Y`, we have :math:`Y = \Omega \Delta^T`.

Note that the scores matrices
:math:`\Xi` and :math:`\Omega` correspond to the projections of the training
data :math:`X` and :math:`Y`, respectively.

Step *a)* may be performed in two ways: either by computing the whole SVD of
:math:`C` and only retaining the singular vectors with the biggest singular
values, or by directly computing the singular vectors using the power method
(cf section 11.3 in [1]_), which corresponds to the `'nipals'` option of the
`algorithm` parameter.

.. dropdown:: Transforming data

   To transform :math:`X` into :math:`\bar{X}`, we need to find a projection
   matrix :math:`P` such that :math:`\bar{X} = XP`. We know that for the
   training data, :math:`\Xi = XP`, and :math:`X = \Xi \Gamma^T`. Setting
   :math:`P = U(\Gamma^T U)^{-1}` where :math:`U` is the matrix with the
   :math:`u_k` in the columns, we have
   :math:`XP = X U(\Gamma^T U)^{-1} = \Xi (\Gamma^T U) (\Gamma^T U)^{-1} =
   \Xi` as desired. The rotation matrix :math:`P` can be accessed from the
   `x_rotations_` attribute.

   Similarly, :math:`Y` can be transformed using the rotation matrix
   :math:`V(\Delta^T V)^{-1}`, accessed via the `y_rotations_` attribute.

.. dropdown:: Predicting the targets `Y`

   To predict the targets of some data :math:`X`, we are looking for a
   coefficient matrix :math:`\beta \in \mathbb{R}^{d \times t}` such that
   :math:`Y = X\beta`.

   The idea is to try to predict the transformed targets :math:`\Omega` as a
   function of the transformed samples :math:`\Xi`, by computing
   :math:`\alpha \in \mathbb{R}` such that :math:`\Omega = \alpha \Xi`. Then,
   we have :math:`Y = \Omega \Delta^T = \alpha \Xi \Delta^T`, and since
   :math:`\Xi` is the transformed training data we have that
   :math:`Y = X \alpha P \Delta^T`, and as a result the coefficient matrix
   :math:`\beta = \alpha P \Delta^T`.

   :math:`\beta` can be accessed through the `coef_` attribute.
PLSSVD
------

:class:`PLSSVD` is a simplified version of :class:`PLSCanonical` described
earlier: instead of iteratively deflating the matrices :math:`X_k` and
:math:`Y_k`, :class:`PLSSVD` computes the SVD of :math:`C = X^TY` only
*once*, and stores the `n_components` singular vectors corresponding to the
biggest singular values in the matrices `U` and `V`, corresponding to the
`x_weights_` and `y_weights_` attributes.

Here, the transformed data is simply `transformed(X) = XU` and
`transformed(Y) = YV`.

If `n_components == 1`, :class:`PLSSVD` and :class:`PLSCanonical` are
strictly equivalent.

PLSRegression
-------------

The :class:`PLSRegression` estimator is similar to :class:`PLSCanonical` with
`algorithm='nipals'`, with 2 significant differences:

- at step a) in the power method to compute :math:`u_k` and :math:`v_k`,
  :math:`v_k` is never normalized.
- at step c), the targets :math:`Y_k` are approximated using the projection
  of :math:`X_k` (i.e. :math:`\xi_k`) instead of the projection of
  :math:`Y_k` (i.e. :math:`\omega_k`). In other words, the loadings
  computation is different. As a result, the deflation in step d) will also
  be affected.

These two modifications affect the output of `predict` and `transform`, which
are not the same as for :class:`PLSCanonical`. Also, while the number of
components is limited by `min(n_samples, n_features, n_targets)` in
:class:`PLSCanonical`, here the limit is the rank of :math:`X^TX`, i.e.
`min(n_samples, n_features)`.

:class:`PLSRegression` is also known as PLS1 (single targets) and PLS2
(multiple targets). Much like :class:`~sklearn.linear_model.Lasso`,
:class:`PLSRegression` is a form of regularized linear regression where the
number of components controls the strength of the regularization.

Canonical Correlation Analysis
------------------------------

Canonical Correlation Analysis was developed prior and independently to PLS.
But it turns out that :class:`CCA` is a special case of PLS, and corresponds
to PLS in "Mode B" in the literature. :class:`CCA` differs from
:class:`PLSCanonical` in the way the weights :math:`u_k` and :math:`v_k` are
computed in the power method of step a). Details can be found in section 10
of [1]_.

Since :class:`CCA` involves the inversion of :math:`X_k^TX_k` and
:math:`Y_k^TY_k`, this estimator can be unstable if the number of features or
targets is greater than the number of samples.
.. rubric:: References

.. [1] *A survey of Partial Least Squares (PLS) methods, with emphasis on
   the two-block case*, JA Wegelin

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_cross_decomposition_plot_compare_cross_decomposition.py`
* :ref:`sphx_glr_auto_examples_cross_decomposition_plot_pcr_vs_pls.py`
.. _tree:

==============
Decision Trees
==============

.. currentmodule:: sklearn.tree

**Decision Trees (DTs)** are a non-parametric supervised learning method used
for :ref:`classification <tree_classification>` and
:ref:`regression <tree_regression>`. The goal is to create a model that
predicts the value of a target variable by learning simple decision rules
inferred from the data features. A tree can be seen as a piecewise constant
approximation.

For instance, in the example below, decision trees learn from data to
approximate a sine curve with a set of if-then-else decision rules. The
deeper the tree, the more complex the decision rules and the fitter the
model.

.. figure:: ../auto_examples/tree/images/sphx_glr_plot_tree_regression_001.png
   :target: ../auto_examples/tree/plot_tree_regression.html
   :scale: 75
   :align: center

Some advantages of decision trees are:

- Simple to understand and to interpret. Trees can be visualized.

- Requires little data preparation. Other techniques often require data
  normalization, dummy variables need to be created and blank values to be
  removed. Some tree and algorithm combinations support
  :ref:`missing values <tree_missing_value_support>`.

- The cost of using the tree (i.e., predicting data) is logarithmic in the
  number of data points used to train the tree.

- Able to handle both numerical and categorical data. However, the
  scikit-learn implementation does not support categorical variables for now.
  Other techniques are usually specialized in analyzing datasets that have
  only one type of variable. See :ref:`algorithms <tree_algorithms>` for more
  information.

- Able to handle multi-output problems.

- Uses a white box model. If a given situation is observable in a model, the
  explanation for the condition is easily explained by boolean logic. By
  contrast, in a black box model (e.g., in an artificial neural network),
  results may be more difficult to interpret.

- Possible to validate a model using statistical tests, which makes it
  possible to account for the reliability of the model.
- Performs well even if its assumptions are somewhat violated by the true
  model from which the data were generated.

The disadvantages of decision trees include:

- Decision-tree learners can create over-complex trees that do not
  generalize the data well. This is called overfitting. Mechanisms such as
  pruning, setting the minimum number of samples required at a leaf node or
  setting the maximum depth of the tree are necessary to avoid this problem.

- Decision trees can be unstable because small variations in the data might
  result in a completely different tree being generated. This problem is
  mitigated by using decision trees within an ensemble.

- Predictions of decision trees are neither smooth nor continuous, but
  piecewise constant approximations as seen in the above figure. Therefore,
  they are not good at extrapolation.

- The problem of learning an optimal decision tree is known to be
  NP-complete under several aspects of optimality and even for simple
  concepts. Consequently, practical decision-tree learning algorithms are
  based on heuristic algorithms such as the greedy algorithm where locally
  optimal decisions are made at each node. Such algorithms cannot guarantee
  to return the globally optimal decision tree. This can be mitigated by
  training multiple trees in an ensemble learner, where the features and
  samples are randomly sampled with replacement.

- There are concepts that are hard to learn because decision trees do not
  express them easily, such as XOR, parity or multiplexer problems.

- Decision tree learners create biased trees if some classes dominate. It is
  therefore recommended to balance the dataset prior to fitting with the
  decision tree.

.. _tree_classification:

Classification
==============

:class:`DecisionTreeClassifier` is a class capable of performing multi-class
classification on a dataset.
As with other classifiers, :class:`DecisionTreeClassifier` takes as input two
arrays: an array X, sparse or dense, of shape ``(n_samples, n_features)``
holding the training samples, and an array Y of integer values, shape
``(n_samples,)``, holding the class labels for the training samples::

    >>> from sklearn import tree
    >>> X = [[0, 0], [1, 1]]
    >>> Y = [0, 1]
    >>> clf = tree.DecisionTreeClassifier()
    >>> clf = clf.fit(X, Y)

After being fitted, the model can then be used to predict the class of
samples::

    >>> clf.predict([[2., 2.]])
    array([1])

In case there are multiple classes with the same and highest probability, the
classifier will predict the class with the lowest index amongst those
classes.

As an alternative to outputting a specific class, the probability of each
class can be predicted, which is the fraction of training samples of the
class in a leaf::

    >>> clf.predict_proba([[2., 2.]])
    array([[0., 1.]])

:class:`DecisionTreeClassifier` is capable of both binary (where the labels
are [-1, 1]) classification and multiclass (where the labels are
[0, ..., K-1]) classification.

Using the Iris dataset, we can construct a tree as follows::

    >>> from sklearn.datasets import load_iris
    >>> from sklearn import tree
    >>> iris = load_iris()
    >>> X, y = iris.data, iris.target
    >>> clf = tree.DecisionTreeClassifier()
    >>> clf = clf.fit(X, y)

Once trained, you can plot the tree with the :func:`plot_tree` function::

    >>> tree.plot_tree(clf)
    [...]

.. figure:: ../auto_examples/tree/images/sphx_glr_plot_iris_dtc_002.png
   :target: ../auto_examples/tree/plot_iris_dtc.html
   :scale: 75
   :align: center

.. dropdown:: Alternative ways to export trees

    We can also export the tree in Graphviz format using the
    :func:`export_graphviz` exporter.
    If you use the conda package manager, the graphviz binaries and the
    python package can be installed with `conda install python-graphviz`.

    Alternatively, binaries for graphviz can be downloaded from the graphviz
    project homepage, and the Python wrapper installed from pypi with
    `pip install graphviz`.

    Below is an example graphviz export of the above tree trained on the
    entire iris dataset; the results are saved in an output file
    `iris.pdf`::

        >>> import graphviz  # doctest: +SKIP
        >>> dot_data = tree.export_graphviz(clf, out_file=None)  # doctest: +SKIP
        >>> graph = graphviz.Source(dot_data)  # doctest: +SKIP
        >>> graph.render("iris")  # doctest: +SKIP

    The :func:`export_graphviz` exporter also supports a variety of
    aesthetic options, including coloring nodes by their class (or value for
    regression) and using explicit variable and class names if desired.
    Jupyter notebooks also render these plots inline automatically::

        >>> dot_data = tree.export_graphviz(clf, out_file=None,  # doctest: +SKIP
        ...                                 feature_names=iris.feature_names,  # doctest: +SKIP
        ...                                 class_names=iris.target_names,  # doctest: +SKIP
        ...                                 filled=True, rounded=True,  # doctest: +SKIP
        ...                                 special_characters=True)  # doctest: +SKIP
        >>> graph = graphviz.Source(dot_data)  # doctest: +SKIP
        >>> graph  # doctest: +SKIP

    .. only:: html

       .. figure:: ../images/iris.svg
          :align: center

    .. only:: latex

       .. figure:: ../images/iris.pdf
          :align: center

    .. figure:: ../auto_examples/tree/images/sphx_glr_plot_iris_dtc_001.png
       :target: ../auto_examples/tree/plot_iris_dtc.html
       :align: center
       :scale: 75

    Alternatively, the tree can also be exported in textual format with the
    function :func:`export_text`.
    This method doesn't require the installation of external libraries and
    is more compact:

        >>> from sklearn.datasets import load_iris
        >>> from sklearn.tree import DecisionTreeClassifier
        >>> from sklearn.tree import export_text
        >>> iris = load_iris()
        >>> decision_tree = DecisionTreeClassifier(random_state=0, max_depth=2)
        >>> decision_tree = decision_tree.fit(iris.data, iris.target)
        >>> r = export_text(decision_tree, feature_names=iris['feature_names'])
        >>> print(r)
        |--- petal width (cm) <= 0.80
        |   |--- class: 0
        |--- petal width (cm) >  0.80
        |   |--- petal width (cm) <= 1.75
        |   |   |--- class: 1
        |   |--- petal width (cm) >  1.75
        |   |   |--- class: 2

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_tree_plot_iris_dtc.py`
* :ref:`sphx_glr_auto_examples_tree_plot_unveil_tree_structure.py`
.. _tree_regression:

Regression
==========

.. figure:: ../auto_examples/tree/images/sphx_glr_plot_tree_regression_001.png
   :target: ../auto_examples/tree/plot_tree_regression.html
   :scale: 75
   :align: center

Decision trees can also be applied to regression problems, using the
:class:`DecisionTreeRegressor` class.

As in the classification setting, the fit method will take as argument arrays
X and y, only that in this case y is expected to have floating point values
instead of integer values::

    >>> from sklearn import tree
    >>> X = [[0, 0], [2, 2]]
    >>> y = [0.5, 2.5]
    >>> clf = tree.DecisionTreeRegressor()
    >>> clf = clf.fit(X, y)
    >>> clf.predict([[1, 1]])
    array([0.5])

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_tree_plot_tree_regression.py`

.. _tree_multioutput:

Multi-output problems
=====================

A multi-output problem is a supervised learning problem with several outputs
to predict, that is when Y is a 2d array of shape
``(n_samples, n_outputs)``.

When there is no correlation between the outputs, a very simple way to solve
this kind of problem is to build n independent models, i.e. one for each
output, and then to use those models to independently predict each one of
the n outputs. However, because it is likely that the output values related
to the same input are themselves correlated, an often better way is to build
a single model capable of predicting all n outputs simultaneously. First, it
requires lower training time since only a single estimator is built. Second,
the generalization accuracy of the resulting estimator may often be
increased.

With regard to decision trees, this strategy can readily be used to support
multi-output problems.
This requires the following changes:

- Store n output values in leaves, instead of 1;
- Use splitting criteria that compute the average reduction across all n
  outputs.

This module offers support for multi-output problems by implementing this
strategy in both :class:`DecisionTreeClassifier` and
:class:`DecisionTreeRegressor`. If a decision tree is fit on an output array
Y of shape ``(n_samples, n_outputs)`` then the resulting estimator will:

* Output n_output values upon ``predict``;
* Output a list of n_output arrays of class probabilities upon
  ``predict_proba``.

The use of multi-output trees for regression is demonstrated in
:ref:`sphx_glr_auto_examples_tree_plot_tree_regression.py`. In this example,
the input X is a single real value and the outputs Y are the sine and cosine
of X.

.. figure:: ../auto_examples/tree/images/sphx_glr_plot_tree_regression_002.png
   :target: ../auto_examples/tree/plot_tree_regression.html
   :scale: 75
   :align: center

The use of multi-output trees for classification is demonstrated in
:ref:`sphx_glr_auto_examples_miscellaneous_plot_multioutput_face_completion.py`.
In this example, the inputs X are the pixels of the upper half of faces and
the outputs Y are the pixels of the lower half of those faces.

.. figure:: ../auto_examples/miscellaneous/images/sphx_glr_plot_multioutput_face_completion_001.png
   :target: ../auto_examples/miscellaneous/plot_multioutput_face_completion.html
   :scale: 75
   :align: center

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_miscellaneous_plot_multioutput_face_completion.py`

.. rubric:: References

* M. Dumont et al, *Fast multi-class image annotation with random subwindows
  and multiple output randomized trees*, International Conference on Computer
  Vision Theory and Applications 2009
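The ``predict`` output shape described above can be checked on a small
synthetic task (made-up data mirroring the sine/cosine example):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(100, 1))
# two correlated outputs: sine and cosine of the single input feature
Y = np.column_stack([np.sin(X[:, 0]), np.cos(X[:, 0])])

reg = DecisionTreeRegressor(max_depth=4).fit(X, Y)
print(reg.predict([[0.5]]).shape)   # (1, 2): n_outputs values per sample
```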
.. _tree_complexity:

Complexity
==========

The following table shows the worst-case complexity estimates for a balanced
binary tree:

+----------+----------------------------------------------------------------------+----------------------------------------+
| Splitter | Total training cost                                                  | Total inference cost                   |
+==========+======================================================================+========================================+
| "best"   | :math:`\mathcal{O}(n_{features} \, n^2_{samples} \log(n_{samples}))` | :math:`\mathcal{O}(\log(n_{samples}))` |
+----------+----------------------------------------------------------------------+----------------------------------------+
| "random" | :math:`\mathcal{O}(n_{features} \, n^2_{samples})`                   | :math:`\mathcal{O}(\log(n_{samples}))` |
+----------+----------------------------------------------------------------------+----------------------------------------+

In general, the training cost to construct a balanced binary tree **at each
node** is

.. math::

    \mathcal{O}(n_{features}n_{samples}\log(n_{samples})) + \mathcal{O}(n_{features}n_{samples})

The first term is the cost of sorting :math:`n_{samples}` repeated for
:math:`n_{features}`. The second term is the linear scan over candidate split
points to find the feature that offers the largest reduction in the impurity
criterion.
The latter is sub-leading for the greedy splitter strategy "best", and is
therefore typically discarded.

Regardless of the splitting strategy, after summing the cost over **all
internal nodes**, the total complexity scales linearly with
:math:`n_{nodes} = n_{leaves} - 1`, which is
:math:`\mathcal{O}(n_{samples})` in the worst case, that is, when the tree is
grown until each sample ends up in its own leaf. Many implementations such as
scikit-learn use efficient caching tricks to keep track of the general order
of indices at each node, so that the features do not need to be re-sorted at
each node; the time complexity of these implementations is then just
:math:`\mathcal{O}(n_{features}n_{samples}\log(n_{samples}))` [1]_.

Inference cost is independent of the splitter strategy. It depends only on
the tree depth, :math:`\mathcal{O}(\text{depth})`. In an approximately
balanced binary tree, each split halves the data, and the number of such
halvings grows with the depth as powers of two. If this process continues
until each sample is isolated in its own leaf, the resulting depth is
:math:`\mathcal{O}(\log(n_{samples}))`.

.. rubric:: References

.. [1] S. Raschka, *Stat 451: Machine learning lecture notes*, University of
   Wisconsin-Madison (2020).

Tips on practical use
=====================

* Decision trees tend to overfit on data with a large number of features.
  Getting the right ratio of samples to number of features is important,
  since a tree with few samples in a high dimensional space is very likely
  to overfit.
* Consider performing dimensionality reduction (PCA, ICA, or
  :ref:`feature_selection`) beforehand to give your tree a better chance of
  finding features that are discriminative.

* :ref:`sphx_glr_auto_examples_tree_plot_unveil_tree_structure.py` will help
  in gaining more insights about how the decision tree makes predictions,
  which is important for understanding the important features in the data.

* Visualize your tree as you are training by using the ``export`` function.
  Use ``max_depth=3`` as an initial tree depth to get a feel for how the
  tree is fitting to your data, and then increase the depth.

* Remember that the number of samples required to populate the tree doubles
  for each additional level the tree grows to. Use ``max_depth`` to control
  the size of the tree to prevent overfitting.

* Use ``min_samples_split`` or ``min_samples_leaf`` to ensure that multiple
  samples inform every decision in the tree, by controlling which splits
  will be considered. A very small number will usually mean the tree will
  overfit, whereas a large number will prevent the tree from learning the
  data. Try ``min_samples_leaf=5`` as an initial value. If the sample size
  varies greatly, a float number can be used as a percentage in these two
  parameters. While ``min_samples_split`` can create arbitrarily small
  leaves, ``min_samples_leaf`` guarantees that each leaf has a minimum size,
  avoiding low-variance, over-fit leaf nodes in regression problems. For
  classification with few classes, ``min_samples_leaf=1`` is often the best
  choice.

  Note that ``min_samples_split`` considers samples directly and
  independently of ``sample_weight``, if provided (e.g. a node with m
  weighted samples is still treated as having exactly m samples). Consider
  ``min_weight_fraction_leaf`` or ``min_impurity_decrease`` if accounting
  for sample weights is required at splits.
* Balance your dataset before training to prevent the tree from being biased
  toward the classes that are dominant. Class balancing can be done by
  sampling an equal number of samples from each class, or preferably by
  normalizing the sum of the sample weights (``sample_weight``) for each
  class to the same value. Also note that weight-based pre-pruning criteria,
  such as ``min_weight_fraction_leaf``, will then be
  less biased toward dominant classes than criteria that are not aware of
  the sample weights, like ``min_samples_leaf``.

* If the samples are weighted, it will be easier to optimize the tree
  structure using weight-based pre-pruning criteria such as
  ``min_weight_fraction_leaf``, which ensure that leaf nodes contain at
  least a fraction of the overall sum of the sample weights.

* All decision trees use ``np.float32`` arrays internally. If training data
  is not in this format, a copy of the dataset will be made.

* If the input matrix X is very sparse, it is recommended to convert it to
  sparse ``csc_matrix`` before calling fit and to sparse ``csr_matrix``
  before calling predict. Training time can be orders of magnitude faster
  for a sparse matrix input compared to a dense matrix when features have
  zero values in most of the samples.

.. _tree_algorithms:

Tree algorithms: ID3, C4.5, C5.0 and CART
=========================================

What are all the various decision tree algorithms and how do they differ
from each other? Which one is implemented in scikit-learn?

.. dropdown:: Various decision tree algorithms

    ID3_ (Iterative Dichotomiser 3) was developed in 1986 by Ross Quinlan.
    The algorithm creates a multiway tree, finding for each node (i.e. in a
    greedy manner) the categorical feature that will yield the largest
    information gain for categorical targets. Trees are grown to their
    maximum size and then a pruning step is usually applied to improve the
    ability of the tree to generalize to unseen data.
    C4.5 is the successor to ID3 and removed the restriction that features
    must be categorical by dynamically defining a discrete attribute (based
    on numerical variables) that partitions the continuous attribute value
    into a discrete set of intervals. C4.5 converts the trained trees (i.e.
    the output of the ID3 algorithm) into sets of if-then rules. The
    accuracy of each rule is then evaluated to determine the order in which
    they should be applied. Pruning is done by removing a rule's
    precondition if the accuracy of the rule improves without it.

    C5.0 is Quinlan's latest version, released under a proprietary license.
    It uses less memory and builds smaller rulesets than C4.5 while being
    more accurate.

    CART (Classification and Regression Trees) is very similar to C4.5, but
    it differs in that it supports numerical target variables (regression)
    and does not compute rule sets. CART constructs binary trees using the
    feature and threshold that yield the largest information gain at each
    node.

scikit-learn uses an optimized version of the CART algorithm; however, the
scikit-learn implementation does not support categorical variables for now.

.. _ID3: https://en.wikipedia.org/wiki/ID3_algorithm

.. _tree_mathematical_formulation:

Mathematical formulation
========================

Given training vectors :math:`x_i \in R^n`, i=1,..., l and a label vector
:math:`y \in R^l`, a decision tree recursively partitions the feature space
such that samples with the same labels or similar target values are grouped
together.

Let the data at node :math:`m` be represented by :math:`Q_m` with
:math:`n_m` samples. For each candidate split :math:`\theta = (j, t_m)`
consisting of a feature :math:`j` and threshold :math:`t_m`, partition the
data into :math:`Q_m^{left}(\theta)` and :math:`Q_m^{right}(\theta)` subsets
.. math::

    Q_m^{left}(\theta) = \{(x, y) | x_j \leq t_m\}

    Q_m^{right}(\theta) = Q_m \setminus Q_m^{left}(\theta)

The quality of a candidate split of node :math:`m` is then computed using an
impurity function or loss function :math:`H()`, the choice of which depends
on the task being solved (classification or regression)

.. math::

    G(Q_m, \theta) = \frac{n_m^{left}}{n_m} H(Q_m^{left}(\theta))
    + \frac{n_m^{right}}{n_m} H(Q_m^{right}(\theta))

Select the parameters that minimize the impurity

.. math::

    \theta^* = \operatorname{argmin}_\theta G(Q_m, \theta)

The strategy to choose the split at each node is controlled by the
`splitter` parameter:

* With the **best splitter** (default, ``splitter='best'``),
  :math:`\theta^*` is found by performing a **greedy exhaustive search** over
  all available features and all possible thresholds :math:`t_m` (i.e.
  midpoints between sorted, distinct feature values), selecting the pair
  that exactly minimizes :math:`G(Q_m, \theta)`.

* With the **random splitter** (``splitter='random'``), :math:`\theta^*` is
  found by sampling a **single random candidate threshold** for each
  available feature. This is a stochastic approximation of the greedy search
  that effectively reduces computation time (see :ref:`tree_complexity`).

After choosing the optimal split :math:`\theta^*` at node :math:`m`, the
same splitting procedure is applied recursively to each partition
:math:`Q_m^{left}(\theta^*)` and :math:`Q_m^{right}(\theta^*)` until a
stopping condition is reached, such as:

* the maximum allowable depth is reached (`max_depth`);
* :math:`n_m` is smaller than `min_samples_split`;
* the impurity decrease for this split is smaller than
  `min_impurity_decrease`.

See the respective estimator docstring for other stopping conditions.

Classification criteria
-----------------------

If a target is a classification outcome taking on values 0, 1, ..., K-1, for
node :math:`m`, let
.. math::

    p_{mk} = \frac{1}{n_m} \sum_{y \in Q_m} I(y = k)

be the proportion of class k observations in node :math:`m`. If :math:`m` is
a terminal node, `predict_proba` for this region is set to :math:`p_{mk}`.
Common measures of impurity are the following.

Gini:

.. math::

    H(Q_m) = \sum_k p_{mk} (1 - p_{mk})

Log Loss or Entropy:

.. math::

    H(Q_m) = - \sum_k p_{mk} \log(p_{mk})

.. dropdown:: Shannon entropy

    The entropy criterion computes the Shannon entropy of the possible
    classes. It takes the class frequencies of the training data points that
    reached a given leaf :math:`m` as their probability. Using the **Shannon
    entropy as tree node splitting criterion is equivalent to minimizing the
    log loss** (also known as cross-entropy and multinomial deviance)
    between the true labels :math:`y_i` and the probabilistic predictions
    :math:`T_k(x_i)` of the tree model :math:`T` for class :math:`k`.

    To see this, first recall that the log loss of a tree model :math:`T`
    computed on a dataset :math:`D` is defined as follows:

    .. math::

        \mathrm{LL}(D, T) = -\frac{1}{n} \sum_{(x_i, y_i) \in D}
        \sum_k I(y_i = k) \log(T_k(x_i))

    where :math:`D` is a training dataset of :math:`n` pairs
    :math:`(x_i, y_i)`.

    In a classification tree, the predicted class probabilities within leaf
    nodes are constant, that is: for all :math:`(x_i, y_i) \in Q_m`, one has
    :math:`T_k(x_i) = p_{mk}` for each class :math:`k`.

    This property makes it possible to rewrite :math:`\mathrm{LL}(D, T)` as
    the sum of the Shannon entropies computed for each leaf of :math:`T`,
    weighted by the number of training data points that reached each leaf:

    .. math::

        \mathrm{LL}(D, T) = \sum_{m \in T} \frac{n_m}{n} H(Q_m)

Regression criteria
-------------------

If the target is a continuous value, then for node :math:`m`, common
criteria to minimize when determining locations for future splits are Mean
Squared Error (MSE or L2 error), Poisson deviance, and Mean Absolute Error
(MAE or L1 error).
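The node impurities :math:`H` and the weighted split quality :math:`G` used
in this section can be sketched with plain NumPy (a toy illustration only;
scikit-learn's actual splitters implement these criteria in Cython with
sample-weight handling):

```python
import numpy as np

def gini(y):
    """Classification impurity H(Q) = sum_k p_k (1 - p_k)."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return float(np.sum(p * (1.0 - p)))

def split_quality(x, y, t, impurity=gini):
    """G(Q, theta): children impurities weighted by their share of samples."""
    left, right = y[x <= t], y[x > t]
    n = len(y)
    return len(left) / n * impurity(left) + len(right) / n * impurity(right)

def mse(y):
    """Regression impurity: squared error around the node mean."""
    return float(np.mean((y - y.mean()) ** 2))

def mae(y):
    """Regression impurity: absolute error around the node median."""
    return float(np.mean(np.abs(y - np.median(y))))

x = np.array([1.0, 2.0, 3.0, 10.0, 11.0, 12.0])
y = np.array([0, 0, 0, 1, 1, 1])
print(split_quality(x, y, 5.0))            # 0.0: both children are pure
print(round(split_quality(x, y, 2.5), 2))  # 0.25

y_reg = np.array([1.0, 2.0, 3.0, 4.0])
print(mse(y_reg))   # 1.25
print(mae(y_reg))   # 1.0
```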
MSE and Poisson deviance both set the predicted value of terminal nodes to
the learned mean value :math:`\bar{y}_m` of the node, whereas the MAE sets
the predicted value of terminal nodes to the median :math:`median(y)_m`.

Mean Squared Error:

.. math::

    \bar{y}_m = \frac{1}{n_m} \sum_{y \in Q_m} y

    H(Q_m) = \frac{1}{n_m} \sum_{y \in Q_m} (y - \bar{y}_m)^2

Mean Poisson deviance:

.. math::

    H(Q_m) = \frac{2}{n_m} \sum_{y \in Q_m} (y \log\frac{y}{\bar{y}_m}
    - y + \bar{y}_m)

Setting `criterion="poisson"` might be a good choice if your target is a
count or a frequency (count per some unit). In any case, :math:`y >= 0` is a
necessary condition to use this criterion. For performance reasons the
actual implementation minimizes the half mean Poisson deviance, i.e. the
mean Poisson deviance divided by 2.

Mean Absolute Error:

.. math::

    median(y)_m = \underset{y \in Q_m}{\mathrm{median}}(y)

    H(Q_m) = \frac{1}{n_m} \sum_{y \in Q_m} |y - median(y)_m|

Note that the MAE criterion is 3-6x slower to fit than the MSE criterion as
of version 1.8.

.. _tree_missing_value_support:

Missing Values Support
======================

:class:`DecisionTreeClassifier` and :class:`DecisionTreeRegressor` have
built-in support for missing values with `splitter='best'`, where the splits
are determined in a greedy fashion. :class:`ExtraTreeClassifier` and
:class:`ExtraTreeRegressor` have built-in support for missing values with
`splitter='random'`, where the splits are determined randomly. For more
details on how the splitter differs on non-missing values, see the
:ref:`Forest section <forest>`.

The criteria supported when there are missing values are `'gini'`,
`'entropy'`, or `'log_loss'` for classification, and `'squared_error'` or
`'poisson'` for regression.

First we will describe how :class:`DecisionTreeClassifier` and
:class:`DecisionTreeRegressor` handle missing values in the data.
For each potential threshold on the non-missing data, the splitter will
evaluate the split with all the missing values going to the left node or the
right node.

Decisions are made as follows:

- By default when predicting, the samples with missing values are classified
  with the class used in the split found during training::

    >>> from sklearn.tree import DecisionTreeClassifier
    >>> import numpy as np

    >>> X = np.array([0, 1, 6, np.nan]).reshape(-1, 1)
    >>> y = [0, 0, 1, 1]

    >>> tree = DecisionTreeClassifier(random_state=0).fit(X, y)
    >>> tree.predict(X)
    array([0, 0, 1, 1])

- If the criterion evaluation is the same for both nodes, then the tie for
  missing values at predict time is broken by going to the right node. The
  splitter also checks the split where all the missing values go to one child
  and the non-missing values go to the other::

    >>> from sklearn.tree import DecisionTreeClassifier
    >>> import numpy as np

    >>> X = np.array([np.nan, -1, np.nan, 1]).reshape(-1, 1)
    >>> y = [0, 0, 1, 1]

    >>> tree = DecisionTreeClassifier(random_state=0, max_depth=1).fit(X, y)

    >>> X_test = np.array([np.nan]).reshape(-1, 1)
    >>> tree.predict(X_test)
    array([1])

- If no missing values are seen during training for a given feature, then
  during prediction missing values are mapped to the child with the most
  samples::

    >>> from sklearn.tree import DecisionTreeClassifier
    >>> import numpy as np

    >>> X = np.array([0, 1, 2, 3]).reshape(-1, 1)
    >>> y = [0, 1, 1, 1]

    >>> tree = DecisionTreeClassifier(random_state=0).fit(X, y)

    >>> X_test = np.array([np.nan]).reshape(-1, 1)
    >>> tree.predict(X_test)
    array([1])

:class:`ExtraTreeClassifier` and :class:`ExtraTreeRegressor` handle missing
values in a slightly different way. When splitting a node, a random threshold
will be chosen to split the non-missing values on. Then the non-missing values
will be sent to the left and right child based on the randomly selected
threshold, while the missing values will also be randomly sent to the left or
right child.
This is repeated for every feature considered at each split. The best split
among these is chosen.

During prediction, the treatment of missing values is the same as that of the
decision tree:

- By default when predicting, the samples with missing values are classified
  with the class used in the split found during training.

- If no missing values are seen during training for a given feature, then
  during prediction missing values are mapped to the child with the most
  samples.

.. _minimal_cost_complexity_pruning:

Minimal Cost-Complexity Pruning
===============================

Minimal cost-complexity pruning is an algorithm used to prune a tree to avoid
over-fitting, described in Chapter 3 of [BRE]_. This algorithm is
parameterized by :math:`\alpha\ge0`, known as the complexity parameter. The
complexity parameter is used to define the cost-complexity measure,
:math:`R_\alpha(T)`, of a given tree :math:`T`:

.. math::

    R_\alpha(T) = R(T) + \alpha|\widetilde{T}|

where :math:`|\widetilde{T}|` is the number of terminal nodes in :math:`T` and
:math:`R(T)` is traditionally defined as the total misclassification rate of
the terminal nodes. Alternatively, scikit-learn uses the total sample-weighted
impurity of the terminal nodes for :math:`R(T)`. As shown above, the impurity
of a node depends on the criterion. Minimal cost-complexity pruning finds the
subtree of :math:`T` that minimizes :math:`R_\alpha(T)`.

The cost-complexity measure of a single node is
:math:`R_\alpha(t)=R(t)+\alpha`. The branch, :math:`T_t`, is defined to be a
tree where node :math:`t` is its root. In general, the impurity of a node is
greater than the sum of impurities of its terminal nodes,
:math:`R(T_t) < R(t)`.
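The effective alphas produced by this procedure can be inspected with
:meth:`DecisionTreeClassifier.cost_complexity_pruning_path`, and a pruned tree
is obtained by passing one of them as ``ccp_alpha``. A short sketch (the
``ccp_alpha=0.02`` value is an arbitrary illustration, not a recommendation):

```python
# Inspect the pruning path computed by minimal cost-complexity pruning,
# then fit a pruned tree with a chosen `ccp_alpha`.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

clf = DecisionTreeClassifier(random_state=0)
path = clf.cost_complexity_pruning_path(X, y)
print(path.ccp_alphas)   # increasing sequence of effective alphas
print(path.impurities)   # total leaf impurity R(T) of each pruned subtree

# Larger ccp_alpha -> more aggressive pruning -> fewer leaves.
pruned = DecisionTreeClassifier(random_state=0, ccp_alpha=0.02).fit(X, y)
full = DecisionTreeClassifier(random_state=0).fit(X, y)
print(pruned.get_n_leaves(), "<=", full.get_n_leaves())
```

In practice ``ccp_alpha`` is usually tuned by cross-validation over the values
returned in ``path.ccp_alphas``.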
.. currentmodule:: sklearn

.. _model_evaluation:

===========================================================
Metrics and scoring: quantifying the quality of predictions
===========================================================

.. _which_scoring_function:

Which scoring function should I use?
====================================

Before we take a closer look into the details of the many scores and
:term:`evaluation metrics`, we want to give some guidance, inspired by
statistical decision theory, on the choice of **scoring functions** for
**supervised learning**, see [Gneiting2009]_:

- *Which scoring function should I use?*
- *Which scoring function is a good one for my task?*

In a nutshell, if the scoring function is given, e.g. in a kaggle competition
or in a business context, use that one. If you are free to choose, it starts
by considering the ultimate goal and application of the prediction. It is
useful to distinguish two steps:

* Predicting
* Decision making

**Predicting:** Usually, the response variable :math:`Y` is a random variable,
in the sense that there is *no deterministic* function :math:`Y = g(X)` of the
features :math:`X`. Instead, there is a probability distribution :math:`F` of
:math:`Y`. One can aim to predict the whole distribution, known as
*probabilistic prediction*, or---more the focus of scikit-learn---issue a
*point prediction* (or point forecast) by choosing a property or functional of
that distribution :math:`F`. Typical examples are the mean (expected value),
the median or a quantile of the response variable :math:`Y` (conditionally on
:math:`X`).

Once that is settled, use a **strictly consistent** scoring function for that
(target) functional, see [Gneiting2009]_. This means using a scoring function
that is aligned with *measuring the distance between predictions* `y_pred`
*and the true target functional using observations of* :math:`Y`, i.e.
`y_true`.
For classification, **strictly proper scoring rules**, see the `Wikipedia
entry for Scoring rule <https://en.wikipedia.org/wiki/Scoring_rule>`_ and
[Gneiting2007]_, coincide with strictly consistent scoring functions. The
table further below provides examples. One could say that consistent scoring
functions act as *truth serum* in that they guarantee *"that truth telling
[. . .] is an optimal strategy in expectation"* [Gneiting2014]_.

Once a strictly consistent scoring function is chosen, it is best used for
both: as loss function for model training and as metric/score in model
evaluation and model comparison.

Note that for regressors, the prediction is done with :term:`predict` while
for classifiers it is usually :term:`predict_proba`.

**Decision Making:** The most common decisions are made on binary
classification tasks, where the result of :term:`predict_proba` is turned into
a single outcome, e.g., from the predicted probability of rain a decision is
made on how to act (whether to take mitigating measures like an umbrella or
not). For classifiers, this is what :term:`predict` returns. See also
:ref:`TunedThresholdClassifierCV`. There are many scoring functions which
measure different aspects of such a decision, most of them covered with or
derived from the :func:`metrics.confusion_matrix`.

**List of strictly consistent scoring functions:** Here, we list some of the
most relevant statistical functionals and corresponding strictly consistent
scoring functions for tasks in practice. Note that the list is not complete
and that there are more of them. For further criteria on how to select a
specific one, see [Fissler2022]_.
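The notion of strict consistency can be made concrete with a tiny numerical
experiment: over a sample, the squared error is minimized by the mean and the
absolute error by the median, so scoring with the "wrong" loss favors a model
that predicts a different functional. This sketch is illustrative only and not
part of the original text:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.exponential(scale=2.0, size=10_000)  # skewed target: mean != median

candidates = np.linspace(0.0, 5.0, 501)      # constant point predictions
sq_loss = [np.mean((y - c) ** 2) for c in candidates]
abs_loss = [np.mean(np.abs(y - c)) for c in candidates]

best_sq = candidates[np.argmin(sq_loss)]
best_abs = candidates[np.argmin(abs_loss)]

print(best_sq, y.mean())        # squared error is minimized near the mean
print(best_abs, np.median(y))   # absolute error is minimized near the median
```

Since the distribution is skewed, the two minimizers differ noticeably, which
is exactly why the scoring function must match the target functional.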
================== ============================================= ==================== =================================
functional         scoring or loss function                      response `y`         prediction
================== ============================================= ==================== =================================
**Classification**
mean               :ref:`Brier score ` :sup:`1`                  multi-class          ``predict_proba``
mean               :ref:`log loss `                              multi-class          ``predict_proba``
mode               :ref:`zero-one loss ` :sup:`2`                multi-class          ``predict``, categorical
**Regression**
mean               :ref:`squared error ` :sup:`3`                all reals            ``predict``, all reals
mean               :ref:`Poisson deviance `                      non-negative         ``predict``, strictly positive
mean               :ref:`Gamma deviance `                        strictly positive    ``predict``, strictly positive
mean               :ref:`Tweedie deviance `                      depends on ``power`` ``predict``, depends on ``power``
median             :ref:`absolute error `                        all reals            ``predict``, all reals
quantile           :ref:`pinball loss `                          all reals            ``predict``, all reals
mode               no consistent one exists                      reals
================== ============================================= ==================== =================================

:sup:`1` The Brier score is just a different name for the squared error in
case of classification with one-hot encoded targets.

:sup:`2` The zero-one loss is only consistent but not strictly consistent for
the mode. The zero-one loss is equivalent to one minus the accuracy score,
meaning it gives different score values but the same ranking.

:sup:`3` R² gives the same ranking as squared error.

**Fictitious Example:** Let's make the above arguments more tangible. Consider
a setting in network reliability engineering, such as maintaining stable
internet or Wi-Fi connections. As provider of the network, you have access to
the dataset of log entries of network connections containing network load over
time and many interesting features. Your goal is to improve the reliability of
the connections. In fact, you promise your customers that on at least 99% of
all days there are no connection discontinuities larger than 1 minute.
Therefore, you are interested in a prediction of the 99% quantile (of longest
connection interruption duration per day) in order to know in advance when to
add more bandwidth and thereby satisfy your customers. So the *target
functional* is the 99% quantile. From the table above, you choose the pinball
loss as scoring function (fair enough, not much choice given), for model
training (e.g.
`HistGradientBoostingRegressor(loss="quantile", quantile=0.99)`) as well as
model evaluation (`mean_pinball_loss(..., alpha=0.99)` - we apologize for the
different argument names, `quantile` and `alpha`), be it in grid search for
finding hyperparameters or in comparing to other models like
`QuantileRegressor(quantile=0.99)`.

.. rubric:: References

.. [Gneiting2007] T. Gneiting and A. E. Raftery. :doi:`Strictly Proper Scoring
    Rules, Prediction, and Estimation <10.1198/016214506000001437>`
    In: Journal of the American Statistical Association 102 (2007),
    pp. 359--378.

.. [Gneiting2009] T. Gneiting. :arxiv:`Making and Evaluating Point Forecasts
    <0912.0902>`
    Journal of the American Statistical Association 106 (2009), pp. 746--762.

.. [Gneiting2014] T. Gneiting and M. Katzfuss. :doi:`Probabilistic Forecasting
    <10.1146/annurev-statistics-062713-085831>`.
    In: Annual Review of Statistics and Its Application 1.1 (2014),
    pp. 125--151.

.. [Fissler2022] T. Fissler, C. Lorentzen and M. Mayer. :arxiv:`Model
    Comparison and Calibration Assessment: User Guide for Consistent Scoring
    Functions in Machine Learning and Actuarial Practice. <2202.12780>`

.. _scoring_api_overview:

Scoring API overview
====================

There are 3 different APIs for evaluating the quality of a model's
predictions:

* **Estimator score method**: Estimators have a ``score`` method providing a
  default evaluation criterion for the problem they are designed to solve.
  Most commonly this is :ref:`accuracy ` for classifiers and the
  :ref:`coefficient of determination ` (:math:`R^2`) for regressors. Details
  for each estimator can be found in its documentation.

* **Scoring parameter**: Model-evaluation tools that use
  :ref:`cross-validation ` (such as :class:`model_selection.GridSearchCV`,
  :func:`model_selection.validation_curve` and
  :class:`linear_model.LogisticRegressionCV`) rely on an internal *scoring*
  strategy.
This can be specified using the `scoring` parameter of that tool and is
discussed in the section :ref:`scoring_parameter`.

* **Metric functions**: The :mod:`sklearn.metrics` module implements functions
  assessing prediction error for specific purposes. These metrics are detailed
  in sections on :ref:`classification_metrics`,
  :ref:`multilabel_ranking_metrics`, :ref:`regression_metrics` and
  :ref:`clustering_metrics`. Finally, :ref:`dummy_estimators` are useful to
  get a baseline value of those metrics for random predictions.

.. seealso::

    For "pairwise" metrics, between *samples* and not estimators or
    predictions, see the :ref:`metrics` section.

.. _scoring_parameter:

The ``scoring`` parameter: defining model evaluation rules
==========================================================

Model selection and evaluation tools that internally use
:ref:`cross-validation ` (such as :class:`model_selection.GridSearchCV`,
:func:`model_selection.validation_curve` and
:class:`linear_model.LogisticRegressionCV`) take a ``scoring`` parameter that
controls what metric they apply to the estimators evaluated.

They can be specified in several ways:

* `None`: the estimator's default evaluation criterion (i.e., the metric used
  in the estimator's `score` method) is used.

* :ref:`String name `: common metrics can be passed via a string name.

* :ref:`Callable `: more complex metrics can be passed via a custom metric
  callable (e.g., function).

Some tools do also accept multiple metric evaluation. See
:ref:`multimetric_scoring` for details.

.. _scoring_string_names:

String name scorers
-------------------

For the most common use cases, you can designate a scorer object with the
``scoring`` parameter via a string name; the table below shows all possible
values. All scorer objects follow the convention that **higher return values
are better than lower return values**. Thus metrics which measure the distance
between the model and the data, like :func:`metrics.mean_squared_error`, are
available as 'neg_mean_squared_error', which returns the negated value of the
metric.
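The sign convention can be seen directly in cross-validation output; this
short sketch (dataset and model chosen arbitrarily for illustration) shows
that a ``'neg_*'`` scorer always yields values where higher means better:

```python
# 'neg_mean_squared_error' returns the negated MSE so that the
# "higher is better" convention holds uniformly across scorers.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)
scores = cross_val_score(LinearRegression(), X, y, cv=5,
                         scoring='neg_mean_squared_error')
print(scores)         # all values are <= 0
print(scores.mean())  # closer to 0 is better
```

Tools like ``GridSearchCV`` therefore maximize this value, which amounts to
minimizing the underlying error metric.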
====================================== ================================================ ==================================
Scoring string name                    Function                                         Comment
====================================== ================================================ ==================================
**Classification**
'accuracy'                             :func:`metrics.accuracy_score`
'balanced_accuracy'                    :func:`metrics.balanced_accuracy_score`
'top_k_accuracy'                       :func:`metrics.top_k_accuracy_score`
'average_precision'                    :func:`metrics.average_precision_score`
'neg_brier_score'                      :func:`metrics.brier_score_loss`                 requires ``predict_proba`` support
'f1'                                   :func:`metrics.f1_score`                         for binary targets
'f1_micro'                             :func:`metrics.f1_score`                         micro-averaged
'f1_macro'                             :func:`metrics.f1_score`                         macro-averaged
'f1_weighted'                          :func:`metrics.f1_score`                         weighted average
'f1_samples'                           :func:`metrics.f1_score`                         by multilabel sample
'neg_log_loss'                         :func:`metrics.log_loss`                         requires ``predict_proba`` support
'precision' etc.                       :func:`metrics.precision_score`                  suffixes apply as with 'f1'
'recall' etc.                          :func:`metrics.recall_score`                     suffixes apply as with 'f1'
'jaccard' etc.                         :func:`metrics.jaccard_score`                    suffixes apply as with 'f1'
'roc_auc'                              :func:`metrics.roc_auc_score`
'roc_auc_ovr'                          :func:`metrics.roc_auc_score`
'roc_auc_ovo'                          :func:`metrics.roc_auc_score`
'roc_auc_ovr_weighted'                 :func:`metrics.roc_auc_score`
'roc_auc_ovo_weighted'                 :func:`metrics.roc_auc_score`
'd2_log_loss_score'                    :func:`metrics.d2_log_loss_score`                requires ``predict_proba`` support
'd2_brier_score'                       :func:`metrics.d2_brier_score`                   requires ``predict_proba`` support
**Clustering**
'adjusted_mutual_info_score'           :func:`metrics.adjusted_mutual_info_score`
'adjusted_rand_score'                  :func:`metrics.adjusted_rand_score`
'completeness_score'                   :func:`metrics.completeness_score`
'fowlkes_mallows_score'                :func:`metrics.fowlkes_mallows_score`
'homogeneity_score'                    :func:`metrics.homogeneity_score`
'mutual_info_score'                    :func:`metrics.mutual_info_score`
'normalized_mutual_info_score'         :func:`metrics.normalized_mutual_info_score`
'rand_score'                           :func:`metrics.rand_score`
'v_measure_score'                      :func:`metrics.v_measure_score`
**Regression**
'explained_variance'                   :func:`metrics.explained_variance_score`
'neg_max_error'                        :func:`metrics.max_error`
'neg_mean_absolute_error'              :func:`metrics.mean_absolute_error`
'neg_mean_squared_error'               :func:`metrics.mean_squared_error`
'neg_root_mean_squared_error'          :func:`metrics.root_mean_squared_error`
'neg_mean_squared_log_error'           :func:`metrics.mean_squared_log_error`
'neg_root_mean_squared_log_error'      :func:`metrics.root_mean_squared_log_error`
'neg_median_absolute_error'            :func:`metrics.median_absolute_error`
'r2'                                   :func:`metrics.r2_score`
'neg_mean_poisson_deviance'            :func:`metrics.mean_poisson_deviance`
'neg_mean_gamma_deviance'              :func:`metrics.mean_gamma_deviance`
'neg_mean_absolute_percentage_error'   :func:`metrics.mean_absolute_percentage_error`
'd2_absolute_error_score'              :func:`metrics.d2_absolute_error_score`
====================================== ================================================ ==================================

Usage examples:

    >>> from sklearn import svm, datasets
    >>> from sklearn.model_selection import cross_val_score
    >>> X, y = datasets.load_iris(return_X_y=True)
    >>> clf = svm.SVC(random_state=0)
    >>> cross_val_score(clf, X, y, cv=5, scoring='recall_macro')
    array([0.96, 0.96, 0.96, 0.93, 1.  ])

.. note::

    If a wrong scoring name is passed, an ``InvalidParameterError`` is raised.
    You can retrieve the names of all available scorers by calling
    :func:`~sklearn.metrics.get_scorer_names`.

.. currentmodule:: sklearn.metrics

.. _scoring_callable:

Callable scorers
----------------

For more complex use cases and more flexibility, you can pass a callable to
the `scoring` parameter. This can be done by:

* :ref:`scoring_adapt_metric`
* :ref:`scoring_custom` (most flexible)

.. _scoring_adapt_metric:

Adapting predefined metrics via `make_scorer`
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The following metric functions are not implemented as named scorers, sometimes
because they require additional parameters, such as :func:`fbeta_score`. They
cannot be passed to the ``scoring`` parameter; instead their callable needs to
be passed to :func:`make_scorer` together with the value of the user-settable
parameters.
===================================== ========= ==================================================
Function                              Parameter Example usage
===================================== ========= ==================================================
**Classification**
:func:`metrics.fbeta_score`           ``beta``  ``make_scorer(fbeta_score, beta=2)``
**Regression**
:func:`metrics.mean_tweedie_deviance` ``power`` ``make_scorer(mean_tweedie_deviance, power=1.5)``
:func:`metrics.mean_pinball_loss`     ``alpha`` ``make_scorer(mean_pinball_loss, alpha=0.95)``
:func:`metrics.d2_tweedie_score`      ``power`` ``make_scorer(d2_tweedie_score, power=1.5)``
:func:`metrics.d2_pinball_score`      ``alpha`` ``make_scorer(d2_pinball_score, alpha=0.95)``
===================================== ========= ==================================================

One typical use case is to wrap an existing metric function from the library
with non-default values for its parameters, such as the ``beta`` parameter for
the :func:`fbeta_score` function::

    >>> from sklearn.metrics import fbeta_score, make_scorer
    >>> ftwo_scorer = make_scorer(fbeta_score, beta=2)
    >>> from sklearn.model_selection import GridSearchCV
    >>> from sklearn.svm import LinearSVC
    >>> grid = GridSearchCV(LinearSVC(), param_grid={'C': [1, 10]},
    ...                     scoring=ftwo_scorer, cv=5)

The module :mod:`sklearn.metrics` also exposes a set of simple functions
measuring a prediction error given ground truth and prediction:

- functions ending with ``_score`` return a value to maximize, the higher the
  better.

- functions ending with ``_error``, ``_loss``, or ``_deviance`` return a value
  to minimize, the lower the better. When converting into a scorer object
  using :func:`make_scorer`, set the ``greater_is_better`` parameter to
  ``False`` (``True`` by default; see the parameter description below).

.. _scoring_custom:

Creating a custom scorer object
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can create your own custom scorer object using :func:`make_scorer`.

.. dropdown:: Custom scorer objects using `make_scorer`

    You can build a completely custom scorer object from a simple python
    function using :func:`make_scorer`, which can take several parameters:

    * the python function you want to use (``my_custom_loss_func`` in the
      example below)

    * whether the python function returns a score
      (``greater_is_better=True``, the default) or a loss
      (``greater_is_better=False``). If a loss, the output of the python
      function is negated by the scorer object, conforming to the cross
      validation convention that scorers return higher values for better
      models.

    * for classification metrics only: whether the python function you
      provided requires continuous decision certainties. If the scoring
      function only accepts probability estimates (e.g.
      :func:`metrics.log_loss`), then one needs to set the parameter
      `response_method="predict_proba"`.
      Some scoring functions do not necessarily require probability estimates
      but rather non-thresholded decision values (e.g.
      :func:`metrics.roc_auc_score`). In this case, one can provide a list
      (e.g., `response_method=["decision_function", "predict_proba"]`), and
      the scorer will use the first available method, in the order given in
      the list, to compute the scores.

    * any additional parameters of the scoring function, such as ``beta`` or
      ``labels``.

    Here is an example of building custom scorers, and of using the
    ``greater_is_better`` parameter::

        >>> import numpy as np
        >>> def my_custom_loss_func(y_true, y_pred):
        ...     diff = np.abs(y_true - y_pred).max()
        ...     return float(np.log1p(diff))
        ...
        >>> # score will negate the return value of my_custom_loss_func,
        >>> # which will be np.log(2), 0.693, given the values for X
        >>> # and y defined below.
        >>> score = make_scorer(my_custom_loss_func, greater_is_better=False)
        >>> X = [[1], [1]]
        >>> y = [0, 1]
        >>> from sklearn.dummy import DummyClassifier
        >>> clf = DummyClassifier(strategy='most_frequent', random_state=0)
        >>> clf = clf.fit(X, y)
        >>> my_custom_loss_func(y, clf.predict(X))
        0.69
        >>> score(clf, X, y)
        -0.69

.. dropdown:: Using custom scorers in functions where n_jobs > 1

    While defining the custom scoring function alongside the calling function
    should work out of the box with the default joblib backend (loky),
    importing it from another module will be a more robust approach and work
    independently of the joblib backend.

    For example, to use ``n_jobs`` greater than 1 in the example below, the
    ``custom_scoring_function`` function is saved in a user-created module
    (``custom_scorer_module.py``) and imported::

        >>> from custom_scorer_module import custom_scoring_function # doctest: +SKIP
        >>> cross_val_score(model,
        ...                 X_train,
        ...                 y_train,
        ...                 scoring=make_scorer(custom_scoring_function, greater_is_better=False),
        ...                 cv=5,
        ...                 n_jobs=-1) # doctest: +SKIP
.. _multimetric_scoring:

Using multiple metric evaluation
--------------------------------

Scikit-learn also permits evaluation of multiple metrics in ``GridSearchCV``,
``RandomizedSearchCV`` and ``cross_validate``.

There are three ways to specify multiple scoring metrics for the ``scoring``
parameter:

- As an iterable of string metrics::

    >>> scoring = ['accuracy', 'precision']

- As a ``dict`` mapping the scorer name to the scoring function::

    >>> from sklearn.metrics import accuracy_score
    >>> from sklearn.metrics import make_scorer
    >>> scoring = {'accuracy': make_scorer(accuracy_score),
    ...            'prec': 'precision'}

  Note that the dict values can either be scorer functions or one of the
  predefined metric strings.

- As a callable that returns a dictionary of scores::

    >>> from sklearn.model_selection import cross_validate
    >>> from sklearn.metrics import confusion_matrix
    >>> # A sample toy binary classification dataset
    >>> X, y = datasets.make_classification(n_classes=2, random_state=0)
    >>> svm = LinearSVC(random_state=0)
    >>> def confusion_matrix_scorer(clf, X, y):
    ...     y_pred = clf.predict(X)
    ...     cm = confusion_matrix(y, y_pred)
    ...     return {'tn': cm[0, 0], 'fp': cm[0, 1],
    ...             'fn': cm[1, 0], 'tp': cm[1, 1]}
    >>> cv_results = cross_validate(svm, X, y, cv=5,
    ...                             scoring=confusion_matrix_scorer)
    >>> # Getting the test set true positive scores
    >>> print(cv_results['test_tp'])
    [10 9 8 7 8]
    >>> # Getting the test set false negative scores
    >>> print(cv_results['test_fn'])
    [0 1 2 3 2]

.. _classification_metrics:

Classification metrics
======================

.. currentmodule:: sklearn.metrics

The :mod:`sklearn.metrics` module implements several loss, score, and utility
functions to measure classification performance. Some metrics might require
probability estimates of the positive class, confidence values, or binary
decision values. Most implementations allow each sample to provide a weighted
contribution to the overall score, through the ``sample_weight`` parameter.

Some of these are restricted to the binary classification case:

.. autosummary::

   precision_recall_curve
   roc_curve
   class_likelihood_ratios
   det_curve
   confusion_matrix_at_thresholds

Others also work in the multiclass case:
.. autosummary::

   balanced_accuracy_score
   cohen_kappa_score
   confusion_matrix
   hinge_loss
   matthews_corrcoef
   roc_auc_score
   top_k_accuracy_score

Some also work in the multilabel case:

.. autosummary::

   accuracy_score
   classification_report
   f1_score
   fbeta_score
   hamming_loss
   jaccard_score
   log_loss
   multilabel_confusion_matrix
   precision_recall_fscore_support
   precision_score
   recall_score
   roc_auc_score
   zero_one_loss
   d2_log_loss_score

And some work with binary and multilabel (but not multiclass) problems:

.. autosummary::

   average_precision_score

In the following sub-sections, we will describe each of those functions,
preceded by some notes on common API and metric definition.

.. _average:

From binary to multiclass and multilabel
----------------------------------------

Some metrics are essentially defined for binary classification tasks (e.g.
:func:`f1_score`, :func:`roc_auc_score`). In these cases, by default only the
positive label is evaluated, assuming by default that the positive class is
labelled ``1`` (though this may be configurable through the ``pos_label``
parameter).

In extending a binary metric to multiclass or multilabel problems, the data is
treated as a collection of binary problems, one for each class. There are then
a number of ways to average binary metric calculations across the set of
classes, each of which may be useful in some scenario. Where available, you
should select among these using the ``average`` parameter.

* ``"macro"`` simply calculates the mean of the binary metrics, giving equal
  weight to each class. In problems where infrequent classes are nonetheless
  important, macro-averaging may be a means of highlighting their performance.
  On the other hand, the assumption that all classes are equally important is
  often untrue, such that macro-averaging will over-emphasize the typically
  low performance on an infrequent class.
\* ``"weighted"`` accounts for class imbalance by computing the average of binary metrics in which each class's score is weighted by its presence in the true data sample. \* ``"micro"`` gives each sample-class pair an equal contribution to the overall metric (except as a result of sample-weight). Rather than summing the metric per class, this sums the dividends and divisors that make up the per-class metrics to calculate an overall quotient. Micro-averaging may be preferred in multilabel settings, including multiclass classification where a majority class is to be ignored. \* ``"samples"`` applies only to multilabel problems. It does not calculate a per-class measure, instead calculating the metric over the true and predicted classes for each sample in the evaluation data, and returning their (``sample\_weight``-weighted) average. \* Selecting ``average=None`` will return an | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/model_evaluation.rst | main | scikit-learn | [
  where a majority class is to be ignored.
* ``"samples"`` applies only to multilabel problems. It does not calculate a
  per-class measure, instead calculating the metric over the true and
  predicted classes for each sample in the evaluation data, and returning
  their (``sample_weight``-weighted) average.
* Selecting ``average=None`` will return an array with the score for each
  class.

While multiclass data is provided to the metric, like binary targets, as an
array of class labels, multilabel data is specified as an indicator matrix, in
which cell ``[i, j]`` has value 1 if sample ``i`` has label ``j`` and value 0
otherwise.

.. _accuracy_score:

Accuracy score
--------------

The :func:`accuracy_score` function computes the accuracy, either the fraction
(default) or the count (``normalize=False``) of correct predictions.

In multilabel classification, the function returns the subset accuracy. If the
entire set of predicted labels for a sample strictly match with the true set
of labels, then the subset accuracy is 1.0; otherwise it is 0.0.

If :math:`\hat{y}_i` is the predicted value of the :math:`i`-th sample and
:math:`y_i` is the corresponding true value, then the fraction of correct
predictions over :math:`n_\text{samples}` is defined as

.. math::

   \texttt{accuracy}(y, \hat{y}) = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples}-1} 1(\hat{y}_i = y_i)

where :math:`1(x)` is the indicator function. ::

  >>> import numpy as np
  >>> from sklearn.metrics import accuracy_score
  >>> y_pred = [0, 2, 1, 3]
  >>> y_true = [0, 1, 2, 3]
  >>> accuracy_score(y_true, y_pred)
  0.5
  >>> accuracy_score(y_true, y_pred, normalize=False)
  2.0

In the multilabel case with binary label indicators::

  >>> accuracy_score(np.array([[0, 1], [1, 1]]), np.ones((2, 2)))
  0.5

.. rubric:: Examples

* See :ref:`sphx_glr_auto_examples_model_selection_plot_permutation_tests_for_classification.py`
  for an example of accuracy score usage using permutations of the dataset.

.. _top_k_accuracy_score:

Top-k accuracy score
--------------------

The :func:`top_k_accuracy_score` function is a generalization of
:func:`accuracy_score`. The difference is that a prediction is considered
correct as long as the true label is associated with one of the ``k`` highest
predicted scores. :func:`accuracy_score` is the special case of `k = 1`.

The function covers the binary and multiclass classification cases but not
the multilabel case.

If :math:`\hat{f}_{i,j}` is the predicted class for the :math:`i`-th sample
corresponding to the :math:`j`-th largest predicted score and :math:`y_i` is
the corresponding true value, then the fraction of correct predictions over
:math:`n_\text{samples}` is defined as

.. math::

   \texttt{top-k accuracy}(y, \hat{f}) = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples}-1} \sum_{j=1}^{k} 1(\hat{f}_{i,j} = y_i)

where :math:`k` is the number of guesses allowed and :math:`1(x)` is the
indicator function. ::

  >>> import numpy as np
  >>> from sklearn.metrics import top_k_accuracy_score
  >>> y_true = np.array([0, 1, 2, 2])
  >>> y_score = np.array([[0.5, 0.2, 0.2],
  ...                     [0.3, 0.4, 0.2],
  ...                     [0.2, 0.4, 0.3],
  ...                     [0.7, 0.2, 0.1]])
  >>> top_k_accuracy_score(y_true, y_score, k=2)
  0.75
  >>> # Not normalizing gives the number of "correctly" classified samples
  >>> top_k_accuracy_score(y_true, y_score, k=2, normalize=False)
  3.0

.. _balanced_accuracy_score:

Balanced accuracy score
-----------------------

The :func:`balanced_accuracy_score` function computes the balanced accuracy,
which avoids inflated performance estimates on imbalanced datasets. It is the
macro-average of recall scores per class or, equivalently, raw accuracy where
each sample is weighted according to the inverse prevalence of its true class.
Thus for balanced datasets, the score is equal to accuracy.
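As a quick sanity check of the equivalence just stated, the sketch below (the
imbalanced toy labels are invented for illustration) confirms that
:func:`balanced_accuracy_score` matches the macro-average of per-class recall:

```python
import numpy as np
from sklearn.metrics import balanced_accuracy_score, recall_score

# Toy imbalanced sample: four instances of class 0, two of class 1.
y_true = [0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 1, 1, 1]

# Per-class recall: class 0 -> 3/4, class 1 -> 2/2.
per_class_recall = recall_score(y_true, y_pred, average=None)
macro_recall = per_class_recall.mean()

# Balanced accuracy is exactly the macro-average of recall.
assert np.isclose(balanced_accuracy_score(y_true, y_pred), macro_recall)
```

Plain accuracy on this sample is 5/6 ≈ 0.83, while the balanced score is
0.875; the two only coincide when the classes are balanced.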
In the binary case, balanced accuracy is equal to the arithmetic mean of
sensitivity (true positive rate) and specificity (true negative rate), or the
area under the ROC curve with binary predictions rather than scores:
.. math::

   \texttt{balanced-accuracy} = \frac{1}{2}\left( \frac{TP}{TP + FN} + \frac{TN}{TN + FP}\right)

If the classifier performs equally well on either class, this term reduces to
the conventional accuracy (i.e., the number of correct predictions divided by
the total number of predictions). In contrast, if the conventional accuracy is
above chance only because the classifier takes advantage of an imbalanced test
set, then the balanced accuracy, as appropriate, will drop to
:math:`\frac{1}{n\_classes}`.

The score ranges from 0 to 1, or when ``adjusted=True`` is used, it is
rescaled to the range :math:`\frac{1}{1 - n\_classes}` to 1, inclusive, with
performance at random scoring 0.

If :math:`y_i` is the true value of the :math:`i`-th sample, and :math:`w_i`
is the corresponding sample weight, then we adjust the sample weight to:

.. math::

   \hat{w}_i = \frac{w_i}{\sum_j{1(y_j = y_i) w_j}}

where :math:`1(x)` is the indicator function. Given predicted
:math:`\hat{y}_i` for sample :math:`i`, balanced accuracy is defined as:

.. math::

   \texttt{balanced-accuracy}(y, \hat{y}, w) = \frac{1}{\sum{\hat{w}_i}} \sum_i 1(\hat{y}_i = y_i) \hat{w}_i

With ``adjusted=True``, balanced accuracy reports the relative increase from
:math:`\texttt{balanced-accuracy}(y, \mathbf{0}, w) = \frac{1}{n\_classes}`.
In the binary case, this is also known as Youden's J statistic, or
*informedness*.

.. note::

   The multiclass definition here seems the most reasonable extension of the
   metric used in binary classification, though there is no certain consensus
   in the literature:

   * Our definition: [Mosley2013]_, [Kelleher2015]_ and [Guyon2015]_, where
     [Guyon2015]_ adopt the adjusted version to ensure that random predictions
     have a score of :math:`0` and perfect predictions have a score of
     :math:`1`.
   * Class balanced accuracy as described in [Mosley2013]_: the minimum
     between the precision and the recall for each class is computed. Those
     values are then averaged over the total number of classes to get the
     balanced accuracy.
   * Balanced Accuracy as described in [Urbanowicz2015]_: the average of
     sensitivity and specificity is computed for each class and then averaged
     over the total number of classes.

.. rubric:: References

.. [Guyon2015] I. Guyon, K. Bennett, G. Cawley, H.J. Escalante, S. Escalera,
   T.K. Ho, N. Macià, B. Ray, M. Saeed, A.R. Statnikov, E. Viegas, Design of
   the 2015 ChaLearn AutoML Challenge, IJCNN 2015.
.. [Mosley2013] L. Mosley, A balanced approach to the multi-class imbalance
   problem, IJCV 2010.
.. [Kelleher2015] John. D. Kelleher, Brian Mac Namee, Aoife D'Arcy,
   Fundamentals of Machine Learning for Predictive Data Analytics: Algorithms,
   Worked Examples, and Case Studies, 2015.
.. [Urbanowicz2015] Urbanowicz R.J., Moore, J.H. :doi:`ExSTraCS 2.0:
   description and evaluation of a scalable learning classifier system
   <10.1007/s12065-015-0128-8>`, Evol. Intel. (2015) 8: 89.

.. _cohen_kappa:

Cohen's kappa
-------------

The function :func:`cohen_kappa_score` computes Cohen's kappa statistic. This
measure is intended to compare labelings by different human annotators, not a
classifier versus a ground truth.

The kappa score is a number between -1 and 1. Scores above 0.8 are generally
considered good agreement; zero or lower means no agreement (practically
random labels).

Kappa scores can be computed for binary or multiclass problems, but not for
multilabel problems (except by manually computing a per-label score) and not
for more than two annotators. ::

  >>> from sklearn.metrics import cohen_kappa_score
  >>> labeling1 = [2, 0, 2, 2, 0, 1]
  >>> labeling2 = [0, 0, 2, 2, 0, 2]
  >>> cohen_kappa_score(labeling1, labeling2)
  0.4285714285714286

.. _confusion_matrix:

Confusion matrix
----------------

The :func:`confusion_matrix` function evaluates classification accuracy by
computing the confusion matrix with each row corresponding to
the true class (Wikipedia and other references may use a different convention
for axes). By definition, entry :math:`i, j` in a confusion matrix is the
number of observations actually in group :math:`i`, but predicted to be in
group :math:`j`. Here is an example::

  >>> from sklearn.metrics import confusion_matrix
  >>> y_true = [2, 0, 2, 2, 0, 1]
  >>> y_pred = [0, 0, 2, 2, 0, 2]
  >>> confusion_matrix(y_true, y_pred)
  array([[2, 0, 0],
         [0, 0, 1],
         [1, 0, 2]])

:class:`ConfusionMatrixDisplay` can be used to visually represent a confusion
matrix as shown in the
:ref:`sphx_glr_auto_examples_model_selection_plot_confusion_matrix.py`
example, which creates the following figure:

.. image:: ../auto_examples/model_selection/images/sphx_glr_plot_confusion_matrix_001.png
   :target: ../auto_examples/model_selection/plot_confusion_matrix.html
   :scale: 75
   :align: center

The parameter ``normalize`` allows reporting ratios instead of counts. The
confusion matrix can be normalized in 3 different ways: ``'pred'``, ``'true'``,
and ``'all'``, which will divide the counts by the sum of each column, row, or
the entire matrix, respectively.
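The row-wise and column-wise variants can be checked directly; in this sketch
(the labels are a toy sample chosen so that every class is predicted at least
once), ``normalize='true'`` makes each row sum to 1 while ``normalize='pred'``
makes each column sum to 1:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = [2, 0, 2, 2, 0, 1]
y_pred = [0, 1, 2, 2, 0, 2]

# 'true' divides each row by the number of samples in that true class...
cm_true = confusion_matrix(y_true, y_pred, normalize='true')
# ...while 'pred' divides each column by the number of predictions of that class.
cm_pred = confusion_matrix(y_true, y_pred, normalize='pred')

assert np.allclose(cm_true.sum(axis=1), 1.0)  # rows are recall-like ratios
assert np.allclose(cm_pred.sum(axis=0), 1.0)  # columns are precision-like ratios
```

Note that if a class never appears among the predictions, its column under
``normalize='pred'`` is all zeros rather than summing to 1.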
>>> y_true = [0, 0, 0, 1, 1, 1, 1, 1]
>>> y_pred = [0, 1, 0, 1, 0, 1, 0, 1]
>>> confusion_matrix(y_true, y_pred, normalize='all')
array([[0.25 , 0.125],
       [0.25 , 0.375]])

For binary problems, we can get counts of true negatives, false positives,
false negatives and true positives as follows::

  >>> y_true = [0, 0, 0, 1, 1, 1, 1, 1]
  >>> y_pred = [0, 1, 0, 1, 0, 1, 0, 1]
  >>> tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel().tolist()
  >>> tn, fp, fn, tp
  (2, 1, 2, 3)

With :func:`confusion_matrix_at_thresholds` we can get true negatives, false
positives, false negatives and true positives for different thresholds::

  >>> from sklearn.metrics import confusion_matrix_at_thresholds
  >>> y_true = np.array([0., 0., 1., 1.])
  >>> y_score = np.array([0.1, 0.4, 0.35, 0.8])
  >>> tns, fps, fns, tps, thresholds = confusion_matrix_at_thresholds(y_true, y_score)
  >>> tns
  array([2., 1., 1., 0.])
  >>> fps
  array([0., 1., 1., 2.])
  >>> fns
  array([1., 1., 0., 0.])
  >>> tps
  array([1., 1., 2., 2.])
  >>> thresholds
  array([0.8, 0.4, 0.35, 0.1])

Note that the thresholds consist of distinct `y_score` values, in decreasing
order.

.. rubric:: Examples

* See :ref:`sphx_glr_auto_examples_model_selection_plot_confusion_matrix.py`
  for an example of using a confusion matrix to evaluate classifier output
  quality.
* See :ref:`sphx_glr_auto_examples_classification_plot_digits_classification.py`
  for an example of using a confusion matrix to classify hand-written digits.
* See :ref:`sphx_glr_auto_examples_text_plot_document_classification_20newsgroups.py`
  for an example of using a confusion matrix to classify text documents.

.. _classification_report:

Classification report
---------------------

The :func:`classification_report` function builds a text report showing the
main classification metrics. Here is a small example with custom
``target_names`` and inferred labels::

   >>> from sklearn.metrics import classification_report
   >>> y_true = [0, 1, 2, 2, 0]
   >>> y_pred = [0, 0, 2, 1, 0]
   >>> target_names = ['class 0', 'class 1', 'class 2']
   >>> print(classification_report(y_true, y_pred, target_names=target_names))
                 precision    recall  f1-score   support
   <BLANKLINE>
        class 0       0.67      1.00      0.80         2
        class 1       0.00      0.00      0.00         1
        class 2       1.00      0.50      0.67         2
   <BLANKLINE>
       accuracy                           0.60         5
      macro avg       0.56      0.50      0.49         5
   weighted avg       0.67      0.60      0.59         5
   <BLANKLINE>

.. rubric:: Examples

* See :ref:`sphx_glr_auto_examples_classification_plot_digits_classification.py`
  for an example of classification report usage for hand-written digits.
* See :ref:`sphx_glr_auto_examples_model_selection_plot_grid_search_digits.py`
  for an example of classification report usage for grid search with nested
  cross-validation.

.. _hamming_loss:

Hamming loss
------------

The :func:`hamming_loss` computes the average Hamming loss or Hamming
distance between two sets of samples.

If :math:`\hat{y}_{i,j}` is the predicted value for the :math:`j`-th label of
a given sample :math:`i`, :math:`y_{i,j}` is the corresponding true value,
:math:`n_\text{samples}` is the
number of samples and :math:`n_\text{labels}` is the number of labels, then
the Hamming loss :math:`L_{Hamming}` is defined as:

.. math::

   L_{Hamming}(y, \hat{y}) = \frac{1}{n_\text{samples} * n_\text{labels}} \sum_{i=0}^{n_\text{samples}-1} \sum_{j=0}^{n_\text{labels} - 1} 1(\hat{y}_{i,j} \not= y_{i,j})

where :math:`1(x)` is the indicator function.

The equation above does not hold true in the case of multiclass
classification. Please refer to the note below for more information. ::

  >>> from sklearn.metrics import hamming_loss
  >>> y_pred = [1, 2, 3, 4]
  >>> y_true = [2, 2, 3, 4]
  >>> hamming_loss(y_true, y_pred)
  0.25

In the multilabel case with binary label indicators::

  >>> hamming_loss(np.array([[0, 1], [1, 1]]), np.zeros((2, 2)))
  0.75

.. note::

   In multiclass classification, the Hamming loss corresponds to the Hamming
   distance between ``y_true`` and ``y_pred`` which is similar to the
   :ref:`zero_one_loss` function. However, while zero-one loss penalizes
   prediction sets that do not strictly match true sets, the Hamming loss
   penalizes individual labels. Thus the Hamming loss, upper bounded by the
   zero-one loss, is always between zero and one, inclusive; and predicting a
   proper subset or superset of the true labels will give a Hamming loss
   between zero and one, exclusive.

.. _precision_recall_f_measure_metrics:

Precision, recall and F-measures
--------------------------------

Intuitively, precision is the ability of the classifier not to label as
positive a sample that is negative, and recall is the ability of the
classifier to find all the positive samples.

The F-measure (:math:`F_\beta` and :math:`F_1` measures) can be interpreted as
a weighted harmonic mean of the precision and recall. A :math:`F_\beta`
measure reaches its best value at 1 and its worst score at 0. With
:math:`\beta = 1`, :math:`F_\beta` and :math:`F_1` are equivalent, and the
recall and the precision are equally important.
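The weighted-harmonic-mean relationship can be verified numerically; the
sketch below (toy labels invented for illustration) compares
:func:`fbeta_score` against the explicit :math:`F_\beta` formula computed from
precision and recall:

```python
from sklearn.metrics import fbeta_score, precision_score, recall_score

y_true = [0, 1, 0, 1, 1, 1]
y_pred = [0, 1, 1, 1, 0, 1]

p = precision_score(y_true, y_pred)  # tp=3, fp=1 -> 0.75
r = recall_score(y_true, y_pred)     # tp=3, fn=1 -> 0.75

# F_beta = (1 + beta^2) * p * r / (beta^2 * p + r)
for beta in (0.5, 1.0, 2.0):
    expected = (1 + beta**2) * p * r / (beta**2 * p + r)
    assert abs(fbeta_score(y_true, y_pred, beta=beta) - expected) < 1e-12
```

Because precision and recall happen to be equal on this toy sample, every
choice of :math:`\beta` yields the same score; with unequal precision and
recall, larger :math:`\beta` pulls the score toward recall.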
The :func:`precision_recall_curve` computes a precision-recall curve from the
ground truth label and a score given by the classifier by varying a decision
threshold.

The :func:`average_precision_score` function computes the average precision
(AP) from prediction scores. The value is between 0 and 1 and higher is
better. AP is defined as

.. math::

   \text{AP} = \sum_n (R_n - R_{n-1}) P_n

where :math:`P_n` and :math:`R_n` are the precision and recall at the nth
threshold. With random predictions, the AP is the fraction of positive
samples.

References [Manning2008]_ and [Everingham2010]_ present alternative variants
of AP that interpolate the precision-recall curve. Currently,
:func:`average_precision_score` does not implement any interpolated variant.
References [Davis2006]_ and [Flach2015]_ describe why a linear interpolation
of points on the precision-recall curve provides an overly-optimistic measure
of classifier performance. This linear interpolation is used when computing
area under the curve with the trapezoidal rule in :func:`auc`. [Chen2024]_
benchmarks different interpolation strategies to demonstrate the effects.

Several functions allow you to analyze the precision, recall and F-measures
score:

.. autosummary::

   average_precision_score
   f1_score
   fbeta_score
   precision_recall_curve
   precision_recall_fscore_support
   precision_score
   recall_score

Note that the :func:`precision_recall_curve` function is restricted to the
binary case. The :func:`average_precision_score` function supports multiclass
and multilabel formats by computing each class score in a One-vs-the-rest
(OvR) fashion and averaging them or not depending on its ``average`` argument
value.

The :func:`PrecisionRecallDisplay.from_estimator` and
:func:`PrecisionRecallDisplay.from_predictions` functions will plot the
precision-recall curve as follows.

.. image:: ../auto_examples/model_selection/images/sphx_glr_plot_precision_recall_001.png
   :target: ../auto_examples/model_selection/plot_precision_recall.html#plot-the-precision-recall-curve
   :scale: 75
   :align: center

.. rubric:: Examples

* See :ref:`sphx_glr_auto_examples_model_selection_plot_grid_search_digits.py`
  for an example of :func:`precision_score` and :func:`recall_score` usage to
  estimate parameters using grid search with nested cross-validation.
* See :ref:`sphx_glr_auto_examples_model_selection_plot_precision_recall.py`
  for an example of :func:`precision_recall_curve` usage to evaluate
  classifier output quality.
.. rubric:: References

.. [Manning2008] C.D. Manning, P. Raghavan, H. Schütze, Introduction to
   Information Retrieval, 2008.
.. [Everingham2010] M. Everingham, L. Van Gool, C.K.I. Williams, J. Winn,
   A. Zisserman, The Pascal Visual Object Classes (VOC) Challenge, IJCV 2010.
.. [Davis2006] J. Davis, M. Goadrich, The Relationship Between
   Precision-Recall and ROC Curves, ICML 2006.
.. [Flach2015] P.A. Flach, M. Kull, Precision-Recall-Gain Curves: PR Analysis
   Done Right, NIPS 2015.
.. [Chen2024] W. Chen, C. Miao, Z. Zhang, C.S. Fung, R. Wang, Y. Chen,
   Y. Qian, L. Cheng, K.Y. Yip, S.K. Tsui, Q. Cao, Commonly used software
   tools produce conflicting and overly-optimistic AUPRC values, Genome
   Biology 2024.

Binary classification
^^^^^^^^^^^^^^^^^^^^^

In a binary classification task, the terms "positive" and "negative" refer to
the classifier's prediction, and the terms "true" and "false" refer to whether
that prediction corresponds to the external judgment (sometimes known as the
"observation"). Given these definitions, we can formulate the following table:

+-------------------+------------------------------------------------+
|                   | Actual class (observation)                     |
+-------------------+---------------------+--------------------------+
| Predicted class   | tp (true positive)  | fp (false positive)      |
| (expectation)     | Correct result      | Unexpected result        |
|                   +---------------------+--------------------------+
|                   | fn (false negative) | tn (true negative)       |
|                   | Missing result      | Correct absence of result|
+-------------------+---------------------+--------------------------+

In this context, we can define the notions of precision and recall:

.. math::

   \text{precision} = \frac{\text{tp}}{\text{tp} + \text{fp}},

.. math::

   \text{recall} = \frac{\text{tp}}{\text{tp} + \text{fn}},

(Sometimes recall is also called "sensitivity".)

F-measure is the weighted harmonic mean of precision and recall, with
precision's contribution to the mean weighted by some parameter :math:`\beta`:

.. math::

   F_\beta = (1 + \beta^2) \frac{\text{precision} \times \text{recall}}{\beta^2 \text{precision} + \text{recall}}

To avoid division by zero when precision and recall are zero, Scikit-Learn
calculates F-measure with this otherwise-equivalent formula:

.. math::

   F_\beta = \frac{(1 + \beta^2) \text{tp}}{(1 + \beta^2) \text{tp} + \text{fp} + \beta^2 \text{fn}}

Note that this formula is still undefined when there are no true positives,
false positives, or false negatives. By default, F-1 for a set of exclusively
true negatives is calculated as 0; however, this behavior can be changed using
the ``zero_division`` parameter.

Here are some small examples in binary classification::

  >>> from sklearn import metrics
  >>> y_pred = [0, 1, 0, 0]
  >>> y_true = [0, 1, 0, 1]
  >>> metrics.precision_score(y_true, y_pred)
  1.0
  >>> metrics.recall_score(y_true, y_pred)
  0.5
  >>> metrics.f1_score(y_true, y_pred)
  0.66
  >>> metrics.fbeta_score(y_true, y_pred, beta=0.5)
  0.83
  >>> metrics.fbeta_score(y_true, y_pred, beta=1)
  0.66
  >>> metrics.fbeta_score(y_true, y_pred, beta=2)
  0.55
  >>> metrics.precision_recall_fscore_support(y_true, y_pred, beta=0.5)
  (array([0.66, 1. ]), array([1. , 0.5]), array([0.71, 0.83]), array([2, 2]))

  >>> import numpy as np
  >>> from sklearn.metrics import precision_recall_curve
  >>> from sklearn.metrics import average_precision_score
  >>> y_true = np.array([0, 0, 1, 1])
  >>> y_scores = np.array([0.1, 0.4, 0.35, 0.8])
  >>> precision, recall, threshold = precision_recall_curve(y_true, y_scores)
  >>> precision
  array([0.5 , 0.66, 0.5 , 1. , 1. ])
  >>> recall
  array([1. , 1. , 0.5, 0.5, 0. ])
  >>> threshold
  array([0.1 , 0.35, 0.4 , 0.8 ])
  >>> average_precision_score(y_true, y_scores)
  0.83

Multiclass and multilabel classification
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In a multiclass and multilabel classification task, the notions of precision,
recall, and F-measures can be applied to each label independently. There are a
few ways to combine results across labels, specified by the ``average``
argument to the :func:`average_precision_score`, :func:`f1_score`,
:func:`fbeta_score`, :func:`precision_recall_fscore_support`,
:func:`precision_score` and :func:`recall_score` functions, as described
:ref:`above <average>`.

Note the following behaviors when averaging:
* If all labels are included, "micro"-averaging in a multiclass setting will
  produce precision, recall and :math:`F` that are all identical to accuracy.
* "weighted" averaging may produce an F-score that is not between precision
  and recall.
* "macro" averaging for F-measures is calculated as the arithmetic mean over
  per-label/class F-measures, not the harmonic mean over the arithmetic
  precision and recall means. Both calculations can be seen in the literature
  but are not equivalent, see [OB2019]_ for details.

To make this more explicit, consider the following notation:

* :math:`y` the set of *true* :math:`(sample, label)` pairs
* :math:`\hat{y}` the set of *predicted* :math:`(sample, label)` pairs
* :math:`L` the set of labels
* :math:`S` the set of samples
* :math:`y_s` the subset of :math:`y` with sample :math:`s`, i.e.
  :math:`y_s := \left\{(s', l) \in y | s' = s\right\}`
* :math:`y_l` the subset of :math:`y` with label :math:`l`
* similarly, :math:`\hat{y}_s` and :math:`\hat{y}_l` are subsets of
  :math:`\hat{y}`
* :math:`P(A, B) := \frac{\left| A \cap B \right|}{\left|B\right|}` for some
  sets :math:`A` and :math:`B`
* :math:`R(A, B) := \frac{\left| A \cap B \right|}{\left|A\right|}`
  (Conventions vary on handling :math:`A = \emptyset`; this implementation
  uses :math:`R(A, B):=0`, and similar for :math:`P`.)
* :math:`F_\beta(A, B) := \left(1 + \beta^2\right) \frac{P(A, B) \times R(A, B)}{\beta^2 P(A, B) + R(A, B)}`

Then the metrics are defined as:

.. list-table::
   :header-rows: 1

   * - ``average``
     - Precision
     - Recall
     - F_beta
   * - ``"micro"``
     - :math:`P(y, \hat{y})`
     - :math:`R(y, \hat{y})`
     - :math:`F_\beta(y, \hat{y})`
   * - ``"samples"``
     - :math:`\frac{1}{\left|S\right|} \sum_{s \in S} P(y_s, \hat{y}_s)`
     - :math:`\frac{1}{\left|S\right|} \sum_{s \in S} R(y_s, \hat{y}_s)`
     - :math:`\frac{1}{\left|S\right|} \sum_{s \in S} F_\beta(y_s, \hat{y}_s)`
   * - ``"macro"``
     - :math:`\frac{1}{\left|L\right|} \sum_{l \in L} P(y_l, \hat{y}_l)`
     - :math:`\frac{1}{\left|L\right|} \sum_{l \in L} R(y_l, \hat{y}_l)`
     - :math:`\frac{1}{\left|L\right|} \sum_{l \in L} F_\beta(y_l, \hat{y}_l)`
   * - ``"weighted"``
     - :math:`\frac{1}{\sum_{l \in L} \left|y_l\right|} \sum_{l \in L} \left|y_l\right| P(y_l, \hat{y}_l)`
     - :math:`\frac{1}{\sum_{l \in L} \left|y_l\right|} \sum_{l \in L} \left|y_l\right| R(y_l, \hat{y}_l)`
     - :math:`\frac{1}{\sum_{l \in L} \left|y_l\right|} \sum_{l \in L} \left|y_l\right| F_\beta(y_l, \hat{y}_l)`
   * - ``None``
     - :math:`\langle P(y_l, \hat{y}_l) | l \in L \rangle`
     - :math:`\langle R(y_l, \hat{y}_l) | l \in L \rangle`
     - :math:`\langle F_\beta(y_l, \hat{y}_l) | l \in L \rangle`

::

  >>> from sklearn import metrics
  >>> y_true = [0, 1, 2, 0, 1, 2]
  >>> y_pred = [0, 2, 1, 0, 0, 1]
  >>> metrics.precision_score(y_true, y_pred, average='macro')
  0.22
  >>> metrics.recall_score(y_true, y_pred, average='micro')
  0.33
  >>> metrics.f1_score(y_true, y_pred, average='weighted')
  0.267
  >>> metrics.fbeta_score(y_true, y_pred, average='macro', beta=0.5)
  0.238
  >>> metrics.precision_recall_fscore_support(y_true, y_pred, beta=0.5, average=None)
  (array([0.667, 0., 0.]), array([1., 0., 0.]), array([0.714, 0., 0.]), array([2, 2, 2]))

For multiclass classification with a "negative class", it is possible to
exclude some labels:

  >>> metrics.recall_score(y_true, y_pred, labels=[1, 2], average='micro')
  ... # excluding 0, no labels were correctly recalled
  0.0

Similarly, labels not present in the data sample may be accounted for in
macro-averaging.

  >>> metrics.precision_score(y_true, y_pred, labels=[0, 1, 2, 3], average='macro')
  0.166

.. rubric:: References

.. [OB2019] :arxiv:`Opitz, J., & Burst, S. (2019). "Macro f1 and macro f1."
   <1911.03347>`

.. _jaccard_similarity_score:

Jaccard similarity coefficient score
------------------------------------

The :func:`jaccard_score` function computes the average of Jaccard similarity
coefficients, also called the Jaccard index, between pairs of label sets.

The Jaccard similarity coefficient with a ground truth label set :math:`y` and
predicted label set :math:`\hat{y}`, is defined as

.. math::

   J(y, \hat{y}) = \frac{|y \cap \hat{y}|}{|y \cup \hat{y}|}.

The :func:`jaccard_score` (like :func:`precision_recall_fscore_support`)
applies natively to binary targets.
By computing it set-wise it can be extended to apply to multilabel and multiclass through the use of ``average`` (see above).

In the binary case::

  >>> import numpy as np
  >>> from sklearn.metrics import jaccard_score
  >>> y_true = np.array([[0, 1, 1],
  ...                    [1, 1, 0]])
  >>> y_pred = np.array([[1, 1, 1],
  ...                    [1, 0, 0]])
  >>> jaccard_score(y_true[0], y_pred[0])
  0.6666

In the 2D comparison case (e.g. image similarity):

>>> jaccard_score(y_true, y_pred, average="micro")
0.6

In the multilabel case with binary label indicators::

  >>> jaccard_score(y_true, y_pred, average='samples')
  0.5833
  >>> jaccard_score(y_true, y_pred, average='macro')
  0.6666
  >>> jaccard_score(y_true, y_pred, average=None)
  array([0.5, 0.5, 1. ])

Multiclass problems are binarized and treated like the corresponding multilabel problem::

  >>> y_pred = [0, 2, 1, 2]
  >>> y_true = [0, 1, 2, 2]
  >>> jaccard_score(y_true, y_pred, average=None)
  array([1. , 0. , 0.33])
  >>> jaccard_score(y_true, y_pred, average='macro')
  0.44
  >>> jaccard_score(y_true, y_pred, average='micro')
  0.33

.. _hinge_loss:

Hinge loss
----------

The :func:`hinge_loss` function computes the average distance between the model and the data using hinge loss, a one-sided metric that considers only prediction errors. (Hinge loss is used in maximal margin classifiers such as support vector machines.)

If the true label :math:`y_i` of a binary classification task is encoded as :math:`y_i \in \left\{-1, +1\right\}` for every sample :math:`i`, and :math:`w_i` is the corresponding predicted decision (an array of shape (`n_samples`,) as output by the `decision_function` method), then the hinge loss is defined as:

.. math::

  L_\text{Hinge}(y, w) = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples}-1} \max\left\{1 - w_i y_i, 0\right\}

If there are more than two labels, :func:`hinge_loss` uses a multiclass variant due to Crammer & Singer.

In this case the predicted decision is an array of shape (`n_samples`, `n_labels`). If :math:`w_{i, y_i}` is the predicted decision for the true label :math:`y_i` of the :math:`i`-th sample, and :math:`\hat{w}_{i, y_i} = \max\left\{w_{i, y_j}~|~y_j \ne y_i \right\}` is the maximum of the predicted decisions for all the other labels, then the multiclass hinge loss is defined by:

.. math::

  L_\text{Hinge}(y, w) = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples}-1} \max\left\{1 + \hat{w}_{i, y_i} - w_{i, y_i}, 0\right\}

Here is a small example demonstrating the use of the :func:`hinge_loss` function with an SVM classifier in a binary class problem::

  >>> from sklearn import svm
  >>> from sklearn.metrics import hinge_loss
  >>> X = [[0], [1]]
  >>> y = [-1, 1]
  >>> est = svm.LinearSVC(random_state=0)
  >>> est.fit(X, y)
  LinearSVC(random_state=0)
  >>> pred_decision = est.decision_function([[-2], [3], [0.5]])
  >>> pred_decision
  array([-2.18, 2.36, 0.09])
  >>> hinge_loss([-1, 1, 1], pred_decision)
  0.3

Here is an example demonstrating the use of the :func:`hinge_loss` function with an SVM classifier in a multiclass problem::

  >>> X = np.array([[0], [1], [2], [3]])
  >>> Y = np.array([0, 1, 2, 3])
  >>> labels = np.array([0, 1, 2, 3])
  >>> est = svm.LinearSVC()
  >>> est.fit(X, Y)
  LinearSVC()
  >>> pred_decision = est.decision_function([[-1], [2], [3]])
  >>> y_true = [0, 2, 3]
  >>> hinge_loss(y_true, pred_decision, labels=labels)
  0.56

.. _log_loss:

Log loss
--------

Log loss, also called logistic regression loss or cross-entropy loss, is defined on probability estimates. It is commonly used in (multinomial) logistic regression and neural networks, as well as in some variants of expectation-maximization, and can be used to evaluate the probability outputs (``predict_proba``) of a classifier instead of its discrete predictions.

For binary classification with a true label :math:`y \in \{0,1\}` and a probability estimate :math:`\hat{p} \approx \operatorname{Pr}(y = 1)`, the log loss per sample is the negative log-likelihood of the classifier given the true label:

.. math::

    L_{\log}(y, \hat{p}) = -\log \operatorname{Pr}(y|\hat{p}) = -(y \log (\hat{p}) + (1 - y) \log (1 - \hat{p}))

This extends to the multiclass case as follows.
Let the true labels for a set of samples be encoded as a 1-of-K binary indicator matrix :math:`Y`, i.e., :math:`y_{i,k} = 1` if sample :math:`i` has label :math:`k` taken from a set of :math:`K` labels. Let :math:`\hat{P}` be a matrix of probability estimates, with elements :math:`\hat{p}_{i,k} \approx \operatorname{Pr}(y_{i,k} = 1)`. Then the log loss of the whole set is

.. math::

    L_{\log}(Y, \hat{P}) = -\log \operatorname{Pr}(Y|\hat{P}) = - \frac{1}{N} \sum_{i=0}^{N-1} \sum_{k=0}^{K-1} y_{i,k} \log \hat{p}_{i,k}

To see how this generalizes the binary log loss given above, note that in the binary case, :math:`\hat{p}_{i,0} = 1 - \hat{p}_{i,1}` and :math:`y_{i,0} = 1 - y_{i,1}`, so expanding the inner sum over :math:`y_{i,k} \in \{0,1\}` gives the binary log loss.

The :func:`log_loss` function computes log loss given a list of ground-truth labels and a probability matrix, as returned by an estimator's ``predict_proba`` method.

>>> from sklearn.metrics import log_loss
>>> y_true = [0, 0, 1, 1]
>>> y_pred = [[.9, .1], [.8, .2], [.3, .7], [.01, .99]]
>>> log_loss(y_true, y_pred)
0.1738

The first ``[.9, .1]`` in ``y_pred`` denotes 90% probability that the first sample has label 0. The log loss is non-negative.

.. _matthews_corrcoef:

Matthews correlation coefficient
--------------------------------

The :func:`matthews_corrcoef` function computes the Matthews correlation coefficient (MCC) for binary classes. Quoting Wikipedia: "The Matthews correlation coefficient is used in machine learning as a measure of the quality of binary (two-class) classifications.
It takes into account true and false positives and negatives and is generally regarded as a balanced measure which can be used even if the classes are of very different sizes. The MCC is in essence a correlation coefficient value between -1 and +1. A coefficient of +1 represents a perfect prediction, 0 an average random prediction and -1 an inverse prediction. The statistic is also known as the phi coefficient."

In the binary (two-class) case, :math:`tp`, :math:`tn`, :math:`fp` and :math:`fn` are respectively the number of true positives, true negatives, false positives and false negatives, and the MCC is defined as

.. math::

  MCC = \frac{tp \times tn - fp \times fn}{\sqrt{(tp + fp)(tp + fn)(tn + fp)(tn + fn)}}.

In the multiclass case, the Matthews correlation coefficient can be defined in terms of a :func:`confusion_matrix` :math:`C` for :math:`K` classes. To simplify the definition, consider the following intermediate variables:

* :math:`t_k=\sum_{i}^{K} C_{ik}` the number of times class :math:`k` truly occurred,
* :math:`p_k=\sum_{i}^{K} C_{ki}` the number of times class :math:`k` was predicted,
* :math:`c=\sum_{k}^{K} C_{kk}` the total number of samples correctly predicted,
* :math:`s=\sum_{i}^{K} \sum_{j}^{K} C_{ij}` the total number of samples.

Then the multiclass MCC is defined as:

.. math::

    MCC = \frac{
        c \times s - \sum_{k}^{K} p_k \times t_k
    }{\sqrt{
        (s^2 - \sum_{k}^{K} p_k^2) \times
        (s^2 - \sum_{k}^{K} t_k^2)
    }}

When there are more than two labels, the value of the MCC will no longer range between -1 and +1. Instead the minimum value will be somewhere between -1 and 0 depending on the number and distribution of ground truth labels. The maximum value is always +1. For additional information, see [WikipediaMCC2021]_.
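
The intermediate quantities above are straightforward to compute from a confusion matrix. Below is a minimal numpy sketch of the multiclass definition (using the rows-are-true convention of :func:`confusion_matrix`; the MCC value is unchanged if the roles of :math:`t_k` and :math:`p_k` are swapped), cross-checked against the binary formula on a hypothetical 2x2 example:

```python
import numpy as np

def multiclass_mcc(C):
    """MCC from a K x K confusion matrix, following the definition above."""
    t = C.sum(axis=1)           # t_k: times class k truly occurred
    p = C.sum(axis=0)           # p_k: times class k was predicted
    c = np.trace(C)             # correctly predicted samples
    s = C.sum()                 # total number of samples
    numerator = c * s - (p * t).sum()
    denominator = np.sqrt((s**2 - (p**2).sum()) * (s**2 - (t**2).sum()))
    return numerator / denominator

# Hypothetical two-class confusion matrix: rows = true, columns = predicted.
C = np.array([[1, 1],    # tn=1, fp=1
              [1, 2]])   # fn=1, tp=2
tn, fp, fn, tp = C.ravel()
binary_mcc = (tp * tn - fp * fn) / np.sqrt(
    (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
print(np.isclose(multiclass_mcc(C), binary_mcc))  # prints: True
```

For :math:`K = 2` the two definitions coincide, which is a useful sanity check when implementing the multiclass form.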

Here is a small example illustrating the usage of the :func:`matthews_corrcoef`
function:

>>> from sklearn.metrics import matthews_corrcoef
>>> y_true = [+1, +1, +1, -1]
>>> y_pred = [+1, -1, +1, +1]
>>> matthews_corrcoef(y_true, y_pred)
-0.33

.. rubric:: References

.. [WikipediaMCC2021] Wikipedia contributors. Phi coefficient. Wikipedia, The Free Encyclopedia. April 21, 2021, 12:21 CEST. Available at: https://en.wikipedia.org/wiki/Phi_coefficient. Accessed April 21, 2021.

.. _multilabel_confusion_matrix:

Multi-label confusion matrix
----------------------------

The :func:`multilabel_confusion_matrix` function computes a class-wise (default) or sample-wise (``samplewise=True``) multilabel confusion matrix to evaluate the accuracy of a classification. :func:`multilabel_confusion_matrix` also treats multiclass data as if it were multilabel, as this is a transformation commonly applied to evaluate multiclass problems with binary classification metrics (such as precision, recall, etc.).

When calculating the class-wise multilabel confusion matrix :math:`C`, the count of true negatives for class :math:`i` is :math:`C_{i,0,0}`, false negatives is :math:`C_{i,1,0}`, true positives is :math:`C_{i,1,1}` and false positives is :math:`C_{i,0,1}`.

Here is an example demonstrating the use of the :func:`multilabel_confusion_matrix` function with :term:`multilabel indicator matrix` input::

  >>> import numpy as np
  >>> from sklearn.metrics import multilabel_confusion_matrix
  >>> y_true = np.array([[1, 0, 1],
  ...                    [0, 1, 0]])
  >>> y_pred = np.array([[1, 0, 0],
  ...                    [0, 1, 1]])
  >>> multilabel_confusion_matrix(y_true, y_pred)
  array([[[1, 0],
          [0, 1]],
  <BLANKLINE>
         [[1, 0],
          [0, 1]],
  <BLANKLINE>
         [[0, 1],
          [1, 0]]])

Or a confusion matrix can be constructed for each sample's labels:

>>> multilabel_confusion_matrix(y_true, y_pred, samplewise=True)
array([[[1, 0],
        [1, 1]],
<BLANKLINE>
       [[1, 1],
        [0, 1]]])

Here is an example demonstrating the use of the :func:`multilabel_confusion_matrix` function with :term:`multiclass` input::

  >>> y_true = ["cat", "ant", "cat", "cat", "ant", "bird"]
  >>> y_pred = ["ant", "ant", "cat", "cat", "ant", "cat"]
  >>> multilabel_confusion_matrix(y_true, y_pred,
  ...                             labels=["ant", "bird", "cat"])
  array([[[3, 1],
          [0, 2]],
  <BLANKLINE>
         [[5, 0],
          [1, 0]],
  <BLANKLINE>
         [[2, 1],
          [1, 2]]])

Here are some examples demonstrating the use of the :func:`multilabel_confusion_matrix` function to calculate recall (or sensitivity), specificity, fall out and miss rate for each class in a problem with multilabel indicator matrix input.

Calculating recall (also called the true positive rate or the sensitivity) for each class::

  >>> y_true = np.array([[0, 0, 1],
  ...                    [0, 1, 0],
  ...                    [1, 1, 0]])
  >>> y_pred = np.array([[0, 1, 0],
  ...                    [0, 0, 1],
  ...                    [1, 1, 0]])
  >>> mcm = multilabel_confusion_matrix(y_true, y_pred)
  >>> tn = mcm[:, 0, 0]
  >>> tp = mcm[:, 1, 1]
  >>> fn = mcm[:, 1, 0]
  >>> fp = mcm[:, 0, 1]
  >>> tp / (tp + fn)
  array([1. , 0.5, 0. ])

Calculating specificity (also called the true negative rate) for each class::

  >>> tn / (tn + fp)
  array([1. , 0. , 0.5])

Calculating fall out (also called the false positive rate) for each class::

  >>> fp / (fp + tn)
  array([0. , 1. , 0.5])

Calculating miss rate (also called the false negative rate) for each class::

  >>> fn / (fn + tp)
  array([0. , 0.5, 1. ])

.. _roc_metrics:

Receiver operating characteristic (ROC)
---------------------------------------

The function :func:`roc_curve` computes the receiver operating characteristic curve, or ROC curve.

Quoting Wikipedia: "A receiver operating characteristic (ROC), or simply ROC curve, is a graphical plot which illustrates the performance of a binary classifier system as its discrimination threshold is varied. It is created by plotting the fraction of true positives out of the positives (TPR = true positive rate) vs. the fraction of false positives out of the negatives (FPR = false positive rate), at various threshold settings. TPR is also known as sensitivity, and FPR is one minus the specificity or true negative rate."

This function requires the true binary value and the target scores, which can either be probability estimates of the positive class, confidence values, or binary decisions.

Here is a small example of how to use the :func:`roc_curve` function::

  >>> import numpy as np
  >>> from sklearn.metrics import roc_curve
  >>> y = np.array([1, 1, 2, 2])
  >>> scores = np.array([0.1, 0.4, 0.35, 0.8])
  >>> fpr, tpr, thresholds = roc_curve(y, scores, pos_label=2)
  >>> fpr
  array([0. , 0. , 0.5, 0.5, 1. ])
  >>> tpr
  array([0. , 0.5, 0.5, 1. , 1. ])
  >>> thresholds
  array([ inf, 0.8 , 0.4 , 0.35, 0.1 ])

Compared to metrics such as the subset accuracy, the Hamming loss, or the F1 score, ROC doesn't require optimizing a threshold for each label.

The :func:`roc_auc_score` function, denoted by ROC-AUC or AUROC, computes the area under the ROC curve. By doing so, the curve information is summarized in one number.

The following figure shows the ROC curve and ROC-AUC score for a classifier aimed to distinguish the virginica flower from the rest of the species in the :ref:`iris_dataset`:

.. image:: ../auto_examples/model_selection/images/sphx_glr_plot_roc_001.png
   :target: ../auto_examples/model_selection/plot_roc.html
   :scale: 75
   :align: center

For more information see the Wikipedia article on AUC.

.. _roc_auc_binary:

Binary case
^^^^^^^^^^^

In the **binary case**, you can either provide the probability estimates, using the `classifier.predict_proba()` method, or the non-thresholded decision values given by the `classifier.decision_function()` method. In the case of providing the probability estimates, the probability of the class with the "greater label" should be provided. The "greater label" corresponds to `classifier.classes_[1]` and thus `classifier.predict_proba(X)[:, 1]`. Therefore, the `y_score` parameter is of size (n_samples,).

>>> from sklearn.datasets import load_breast_cancer
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.metrics import roc_auc_score
>>> X, y = load_breast_cancer(return_X_y=True)
>>> clf = LogisticRegression().fit(X, y)
>>> clf.classes_
array([0, 1])

We can use the probability estimates corresponding to `clf.classes_[1]`.

>>> y_score = clf.predict_proba(X)[:, 1]
>>> roc_auc_score(y, y_score)
0.99

Otherwise, we can use the non-thresholded decision values:

>>> roc_auc_score(y, clf.decision_function(X))
0.99

.. _roc_auc_multiclass:

Multi-class case
^^^^^^^^^^^^^^^^

The :func:`roc_auc_score` function can also be used in **multi-class classification**. Two averaging strategies are currently supported: the one-vs-one algorithm computes the average of the pairwise ROC AUC scores, and the one-vs-rest algorithm computes the average of the ROC AUC scores for each class against all other classes. In both cases, the predicted labels are provided in an array with values from 0 to ``n_classes``, and the scores correspond to the probability estimates that a sample belongs to a particular class. The OvO and OvR algorithms support weighting uniformly (``average='macro'``) and by prevalence (``average='weighted'``).

.. dropdown:: One-vs-one Algorithm

  Computes the average AUC of all possible pairwise combinations of classes. [HT2001]_ defines a multiclass AUC metric weighted uniformly:

  .. math::

    \frac{1}{c(c-1)}\sum_{j=1}^{c}\sum_{k > j}^c (\text{AUC}(j | k) + \text{AUC}(k | j))

  where :math:`c` is the number of classes and :math:`\text{AUC}(j | k)` is the AUC with class :math:`j` as the positive class and class :math:`k` as the negative class. In general, :math:`\text{AUC}(j | k) \neq \text{AUC}(k | j)` in the multiclass case. This algorithm is used by setting the keyword argument ``multiclass`` to ``'ovo'`` and ``average`` to ``'macro'``.

  The [HT2001]_ multiclass AUC metric can be extended to be weighted by the prevalence:

  .. math::

    \frac{1}{c(c-1)}\sum_{j=1}^{c}\sum_{k > j}^c p(j \cup k)(\text{AUC}(j | k) + \text{AUC}(k | j))

  where :math:`c` is the number of classes. This algorithm is used by setting the keyword argument ``multiclass`` to ``'ovo'`` and ``average`` to ``'weighted'``. The ``'weighted'`` option returns a prevalence-weighted average as described in [FC2009]_.

.. dropdown:: One-vs-rest Algorithm

  Computes the AUC of each class
  against the rest [PD2000]_. The algorithm is functionally the same as the multilabel case. To enable this algorithm, set the keyword argument ``multiclass`` to ``'ovr'``. Additionally to ``'macro'`` [F2006]_ and ``'weighted'`` [F2001]_ averaging, OvR supports ``'micro'`` averaging.

In applications where a high false positive rate is not tolerable, the parameter ``max_fpr`` of :func:`roc_auc_score` can be used to summarize the ROC curve up to the given limit.

The following figure shows the micro-averaged ROC curve and its corresponding ROC-AUC score for a classifier aimed to distinguish the different species in the :ref:`iris_dataset`:

.. image:: ../auto_examples/model_selection/images/sphx_glr_plot_roc_002.png
   :target: ../auto_examples/model_selection/plot_roc.html
   :scale: 75
   :align: center

.. _roc_auc_multilabel:

Multi-label case
^^^^^^^^^^^^^^^^

In **multi-label classification**, the :func:`roc_auc_score` function is extended by averaging over the labels as above. In this case, you should provide a `y_score` of shape `(n_samples, n_classes)`. Thus, when using the probability estimates, one needs to select the probability of the class with the greater label for each output.

>>> from sklearn.datasets import make_multilabel_classification
>>> from sklearn.multioutput import MultiOutputClassifier
>>> X, y = make_multilabel_classification(random_state=0)
>>> inner_clf = LogisticRegression(random_state=0)
>>> clf = MultiOutputClassifier(inner_clf).fit(X, y)
>>> y_score = np.transpose([y_pred[:, 1] for y_pred in clf.predict_proba(X)])
>>> roc_auc_score(y, y_score, average=None)
array([0.828, 0.851, 0.94, 0.87, 0.95])

And the decision values do not require such processing.

>>> from sklearn.linear_model import RidgeClassifierCV
>>> clf = RidgeClassifierCV().fit(X, y)
>>> y_score = clf.decision_function(X)
>>> roc_auc_score(y, y_score, average=None)
array([0.82, 0.85, 0.93, 0.87, 0.94])

.. rubric:: Examples

* See :ref:`sphx_glr_auto_examples_model_selection_plot_roc.py` for an example of using ROC to evaluate the quality of the output of a classifier.

* See :ref:`sphx_glr_auto_examples_model_selection_plot_roc_crossval.py` for an example of using ROC to evaluate classifier output quality, using cross-validation.

* See :ref:`sphx_glr_auto_examples_applications_plot_species_distribution_modeling.py` for an example of using ROC to model species distribution.

.. rubric:: References

.. [HT2001] Hand, D.J. and Till, R.J., (2001). A simple generalisation of the area under the ROC curve for multiple class classification problems. Machine Learning, 45(2), pp. 171-186.

.. [FC2009] Ferri, Cèsar & Hernandez-Orallo, Jose & Modroiu, R. (2009). An Experimental Comparison of Performance Measures for Classification. Pattern Recognition Letters. 30. 27-38.

.. [PD2000] Provost, F., Domingos, P. (2000). Well-trained PETs: Improving probability estimation trees (Section 6.2), CeDER Working Paper #IS-00-04, Stern School of Business, New York University.

.. [F2006] Fawcett, T., 2006. An introduction to ROC analysis. Pattern Recognition Letters, 27(8), pp. 861-874.

.. [F2001] Fawcett, T., 2001.
   Using rule sets to maximize ROC performance. In Data Mining, 2001. Proceedings IEEE International Conference, pp. 131-138.

.. _det_curve:

Detection error tradeoff (DET)
------------------------------

The function :func:`det_curve` computes the detection error tradeoff (DET) curve [WikipediaDET2017]_. Quoting Wikipedia:

"A detection error tradeoff (DET) graph is a graphical plot of error rates for binary classification systems, plotting false reject rate vs. false accept rate. The x- and y-axes are scaled non-linearly by their standard normal deviates (or just by logarithmic transformation), yielding tradeoff curves that are more linear than ROC curves, and use most of the image area to highlight the differences of importance in the critical operating region."

DET curves are a variation of receiver operating characteristic (ROC) curves where False Negative Rate is plotted on the y-axis instead of True Positive Rate. DET curves are commonly plotted in normal deviate scale by transformation with :math:`\phi^{-1}` (with :math:`\phi` being the cumulative distribution function). The resulting performance curves explicitly visualize the tradeoff of error
types for given classification algorithms. See [Martin1997]_ for examples and further motivation.

This figure compares the ROC and DET curves of two example classifiers on the same classification task:

.. image:: ../auto_examples/model_selection/images/sphx_glr_plot_det_001.png
   :target: ../auto_examples/model_selection/plot_det.html
   :scale: 75
   :align: center

.. dropdown:: Properties

  * DET curves form a linear curve in normal deviate scale if the detection scores are normally (or close-to normally) distributed. It was shown by [Navratil2007]_ that the reverse is not necessarily true, and even more general distributions are able to produce linear DET curves.

  * The normal deviate scale transformation spreads out the points such that a comparatively larger space of plot is occupied. Therefore curves with similar classification performance might be easier to distinguish on a DET plot.

  * With False Negative Rate being "inverse" to True Positive Rate, the point of perfection for DET curves is the origin (in contrast to the top left corner for ROC curves).

.. dropdown:: Applications and limitations

  DET curves are intuitive to read and hence allow quick visual assessment of a classifier's performance. Additionally, DET curves can be consulted for threshold analysis and operating point selection. This is particularly helpful if a comparison of error types is required.

  On the other hand, DET curves do not provide their metric as a single number. Therefore, for either automated evaluation or comparison to other classification tasks, metrics like the derived area under the ROC curve might be better suited.
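
The DET coordinates themselves can be obtained with :func:`det_curve`, which returns the false positive rate ("false accept rate"), the false negative rate ("false reject rate") and the corresponding decision thresholds. A small sketch, reusing toy scores similar to the ROC example above:

```python
import numpy as np
from sklearn.metrics import det_curve

y_true = np.array([0, 0, 1, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8])

# One (fpr, fnr) pair per distinct threshold: raising the threshold
# trades false accepts (fpr) for false rejects (fnr).
fpr, fnr, thresholds = det_curve(y_true, scores)
print(fpr, fnr, thresholds)
```

Plotting :math:`\phi^{-1}(\text{fpr})` against :math:`\phi^{-1}(\text{fnr})` then gives the normal-deviate-scale DET plot discussed above (:class:`DetCurveDisplay` does this directly).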

.. rubric:: Examples

* See :ref:`sphx_glr_auto_examples_model_selection_plot_det.py` for an example comparison between receiver operating characteristic (ROC) curves and detection error tradeoff (DET) curves.

.. rubric:: References

.. [WikipediaDET2017] Wikipedia contributors. Detection error tradeoff. Wikipedia, The Free Encyclopedia. September 4, 2017, 23:33 UTC. Available at: https://en.wikipedia.org/w/index.php?title=Detection_error_tradeoff&oldid=798982054. Accessed February 19, 2018.

.. [Martin1997] A. Martin, G. Doddington, T. Kamm, M. Ordowski, and M. Przybocki, The DET Curve in Assessment of Detection Task Performance, NIST 1997.

.. [Navratil2007] J. Navratil and D. Klusacek, "On Linear DETs", 2007 IEEE International Conference on Acoustics, Speech and Signal Processing - ICASSP '07, Honolulu, HI, 2007, pp. IV-229-IV-232.

.. _zero_one_loss:

Zero one loss
-------------

The :func:`zero_one_loss` function computes the sum or the average of the 0-1 classification loss (:math:`L_{0-1}`) over :math:`n_{\text{samples}}`. By default, the function normalizes (averages) over the samples. To get the sum of the :math:`L_{0-1}`, set ``normalize`` to ``False``.

In multilabel classification, the :func:`zero_one_loss` scores a subset as zero if its labels strictly match the predictions, and as one if there are any errors. By default, the function returns the percentage of imperfectly predicted subsets. To get the count of such subsets instead, set ``normalize`` to ``False``.

If :math:`\hat{y}_i` is the predicted value of the :math:`i`-th sample and :math:`y_i` is the corresponding true value, then the 0-1 loss :math:`L_{0-1}` is defined as:

.. math::

   L_{0-1}(y, \hat{y}) = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples}-1} 1(\hat{y}_i \not= y_i)

where :math:`1(x)` is the indicator function. The zero-one loss can also be computed as :math:`\text{zero-one loss} = 1 - \text{accuracy}`.

>>> from sklearn.metrics import zero_one_loss
>>> y_pred = [1, 2, 3, 4]
>>> y_true = [2, 2, 3, 4]
>>> zero_one_loss(y_true, y_pred)
0.25
>>> zero_one_loss(y_true, y_pred, normalize=False)
1.0

In the multilabel case with binary label indicators, where the first label set [0,1] has an error::

  >>> zero_one_loss(np.array([[0, 1], [1, 1]]), np.ones((2, 2)))
  0.5
  >>> zero_one_loss(np.array([[0, 1], [1, 1]]), np.ones((2, 2)), normalize=False)
  1.0

.. rubric:: Examples

* See :ref:`sphx_glr_auto_examples_feature_selection_plot_rfe_with_cross_validation.py` for an example of zero one loss usage to perform recursive feature elimination with cross-validation.

.. _brier_score_loss:

Brier score loss
----------------

The :func:`brier_score_loss` function computes the Brier score for binary and multiclass probabilistic predictions and is equivalent to the mean squared error. Quoting Wikipedia: "The Brier score is a strictly proper scoring rule that measures the accuracy of probabilistic predictions. [...] [It] is applicable to tasks in which predictions must assign probabilities to a set of mutually exclusive discrete outcomes or classes."

Let the true labels for a set of :math:`N` data points be encoded as a 1-of-K binary indicator matrix :math:`Y`, i.e., :math:`y_{i,k} = 1` if sample :math:`i` has label :math:`k` taken from a set of :math:`K` labels. Let :math:`\hat{P}` be a matrix of probability estimates with elements :math:`\hat{p}_{i,k} \approx \operatorname{Pr}(y_{i,k} = 1)`. Following the original definition by [Brier1950]_, the Brier score is given by:

.. math::

   BS(Y, \hat{P}) = \frac{1}{N}\sum_{i=0}^{N-1}\sum_{k=0}^{K-1}(y_{i,k} - \hat{p}_{i,k})^{2}

The Brier score lies in the interval :math:`[0, 2]` and the lower the value the better the probability estimates are (the mean squared difference is smaller). It is a strictly proper scoring rule, meaning that it achieves the best score only when the estimated probabilities equal the true ones.
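
As a quick sanity check, the double sum in the definition above can be evaluated directly with numpy; the made-up probabilities below are the same as in the multiclass ``brier_score_loss`` example further down:

```python
import numpy as np

# One-hot true labels Y and probability estimates P for three samples
# over K = 3 classes (made-up numbers).
Y = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 1]])
P = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.7, 0.1],
              [0.2, 0.2, 0.6]])

# BS = mean over samples of the squared differences, summed over classes.
bs = ((Y - P) ** 2).sum(axis=1).mean()
print(round(bs, 3))  # prints: 0.147
```

This agrees, up to display precision, with the value reported by :func:`brier_score_loss` for the same probabilities.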

Note that in the binary case, the Brier score is usually divided by two and ranges between :math:`[0,1]`. For binary targets :math:`y_i \in \{0, 1\}` and probability estimates :math:`\hat{p}_i \approx \operatorname{Pr}(y_i = 1)` for the positive class, the Brier score is then equal to:

.. math::

   BS(y, \hat{p}) = \frac{1}{N} \sum_{i=0}^{N - 1}(y_i - \hat{p}_i)^2

The :func:`brier_score_loss` function computes the Brier score given the ground-truth labels and predicted probabilities, as returned by an estimator's ``predict_proba`` method. The ``scale_by_half`` parameter controls which of the two above definitions to follow.

>>> import numpy as np
>>> from sklearn.metrics import brier_score_loss
>>> y_true = np.array([0, 1, 1, 0])
>>> y_true_categorical = np.array(["spam", "ham", "ham", "spam"])
>>> y_prob = np.array([0.1, 0.9, 0.8, 0.4])
>>> brier_score_loss(y_true, y_prob)
0.055
>>> brier_score_loss(y_true, 1 - y_prob, pos_label=0)
0.055
>>> brier_score_loss(y_true_categorical, y_prob, pos_label="ham")
0.055
>>> brier_score_loss(
...     ["eggs", "ham", "spam"],
...     [[0.8, 0.1, 0.1], [0.2, 0.7, 0.1], [0.2, 0.2, 0.6]],
...     labels=["eggs", "ham", "spam"],
... )
0.146

The Brier score can be used to assess how well a classifier is calibrated. However, a lower Brier score loss does not always mean better calibration. This is because, by analogy with the bias-variance decomposition of the mean squared error, the Brier score loss can be decomposed as the sum of calibration loss and refinement loss [Bella2012]_. Calibration loss is defined as the mean squared deviation from empirical probabilities derived from the slope of ROC segments. Refinement loss can be defined as the expected optimal loss as measured by the area under the optimal cost curve. Refinement loss can change independently from calibration loss, thus a lower Brier score loss does not necessarily mean a better calibrated model.
"Only when refinement loss remains the same does a lower Brier score loss always mean better calibration" [Bella2012]_, [Flach2008]_.
.. rubric:: Examples

* See :ref:`sphx_glr_auto_examples_calibration_plot_calibration.py` for an example of Brier score loss usage to perform probability calibration of classifiers.

.. rubric:: References

.. [Brier1950] G. Brier, `Verification of forecasts expressed in terms of probability `_, Monthly Weather Review 78.1 (1950)

.. [Bella2012] Bella, Ferri, Hernández-Orallo, and Ramírez-Quintana, `"Calibration of Machine Learning Models" `_, in Khosrow-Pour, M., "Machine learning: concepts, methodologies, tools and applications." Hershey, PA: Information Science Reference (2012).

.. [Flach2008] Flach, Peter, and Edson Matsubara, `"On classification, ranking, and probability estimation." `_ Dagstuhl Seminar Proceedings. Schloss Dagstuhl-Leibniz-Zentrum für Informatik (2008).

.. _class_likelihood_ratios:

Class likelihood ratios
-----------------------

The :func:`class_likelihood_ratios` function computes the `positive and negative likelihood ratios `_ :math:`LR_\pm` for binary classes, which can be interpreted as the ratio of post-test to pre-test odds, as explained below. As a consequence, this metric is invariant w.r.t. the class prevalence (the number of samples in the positive class divided by the total number of samples) and **can be extrapolated between populations regardless of any possible class imbalance.**

The :math:`LR_\pm` metrics are therefore very useful in settings where the data available to learn and evaluate a classifier is a study population with nearly balanced classes, such as a case-control study, while the target application, i.e. the general population, has very low prevalence.

The positive likelihood ratio :math:`LR_+` is the probability of a classifier to correctly predict that a sample belongs to the positive class divided by the probability of predicting the positive class for a sample belonging to the negative class:
.. math::

  LR_+ = \frac{\text{PR}(P+|T+)}{\text{PR}(P+|T-)}.

The notation here refers to the predicted (:math:`P`) or true (:math:`T`) label, and the signs :math:`+` and :math:`-` refer to the positive and negative class, respectively, e.g. :math:`P+` stands for "predicted positive".

Analogously, the negative likelihood ratio :math:`LR_-` is the probability of a sample of the positive class being classified as belonging to the negative class divided by the probability of a sample of the negative class being correctly classified:

.. math::

  LR_- = \frac{\text{PR}(P-|T+)}{\text{PR}(P-|T-)}.

For classifiers above chance, :math:`LR_+` is greater than 1: **higher is better**, while :math:`LR_-` ranges from 0 to 1: **lower is better**. Values of :math:`LR_\pm \approx 1` correspond to chance level.

Notice that probabilities differ from counts, for instance :math:`\operatorname{PR}(P+|T+)` is not equal to the number of true positive counts ``tp`` (see `the wikipedia page `_ for the actual formulas).

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_model_selection_plot_likelihood_ratios.py`

.. dropdown:: Interpretation across varying prevalence

  Both class likelihood ratios are interpretable in terms of an odds ratio (pre-test and post-test):

  .. math::

    \text{post-test odds} = \text{Likelihood ratio} \times \text{pre-test odds}.

  Odds are in general related to probabilities via

  .. math::

    \text{odds} = \frac{\text{probability}}{1 - \text{probability}},

  or equivalently

  .. math::

    \text{probability} = \frac{\text{odds}}{1 + \text{odds}}.

  On a given population, the pre-test probability is given by the prevalence. By converting odds to probabilities, the likelihood ratios can be translated into a probability of truly belonging to either class before and after a classifier prediction:

  .. math::

    \text{post-test odds} = \text{Likelihood ratio} \times \frac{\text{pre-test probability}}{1 - \text{pre-test probability}},
  .. math::

    \text{post-test probability} = \frac{\text{post-test odds}}{1 + \text{post-test odds}}.

.. dropdown:: Mathematical divergences

  The positive likelihood ratio (`LR+`) is undefined when :math:`fp = 0`, meaning the classifier does not misclassify any negative labels as positives. This condition can either indicate a perfect identification of all the negative cases or, if there are also no true positive predictions (:math:`tp = 0`), that the classifier does not predict the positive class at all. In the first case, `LR+` can be interpreted as `np.inf`, in the second case (for instance, with highly imbalanced data) it can be interpreted as `np.nan`.
  The negative likelihood ratio (`LR-`) is undefined when :math:`tn = 0`. Such divergence is invalid, as :math:`LR_- > 1.0` would indicate an increase in the odds of a sample belonging to the positive class after being classified as negative, as if the act of classifying caused the positive condition. This includes the case of a :class:`~sklearn.dummy.DummyClassifier` that always predicts the positive class (i.e. when :math:`tn = fn = 0`).

  Both class likelihood ratios (`LR+` and `LR-`) are undefined when :math:`tp = fn = 0`, which means that no samples of the positive class were present in the test set. This can happen when cross-validating on highly imbalanced data and also leads to a division by zero.

  If a division by zero occurs and `raise_warning` is set to `True` (default), :func:`class_likelihood_ratios` raises an `UndefinedMetricWarning` and returns `np.nan` by default to avoid pollution when averaging over cross-validation folds. Users can set return values in case of a division by zero with the `replace_undefined_by` param.

  For a worked-out demonstration of the :func:`class_likelihood_ratios` function, see the example below.

.. dropdown:: References

  * `Wikipedia entry for Likelihood ratios in diagnostic testing `_

  * Brenner, H., & Gefeller, O. (1997). Variation of sensitivity, specificity, likelihood ratios and predictive values with disease prevalence. Statistics in medicine, 16(9), 981-991.

.. _d2_score_classification:

D² score for classification
---------------------------

The D² score computes the fraction of deviance explained.
It is a generalization of R², where the squared error is generalized and replaced by a classification deviance of choice :math:`\text{dev}(y, \hat{y})` (e.g., log loss or Brier score). D² is a form of a *skill score*. It is calculated as

.. math::

  D^2(y, \hat{y}) = 1 - \frac{\text{dev}(y, \hat{y})}{\text{dev}(y, y_{\text{null}})} \,,

where :math:`y_{\text{null}}` is the optimal prediction of an intercept-only model (e.g., the per-class proportion of `y_true` in the case of the log loss and Brier score).

Like R², the best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts :math:`y_{\text{null}}`, disregarding the input features, would get a D² score of 0.0.

.. dropdown:: D2 log loss score

  The :func:`d2_log_loss_score` function implements the special case of D² with the log loss, see :ref:`log_loss`, i.e.:

  .. math::

    \text{dev}(y, \hat{y}) = \text{log\_loss}(y, \hat{y}).

  Here are some usage examples of the :func:`d2_log_loss_score` function::

    >>> from sklearn.metrics import d2_log_loss_score
    >>> y_true = [1, 1, 2, 3]
    >>> y_pred = [
    ...     [0.5, 0.25, 0.25],
    ...     [0.5, 0.25, 0.25],
    ...     [0.5, 0.25, 0.25],
    ...     [0.5, 0.25, 0.25],
    ... ]
    >>> d2_log_loss_score(y_true, y_pred)
    0.0
    >>> y_true = [1, 2, 3]
    >>> y_pred = [
    ...     [0.98, 0.01, 0.01],
    ...     [0.01, 0.98, 0.01],
    ...     [0.01, 0.01, 0.98],
    ... ]
    >>> d2_log_loss_score(y_true, y_pred)
    0.981
    >>> y_true = [1, 2, 3]
    >>> y_pred = [
    ...     [0.1, 0.6, 0.3],
    ...     [0.1, 0.6, 0.3],
    ...     [0.4, 0.5, 0.1],
    ... ]
    >>> d2_log_loss_score(y_true, y_pred)
    -0.552

.. dropdown:: D2 Brier score

  The :func:`d2_brier_score` function implements the special case of D² with the Brier score, see :ref:`brier_score_loss`, i.e.:

  .. math::

    \text{dev}(y, \hat{y}) = \text{brier\_score\_loss}(y, \hat{y}).

  This is also referred to as the Brier Skill Score (BSS).
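The skill-score identity :math:`D^2 = 1 - \text{dev}/\text{dev}_{\text{null}}` can be sanity-checked with a short plain-Python sketch. It uses the Brier deviance written out by hand and the same toy data as the surrounding usage examples; with a constant prediction equal to the class proportions, the deviance matches the null deviance and the score is exactly 0.

```python
# Plain-Python check of D² = 1 - dev / dev_null with the Brier deviance.
y_true = [1, 1, 2, 3]
classes = [1, 2, 3]
y_pred = [[0.5, 0.25, 0.25]] * 4          # the same constant prediction for every sample

# The intercept-only (null) model predicts the per-class proportions of y_true.
n = len(y_true)
y_null = [y_true.count(c) / n for c in classes]    # [0.5, 0.25, 0.25]

def brier(y, P):
    """Multiclass Brier score of probability matrix P against labels y."""
    Y = [[1.0 if c == yi else 0.0 for c in classes] for yi in y]
    return sum((Y[i][k] - P[i][k]) ** 2
               for i in range(len(y)) for k in range(len(classes))) / len(y)

dev = brier(y_true, y_pred)
dev_null = brier(y_true, [y_null] * n)
d2 = 1 - dev / dev_null
print(d2)  # 0.0 -- the constant null prediction has no skill
```

This mirrors the `d2_brier_score(y_true, y_pred)` result of `0.0` shown in the usage examples.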
  Here are some usage examples of the :func:`d2_brier_score` function::

    >>> from sklearn.metrics import d2_brier_score
    >>> y_true = [1, 1, 2, 3]
    >>> y_pred = [
    ...     [0.5, 0.25, 0.25],
    ...     [0.5, 0.25, 0.25],
    ...     [0.5, 0.25, 0.25],
    ...     [0.5, 0.25, 0.25],
    ... ]
    >>> d2_brier_score(y_true, y_pred)
    0.0
    >>> y_true = [1, 2, 3]
    >>> y_pred = [
    ...     [0.98, 0.01, 0.01],
    ...     [0.01, 0.98, 0.01],
    ...     [0.01, 0.01, 0.98],
    ... ]
    >>> d2_brier_score(y_true, y_pred)
    0.9991
    >>> y_true = [1, 2, 3]
    >>> y_pred = [
    ...     [0.1, 0.6, 0.3],
    ...     [0.1, 0.6, 0.3],
    ...     [0.4, 0.5, 0.1],
    ... ]
    >>> d2_brier_score(y_true, y_pred)
    -0.370...

.. _multilabel_ranking_metrics:

Multilabel ranking metrics
==========================

.. currentmodule:: sklearn.metrics

In multilabel learning, each sample can have any number of ground truth labels associated with it. The goal is to give high scores and better rank to the ground truth labels.

.. _coverage_error:

Coverage error
--------------

The :func:`coverage_error` function computes the average number of labels that have to be included in the final prediction such that all true labels are predicted. This is useful if you want to know how many top-scored labels you have to predict on average without missing any true one. The best value of this metric is thus the average number of true labels.

.. note::

  Our implementation's score is 1 greater than the one given in Tsoumakas et al., 2010. This extends it to handle the degenerate case in which an instance has 0 true labels.

Formally, given a binary indicator matrix of the ground truth labels :math:`y \in \left\{0, 1\right\}^{n_\text{samples} \times n_\text{labels}}` and the score associated with each label :math:`\hat{f} \in \mathbb{R}^{n_\text{samples} \times n_\text{labels}}`, the coverage is defined as
.. math::

  coverage(y, \hat{f}) = \frac{1}{n_{\text{samples}}}
    \sum_{i=0}^{n_{\text{samples}} - 1} \max_{j:y_{ij} = 1} \text{rank}_{ij}

with :math:`\text{rank}_{ij} = \left|\left\{k: \hat{f}_{ik} \geq \hat{f}_{ij} \right\}\right|`. Given the rank definition, ties in ``y_scores`` are broken by giving the maximal rank that would have been assigned to all tied values.

Here is a small example of usage of this function::

  >>> import numpy as np
  >>> from sklearn.metrics import coverage_error
  >>> y_true = np.array([[1, 0, 0], [0, 0, 1]])
  >>> y_score = np.array([[0.75, 0.5, 1], [1, 0.2, 0.1]])
  >>> coverage_error(y_true, y_score)
  2.5

.. _label_ranking_average_precision:

Label ranking average precision
-------------------------------

The :func:`label_ranking_average_precision_score` function implements label ranking average precision (LRAP). This metric is linked to the :func:`average_precision_score` function, but is based on the notion of label ranking instead of precision and recall.

Label ranking average precision (LRAP) averages over the samples the answer to the following question: for each ground truth label, what fraction of higher-ranked labels were true labels? This performance measure will be higher if you are able to give better rank to the labels associated with each sample. The obtained score is always strictly greater than 0, and the best value is 1. If there is exactly one relevant label per sample, label ranking average precision is equivalent to the `mean reciprocal rank `_.

Formally, given a binary indicator matrix of the ground truth labels :math:`y \in \left\{0, 1\right\}^{n_\text{samples} \times n_\text{labels}}` and the score associated with each label :math:`\hat{f} \in \mathbb{R}^{n_\text{samples} \times n_\text{labels}}`, the average precision is defined as
.. math::

  LRAP(y, \hat{f}) = \frac{1}{n_{\text{samples}}}
    \sum_{i=0}^{n_{\text{samples}} - 1} \frac{1}{||y_i||_0}
    \sum_{j:y_{ij} = 1} \frac{|\mathcal{L}_{ij}|}{\text{rank}_{ij}}

where :math:`\mathcal{L}_{ij} = \left\{k: y_{ik} = 1, \hat{f}_{ik} \geq \hat{f}_{ij} \right\}`, :math:`\text{rank}_{ij} = \left|\left\{k: \hat{f}_{ik} \geq \hat{f}_{ij} \right\}\right|`,
:math:`|\cdot|` computes the cardinality of the set (i.e., the number of elements in the set), and :math:`||\cdot||_0` is the :math:`\ell_0` "norm" (which computes the number of nonzero elements in a vector).

Here is a small example of usage of this function::

  >>> import numpy as np
  >>> from sklearn.metrics import label_ranking_average_precision_score
  >>> y_true = np.array([[1, 0, 0], [0, 0, 1]])
  >>> y_score = np.array([[0.75, 0.5, 1], [1, 0.2, 0.1]])
  >>> label_ranking_average_precision_score(y_true, y_score)
  0.416

.. _label_ranking_loss:

Ranking loss
------------

The :func:`label_ranking_loss` function computes the ranking loss, which averages over the samples the number of label pairs that are incorrectly ordered, i.e. true labels that have a lower score than false labels, weighted by the inverse of the number of ordered pairs of false and true labels. The lowest achievable ranking loss is zero.

Formally, given a binary indicator matrix of the ground truth labels :math:`y \in \left\{0, 1\right\}^{n_\text{samples} \times n_\text{labels}}` and the score associated with each label :math:`\hat{f} \in \mathbb{R}^{n_\text{samples} \times n_\text{labels}}`, the ranking loss is defined as

.. math::

  ranking\_loss(y, \hat{f}) = \frac{1}{n_{\text{samples}}}
    \sum_{i=0}^{n_{\text{samples}} - 1}
    \frac{1}{||y_i||_0(n_\text{labels} - ||y_i||_0)}
    \left|\left\{(k, l): \hat{f}_{ik} \leq \hat{f}_{il}, y_{ik} = 1, y_{il} = 0 \right\}\right|

where :math:`|\cdot|` computes the cardinality of the set (i.e., the number of elements in the set) and :math:`||\cdot||_0` is the :math:`\ell_0` "norm" (which computes the number of nonzero elements in a vector).
Here is a small example of usage of this function::

  >>> import numpy as np
  >>> from sklearn.metrics import label_ranking_loss
  >>> y_true = np.array([[1, 0, 0], [0, 0, 1]])
  >>> y_score = np.array([[0.75, 0.5, 1], [1, 0.2, 0.1]])
  >>> label_ranking_loss(y_true, y_score)
  0.75
  >>> # With the following prediction, we have perfect and minimal loss
  >>> y_score = np.array([[1.0, 0.1, 0.2], [0.1, 0.2, 0.9]])
  >>> label_ranking_loss(y_true, y_score)
  0.0

.. dropdown:: References

  * Tsoumakas, G., Katakis, I., & Vlahavas, I. (2010). Mining multi-label data. In Data mining and knowledge discovery handbook (pp. 667-685). Springer US.

.. _ndcg:

Normalized Discounted Cumulative Gain
-------------------------------------

Discounted Cumulative Gain (DCG) and Normalized Discounted Cumulative Gain (NDCG) are ranking metrics implemented in :func:`~sklearn.metrics.dcg_score` and :func:`~sklearn.metrics.ndcg_score`; they compare a predicted order to ground-truth scores, such as the relevance of answers to a query.

From the Wikipedia page for Discounted Cumulative Gain:

  "Discounted cumulative gain (DCG) is a measure of ranking quality. In information retrieval, it is often used to measure effectiveness of web search engine algorithms or related applications. Using a graded relevance scale of documents in a search-engine result set, DCG measures the usefulness, or gain, of a document based on its position in the result list. The gain is accumulated from the top of the result list to the bottom, with the gain of each result discounted at lower ranks."

DCG orders the true targets (e.g. relevance of query answers) in the predicted order, then multiplies them by a logarithmic decay and sums the result. The sum can be truncated after the first :math:`K` results, in which case we call it DCG@K. NDCG, or NDCG@K, is DCG divided by the DCG obtained by a perfect prediction, so that it is always between 0 and 1. Usually, NDCG is preferred to DCG.
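The DCG/NDCG computation described above can be sketched in a few lines of plain Python. This is an illustrative sketch only, using the :math:`\log(1 + r)` discount from the formula in this section and made-up relevance values; prefer :func:`~sklearn.metrics.dcg_score` and :func:`~sklearn.metrics.ndcg_score` in practice.

```python
import math

# Sketch of DCG/NDCG: sum true relevances in predicted order, discounted by rank.
def dcg(relevance, ranking, k=None):
    """DCG@k of `ranking` (document indices) against graded `relevance`."""
    k = len(ranking) if k is None else k
    return sum(relevance[doc] / math.log(1 + r)
               for r, doc in enumerate(ranking[:k], start=1))

true_relevance = [3, 2, 0, 1]      # graded relevance of each document (toy values)
predicted_order = [1, 0, 3, 2]     # document indices as ranked by a model
ideal_order = [0, 1, 3, 2]         # documents sorted by true relevance

# NDCG: DCG of the predicted order divided by the DCG of the ideal order.
ndcg = dcg(true_relevance, predicted_order) / dcg(true_relevance, ideal_order)
print(round(ndcg, 3))  # 0.922 -- close to, but below, the perfect score of 1
```

Swapping the two top documents only costs a little here because the discount at ranks 1 and 2 is still small; mistakes deep in the ranking are discounted even more.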
Compared with the ranking loss, NDCG can take into account relevance scores, rather than only a ground-truth ranking. So if the ground truth consists only of an ordering, the ranking loss should be preferred; if the ground truth consists of actual usefulness scores (e.g. 0 for irrelevant, 1 for relevant, 2 for very relevant), NDCG can be used.
For one sample, given the vector of continuous ground-truth values for each target :math:`y \in \mathbb{R}^{M}`, where :math:`M` is the number of outputs, and the prediction :math:`\hat{y}`, which induces the ranking function :math:`f`, the DCG score is

.. math::

  \sum_{r=1}^{\min(K, M)}\frac{y_{f(r)}}{\log(1 + r)}

and the NDCG score is the DCG score divided by the DCG score obtained for :math:`y`.

.. dropdown:: References

  * `Wikipedia entry for Discounted Cumulative Gain `_

  * Jarvelin, K., & Kekalainen, J. (2002). Cumulated gain-based evaluation of IR techniques. ACM Transactions on Information Systems (TOIS), 20(4), 422-446.

  * Wang, Y., Wang, L., Li, Y., He, D., Chen, W., & Liu, T. Y. (2013, May). A theoretical analysis of NDCG ranking measures. In Proceedings of the 26th Annual Conference on Learning Theory (COLT 2013).

  * McSherry, F., & Najork, M. (2008, March). Computing information retrieval performance measures efficiently in the presence of tied scores. In European conference on information retrieval (pp. 414-421). Springer, Berlin, Heidelberg.

.. _regression_metrics:

Regression metrics
==================

.. currentmodule:: sklearn.metrics

The :mod:`sklearn.metrics` module implements several loss, score, and utility functions to measure regression performance. Some of those have been enhanced to handle the multioutput case: :func:`mean_squared_error`, :func:`mean_absolute_error`, :func:`r2_score`, :func:`explained_variance_score`, :func:`mean_pinball_loss`, :func:`d2_pinball_score` and :func:`d2_absolute_error_score`.

These functions have a ``multioutput`` keyword argument which specifies the way the scores or losses for each individual target should be averaged. The default is ``'uniform_average'``, which specifies a uniformly weighted mean over outputs.
If an ``ndarray`` of shape ``(n_outputs,)`` is passed, then its entries are interpreted as weights and an according weighted average is returned. If ``multioutput`` is ``'raw_values'``, then all unaltered individual scores or losses will be returned in an array of shape ``(n_outputs,)``.

The :func:`r2_score` and :func:`explained_variance_score` accept an additional value ``'variance_weighted'`` for the ``multioutput`` parameter. This option leads to a weighting of each individual score by the variance of the corresponding target variable. This setting quantifies the globally captured unscaled variance. If the target variables are of different scale, then this score puts more importance on explaining the higher variance variables.

.. _r2_score:

R² score, the coefficient of determination
------------------------------------------

The :func:`r2_score` function computes the `coefficient of determination `_, usually denoted as :math:`R^2`.

It represents the proportion of variance (of y) that has been explained by the independent variables in the model. It provides an indication of goodness of fit and therefore a measure of how well unseen samples are likely to be predicted by the model, through the proportion of explained variance.

As such variance is dataset dependent, :math:`R^2` may not be meaningfully comparable across different datasets. The best possible score is 1.0 and it can be negative (because the model can be arbitrarily worse). A constant model that always predicts the expected (average) value of y, disregarding the input features, would get an :math:`R^2` score of 0.0.

Note: when the prediction residuals have zero mean, the :math:`R^2` score and the :ref:`explained_variance_score` are identical.

If :math:`\hat{y}_i` is the predicted value of the :math:`i`-th sample and :math:`y_i` is the corresponding true value for a total of :math:`n` samples, the estimated :math:`R^2` is defined as:
.. math::

  R^2(y, \hat{y}) = 1 - \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2}

where :math:`\bar{y} = \frac{1}{n} \sum_{i=1}^{n} y_i` and :math:`\sum_{i=1}^{n} (y_i - \hat{y}_i)^2 = \sum_{i=1}^{n} \epsilon_i^2`.

Note that :func:`r2_score` calculates unadjusted :math:`R^2` without correcting for bias in the sample variance of y.

In the particular case where the true target is constant, the :math:`R^2` score is not finite: it is either ``NaN`` (perfect predictions) or ``-Inf`` (imperfect predictions).
Such non-finite scores may prevent model optimization, such as grid-search cross-validation, from being performed correctly. For this reason the default behaviour of :func:`r2_score` is to replace them with 1.0 (perfect predictions) or 0.0 (imperfect predictions). If ``force_finite`` is set to ``False``, this score falls back on the original :math:`R^2` definition.

Here is a small example of usage of the :func:`r2_score` function::

  >>> from sklearn.metrics import r2_score
  >>> y_true = [3, -0.5, 2, 7]
  >>> y_pred = [2.5, 0.0, 2, 8]
  >>> r2_score(y_true, y_pred)
  0.948
  >>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
  >>> y_pred = [[0, 2], [-1, 2], [8, -5]]
  >>> r2_score(y_true, y_pred, multioutput='variance_weighted')
  0.938
  >>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
  >>> y_pred = [[0, 2], [-1, 2], [8, -5]]
  >>> r2_score(y_true, y_pred, multioutput='uniform_average')
  0.936
  >>> r2_score(y_true, y_pred, multioutput='raw_values')
  array([0.965, 0.908])
  >>> r2_score(y_true, y_pred, multioutput=[0.3, 0.7])
  0.925
  >>> y_true = [-2, -2, -2]
  >>> y_pred = [-2, -2, -2]
  >>> r2_score(y_true, y_pred)
  1.0
  >>> r2_score(y_true, y_pred, force_finite=False)
  nan
  >>> y_true = [-2, -2, -2]
  >>> y_pred = [-2, -2, -2 + 1e-8]
  >>> r2_score(y_true, y_pred)
  0.0
  >>> r2_score(y_true, y_pred, force_finite=False)
  -inf

.. rubric:: Examples

* See :ref:`sphx_glr_auto_examples_linear_model_plot_lasso_and_elasticnet.py` for an example of R² score usage to evaluate Lasso and Elastic Net on sparse signals.
.. _mean_absolute_error:

Mean absolute error
-------------------

The :func:`mean_absolute_error` function computes the `mean absolute error `_, a risk metric corresponding to the expected value of the absolute error loss or :math:`l1`-norm loss.

If :math:`\hat{y}_i` is the predicted value of the :math:`i`-th sample, and :math:`y_i` is the corresponding true value, then the mean absolute error (MAE) estimated over :math:`n_{\text{samples}}` is defined as

.. math::

  \text{MAE}(y, \hat{y}) = \frac{1}{n_{\text{samples}}} \sum_{i=0}^{n_{\text{samples}}-1} \left| y_i - \hat{y}_i \right|.

Here is a small example of usage of the :func:`mean_absolute_error` function::

  >>> from sklearn.metrics import mean_absolute_error
  >>> y_true = [3, -0.5, 2, 7]
  >>> y_pred = [2.5, 0.0, 2, 8]
  >>> mean_absolute_error(y_true, y_pred)
  0.5
  >>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
  >>> y_pred = [[0, 2], [-1, 2], [8, -5]]
  >>> mean_absolute_error(y_true, y_pred)
  0.75
  >>> mean_absolute_error(y_true, y_pred, multioutput='raw_values')
  array([0.5, 1. ])
  >>> mean_absolute_error(y_true, y_pred, multioutput=[0.3, 0.7])
  0.85

.. _mean_squared_error:

Mean squared error
------------------

The :func:`mean_squared_error` function computes the `mean squared error `_, a risk metric corresponding to the expected value of the squared (quadratic) error or loss.

If :math:`\hat{y}_i` is the predicted value of the :math:`i`-th sample, and :math:`y_i` is the corresponding true value, then the mean squared error (MSE) estimated over :math:`n_{\text{samples}}` is defined as

.. math::

  \text{MSE}(y, \hat{y}) = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples} - 1} (y_i - \hat{y}_i)^2.
Here is a small example of usage of the :func:`mean_squared_error` function::

  >>> from sklearn.metrics import mean_squared_error
  >>> y_true = [3, -0.5, 2, 7]
  >>> y_pred = [2.5, 0.0, 2, 8]
  >>> mean_squared_error(y_true, y_pred)
  0.375
  >>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
  >>> y_pred = [[0, 2], [-1, 2], [8, -5]]
  >>> mean_squared_error(y_true, y_pred)
  0.7083

.. rubric:: Examples

* See :ref:`sphx_glr_auto_examples_ensemble_plot_gradient_boosting_regression.py` for an example of mean squared error usage to evaluate gradient boosting regression.

Taking the square root of the MSE, called the root mean squared error (RMSE), is another common metric that provides a measure in the same units as the target variable. RMSE is available through the :func:`root_mean_squared_error` function.
.. _mean_squared_log_error:

Mean squared logarithmic error
------------------------------

The :func:`mean_squared_log_error` function computes a risk metric corresponding to the expected value of the squared logarithmic (quadratic) error or loss.

If :math:`\hat{y}_i` is the predicted value of the :math:`i`-th sample, and :math:`y_i` is the corresponding true value, then the mean squared logarithmic error (MSLE) estimated over :math:`n_{\text{samples}}` is defined as

.. math::

  \text{MSLE}(y, \hat{y}) = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples} - 1} (\log_e (1 + y_i) - \log_e (1 + \hat{y}_i) )^2,

where :math:`\log_e (x)` means the natural logarithm of :math:`x`. This metric is best to use when targets have exponential growth, such as population counts, average sales of a commodity over a span of years, etc. Note that this metric penalizes an under-predicted estimate more heavily than an over-predicted one.

Here is a small example of usage of the :func:`mean_squared_log_error` function::

  >>> from sklearn.metrics import mean_squared_log_error
  >>> y_true = [3, 5, 2.5, 7]
  >>> y_pred = [2.5, 5, 4, 8]
  >>> mean_squared_log_error(y_true, y_pred)
  0.0397
  >>> y_true = [[0.5, 1], [1, 2], [7, 6]]
  >>> y_pred = [[0.5, 2], [1, 2.5], [8, 8]]
  >>> mean_squared_log_error(y_true, y_pred)
  0.044

The root mean squared logarithmic error (RMSLE) is available through the :func:`root_mean_squared_log_error` function.

.. _mean_absolute_percentage_error:

Mean absolute percentage error
------------------------------

The :func:`mean_absolute_percentage_error` (MAPE), also known as mean absolute percentage deviation (MAPD), is an evaluation metric for regression problems. The idea of this metric is to be sensitive to relative errors.
It is for example not changed by a global scaling of the target variable.

If :math:`\hat{y}_i` is the predicted value of the :math:`i`-th sample and :math:`y_i` is the corresponding true value, then the mean absolute percentage error (MAPE) estimated over :math:`n_{\text{samples}}` is defined as

.. math::

  \text{MAPE}(y, \hat{y}) = \frac{1}{n_{\text{samples}}} \sum_{i=0}^{n_{\text{samples}}-1} \frac{\left| y_i - \hat{y}_i \right|}{\max(\epsilon, \left| y_i \right|)}

where :math:`\epsilon` is an arbitrary small yet strictly positive number to avoid undefined results when y is zero.

The :func:`mean_absolute_percentage_error` function supports multioutput.

Here is a small example of usage of the :func:`mean_absolute_percentage_error` function::

  >>> from sklearn.metrics import mean_absolute_percentage_error
  >>> y_true = [1, 10, 1e6]
  >>> y_pred = [0.9, 15, 1.2e6]
  >>> mean_absolute_percentage_error(y_true, y_pred)
  0.2666

In the above example, if we had used `mean_absolute_error`, it would have ignored the small magnitude values and only reflected the error in the prediction of the highest magnitude value. That problem is resolved with MAPE because it calculates the relative percentage error with respect to the actual output.

.. note::

  The MAPE formula here does not represent the common "percentage" definition: the percentage in the range [0, 100] is converted to a relative value in the range [0, 1] by dividing by 100. Thus, an error of 200% corresponds to a relative error of 2. The motivation here is to have a range of values that is more consistent with other error metrics in scikit-learn, such as `accuracy_score`. To obtain the mean absolute percentage error as per the Wikipedia formula, multiply the `mean_absolute_percentage_error` computed here by 100.

.. dropdown:: References

  * `Wikipedia entry for Mean Absolute Percentage Error `_
.. _median_absolute_error:

Median absolute error
---------------------

The :func:`median_absolute_error` is particularly interesting because it is
robust to outliers. The loss is calculated by taking the median of all
absolute differences between the target and the prediction.

If :math:`\hat{y}_i` is the predicted value of the :math:`i`-th sample and
:math:`y_i` is the corresponding true value, then the median absolute error
(MedAE) estimated over :math:`n_{\text{samples}}` is defined as

.. math::

  \text{MedAE}(y, \hat{y}) = \text{median}(\mid y_1 - \hat{y}_1 \mid, \ldots, \mid y_n - \hat{y}_n \mid).

| https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/model_evaluation.rst | main | scikit-learn |
The :func:`median_absolute_error` does not support multioutput.

Here is a small example of usage of the :func:`median_absolute_error`
function::

  >>> from sklearn.metrics import median_absolute_error
  >>> y_true = [3, -0.5, 2, 7]
  >>> y_pred = [2.5, 0.0, 2, 8]
  >>> median_absolute_error(y_true, y_pred)
  0.5

.. _max_error:

Max error
---------

The :func:`max_error` function computes the maximum residual error, a metric
that captures the worst case error between the predicted value and the true
value. In a perfectly fitted single output regression model, ``max_error``
would be ``0`` on the training set; although this would be highly unlikely
in the real world, this metric shows the extent of error that the model had
when it was fitted.

If :math:`\hat{y}_i` is the predicted value of the :math:`i`-th sample, and
:math:`y_i` is the corresponding true value, then the max error is defined
as

.. math::

  \text{Max Error}(y, \hat{y}) = \max(| y_i - \hat{y}_i |)

Here is a small example of usage of the :func:`max_error` function::

  >>> from sklearn.metrics import max_error
  >>> y_true = [3, 2, 7, 1]
  >>> y_pred = [9, 2, 7, 1]
  >>> max_error(y_true, y_pred)
  6.0

The :func:`max_error` does not support multioutput.

.. _explained_variance_score:

Explained variance score
------------------------

The :func:`explained_variance_score` computes the explained variance
regression score.
If :math:`\hat{y}` is the estimated target output, :math:`y` the
corresponding (correct) target output, and :math:`Var` is the variance (the
square of the standard deviation), then the explained variance is estimated
as follows:

.. math::

  explained\_{}variance(y, \hat{y}) = 1 - \frac{Var\{ y - \hat{y}\}}{Var\{y\}}

The best possible score is 1.0, lower values are worse.

.. topic:: Link to :ref:`r2_score`

    The difference between the explained variance score and the
    :ref:`r2_score` is that the explained variance score does not account
    for systematic offset in the prediction. For this reason, the
    :ref:`r2_score` should be preferred in general.

In the particular case where the true target is constant, the explained
variance score is not finite: it is either ``NaN`` (perfect predictions) or
``-Inf`` (imperfect predictions). Such non-finite scores may prevent model
optimization, such as grid-search cross-validation, from being performed
correctly. For this reason the default behaviour of
:func:`explained_variance_score` is to replace them with 1.0 (perfect
predictions) or 0.0 (imperfect predictions). You can set the
``force_finite`` parameter to ``False`` to prevent this fix from happening
and fall back on the original explained variance score.

Here is a small example of usage of the :func:`explained_variance_score`
function::

  >>> from sklearn.metrics import explained_variance_score
  >>> y_true = [3, -0.5, 2, 7]
  >>> y_pred = [2.5, 0.0, 2, 8]
  >>> explained_variance_score(y_true, y_pred)
  0.957
  >>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
  >>> y_pred = [[0, 2], [-1, 2], [8, -5]]
  >>> explained_variance_score(y_true, y_pred, multioutput='raw_values')
  array([0.967, 1.
  ])
  >>> explained_variance_score(y_true, y_pred, multioutput=[0.3, 0.7])
  0.990
  >>> y_true = [-2, -2, -2]
  >>> y_pred = [-2, -2, -2]
  >>> explained_variance_score(y_true, y_pred)
  1.0
  >>> explained_variance_score(y_true, y_pred, force_finite=False)
  nan
  >>> y_true = [-2, -2, -2]
  >>> y_pred = [-2, -2, -2 + 1e-8]
  >>> explained_variance_score(y_true, y_pred)
  0.0
  >>> explained_variance_score(y_true, y_pred, force_finite=False)
  -inf

.. _mean_tweedie_deviance:

Mean Poisson, Gamma, and Tweedie deviances
------------------------------------------

The :func:`mean_tweedie_deviance` function computes the mean Tweedie
deviance error with a ``power`` parameter (:math:`p`). This is a metric
that elicits predicted expectation values of regression targets.

The following special cases exist:

- when ``power=0`` it is equivalent to :func:`mean_squared_error`.
- when ``power=1`` it is equivalent to :func:`mean_poisson_deviance`.
- when ``power=2`` it
  is equivalent to :func:`mean_gamma_deviance`.

If :math:`\hat{y}_i` is the predicted value of the :math:`i`-th sample, and
:math:`y_i` is the corresponding true value, then the mean Tweedie deviance
error (D) for power :math:`p`, estimated over :math:`n_{\text{samples}}` is
defined as

.. math::

  \text{D}(y, \hat{y}) = \frac{1}{n_\text{samples}}
  \sum_{i=0}^{n_\text{samples} - 1}
  \begin{cases}
  (y_i-\hat{y}_i)^2, & \text{for }p=0\text{ (Normal)}\\
  2(y_i \log(y_i/\hat{y}_i) + \hat{y}_i - y_i),  & \text{for }p=1\text{ (Poisson)}\\
  2(\log(\hat{y}_i/y_i) + y_i/\hat{y}_i - 1),  & \text{for }p=2\text{ (Gamma)}\\
  2\left(\frac{\max(y_i,0)^{2-p}}{(1-p)(2-p)}-
  \frac{y_i\,\hat{y}_i^{1-p}}{1-p}+\frac{\hat{y}_i^{2-p}}{2-p}\right),
  & \text{otherwise}
  \end{cases}

Tweedie deviance is a homogeneous function of degree ``2-power``. Thus, a
Gamma distribution with ``power=2`` means that simultaneously scaling
``y_true`` and ``y_pred`` has no effect on the deviance. For a Poisson
distribution (``power=1``) the deviance scales linearly, and for a Normal
distribution (``power=0``), quadratically. In general, the higher
``power`` the less weight is given to extreme deviations between true and
predicted targets.

For instance, let's compare the two predictions 1.5 and 150 that are both
50% larger than their corresponding true value.
The mean squared error (``power=0``) is very sensitive to the prediction
difference of the second point::

  >>> from sklearn.metrics import mean_tweedie_deviance
  >>> mean_tweedie_deviance([1.0], [1.5], power=0)
  0.25
  >>> mean_tweedie_deviance([100.], [150.], power=0)
  2500.0

If we increase ``power`` to 1::

  >>> mean_tweedie_deviance([1.0], [1.5], power=1)
  0.189
  >>> mean_tweedie_deviance([100.], [150.], power=1)
  18.9

the difference in errors decreases. Finally, by setting ``power=2``::

  >>> mean_tweedie_deviance([1.0], [1.5], power=2)
  0.144
  >>> mean_tweedie_deviance([100.], [150.], power=2)
  0.144

we would get identical errors. The deviance when ``power=2`` is thus only
sensitive to relative errors.

.. _pinball_loss:

Pinball loss
------------

The :func:`mean_pinball_loss` function is used to evaluate the predictive
performance of quantile regression models.

.. math::

  \text{pinball}(y, \hat{y}) = \frac{1}{n_{\text{samples}}}
  \sum_{i=0}^{n_{\text{samples}}-1}
  \alpha \max(y_i - \hat{y}_i, 0) + (1 - \alpha) \max(\hat{y}_i - y_i, 0)

The value of pinball loss is equivalent to half of
:func:`mean_absolute_error` when the quantile parameter ``alpha`` is set to
0.5.
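The pinball formula above, and its relation to the mean absolute error at ``alpha=0.5``, can be sketched in plain Python. The `pinball_loss` helper is a hypothetical illustration, not the scikit-learn implementation.

```python
def pinball_loss(y_true, y_pred, alpha):
    # alpha weighs under-predictions, (1 - alpha) weighs over-predictions.
    return sum(
        alpha * max(yt - yp, 0) + (1 - alpha) * max(yp - yt, 0)
        for yt, yp in zip(y_true, y_pred)
    ) / len(y_true)

# With alpha=0.5 both error directions get weight 0.5, so the pinball
# loss equals half the mean absolute error.
y_true, y_pred = [3, -0.5, 2, 7], [2.5, 0.0, 2, 8]
mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)
assert pinball_loss(y_true, y_pred, alpha=0.5) == mae / 2
```

An asymmetric ``alpha`` (e.g. 0.9) penalizes under-predictions far more than over-predictions, which is why minimizing it estimates the corresponding conditional quantile rather than the mean.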
Here is a small example of usage of the :func:`mean_pinball_loss`
function::

  >>> from sklearn.metrics import mean_pinball_loss
  >>> y_true = [1, 2, 3]
  >>> mean_pinball_loss(y_true, [0, 2, 3], alpha=0.1)
  0.033
  >>> mean_pinball_loss(y_true, [1, 2, 4], alpha=0.1)
  0.3
  >>> mean_pinball_loss(y_true, [0, 2, 3], alpha=0.9)
  0.3
  >>> mean_pinball_loss(y_true, [1, 2, 4], alpha=0.9)
  0.033
  >>> mean_pinball_loss(y_true, y_true, alpha=0.1)
  0.0
  >>> mean_pinball_loss(y_true, y_true, alpha=0.9)
  0.0

It is possible to build a scorer object with a specific choice of
``alpha``::

  >>> from sklearn.metrics import make_scorer
  >>> mean_pinball_loss_95p = make_scorer(mean_pinball_loss, alpha=0.95)

Such a scorer can be used to evaluate the generalization performance of a
quantile regressor via cross-validation:

  >>> from sklearn.datasets import make_regression
  >>> from sklearn.model_selection import cross_val_score
  >>> from sklearn.ensemble import GradientBoostingRegressor
  >>>
  >>> X, y = make_regression(n_samples=100, random_state=0)
  >>> estimator = GradientBoostingRegressor(
  ...     loss="quantile",
  ...     alpha=0.95,
  ...     random_state=0,
  ... )
  >>> cross_val_score(estimator, X, y, cv=5, scoring=mean_pinball_loss_95p)
  array([14.3,  9.8, 23.9,  9.4, 10.8])

It is also possible to build scorer objects for hyper-parameter tuning. The
sign of the loss must be switched to ensure that greater means better, as
explained in the example linked below.

.. rubric:: Examples

* See :ref:`sphx_glr_auto_examples_ensemble_plot_gradient_boosting_quantile.py`
  for an example of using the pinball loss to evaluate and tune the
  hyper-parameters of quantile regression models on data with non-symmetric
  noise and outliers.

.. _d2_score:

D² score
--------

The D² score computes the fraction of deviance explained.
It is a generalization of R², where the squared error is generalized and
replaced by a deviance of choice :math:`\text{dev}(y, \hat{y})` (e.g.,
Tweedie, pinball or mean absolute error). D² is
a form of a *skill score*. It is calculated as

.. math::

  D^2(y, \hat{y}) = 1 - \frac{\text{dev}(y, \hat{y})}{\text{dev}(y, y_{\text{null}})} \,.

Where :math:`y_{\text{null}}` is the optimal prediction of an
intercept-only model (e.g., the mean of `y_true` for the Tweedie case, the
median for absolute error and the alpha-quantile for pinball loss).

Like R², the best possible score is 1.0 and it can be negative (because the
model can be arbitrarily worse). A constant model that always predicts
:math:`y_{\text{null}}`, disregarding the input features, would get a D²
score of 0.0.

.. dropdown:: D² Tweedie score

  The :func:`d2_tweedie_score` function implements the special case of D²
  where :math:`\text{dev}(y, \hat{y})` is the Tweedie deviance, see
  :ref:`mean_tweedie_deviance`. It is also known as D² Tweedie and is
  related to McFadden's likelihood ratio index.

  The argument ``power`` defines the Tweedie power as for
  :func:`mean_tweedie_deviance`. Note that for `power=0`,
  :func:`d2_tweedie_score` equals :func:`r2_score` (for single targets).

  A scorer object with a specific choice of ``power`` can be built by::

    >>> from sklearn.metrics import d2_tweedie_score, make_scorer
    >>> d2_tweedie_score_15 = make_scorer(d2_tweedie_score, power=1.5)

.. dropdown:: D² pinball score

  The :func:`d2_pinball_score` function implements the special case of D²
  with the pinball loss, see :ref:`pinball_loss`, i.e.:

  .. math::

    \text{dev}(y, \hat{y}) = \text{pinball}(y, \hat{y}).

  The argument ``alpha`` defines the slope of the pinball loss as for
  :func:`mean_pinball_loss` (:ref:`pinball_loss`).
  It determines the quantile level ``alpha`` for which the pinball loss and
  also D² are optimal. Note that for `alpha=0.5` (the default)
  :func:`d2_pinball_score` equals :func:`d2_absolute_error_score`.

  A scorer object with a specific choice of ``alpha`` can be built by::

    >>> from sklearn.metrics import d2_pinball_score, make_scorer
    >>> d2_pinball_score_08 = make_scorer(d2_pinball_score, alpha=0.8)

.. dropdown:: D² absolute error score

  The :func:`d2_absolute_error_score` function implements the special case
  of D² with the :ref:`mean_absolute_error`:

  .. math::

    \text{dev}(y, \hat{y}) = \text{MAE}(y, \hat{y}).

  Here are some usage examples of the :func:`d2_absolute_error_score`
  function::

    >>> from sklearn.metrics import d2_absolute_error_score
    >>> y_true = [3, -0.5, 2, 7]
    >>> y_pred = [2.5, 0.0, 2, 8]
    >>> d2_absolute_error_score(y_true, y_pred)
    0.764
    >>> y_true = [1, 2, 3]
    >>> y_pred = [1, 2, 3]
    >>> d2_absolute_error_score(y_true, y_pred)
    1.0
    >>> y_true = [1, 2, 3]
    >>> y_pred = [2, 2, 2]
    >>> d2_absolute_error_score(y_true, y_pred)
    0.0

.. _visualization_regression_evaluation:

Visual evaluation of regression models
--------------------------------------

Among methods to assess the quality of regression models, scikit-learn
provides the :class:`~sklearn.metrics.PredictionErrorDisplay` class. It
allows one to visually inspect the prediction errors of a model in two
different manners.

.. image:: ../auto_examples/model_selection/images/sphx_glr_plot_cv_predict_001.png
   :target: ../auto_examples/model_selection/plot_cv_predict.html
   :scale: 75
   :align: center

The plot on the left shows the actual values vs predicted values. For a
noise-free regression task aiming to predict the (conditional) expectation
of `y`, a perfect regression model would display data points on the
diagonal defined by predicted equal to actual values. The further away from
this optimal line, the larger the error of the model.
In a more realistic setting with irreducible noise, that is, when not all
the variations of `y` can be explained by features in `X`, the best model
would lead to a cloud of points densely arranged around the diagonal.

Note that the above only holds when the predicted value is the expected
value of `y` given `X`. This is typically the case for regression models
that minimize the mean squared error objective function or more generally
the :ref:`mean Tweedie deviance <mean_tweedie_deviance>` for any value of
its "power" parameter.

When plotting the predictions of an estimator that predicts
a quantile of `y` given `X`, e.g.
:class:`~sklearn.linear_model.QuantileRegressor` or any other model
minimizing the :ref:`pinball loss <pinball_loss>`, a fraction of the points
are expected to lie either above or below the diagonal depending on the
estimated quantile level.

All in all, while intuitive to read, this plot does not really inform us on
what to do to obtain a better model.

The right-hand side plot shows the residuals (i.e. the difference between
the actual and the predicted values) vs. the predicted values. This plot
makes it easier to visualize whether the residuals follow a homoscedastic
or heteroscedastic distribution. In particular, if the true distribution of
`y|X` is Poisson or Gamma distributed, it is expected that the variance of
the residuals of the optimal model would grow with the predicted value of
`E[y|X]` (either linearly for Poisson or quadratically for Gamma).

When fitting a linear least squares regression model (see
:class:`~sklearn.linear_model.LinearRegression` and
:class:`~sklearn.linear_model.Ridge`), we can use this plot to check if
some of the model assumptions are met, in particular that the residuals
should be uncorrelated, their expected value should be null and that their
variance should be constant (homoscedasticity). If this is not the case,
and in particular if the residuals plot shows some banana-shaped structure,
this is a hint that the model is likely mis-specified and that non-linear
feature engineering or switching to a non-linear regression model might be
useful.

Refer to the example below to see a model evaluation that makes use of this
display.
.. rubric:: Examples

* See :ref:`sphx_glr_auto_examples_compose_plot_transformed_target.py` for
  an example on how to use :class:`~sklearn.metrics.PredictionErrorDisplay`
  to visualize the prediction quality improvement of a regression model
  obtained by transforming the target before learning.

.. _clustering_metrics:

Clustering metrics
==================

.. currentmodule:: sklearn.metrics

The :mod:`sklearn.metrics` module implements several loss, score, and
utility functions to measure clustering performance. For more information
see the :ref:`clustering_evaluation` section for instance clustering, and
:ref:`biclustering_evaluation` for biclustering.

.. _dummy_estimators:

Dummy estimators
================

.. currentmodule:: sklearn.dummy

When doing supervised learning, a simple sanity check consists of comparing
one's estimator against simple rules of thumb. :class:`DummyClassifier`
implements several such simple strategies for classification:

- ``stratified`` generates random predictions by respecting the training
  set class distribution.
- ``most_frequent`` always predicts the most frequent label in the training
  set.
- ``prior`` always predicts the class that maximizes the class prior (like
  ``most_frequent``) and ``predict_proba`` returns the class prior.
- ``uniform`` generates predictions uniformly at random.
- ``constant`` always predicts a constant label that is provided by the
  user. A major motivation of this method is F1-scoring, when the positive
  class is in the minority.

Note that with all these strategies, the ``predict`` method completely
ignores the input data!
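The ``most_frequent`` strategy can be sketched in a few lines of plain Python to make the "ignores the input data" point concrete. `MostFrequentClassifier` is a hypothetical illustration, not the scikit-learn implementation.

```python
from collections import Counter

class MostFrequentClassifier:
    """Sketch of the 'most_frequent' dummy strategy."""

    def fit(self, X, y):
        # Remember only the majority label seen during training.
        self.label_ = Counter(y).most_common(1)[0][0]
        return self

    def predict(self, X):
        # The input data is completely ignored, as for DummyClassifier.
        return [self.label_] * len(X)

clf = MostFrequentClassifier().fit([[0]] * 5, [1, -1, -1, 1, -1])
print(clf.predict([[10], [20]]))  # [-1, -1]
```

Any real classifier should beat this baseline; if it does not, the features or the model configuration deserve a closer look.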
To illustrate :class:`DummyClassifier`, first let's create an imbalanced
dataset::

  >>> from sklearn.datasets import load_iris
  >>> from sklearn.model_selection import train_test_split
  >>> X, y = load_iris(return_X_y=True)
  >>> y[y != 1] = -1
  >>> X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

Next, let's compare the accuracy of ``SVC`` and ``most_frequent``::

  >>> from sklearn.dummy import DummyClassifier
  >>> from sklearn.svm import SVC
  >>> clf = SVC(kernel='linear', C=1).fit(X_train, y_train)
  >>> clf.score(X_test, y_test)
  0.63
  >>> clf = DummyClassifier(strategy='most_frequent', random_state=0)
  >>> clf.fit(X_train, y_train)
  DummyClassifier(random_state=0, strategy='most_frequent')
  >>> clf.score(X_test, y_test)
  0.579

We see that ``SVC`` doesn't do much better than a dummy classifier. Now,
let's change the kernel::

  >>> clf = SVC(kernel='rbf', C=1).fit(X_train, y_train)
  >>> clf.score(X_test, y_test)
  0.94

We see that the accuracy was boosted to almost 100%. A cross validation
strategy is recommended for a better estimate of the accuracy, if it is not
too CPU costly. For more information see the :ref:`cross_validation`
section. Moreover, if you want to optimize over the parameter space, it is
highly recommended to use an appropriate methodology; see the
:ref:`grid_search` section for details.

More generally, when the accuracy of a classifier is too close to random,
it probably means that something went wrong: features are not helpful, a
hyperparameter is not correctly tuned, the classifier is suffering from
class imbalance, etc.

:class:`DummyRegressor` also implements four simple rules of thumb for
regression:

- ``mean`` always predicts the mean of the training targets.
- ``median`` always predicts the median of the training targets.
- ``quantile`` always predicts a user provided quantile of the training
  targets.
- ``constant`` always predicts a constant value that is provided by the
  user.

In all these strategies, the ``predict`` method completely ignores the
input data.
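The ``mean`` and ``median`` regression strategies can be sketched in the same spirit. `TinyDummyRegressor` is a hypothetical illustration, not the scikit-learn implementation (which also supports ``quantile`` and ``constant``).

```python
import statistics

class TinyDummyRegressor:
    """Sketch of the 'mean' and 'median' dummy strategies."""

    def __init__(self, strategy="mean"):
        self.strategy = strategy

    def fit(self, X, y):
        # Fit reduces the training targets to a single constant.
        fn = {"mean": statistics.fmean, "median": statistics.median}
        self.constant_ = fn[self.strategy](y)
        return self

    def predict(self, X):
        # The input data is completely ignored, as for DummyRegressor.
        return [self.constant_] * len(X)

reg = TinyDummyRegressor(strategy="median").fit([[0], [0], [0]], [1.0, 2.0, 10.0])
print(reg.predict([[5], [7]]))  # [2.0, 2.0]
```

As with the classification baseline, a regressor whose cross-validated score does not clearly beat such a constant predictor is not learning anything useful from `X`.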
.. _metrics:

Pairwise metrics, Affinities and Kernels
========================================

The :mod:`sklearn.metrics.pairwise` submodule implements utilities to
evaluate pairwise distances or affinity of sets of samples.

This module contains both distance metrics and kernels. A brief summary is
given on the two here.

Distance metrics are functions ``d(a, b)`` such that ``d(a, b) < d(a, c)``
if objects ``a`` and ``b`` are considered "more similar" than objects ``a``
and ``c``. Two objects exactly alike would have a distance of zero. One of
the most popular examples is Euclidean distance. To be a 'true' metric, it
must obey the following four conditions::

    1. d(a, b) >= 0, for all a and b
    2. d(a, b) == 0, if and only if a = b, positive definiteness
    3. d(a, b) == d(b, a), symmetry
    4. d(a, c) <= d(a, b) + d(b, c), the triangle inequality

Kernels are measures of similarity, i.e. ``s(a, b) > s(a, c)`` if objects
``a`` and ``b`` are considered "more similar" than objects ``a`` and ``c``.
A kernel must also be positive semi-definite.

There are a number of ways to convert between a distance metric and a
similarity measure, such as a kernel. Let ``D`` be the distance, and ``S``
be the kernel:

1. ``S = np.exp(-D * gamma)``, where one heuristic for choosing ``gamma``
   is ``1 / num_features``
2. ``S = 1. / (D / np.max(D))``

.. currentmodule:: sklearn.metrics

The distances between the row vectors of ``X`` and the row vectors of ``Y``
can be evaluated using :func:`pairwise_distances`. If ``Y`` is omitted the
pairwise distances of the row vectors of ``X`` are calculated. Similarly,
:func:`pairwise.pairwise_kernels` can be used to calculate the kernel
between `X` and `Y` using different kernel functions. See the API reference
for more details.
  >>> import numpy as np
  >>> from sklearn.metrics import pairwise_distances
  >>> from sklearn.metrics.pairwise import pairwise_kernels
  >>> X = np.array([[2, 3], [3, 5], [5, 8]])
  >>> Y = np.array([[1, 0], [2, 1]])
  >>> pairwise_distances(X, Y, metric='manhattan')
  array([[ 4.,  2.],
         [ 7.,  5.],
         [12., 10.]])
  >>> pairwise_distances(X, metric='manhattan')
  array([[0., 3., 8.],
         [3., 0., 5.],
         [8., 5., 0.]])
  >>> pairwise_kernels(X, Y, metric='linear')
  array([[ 2.,  7.],
         [ 3., 11.],
         [ 5., 18.]])

.. currentmodule:: sklearn.metrics.pairwise

.. _cosine_similarity:

Cosine similarity
-----------------

:func:`cosine_similarity` computes the L2-normalized dot product of
vectors. That is, if :math:`x` and :math:`y` are row vectors, their cosine
similarity :math:`k` is defined as:

.. math::

    k(x, y) = \frac{x y^\top}{\|x\| \|y\|}

This is called cosine similarity, because Euclidean (L2) normalization
projects the vectors onto the unit sphere, and their dot product is then
the cosine of the angle between the points denoted by the vectors.

This kernel is a popular choice for computing the similarity of documents
represented as tf-idf vectors. :func:`cosine_similarity` accepts
``scipy.sparse`` matrices. (Note that the tf-idf functionality in
``sklearn.feature_extraction.text`` can produce normalized vectors, in
which case :func:`cosine_similarity` is equivalent to
:func:`linear_kernel`, only slower.)

.. rubric:: References

* C.D. Manning, P. Raghavan and H. Schütze (2008). Introduction to
  Information Retrieval. Cambridge University Press.
  https://nlp.stanford.edu/IR-book/html/htmledition/the-vector-space-model-for-scoring-1.html

.. _linear_kernel:

Linear kernel
-------------

The function :func:`linear_kernel` computes the linear kernel, that is, a
special case of :func:`polynomial_kernel` with ``degree=1`` and ``coef0=0``
(homogeneous). If ``x`` and ``y`` are column vectors, their linear kernel
is:

.. math::

    k(x, y) = x^\top y
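The two formulas above are closely related: cosine similarity is the linear kernel applied after L2 normalization. A pure-Python sketch (illustrative helpers, not the scikit-learn implementations, which operate on 2D arrays):

```python
import math

def linear_kernel(x, y):
    # k(x, y) = x^T y
    return sum(a * b for a, b in zip(x, y))

def cosine_similarity(x, y):
    # Linear kernel divided by the product of the L2 norms.
    norm_x = math.sqrt(linear_kernel(x, x))
    norm_y = math.sqrt(linear_kernel(y, y))
    return linear_kernel(x, y) / (norm_x * norm_y)

# Orthogonal vectors score 0; vectors pointing the same way score 1.
assert cosine_similarity([1, 0], [0, 1]) == 0.0
assert abs(cosine_similarity([1, 2], [2, 4]) - 1.0) < 1e-12
```

This is why, for pre-normalized tf-idf vectors, `cosine_similarity` and `linear_kernel` give the same result.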
.. _polynomial_kernel:

Polynomial kernel
-----------------

The function :func:`polynomial_kernel` computes the degree-d polynomial
kernel between two vectors. The polynomial kernel represents the similarity
between two vectors. Conceptually, the polynomial kernel considers not only
the similarity between vectors under the same dimension, but also across
dimensions. When used in machine learning algorithms, this allows one to
account for feature interaction.

The polynomial kernel is defined as:

.. math::

    k(x, y)
    = (\gamma x^\top y + c_0)^d

where:

* ``x``, ``y`` are the input vectors
* ``d`` is the kernel degree

If :math:`c_0 = 0` the kernel is said to be homogeneous.

.. _sigmoid_kernel:

Sigmoid kernel
--------------

The function :func:`sigmoid_kernel` computes the sigmoid kernel between two
vectors. The sigmoid kernel is also known as hyperbolic tangent, or
Multilayer Perceptron (because, in the neural network field, it is often
used as a neuron activation function). It is defined as:

.. math::

    k(x, y) = \tanh( \gamma x^\top y + c_0)

where:

* ``x``, ``y`` are the input vectors
* :math:`\gamma` is known as the slope
* :math:`c_0` is known as the intercept

.. _rbf_kernel:

RBF kernel
----------

The function :func:`rbf_kernel` computes the radial basis function (RBF)
kernel between two vectors. This kernel is defined as:

.. math::

    k(x, y) = \exp( -\gamma \| x-y \|^2)

where ``x`` and ``y`` are the input vectors. If
:math:`\gamma = \sigma^{-2}` the kernel is known as the Gaussian kernel of
variance :math:`\sigma^2`.

.. _laplacian_kernel:

Laplacian kernel
----------------

The function :func:`laplacian_kernel` is a variant on the radial basis
function kernel defined as:

.. math::

    k(x, y) = \exp( -\gamma \| x-y \|_1)

where ``x`` and ``y`` are the input vectors and :math:`\|x-y\|_1` is the
Manhattan distance between the input vectors.

It has proven useful in ML applied to noiseless data. See e.g. Machine
learning for quantum mechanics in a nutshell.

.. _chi2_kernel:

Chi-squared kernel
------------------

The chi-squared kernel is a very popular choice for training non-linear
SVMs in computer vision applications.
It can be computed using :func:`chi2_kernel` and then passed to an
:class:`~sklearn.svm.SVC` with ``kernel="precomputed"``::

  >>> from sklearn.svm import SVC
  >>> from sklearn.metrics.pairwise import chi2_kernel
  >>> X = [[0, 1], [1, 0], [.2, .8], [.7, .3]]
  >>> y = [0, 1, 0, 1]
  >>> K = chi2_kernel(X, gamma=.5)
  >>> K
  array([[1.        , 0.36787944, 0.89483932, 0.58364548],
         [0.36787944, 1.        , 0.51341712, 0.83822343],
         [0.89483932, 0.51341712, 1.        , 0.7768366 ],
         [0.58364548, 0.83822343, 0.7768366 , 1.        ]])
  >>> svm = SVC(kernel='precomputed').fit(K, y)
  >>> svm.predict(K)
  array([0, 1, 0, 1])

It can also be directly used as the ``kernel`` argument::

  >>> svm = SVC(kernel=chi2_kernel).fit(X, y)
  >>> svm.predict(X)
  array([0, 1, 0, 1])

The chi squared kernel is given by

.. math::

    k(x, y) = \exp \left (-\gamma \sum_i \frac{(x[i] - y[i]) ^ 2}{x[i] + y[i]} \right )

The data is assumed to be non-negative, and is often normalized to have an
L1-norm of one. The normalization is rationalized with the connection to
the chi squared distance, which is a distance between discrete probability
distributions.

The chi squared kernel is most commonly used on histograms (bags) of visual
words.

.. rubric:: References

* Zhang, J. and Marszalek, M. and Lazebnik, S. and Schmid, C. Local
  features and kernels for classification of texture and object
  categories: A comprehensive study. International Journal of Computer
  Vision, 2007. https://hal.archives-ouvertes.fr/hal-00171412/document

| https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/metrics.rst | main | scikit-learn |
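The chi-squared kernel formula can be sketched in plain Python for a single pair of vectors. The `chi2_kernel_pair` helper is a hypothetical illustration (scikit-learn's `chi2_kernel` operates on whole arrays), but it reproduces individual entries of the ``K`` matrix shown above.

```python
import math

def chi2_kernel_pair(x, y, gamma=1.0):
    # k(x, y) = exp(-gamma * sum_i (x_i - y_i)^2 / (x_i + y_i));
    # terms with x_i + y_i == 0 contribute nothing (0/0 treated as 0).
    s = sum((a - b) ** 2 / (a + b) for a, b in zip(x, y) if a + b != 0)
    return math.exp(-gamma * s)

# Reproduces K[0, 1] from the chi2_kernel example above (gamma=0.5):
# the summed distance between [0, 1] and [1, 0] is 2, giving exp(-1).
assert abs(chi2_kernel_pair([0, 1], [1, 0], gamma=0.5) - 0.36787944) < 1e-8
```

Note that the formula only makes sense for non-negative inputs, which is why the data is typically an L1-normalized histogram.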
.. _covariance:

===================================================
Covariance estimation
===================================================

.. currentmodule:: sklearn.covariance

Many statistical problems require the estimation of a population's
covariance matrix, which can be seen as an estimation of the data set's
scatter plot shape. Most of the time, such an estimation has to be done on
a sample whose properties (size, structure, homogeneity) have a large
influence on the estimation's quality. The :mod:`sklearn.covariance`
package provides tools for accurately estimating a population's covariance
matrix under various settings.

We assume that the observations are independent and identically
distributed (i.i.d.).

Empirical covariance
====================

The covariance matrix of a data set is known to be well approximated by the
classical *maximum likelihood estimator* (or "empirical covariance"),
provided the number of observations is large enough compared to the number
of features (the variables describing the observations). More precisely,
the Maximum Likelihood Estimator of a sample is an asymptotically unbiased
estimator of the corresponding population's covariance matrix.

The empirical covariance matrix of a sample can be computed using the
:func:`empirical_covariance` function of the package, or by fitting an
:class:`EmpiricalCovariance` object to the data sample with the
:meth:`EmpiricalCovariance.fit` method. Be careful that results depend on
whether the data are centered, so one may want to use the
`assume_centered` parameter accurately. More precisely, if
`assume_centered=True`, then all features in the train and test sets should
have a mean of zero. If not, both should be centered by the user, or
`assume_centered=False` should be used.

.. rubric:: Examples

* See :ref:`sphx_glr_auto_examples_covariance_plot_covariance_estimation.py`
  for an example on how to fit an :class:`EmpiricalCovariance` object to
  data.
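The maximum likelihood (empirical) covariance described above can be sketched in plain Python for a tiny two-feature sample. This is an illustrative helper, not the scikit-learn implementation; note that, as a maximum likelihood estimator, it divides by the number of samples `n`, not `n - 1`.

```python
def empirical_covariance(X):
    # X is a list of rows (samples); returns the p x p ML covariance.
    n, p = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(p)]
    return [
        [
            sum((row[i] - means[i]) * (row[j] - means[j]) for row in X) / n
            for j in range(p)
        ]
        for i in range(p)
    ]

# Two perfectly correlated features: the off-diagonal entries are the
# geometric mean of the variances, and the matrix is symmetric.
C = empirical_covariance([[1.0, 2.0], [3.0, 6.0], [5.0, 10.0]])
assert C[0][1] == C[1][0]
```

Centering by the sample means corresponds to `assume_centered=False`; with `assume_centered=True` the means would be taken as zero.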
.. _shrunk_covariance:

Shrunk Covariance
=================

Basic shrinkage
---------------

Despite being an asymptotically unbiased estimator of the covariance matrix,
the Maximum Likelihood Estimator is not a good estimator of the
eigenvalues of the covariance matrix, so the precision matrix obtained
from its inversion is not accurate. Sometimes, it even occurs that the
empirical covariance matrix cannot be inverted for numerical
reasons. To avoid such an inversion problem, a transformation of the
empirical covariance matrix has been introduced: the ``shrinkage``.

In scikit-learn, this transformation (with a user-defined shrinkage
coefficient) can be directly applied to a pre-computed covariance with
the :func:`shrunk_covariance` method. Also, a shrunk estimator of the
covariance can be fitted to data with a :class:`ShrunkCovariance` object
and its :meth:`ShrunkCovariance.fit` method. Again, results depend on
whether the data are centered, so one may want to use the
``assume_centered`` parameter accurately.

Mathematically, this shrinkage consists in reducing the ratio between the
smallest and the largest eigenvalues of the empirical covariance matrix.
It can be done by simply shifting every eigenvalue according to a given
offset, which is equivalent to finding the l2-penalized Maximum
Likelihood Estimator of the covariance matrix. In practice, shrinkage
boils down to a simple convex transformation:

.. math::

   \Sigma_{\rm shrunk} = (1-\alpha)\hat{\Sigma} + \alpha\frac{{\rm Tr}\hat{\Sigma}}{p}{\rm Id}.

Choosing the amount of shrinkage, :math:`\alpha`, amounts to setting a
bias/variance trade-off, and is discussed below.

.. rubric:: Examples

* See :ref:`sphx_glr_auto_examples_covariance_plot_covariance_estimation.py` for
  an example on how to fit a :class:`ShrunkCovariance` object to data.

Ledoit-Wolf shrinkage
---------------------

In their 2004 paper [1]_, O. Ledoit and M.
Wolf propose a formula to compute
the optimal shrinkage coefficient :math:`\alpha` that minimizes the Mean
Squared Error between the estimated and the real covariance matrix.

The Ledoit-Wolf estimator of the covariance matrix can be computed on
a sample with the :func:`ledoit_wolf` function of the
:mod:`sklearn.covariance` package, or it can be otherwise obtained by
fitting a :class:`LedoitWolf` object to the same sample.
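The fixed-coefficient transformation above and the data-driven Ledoit-Wolf choice can be compared directly. A minimal sketch on synthetic data (the sample and the fixed coefficient ``alpha = 0.1`` are arbitrary, illustrative choices, not values from this guide):

```python
import numpy as np
from sklearn.covariance import (
    LedoitWolf,
    ShrunkCovariance,
    empirical_covariance,
    shrunk_covariance,
)

# synthetic i.i.d. Gaussian sample (illustrative only)
rng = np.random.RandomState(0)
X = rng.randn(200, 4)

# fixed, user-chosen shrinkage applied to a pre-computed covariance ...
emp_cov = empirical_covariance(X)
alpha = 0.1
shrunk = shrunk_covariance(emp_cov, shrinkage=alpha)

# ... matches the convex combination (1 - alpha)*Sigma + alpha*tr(Sigma)/p*Id
p = emp_cov.shape[0]
manual = (1 - alpha) * emp_cov + alpha * np.trace(emp_cov) / p * np.eye(p)
assert np.allclose(shrunk, manual)

# the same result through the estimator API
assert np.allclose(ShrunkCovariance(shrinkage=alpha).fit(X).covariance_, shrunk)

# Ledoit-Wolf instead picks the coefficient from the data
lw = LedoitWolf().fit(X)
assert 0.0 <= lw.shrinkage_ <= 1.0
```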
.. note:: **Case when population covariance matrix is isotropic**

   It is important to note that when the number of samples is much larger than
   the number of features, one would expect that no shrinkage would be
   necessary. The intuition behind this is that if the population covariance
   is full rank, when the number of samples grows, the sample covariance will
   also become positive definite. As a result, no shrinkage would be necessary
   and the method should automatically do this.

   This, however, is not the case in the Ledoit-Wolf procedure when the
   population covariance happens to be a multiple of the identity matrix. In
   this case, the Ledoit-Wolf shrinkage estimate approaches 1 as the number of
   samples increases. This indicates that the optimal estimate of the
   covariance matrix in the Ledoit-Wolf sense is a multiple of the identity.
   Since the population covariance is already a multiple of the identity
   matrix, the Ledoit-Wolf solution is indeed a reasonable estimate.

.. rubric:: Examples

* See :ref:`sphx_glr_auto_examples_covariance_plot_covariance_estimation.py` for
  an example on how to fit a :class:`LedoitWolf` object to data and
  for visualizing the performance of the Ledoit-Wolf estimator in
  terms of likelihood.

.. rubric:: References

.. [1] O. Ledoit and M. Wolf, "A Well-Conditioned Estimator for
   Large-Dimensional Covariance Matrices", Journal of Multivariate
   Analysis, Volume 88, Issue 2, February 2004, pages 365-411.

.. _oracle_approximating_shrinkage:

Oracle Approximating Shrinkage
------------------------------

Under the assumption that the data are Gaussian distributed, Chen et
al. [2]_ derived a formula aimed at choosing a shrinkage coefficient that
yields a smaller Mean Squared Error than the one given by Ledoit and
Wolf's formula. The resulting estimator is known as the Oracle
Approximating Shrinkage estimator of the covariance.
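As a minimal, illustrative sketch (synthetic Gaussian data with arbitrary shapes, not from this guide), both the function and estimator routes for OAS agree, and both OAS and Ledoit-Wolf pick a data-driven coefficient:

```python
import numpy as np
from sklearn.covariance import OAS, LedoitWolf, oas

# small-sample Gaussian data, where shrinkage matters most (illustrative)
rng = np.random.RandomState(0)
X = rng.multivariate_normal(np.zeros(8), np.eye(8), size=20)

# function route: shrunk covariance plus the chosen coefficient
shrunk_cov, shrinkage = oas(X)

# estimator route: the same quantities as fitted attributes
oas_est = OAS().fit(X)
assert np.allclose(oas_est.covariance_, shrunk_cov)
assert np.isclose(oas_est.shrinkage_, shrinkage)

# both OAS and Ledoit-Wolf choose a coefficient in [0, 1] from the data
lw = LedoitWolf().fit(X)
assert 0.0 <= oas_est.shrinkage_ <= 1.0
assert 0.0 <= lw.shrinkage_ <= 1.0
```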
The OAS estimator of the covariance matrix can be computed on a sample
with the :func:`oas` function of the :mod:`sklearn.covariance`
package, or it can be otherwise obtained by fitting an :class:`OAS`
object to the same sample.

.. figure:: ../auto_examples/covariance/images/sphx_glr_plot_covariance_estimation_001.png
   :target: ../auto_examples/covariance/plot_covariance_estimation.html
   :align: center
   :scale: 65%

   Bias-variance trade-off when setting the shrinkage: comparing the
   choices of Ledoit-Wolf and OAS estimators

.. rubric:: References

.. [2] :arxiv:`"Shrinkage algorithms for MMSE covariance estimation.",
   Chen, Y., Wiesel, A., Eldar, Y. C., & Hero, A. O.
   IEEE Transactions on Signal Processing, 58(10), 5016-5029, 2010.
   <0907.4698>`

.. rubric:: Examples

* See :ref:`sphx_glr_auto_examples_covariance_plot_covariance_estimation.py`
  for an example on how to fit an :class:`OAS` object to data.

* See :ref:`sphx_glr_auto_examples_covariance_plot_lw_vs_oas.py` to visualize the
  Mean Squared Error difference between a :class:`LedoitWolf` and
  an :class:`OAS` estimator of the covariance.

.. figure:: ../auto_examples/covariance/images/sphx_glr_plot_lw_vs_oas_001.png
   :target: ../auto_examples/covariance/plot_lw_vs_oas.html
   :align: center
   :scale: 75%

.. _sparse_inverse_covariance:

Sparse inverse covariance
=========================

The matrix inverse of the covariance matrix, often called the precision
matrix, is proportional to the partial correlation matrix. It gives the
partial independence relationship. In other words, if two features are
independent conditionally on the others, the corresponding coefficient in
the precision matrix will be zero. This is why it makes sense to estimate a
sparse precision matrix: the estimation of the covariance matrix is better
conditioned by learning independence relations from the data. This is
known as *covariance selection*.
In the small-samples situation, in which ``n_samples`` is on the order
of ``n_features`` or smaller, sparse inverse covariance estimators tend
to work better than shrunk covariance estimators. However, in the
opposite situation, or for very correlated data, they can be
numerically unstable.
In addition, unlike shrinkage estimators, sparse
estimators are able to recover off-diagonal structure.

The :class:`GraphicalLasso` estimator uses an l1 penalty to enforce sparsity on
the precision matrix: the higher its ``alpha`` parameter, the more sparse
the precision matrix. The corresponding :class:`GraphicalLassoCV` object uses
cross-validation to automatically set the ``alpha`` parameter.

.. figure:: ../auto_examples/covariance/images/sphx_glr_plot_sparse_cov_001.png
   :target: ../auto_examples/covariance/plot_sparse_cov.html
   :align: center
   :scale: 60%

   *A comparison of maximum likelihood, shrinkage and sparse estimates of
   the covariance and precision matrix in the very small samples
   settings.*

.. note:: **Structure recovery**

   Recovering a graphical structure from correlations in the data is a
   challenging thing. If you are interested in such recovery keep in mind
   that:

   * Recovery is easier from a correlation matrix than a covariance
     matrix: standardize your observations before running :class:`GraphicalLasso`

   * If the underlying graph has nodes with much more connections than
     the average node, the algorithm will miss some of these connections.

   * If your number of observations is not large compared to the number
     of edges in your underlying graph, you will not recover it.

   * Even if you are in favorable recovery conditions, the alpha
     parameter chosen by cross-validation (e.g. using the
     :class:`GraphicalLassoCV` object) will lead to selecting too many edges.
     However, the relevant edges will have heavier weights than the
     irrelevant ones.

The mathematical formulation is the following:

.. math::

   \hat{K} = \mathrm{argmin}_K \big(
       \mathrm{tr} S K - \mathrm{log} \mathrm{det} K
       + \alpha \|K\|_1
   \big)

where :math:`K` is the precision matrix to be estimated, and :math:`S` is the
sample covariance matrix. :math:`\|K\|_1` is the sum of the absolute values of
off-diagonal coefficients of :math:`K`.
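A minimal sketch of both estimators (the covariance structure, sample size, and the fixed ``alpha=0.2`` penalty are illustrative choices, not from this guide):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso, GraphicalLassoCV

# synthetic Gaussian sample with a mildly correlated covariance (illustrative)
rng = np.random.RandomState(0)
true_cov = 0.3 * np.ones((4, 4)) + 0.7 * np.eye(4)
X = rng.multivariate_normal(np.zeros(4), true_cov, size=200)

# fixed l1 penalty: a higher alpha yields a sparser precision matrix
model = GraphicalLasso(alpha=0.2).fit(X)

# cross-validated choice of the alpha parameter
model_cv = GraphicalLassoCV().fit(X)

assert model.precision_.shape == (4, 4)
assert model_cv.alpha_ > 0
```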
The algorithm employed to solve this problem is the GLasso algorithm,
from the Friedman 2008 Biostatistics paper. It is the same algorithm
as in the R ``glasso`` package.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_covariance_plot_sparse_cov.py`: example on synthetic
  data showing some recovery of a structure, and comparing to other
  covariance estimators.

* :ref:`sphx_glr_auto_examples_applications_plot_stock_market.py`: example on real
  stock market data, finding which symbols are most linked.

.. rubric:: References

* Friedman et al, "Sparse inverse covariance estimation with the
  graphical lasso", Biostatistics 9, pp 432, 2008

.. _robust_covariance:

Robust Covariance Estimation
============================

Real data sets are often subject to measurement or recording
errors. Regular but uncommon observations may also appear for a variety
of reasons. Observations which are very uncommon are called
outliers. The empirical covariance estimator and the shrunk covariance
estimators presented above are very sensitive to the presence of
outliers in the data. Therefore, one should use robust covariance
estimators to estimate the covariance of real data sets.
Alternatively, robust covariance estimators can be used to
perform outlier detection and discard/downweight some observations
according to further processing of the data.

The ``sklearn.covariance`` package implements a robust estimator of covariance,
the Minimum Covariance Determinant [3]_.

Minimum Covariance Determinant
------------------------------

The Minimum Covariance Determinant estimator is a robust estimator of
a data set's covariance introduced by P.J. Rousseeuw in [3]_. The idea
is to find a given proportion (h) of "good" observations which are not
outliers and compute their empirical covariance matrix. This
empirical covariance matrix is then rescaled to compensate the
performed selection of observations ("consistency step").
Having computed the Minimum Covariance Determinant estimator, one can
give weights to observations according to their Mahalanobis distance,
leading to a reweighted estimate of the covariance matrix of the data
set ("reweighting step").

Rousseeuw and Van Driessen [4]_ developed the FastMCD algorithm in order
to compute the Minimum Covariance Determinant. This algorithm is used
in scikit-learn when fitting an MCD object to data.
The FastMCD algorithm also computes a robust estimate of the data set
location at the same time. Raw estimates can be accessed as
``raw_location_`` and ``raw_covariance_`` attributes of a
:class:`MinCovDet` robust covariance estimator object.

.. rubric:: References

.. [3] P. J. Rousseeuw. Least median of squares regression.
   J. Am Stat Ass, 79:871, 1984.

.. [4] A Fast Algorithm for the Minimum Covariance Determinant Estimator,
   1999, American Statistical Association and the American Society
   for Quality, TECHNOMETRICS.

.. rubric:: Examples

* See :ref:`sphx_glr_auto_examples_covariance_plot_robust_vs_empirical_covariance.py` for
  an example on how to fit a :class:`MinCovDet` object to data and see how
  the estimate remains accurate despite the presence of outliers.

* See :ref:`sphx_glr_auto_examples_covariance_plot_mahalanobis_distances.py` to
  visualize the difference between :class:`EmpiricalCovariance` and
  :class:`MinCovDet` covariance estimators in terms of Mahalanobis distance
  (so we get a better estimate of the precision matrix too).

.. |robust_vs_emp| image:: ../auto_examples/covariance/images/sphx_glr_plot_robust_vs_empirical_covariance_001.png
   :target: ../auto_examples/covariance/plot_robust_vs_empirical_covariance.html
   :scale: 49%

.. |mahalanobis| image:: ../auto_examples/covariance/images/sphx_glr_plot_mahalanobis_distances_001.png
   :target: ../auto_examples/covariance/plot_mahalanobis_distances.html
   :scale: 49%
.. list-table::
   :header-rows: 1

   * - Influence of outliers on location and covariance estimates
     - Separating inliers from outliers using a Mahalanobis distance
   * - |robust_vs_emp|
     - |mahalanobis|
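A minimal sketch of the robustness described above (the inlier distribution, contamination scheme, and sample sizes are illustrative assumptions, not from this guide):

```python
import numpy as np
from sklearn.covariance import EmpiricalCovariance, MinCovDet

# inlier cloud plus a few gross outliers (illustrative data)
rng = np.random.RandomState(0)
X = rng.multivariate_normal([0, 0], [[1.0, 0.4], [0.4, 1.0]], size=200)
X[:10] += 10.0  # contaminate 5% of the observations

emp = EmpiricalCovariance().fit(X)
mcd = MinCovDet(random_state=0).fit(X)

# the MCD location stays near the true mean (0, 0);
# the empirical location is pulled toward the outliers
assert np.linalg.norm(mcd.location_) < np.linalg.norm(emp.location_)

# the raw (un-reweighted) FastMCD estimates are also exposed
assert mcd.raw_location_.shape == (2,)
assert mcd.raw_covariance_.shape == (2, 2)
```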
.. _multiclass:

=====================================
Multiclass and multioutput algorithms
=====================================

This section of the user guide covers functionality related to multi-learning
problems, including :term:`multiclass`, :term:`multilabel`, and
:term:`multioutput` classification and regression.

The modules in this section implement :term:`meta-estimators`, which require a
base estimator to be provided in their constructor. Meta-estimators extend the
functionality of the base estimator to support multi-learning problems, which
is accomplished by transforming the multi-learning problem into a set of
simpler problems, then fitting one estimator per problem.

This section covers two modules: :mod:`sklearn.multiclass` and
:mod:`sklearn.multioutput`. The chart below demonstrates the problem types
that each module is responsible for, and the corresponding meta-estimators
that each module provides.

.. image:: ../images/multi_org_chart.png
   :align: center

The table below provides a quick reference on the differences between problem
types. More detailed explanations can be found in subsequent sections of this
guide.
+------------------------+-------------------+--------------------+--------------------------------------------------+
|                        | Number of targets | Target cardinality | Valid                                            |
|                        |                   |                    | :func:`~sklearn.utils.multiclass.type_of_target` |
+========================+===================+====================+==================================================+
| Multiclass             | 1                 | >2                 | 'multiclass'                                     |
| classification         |                   |                    |                                                  |
+------------------------+-------------------+--------------------+--------------------------------------------------+
| Multilabel             | >1                | 2 (0 or 1)         | 'multilabel-indicator'                           |
| classification         |                   |                    |                                                  |
+------------------------+-------------------+--------------------+--------------------------------------------------+
| Multiclass-multioutput | >1                | >2                 | 'multiclass-multioutput'                         |
| classification         |                   |                    |                                                  |
+------------------------+-------------------+--------------------+--------------------------------------------------+
| Multioutput            | >1                | Continuous         | 'continuous-multioutput'                         |
| regression             |                   |                    |                                                  |
+------------------------+-------------------+--------------------+--------------------------------------------------+

Below is a summary of scikit-learn estimators that have multi-learning support
built-in, grouped by strategy. You don't need the meta-estimators provided by
this section if you're using one of these estimators. However, meta-estimators
can provide additional strategies beyond what is built-in:
.. currentmodule:: sklearn

- **Inherently multiclass:**

  - :class:`naive_bayes.BernoulliNB`
  - :class:`tree.DecisionTreeClassifier`
  - :class:`tree.ExtraTreeClassifier`
  - :class:`ensemble.ExtraTreesClassifier`
  - :class:`naive_bayes.GaussianNB`
  - :class:`neighbors.KNeighborsClassifier`
  - :class:`semi_supervised.LabelPropagation`
  - :class:`semi_supervised.LabelSpreading`
  - :class:`discriminant_analysis.LinearDiscriminantAnalysis`
  - :class:`svm.LinearSVC` (setting multi_class="crammer_singer")
  - :class:`linear_model.LogisticRegression` (with most solvers)
  - :class:`linear_model.LogisticRegressionCV` (with most solvers)
  - :class:`neural_network.MLPClassifier`
  - :class:`neighbors.NearestCentroid`
  - :class:`discriminant_analysis.QuadraticDiscriminantAnalysis`
  - :class:`neighbors.RadiusNeighborsClassifier`
  - :class:`ensemble.RandomForestClassifier`
  - :class:`linear_model.RidgeClassifier`
  - :class:`linear_model.RidgeClassifierCV`

- **Multiclass as One-Vs-One:**

  - :class:`svm.NuSVC`
  - :class:`svm.SVC`
  - :class:`gaussian_process.GaussianProcessClassifier` (setting multi_class = "one_vs_one")

- **Multiclass as One-Vs-The-Rest:**

  - :class:`ensemble.GradientBoostingClassifier`
  - :class:`gaussian_process.GaussianProcessClassifier` (setting multi_class = "one_vs_rest")
  - :class:`svm.LinearSVC` (setting multi_class="ovr")
  - :class:`linear_model.LogisticRegression` (most solvers)
  - :class:`linear_model.LogisticRegressionCV` (most solvers)
  - :class:`linear_model.SGDClassifier`
  - :class:`linear_model.Perceptron`

- **Support multilabel:**

  - :class:`tree.DecisionTreeClassifier`
  - :class:`tree.ExtraTreeClassifier`
  - :class:`ensemble.ExtraTreesClassifier`
  - :class:`neighbors.KNeighborsClassifier`
  - :class:`neural_network.MLPClassifier`
  - :class:`neighbors.RadiusNeighborsClassifier`
  - :class:`ensemble.RandomForestClassifier`
  - :class:`linear_model.RidgeClassifier`
  - :class:`linear_model.RidgeClassifierCV`

- **Support multiclass-multioutput:**

  - :class:`tree.DecisionTreeClassifier`
  - :class:`tree.ExtraTreeClassifier`
  - :class:`ensemble.ExtraTreesClassifier`
  - :class:`neighbors.KNeighborsClassifier`
  - :class:`neighbors.RadiusNeighborsClassifier`
  - :class:`ensemble.RandomForestClassifier`

.. _multiclass_classification:

Multiclass classification
=========================

.. warning::

   All classifiers in scikit-learn do multiclass classification
   out-of-the-box. You don't need to use the :mod:`sklearn.multiclass` module
   unless you want to experiment with different multiclass strategies.

**Multiclass classification** is a classification task with more than two
classes. Each sample can only be labeled as one class. For example,
classification using features extracted from a set of images of fruit, where
each image may either be of an orange, an apple, or a pear. Each image is one
sample and is labeled as one of the 3 possible classes.
Multiclass classification makes the assumption that each sample is assigned
to one and only one label - one sample cannot, for example, be both a pear
and an apple.

While all scikit-learn classifiers are capable of multiclass classification,
the meta-estimators offered by :mod:`sklearn.multiclass` permit changing the
way they handle more than two classes because this may have an effect on
classifier performance (either in terms of generalization error or required
computational resources).

Target format
-------------

Valid :term:`multiclass` representations for
:func:`~sklearn.utils.multiclass.type_of_target` (`y`) are:

- 1d or column vector containing more than two discrete values. An example of
  a vector ``y`` for 4 samples:
  >>> import numpy as np
  >>> y = np.array(['apple', 'pear', 'apple', 'orange'])
  >>> print(y)
  ['apple' 'pear' 'apple' 'orange']

- Dense or sparse :term:`binary` matrix of shape ``(n_samples, n_classes)``
  with a single sample per row, where each column represents one class. An
  example of both a dense and sparse :term:`binary` matrix ``y`` for 4
  samples, where the columns, in order, are apple, orange, and pear:

  >>> import numpy as np
  >>> from sklearn.preprocessing import LabelBinarizer
  >>> y = np.array(['apple', 'pear', 'apple', 'orange'])
  >>> y_dense = LabelBinarizer().fit_transform(y)
  >>> print(y_dense)
  [[1 0 0]
   [0 0 1]
   [1 0 0]
   [0 1 0]]
  >>> from scipy import sparse
  >>> y_sparse = sparse.csr_matrix(y_dense)
  >>> print(y_sparse)
    Coords  Values
    (0, 0)  1
    (1, 2)  1
    (2, 0)  1
    (3, 1)  1

  For more information about :class:`~sklearn.preprocessing.LabelBinarizer`,
  refer to :ref:`preprocessing_targets`.

.. _ovr_classification:

OneVsRestClassifier
-------------------

The **one-vs-rest** strategy, also known as **one-vs-all**, is implemented in
:class:`~sklearn.multiclass.OneVsRestClassifier`. The strategy consists in
fitting one classifier per class. For each classifier, the class is fitted
against all the other classes. In addition to its computational efficiency
(only `n_classes` classifiers are needed), one advantage of this approach is
its interpretability. Since each class is represented by one and only one
classifier, it is possible to gain knowledge about the class by inspecting
its corresponding classifier. This is the most commonly used strategy and is
a fair default choice.
Below is an example of multiclass learning using OvR::

  >>> from sklearn import datasets
  >>> from sklearn.multiclass import OneVsRestClassifier
  >>> from sklearn.svm import LinearSVC
  >>> X, y = datasets.load_iris(return_X_y=True)
  >>> OneVsRestClassifier(LinearSVC(random_state=0)).fit(X, y).predict(X)
  array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
         0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
         0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
         1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 1, 1, 1,
         1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
         2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2,
         2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2])

:class:`~sklearn.multiclass.OneVsRestClassifier` also supports multilabel
classification. To use this feature, feed the classifier an indicator matrix,
in which cell [i, j] indicates the presence of label j in sample i.

.. figure:: ../auto_examples/miscellaneous/images/sphx_glr_plot_multilabel_001.png
   :target: ../auto_examples/miscellaneous/plot_multilabel.html
   :align: center
   :scale: 75%

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_miscellaneous_plot_multilabel.py`
* :ref:`sphx_glr_auto_examples_classification_plot_classification_probability.py`
* :ref:`sphx_glr_auto_examples_linear_model_plot_logistic_multinomial.py`

.. _ovo_classification:

OneVsOneClassifier
------------------

:class:`~sklearn.multiclass.OneVsOneClassifier` constructs one classifier per
pair of classes. At prediction time, the class which received the most votes
is selected. In the event of a tie (among two classes with an equal number of
votes), it selects the class with the highest aggregate classification
confidence by summing over the pair-wise classification confidence levels
computed by the underlying binary classifiers.
Since it requires to fit ``n_classes * (n_classes - 1) / 2`` classifiers,
this method is usually slower than one-vs-the-rest, due to its
O(n_classes^2) complexity. However, this method may be advantageous for
algorithms such as kernel algorithms which don't scale well with
``n_samples``.
This is because each individual learning problem only involves a small
subset of the data whereas, with one-vs-the-rest, the complete dataset
is used ``n_classes`` times. The decision function is the result of a
monotonic transformation of the one-versus-one classification.

Below is an example of multiclass learning using OvO::

  >>> from sklearn import datasets
  >>> from sklearn.multiclass import OneVsOneClassifier
  >>> from sklearn.svm import LinearSVC
  >>> X, y = datasets.load_iris(return_X_y=True)
  >>> OneVsOneClassifier(LinearSVC(random_state=0)).fit(X, y).predict(X)
  array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
         0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
         0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
         1, 1, 1, 1, 2, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1,
         1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
         2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
         2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2])

.. rubric:: References

* "Pattern Recognition and Machine Learning. Springer",
  Christopher M. Bishop, page 183, (First Edition)

.. _ecoc:

OutputCodeClassifier
--------------------

Error-Correcting Output Code-based strategies are fairly different from
one-vs-the-rest and one-vs-one. With these strategies, each class is
represented in a Euclidean space, where each dimension can only be 0 or 1.
Another way to put it is that each class is represented by a binary code (an
array of 0 and 1). The matrix which keeps track of the location/code of each
class is called the code book.
The code size is the dimensionality of the aforementioned space.
Intuitively, each class should be represented by a code as unique as
possible and a good code book should be designed to optimize
classification accuracy.

In this implementation, we simply use a randomly-generated code book as
advocated in [3]_ although more elaborate methods may be added in the
future.

At fitting time, one binary classifier per bit in the code book is fitted.
At prediction time, the classifiers are used to project new points in the
class space and the class closest to the points is chosen.

In :class:`~sklearn.multiclass.OutputCodeClassifier`, the ``code_size``
attribute allows the user to control the number of classifiers which will
be used. It is a percentage of the total number of classes.

A number between 0 and 1 will require fewer classifiers than
one-vs-the-rest. In theory, ``log2(n_classes) / n_classes`` is sufficient
to represent each class unambiguously. However, in practice, it may not
lead to good accuracy since ``log2(n_classes)`` is much smaller than
`n_classes`.

A number greater than 1 will require more classifiers than
one-vs-the-rest. In this case, some classifiers will in theory correct for
the mistakes made by other classifiers, hence the name "error-correcting".
In practice, however, this may not happen as classifier mistakes will
typically be correlated. The error-correcting output codes have a similar
effect to bagging.
Below is an example of multiclass learning using Output-Codes::

  >>> from sklearn import datasets
  >>> from sklearn.multiclass import OutputCodeClassifier
  >>> from sklearn.svm import LinearSVC
  >>> X, y = datasets.load_iris(return_X_y=True)
  >>> clf = OutputCodeClassifier(LinearSVC(random_state=0),
  ...                            code_size=2, random_state=0)
  >>> clf.fit(X, y).predict(X)
  array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
         0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
         0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1,
         2, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 2, 2, 2, 1, 1,
         1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
         2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2,
         2, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2])

.. rubric:: References

* "Solving multiclass learning problems via error-correcting output codes",
  Dietterich T., Bakiri G.,
  Journal of Artificial Intelligence Research 2, 1995.

.. [3] "The error coding method and PICTs",
   James G., Hastie T.,
   Journal of Computational and Graphical statistics 7, 1998.

* "The Elements of Statistical Learning",
  Hastie T., Tibshirani R., Friedman J., page 606 (second-edition), 2008.

.. _multilabel_classification:

Multilabel classification
=========================

**Multilabel classification** (closely related to **multioutput**
**classification**) is a classification task labeling each sample with ``m``
labels from ``n_classes`` possible classes, where ``m`` can be 0 to
``n_classes`` inclusive. This can be thought of as predicting properties of a
sample that are not mutually exclusive. Formally, a binary output is assigned
to each class, for every sample.
Positive classes are indicated with 1 and negative classes with 0 or -1. It is
thus comparable to running ``n_classes`` binary classification tasks, for
example with :class:`~sklearn.multioutput.MultiOutputClassifier`. This
approach treats each label independently whereas multilabel classifiers *may*
treat the multiple classes simultaneously, accounting for correlated behavior
among them.

For example, prediction of the topics relevant to a text document or video.
The document or video may be about one of 'religion', 'politics', 'finance' or
'education', several of the topic classes or all of the topic classes.

Target format
-------------

A valid representation of :term:`multilabel` `y` is an either dense or sparse
:term:`binary` matrix of shape ``(n_samples, n_classes)``. Each column
represents a class. The ``1``'s in each row denote the positive classes a
sample has been labeled with. An example of a dense matrix ``y`` for 3
samples:

>>> y = np.array([[1, 0, 0, 1], [0, 0, 1, 1], [0, 0, 0, 0]])
>>> print(y)
[[1 0 0 1]
 [0 0 1 1]
 [0 0 0 0]]

Dense binary matrices can also be created using
:class:`~sklearn.preprocessing.MultiLabelBinarizer`. For more information,
refer to :ref:`preprocessing_targets`.

An example of the same ``y`` in sparse matrix form:

>>> y_sparse = sparse.csr_matrix(y)
>>> print(y_sparse)
Coords  Values
(0, 0)  1
(0, 3)  1
(1, 2)  1
(1, 3)  1
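The dense binary matrix above can also be produced from per-sample label sets.
As a brief sketch (not part of the original text, using only the public
:class:`~sklearn.preprocessing.MultiLabelBinarizer` API mentioned above):

```python
# Build the same dense multilabel target matrix from per-sample label sets.
from sklearn.preprocessing import MultiLabelBinarizer

# Each sample lists the classes it is labeled with (class indices 0-3).
samples = [{0, 3}, {2, 3}, set()]
mlb = MultiLabelBinarizer(classes=[0, 1, 2, 3])
y = mlb.fit_transform(samples)
print(y)
# [[1 0 0 1]
#  [0 0 1 1]
#  [0 0 0 0]]
```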
.. _multioutputclassfier:

MultiOutputClassifier
---------------------

Multilabel classification support can be added to any classifier with
:class:`~sklearn.multioutput.MultiOutputClassifier`. This strategy consists of
fitting one classifier per target. This allows multiple target variable
classifications. The purpose of this class is to extend estimators to be able
to estimate a series of target functions (f1,f2,f3...,fn) that are trained on
a single X predictor matrix to predict a series of responses (y1,y2,y3...,yn).

You can find a usage example for
:class:`~sklearn.multioutput.MultiOutputClassifier` as part of the section on
:ref:`multiclass_multioutput_classification` since it is a generalization of
multilabel classification to multiclass outputs instead of binary outputs.

.. _classifierchain:

ClassifierChain
---------------

Classifier chains (see :class:`~sklearn.multioutput.ClassifierChain`) are a
way of combining a number of binary classifiers into a single multi-label
model that is capable of exploiting correlations among targets.

For a multi-label classification problem with N classes, N binary classifiers
are assigned an integer between 0 and N-1. These integers define the order of
models in the chain. Each classifier is then fit on the available training
data plus the true labels of the classes whose models were assigned a lower
number.

When predicting, the true labels will not be available. Instead the
predictions of each model are passed on to the subsequent models in the chain
to be used as features.

Clearly the order of the chain is important. The first model in the chain has
no information about the other labels while the last model in the chain has
features indicating the presence of all of the other labels. In general one
does not know the optimal ordering of the models in the chain so typically
many randomly ordered chains are fit and their predictions are averaged
together.
.. rubric:: References

* Jesse Read, Bernhard Pfahringer, Geoff Holmes, Eibe Frank,
  "Classifier Chains for Multi-label Classification", 2009.

.. _multiclass_multioutput_classification:

Multiclass-multioutput classification
=====================================

**Multiclass-multioutput classification** (also known as **multitask
classification**) is a classification task which labels each sample with a
set of **non-binary** properties. Both the number of properties and the
number of classes per property are greater than 2. A single estimator thus
handles several joint classification tasks. This is both a generalization of
the multi\ *label* classification task, which only considers binary
attributes, as well as a generalization of the multi\ *class* classification
task, where only one property is considered.

For example, classification of the properties "type of fruit" and "colour"
for a set of images of fruit. The property "type of fruit" has the possible
classes: "apple", "pear" and "orange". The property "colour" has the possible
classes: "green", "red", "yellow" and "orange". Each sample is an image of a
fruit, a label is output for both properties and each label is one of the
possible classes of the corresponding property.

Note that all classifiers handling multiclass-multioutput (also known as
multitask classification) tasks support the multilabel classification task
as a special case. Multitask classification is similar to the multioutput
classification task with different model formulations. For more information,
see the relevant estimator documentation.

Below is an example of multiclass-multioutput classification:

>>> from sklearn.datasets import make_classification
>>> from sklearn.multioutput import MultiOutputClassifier
>>> from sklearn.ensemble import RandomForestClassifier
>>> from sklearn.utils import shuffle
>>> import numpy as np
>>> X, y1 = make_classification(n_samples=10, n_features=100,
...                             n_informative=30, n_classes=3,
...                             random_state=1)
>>> y2 = shuffle(y1, random_state=1)
>>> y3 = shuffle(y1, random_state=2)
>>> Y = np.vstack((y1, y2, y3)).T
>>> n_samples, n_features = X.shape  # 10,100
>>> n_outputs = Y.shape[1]  # 3
>>> n_classes = 3
>>> forest = RandomForestClassifier(random_state=1)
>>> multi_target_forest = MultiOutputClassifier(forest, n_jobs=2)
>>> multi_target_forest.fit(X, Y).predict(X)
array([[2, 2, 0],
       [1, 2, 1],
       [2, 1, 0],
       [0, 0, 2],
       [0, 2, 1],
       [0, 0, 2],
       [1, 1, 0],
       [1, 1, 1],
       [0, 0, 2],
       [2, 0, 0]])

.. warning::

    At present, no metric in :mod:`sklearn.metrics` supports the
    multiclass-multioutput classification task.

Target format
-------------

A valid representation of :term:`multioutput` `y` is a dense matrix of shape
``(n_samples, n_classes)`` of class labels. A column-wise concatenation of 1d
:term:`multiclass` variables. An example of ``y`` for 3 samples:

>>> y = np.array([['apple', 'green'], ['orange', 'orange'], ['pear', 'green']])
>>> print(y)
[['apple' 'green']
 ['orange' 'orange']
 ['pear' 'green']]

.. _multioutput_regression:

Multioutput regression
======================

**Multioutput regression** predicts multiple numerical properties for each
sample. Each property is a numerical variable and the number of properties to
be predicted for each sample is greater than or equal to 2. Some estimators
that support multioutput regression are faster than just running ``n_output``
estimators.

For example, prediction of both wind speed and wind direction, in degrees,
using data obtained at a certain location. Each sample would be data obtained
at one location and both wind speed and direction would be output for each
sample.
The following regressors natively support multioutput regression:

- :class:`cross_decomposition.CCA`
- :class:`tree.DecisionTreeRegressor`
- :class:`dummy.DummyRegressor`
- :class:`linear_model.ElasticNet`
- :class:`tree.ExtraTreeRegressor`
- :class:`ensemble.ExtraTreesRegressor`
- :class:`gaussian_process.GaussianProcessRegressor`
- :class:`neighbors.KNeighborsRegressor`
- :class:`kernel_ridge.KernelRidge`
- :class:`linear_model.Lars`
- :class:`linear_model.Lasso`
- :class:`linear_model.LassoLars`
- :class:`linear_model.LinearRegression`
- :class:`multioutput.MultiOutputRegressor`
- :class:`linear_model.MultiTaskElasticNet`
- :class:`linear_model.MultiTaskElasticNetCV`
- :class:`linear_model.MultiTaskLasso`
- :class:`linear_model.MultiTaskLassoCV`
- :class:`linear_model.OrthogonalMatchingPursuit`
- :class:`cross_decomposition.PLSCanonical`
- :class:`cross_decomposition.PLSRegression`
- :class:`linear_model.RANSACRegressor`
- :class:`neighbors.RadiusNeighborsRegressor`
- :class:`ensemble.RandomForestRegressor`
- :class:`multioutput.RegressorChain`
- :class:`linear_model.Ridge`
- :class:`linear_model.RidgeCV`
- :class:`compose.TransformedTargetRegressor`

Target format
-------------

A valid representation of :term:`multioutput` `y` is a dense matrix of shape
``(n_samples, n_output)`` of floats. A column-wise concatenation of
:term:`continuous` variables. An example of ``y`` for 3 samples:

>>> y = np.array([[31.4, 94], [40.5, 109], [25.0, 30]])
>>> print(y)
[[ 31.4  94. ]
 [ 40.5 109. ]
 [ 25.   30. ]]

.. _multioutputregressor:

MultiOutputRegressor
--------------------

Multioutput regression support can be added to any regressor with
:class:`~sklearn.multioutput.MultiOutputRegressor`. This strategy consists of
fitting one regressor per target. Since each target is represented by exactly
one regressor it is possible to gain knowledge about the target by inspecting
its corresponding regressor.
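The per-target inspection mentioned above can be sketched briefly (an
illustrative example, not from the original text; the dataset and base
estimator are arbitrary choices):

```python
# Inspect the per-target regressors fitted by MultiOutputRegressor
# through its `estimators_` attribute.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.multioutput import MultiOutputRegressor

X, y = make_regression(n_samples=20, n_targets=2, random_state=0)
model = MultiOutputRegressor(LinearRegression()).fit(X, y)

# One fitted regressor per target column of y.
print(len(model.estimators_))            # 2
print(model.estimators_[0].coef_.shape)  # coefficients for the first target
```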
As :class:`~sklearn.multioutput.MultiOutputRegressor` fits one regressor per
target it can not take advantage of correlations between targets.

Below is an example of multioutput regression:

>>> from sklearn.datasets import make_regression
>>> from sklearn.multioutput import MultiOutputRegressor
>>> from sklearn.ensemble import GradientBoostingRegressor
>>> X, y = make_regression(n_samples=10, n_targets=3, random_state=1)
>>> MultiOutputRegressor(GradientBoostingRegressor(random_state=0)).fit(X, y).predict(X)
array([[-154.75474165, -147.03498585,  -50.03812219],
       [   7.12165031,    5.12914884,  -81.46081961],
       [-187.8948621 , -100.44373091,   13.88978285],
       [-141.62745778,   95.02891072, -191.48204257],
       [  97.03260883,  165.34867495,  139.52003279],
       [ 123.92529176,   21.25719016,   -7.84253   ],
       [-122.25193977,  -85.16443186, -107.12274212],
       [ -30.170388  ,  -94.80956739,   12.16979946],
       [ 140.72667194,  176.50941682,  -17.50447799],
       [ 149.37967282,  -81.15699552,   -5.72850319]])

.. _regressorchain:

RegressorChain
--------------

Regressor chains (see :class:`~sklearn.multioutput.RegressorChain`) are
analogous to :class:`~sklearn.multioutput.ClassifierChain` as a way of
combining a number of regressions into a single multi-target model that is
capable of exploiting correlations among targets.
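A minimal sketch of a regressor chain (not from the original text; the
dataset, base estimator, and fixed ``order`` are arbitrary choices):

```python
# A RegressorChain on a three-target regression problem.
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.multioutput import RegressorChain

X, y = make_regression(n_samples=50, n_targets=3, random_state=0)

# order=[0, 1, 2]: each regressor also sees the targets earlier in the
# chain (true values at fit time, predictions at predict time) as features.
chain = RegressorChain(Ridge(), order=[0, 1, 2]).fit(X, y)
print(chain.predict(X).shape)  # (50, 3)
```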
.. _neighbors:

=================
Nearest Neighbors
=================

.. sectionauthor:: Jake Vanderplas

.. currentmodule:: sklearn.neighbors

:mod:`sklearn.neighbors` provides functionality for unsupervised and
supervised neighbors-based learning methods. Unsupervised nearest neighbors
is the foundation of many other learning methods, notably manifold learning
and spectral clustering. Supervised neighbors-based learning comes in two
flavors: `classification`_ for data with discrete labels, and `regression`_
for data with continuous labels.

The principle behind nearest neighbor methods is to find a predefined number
of training samples closest in distance to the new point, and predict the
label from these. The number of samples can be a user-defined constant
(k-nearest neighbor learning), or vary based on the local density of points
(radius-based neighbor learning). The distance can, in general, be any metric
measure: standard Euclidean distance is the most common choice.
Neighbors-based methods are known as *non-generalizing* machine learning
methods, since they simply "remember" all of their training data (possibly
transformed into a fast indexing structure such as a
:ref:`Ball Tree <ball_tree>` or :ref:`KD Tree <kd_tree>`).

Despite its simplicity, nearest neighbors has been successful in a large
number of classification and regression problems, including handwritten
digits and satellite image scenes. Being a non-parametric method, it is often
successful in classification situations where the decision boundary is very
irregular.

The classes in :mod:`sklearn.neighbors` can handle either NumPy arrays or
`scipy.sparse` matrices as input. For dense matrices, a large number of
possible distance metrics are supported. For sparse matrices, arbitrary
Minkowski metrics are supported for searches.

There are many learning routines which rely on nearest neighbors at their
core. One example is :ref:`kernel density estimation <kernel_density>`,
discussed in the :ref:`density estimation <density_estimation>` section.

.. _unsupervised_neighbors:

Unsupervised Nearest Neighbors
==============================

:class:`NearestNeighbors` implements unsupervised nearest neighbors learning.
It acts as a uniform interface to three different nearest neighbors
algorithms: :class:`BallTree`, :class:`KDTree`, and a brute-force algorithm
based on routines in :mod:`sklearn.metrics.pairwise`. The choice of neighbors
search algorithm is controlled through the keyword ``'algorithm'``, which must
be one of ``['auto', 'ball_tree', 'kd_tree', 'brute']``. When the default
value ``'auto'`` is passed, the algorithm attempts to determine the best
approach from the training data. For a discussion of the strengths and
weaknesses of each option, see `Nearest Neighbor Algorithms`_.

.. warning::

    Regarding the Nearest Neighbors algorithms, if two neighbors :math:`k+1`
    and :math:`k` have identical distances but different labels, the result
    will depend on the ordering of the training data.

Finding the Nearest Neighbors
-----------------------------

For the simple task of finding the nearest neighbors between two sets of
data, the unsupervised algorithms within :mod:`sklearn.neighbors` can be
used:

>>> from sklearn.neighbors import NearestNeighbors
>>> import numpy as np
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
>>> nbrs = NearestNeighbors(n_neighbors=2, algorithm='ball_tree').fit(X)
>>> distances, indices = nbrs.kneighbors(X)
>>> indices
array([[0, 1],
       [1, 0],
       [2, 1],
       [3, 4],
       [4, 3],
       [5, 4]]...)
>>> distances
array([[0.        , 1.        ],
       [0.        , 1.        ],
       [0.        , 1.41421356],
       [0.        , 1.        ],
       [0.        , 1.        ],
       [0.        , 1.41421356]])

Because the query set matches the training set, the nearest neighbor of each
point is the point itself, at a distance of zero.
It is also possible to efficiently produce a sparse graph showing the
connections between neighboring points:

>>> nbrs.kneighbors_graph(X).toarray()
array([[1., 1., 0., 0., 0., 0.],
       [1., 1., 0., 0., 0., 0.],
       [0., 1., 1., 0., 0., 0.],
       [0., 0., 0., 1., 1., 0.],
       [0., 0., 0., 1., 1., 0.],
       [0., 0., 0., 0., 1., 1.]])
The dataset is structured such that points nearby in index order are nearby
in parameter space, leading to an approximately block-diagonal matrix of
K-nearest neighbors. Such a sparse graph is useful in a variety of
circumstances which make use of spatial relationships between points for
unsupervised learning: in particular, see :class:`~sklearn.manifold.Isomap`,
:class:`~sklearn.manifold.LocallyLinearEmbedding`, and
:class:`~sklearn.cluster.SpectralClustering`.

.. _kdtree_and_balltree_classes:

KDTree and BallTree Classes
---------------------------

Alternatively, one can use the :class:`KDTree` or :class:`BallTree` classes
directly to find nearest neighbors. This is the functionality wrapped by the
:class:`NearestNeighbors` class used above. The Ball Tree and KD Tree have
the same interface; we'll show an example of using the KD Tree here:

>>> from sklearn.neighbors import KDTree
>>> import numpy as np
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
>>> kdt = KDTree(X, leaf_size=30, metric='euclidean')
>>> kdt.query(X, k=2, return_distance=False)
array([[0, 1],
       [1, 0],
       [2, 1],
       [3, 4],
       [4, 3],
       [5, 4]]...)

Refer to the :class:`KDTree` and :class:`BallTree` class documentation for
more information on the options available for nearest neighbors searches,
including specification of query strategies, distance metrics, etc. For a
list of valid metrics use `KDTree.valid_metrics` and `BallTree.valid_metrics`:

>>> from sklearn.neighbors import KDTree, BallTree
>>> KDTree.valid_metrics
['euclidean', 'l2', 'minkowski', 'p', 'manhattan', 'cityblock', 'l1', 'chebyshev', 'infinity']
>>> BallTree.valid_metrics
['euclidean', 'l2', 'minkowski', 'p', 'manhattan', 'cityblock', 'l1', 'chebyshev', 'infinity', 'seuclidean', 'mahalanobis', 'hamming', 'canberra', 'braycurtis', 'jaccard', 'dice', 'rogerstanimoto', 'russellrao', 'sokalmichener', 'sokalsneath', 'haversine', 'pyfunc']

.. _classification:

Nearest Neighbors Classification
================================

Neighbors-based classification is a type of *instance-based learning* or
*non-generalizing learning*: it does not attempt to construct a general
internal model, but simply stores instances of the training data.
Classification is computed from a simple majority vote of the nearest
neighbors of each point: a query point is assigned the data class which has
the most representatives within the nearest neighbors of the point.

scikit-learn implements two different nearest neighbors classifiers:
:class:`KNeighborsClassifier` implements learning based on the :math:`k`
nearest neighbors of each query point, where :math:`k` is an integer value
specified by the user. :class:`RadiusNeighborsClassifier` implements learning
based on the number of neighbors within a fixed radius :math:`r` of each
training point, where :math:`r` is a floating-point value specified by the
user.

The :math:`k`-neighbors classification in :class:`KNeighborsClassifier` is
the most commonly used technique. The optimal choice of the value :math:`k`
is highly data-dependent: in general a larger :math:`k` suppresses the
effects of noise, but makes the classification boundaries less distinct.

In cases where the data is not uniformly sampled, radius-based neighbors
classification in :class:`RadiusNeighborsClassifier` can be a better choice.
The user specifies a fixed radius :math:`r`, such that points in sparser
neighborhoods use fewer nearest neighbors for the classification. For
high-dimensional parameter spaces, this method becomes less effective due to
the so-called "curse of dimensionality".

The basic nearest neighbors classification uses uniform weights: that is, the
value assigned to a query point is computed from a simple majority vote of
the nearest neighbors. Under some circumstances, it is better to weight the
neighbors such that nearer neighbors contribute more to the fit.
This can be accomplished through the ``weights`` keyword. The default value,
``weights = 'uniform'``, assigns uniform weights to each neighbor.
``weights = 'distance'`` assigns weights proportional to the inverse of the
distance from the query point.
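As an illustrative sketch (not from the original text; the toy dataset is an
arbitrary choice), the two built-in weighting schemes can be compared on
training data:

```python
# Compare 'uniform' and 'distance' neighbor weights on a toy dataset.
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=100, n_features=4, random_state=0)

for weights in ("uniform", "distance"):
    clf = KNeighborsClassifier(n_neighbors=5, weights=weights).fit(X, y)
    # With 'distance' weights, each training point is its own nearest
    # neighbor at distance 0, so training accuracy is perfect.
    print(weights, clf.score(X, y))
```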
Alternatively, a user-defined function of the distance can be supplied to
compute the weights.

.. |classification_1| image:: ../auto_examples/neighbors/images/sphx_glr_plot_classification_001.png
   :target: ../auto_examples/neighbors/plot_classification.html
   :scale: 75

.. centered:: |classification_1|

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_neighbors_plot_classification.py`: an example
  of classification using nearest neighbors.

.. _regression:

Nearest Neighbors Regression
============================

Neighbors-based regression can be used in cases where the data labels are
continuous rather than discrete variables. The label assigned to a query
point is computed based on the mean of the labels of its nearest neighbors.

scikit-learn implements two different neighbors regressors:
:class:`KNeighborsRegressor` implements learning based on the :math:`k`
nearest neighbors of each query point, where :math:`k` is an integer value
specified by the user. :class:`RadiusNeighborsRegressor` implements learning
based on the neighbors within a fixed radius :math:`r` of the query point,
where :math:`r` is a floating-point value specified by the user.

The basic nearest neighbors regression uses uniform weights: that is, each
point in the local neighborhood contributes uniformly to the prediction for a
query point. Under some circumstances, it can be advantageous to weight
points such that nearby points contribute more to the regression than faraway
points. This can be accomplished through the ``weights`` keyword. The default
value, ``weights = 'uniform'``, assigns equal weights to all points.
``weights = 'distance'`` assigns weights proportional to the inverse of the
distance from the query point. Alternatively, a user-defined function of the
distance can be supplied, which will be used to compute the weights.

.. figure:: ../auto_examples/neighbors/images/sphx_glr_plot_regression_001.png
   :target: ../auto_examples/neighbors/plot_regression.html
   :align: center
   :scale: 75

The use of multi-output nearest neighbors for regression is demonstrated in
:ref:`sphx_glr_auto_examples_miscellaneous_plot_multioutput_face_completion.py`.
In this example, the inputs X are the pixels of the upper half of faces and
the outputs Y are the pixels of the lower half of those faces.

.. figure:: ../auto_examples/miscellaneous/images/sphx_glr_plot_multioutput_face_completion_001.png
   :target: ../auto_examples/miscellaneous/plot_multioutput_face_completion.html
   :scale: 75
   :align: center

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_neighbors_plot_regression.py`: an example of
  regression using nearest neighbors.

* :ref:`sphx_glr_auto_examples_miscellaneous_plot_multioutput_face_completion.py`:
  an example of multi-output regression using nearest neighbors.

Nearest Neighbor Algorithms
===========================

.. _brute_force:

Brute Force
-----------

Fast computation of nearest neighbors is an active area of research in
machine learning. The most naive neighbor search implementation involves the
brute-force computation of distances between all pairs of points in the
dataset: for :math:`N` samples in :math:`D` dimensions, this approach scales
as :math:`O[D N^2]`. Efficient brute-force neighbors searches can be very
competitive for small data samples. However, as the number of samples
:math:`N` grows, the brute-force approach quickly becomes infeasible. In the
classes within :mod:`sklearn.neighbors`, brute-force neighbors searches are
specified using the keyword ``algorithm = 'brute'``, and are computed using
the routines available in :mod:`sklearn.metrics.pairwise`.

.. _kd_tree:

K-D Tree
--------

To address the computational inefficiencies of the brute-force approach, a
variety of tree-based data structures have been invented.
In general, these structures attempt to reduce the required number of
distance calculations by efficiently encoding aggregate distance information
for the sample. The basic idea is that if point :math:`A` is very distant
from point :math:`B`, and point :math:`B` is very close to point :math:`C`,
then we know that points :math:`A` and :math:`C` are very distant, *without
having to explicitly calculate their distance*. In this way, the
computational cost of a nearest neighbors search can be reduced to
:math:`O[D N \log(N)]` or better. This is a significant improvement over
brute-force for large :math:`N`.
An early approach to taking advantage of this aggregate information was the
*KD tree* data structure (short for *K-dimensional tree*), which generalizes
two-dimensional *Quad-trees* and 3-dimensional *Oct-trees* to an arbitrary
number of dimensions. The KD tree is a binary tree structure which
recursively partitions the parameter space along the data axes, dividing it
into nested orthotropic regions into which data points are filed. The
construction of a KD tree is very fast: because partitioning is performed
only along the data axes, no :math:`D`-dimensional distances need to be
computed. Once constructed, the nearest neighbor of a query point can be
determined with only :math:`O[\log(N)]` distance computations. Though the KD
tree approach is very fast for low-dimensional (:math:`D < 20`) neighbors
searches, it becomes inefficient as :math:`D` grows very large: this is one
manifestation of the so-called "curse of dimensionality". In scikit-learn,
KD tree neighbors searches are specified using the keyword
``algorithm = 'kd_tree'``, and are computed using the class :class:`KDTree`.

.. dropdown:: References

  * "Multidimensional binary search trees used for associative searching",
    Bentley, J.L., Communications of the ACM (1975)

.. _ball_tree:

Ball Tree
---------

To address the inefficiencies of KD Trees in higher dimensions, the *ball
tree* data structure was developed. Where KD trees partition data along
Cartesian axes, ball trees partition data in a series of nesting
hyper-spheres. This makes tree construction more costly than that of the KD
tree, but results in a data structure which can be very efficient on highly
structured data, even in very high dimensions.

A ball tree recursively divides the data into nodes defined by a centroid
:math:`C` and radius :math:`r`, such that each point in the node lies within
the hyper-sphere defined by :math:`r` and :math:`C`. The number of candidate
points for a neighbor search is reduced through use of the *triangle
inequality*:

.. math::   |x+y| \leq |x| + |y|

With this setup, a single distance calculation between a test point and the
centroid is sufficient to determine a lower and upper bound on the distance
to all points within the node. Because of the spherical geometry of the ball
tree nodes, it can out-perform a *KD-tree* in high dimensions, though the
actual performance is highly dependent on the structure of the training data.

In scikit-learn, ball-tree-based neighbors searches are specified using the
keyword ``algorithm = 'ball_tree'``, and are computed using the class
:class:`BallTree`. Alternatively, the user can work with the
:class:`BallTree` class directly.

.. dropdown:: References

  * "Five Balltree Construction Algorithms", Omohundro, S.M., International
    Computer Science Institute Technical Report (1989)

.. dropdown:: Choice of Nearest Neighbors Algorithm

  The optimal algorithm for a given dataset is a complicated choice, and
  depends on a number of factors:

  * number of samples :math:`N` (i.e. ``n_samples``) and dimensionality
    :math:`D` (i.e. ``n_features``).

    * *Brute force* query time grows as :math:`O[D N]`
    * *Ball tree* query time grows as approximately :math:`O[D \log(N)]`
    * *KD tree* query time changes with :math:`D` in a way that is difficult
      to precisely characterise. For small :math:`D` (less than 20 or so)
      the cost is approximately :math:`O[D\log(N)]`, and the KD tree query
      can be very efficient. For larger :math:`D`, the cost increases to
      nearly :math:`O[DN]`, and the overhead due to the tree structure can
      lead to queries which are slower than brute force.

    For small data sets (:math:`N` less than 30 or so), :math:`\log(N)` is
    comparable to :math:`N`, and brute force algorithms can be more
    efficient than a tree-based approach. Both :class:`KDTree` and
    :class:`BallTree` address this through providing a *leaf size*
    parameter: this controls the number of samples at which a query
    switches to brute-force.
    This allows both algorithms to approach the efficiency of a brute-force
    computation for small :math:`N`.
  * data structure: *intrinsic dimensionality* of the data and/or *sparsity*
    of the data. Intrinsic dimensionality refers to the dimension
    :math:`d \le D` of a manifold on which the data lies, which can be
    linearly or non-linearly embedded in the parameter space. Sparsity refers
    to the degree to which the data fills the parameter space (this is to be
    distinguished from the concept as used in "sparse" matrices. The data
    matrix may have no zero entries, but the **structure** can still be
    "sparse" in this sense).

    * *Brute force* query time is unchanged by data structure.
    * *Ball tree* and *KD tree* query times can be greatly influenced by
      data structure. In general, sparser data with a smaller intrinsic
      dimensionality leads to faster query times. Because the KD tree
      internal representation is aligned with the parameter axes, it will
      not generally show as much improvement as ball tree for arbitrarily
      structured data.

    Datasets used in machine learning tend to be very structured, and are
    very well-suited for tree-based queries.

  * number of neighbors :math:`k` requested for a query point.

    * *Brute force* query time is largely unaffected by the value of
      :math:`k`
    * *Ball tree* and *KD tree* query time will become slower as :math:`k`
      increases. This is due to two effects: first, a larger :math:`k`
      leads to the necessity to search a larger portion of the parameter
      space. Second, using :math:`k > 1` requires internal queueing of
      results as the tree is traversed.

    As :math:`k` becomes large compared to :math:`N`, the ability to prune
    branches in a tree-based query is reduced.
In this situation, Brute force queries can be more efficient. \* number of query points. Both the ball tree and the KD Tree require a construction phase. The cost of this construction becomes negligible when amortized over many queries. If only a small number of queries will be performed, however, the construction can make up a significant fraction of the total cost. If very few query points will be required, brute force is better than a tree-based method. Currently, ``algorithm = 'auto'`` selects ``'brute'`` if any of the following conditions are verified: \* input data is sparse \* ``metric = 'precomputed'`` \* :math:`D > 15` \* :math:`k >= N/2` \* ``effective\_metric\_`` isn't in the ``VALID\_METRICS`` list for either ``'kd\_tree'`` or ``'ball\_tree'`` Otherwise, it selects the first out of ``'kd\_tree'`` and ``'ball\_tree'`` that has ``effective\_metric\_`` in its ``VALID\_METRICS`` list. This heuristic is based on the following assumptions: \* the number of query points is at least the same order as the number of training points \* ``leaf\_size`` is close to its default value of ``30`` \* when :math:`D > 15`, the intrinsic dimensionality of the data is generally too high for tree-based methods .. dropdown:: Effect of ``leaf\_size`` As noted above, for small sample sizes a brute force search can be more efficient than a tree-based query. This fact is accounted for in the ball tree and KD tree by internally switching to brute force searches within leaf nodes. The level of this switch can be specified with the parameter ``leaf\_size``. This parameter choice has many effects: \*\*construction time\*\* A larger ``leaf\_size`` leads to a faster tree construction time, because fewer nodes need to be created \*\*query time\*\* Both a large or small ``leaf\_size`` can lead to suboptimal query cost. For | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/neighbors.rst | main | scikit-learn | [
  of this switch can be specified with the parameter ``leaf_size``. This
  parameter choice has many effects:

  **construction time**
    A larger ``leaf_size`` leads to a faster tree construction time, because
    fewer nodes need to be created.

  **query time**
    Both a large or small ``leaf_size`` can lead to suboptimal query cost.
    For ``leaf_size`` approaching 1, the overhead involved in traversing
    nodes can significantly slow query times. For ``leaf_size`` approaching
    the size of the training set, queries become essentially brute force. A
    good compromise between these is ``leaf_size = 30``, the default value of
    the parameter.

  **memory**
    As ``leaf_size`` increases, the memory required to store a tree structure
    decreases. This is especially important in the case of ball tree, which
    stores a :math:`D`-dimensional centroid for each node. The required
    storage space for :class:`BallTree` is approximately ``1 / leaf_size``
    times the size of the training set.

  ``leaf_size`` is not referenced for brute force queries.

.. dropdown:: Valid Metrics for Nearest Neighbor Algorithms

  For a list of available metrics, see the documentation of the
  :class:`~sklearn.metrics.DistanceMetric` class and the metrics listed in
  `sklearn.metrics.pairwise.PAIRWISE_DISTANCE_FUNCTIONS`. Note that the
  "cosine" metric uses :func:`~sklearn.metrics.pairwise.cosine_distances`.

  A list of valid metrics for any of the above algorithms can be obtained by
  using their ``valid_metrics`` attribute. For example, valid metrics for
  ``KDTree`` can be generated by:

    >>> from sklearn.neighbors import KDTree
    >>> print(sorted(KDTree.valid_metrics))
    ['chebyshev', 'cityblock', 'euclidean', 'infinity', 'l1', 'l2', 'manhattan', 'minkowski', 'p']

.. _nearest_centroid_classifier:

Nearest Centroid Classifier
===========================

The :class:`NearestCentroid` classifier is a simple algorithm that represents
each class by the centroid of its members. In effect, this makes it similar
to the label updating phase of the :class:`~sklearn.cluster.KMeans`
algorithm. It also has no parameters to choose, making it a good baseline
classifier. It does, however, suffer on non-convex classes, as well as when
classes have drastically different variances, as equal variance in all
dimensions is assumed. See Linear Discriminant Analysis
(:class:`~sklearn.discriminant_analysis.LinearDiscriminantAnalysis`) and
Quadratic Discriminant Analysis
(:class:`~sklearn.discriminant_analysis.QuadraticDiscriminantAnalysis`) for
more complex methods that do not make this assumption. Usage of the default
:class:`NearestCentroid` is simple:

    >>> from sklearn.neighbors import NearestCentroid
    >>> import numpy as np
    >>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
    >>> y = np.array([1, 1, 1, 2, 2, 2])
    >>> clf = NearestCentroid()
    >>> clf.fit(X, y)
    NearestCentroid()
    >>> print(clf.predict([[-0.8, -1]]))
    [1]

Nearest Shrunken Centroid
-------------------------

The :class:`NearestCentroid` classifier has a ``shrink_threshold`` parameter,
which implements the nearest shrunken centroid classifier. In effect, the
value of each feature for each centroid is divided by the within-class
variance of that feature. The feature values are then reduced by
``shrink_threshold``. Most notably, if a particular feature value crosses
zero, it is set to zero. In effect, this removes the feature from affecting
the classification. This is useful, for example, for removing noisy features.

In the example below, using a small shrink threshold increases the accuracy
of the model from 0.81 to 0.82.

.. |nearest_centroid_1| image:: ../auto_examples/neighbors/images/sphx_glr_plot_nearest_centroid_001.png
  :target: ../auto_examples/neighbors/plot_nearest_centroid.html
  :scale: 50

.. |nearest_centroid_2| image:: ../auto_examples/neighbors/images/sphx_glr_plot_nearest_centroid_002.png
  :target: ../auto_examples/neighbors/plot_nearest_centroid.html
  :scale: 50

.. centered:: |nearest_centroid_1| |nearest_centroid_2|

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_neighbors_plot_nearest_centroid.py`: an
  example of classification using nearest centroid with different shrink
  thresholds.

.. _neighbors_transformer:

Nearest Neighbors Transformer
=============================

Many scikit-learn estimators rely on nearest neighbors: several classifiers
and regressors such as :class:`KNeighborsClassifier` and
:class:`KNeighborsRegressor`, but also some clustering methods such as
:class:`~sklearn.cluster.DBSCAN` and
:class:`~sklearn.cluster.SpectralClustering`, and some manifold embeddings
such as :class:`~sklearn.manifold.TSNE` and
:class:`~sklearn.manifold.Isomap`. All these estimators can compute
internally the nearest neighbors, but most of them also accept a precomputed
nearest neighbors :term:`sparse graph`, as given by
:func:`~sklearn.neighbors.kneighbors_graph` and
:func:`~sklearn.neighbors.radius_neighbors_graph`. With
`mode='connectivity'`, these functions return a binary adjacency sparse graph
as required, for instance, in :class:`~sklearn.cluster.SpectralClustering`.
Whereas with `mode='distance'`, they return a distance sparse graph as
required, for instance, in :class:`~sklearn.cluster.DBSCAN`. To include these
functions in a scikit-learn pipeline, one can also use the corresponding
classes :class:`KNeighborsTransformer` and :class:`RadiusNeighborsTransformer`.

The benefits of this sparse graph API are multiple.

First, the precomputed graph can be reused multiple times, for instance while
varying a parameter of the estimator. This can be done manually by the user,
or using the caching properties of the scikit-learn pipeline:

    >>> import tempfile
    >>> from sklearn.manifold import Isomap
    >>> from sklearn.neighbors import KNeighborsTransformer
    >>> from sklearn.pipeline import make_pipeline
    >>> from sklearn.datasets import make_regression
    >>> cache_path = tempfile.gettempdir()  # we use a temporary folder here
    >>> X, _ = make_regression(n_samples=50, n_features=25, random_state=0)
    >>> estimator = make_pipeline(
    ...     KNeighborsTransformer(mode='distance'),
    ...     Isomap(n_components=3, metric='precomputed'),
    ...     memory=cache_path)
    >>> X_embedded = estimator.fit_transform(X)
    >>> X_embedded.shape
    (50, 3)

Second, precomputing the graph can give finer control on the nearest
neighbors estimation, for instance enabling multiprocessing through the
parameter `n_jobs`, which might not be available in all estimators.

Finally, the precomputation can be performed by custom estimators to use
different implementations, such as approximate nearest neighbors methods, or
implementations with special data types.

The precomputed neighbors :term:`sparse graph` needs to be formatted as in
:func:`~sklearn.neighbors.radius_neighbors_graph` output:

* a CSR matrix (although COO, CSC or LIL will be accepted).
* only explicitly store nearest neighborhoods of each sample with respect to
  the training data. This should include those at 0 distance from a query
  point, including the matrix diagonal when computing the nearest
  neighborhoods between the training data and itself.
* each row's `data` should store the distances in increasing order (optional.
  Unsorted data will be stable-sorted, adding a computational overhead).
* all values in data should be non-negative.
* there should be no duplicate `indices` in any row
  (see https://github.com/scipy/scipy/issues/5807).
* if the algorithm being passed the precomputed matrix uses k nearest
  neighbors (as opposed to radius neighborhood), at least k neighbors must be
  stored in each row (or k+1, as explained in the following note).

.. note::

  When a specific number of neighbors is queried (using
  :class:`KNeighborsTransformer`), the definition of `n_neighbors` is
  ambiguous since it can either include each training point as its own
  neighbor, or exclude them. Neither choice is perfect, since including them
  leads to a different number of non-self neighbors during training and
  testing, while excluding them leads to a difference between
  `fit(X).transform(X)` and `fit_transform(X)`, which is against the
  scikit-learn API. In :class:`KNeighborsTransformer` we use the definition
  which includes each training point as its own neighbor in the count of
  `n_neighbors`. However, for compatibility reasons with other estimators
  which use the other definition, one extra neighbor will be computed when
  `mode == 'distance'`. To maximise compatibility with all estimators, a
  safe choice is to always include one extra neighbor in a custom nearest
  neighbors estimator, since unnecessary neighbors will be filtered by
  following estimators.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_neighbors_approximate_nearest_neighbors.py`:
  an example of pipelining :class:`KNeighborsTransformer` and
  :class:`~sklearn.manifold.TSNE`. Also proposes two custom nearest neighbors
  estimators based on external packages.
* :ref:`sphx_glr_auto_examples_neighbors_plot_caching_nearest_neighbors.py`:
  an example of pipelining :class:`KNeighborsTransformer` and
  :class:`KNeighborsClassifier` to enable caching of the neighbors graph
  during a hyper-parameter grid-search.
.. _nca:

Neighborhood Components Analysis
================================

.. sectionauthor:: William de Vazelhes

Neighborhood Components Analysis (NCA,
:class:`NeighborhoodComponentsAnalysis`) is a distance metric learning
algorithm which aims to improve the accuracy of nearest neighbors
classification compared to the standard Euclidean distance. The algorithm
directly maximizes a stochastic variant of the leave-one-out k-nearest
neighbors (KNN) score on the training set. It can also learn a
low-dimensional linear projection of data that can be used for data
visualization and fast classification.

.. |nca_illustration_1| image:: ../auto_examples/neighbors/images/sphx_glr_plot_nca_illustration_001.png
  :target: ../auto_examples/neighbors/plot_nca_illustration.html
  :scale: 50

.. |nca_illustration_2| image:: ../auto_examples/neighbors/images/sphx_glr_plot_nca_illustration_002.png
  :target: ../auto_examples/neighbors/plot_nca_illustration.html
  :scale: 50

.. centered:: |nca_illustration_1| |nca_illustration_2|

In the above illustrating figure, we consider some points from a randomly
generated dataset. We focus on the stochastic KNN classification of point
no. 3. The thickness of a link between sample 3 and another point is
proportional to their distance, and can be seen as the relative weight (or
probability) that a stochastic nearest neighbor prediction rule would assign
to this point. In the original space, sample 3 has many stochastic neighbors
from various classes, so the right class is not very likely. However, in the
projected space learned by NCA, the only stochastic neighbors with
non-negligible weight are from the same class as sample 3, guaranteeing that
the latter will be well classified. See the
:ref:`mathematical formulation <nca_mathematical_formulation>` for more
details.

Classification
--------------

Combined with a nearest neighbors classifier
(:class:`KNeighborsClassifier`), NCA is attractive for classification because
it can naturally handle multi-class problems without any increase in the
model size, and does not introduce additional parameters that require
fine-tuning by the user.

NCA classification has been shown to work well in practice for data sets of
varying size and difficulty. In contrast to related methods such as Linear
Discriminant Analysis, NCA does not make any assumptions about the class
distributions. The nearest neighbor classification can naturally produce
highly irregular decision boundaries.

To use this model for classification, one needs to combine a
:class:`NeighborhoodComponentsAnalysis` instance that learns the optimal
transformation with a :class:`KNeighborsClassifier` instance that performs
the classification in the projected space. Here is an example using the two
classes:

    >>> from sklearn.neighbors import (NeighborhoodComponentsAnalysis,
    ...                                KNeighborsClassifier)
    >>> from sklearn.datasets import load_iris
    >>> from sklearn.model_selection import train_test_split
    >>> from sklearn.pipeline import Pipeline
    >>> X, y = load_iris(return_X_y=True)
    >>> X_train, X_test, y_train, y_test = train_test_split(X, y,
    ...     stratify=y, test_size=0.7, random_state=42)
    >>> nca = NeighborhoodComponentsAnalysis(random_state=42)
    >>> knn = KNeighborsClassifier(n_neighbors=3)
    >>> nca_pipe = Pipeline([('nca', nca), ('knn', knn)])
    >>> nca_pipe.fit(X_train, y_train)
    Pipeline(...)
    >>> print(nca_pipe.score(X_test, y_test))
    0.96190476...

.. |nca_classification_1| image:: ../auto_examples/neighbors/images/sphx_glr_plot_nca_classification_001.png
  :target: ../auto_examples/neighbors/plot_nca_classification.html
  :scale: 50

.. |nca_classification_2| image:: ../auto_examples/neighbors/images/sphx_glr_plot_nca_classification_002.png
  :target: ../auto_examples/neighbors/plot_nca_classification.html
  :scale: 50

.. centered:: |nca_classification_1| |nca_classification_2|

The plot shows decision boundaries for Nearest Neighbor Classification and
Neighborhood Components Analysis classification on the iris dataset, when
training and scoring on only two features, for visualisation purposes.

.. _nca_dim_reduction:

Dimensionality reduction
------------------------

NCA can be used to perform supervised dimensionality reduction. The input
data are projected onto a linear subspace consisting of the directions which
minimize the NCA objective. The desired dimensionality can be set using the
parameter ``n_components``. For instance, the following figure shows a
comparison of dimensionality reduction with Principal Component Analysis
(:class:`~sklearn.decomposition.PCA`), Linear Discriminant Analysis
(:class:`~sklearn.discriminant_analysis.LinearDiscriminantAnalysis`) and
Neighborhood Component Analysis (:class:`NeighborhoodComponentsAnalysis`) on
the Digits dataset, a dataset with size :math:`n_{samples} = 1797` and
:math:`n_{features} = 64`.
The data set is split into a training and a test set of equal size, then
standardized. For evaluation the 3-nearest neighbor classification accuracy
is computed on the 2-dimensional projected points found by each method. Each
data sample belongs to one of 10 classes.

.. |nca_dim_reduction_1| image:: ../auto_examples/neighbors/images/sphx_glr_plot_nca_dim_reduction_001.png
  :target: ../auto_examples/neighbors/plot_nca_dim_reduction.html
  :width: 32%

.. |nca_dim_reduction_2| image:: ../auto_examples/neighbors/images/sphx_glr_plot_nca_dim_reduction_002.png
  :target: ../auto_examples/neighbors/plot_nca_dim_reduction.html
  :width: 32%

.. |nca_dim_reduction_3| image:: ../auto_examples/neighbors/images/sphx_glr_plot_nca_dim_reduction_003.png
  :target: ../auto_examples/neighbors/plot_nca_dim_reduction.html
  :width: 32%

.. centered:: |nca_dim_reduction_1| |nca_dim_reduction_2| |nca_dim_reduction_3|

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_neighbors_plot_nca_classification.py`
* :ref:`sphx_glr_auto_examples_neighbors_plot_nca_dim_reduction.py`
* :ref:`sphx_glr_auto_examples_manifold_plot_lle_digits.py`

.. _nca_mathematical_formulation:

Mathematical formulation
------------------------

The goal of NCA is to learn an optimal linear transformation matrix of size
``(n_components, n_features)``, which maximises the sum over all samples
:math:`i` of the probability :math:`p_i` that :math:`i` is correctly
classified, i.e.:

.. math::

  \underset{L}{\arg\max} \sum\limits_{i=0}^{N - 1} p_{i}

with :math:`N` = ``n_samples`` and :math:`p_i` the probability of sample
:math:`i` being correctly classified according to a stochastic nearest
neighbors rule in the learned embedded space:

.. math::

  p_{i}=\sum\limits_{j \in C_i}{p_{i j}}

where :math:`C_i` is the set of points in the same class as sample :math:`i`,
and :math:`p_{i j}` is the softmax over Euclidean distances in the embedded
space:

.. math::

  p_{i j} = \frac{\exp(-||L x_i - L x_j||^2)}{\sum\limits_{k \ne i}
  {\exp(-||L x_i - L x_k||^2)}} , \quad p_{i i} = 0

.. dropdown:: Mahalanobis distance

  NCA can be seen as learning a (squared) Mahalanobis distance metric:

  .. math::

    || L(x_i - x_j)||^2 = (x_i - x_j)^TM(x_i - x_j),

  where :math:`M = L^T L` is a symmetric positive semi-definite matrix of
  size ``(n_features, n_features)``.

Implementation
--------------

This implementation follows what is explained in the original paper [1]_.
For the optimisation method, it currently uses scipy's L-BFGS-B with a full
gradient computation at each iteration, to avoid tuning the learning rate
and to provide stable learning.

See the examples below and the docstring of
:meth:`NeighborhoodComponentsAnalysis.fit` for further information.

Complexity
----------

Training
^^^^^^^^

NCA stores a matrix of pairwise distances, taking ``n_samples ** 2`` memory.
Time complexity depends on the number of iterations done by the optimisation
algorithm. However, one can set the maximum number of iterations with the
argument ``max_iter``. For each iteration, time complexity is
``O(n_components x n_samples x min(n_samples, n_features))``.

Transform
^^^^^^^^^

Here the ``transform`` operation returns :math:`LX^T`, therefore its time
complexity equals ``n_components * n_features * n_samples_test``. There is
no added space complexity in the operation.

.. rubric:: References

.. [1] "Neighbourhood Components Analysis", J. Goldberger, S. Roweis,
  G. Hinton, R. Salakhutdinov, Advances in Neural Information Processing
  Systems, Vol. 17, May 2005, pp. 513-520.

* Wikipedia entry on Neighborhood Components Analysis
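The Mahalanobis view above can be checked numerically. The following sketch
(illustrative only; the dataset and ``random_state`` are arbitrary choices)
uses the fitted ``components_`` matrix as :math:`L` and verifies that squared
distances in the embedding equal the quadratic form with :math:`M = L^T L`:

```python
# Sketch: verify ||L(x_i - x_j)||^2 == (x_i - x_j)^T M (x_i - x_j)
# with M = L^T L, using the components_ learned by NCA on iris.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.neighbors import NeighborhoodComponentsAnalysis

X, y = load_iris(return_X_y=True)
nca = NeighborhoodComponentsAnalysis(n_components=2, random_state=0).fit(X, y)

L = nca.components_   # shape (n_components, n_features)
M = L.T @ L           # the learned (squared) Mahalanobis metric

diff = X[0] - X[1]
d_embed = np.sum((L @ diff) ** 2)  # squared distance in the embedding
d_mahal = diff @ M @ diff          # quadratic form in the original space
assert np.isclose(d_embed, d_mahal)
```

The equality holds for any :math:`L` by construction; fitting NCA simply
chooses the :math:`L` (and hence :math:`M`) that maximizes the stochastic
leave-one-out score.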
.. _preprocessing:

==================
Preprocessing data
==================

.. currentmodule:: sklearn.preprocessing

.. Source: https://github.com/scikit-learn/scikit-learn/blob/main/doc/modules/preprocessing.rst

The ``sklearn.preprocessing`` package provides several common utility
functions and transformer classes to change raw feature vectors into a
representation that is more suitable for the downstream estimators.

In general, many learning algorithms such as linear models benefit from
standardization of the data set (see
:ref:`sphx_glr_auto_examples_preprocessing_plot_scaling_importance.py`). If
some outliers are present in the set, robust scalers or other transformers
can be more appropriate. The behaviors of the different scalers,
transformers, and normalizers on a dataset containing marginal outliers are
highlighted in
:ref:`sphx_glr_auto_examples_preprocessing_plot_all_scaling.py`.

.. _preprocessing_scaler:

Standardization, or mean removal and variance scaling
=====================================================

**Standardization** of datasets is a **common requirement for many machine
learning estimators** implemented in scikit-learn; they might behave badly if
the individual features do not more or less look like standard normally
distributed data: Gaussian with **zero mean and unit variance**.

In practice we often ignore the shape of the distribution and just transform
the data to center it by removing the mean value of each feature, then scale
it by dividing non-constant features by their standard deviation.

For instance, many elements used in the objective function of a learning
algorithm (such as the RBF kernel of Support Vector Machines or the l1 and
l2 regularizers of linear models) may assume that all features are centered
around zero or have variance in the same order. If a feature has a variance
that is orders of magnitude larger than others, it might dominate the
objective function and make the estimator unable to learn from other
features correctly as expected.

The :mod:`~sklearn.preprocessing` module provides the
:class:`StandardScaler` utility class, which is a quick and easy way to
perform the following operation on an array-like dataset::

    >>> from sklearn import preprocessing
    >>> import numpy as np
    >>> X_train = np.array([[ 1., -1.,  2.],
    ...                     [ 2.,  0.,  0.],
    ...                     [ 0.,  1., -1.]])
    >>> scaler = preprocessing.StandardScaler().fit(X_train)
    >>> scaler
    StandardScaler()

    >>> scaler.mean_
    array([1., 0., 0.33])

    >>> scaler.scale_
    array([0.81, 0.81, 1.24])

    >>> X_scaled = scaler.transform(X_train)
    >>> X_scaled
    array([[ 0.   , -1.22 ,  1.33 ],
           [ 1.22 ,  0.   , -0.267],
           [-1.22 ,  1.22 , -1.06 ]])

..
    >>> import numpy as np
    >>> print_options = np.get_printoptions()
    >>> np.set_printoptions(suppress=True)

Scaled data has zero mean and unit variance::

    >>> X_scaled.mean(axis=0)
    array([0., 0., 0.])

    >>> X_scaled.std(axis=0)
    array([1., 1., 1.])

..
    >>> np.set_printoptions(**print_options)

This class implements the ``Transformer`` API to compute the mean and
standard deviation on a training set so as to be able to later re-apply the
same transformation on the testing set. This class is hence suitable for use
in the early steps of a :class:`~sklearn.pipeline.Pipeline`::

    >>> from sklearn.datasets import make_classification
    >>> from sklearn.linear_model import LogisticRegression
    >>> from sklearn.model_selection import train_test_split
    >>> from sklearn.pipeline import make_pipeline
    >>> from sklearn.preprocessing import StandardScaler
    >>> X, y = make_classification(random_state=42)
    >>> X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
    >>> pipe = make_pipeline(StandardScaler(), LogisticRegression())
    >>> pipe.fit(X_train, y_train)  # apply scaling on training data
    Pipeline(steps=[('standardscaler', StandardScaler()),
                    ('logisticregression', LogisticRegression())])
    >>> pipe.score(X_test, y_test)  # apply scaling on testing data, without leaking training data
    0.96

It is possible to disable either centering or scaling by either passing
``with_mean=False`` or ``with_std=False`` to the constructor of
:class:`StandardScaler`.

Scaling features to a range
---------------------------

An alternative standardization is scaling features to lie between a given
minimum and maximum value, often between zero and one, or so that the maximum
absolute value of each feature is scaled to unit size. This can be achieved
using :class:`MinMaxScaler` or :class:`MaxAbsScaler`, respectively.

The motivation to use this scaling includes robustness to very small
standard deviations of features and preserving zero entries in sparse data.

Here is an example to scale a toy data matrix to the ``[0, 1]`` range::

    >>> X_train = np.array([[ 1., -1.,  2.],
    ...                     [ 2.,  0.,  0.],
    ...                     [ 0.,  1., -1.]])
    ...
    >>> min_max_scaler = preprocessing.MinMaxScaler()
    >>> X_train_minmax = min_max_scaler.fit_transform(X_train)
    >>> X_train_minmax
    array([[0.5       , 0.        , 1.        ],
           [1.        , 0.5       , 0.33333333],
           [0.        , 1.        , 0.        ]])

The same instance of the transformer can then be applied to some new test
data unseen during the fit call: the same scaling and shifting operations
will be applied to be consistent with the transformation performed on the
train data::

    >>> X_test = np.array([[-3., -1.,  4.]])
    >>> X_test_minmax = min_max_scaler.transform(X_test)
    >>> X_test_minmax
    array([[-1.5       ,  0.        ,  1.66666667]])

It is possible to introspect the scaler attributes to find about the exact
nature of the transformation learned on the training data::

    >>> min_max_scaler.scale_
    array([0.5 , 0.5 , 0.33])

    >>> min_max_scaler.min_
    array([0.  , 0.5 , 0.33])

If :class:`MinMaxScaler` is given an explicit ``feature_range=(min, max)``
the full formula is::

    X_std = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

    X_scaled = X_std * (max - min) + min

:class:`MaxAbsScaler` works in a very similar fashion, but scales in a way
that the training data lies within the range ``[-1, 1]`` by dividing through
the largest maximum value in each feature. It is meant for data that is
already centered at zero or sparse data.

Here is how to use the toy data from the previous example with this scaler::

    >>> X_train = np.array([[ 1., -1.,  2.],
    ...                     [ 2.,  0.,  0.],
    ...                     [ 0.,  1., -1.]])
    ...
    >>> max_abs_scaler = preprocessing.MaxAbsScaler()
    >>> X_train_maxabs = max_abs_scaler.fit_transform(X_train)
    >>> X_train_maxabs
    array([[ 0.5, -1. ,  1. ],
           [ 1. ,  0. ,  0. ],
           [ 0. ,  1. , -0.5]])
    >>> X_test = np.array([[ -3., -1.,  4.]])
    >>> X_test_maxabs = max_abs_scaler.transform(X_test)
    >>> X_test_maxabs
    array([[-1.5, -1. ,  2. ]])
    >>> max_abs_scaler.scale_
    array([2., 1., 2.])

Scaling sparse data
-------------------

Centering sparse data would destroy the sparseness structure in the data,
and thus rarely is a sensible thing to do. However, it can make sense to
scale sparse inputs, especially if features are on different scales.

:class:`MaxAbsScaler` was specifically designed for scaling sparse data, and
is the recommended way to go about this. However, :class:`StandardScaler`
can accept ``scipy.sparse`` matrices as input, as long as
``with_mean=False`` is explicitly passed to the constructor. Otherwise a
``ValueError`` will be raised as silently centering would break the sparsity
and would often crash the execution by allocating excessive amounts of
memory unintentionally. :class:`RobustScaler` cannot be fitted to sparse
inputs, but you can use the ``transform`` method on sparse inputs.

Note that the scalers accept both Compressed Sparse Rows and Compressed
Sparse Columns format (see ``scipy.sparse.csr_matrix`` and
``scipy.sparse.csc_matrix``). Any other sparse input will be **converted to
the Compressed Sparse Rows representation**. To avoid unnecessary memory
copies, it is recommended to choose the CSR or CSC representation upstream.

Finally, if the centered data is expected to be small enough, explicitly
converting the input to an array using the ``toarray`` method of sparse
matrices is another option.

Scaling data with outliers
--------------------------

If your data contains many outliers, scaling using the mean and variance of
the data is likely to not work very well. In these cases, you can use
:class:`RobustScaler` as a drop-in replacement instead. It uses more robust
estimates for the center and range of your data.
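A brief sketch of why this matters (illustrative toy data, not from the
examples): a single extreme value inflates the mean and standard deviation
used by :class:`StandardScaler`, while the median and interquartile range
used by :class:`RobustScaler` barely move:

```python
# Sketch: one extreme outlier distorts mean/variance-based scaling,
# while median/IQR-based estimates stay close to the bulk of the data.
import numpy as np
from sklearn.preprocessing import RobustScaler, StandardScaler

X = np.array([[1.0], [2.0], [3.0], [4.0], [1000.0]])  # last row is an outlier

standard = StandardScaler().fit(X)
robust = RobustScaler().fit(X)

print("StandardScaler center:", standard.mean_)   # pulled toward the outlier
print("RobustScaler center:  ", robust.center_)   # the median, outlier-resistant
```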
.. dropdown:: References

  Further discussion on the importance of centering and scaling data is
  available in the FAQ "Should I normalize/standardize/rescale the data?"

.. dropdown:: Scaling vs Whitening

  It is sometimes not enough to center and scale the features independently,
  since a downstream model can further make some assumption on the linear
  independence of the features.

  To address this issue you can use :class:`~sklearn.decomposition.PCA` with
  ``whiten=True`` to further remove the linear correlation across features.

.. _kernel_centering:

Centering kernel matrices
-------------------------

If you have a kernel matrix of a kernel :math:`K` that computes a dot product
in a feature space (possibly implicitly) defined by a function
:math:`\phi(\cdot)`, a :class:`KernelCenterer` can transform the kernel matrix
so that it contains inner products in the feature space defined by
:math:`\phi` followed by the removal of the mean in that space. In other
words, :class:`KernelCenterer` computes the centered Gram matrix associated to
a positive semidefinite kernel :math:`K`.

.. dropdown:: Mathematical formulation

  We can have a look at the mathematical formulation now that we have the
  intuition. Let :math:`K` be a kernel matrix of shape `(n_samples, n_samples)`
  computed from :math:`X`, a data matrix of shape `(n_samples, n_features)`,
  during the `fit` step. :math:`K` is defined by

  .. math::
    K(X, X) = \phi(X) . \phi(X)^{T}

  :math:`\phi(X)` is a function mapping of :math:`X` to a Hilbert space. A
  centered kernel :math:`\tilde{K}` is defined as:

  .. math::
    \tilde{K}(X, X) = \tilde{\phi}(X) . \tilde{\phi}(X)^{T}

  where :math:`\tilde{\phi}(X)` results from centering :math:`\phi(X)` in the
  Hilbert space.

  Thus, one could compute :math:`\tilde{K}` by mapping :math:`X` using the
  function :math:`\phi(\cdot)` and center the data in this new space. However,
  kernels are often used because they allow some algebra calculations that
  avoid computing explicitly this mapping using :math:`\phi(\cdot)`. Indeed,
  one can implicitly center as shown in Appendix B in [Scholkopf1998]_:

  .. math::
    \tilde{K} = K - 1_{\text{n}_{samples}} K - K 1_{\text{n}_{samples}} + 1_{\text{n}_{samples}} K 1_{\text{n}_{samples}}

  :math:`1_{\text{n}_{samples}}` is a matrix of `(n_samples, n_samples)` where
  all entries are equal to :math:`\frac{1}{\text{n}_{samples}}`. In the
  `transform` step, the kernel becomes :math:`K_{test}(X, Y)` defined as:

  .. math::
    K_{test}(X, Y) = \phi(Y) . \phi(X)^{T}

  :math:`Y` is the test dataset of shape `(n_samples_test, n_features)` and
  thus :math:`K_{test}` is of shape `(n_samples_test, n_samples)`. In this
  case, centering :math:`K_{test}` is done as:

  .. math::
    \tilde{K}_{test}(X, Y) = K_{test} - 1'_{\text{n}_{samples}} K - K_{test} 1_{\text{n}_{samples}} + 1'_{\text{n}_{samples}} K 1_{\text{n}_{samples}}

  :math:`1'_{\text{n}_{samples}}` is a matrix of shape
  `(n_samples_test, n_samples)` where all entries are equal to
  :math:`\frac{1}{\text{n}_{samples}}`.

  .. rubric:: References

  .. [Scholkopf1998] B. Schölkopf, A. Smola, and K.R. Müller,
     "Nonlinear component analysis as a kernel eigenvalue problem."
     Neural computation 10.5 (1998): 1299-1319.

.. _preprocessing_transformer:

Non-linear transformation
=========================

Two types of transformations are available: quantile transforms and power
transforms. Both quantile and power transforms are based on monotonic
transformations of the features and thus preserve the rank of the values
along each feature.
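As a sanity check of the kernel centering formulas above, centering the Gram matrix of a linear kernel must agree with computing the kernel on explicitly centered features. A minimal sketch with random data:

```python
import numpy as np
from sklearn.metrics.pairwise import linear_kernel
from sklearn.preprocessing import KernelCenterer

rng = np.random.RandomState(0)
X = rng.normal(size=(5, 3))

# Explicit route: center phi(X) in feature space. For a linear kernel,
# phi is the identity, so this amounts to centering the columns of X.
X_centered = X - X.mean(axis=0)
K_tilde_explicit = X_centered @ X_centered.T

# Implicit route: center the Gram matrix itself, as in the formula above.
K = linear_kernel(X)
K_tilde = KernelCenterer().fit_transform(K)

print(np.allclose(K_tilde, K_tilde_explicit))
```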
Quantile transforms put all features into the same desired distribution based
on the formula :math:`G^{-1}(F(X))` where :math:`F` is the cumulative
distribution function of the feature and :math:`G^{-1}` the quantile function
of the desired output distribution :math:`G`. This formula uses the two
following facts: (i) if :math:`X` is a random variable with a continuous
cumulative distribution function :math:`F` then :math:`F(X)` is uniformly
distributed on :math:`[0,1]`; (ii) if :math:`U` is a random variable with
uniform distribution on :math:`[0,1]` then :math:`G^{-1}(U)` has distribution
:math:`G`.
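Both facts can be checked numerically. The sketch below uses synthetic exponential data, whose CDF is known in closed form (:math:`F(x) = 1 - e^{-x}`), and the standard normal as the target distribution :math:`G`:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.RandomState(0)
X = rng.exponential(size=100_000)

# fact (i): F(X) is uniform on [0, 1]; for Exp(1), F(x) = 1 - exp(-x)
U = 1 - np.exp(-X)

# fact (ii): G^{-1}(U) follows G; here G is the standard normal,
# so its quantile function is norm.ppf
Z = norm.ppf(U)

# U should look uniform (mean ~0.5); Z should look standard normal
print(U.mean(), Z.mean(), Z.std())
```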
By performing a rank transformation, a quantile transform smooths out unusual
distributions and is less influenced by outliers than scaling methods. It
does, however, distort correlations and distances within and across features.

Power transforms are a family of parametric transformations that aim to map
data from any distribution to as close to a Gaussian distribution as possible.

Mapping to a Uniform distribution
---------------------------------

:class:`QuantileTransformer` provides a non-parametric transformation to map
the data to a uniform distribution with values between 0 and 1::

  >>> from sklearn.datasets import load_iris
  >>> from sklearn.model_selection import train_test_split
  >>> X, y = load_iris(return_X_y=True)
  >>> X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
  >>> quantile_transformer = preprocessing.QuantileTransformer(random_state=0)
  >>> X_train_trans = quantile_transformer.fit_transform(X_train)
  >>> X_test_trans = quantile_transformer.transform(X_test)
  >>> np.percentile(X_train[:, 0], [0, 25, 50, 75, 100])  # doctest: +SKIP
  array([ 4.3,  5.1,  5.8,  6.5,  7.9])

This feature corresponds to the sepal length in cm. Once the quantile
transformation is applied, those landmarks approach closely the percentiles
previously defined::

  >>> np.percentile(X_train_trans[:, 0], [0, 25, 50, 75, 100])
  ... # doctest: +SKIP
  array([ 0.00, 0.24, 0.49, 0.73, 0.99])

This can be confirmed on an independent testing set with similar remarks::

  >>> np.percentile(X_test[:, 0], [0, 25, 50, 75, 100])
  ... # doctest: +SKIP
  array([ 4.4  , 5.125, 5.75 , 6.175, 7.3  ])
  >>> np.percentile(X_test_trans[:, 0], [0, 25, 50, 75, 100])
  ... # doctest: +SKIP
  array([ 0.01, 0.25, 0.46, 0.60, 0.94])

Mapping to a Gaussian distribution
----------------------------------

In many modeling scenarios, normality of the features in a dataset is
desirable. Power transforms are a family of parametric, monotonic
transformations that aim to map data from any distribution to as close to a
Gaussian distribution as possible in order to stabilize variance and minimize
skewness.

:class:`PowerTransformer` currently provides two such power transformations,
the Yeo-Johnson transform and the Box-Cox transform.

.. dropdown:: Yeo-Johnson transform

  .. math::
    x_i^{(\lambda)} =
    \begin{cases}
    [(x_i + 1)^\lambda - 1] / \lambda & \text{if } \lambda \neq 0, x_i \geq 0, \\[8pt]
    \ln{(x_i + 1)} & \text{if } \lambda = 0, x_i \geq 0, \\[8pt]
    -[(-x_i + 1)^{2 - \lambda} - 1] / (2 - \lambda) & \text{if } \lambda \neq 2, x_i < 0, \\[8pt]
    -\ln (-x_i + 1) & \text{if } \lambda = 2, x_i < 0
    \end{cases}

.. dropdown:: Box-Cox transform

  .. math::
    x_i^{(\lambda)} =
    \begin{cases}
    \dfrac{x_i^\lambda - 1}{\lambda} & \text{if } \lambda \neq 0, \\[8pt]
    \ln{(x_i)} & \text{if } \lambda = 0,
    \end{cases}

Box-Cox can only be applied to strictly positive data. In both methods, the
transformation is parameterized by :math:`\lambda`, which is determined
through maximum likelihood estimation.
Here is an example of using Box-Cox to map samples drawn from a lognormal
distribution to a normal distribution::

  >>> pt = preprocessing.PowerTransformer(method='box-cox', standardize=False)
  >>> X_lognormal = np.random.RandomState(616).lognormal(size=(3, 3))
  >>> X_lognormal
  array([[1.28 , 1.18 , 0.84 ],
         [0.94 , 1.6  , 0.388],
         [1.35 , 0.217, 1.09 ]])
  >>> pt.fit_transform(X_lognormal)
  array([[ 0.49 ,  0.179, -0.156],
         [-0.051,  0.589, -0.576],
         [ 0.69 , -0.849,  0.101]])

While the above example sets the `standardize` option to `False`,
:class:`PowerTransformer` will apply zero-mean, unit-variance normalization to
the transformed output by default.
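Box-Cox requires strictly positive inputs, while Yeo-Johnson (the default ``method``) also accepts zero and negative values. A small sketch on skewed synthetic data:

```python
import numpy as np
from sklearn.preprocessing import PowerTransformer

rng = np.random.RandomState(0)
# Skewed synthetic data containing negative values: Box-Cox would raise
# a ValueError here, but Yeo-Johnson handles the whole real line.
X = rng.normal(size=(1000, 1)) ** 3

pt = PowerTransformer(method='yeo-johnson')  # standardize=True by default
X_trans = pt.fit_transform(X)

# With standardize=True the output has zero mean and unit variance.
print(pt.lambdas_, X_trans.mean(), X_trans.std())
```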
Below are examples of Box-Cox and Yeo-Johnson applied to various probability
distributions. Note that when applied to certain distributions, the power
transforms achieve very Gaussian-like results, but with others, they are
ineffective. This highlights the importance of visualizing the data before and
after transformation.

.. figure:: ../auto_examples/preprocessing/images/sphx_glr_plot_map_data_to_normal_001.png
   :target: ../auto_examples/preprocessing/plot_map_data_to_normal.html
   :align: center
   :scale: 100

It is also possible to map data to a normal distribution using
:class:`QuantileTransformer` by setting ``output_distribution='normal'``.
Using the earlier example with the iris dataset::

  >>> quantile_transformer = preprocessing.QuantileTransformer(
  ...     output_distribution='normal', random_state=0)
  >>> X_trans = quantile_transformer.fit_transform(X)
  >>> quantile_transformer.quantiles_
  array([[4.3, 2. , 1. , 0.1],
         [4.4, 2.2, 1.1, 0.1],
         [4.4, 2.2, 1.2, 0.1],
         ...,
         [7.7, 4.1, 6.7, 2.5],
         [7.7, 4.2, 6.7, 2.5],
         [7.9, 4.4, 6.9, 2.5]])

Thus the median of the input becomes the mean of the output, centered at 0.
The normal output is clipped so that the input's minimum and maximum
(corresponding to the 1e-7 and 1 - 1e-7 quantiles respectively) do not become
infinite under the transformation.

.. _preprocessing_normalization:

Normalization
=============

**Normalization** is the process of **scaling individual samples to have unit
norm**. This process can be useful if you plan to use a quadratic form such as
the dot-product or any other kernel to quantify the similarity of any pair of
samples.

This assumption is the base of the Vector Space Model often used in text
classification and clustering contexts.

The function :func:`normalize` provides a quick and easy way to perform this
operation on a single array-like dataset, either using the ``l1``, ``l2``, or
``max`` norms::

  >>> X = [[ 1., -1.,  2.],
  ...      [ 2.,  0.,  0.],
  ...      [ 0.,  1., -1.]]
  >>> X_normalized = preprocessing.normalize(X, norm='l2')
  >>> X_normalized
  array([[ 0.408, -0.408,  0.812],
         [ 1.   ,  0.   ,  0.   ],
         [ 0.   ,  0.707, -0.707]])

The ``preprocessing`` module further provides a utility class
:class:`Normalizer` that implements the same operation using the
``Transformer`` API (even though the ``fit`` method is useless in this case:
the class is stateless as this operation treats samples independently).

This class is hence suitable for use in the early steps of a
:class:`~sklearn.pipeline.Pipeline`::

  >>> normalizer = preprocessing.Normalizer().fit(X)  # fit does nothing
  >>> normalizer
  Normalizer()

The normalizer instance can then be used on sample vectors as any transformer::

  >>> normalizer.transform(X)
  array([[ 0.408, -0.408,  0.812],
         [ 1.   ,  0.   ,  0.   ],
         [ 0.   ,  0.707, -0.707]])
  >>> normalizer.transform([[-1.,  1., 0.]])
  array([[-0.707,  0.707,  0.   ]])

Note: L2 normalization is also known as spatial sign preprocessing.

.. dropdown:: Sparse input

  :func:`normalize` and :class:`Normalizer` accept **both dense array-like and
  sparse matrices from scipy.sparse as input**.

  For sparse input the data is **converted to the Compressed Sparse Rows
  representation** (see ``scipy.sparse.csr_matrix``) before being fed to
  efficient Cython routines. To avoid unnecessary memory copies, it is
  recommended to choose the CSR representation upstream.

.. _preprocessing_categorical_features:

Encoding categorical features
=============================

Often features are not given as continuous values but categorical. For example
a person could have features ``["male", "female"]``,
``["from Europe", "from US", "from Asia"]``,
``["uses Firefox", "uses Chrome", "uses Safari", "uses Internet Explorer"]``.
Such features can be efficiently coded as integers, for instance
``["male", "from US", "uses Internet Explorer"]`` could be expressed as
``[0, 1, 3]`` while ``["female", "from Asia", "uses Chrome"]`` would be
``[1, 2, 1]``.
To convert categorical features to such integer codes, we can use the
:class:`OrdinalEncoder`. This estimator transforms each categorical feature to
one new feature of integers (0 to n_categories - 1)::

  >>> enc = preprocessing.OrdinalEncoder()
  >>> X = [['male', 'from US', 'uses Safari'],
  ...      ['female', 'from Europe', 'uses Firefox']]
  >>> enc.fit(X)
  OrdinalEncoder()
  >>> enc.transform([['female', 'from US', 'uses Safari']])
  array([[0., 1., 1.]])
Such integer representation can, however, not be used directly with all
scikit-learn estimators, as these expect continuous input, and would interpret
the categories as being ordered, which is often not desired (i.e. the set of
browsers was ordered arbitrarily).

By default, :class:`OrdinalEncoder` will also pass through missing values that
are indicated by `np.nan`.

  >>> enc = preprocessing.OrdinalEncoder()
  >>> X = [['male'], ['female'], [np.nan], ['female']]
  >>> enc.fit_transform(X)
  array([[ 1.],
         [ 0.],
         [nan],
         [ 0.]])

:class:`OrdinalEncoder` provides a parameter `encoded_missing_value` to encode
the missing values without the need to create a pipeline and using
:class:`~sklearn.impute.SimpleImputer`.

  >>> enc = preprocessing.OrdinalEncoder(encoded_missing_value=-1)
  >>> X = [['male'], ['female'], [np.nan], ['female']]
  >>> enc.fit_transform(X)
  array([[ 1.],
         [ 0.],
         [-1.],
         [ 0.]])

The above processing is equivalent to the following pipeline::

  >>> from sklearn.pipeline import Pipeline
  >>> from sklearn.impute import SimpleImputer
  >>> enc = Pipeline(steps=[
  ...     ("encoder", preprocessing.OrdinalEncoder()),
  ...     ("imputer", SimpleImputer(strategy="constant", fill_value=-1)),
  ... ])
  >>> enc.fit_transform(X)
  array([[ 1.],
         [ 0.],
         [-1.],
         [ 0.]])

Another possibility to convert categorical features to features that can be
used with scikit-learn estimators is to use a one-of-K, also known as one-hot
or dummy encoding. This type of encoding can be obtained with the
:class:`OneHotEncoder`, which transforms each categorical feature with
``n_categories`` possible values into ``n_categories`` binary features, with
one of them 1, and all others 0.
Continuing the example above::

  >>> enc = preprocessing.OneHotEncoder()
  >>> X = [['male', 'from US', 'uses Safari'],
  ...      ['female', 'from Europe', 'uses Firefox']]
  >>> enc.fit(X)
  OneHotEncoder()
  >>> enc.transform([['female', 'from US', 'uses Safari'],
  ...                ['male', 'from Europe', 'uses Safari']]).toarray()
  array([[1., 0., 0., 1., 0., 1.],
         [0., 1., 1., 0., 0., 1.]])

By default, the values each feature can take is inferred automatically from
the dataset and can be found in the ``categories_`` attribute::

  >>> enc.categories_
  [array(['female', 'male'], dtype=object), array(['from Europe', 'from US'], dtype=object), array(['uses Firefox', 'uses Safari'], dtype=object)]

It is possible to specify this explicitly using the parameter ``categories``.
There are two genders, four possible continents and four web browsers in our
dataset::

  >>> genders = ['female', 'male']
  >>> locations = ['from Africa', 'from Asia', 'from Europe', 'from US']
  >>> browsers = ['uses Chrome', 'uses Firefox', 'uses IE', 'uses Safari']
  >>> enc = preprocessing.OneHotEncoder(categories=[genders, locations, browsers])
  >>> # Note that there are missing categorical values for the 2nd and 3rd
  >>> # feature
  >>> X = [['male', 'from US', 'uses Safari'],
  ...      ['female', 'from Europe', 'uses Firefox']]
  >>> enc.fit(X)
  OneHotEncoder(categories=[['female', 'male'],
                            ['from Africa', 'from Asia', 'from Europe',
                             'from US'],
                            ['uses Chrome', 'uses Firefox', 'uses IE',
                             'uses Safari']])
  >>> enc.transform([['female', 'from Asia', 'uses Chrome']]).toarray()
  array([[1., 0., 0., 1., 0., 0., 1., 0., 0., 0.]])

If there is a possibility that the training data might have missing
categorical features, it can often be better to specify
`handle_unknown='infrequent_if_exist'` instead of setting the `categories`
manually as above.
When `handle_unknown='infrequent_if_exist'` is specified and unknown
categories are encountered during transform, no error will be raised but the
resulting one-hot encoded columns for this feature will be all zeros or
considered as an infrequent category if enabled.
(`handle_unknown='infrequent_if_exist'` is only supported for one-hot
encoding)::

  >>> enc = preprocessing.OneHotEncoder(handle_unknown='infrequent_if_exist')
  >>> X = [['male', 'from US', 'uses Safari'],
  ...      ['female', 'from Europe', 'uses Firefox']]
  >>> enc.fit(X)
  OneHotEncoder(handle_unknown='infrequent_if_exist')
  >>> enc.transform([['female', 'from Asia', 'uses Chrome']]).toarray()
  array([[1., 0., 0., 0., 0., 0.]])

It is also possible to encode each column into ``n_categories - 1`` columns
instead of ``n_categories`` columns by using the ``drop`` parameter.
This parameter allows the user to specify a category for each feature to be
dropped. This is useful to avoid co-linearity in the input matrix in some
classifiers. Such functionality is useful, for example, when using
non-regularized regression
(:class:`~sklearn.linear_model.LinearRegression`), since co-linearity would
cause the covariance matrix to be non-invertible::

  >>> X = [['male', 'from US', 'uses Safari'],
  ...      ['female', 'from Europe', 'uses Firefox']]
  >>> drop_enc = preprocessing.OneHotEncoder(drop='first').fit(X)
  >>> drop_enc.categories_
  [array(['female', 'male'], dtype=object), array(['from Europe', 'from US'], dtype=object), array(['uses Firefox', 'uses Safari'], dtype=object)]
  >>> drop_enc.transform(X).toarray()
  array([[1., 1., 1.],
         [0., 0., 0.]])

One might want to drop one of the two columns only for features with 2
categories. In this case, you can set the parameter `drop='if_binary'`.

  >>> X = [['male', 'US', 'Safari'],
  ...      ['female', 'Europe', 'Firefox'],
  ...      ['female', 'Asia', 'Chrome']]
  >>> drop_enc = preprocessing.OneHotEncoder(drop='if_binary').fit(X)
  >>> drop_enc.categories_
  [array(['female', 'male'], dtype=object), array(['Asia', 'Europe', 'US'], dtype=object), array(['Chrome', 'Firefox', 'Safari'], dtype=object)]
  >>> drop_enc.transform(X).toarray()
  array([[1., 0., 0., 1., 0., 0., 1.],
         [0., 0., 1., 0., 0., 1., 0.],
         [0., 1., 0., 0., 1., 0., 0.]])

In the transformed `X`, the first column is the encoding of the feature with
categories "male"/"female", while the remaining 6 columns are the encoding of
the 2 features with respectively 3 categories each.
When `handle_unknown='ignore'` and `drop` is not None, unknown categories will
be encoded as all zeros::

  >>> drop_enc = preprocessing.OneHotEncoder(drop='first',
  ...                                        handle_unknown='ignore').fit(X)
  >>> X_test = [['unknown', 'America', 'IE']]
  >>> drop_enc.transform(X_test).toarray()
  array([[0., 0., 0., 0., 0.]])

All the categories in `X_test` are unknown during transform and will be mapped
to all zeros. This means that unknown categories will have the same mapping as
the dropped category. :meth:`OneHotEncoder.inverse_transform` will map all
zeros to the dropped category if a category is dropped and `None` if a
category is not dropped::

  >>> drop_enc = preprocessing.OneHotEncoder(drop='if_binary', sparse_output=False,
  ...                                        handle_unknown='ignore').fit(X)
  >>> X_test = [['unknown', 'America', 'IE']]
  >>> X_trans = drop_enc.transform(X_test)
  >>> X_trans
  array([[0., 0., 0., 0., 0., 0., 0.]])
  >>> drop_enc.inverse_transform(X_trans)
  array([['female', None, None]], dtype=object)

.. dropdown:: Support of categorical features with missing values

  :class:`OneHotEncoder` supports categorical features with missing values by
  considering the missing values as an additional category::

    >>> X = [['male', 'Safari'],
    ...      ['female', None],
    ...      [np.nan, 'Firefox']]
    >>> enc = preprocessing.OneHotEncoder(handle_unknown='error').fit(X)
    >>> enc.categories_
    [array(['female', 'male', nan], dtype=object), array(['Firefox', 'Safari', None], dtype=object)]
    >>> enc.transform(X).toarray()
    array([[0., 1., 0., 0., 1., 0.],
           [1., 0., 0., 0., 0., 1.],
           [0., 0., 1., 1., 0., 0.]])

  If a feature contains both `np.nan` and `None`, they will be considered
  separate categories::

    >>> X = [['Safari'], [None], [np.nan], ['Firefox']]
    >>> enc = preprocessing.OneHotEncoder(handle_unknown='error').fit(X)
    >>> enc.categories_
    [array(['Firefox', 'Safari', None, nan], dtype=object)]
    >>> enc.transform(X).toarray()
    array([[0., 1., 0., 0.],
           [0., 0., 1., 0.],
           [0., 0., 0., 1.],
           [1., 0., 0., 0.]])

See :ref:`dict_feature_extraction` for categorical features that are
represented as a dict, not as scalars.

.. _encoder_infrequent_categories:

Infrequent categories
---------------------

:class:`OneHotEncoder` and :class:`OrdinalEncoder` support aggregating
infrequent categories into a single output for each feature. The parameters to
enable the gathering of infrequent categories are `min_frequency` and
`max_categories`.

1. `min_frequency` is either an integer greater or equal to 1, or a float in
   the interval `(0.0, 1.0)`. If `min_frequency` is an integer, categories
   with a cardinality smaller than `min_frequency` will be considered
   infrequent. If `min_frequency` is a float, categories with a cardinality
   smaller than this fraction of the total number of samples will be
   considered infrequent. The default value is 1, which means every category
   is encoded separately.
2. `max_categories` is either `None` or any integer greater than 1. This
   parameter sets an upper limit to the number of output features for each
   input feature. `max_categories` includes the feature that combines
   infrequent categories.

In the following example with :class:`OrdinalEncoder`, the categories `'dog'`
and `'snake'` are considered infrequent::

  >>> X = np.array([['dog'] * 5 + ['cat'] * 20 + ['rabbit'] * 10 +
  ...               ['snake'] * 3], dtype=object).T
  >>> enc = preprocessing.OrdinalEncoder(min_frequency=6).fit(X)
  >>> enc.infrequent_categories_
  [array(['dog', 'snake'], dtype=object)]
  >>> enc.transform(np.array([['dog'], ['cat'], ['rabbit'], ['snake']]))
  array([[2.],
         [0.],
         [1.],
         [2.]])

:class:`OrdinalEncoder`'s `max_categories` do **not** take into account
missing or unknown categories. Setting `unknown_value` or
`encoded_missing_value` to an integer will increase the number of unique
integer codes by one each. This can result in up to `max_categories + 2`
integer codes.

In the following example, "a" and "d" are considered infrequent and grouped
together into a single category, "b" and "c" are their own categories, unknown
values are encoded as 3 and missing values are encoded as 4.

  >>> X_train = np.array(
  ...     [["a"] * 5 + ["b"] * 20 + ["c"] * 10 + ["d"] * 3 + [np.nan]],
  ...     dtype=object).T
  >>> enc = preprocessing.OrdinalEncoder(
  ...     handle_unknown="use_encoded_value", unknown_value=3,
  ...     max_categories=3, encoded_missing_value=4)
  >>> _ = enc.fit(X_train)
  >>> X_test = np.array([["a"], ["b"], ["c"], ["d"], ["e"], [np.nan]], dtype=object)
  >>> enc.transform(X_test)
  array([[2.],
         [0.],
         [1.],
         [2.],
         [3.],
         [4.]])

Similarly, :class:`OneHotEncoder` can be configured to group together
infrequent categories::

  >>> enc = preprocessing.OneHotEncoder(min_frequency=6, sparse_output=False).fit(X)
  >>> enc.infrequent_categories_
  [array(['dog', 'snake'], dtype=object)]
  >>> enc.transform(np.array([['dog'], ['cat'], ['rabbit'], ['snake']]))
  array([[0., 0., 1.],
         [1., 0., 0.],
         [0., 1., 0.],
         [0., 0., 1.]])

By setting handle_unknown to `'infrequent_if_exist'`, unknown categories will
be considered infrequent::

  >>> enc = preprocessing.OneHotEncoder(
  ...     handle_unknown='infrequent_if_exist', sparse_output=False, min_frequency=6)
  >>> enc = enc.fit(X)
  >>> enc.transform(np.array([['dragon']]))
  array([[0., 0., 1.]])

:meth:`OneHotEncoder.get_feature_names_out` uses 'infrequent' as the
infrequent feature name::

  >>> enc.get_feature_names_out()
  array(['x0_cat', 'x0_rabbit', 'x0_infrequent_sklearn'], dtype=object)

When `'handle_unknown'` is set to `'infrequent_if_exist'` and an unknown
category is encountered in transform:

1. If infrequent category support was not configured or there was no
   infrequent category during training, the resulting one-hot encoded columns
   for this feature will be all zeros. In the inverse transform, an unknown
   category will be denoted as `None`.

2. If there is an infrequent category during training, the unknown category
   will be considered infrequent. In the inverse transform,
   'infrequent_sklearn' will be used to represent the infrequent category.

Infrequent categories can also be configured using `max_categories`. In the
following example, we set `max_categories=2` to limit the number of features
in the output.
This will result in all but the `'cat'` category being considered infrequent,
leading to two features, one for `'cat'` and one for the infrequent
categories, which are all the others::

  >>> enc = preprocessing.OneHotEncoder(max_categories=2, sparse_output=False)
  >>> enc = enc.fit(X)
  >>> enc.transform([['dog'], ['cat'], ['rabbit'], ['snake']])
  array([[0., 1.],
         [1., 0.],
         [0., 1.],
         [0., 1.]])

If both `max_categories` and `min_frequency` are non-default values, then
categories are selected based on `min_frequency` first and `max_categories`
categories are kept. In the following example, `min_frequency=4` considers
only `snake` to be infrequent, but `max_categories=3` forces `dog` to also be
infrequent::

  >>> enc = preprocessing.OneHotEncoder(min_frequency=4, max_categories=3, sparse_output=False)
  >>> enc = enc.fit(X)
  >>> enc.transform([['dog'], ['cat'], ['rabbit'], ['snake']])
  array([[0., 0., 1.],
         [1., 0., 0.],
         [0., 1., 0.],
         [0., 0., 1.]])
>>> enc = preprocessing.OneHotEncoder(min_frequency=4, max_categories=3, sparse_output=False)
>>> enc = enc.fit(X)
>>> enc.transform([['dog'], ['cat'], ['rabbit'], ['snake']])
array([[0., 0., 1.],
       [1., 0., 0.],
       [0., 1., 0.],
       [0., 0., 1.]])

If there are infrequent categories with the same cardinality at the cutoff of
`max_categories`, then the first `max_categories` are taken based on lexicon
ordering. In the following example, "b", "c", and "d" have the same cardinality
and with `max_categories=2`, "b" and "c" are infrequent because they have a
higher lexicon order.

>>> X = np.asarray([["a"] * 20 + ["b"] * 10 + ["c"] * 10 + ["d"] * 10], dtype=object).T
>>> enc = preprocessing.OneHotEncoder(max_categories=3).fit(X)
>>> enc.infrequent_categories_
[array(['b', 'c'], dtype=object)]

.. _target_encoder:

Target Encoder
--------------

.. currentmodule:: sklearn.preprocessing

The :class:`TargetEncoder` uses the target mean conditioned on the categorical
feature for encoding unordered categories, i.e. nominal categories [PAR]_
[MIC]_. This encoding scheme is useful with categorical features with high
cardinality, where one-hot encoding would inflate the feature space, making it
more expensive for a downstream model to process. A classical example of high
cardinality categories are location based, such as zip code or region.

.. dropdown:: Binary classification targets

  For the binary classification target, the target encoding is given by:

  .. math::
      S_i = \lambda_i\frac{n_{iY}}{n_i} + (1 - \lambda_i)\frac{n_Y}{n}

  where :math:`S_i` is the encoding for category :math:`i`, :math:`n_{iY}` is
  the number of observations with :math:`Y=1` and category :math:`i`,
  :math:`n_i` is the number of observations with category :math:`i`,
  :math:`n_Y` is the number of observations with :math:`Y=1`, :math:`n` is the
  number of observations, and :math:`\lambda_i` is a shrinkage factor for
  category :math:`i`. The shrinkage factor is given by:

  .. math::
      \lambda_i = \frac{n_i}{m + n_i}

  where :math:`m` is a smoothing factor, which is controlled with the `smooth`
  parameter in :class:`TargetEncoder`. Large smoothing factors will put more
  weight on the global mean. When `smooth="auto"`, the smoothing factor is
  computed as an empirical Bayes estimate: :math:`m = \sigma_i^2 / \tau^2`,
  where :math:`\sigma_i^2` is the variance of `y` with category :math:`i` and
  :math:`\tau^2` is the global variance of `y`.

.. dropdown:: Multiclass classification targets

  For multiclass classification targets, the formulation is similar to binary
  classification:

  .. math::
      S_{ij} = \lambda_i\frac{n_{iY_j}}{n_i} + (1 - \lambda_i)\frac{n_{Y_j}}{n}

  where :math:`S_{ij}` is the encoding for category :math:`i` and class
  :math:`j`, :math:`n_{iY_j}` is the number of observations with :math:`Y=j`
  and category :math:`i`, :math:`n_i` is the number of observations with
  category :math:`i`, :math:`n_{Y_j}` is the number of observations with
  :math:`Y=j`, :math:`n` is the number of observations, and :math:`\lambda_i`
  is a shrinkage factor for category :math:`i`.

.. dropdown:: Continuous targets

  For continuous targets, the formulation is similar to binary classification:

  .. math::
      S_i = \lambda_i\frac{\sum_{k\in L_i}Y_k}{n_i} + (1 - \lambda_i)\frac{\sum_{k=1}^{n}Y_k}{n}

  where :math:`L_i` is the set of observations with category :math:`i` and
  :math:`n_i` is the number of observations with category :math:`i`.

.. note::
  In :class:`TargetEncoder`, `fit(X, y).transform(X)` does not equal
  `fit_transform(X, y)`. :meth:`~TargetEncoder.fit_transform` internally
  relies on a :term:`cross fitting` scheme to prevent target information from
  leaking into the train-time representation, especially for non-informative
  high-cardinality categorical variables (features with many unique categories
  where each category appears only a few times), and to help prevent the
  downstream model from overfitting spurious correlations.
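
As a quick numeric check of the shrinkage formula above, one can compute a
single category's encoding by hand in plain Python (all counts below are
made-up toy values, not taken from the scikit-learn docs):

```python
# Hand-computed binary-target encoding for one category, following the
# shrinkage formula above. All counts are illustrative toy values.
n_i = 3      # observations in category i
n_iY = 2     # of those, observations with Y = 1
n = 100      # total number of observations
n_Y = 30     # total observations with Y = 1
m = 5.0      # smoothing factor (the `smooth` parameter)

lam = n_i / (m + n_i)                             # shrinkage factor lambda_i
S_i = lam * (n_iY / n_i) + (1 - lam) * (n_Y / n)  # encoding for category i
print(S_i)  # shrinks the noisy category mean 2/3 towards the global mean 0.3
```

With only 3 observations in the category, the encoding lands well between the
raw category mean and the global mean, which is exactly the regularizing
effect the smoothing term is meant to provide.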
In :meth:`~TargetEncoder.fit_transform`, the training data is split into *k*
folds (determined by the `cv` parameter) and each fold is encoded using the
encodings learnt from the *other k-1* folds. For this reason, training data
should always be encoded with `fit_transform(X_train, y_train)` rather than
`fit(X_train, y_train).transform(X_train)`.
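
The cross fitting idea can be sketched in a few lines of plain NumPy: each
half of the data is encoded with category means estimated on the *other* half.
This is a simplified 2-fold version without shrinkage, for illustration only:

```python
import numpy as np

# Simplified 2-fold cross fitting for a mean target encoding
# (no shrinkage; illustrative only, not TargetEncoder's implementation).
cats = np.array(["a", "a", "b", "b", "a", "a", "b", "b"])
y    = np.array([1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0])

fold1 = np.arange(0, 4)   # first half of the rows
fold2 = np.arange(4, 8)   # second half of the rows

def means_on(idx):
    """Per-category target means computed on the given rows only."""
    return {c: y[idx][cats[idx] == c].mean() for c in np.unique(cats[idx])}

encoded = np.empty_like(y)
# Encode each fold with statistics from the *other* fold, so a row's own
# target value never leaks into its encoding.
encoded[fold1] = [means_on(fold2)[c] for c in cats[fold1]]
encoded[fold2] = [means_on(fold1)[c] for c in cats[fold2]]
```

Note how the same category receives different encoded values in the two
folds, because each fold only sees the other fold's target statistics.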

This diagram shows the :term:`cross fitting` scheme in
:meth:`~TargetEncoder.fit_transform` with the default `cv=5`:

.. image:: ../images/target_encoder_cross_validation.svg
   :width: 600
   :align: center

The :meth:`~TargetEncoder.fit` method does **not** use any :term:`cross
fitting` scheme and learns one encoding on the entire training set. It is
discouraged to use this method because it can introduce data leakage, as
mentioned above. Use :meth:`~TargetEncoder.fit_transform` instead.

During :meth:`~TargetEncoder.fit_transform`, the encoder learns category
encodings from the full training data and stores them in the
:attr:`~TargetEncoder.encodings_` attribute. The intermediate encodings
learned for each fold during the :term:`cross fitting` process are temporary
and not saved. The stored encodings can then be used to transform test data
with `encoder.transform(X_test)`.

.. note::
  :class:`TargetEncoder` considers missing values, such as `np.nan` or
  `None`, as another category and encodes them like any other category.
  Categories that are not seen during `fit` are encoded with the target mean,
  i.e. `target_mean_`.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_preprocessing_plot_target_encoder.py`
* :ref:`sphx_glr_auto_examples_preprocessing_plot_target_encoder_cross_val.py`

.. rubric:: References

.. [MIC] :doi:`Micci-Barreca, Daniele. "A preprocessing scheme for
   high-cardinality categorical attributes in classification and prediction
   problems" SIGKDD Explor. Newsl. 3, 1 (July 2001), 27-32.
   <10.1145/507533.507538>`

.. [PAR] :doi:`Pargent, F., Pfisterer, F., Thomas, J. et al. "Regularized
   target encoding outperforms traditional methods in supervised machine
   learning with high cardinality features" Comput Stat 37, 2671-2692 (2022)
   <10.1007/s00180-022-01207-6>`

.. _preprocessing_discretization:

Discretization
==============

Discretization (otherwise known as quantization or binning) provides a way to
partition continuous features into discrete values. Certain datasets with
continuous features may benefit from discretization, because discretization
can transform the dataset of continuous attributes to one with only nominal
attributes.

One-hot encoded discretized features can make a model more expressive, while
maintaining interpretability. For instance, pre-processing with a discretizer
can introduce nonlinearity to linear models. For more advanced possibilities,
in particular smooth ones, see :ref:`generating_polynomial_features` further
below.

K-bins discretization
---------------------

:class:`KBinsDiscretizer` discretizes features into ``k`` bins::

  >>> X = np.array([[ -3., 5., 15 ],
  ...               [  0., 6., 14 ],
  ...               [  6., 3., 11 ]])
  >>> est = preprocessing.KBinsDiscretizer(n_bins=[3, 2, 2], encode='ordinal').fit(X)

By default the output is one-hot encoded into a sparse matrix (See
:ref:`preprocessing_categorical_features`) and this can be configured with the
``encode`` parameter. For each feature, the bin edges are computed during
``fit`` and, together with the number of bins, they will define the intervals.
Therefore, for the current example, these intervals are defined as:

- feature 1: :math:`{[-\infty, -1), [-1, 2), [2, \infty)}`
- feature 2: :math:`{[-\infty, 5), [5, \infty)}`
- feature 3: :math:`{[-\infty, 14), [14, \infty)}`

Based on these bin intervals, ``X`` is transformed as follows::

  >>> est.transform(X)                      # doctest: +SKIP
  array([[ 0., 1., 1.],
         [ 1., 1., 1.],
         [ 2., 0., 0.]])

The resulting dataset contains ordinal attributes which can be further used in
a :class:`~sklearn.pipeline.Pipeline`.

Discretization is similar to constructing histograms for continuous data.

However, histograms focus on counting features which fall into particular
bins, whereas discretization focuses on assigning feature values to these
bins.

:class:`KBinsDiscretizer` implements different binning strategies, which can
be selected with the ``strategy`` parameter. The 'uniform' strategy uses
constant-width bins. The 'quantile' strategy uses the quantile values to have
equally populated bins in each feature. The 'kmeans' strategy defines bins
based on a k-means clustering procedure performed on each feature
independently.

Be aware that one can specify custom bins by passing a callable defining the
discretization strategy to :class:`~sklearn.preprocessing.FunctionTransformer`.
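
To make the difference between the 'uniform' and 'quantile' strategies
concrete, here is a small sketch that computes both kinds of bin edges for a
skewed feature. Plain NumPy is used instead of :class:`KBinsDiscretizer` so
the example is self-contained; the data is made up for illustration:

```python
import numpy as np

# A skewed 1-D feature: most of the mass near 0, plus a long right tail.
x = np.array([0.1, 0.2, 0.2, 0.3, 0.4, 0.5, 5.0, 10.0])
n_bins = 2

# 'uniform' strategy: constant-width bins spanning [min, max].
uniform_edges = np.linspace(x.min(), x.max(), n_bins + 1)

# 'quantile' strategy: edges at quantiles, so each bin is equally populated.
quantile_edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))

# The interior edges are what np.digitize needs to assign bin indices.
uniform_codes = np.digitize(x, uniform_edges[1:-1])
quantile_codes = np.digitize(x, quantile_edges[1:-1])
```

With constant-width bins, almost all samples land in the first bin because of
the tail, while the quantile edges split the samples into two equally sized
groups.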

For instance, we can use the Pandas function :func:`pandas.cut`::

  >>> import pandas as pd
  >>> import numpy as np
  >>> from sklearn import preprocessing
  >>>
  >>> bins = [0, 1, 13, 20, 60, np.inf]
  >>> labels = ['infant', 'kid', 'teen', 'adult', 'senior citizen']
  >>> transformer = preprocessing.FunctionTransformer(
  ...     pd.cut, kw_args={'bins': bins, 'labels': labels, 'retbins': False}
  ... )
  >>> X = np.array([0.2, 2, 15, 25, 97])
  >>> transformer.fit_transform(X)
  ['infant', 'kid', 'teen', 'adult', 'senior citizen']
  Categories (5, object): ['infant' < 'kid' < 'teen' < 'adult' < 'senior citizen']

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_preprocessing_plot_discretization.py`
* :ref:`sphx_glr_auto_examples_preprocessing_plot_discretization_classification.py`
* :ref:`sphx_glr_auto_examples_preprocessing_plot_discretization_strategies.py`

.. _preprocessing_binarization:

Feature binarization
--------------------

**Feature binarization** is the process of **thresholding numerical features
to get boolean values**. This can be useful for downstream probabilistic
estimators that make the assumption that the input data is distributed
according to a multivariate Bernoulli distribution. For instance, this is the
case for the :class:`~sklearn.neural_network.BernoulliRBM`.

It is also common among the text processing community to use binary feature
values (probably to simplify the probabilistic reasoning) even if normalized
counts (a.k.a. term frequencies) or TF-IDF valued features often perform
slightly better in practice.

As for the :class:`Normalizer`, the utility class :class:`Binarizer` is meant
to be used in the early stages of a :class:`~sklearn.pipeline.Pipeline`. The
``fit`` method does nothing as each sample is treated independently of
others::

  >>> X = [[ 1., -1.,  2.],
  ...      [ 2.,  0.,  0.],
  ...      [ 0.,  1., -1.]]

  >>> binarizer = preprocessing.Binarizer().fit(X)  # fit does nothing
  >>> binarizer
  Binarizer()

  >>> binarizer.transform(X)
  array([[1., 0., 1.],
         [1., 0., 0.],
         [0., 1., 0.]])

It is possible to adjust the threshold of the binarizer::

  >>> binarizer = preprocessing.Binarizer(threshold=1.1)
  >>> binarizer.transform(X)
  array([[0., 0., 1.],
         [1., 0., 0.],
         [0., 0., 0.]])

As for the :class:`Normalizer` class, the preprocessing module provides a
companion function :func:`binarize` to be used when the transformer API is
not necessary.

Note that the :class:`Binarizer` is similar to the :class:`KBinsDiscretizer`
when ``k = 2``, and when the bin edge is at the value ``threshold``.

.. topic:: Sparse input

  :func:`binarize` and :class:`Binarizer` accept **both dense array-like and
  sparse matrices from scipy.sparse as input**.

  For sparse input the data is **converted to the Compressed Sparse Rows
  representation** (see ``scipy.sparse.csr_matrix``). To avoid unnecessary
  memory copies, it is recommended to choose the CSR representation upstream.

.. _imputation:

Imputation of missing values
============================

Tools for imputing missing values are discussed at :ref:`impute`.

.. _generating_polynomial_features:

Generating polynomial features
==============================

Often it's useful to add complexity to a model by considering nonlinear
features of the input data. We show two possibilities that are both based on
polynomials: the first one uses pure polynomials, the second one uses splines,
i.e. piecewise polynomials.

.. _polynomial_features:

Polynomial features
-------------------

A simple and common method to use is polynomial features, which can get
features' high-order and interaction terms.

It is implemented in :class:`PolynomialFeatures`::

  >>> import numpy as np
  >>> from sklearn.preprocessing import PolynomialFeatures
  >>> X = np.arange(6).reshape(3, 2)
  >>> X
  array([[0, 1],
         [2, 3],
         [4, 5]])
  >>> poly = PolynomialFeatures(2)
  >>> poly.fit_transform(X)
  array([[ 1.,  0.,  1.,  0.,  0.,  1.],
         [ 1.,  2.,  3.,  4.,  6.,  9.],
         [ 1.,  4.,  5., 16., 20., 25.]])

The features of X have been transformed from :math:`(X_1, X_2)` to
:math:`(1, X_1, X_2, X_1^2, X_1X_2, X_2^2)`.
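
For a full polynomial expansion with a bias column, the number of output
features equals the number of monomials of degree at most :math:`d` in
:math:`n` variables, i.e. :math:`\binom{n + d}{d}`. A quick sanity check of
this count in plain Python (independent of scikit-learn):

```python
from math import comb

def n_poly_features(n_features, degree):
    """Number of monomials of degree <= `degree` in `n_features` variables,
    including the constant (bias) term: comb(n_features + degree, degree)."""
    return comb(n_features + degree, degree)

print(n_poly_features(2, 2))  # matches the 6 columns of the example above
```

This grows quickly with both the number of features and the degree, which is
why high-degree expansions are rarely practical on wide inputs.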

In some cases, only interaction terms among features are required, and they
can be obtained with the setting ``interaction_only=True``::

  >>> X = np.arange(9).reshape(3, 3)
  >>> X
  array([[0, 1, 2],
         [3, 4, 5],
         [6, 7, 8]])
  >>> poly = PolynomialFeatures(degree=3, interaction_only=True)
  >>> poly.fit_transform(X)
  array([[  1.,   0.,   1.,   2.,   0.,   0.,   2.,   0.],
         [  1.,   3.,   4.,   5.,  12.,  15.,  20.,  60.],
         [  1.,   6.,   7.,   8.,  42.,  48.,  56., 336.]])

The features of X have been transformed from :math:`(X_1, X_2, X_3)` to
:math:`(1, X_1, X_2, X_3, X_1X_2, X_1X_3, X_2X_3, X_1X_2X_3)`.

Note that polynomial features are used implicitly in kernel methods (e.g.,
:class:`~sklearn.svm.SVC`, :class:`~sklearn.decomposition.KernelPCA`) when
using polynomial :ref:`svm_kernels`.

See :ref:`sphx_glr_auto_examples_linear_model_plot_polynomial_interpolation.py`
for Ridge regression using created polynomial features.

.. _spline_transformer:

Spline transformer
------------------

Another way to add nonlinear terms instead of pure polynomials of features is
to generate spline basis functions for each feature with the
:class:`SplineTransformer`. Splines are piecewise polynomials, parametrized by
their polynomial degree and the positions of the knots. The
:class:`SplineTransformer` implements a B-spline basis, cf. the references
below.

.. note::
  The :class:`SplineTransformer` treats each feature separately, i.e. it
  won't give you interaction terms.

Some of the advantages of splines over polynomials are:

- B-splines are very flexible and robust if you keep a fixed low degree,
  usually 3, and parsimoniously adapt the number of knots. Polynomials would
  need a higher degree, which leads to the next point.
- B-splines do not have oscillatory behaviour at the boundaries as polynomials
  have (the higher the degree, the worse). This is known as Runge's
  phenomenon.
- B-splines provide good options for extrapolation beyond the boundaries,
  i.e. beyond the range of fitted values. Have a look at the option
  ``extrapolation``.
- B-splines generate a feature matrix with a banded structure.
  For a single feature, every row contains only ``degree + 1`` non-zero
  elements, which occur consecutively and are even positive. This results in
  a matrix with good numerical properties, e.g. a low condition number, in
  sharp contrast to a matrix of polynomials, which goes under the name
  Vandermonde matrix. A low condition number is important for stable
  algorithms of linear models.

The following code snippet shows splines in action::

  >>> import numpy as np
  >>> from sklearn.preprocessing import SplineTransformer
  >>> X = np.arange(5).reshape(5, 1)
  >>> X
  array([[0],
         [1],
         [2],
         [3],
         [4]])
  >>> spline = SplineTransformer(degree=2, n_knots=3)
  >>> spline.fit_transform(X)
  array([[0.5  , 0.5  , 0.   , 0.   ],
         [0.125, 0.75 , 0.125, 0.   ],
         [0.   , 0.5  , 0.5  , 0.   ],
         [0.   , 0.125, 0.75 , 0.125],
         [0.   , 0.   , 0.5  , 0.5  ]])

As ``X`` is sorted, one can easily see the banded matrix output. Only the
three middle diagonals are non-zero for ``degree=2``. The higher the degree,
the more the splines overlap.

Interestingly, a :class:`SplineTransformer` of ``degree=0`` is the same as
:class:`~sklearn.preprocessing.KBinsDiscretizer` with
``encode='onehot-dense'`` and ``n_bins = n_knots - 1`` if
``knots = strategy``.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_linear_model_plot_polynomial_interpolation.py`
* :ref:`sphx_glr_auto_examples_applications_plot_cyclical_feature_engineering.py`

.. dropdown:: References

  * Eilers, P., & Marx, B. (1996). :doi:`Flexible Smoothing with B-splines
    and Penalties <10.1214/ss/1038425655>`. Statist. Sci. 11 (1996), no. 2,
    89-121.
  * Perperoglou, A., Sauerbrei, W., Abrahamowicz, M. et al. :doi:`A review of
    spline function procedures in R <10.1186/s12874-019-0666-3>`. BMC Med Res
    Methodol 19, 46 (2019).

.. _function_transformer:

Custom transformers
===================

Often, you will want to convert an existing Python function into a
transformer to assist in data cleaning or processing.

You can implement a transformer from an arbitrary function with
:class:`FunctionTransformer`. For example, to build a transformer that
applies a log transformation in a pipeline, do::

  >>> import numpy as np
  >>> from sklearn.preprocessing import FunctionTransformer
  >>> transformer = FunctionTransformer(np.log1p, validate=True)
  >>> X = np.array([[0, 1], [2, 3]])
  >>> # Since FunctionTransformer is no-op during fit, we can call transform directly
  >>> transformer.transform(X)
  array([[0.        , 0.69314718],
         [1.09861229, 1.38629436]])

You can ensure that ``func`` and ``inverse_func`` are the inverse of each
other by setting ``check_inverse=True`` and calling ``fit`` before
``transform``. Please note that a warning is raised and can be turned into an
error with a ``filterwarnings``::

  >>> import warnings
  >>> warnings.filterwarnings("error", message=".*check_inverse*.",
  ...                         category=UserWarning, append=False)

For a full code example that demonstrates using a
:class:`FunctionTransformer` to extract features from text data see
:ref:`sphx_glr_auto_examples_compose_plot_column_transformer.py` and
:ref:`sphx_glr_auto_examples_applications_plot_cyclical_feature_engineering.py`.
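
A minimal round-trip sketch of the ``check_inverse`` idea: pair ``np.log1p``
with its exact inverse ``np.expm1`` and verify that transforming and then
inverse-transforming recovers the input. The snippet only illustrates the
pairing; it is not taken from the scikit-learn docs:

```python
import numpy as np
from sklearn.preprocessing import FunctionTransformer

# log1p and expm1 are exact inverses, so the check during fit passes silently.
transformer = FunctionTransformer(
    func=np.log1p, inverse_func=np.expm1, check_inverse=True, validate=True
)
X = np.array([[0.0, 1.0], [2.0, 3.0]])
Xt = transformer.fit_transform(X)               # fit runs the inverse check
X_round_trip = transformer.inverse_transform(Xt)
```

Had ``inverse_func`` not been the true inverse of ``func``, the fit above
would have raised the ``check_inverse`` warning instead of passing silently.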

.. _mixture:

.. _gmm:

=======================
Gaussian mixture models
=======================

.. currentmodule:: sklearn.mixture

``sklearn.mixture`` is a package which enables one to learn Gaussian Mixture
Models (diagonal, spherical, tied and full covariance matrices supported),
sample them, and estimate them from data. Facilities to help determine the
appropriate number of components are also provided.

.. figure:: ../auto_examples/mixture/images/sphx_glr_plot_gmm_pdf_001.png
   :target: ../auto_examples/mixture/plot_gmm_pdf.html
   :align: center
   :scale: 50%

   **Two-component Gaussian mixture model:** *data points, and
   equi-probability surfaces of the model.*

A Gaussian mixture model is a probabilistic model that assumes all the data
points are generated from a mixture of a finite number of Gaussian
distributions with unknown parameters. One can think of mixture models as
generalizing k-means clustering to incorporate information about the
covariance structure of the data as well as the centers of the latent
Gaussians.

Scikit-learn implements different classes to estimate Gaussian mixture
models, that correspond to different estimation strategies, detailed below.

Gaussian Mixture
================

The :class:`GaussianMixture` object implements the
:ref:`expectation-maximization <expectation_maximization>` (EM) algorithm for
fitting mixture-of-Gaussian models. It can also draw confidence ellipsoids
for multivariate models, and compute the Bayesian Information Criterion to
assess the number of clusters in the data. A :meth:`GaussianMixture.fit`
method is provided that learns a Gaussian Mixture Model from training data.
Given test data, it can assign to each sample the Gaussian it most probably
belongs to using the :meth:`GaussianMixture.predict` method. Alternatively,
the probability of each sample belonging to the various Gaussians may be
retrieved using the :meth:`GaussianMixture.predict_proba` method.
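
A minimal usage sketch of the fit/predict/predict_proba workflow described
above, on a synthetic two-blob dataset (the data and parameter values are
illustrative, not from the docs):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Two well-separated synthetic Gaussian blobs.
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(50, 2), rng.randn(50, 2) + 6.0])

gm = GaussianMixture(n_components=2, random_state=0).fit(X)
hard = gm.predict(X)            # most probable component for each sample
soft = gm.predict_proba(X[:3])  # per-component membership probabilities
```

Each row of ``soft`` sums to one, and with blobs this far apart the hard
assignments recover the two groups exactly (up to label permutation).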

The :class:`GaussianMixture` comes with different options to constrain the
covariance of the different classes estimated: spherical, diagonal, tied or
full covariance.

.. figure:: ../auto_examples/mixture/images/sphx_glr_plot_gmm_covariances_001.png
   :target: ../auto_examples/mixture/plot_gmm_covariances.html
   :align: center
   :scale: 75%

.. rubric:: Examples

* See :ref:`sphx_glr_auto_examples_mixture_plot_gmm_covariances.py` for an
  example of using the Gaussian mixture as clustering on the iris dataset.
* See :ref:`sphx_glr_auto_examples_mixture_plot_gmm_pdf.py` for an example on
  plotting the density estimation.

.. dropdown:: Pros and cons of class GaussianMixture

  .. rubric:: Pros

  :Speed: It is the fastest algorithm for learning mixture models.

  :Agnostic: As this algorithm maximizes only the likelihood, it will not
    bias the means towards zero, or bias the cluster sizes to have specific
    structures that might or might not apply.

  .. rubric:: Cons

  :Singularities: When one has insufficiently many points per mixture,
    estimating the covariance matrices becomes difficult, and the algorithm
    is known to diverge and find solutions with infinite likelihood unless
    one regularizes the covariances artificially.

  :Number of components: This algorithm will always use all the components it
    has access to, needing held-out data or information theoretical criteria
    to decide how many components to use in the absence of external cues.

.. dropdown:: Selecting the number of components in a classical Gaussian Mixture model

  The BIC criterion can be used to select the number of components in a
  Gaussian Mixture in an efficient way. In theory, it recovers the true
  number of components only in the asymptotic regime (i.e. if much data is
  available and assuming that the data was actually generated i.i.d. from a
  mixture of Gaussian distributions).
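
The BIC-based selection just described can be sketched as a simple loop: fit
one model per candidate component count and keep the count with the lowest
BIC (synthetic data and parameter values are illustrative only):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(0)
# Data truly generated from 2 well-separated Gaussian components.
X = np.vstack([rng.randn(150, 2), rng.randn(150, 2) + 7.0])

# Fit one model per candidate component count; lower BIC is better.
bic = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
       for k in range(1, 6)}
best_k = min(bic, key=bic.get)
```

On such well-separated data the minimum-BIC model matches the true number of
generating components.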

Note that using a :ref:`Variational Bayesian Gaussian mixture <bgmm>` avoids
the specification of the number of components for a Gaussian mixture model.

.. figure:: ../auto_examples/mixture/images/sphx_glr_plot_gmm_selection_002.png
   :target: ../auto_examples/mixture/plot_gmm_selection.html
   :align: center
   :scale: 50%

.. rubric:: Examples

* See :ref:`sphx_glr_auto_examples_mixture_plot_gmm_selection.py` for an
  example of model selection performed with classical Gaussian mixture.

.. _expectation_maximization:

.. dropdown:: Estimation algorithm: expectation-maximization

  The main difficulty in learning Gaussian mixture models from unlabeled data
  is that one usually doesn't know which points came from which latent
  component (if one has access to this information it gets very easy to fit a
  separate Gaussian distribution to each set of points).
  Expectation-maximization is a well-founded statistical algorithm to get
  around this problem by an iterative process. First one assumes random
  components (randomly centered on data points, learned from k-means, or even
  just normally distributed around the origin) and computes for each point a
  probability of being generated by each component of the model. Then, one
  tweaks the parameters to maximize the likelihood of the data given those
  assignments. Repeating this process is guaranteed to always converge to a
  local optimum.

.. dropdown:: Choice of the Initialization method

  There is a choice of four initialization methods (as well as inputting user
  defined initial means) to generate the initial centers for the model
  components:

  k-means (default)
    This applies a traditional k-means clustering algorithm. This can be
    computationally expensive compared to other initialization methods.

  k-means++
    This uses the initialization method of k-means clustering: k-means++.
    This will pick the first center at random from the data. Subsequent
    centers will be chosen from a weighted distribution of the data favouring
    points further away from existing centers. k-means++ is the default
    initialization for k-means so will be quicker than running a full k-means
    but can still take a significant amount of time for large data sets with
    many components.

  random_from_data
    This will pick random data points from the input data as the initial
    centers. This is a very fast method of initialization but can produce
    non-convergent results if the chosen points are too close to each other.

  random
    Centers are chosen as a small perturbation away from the mean of all data.
    This method is simple but can lead to the model taking longer to
    converge.

  .. figure:: ../auto_examples/mixture/images/sphx_glr_plot_gmm_init_001.png
     :target: ../auto_examples/mixture/plot_gmm_init.html
     :align: center
     :scale: 50%

.. rubric:: Examples

* See :ref:`sphx_glr_auto_examples_mixture_plot_gmm_init.py` for an example
  of using different initializations in Gaussian Mixture.

.. _bgmm:

Variational Bayesian Gaussian Mixture
=====================================

The :class:`BayesianGaussianMixture` object implements a variant of the
Gaussian mixture model with variational inference algorithms. The API is
similar to the one defined by :class:`GaussianMixture`.

.. _variational_inference:

**Estimation algorithm: variational inference**

Variational inference is an extension of expectation-maximization that
maximizes a lower bound on model evidence (including priors) instead of data
likelihood. The principle behind variational methods is the same as
expectation-maximization (that is, both are iterative algorithms that
alternate between finding the probabilities for each point to be generated by
each mixture and fitting the mixture to these assigned points), but
variational methods add regularization by integrating information from prior
distributions. This avoids the singularities often found in
expectation-maximization solutions but introduces some subtle biases to the
model. Inference is often notably slower, but not usually as much so as to
render usage impractical.

Due to its Bayesian nature, the variational algorithm needs more
hyperparameters than expectation-maximization, the most important of these
being the concentration parameter ``weight_concentration_prior``. Specifying
a low value for the concentration prior will make the model put most of the
weight on a few components and set the remaining components' weights very
close to zero.
High values of the concentration prior will allow a larger number of
components to be active in the mixture.

The parameters implementation of the :class:`BayesianGaussianMixture` class
proposes two types of prior for the weights distribution: a finite mixture
model with Dirichlet distribution and an infinite mixture model with the
Dirichlet Process. In practice the Dirichlet Process inference algorithm is
approximated and uses a truncated distribution with a fixed maximum number of
components (called the Stick-breaking representation).

The number of components actually used almost always depends on the data.

The next figure compares the results obtained for the different types of the
weight concentration prior (parameter ``weight_concentration_prior_type``)
for different values of ``weight_concentration_prior``. Here, we can see the
value of the ``weight_concentration_prior`` parameter has a strong impact on
the effective number of active components obtained. We can also notice that
large values for the concentration weight prior lead to more uniform weights
when the type of prior is 'dirichlet_distribution', while this is not
necessarily the case for the 'dirichlet_process' type (used by default).

.. |plot_bgmm| image:: ../auto_examples/mixture/images/sphx_glr_plot_concentration_prior_001.png
   :target: ../auto_examples/mixture/plot_concentration_prior.html
   :scale: 48%

.. |plot_dpgmm| image:: ../auto_examples/mixture/images/sphx_glr_plot_concentration_prior_002.png
   :target: ../auto_examples/mixture/plot_concentration_prior.html
   :scale: 48%

.. centered:: |plot_bgmm| |plot_dpgmm|

The examples below compare Gaussian mixture models with a fixed number of
components, to the variational Gaussian mixture models with a Dirichlet
process prior. Here, a classical Gaussian mixture is fitted with 5 components
on a dataset composed of 2 clusters.
We can see that the variational Gaussian mixture with a Dirichlet process prior is able to limit itself to only 2 components whereas the Gaussian mixture fits the data with a fixed number of components that has to be set a priori by the user. In this case the user has selected ``n\_components=5`` which does not match the true generative distribution of this toy dataset. Note that with very little observations, the variational Gaussian mixture models with a Dirichlet process prior can take a conservative stand, and fit only one component. .. figure:: ../auto\_examples/mixture/images/sphx\_glr\_plot\_gmm\_001.png :target: ../auto\_examples/mixture/plot\_gmm.html :align: center :scale: 70% On the following figure we are fitting a dataset not well-depicted by a Gaussian mixture. Adjusting the ``weight\_concentration\_prior``, parameter of the :class:`BayesianGaussianMixture` controls the number of components used to fit this data. We also present on the last two plots a random sampling generated from the two resulting mixtures. .. figure:: ../auto\_examples/mixture/images/sphx\_glr\_plot\_gmm\_sin\_001.png :target: ../auto\_examples/mixture/plot\_gmm\_sin.html :align: center :scale: 65% .. rubric:: Examples \* See :ref:`sphx\_glr\_auto\_examples\_mixture\_plot\_gmm.py` for an example on plotting the confidence ellipsoids for both :class:`GaussianMixture` and :class:`BayesianGaussianMixture`. \* :ref:`sphx\_glr\_auto\_examples\_mixture\_plot\_gmm\_sin.py` shows using :class:`GaussianMixture` and :class:`BayesianGaussianMixture` to fit a sine wave. \* See :ref:`sphx\_glr\_auto\_examples\_mixture\_plot\_concentration\_prior.py` for an example plotting the confidence ellipsoids for the :class:`BayesianGaussianMixture` with different ``weight\_concentration\_prior\_type`` for different values of the parameter ``weight\_concentration\_prior``. .. dropdown:: Pros and cons of variational inference with BayesianGaussianMixture .. 
rubric:: Pros :Automatic selection: When ``weight\_concentration\_prior`` is small enough and ``n\_components`` is larger than what is found necessary by the model, the variational Bayesian mixture model has a natural tendency to set some mixture weight values close to zero. This makes it possible to let the model choose a suitable number of effective components automatically. Only an upper bound of this number needs to be provided. Note however that the "ideal" number of active components is very application-specific and is typically ill-defined in a data exploration setting. :Less sensitivity to the number of parameters: Unlike finite models, which will almost always use all components as much as they can, and hence will produce wildly different solutions for different numbers of components, the variational inference with a Dirichlet process prior (``weight\_concentration\_prior\_type='dirichlet\_process'``) won't change much with changes to the parameters, leading to more stability and less tuning. :Regularization: Due to the incorporation of prior information, variational solutions have less pathological special cases | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/mixture.rst | main | scikit-learn | [
… (384-dimensional embedding vector, truncated) ] | 0.061131 |
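As a hedged sketch (not taken from the scikit-learn docs themselves), the effect of the two weight-concentration prior types described above can be inspected programmatically by fitting a :class:`BayesianGaussianMixture` with an intentionally large ``n_components`` on a 2-cluster toy dataset and counting the components that keep non-negligible weight. The ``1e-2`` threshold for calling a component "active" is an arbitrary illustrative choice:

```python
# Sketch: compare the two weight-concentration priors of BayesianGaussianMixture
# on a toy dataset with 2 true clusters, using n_components=5 as an upper bound.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.mixture import BayesianGaussianMixture

X, _ = make_blobs(n_samples=500, centers=2, random_state=0)

for prior_type in ("dirichlet_process", "dirichlet_distribution"):
    bgmm = BayesianGaussianMixture(
        n_components=5,                   # upper bound, not the final count
        weight_concentration_prior_type=prior_type,
        weight_concentration_prior=0.01,  # small value favors fewer components
        random_state=0,
    ).fit(X)
    # Components whose mixture weight stays above a small threshold are "active".
    active = int(np.sum(bgmm.weights_ > 1e-2))
    print(prior_type, "active components:", active)
```

With a small ``weight_concentration_prior``, the model typically drives the weights of superfluous components toward zero, so the number of active components falls below the ``n_components`` upper bound.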
hence will produce wildly different solutions for different numbers of components, the variational inference with a Dirichlet process prior (``weight\_concentration\_prior\_type='dirichlet\_process'``) won't change much with changes to the parameters, leading to more stability and less tuning. :Regularization: Due to the incorporation of prior information, variational solutions have less pathological special cases than expectation-maximization solutions. .. rubric:: Cons :Speed: The extra parametrization necessary for variational inference makes inference slower, although not by much. :Hyperparameters: This algorithm needs an extra hyperparameter that might need experimental tuning via cross-validation. :Bias: There are many implicit biases in the inference algorithms (and also in the Dirichlet process if used), and whenever there is a mismatch between these biases and the data it might be possible to fit better models using a finite mixture. .. \_dirichlet\_process: The Dirichlet Process --------------------- Here we describe variational inference algorithms on Dirichlet process mixtures. The Dirichlet process is a prior probability distribution on \*clusterings with an infinite, unbounded number of partitions\*. Variational techniques let us incorporate this prior structure on Gaussian mixture models at almost no penalty in inference time, compared with a finite Gaussian mixture model. An important question is how the Dirichlet process can use an infinite, unbounded number of clusters and still be consistent. While a full explanation doesn't fit this manual, one can think of its `stick breaking process `\_ analogy to help understand it. The stick breaking process is a generative story for the Dirichlet process. We start with a unit-length stick and in each step we break off a portion of the remaining stick. Each time, we associate the length of the piece of the stick with the proportion of points that falls into a group of the mixture.
At the end, to represent the infinite mixture, we associate the last remaining piece of the stick with the proportion of points that don't fall into all the other groups. The length of each piece is a random variable with probability proportional to the concentration parameter. Smaller values of the concentration parameter will divide the unit-length stick into larger pieces (defining a more concentrated distribution). Larger concentration values will create smaller pieces of the stick (increasing the number of components with non-zero weights). Variational inference techniques for the Dirichlet process still work with a finite approximation to this infinite mixture model, but instead of having to specify a priori how many components one wants to use, one just specifies the concentration parameter and an upper bound on the number of mixture components (this upper bound, assuming it is higher than the "true" number of components, affects only algorithmic complexity, not the actual number of components used). | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/mixture.rst | main | scikit-learn | [
… (384-dimensional embedding vector, truncated) ] | 0.036438 |
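The stick-breaking story above can be sketched in a few lines of NumPy. This is an illustrative simulation of the generative process, not scikit-learn's inference code: each weight is a Beta(1, concentration)-distributed fraction of the stick left over by the previous breaks, truncated at a fixed number of components.

```python
# Sketch of the (truncated) stick-breaking construction for a Dirichlet process.
import numpy as np

def stick_breaking_weights(concentration, n_components, rng):
    """Draw truncated stick-breaking weights: w_k = beta_k * prod_{j<k}(1 - beta_j)."""
    betas = rng.beta(1.0, concentration, size=n_components)
    # Length of stick remaining before each break: 1, (1-b_1), (1-b_1)(1-b_2), ...
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas)[:-1]))
    return betas * remaining

rng = np.random.default_rng(0)
# Small concentration -> a few large pieces; large -> many small pieces.
for alpha in (0.1, 10.0):
    w = stick_breaking_weights(alpha, n_components=20, rng=rng)
    print(f"alpha={alpha}: largest weight {w.max():.2f}")
```

Because the construction is truncated at ``n_components`` pieces, the weights sum to slightly less than one; the unassigned remainder stands in for the infinitely many components that were cut off.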
.. \_partial\_dependence: =============================================================== Partial Dependence and Individual Conditional Expectation plots =============================================================== .. currentmodule:: sklearn.inspection Partial dependence plots (PDP) and individual conditional expectation (ICE) plots can be used to visualize and analyze the interaction between the target response [1]\_ and a set of input features of interest. Both PDPs [H2009]\_ and ICEs [G2015]\_ assume that the input features of interest are independent of the complement features, and this assumption is often violated in practice. Thus, in the case of correlated features, we will create absurd data points to compute the PDP/ICE [M2019]\_. Partial dependence plots ======================== Partial dependence plots (PDP) show the dependence between the target response and a set of input features of interest, marginalizing over the values of all other input features (the 'complement' features). Intuitively, we can interpret the partial dependence as the expected target response as a function of the input features of interest. Due to the limits of human perception, the size of the set of input features of interest must be small (usually, one or two); thus the input features of interest are usually chosen among the most important features. The figure below shows two one-way and one two-way partial dependence plots for the bike sharing dataset, with a :class:`~sklearn.ensemble.HistGradientBoostingRegressor`: .. figure:: ../auto\_examples/inspection/images/sphx\_glr\_plot\_partial\_dependence\_006.png :target: ../auto\_examples/inspection/plot\_partial\_dependence.html :align: center :scale: 70 One-way PDPs tell us about the interaction between the target response and an input feature of interest (e.g. linear, non-linear).
The left plot in the above figure shows the effect of the temperature on the number of bike rentals; we can clearly see that a higher temperature is related to a higher number of bike rentals. Similarly, we could analyze the effect of the humidity on the number of bike rentals (middle plot). These interpretations are marginal, considering one feature at a time. PDPs with two input features of interest show the interactions among the two features. For example, the two-variable PDP in the above figure shows the dependence of the number of bike rentals on joint values of temperature and humidity. We can clearly see an interaction between the two features: with a temperature higher than 20 degrees Celsius, mainly the humidity has a strong impact on the number of bike rentals. For lower temperatures, both the temperature and the humidity have an impact on the number of bike rentals. The :mod:`sklearn.inspection` module provides a convenience function :func:`~PartialDependenceDisplay.from\_estimator` to create one-way and two-way partial dependence plots. In the example below we show how to create a grid of partial dependence plots: two one-way PDPs for the features ``0`` and ``1`` and a two-way PDP between the two features:: >>> from sklearn.datasets import make\_hastie\_10\_2 >>> from sklearn.ensemble import GradientBoostingClassifier >>> from sklearn.inspection import PartialDependenceDisplay >>> X, y = make\_hastie\_10\_2(random\_state=0) >>> clf = GradientBoostingClassifier(n\_estimators=100, learning\_rate=1.0, ... max\_depth=1, random\_state=0).fit(X, y) >>> features = [0, 1, (0, 1)] >>> PartialDependenceDisplay.from\_estimator(clf, X, features) <...> You can access the newly created figure and Axes objects using ``plt.gcf()`` and ``plt.gca()``. To make a partial dependence plot with categorical features, you need to specify which features are categorical using the parameter `categorical\_features`.
This parameter takes a list of indices, a list of names of the categorical features, or a boolean mask. The graphical representation of partial dependence for categorical features is a bar plot or a 2D heatmap. .. dropdown:: PDPs for multi-class classification For multi-class classification, you need to set the class label for which the PDPs should be created via the ``target`` argument:: >>> from sklearn.datasets import load\_iris >>> iris = load\_iris() >>> mc\_clf = GradientBoostingClassifier(n\_estimators=10, ... max\_depth=1).fit(iris.data, iris.target) >>> features = [3, 2, (3, 2)] >>> PartialDependenceDisplay.from\_estimator(mc\_clf, X, features, target=0) <...> The same | https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/partial_dependence.rst | main | scikit-learn | [
… (384-dimensional embedding vector, truncated) ] | -0.025652 |