.. _gaussian_process:

==================
Gaussian Processes
==================

.. currentmodule:: sklearn.gaussian_process

**Gaussian Processes (GP)** are a nonparametric supervised learning method used
to solve *regression* and *probabilistic classification* problems.

The advantages of Gaussian processes are:

- The prediction interpolates the observations (at least for regular kernels).

- The prediction is probabilistic (Gaussian) so that one can compute empirical
  confidence intervals and decide based on those if one should refit (online
  fitting, adaptive fitting) the prediction in some region of interest.

- Versatile: different :ref:`kernels <gp_kernels>` can be specified. Common
  kernels are provided, but it is also possible to specify custom kernels.

The disadvantages of Gaussian processes include:

- Our implementation is not sparse, i.e., it uses the whole sample and feature
  information to perform the prediction.

- They lose efficiency in high-dimensional spaces, namely when the number of
  features exceeds a few dozen.

.. _gpr:

Gaussian Process Regression (GPR)
=================================

.. currentmodule:: sklearn.gaussian_process

The :class:`GaussianProcessRegressor` implements Gaussian processes (GP) for
regression purposes. For this, the prior of the GP needs to be specified. The
GP combines this prior with the likelihood function based on training samples.
This provides a probabilistic approach to prediction: the model outputs a mean
and a standard deviation when predicting.

.. figure:: ../auto_examples/gaussian_process/images/sphx_glr_plot_gpr_noisy_targets_002.png
   :target: ../auto_examples/gaussian_process/plot_gpr_noisy_targets.html
   :align: center

The prior mean is assumed to be constant and zero (for ``normalize_y=False``)
or the training data's mean (for ``normalize_y=True``). The prior's covariance
is specified by passing a :ref:`kernel <gp_kernels>` object.
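The basic workflow can be sketched as follows (a minimal sketch on toy data;
the kernel and the input values are illustrative, not taken from this guide):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy 1-D regression problem (illustrative data)
X_train = np.array([[1.0], [3.0], [5.0], [6.0]])
y_train = np.sin(X_train).ravel()

# The prior covariance is given by the kernel; the prior mean is zero
# because normalize_y defaults to False.
gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0)).fit(X_train, y_train)

# Probabilistic prediction: mean and standard deviation at new inputs
mean, std = gpr.predict(np.array([[2.0], [4.0]]), return_std=True)

# For a noise-free fit with a regular kernel, the posterior mean
# interpolates the training observations.
close = np.allclose(gpr.predict(X_train), y_train, atol=1e-4)
```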
The hyperparameters of the kernel are optimized when fitting the
:class:`GaussianProcessRegressor` by maximizing the log-marginal-likelihood
(LML) based on the passed ``optimizer``. As the LML may have multiple local
optima, the optimizer can be restarted repeatedly by specifying
``n_restarts_optimizer``. The first run is always conducted starting from the
initial hyperparameter values of the kernel; subsequent runs are conducted
from hyperparameter values that have been chosen randomly from the range of
allowed values. If the initial hyperparameters should be kept fixed, ``None``
can be passed as optimizer.

The noise level in the targets can be specified by passing it via the
parameter ``alpha``, either globally as a scalar or per datapoint. Note that a
moderate noise level can also be helpful for dealing with numeric
instabilities during fitting, as it is effectively implemented as Tikhonov
regularization, i.e., by adding it to the diagonal of the kernel matrix. An
alternative to specifying the noise level explicitly is to include a
:class:`~sklearn.gaussian_process.kernels.WhiteKernel` component in the
kernel, which can estimate the global noise level from the data (see example
below). The figure below shows the effect of a noisy target, handled by
setting the parameter ``alpha``.

.. figure:: ../auto_examples/gaussian_process/images/sphx_glr_plot_gpr_noisy_targets_003.png
   :target: ../auto_examples/gaussian_process/plot_gpr_noisy_targets.html
   :align: center

The implementation is based on Algorithm 2.1 of [RW2006]_.
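The two ways of handling noise described above can be sketched as follows
(a minimal sketch on synthetic data; the kernel choices, noise values, and
bounds are illustrative assumptions):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.RandomState(0)
X = rng.uniform(0, 5, 30)[:, np.newaxis]
y = np.sin(X).ravel() + rng.normal(0, 0.2, X.shape[0])  # noisy targets

# Option 1: known, fixed noise variance added to the kernel diagonal via alpha
gpr_alpha = GaussianProcessRegressor(kernel=RBF(), alpha=0.2**2).fit(X, y)

# Option 2: estimate the noise level from the data with a WhiteKernel component
kernel = RBF() + WhiteKernel(noise_level=1.0, noise_level_bounds=(1e-5, 1e1))
gpr_white = GaussianProcessRegressor(kernel=kernel).fit(X, y)

mean, std = gpr_white.predict(X, return_std=True)  # probabilistic prediction
print(gpr_white.kernel_)  # fitted kernel, including the estimated noise level
```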
In addition to the API of standard scikit-learn estimators,
:class:`GaussianProcessRegressor`:

* allows prediction without prior fitting (based on the GP prior)

* provides an additional method ``sample_y(X)``, which evaluates samples drawn
  from the GPR (prior or posterior) at given inputs

* exposes a method ``log_marginal_likelihood(theta)``, which can be used
  externally for other ways of selecting hyperparameters, e.g., via Markov
  chain Monte Carlo.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_gaussian_process_plot_gpr_noisy_targets.py`
* :ref:`sphx_glr_auto_examples_gaussian_process_plot_gpr_noisy.py`
* :ref:`sphx_glr_auto_examples_gaussian_process_plot_compare_gpr_krr.py`
* :ref:`sphx_glr_auto_examples_gaussian_process_plot_gpr_co2.py`

.. _gpc:

Gaussian Process Classification (GPC)
=====================================

.. currentmodule:: sklearn.gaussian_process

The :class:`GaussianProcessClassifier` implements Gaussian processes (GP) for
classification purposes, more specifically for probabilistic classification,
where test predictions take the form of class probabilities.
GaussianProcessClassifier places a GP prior on a latent function :math:`f`,
which is then squashed through a link function :math:`\pi` to obtain the
probabilistic classification. The latent function :math:`f` is a so-called
nuisance function, whose values are not observed and are not relevant by
themselves. Its purpose is to
allow a convenient formulation of the model, and :math:`f` is removed
(integrated out) during prediction. :class:`GaussianProcessClassifier`
implements the logistic link function, for which the integral cannot be
computed analytically but is easily approximated in the binary case.

In contrast to the regression setting, the posterior of the latent function
:math:`f` is not Gaussian even for a GP prior, since a Gaussian likelihood is
inappropriate for discrete class labels. Rather, a non-Gaussian likelihood
corresponding to the logistic link function (logit) is used.
GaussianProcessClassifier approximates the non-Gaussian posterior with a
Gaussian based on the Laplace approximation. More details can be found in
Chapter 3 of [RW2006]_.

The GP prior mean is assumed to be zero. The prior's covariance is specified
by passing a :ref:`kernel <gp_kernels>` object. The hyperparameters of the
kernel are optimized during fitting of :class:`GaussianProcessClassifier` by
maximizing the log-marginal-likelihood (LML) based on the passed
``optimizer``. As the LML may have multiple local optima, the optimizer can be
restarted repeatedly by specifying ``n_restarts_optimizer``. The first run is
always conducted starting from the initial hyperparameter values of the
kernel; subsequent runs are conducted from hyperparameter values that have
been chosen randomly from the range of allowed values. If the initial
hyperparameters should be kept fixed, ``None`` can be passed as optimizer.

In some scenarios, information about the latent function :math:`f` is desired
(i.e. the mean :math:`\bar{f_*}` and the variance :math:`\text{Var}[f_*]`
described in Eqs.
(3.21) and (3.24) of [RW2006]_). The :class:`GaussianProcessClassifier`
provides access to these quantities via the ``latent_mean_and_variance``
method.

:class:`GaussianProcessClassifier` supports multi-class classification by
performing either one-versus-rest or one-versus-one based training and
prediction. In one-versus-rest, one binary Gaussian process classifier is
fitted for each class, which is trained to separate this class from the rest.
In "one_vs_one", one binary Gaussian process classifier is fitted for each
pair of classes, which is trained to separate these two classes. The
predictions of these binary predictors are combined into multi-class
predictions. See the section on :ref:`multi-class classification <multiclass>`
for more details.

In the case of Gaussian process classification, "one_vs_one" might be
computationally cheaper since it has to solve many problems involving only a
subset of the whole training set rather than fewer problems on the whole
dataset. Since Gaussian process classification scales cubically with the size
of the dataset, this might be considerably faster. However, note that
"one_vs_one" does not support predicting probability estimates but only plain
predictions. Moreover, note that :class:`GaussianProcessClassifier` does not
(yet) implement a true multi-class Laplace approximation internally, but as
discussed above is based on solving several binary classification tasks
internally, which are combined using one-versus-rest or one-versus-one.

GPC examples
============

Probabilistic predictions with GPC
----------------------------------

This example illustrates the predicted probability of GPC for an RBF kernel
with different choices of the hyperparameters. The first figure shows the
predicted probability of GPC with arbitrarily chosen hyperparameters and with
the hyperparameters corresponding to the maximum log-marginal-likelihood (LML).
While the hyperparameters chosen by optimizing LML have a considerably larger LML, they perform slightly worse according to the log-loss on test data. The figure shows that this is because they exhibit a steep change of the class probabilities at the class boundaries (which is good) but have predicted probabilities close to 0.5 far away from the class boundaries (which is bad). This undesirable effect is caused by the Laplace approximation used internally by
GPC.

The second figure shows the log-marginal-likelihood for different choices of
the kernel's hyperparameters, highlighting the two choices of the
hyperparameters used in the first figure by black dots.

.. figure:: ../auto_examples/gaussian_process/images/sphx_glr_plot_gpc_001.png
   :target: ../auto_examples/gaussian_process/plot_gpc.html
   :align: center

.. figure:: ../auto_examples/gaussian_process/images/sphx_glr_plot_gpc_002.png
   :target: ../auto_examples/gaussian_process/plot_gpc.html
   :align: center

Illustration of GPC on the XOR dataset
--------------------------------------

.. currentmodule:: sklearn.gaussian_process.kernels

This example illustrates GPC on XOR data. Compared are a stationary, isotropic
kernel (:class:`RBF`) and a non-stationary kernel (:class:`DotProduct`). On
this particular dataset, the :class:`DotProduct` kernel obtains considerably
better results because the class boundaries are linear and coincide with the
coordinate axes. In practice, however, stationary kernels such as :class:`RBF`
often obtain better results.

.. figure:: ../auto_examples/gaussian_process/images/sphx_glr_plot_gpc_xor_001.png
   :target: ../auto_examples/gaussian_process/plot_gpc_xor.html
   :align: center

.. currentmodule:: sklearn.gaussian_process

Gaussian process classification (GPC) on iris dataset
-----------------------------------------------------

This example illustrates the predicted probability of GPC for an isotropic and
an anisotropic RBF kernel on a two-dimensional version of the iris dataset.
This illustrates the applicability of GPC to non-binary classification.
The anisotropic RBF kernel obtains slightly higher log-marginal-likelihood by
assigning different length-scales to the two feature dimensions.

.. figure:: ../auto_examples/gaussian_process/images/sphx_glr_plot_gpc_iris_001.png
   :target: ../auto_examples/gaussian_process/plot_gpc_iris.html
   :align: center

.. _gp_kernels:

Kernels for Gaussian Processes
==============================

.. currentmodule:: sklearn.gaussian_process.kernels

Kernels (also called "covariance functions" in the context of GPs) are a
crucial ingredient of GPs which determine the shape of prior and posterior of
the GP. They encode the assumptions on the function being learned by defining
the "similarity" of two datapoints combined with the assumption that similar
datapoints should have similar target values. Two categories of kernels can be
distinguished: stationary kernels depend only on the distance of two
datapoints and not on their absolute values,
:math:`k(x_i, x_j) = k(d(x_i, x_j))`, and are thus invariant to translations
in the input space, while non-stationary kernels depend also on the specific
values of the datapoints. Stationary kernels can further be subdivided into
isotropic and anisotropic kernels, where isotropic kernels are also invariant
to rotations in the input space. For more details, we refer to Chapter 4 of
[RW2006]_. :ref:`This example
<sphx_glr_auto_examples_gaussian_process_plot_gpr_on_structured_data.py>`
shows how to define a custom kernel over discrete data. For guidance on how to
best combine different kernels, we refer to [Duv2014]_.

.. dropdown:: Gaussian Process Kernel API

  The main usage of a :class:`Kernel` is to compute the GP's covariance
  between datapoints. For this, the method ``__call__`` of the kernel can be
  called. This method can either be used to compute the "auto-covariance" of
  all pairs of datapoints in a 2d array X, or the "cross-covariance" of all
  combinations of datapoints of a 2d array X with datapoints in a 2d array Y.
  The following identity holds true for all kernels ``k`` (except for the
  :class:`WhiteKernel`): ``k(X) == k(X, Y=X)``.

  If only the diagonal of the auto-covariance is being used, the method
  ``diag()`` of a kernel can be called, which is more computationally
  efficient than the equivalent call to ``__call__``:
  ``np.diag(k(X, X)) == k.diag(X)``.

  Kernels are parameterized by a vector :math:`\theta` of hyperparameters.
  These hyperparameters can for instance control the length-scales or
  periodicity of a kernel (see below). All kernels support computing analytic
  gradients of the kernel's auto-covariance with respect to
  :math:`\log(\theta)` via setting ``eval_gradient=True`` in the ``__call__``
  method. That is, a ``(len(X), len(X), len(theta))`` array is returned, where
  the entry ``[i, j, l]`` contains
  :math:`\frac{\partial k_\theta(x_i, x_j)}{\partial \log(\theta_l)}`. This
  gradient is used by the Gaussian process (both regressor
  and classifier) in computing the gradient of the log-marginal-likelihood,
  which in turn is used to determine the value of :math:`\theta`, which
  maximizes the log-marginal-likelihood, via gradient ascent. For each
  hyperparameter, the initial value and the bounds need to be specified when
  creating an instance of the kernel. The current value of :math:`\theta` can
  be retrieved and set via the property ``theta`` of the kernel object.
  Moreover, the bounds of the hyperparameters can be accessed via the property
  ``bounds`` of the kernel. Note that both properties (``theta`` and
  ``bounds``) return log-transformed values of the internally used values,
  since those are typically more amenable to gradient-based optimization. The
  specification of each hyperparameter is stored in the form of an instance of
  :class:`Hyperparameter` in the respective kernel. Note that a kernel using a
  hyperparameter with name "x" must have the attributes ``self.x`` and
  ``self.x_bounds``.

  The abstract base class for all kernels is :class:`Kernel`. Kernel
  implements a similar interface as :class:`~sklearn.base.BaseEstimator`,
  providing the methods ``get_params()``, ``set_params()``, and ``clone()``.
  This also allows setting kernel values via meta-estimators such as
  :class:`~sklearn.pipeline.Pipeline` or
  :class:`~sklearn.model_selection.GridSearchCV`. Note that due to the nested
  structure of kernels (obtained by applying kernel operators, see below), the
  names of kernel parameters might become relatively complicated.
  In general, for a binary kernel operator, parameters of the left operand are
  prefixed with ``k1__`` and parameters of the right operand with ``k2__``. An
  additional convenience method is ``clone_with_theta(theta)``, which returns
  a cloned version of the kernel but with the hyperparameters set to
  ``theta``. An illustrative example:

    >>> from sklearn.gaussian_process.kernels import ConstantKernel, RBF
    >>> kernel = ConstantKernel(constant_value=1.0, constant_value_bounds=(0.0, 10.0)) * RBF(length_scale=0.5, length_scale_bounds=(0.0, 10.0)) + RBF(length_scale=2.0, length_scale_bounds=(0.0, 10.0))
    >>> for hyperparameter in kernel.hyperparameters: print(hyperparameter)
    Hyperparameter(name='k1__k1__constant_value', value_type='numeric', bounds=array([[ 0., 10.]]), n_elements=1, fixed=False)
    Hyperparameter(name='k1__k2__length_scale', value_type='numeric', bounds=array([[ 0., 10.]]), n_elements=1, fixed=False)
    Hyperparameter(name='k2__length_scale', value_type='numeric', bounds=array([[ 0., 10.]]), n_elements=1, fixed=False)
    >>> params = kernel.get_params()
    >>> for key in sorted(params): print("%s : %s" % (key, params[key]))
    k1 : 1**2 * RBF(length_scale=0.5)
    k1__k1 : 1**2
    k1__k1__constant_value : 1.0
    k1__k1__constant_value_bounds : (0.0, 10.0)
    k1__k2 : RBF(length_scale=0.5)
    k1__k2__length_scale : 0.5
    k1__k2__length_scale_bounds : (0.0, 10.0)
    k2 : RBF(length_scale=2)
    k2__length_scale : 2.0
    k2__length_scale_bounds : (0.0, 10.0)
    >>> print(kernel.theta)  # Note: log-transformed
    [ 0.         -0.69314718  0.69314718]
    >>> print(kernel.bounds)  # Note: log-transformed
    [[      -inf 2.30258509]
     [      -inf 2.30258509]
     [      -inf 2.30258509]]

  All Gaussian process kernels are interoperable with
  :mod:`sklearn.metrics.pairwise` and vice versa: instances of subclasses of
  :class:`Kernel` can be passed as ``metric`` to ``pairwise_kernels`` from
  :mod:`sklearn.metrics.pairwise`.
  Moreover, kernel functions from pairwise can be used as GP kernels by using
  the wrapper class :class:`PairwiseKernel`. The only caveat is that the
  gradient of the hyperparameters is not analytic but numeric, and all those
  kernels support only isotropic distances. The parameter ``gamma`` is
  considered to be a hyperparameter and may be optimized. The other kernel
  parameters are set directly at initialization and are kept fixed.

Basic kernels
-------------

The :class:`ConstantKernel` kernel can be used as part of a :class:`Product`
kernel, where it scales the magnitude of the other factor (kernel), or as part
of a :class:`Sum` kernel, where it modifies the mean of the Gaussian process.
It depends on a parameter :math:`constant\_value`. It is defined as:

.. math::
   k(x_i, x_j) = constant\_value \;\forall\; x_i, x_j

The main use-case of the :class:`WhiteKernel` kernel is as part of a
sum-kernel, where it explains the noise component of the signal. Tuning its
parameter :math:`noise\_level` corresponds to estimating the noise level.
It is defined as:

.. math::
   k(x_i, x_j) = noise\_level \text{ if } x_i == x_j \text{ else } 0

Kernel operators
----------------

Kernel operators take one or two base kernels and combine them into a new
kernel. The :class:`Sum` kernel takes two kernels :math:`k_1` and :math:`k_2`
and combines them via :math:`k_{sum}(X, Y) = k_1(X, Y) + k_2(X, Y)`. The
:class:`Product` kernel takes two kernels :math:`k_1` and :math:`k_2` and
combines them via :math:`k_{product}(X, Y) = k_1(X, Y) * k_2(X, Y)`. The
:class:`Exponentiation` kernel takes one base kernel and a scalar parameter
:math:`p` and combines them via :math:`k_{exp}(X, Y) = k(X, Y)^p`. Note that
the magic methods ``__add__``, ``__mul__`` and ``__pow__`` are overridden on
the Kernel objects, so one can use e.g. ``RBF() + RBF()`` as a shortcut for
``Sum(RBF(), RBF())``.

Radial basis function (RBF) kernel
----------------------------------

The :class:`RBF` kernel is a stationary kernel. It is also known as the
"squared exponential" kernel. It is parameterized by a length-scale parameter
:math:`l>0`, which can either be a scalar (isotropic variant of the kernel) or
a vector with the same number of dimensions as the inputs :math:`x`
(anisotropic variant of the kernel). The kernel is given by:

.. math::
   k(x_i, x_j) = \exp\left(- \frac{d(x_i, x_j)^2}{2l^2} \right)

where :math:`d(\cdot, \cdot)` is the Euclidean distance. This kernel is
infinitely differentiable, which implies that GPs with this kernel as
covariance function have mean square derivatives of all orders, and are thus
very smooth.
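The operator shortcuts and the RBF definition above can be checked numerically
(a minimal sketch; the inputs and parameter values are arbitrary):

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.gaussian_process.kernels import RBF, ConstantKernel, Sum, Product

X = np.array([[0.0], [1.0], [2.5]])
k1 = ConstantKernel(constant_value=2.0)
k2 = RBF(length_scale=1.5)

# Overloaded operators are shortcuts for the explicit operator kernels
print(np.allclose((k1 + k2)(X), Sum(k1, k2)(X)))      # True
print(np.allclose((k1 * k2)(X), Product(k1, k2)(X)))  # True
print(np.allclose((k2 ** 2)(X), k2(X) ** 2))          # Exponentiation: True

# The RBF kernel equals exp(-d^2 / (2 l^2)) for the Euclidean distance d
print(np.allclose(k2(X), np.exp(-cdist(X, X) ** 2 / (2 * 1.5 ** 2))))  # True
```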
The prior and posterior of a GP resulting from an RBF kernel are shown in the
following figure:

.. figure:: ../auto_examples/gaussian_process/images/sphx_glr_plot_gpr_prior_posterior_001.png
   :target: ../auto_examples/gaussian_process/plot_gpr_prior_posterior.html
   :align: center

Matérn kernel
-------------

The :class:`Matern` kernel is a stationary kernel and a generalization of the
:class:`RBF` kernel. It has an additional parameter :math:`\nu` which controls
the smoothness of the resulting function. It is parameterized by a
length-scale parameter :math:`l>0`, which can either be a scalar (isotropic
variant of the kernel) or a vector with the same number of dimensions as the
inputs :math:`x` (anisotropic variant of the kernel).

.. dropdown:: Mathematical implementation of Matérn kernel

  The kernel is given by:

  .. math::
     k(x_i, x_j) = \frac{1}{\Gamma(\nu)2^{\nu-1}}\Bigg(\frac{\sqrt{2\nu}}{l} d(x_i, x_j)\Bigg)^\nu K_\nu\Bigg(\frac{\sqrt{2\nu}}{l} d(x_i, x_j)\Bigg),

  where :math:`d(\cdot,\cdot)` is the Euclidean distance, :math:`K_\nu(\cdot)`
  is a modified Bessel function and :math:`\Gamma(\cdot)` is the gamma
  function. As :math:`\nu\rightarrow\infty`, the Matérn kernel converges to
  the RBF kernel. When :math:`\nu = 1/2`, the Matérn kernel becomes identical
  to the absolute exponential kernel, i.e.,

  .. math::
     k(x_i, x_j) = \exp\Bigg(- \frac{1}{l} d(x_i, x_j)\Bigg) \quad \quad \nu = \tfrac{1}{2}

  In particular, :math:`\nu = 3/2`:

  .. math::
     k(x_i, x_j) = \Bigg(1 + \frac{\sqrt{3}}{l} d(x_i, x_j)\Bigg) \exp\Bigg(-\frac{\sqrt{3}}{l} d(x_i, x_j)\Bigg) \quad \quad \nu = \tfrac{3}{2}

  and :math:`\nu = 5/2`:
  .. math::
     k(x_i, x_j) = \Bigg(1 + \frac{\sqrt{5}}{l} d(x_i, x_j) + \frac{5}{3l^2} d(x_i, x_j)^2\Bigg) \exp\Bigg(-\frac{\sqrt{5}}{l} d(x_i, x_j)\Bigg) \quad \quad \nu = \tfrac{5}{2}

  are popular choices for learning functions that are not infinitely
  differentiable (as assumed by the RBF kernel) but are at least once
  (:math:`\nu = 3/2`) or twice differentiable (:math:`\nu = 5/2`).

  The flexibility of controlling the smoothness of the learned function via
  :math:`\nu` allows adapting to the properties of the true underlying
  functional relation.

The prior and posterior of a GP resulting from a Matérn kernel are shown in
the following figure:

.. figure:: ../auto_examples/gaussian_process/images/sphx_glr_plot_gpr_prior_posterior_005.png
   :target: ../auto_examples/gaussian_process/plot_gpr_prior_posterior.html
   :align: center
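The :math:`\nu = 1/2` special case can be verified numerically (a minimal
sketch; the inputs and length-scale are arbitrary):

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.gaussian_process.kernels import Matern

X = np.array([[0.0], [1.0], [2.5]])
l = 1.5

# For nu=0.5 the Matern kernel reduces to the absolute exponential kernel
K_matern = Matern(length_scale=l, nu=0.5)(X)
K_abs_exp = np.exp(-cdist(X, X) / l)
print(np.allclose(K_matern, K_abs_exp))  # True
```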
See [RW2006]_, p. 84 for further details regarding the different variants of
the Matérn kernel.

Rational quadratic kernel
-------------------------

The :class:`RationalQuadratic` kernel can be seen as a scale mixture (an
infinite sum) of :class:`RBF` kernels with different characteristic
length-scales. It is parameterized by a length-scale parameter :math:`l>0` and
a scale mixture parameter :math:`\alpha>0`. Only the isotropic variant where
:math:`l` is a scalar is supported at the moment. The kernel is given by:

.. math::
   k(x_i, x_j) = \left(1 + \frac{d(x_i, x_j)^2}{2\alpha l^2}\right)^{-\alpha}

The prior and posterior of a GP resulting from a :class:`RationalQuadratic`
kernel are shown in the following figure:

.. figure:: ../auto_examples/gaussian_process/images/sphx_glr_plot_gpr_prior_posterior_002.png
   :target: ../auto_examples/gaussian_process/plot_gpr_prior_posterior.html
   :align: center

Exp-Sine-Squared kernel
-----------------------

The :class:`ExpSineSquared` kernel allows modeling periodic functions. It is
parameterized by a length-scale parameter :math:`l>0` and a periodicity
parameter :math:`p>0`. Only the isotropic variant where :math:`l` is a scalar
is supported at the moment. The kernel is given by:

.. math::
   k(x_i, x_j) = \exp\left(- \frac{2\sin^2(\pi d(x_i, x_j) / p)}{l^2} \right)

The prior and posterior of a GP resulting from an ExpSineSquared kernel are
shown in the following figure:
.. figure:: ../auto_examples/gaussian_process/images/sphx_glr_plot_gpr_prior_posterior_003.png
   :target: ../auto_examples/gaussian_process/plot_gpr_prior_posterior.html
   :align: center

Dot-Product kernel
------------------

The :class:`DotProduct` kernel is non-stationary and can be obtained from
linear regression by putting :math:`N(0, 1)` priors on the coefficients of
:math:`x_d (d = 1, \dots, D)` and a prior of :math:`N(0, \sigma_0^2)` on the
bias. The :class:`DotProduct` kernel is invariant to a rotation of the
coordinates about the origin, but not to translations. It is parameterized by
a parameter :math:`\sigma_0^2`. For :math:`\sigma_0^2 = 0`, the kernel is
called the homogeneous linear kernel; otherwise it is inhomogeneous. The
kernel is given by:

.. math::
   k(x_i, x_j) = \sigma_0^2 + x_i \cdot x_j

The :class:`DotProduct` kernel is commonly combined with exponentiation. An
example with exponent 2 is shown in the following figure:

.. figure:: ../auto_examples/gaussian_process/images/sphx_glr_plot_gpr_prior_posterior_004.png
   :target: ../auto_examples/gaussian_process/plot_gpr_prior_posterior.html
   :align: center

References
----------

.. [RW2006] Carl E. Rasmussen and Christopher K.I. Williams, "Gaussian
   Processes for Machine Learning", MIT Press 2006.

.. [Duv2014] David Duvenaud, "The Kernel Cookbook: Advice on Covariance
   functions", 2014.

.. currentmodule:: sklearn.gaussian_process
.. _impute:

============================
Imputation of missing values
============================

.. currentmodule:: sklearn.impute

For various reasons, many real world datasets contain missing values, often
encoded as blanks, NaNs or other placeholders. Such datasets however are
incompatible with scikit-learn estimators which assume that all values in an
array are numerical, and that all have and hold meaning. A basic strategy to
use incomplete datasets is to discard entire rows and/or columns containing
missing values. However, this comes at the price of losing data which may be
valuable (even though incomplete). A better strategy is to impute the missing
values, i.e., to infer them from the known part of the data. See the glossary
entry on :term:`imputation`.

Univariate vs. Multivariate Imputation
======================================

One type of imputation algorithm is univariate, which imputes values in the
i-th feature dimension using only non-missing values in that feature dimension
(e.g. :class:`SimpleImputer`). By contrast, multivariate imputation algorithms
use the entire set of available feature dimensions to estimate the missing
values (e.g. :class:`IterativeImputer`).

.. _single_imputer:

Univariate feature imputation
=============================

The :class:`SimpleImputer` class provides basic strategies for imputing
missing values. Missing values can be imputed with a provided constant value,
or using the statistics (mean, median or most frequent) of each column in
which the missing values are located. This class also allows for different
missing values encodings.
The following snippet demonstrates how to replace missing values, encoded as
``np.nan``, using the mean value of the columns (axis 0) that contain the
missing values::

    >>> import numpy as np
    >>> from sklearn.impute import SimpleImputer
    >>> imp = SimpleImputer(missing_values=np.nan, strategy='mean')
    >>> imp.fit([[1, 2], [np.nan, 3], [7, 6]])
    SimpleImputer()
    >>> X = [[np.nan, 2], [6, np.nan], [7, 6]]
    >>> print(imp.transform(X))
    [[4.    2.   ]
     [6.    3.666]
     [7.    6.   ]]

The :class:`SimpleImputer` class also supports sparse matrices::

    >>> import scipy.sparse as sp
    >>> X = sp.csc_matrix([[1, 2], [0, -1], [8, 4]])
    >>> imp = SimpleImputer(missing_values=-1, strategy='mean')
    >>> imp.fit(X)
    SimpleImputer(missing_values=-1)
    >>> X_test = sp.csc_matrix([[-1, 2], [6, -1], [7, 6]])
    >>> print(imp.transform(X_test).toarray())
    [[3. 2.]
     [6. 3.]
     [7. 6.]]

Note that this format is not meant to be used to implicitly store missing
values in the matrix because it would densify it at transform time. Missing
values encoded by 0 must be used with dense input.

The :class:`SimpleImputer` class also supports categorical data represented as
string values or pandas categoricals when using the ``'most_frequent'`` or
``'constant'`` strategy::

    >>> import pandas as pd
    >>> df = pd.DataFrame([["a", "x"],
    ...                    [np.nan, "y"],
    ...                    ["a", np.nan],
    ...                    ["b", "y"]], dtype="category")
    ...
    >>> imp = SimpleImputer(strategy="most_frequent")
    >>> print(imp.fit_transform(df))
    [['a' 'x']
     ['a' 'y']
     ['a' 'y']
     ['b' 'y']]

For another example on usage, see
:ref:`sphx_glr_auto_examples_impute_plot_missing_values.py`.

.. _iterative_imputer:

Multivariate feature imputation
===============================

A more sophisticated approach is to use the :class:`IterativeImputer` class,
which models each feature with missing values as a function of other features,
and uses that estimate for imputation.
It does so in an iterated round-robin fashion: at each step, a feature column
is designated as output ``y`` and the other feature columns are treated as
inputs ``X``. A regressor is fit on ``(X, y)`` for known ``y``. Then, the
regressor is used to predict the missing values of ``y``. This is done for
each feature in an iterative fashion, and then is repeated for ``max_iter``
imputation rounds. The results of the final imputation round are returned.

.. note::

   This estimator is still **experimental** for now: default parameters or
   details of behaviour might change without any deprecation cycle. Resolving
   the following issues would help stabilize :class:`IterativeImputer`:
   convergence criteria (:issue:`14338`) and default estimators
   (:issue:`13286`). To use it, you need to explicitly import
   ``enable_iterative_imputer``.

::

    >>> import numpy as np
    >>> from sklearn.experimental import enable_iterative_imputer
    >>> from sklearn.impute import IterativeImputer
    >>> imp = IterativeImputer(max_iter=10, random_state=0)
    >>> imp.fit([[1, 2], [3, 6], [4, 8], [np.nan, 3], [7, np.nan]])
    IterativeImputer(random_state=0)
    >>> X_test = [[np.nan, 2], [6, np.nan], [np.nan, 6]]
    >>> # the model learns that the second feature is double the first
    >>> print(np.round(imp.transform(X_test)))
    [[ 1.  2.]
     [ 6. 12.]
     [ 3.  6.]]

Both :class:`SimpleImputer` and :class:`IterativeImputer` can be used in a
Pipeline as a way to build a composite estimator that supports imputation.
See :ref:`sphx_glr_auto_examples_impute_plot_missing_values.py`.

Flexibility of IterativeImputer
-------------------------------

There are many well-established imputation packages in the R data science
ecosystem: Amelia, mi, mice, missForest, etc. missForest is popular, and turns
out to be a particular instance of different sequential imputation algorithms
that can all be implemented with :class:`IterativeImputer` by passing in
different regressors to be used for predicting missing feature values. In the
case of missForest, this regressor is a Random Forest. See
:ref:`sphx_glr_auto_examples_impute_plot_iterative_imputer_variants_comparison.py`.

.. _multiple_imputation:

Multiple vs. Single Imputation
------------------------------

In the statistics community, it is common practice to perform multiple
imputations, generating, for example, ``m`` separate imputations for a single
feature matrix.
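The idea of generating several imputations can be sketched by running
:class:`IterativeImputer` repeatedly with ``sample_posterior=True`` and
different random seeds (the toy dataset and the number of imputations ``m``
below are purely illustrative):

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

X = np.array([[1.0, 2.0], [3.0, 6.0], [4.0, 8.0], [np.nan, 3.0], [7.0, np.nan]])

# Draw m different imputations by varying the random seed;
# sample_posterior=True samples each imputed value from the
# predictive posterior of the regressor instead of using its mean.
m = 5
imputations = [
    IterativeImputer(sample_posterior=True, random_state=seed).fit_transform(X)
    for seed in range(m)
]
# Each element of `imputations` is one completed dataset.
```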
Each of these ``m`` imputations is then put through the subsequent analysis
pipeline (e.g. feature engineering, clustering, regression, classification).
The ``m`` final analysis results (e.g. held-out validation errors) allow the
data scientist to obtain understanding of how analytic results may differ as a
consequence of the inherent uncertainty caused by the missing values. The
above practice is called multiple imputation.

Our implementation of :class:`IterativeImputer` was inspired by the R MICE
package (Multivariate Imputation by Chained Equations) [1]_, but differs from
it by returning a single imputation instead of multiple imputations. However,
:class:`IterativeImputer` can also be used for multiple imputations by
applying it repeatedly to the same dataset with different random seeds when
``sample_posterior=True``. See [2]_, chapter 4 for more discussion on multiple
vs. single imputations.

It is still an open problem as to how useful single vs. multiple imputation is
in the context of prediction and classification when the user is not
interested in measuring uncertainty due to missing values.

Note that a call to the ``transform`` method of :class:`IterativeImputer` is
not allowed to change the number of samples. Therefore multiple imputations
cannot be achieved by a single call to ``transform``.

.. rubric:: References

.. [1] Stef van Buuren, Karin Groothuis-Oudshoorn (2011). "mice: Multivariate
   Imputation by Chained Equations in R". Journal of Statistical Software 45:
   1-67.

.. [2] Roderick J A Little and Donald B Rubin (1986). "Statistical Analysis
   with Missing Data". John Wiley & Sons, Inc., New York, NY, USA.

.. _knnimpute:

Nearest neighbors imputation
============================

The :class:`KNNImputer` class provides imputation for filling in missing
values using the k-Nearest Neighbors approach.
By default, a euclidean distance metric that supports missing values,
:func:`~sklearn.metrics.pairwise.nan_euclidean_distances`, is used to find the
nearest neighbors. Each missing feature is imputed using values from
``n_neighbors`` nearest neighbors that have a value for the feature. The
features of the neighbors are averaged uniformly or weighted by distance to
each neighbor. If a sample has more than one feature missing, then the
neighbors for that sample can be different depending on the particular feature
being imputed. When the number of available neighbors is less than
`n_neighbors` and there are no defined distances to the training set, the
training set average for that feature is used during imputation. If there is
at least one neighbor with a defined distance, the weighted or unweighted
average of the remaining neighbors will be used during imputation. If a
feature is always missing in training, it is removed during `transform`. For
more information on the methodology, see ref. [OL2001]_.

The following snippet demonstrates how to replace missing values, encoded as
``np.nan``, using the mean feature value of the two nearest neighbors of
samples with missing values::

    >>> import numpy as np
    >>> from sklearn.impute import KNNImputer
    >>> nan = np.nan
    >>> X = [[1, 2, nan], [3, 4, 3], [nan, 6, 5], [8, 8, 7]]
    >>> imputer = KNNImputer(n_neighbors=2, weights="uniform")
    >>> imputer.fit_transform(X)
    array([[1. , 2. , 4. ],
           [3. , 4. , 3. ],
           [5.5, 6. , 5. ],
           [8. , 8. , 7. ]])

For another example on usage, see
:ref:`sphx_glr_auto_examples_impute_plot_missing_values.py`.

.. rubric:: References

.. [OL2001] Olga Troyanskaya, Michael Cantor, Gavin Sherlock, Pat Brown,
   Trevor Hastie, Robert Tibshirani, David Botstein and Russ B. Altman,
   Missing value estimation methods for DNA microarrays, BIOINFORMATICS
   Vol. 17 no. 6, 2001, pages 520-525.

Keeping the number of features constant
=======================================

By default, the scikit-learn imputers will drop fully empty features, i.e.
columns containing only missing values. For instance::

    >>> imputer = SimpleImputer()
    >>> X = np.array([[np.nan, 1], [np.nan, 2], [np.nan, 3]])
    >>> imputer.fit_transform(X)
    array([[1.],
           [2.],
           [3.]])

The first feature in `X` containing only `np.nan` was dropped after the
imputation.
While this feature will not help in a predictive setting, dropping the columns
will change the shape of `X`, which could be problematic when using imputers
in a more complex machine-learning pipeline. The parameter
`keep_empty_features` offers the option to keep the empty features by imputing
with a constant value. In most cases, this constant value is zero::

    >>> imputer.set_params(keep_empty_features=True)
    SimpleImputer(keep_empty_features=True)
    >>> imputer.fit_transform(X)
    array([[0., 1.],
           [0., 2.],
           [0., 3.]])

.. _missing_indicator:

Marking imputed values
======================

The :class:`MissingIndicator` transformer is useful to transform a dataset
into a corresponding binary matrix indicating the presence of missing values
in the dataset. This transformation is useful in conjunction with imputation:
when using imputation, preserving the information about which values had been
missing can be informative. Note that both the :class:`SimpleImputer` and
:class:`IterativeImputer` have the boolean parameter ``add_indicator``
(``False`` by default), which when set to ``True`` provides a convenient way
of stacking the output of the :class:`MissingIndicator` transformer with the
output of the imputer.

``NaN`` is usually used as the placeholder for missing values. However, it
enforces the data type to be float. The parameter ``missing_values`` allows
specifying other placeholders, such as integers. In the following example, we
will use ``-1`` as missing values::

    >>> from sklearn.impute import MissingIndicator
    >>> X = np.array([[-1, -1, 1, 3],
    ...               [4, -1, 0, -1],
    ...               [8, -1, 1, 0]])
    >>> indicator = MissingIndicator(missing_values=-1)
    >>> mask_missing_values_only = indicator.fit_transform(X)
    >>> mask_missing_values_only
    array([[ True,  True, False],
           [False,  True,  True],
           [False,  True, False]])

The ``features`` parameter is used to choose the features for which the mask
is constructed.
By default, it is ``'missing-only'``, which returns the imputer mask of the
features containing missing values at ``fit`` time::

    >>> indicator.features_
    array([0, 1, 3])

The ``features`` parameter can be set to ``'all'`` to return all features
whether or not they contain missing values::

    >>> indicator = MissingIndicator(missing_values=-1, features="all")
    >>> mask_all = indicator.fit_transform(X)
    >>> mask_all
    array([[ True,  True, False, False],
           [False,  True, False,  True],
           [False,  True, False, False]])
    >>> indicator.features_
    array([0, 1, 2, 3])

When using the :class:`MissingIndicator` in a
:class:`~sklearn.pipeline.Pipeline`, be sure to use the
:class:`~sklearn.pipeline.FeatureUnion` or
:class:`~sklearn.compose.ColumnTransformer` to add the indicator features to
the regular features. First we obtain the `iris` dataset, and add some missing
values to it. ::

    >>> from sklearn.datasets import load_iris
    >>> from sklearn.impute import SimpleImputer, MissingIndicator
    >>> from sklearn.model_selection import train_test_split
    >>> from sklearn.pipeline import FeatureUnion, make_pipeline
    >>> from sklearn.tree import DecisionTreeClassifier
    >>> X, y = load_iris(return_X_y=True)
    >>> mask = np.random.randint(0, 2, size=X.shape).astype(bool)
    >>> X[mask] = np.nan
    >>> X_train, X_test, y_train, _ = train_test_split(X, y, test_size=100,
    ...                                                random_state=0)

Now we create a :class:`~sklearn.pipeline.FeatureUnion`. All features will be
imputed using :class:`SimpleImputer`, in order to enable classifiers to work
with this data. Additionally, it adds the indicator variables from
:class:`MissingIndicator`. ::

    >>> transformer = FeatureUnion(
    ...     transformer_list=[
    ...         ('features', SimpleImputer(strategy='mean')),
    ...         ('indicators', MissingIndicator())])
    >>> transformer = transformer.fit(X_train, y_train)
    >>> results = transformer.transform(X_test)
    >>> results.shape
    (100, 8)

Of course, we cannot use the transformer to make any predictions.
We should wrap this in a :class:`~sklearn.pipeline.Pipeline` with a classifier
(e.g., a :class:`~sklearn.tree.DecisionTreeClassifier`) to be able to make
predictions. ::

    >>> clf = make_pipeline(transformer, DecisionTreeClassifier())
    >>> clf = clf.fit(X_train, y_train)
    >>> results = clf.predict(X_test)
    >>> results.shape
    (100,)

Estimators that handle NaN values
=================================

Some estimators are designed to handle NaN values without preprocessing. Below
is the list of these estimators, classified by type (cluster, regressor,
classifier, transform):

.. allow_nan_estimators::
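As a complement to the manual ``FeatureUnion`` pipeline shown above, the
``add_indicator`` parameter mentioned earlier stacks the indicator columns
onto the imputed output automatically. A minimal sketch on a toy array (the
data is illustrative):

```python
import numpy as np
from sklearn.impute import SimpleImputer

X = np.array([[1.0, 2.0], [np.nan, 3.0], [7.0, 6.0]])

# add_indicator=True appends the MissingIndicator mask to the imputed
# output, so downstream models can see which entries were missing.
imputer = SimpleImputer(strategy="mean", add_indicator=True)
Xt = imputer.fit_transform(X)
# Xt has the two imputed feature columns plus one indicator column for
# the first feature, which contained missing values at fit time.
```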
.. _lda_qda:

==========================================
Linear and Quadratic Discriminant Analysis
==========================================

.. currentmodule:: sklearn

Linear Discriminant Analysis
(:class:`~discriminant_analysis.LinearDiscriminantAnalysis`) and Quadratic
Discriminant Analysis
(:class:`~discriminant_analysis.QuadraticDiscriminantAnalysis`) are two
classic classifiers, with, as their names suggest, a linear and a quadratic
decision surface, respectively.

These classifiers are attractive because they have closed-form solutions that
can be easily computed, are inherently multiclass, have proven to work well in
practice, and have no hyperparameters to tune.

.. |ldaqda| image:: ../auto_examples/classification/images/sphx_glr_plot_lda_qda_001.png
   :target: ../auto_examples/classification/plot_lda_qda.html
   :scale: 80

.. centered:: |ldaqda|

The plot shows decision boundaries for Linear Discriminant Analysis and
Quadratic Discriminant Analysis. The bottom row demonstrates that Linear
Discriminant Analysis can only learn linear boundaries, while Quadratic
Discriminant Analysis can learn quadratic boundaries and is therefore more
flexible.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_classification_plot_lda_qda.py`: Comparison of
  LDA and QDA on synthetic data.

Dimensionality reduction using Linear Discriminant Analysis
===========================================================

:class:`~discriminant_analysis.LinearDiscriminantAnalysis` can be used to
perform supervised dimensionality reduction, by projecting the input data to a
linear subspace consisting of the directions which maximize the separation
between classes (in a precise sense discussed in the mathematics section
below). The dimension of the output is necessarily less than the number of
classes, so this is in general a rather strong dimensionality reduction, and
only makes sense in a multiclass setting. This is implemented in the
`transform` method.
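The supervised dimensionality reduction described above can be sketched on the
iris dataset (4 features, 3 classes, so at most 2 components):

```python
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)

# Project the 4-dimensional iris data onto the (at most
# n_classes - 1 = 2) most discriminative directions.
lda = LinearDiscriminantAnalysis(n_components=2)
X_reduced = lda.fit(X, y).transform(X)
```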
The desired dimensionality can be set using the ``n_components`` parameter.
This parameter has no influence on the `fit` and `predict` methods.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_decomposition_plot_pca_vs_lda.py`: Comparison
  of LDA and PCA for dimensionality reduction of the Iris dataset

.. _lda_qda_math:

Mathematical formulation of the LDA and QDA classifiers
=======================================================

Both LDA and QDA can be derived from simple probabilistic models which model
the class conditional distribution of the data :math:`P(X|y=k)` for each class
:math:`k`. Predictions can then be obtained by using Bayes' rule, for each
training sample :math:`x \in \mathbb{R}^d`:

.. math::

    P(y=k | x) = \frac{P(x | y=k) P(y=k)}{P(x)} = \frac{P(x | y=k) P(y = k)}{\sum_{l} P(x | y=l) \cdot P(y=l)}

and we select the class :math:`k` which maximizes this posterior probability.

More specifically, for linear and quadratic discriminant analysis,
:math:`P(x|y)` is modeled as a multivariate Gaussian distribution with
density:

.. math::

    P(x | y=k) = \frac{1}{(2\pi)^{d/2} |\Sigma_k|^{1/2}} \exp\left(-\frac{1}{2} (x-\mu_k)^T \Sigma_k^{-1} (x-\mu_k)\right)

where :math:`d` is the number of features.

QDA
---

According to the model above, the log of the posterior is:

.. math::

    \log P(y=k | x) &= \log P(x | y=k) + \log P(y = k) + Cst \\
                    &= -\frac{1}{2} \log |\Sigma_k| - \frac{1}{2} (x-\mu_k)^T \Sigma_k^{-1} (x-\mu_k) + \log P(y = k) + Cst,

where the constant term :math:`Cst` corresponds to the denominator
:math:`P(x)`, in addition to other constant terms from the Gaussian. The
predicted class is the one that maximises this log-posterior.
.. note:: **Relation with Gaussian Naive Bayes**

   If in the QDA model one assumes that the covariance matrices are diagonal,
   then the inputs are assumed to be conditionally independent in each class,
   and the resulting classifier is equivalent to the Gaussian Naive Bayes
   classifier :class:`naive_bayes.GaussianNB`.

LDA
---

LDA is a special case of QDA, where the Gaussians for each class are assumed
to share the same covariance matrix: :math:`\Sigma_k = \Sigma` for all
:math:`k`. This reduces the log posterior to:

.. math::

    \log P(y=k | x) = -\frac{1}{2} (x-\mu_k)^T \Sigma^{-1} (x-\mu_k) + \log P(y = k) + Cst.

The term :math:`(x-\mu_k)^T \Sigma^{-1} (x-\mu_k)` corresponds to the
Mahalanobis distance between the sample :math:`x` and the mean :math:`\mu_k`.
The Mahalanobis distance tells how close :math:`x` is from :math:`\mu_k`,
while also accounting for the variance of each feature. We can thus interpret
LDA as assigning :math:`x` to the class whose mean is the closest in terms of
Mahalanobis distance, while also accounting for the class prior probabilities.

The log-posterior of LDA can also be written [3]_ as:

.. math::

    \log P(y=k | x) = \omega_k^T x + \omega_{k0} + Cst.

where :math:`\omega_k = \Sigma^{-1} \mu_k` and
:math:`\omega_{k0} = -\frac{1}{2} \mu_k^T\Sigma^{-1}\mu_k + \log P (y = k)`.
These quantities correspond to the `coef_` and `intercept_` attributes,
respectively.

From the above formula, it is clear that LDA has a linear decision surface.
In the case of QDA, there are no assumptions on the covariance matrices
:math:`\Sigma_k` of the Gaussians, leading to quadratic decision surfaces.
See [1]_ for more details.

Mathematical formulation of LDA dimensionality reduction
========================================================

First note that the K means :math:`\mu_k` are vectors in :math:`\mathbb{R}^d`,
and they lie in an affine subspace :math:`H` of dimension at most
:math:`K - 1` (2 points lie on a line, 3 points lie on a plane, etc.).

As mentioned above, we can interpret LDA as assigning :math:`x` to the class
whose mean :math:`\mu_k` is the closest in terms of Mahalanobis distance,
while also accounting for the class prior probabilities. Alternatively, LDA is
equivalent to first *sphering* the data so that the covariance matrix is the
identity, and then assigning :math:`x` to the closest mean in terms of
Euclidean distance (still accounting for the class priors).
Computing Euclidean distances in this d-dimensional space is equivalent to
first projecting the data points into :math:`H`, and computing the distances
there (since the other dimensions will contribute equally to each class in
terms of distance). In other words, if :math:`x` is closest to :math:`\mu_k`
in the original space, it will also be the case in :math:`H`. This shows that,
implicit in the LDA classifier, there is a dimensionality reduction by linear
projection onto a :math:`K-1` dimensional space.

We can reduce the dimension even more, to a chosen :math:`L`, by projecting
onto the linear subspace :math:`H_L` which maximizes the variance of the
:math:`\mu^*_k` after projection (in effect, we are doing a form of PCA for
the transformed class means :math:`\mu^*_k`). This :math:`L` corresponds to
the ``n_components`` parameter used in the
:func:`~discriminant_analysis.LinearDiscriminantAnalysis.transform` method.
See [1]_ for more details.

Shrinkage and Covariance Estimator
==================================

Shrinkage is a form of regularization used to improve the estimation of
covariance matrices in situations where the number of training samples is
small compared to the number of features. In this scenario, the empirical
sample covariance is a poor estimator, and shrinkage helps improve the
generalization performance of the classifier. Shrinkage can be used with LDA
(or QDA) by setting the ``shrinkage`` parameter of the
:class:`~discriminant_analysis.LinearDiscriminantAnalysis` class (or
:class:`~discriminant_analysis.QuadraticDiscriminantAnalysis`) to `'auto'`.
This automatically determines the optimal shrinkage parameter in an analytic
way following the lemma introduced by Ledoit and Wolf [2]_. Note that
currently shrinkage only works when setting the ``solver`` parameter to
`'lsqr'` or `'eigen'` (only `'eigen'` is implemented for QDA).

The ``shrinkage`` parameter can also be manually set between 0 and 1.
In particular, a value of 0 corresponds to no shrinkage (which means the
empirical covariance matrix will be used) and a value of 1 corresponds to
complete shrinkage (which means that the diagonal matrix of variances will be
used as an estimate for the covariance matrix). Setting this parameter to a
value between these two extrema will estimate a shrunk version of the
covariance matrix.

The shrunk Ledoit and Wolf estimator of covariance may not always be the best
choice. For example, if the data are normally distributed, the Oracle
Approximating Shrinkage estimator :class:`sklearn.covariance.OAS` yields a
smaller Mean Squared Error than the one given by Ledoit and Wolf's formula
used with `shrinkage="auto"`. In LDA and QDA, the data are assumed to be
Gaussian conditionally to the class. If these assumptions hold, using LDA and
QDA with the OAS estimator of covariance will yield a better classification
accuracy than if Ledoit and Wolf or the empirical covariance estimator is
used.

The covariance estimator can be chosen using the ``covariance_estimator``
parameter of the :class:`discriminant_analysis.LinearDiscriminantAnalysis`
and :class:`discriminant_analysis.QuadraticDiscriminantAnalysis` classes. A
covariance estimator should have a :term:`fit` method and a ``covariance_``
attribute like all covariance estimators in the :mod:`sklearn.covariance`
module.

.. |shrinkage| image:: ../auto_examples/classification/images/sphx_glr_plot_lda_001.png
   :target: ../auto_examples/classification/plot_lda.html
   :scale: 75

.. centered:: |shrinkage|

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_classification_plot_lda.py`: Comparison of LDA
  classifiers with Empirical, Ledoit Wolf and OAS covariance estimator.

Estimation algorithms
=====================

Using LDA and QDA requires computing the log-posterior which depends on the
class priors :math:`P(y=k)`, the class means :math:`\mu_k`, and the
covariance matrices.
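The solvers described below are selected through the ``solver`` parameter; a
minimal sketch on illustrative synthetic data, in a few-samples/many-features
regime where shrinkage is relevant:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.RandomState(0)
# Few samples relative to features: a regime where shrinkage helps.
X = rng.randn(40, 20)
y = rng.randint(2, size=40)

# 'svd' (the default) avoids computing the covariance matrix but cannot
# shrink; 'lsqr' and 'eigen' compute it explicitly and support shrinkage.
fitted = {}
for solver, shrinkage in [("svd", None), ("lsqr", "auto"), ("eigen", "auto")]:
    clf = LinearDiscriminantAnalysis(solver=solver, shrinkage=shrinkage)
    fitted[solver] = clf.fit(X, y)
```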
The 'svd' solver is the default solver used for
:class:`~sklearn.discriminant_analysis.LinearDiscriminantAnalysis` and
:class:`~sklearn.discriminant_analysis.QuadraticDiscriminantAnalysis`. It can
perform both classification and transform (for LDA). As it does not rely on
the calculation of the covariance matrix, the 'svd' solver may be preferable
in situations where the number of features is large. The 'svd' solver cannot
be used with shrinkage. For QDA, the use of the SVD solver relies on the fact
that the covariance matrix :math:`\Sigma_k` is, by definition, equal to
:math:`\frac{1}{n - 1} X_k^T X_k = \frac{1}{n - 1} V S^2 V^T` where :math:`V`
comes from the SVD of the (centered) matrix: :math:`X_k = U S V^T`. It turns
out that we can compute the log-posterior above without having to explicitly
compute :math:`\Sigma`: computing :math:`S` and :math:`V` via the SVD of
:math:`X` is enough. For LDA, two SVDs are computed: the SVD of the centered
input matrix :math:`X` and the SVD of the class-wise mean vectors.

The `'lsqr'` solver is an efficient algorithm that only works for
classification. It needs to explicitly compute the covariance matrix
:math:`\Sigma`, and supports shrinkage and custom covariance estimators. This
solver computes the coefficients :math:`\omega_k = \Sigma^{-1}\mu_k` by
solving for :math:`\Sigma \omega = \mu_k`, thus avoiding the explicit
computation of the inverse :math:`\Sigma^{-1}`.

The `'eigen'` solver for
:class:`~discriminant_analysis.LinearDiscriminantAnalysis` is based on the
optimization of the between class scatter to within class scatter ratio. It
can be used for both classification and transform, and it supports shrinkage.
For :class:`~sklearn.discriminant_analysis.QuadraticDiscriminantAnalysis`, the
`'eigen'` solver is based on computing the eigenvalues and eigenvectors of
each class covariance matrix. It allows using shrinkage for classification.
However, the `'eigen'` solver needs to compute the covariance matrix, so it
might not be suitable for situations with a high number of features.

.. rubric:: References

.. [1] "The Elements of Statistical Learning", Hastie T., Tibshirani R.,
   Friedman J., Section 4.3, p.106-119, 2008.

.. [2] Ledoit O, Wolf M. "Honey, I Shrunk the Sample Covariance Matrix". The
   Journal of Portfolio Management 30(4), 110-119, 2004.

.. [3] R. O. Duda, P. E. Hart, D. G. Stork. "Pattern Classification" (Second
   Edition), section 2.6.2.
.. _cross_validation:

===================================================
Cross-validation: evaluating estimator performance
===================================================

.. currentmodule:: sklearn.model_selection

Learning the parameters of a prediction function and testing it on the same
data is a methodological mistake: a model that would just repeat the labels of
the samples that it has just seen would have a perfect score but would fail to
predict anything useful on yet-unseen data. This situation is called
**overfitting**. To avoid it, it is common practice when performing a
(supervised) machine learning experiment to hold out part of the available
data as a **test set** ``X_test, y_test``. Note that the word "experiment" is
not intended to denote academic use only, because even in commercial settings
machine learning usually starts out experimentally. Here is a flowchart of
typical cross validation workflow in model training. The best parameters can
be determined by :ref:`grid search ` techniques.

.. image:: ../images/grid_search_workflow.png
   :width: 400px
   :height: 240px
   :alt: Grid Search Workflow
   :align: center

In scikit-learn a random split into training and test sets can be quickly
computed with the :func:`train_test_split` helper function. Let's load the
iris data set to fit a linear support vector machine on it::

    >>> import numpy as np
    >>> from sklearn.model_selection import train_test_split
    >>> from sklearn import datasets
    >>> from sklearn import svm

    >>> X, y = datasets.load_iris(return_X_y=True)
    >>> X.shape, y.shape
    ((150, 4), (150,))

We can now quickly sample a training set while holding out 40% of the data for
testing (evaluating) our classifier::

    >>> X_train, X_test, y_train, y_test = train_test_split(
    ...     X, y, test_size=0.4, random_state=0)

    >>> X_train.shape, y_train.shape
    ((90, 4), (90,))
    >>> X_test.shape, y_test.shape
    ((60, 4), (60,))

    >>> clf = svm.SVC(kernel='linear', C=1).fit(X_train, y_train)
    >>> clf.score(X_test, y_test)
    0.96

When evaluating different settings ("hyperparameters") for estimators, such as
the ``C`` setting that must be manually set for an SVM, there is still a risk
of overfitting *on the test set* because the parameters can be tweaked until
the estimator performs optimally. This way, knowledge about the test set can
"leak" into the model and evaluation metrics no longer report on
generalization performance. To solve this problem, yet another part of the
dataset can be held out as a so-called "validation set": training proceeds on
the training set, after which evaluation is done on the validation set, and
when the experiment seems to be successful, final evaluation can be done on
the test set.

However, by partitioning the available data into three sets, we drastically
reduce the number of samples which can be used for learning the model, and the
results can depend on a particular random choice for the pair of (train,
validation) sets.

A solution to this problem is a procedure called cross-validation (CV for
short). A test set should still be held out for final evaluation, but the
validation set is no longer needed when doing CV. In the basic approach,
called *k*-fold CV, the training set is split into *k* smaller sets (other
approaches are described below, but generally follow the same principles). The
following procedure is followed for each of the *k* "folds":

* A model is trained using :math:`k-1` of the folds as training data;
* the resulting model is validated on the remaining part of the data (i.e., it
  is used as a test set to compute a performance measure such as accuracy).

The performance measure reported by *k*-fold cross-validation is then the
average of the values computed in the loop.
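The k-fold procedure above can be sketched explicitly with
:class:`KFold` (using the iris data and a linear SVC as in the earlier
snippets):

```python
import numpy as np
from sklearn import datasets, svm
from sklearn.model_selection import KFold

X, y = datasets.load_iris(return_X_y=True)

# Manually run 5-fold CV: train on k-1 folds, score on the held-out fold.
scores = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    clf = svm.SVC(kernel="linear", C=1).fit(X[train_idx], y[train_idx])
    scores.append(clf.score(X[test_idx], y[test_idx]))

# The reported performance is the average over the folds.
mean_score = np.mean(scores)
```

In practice this loop is what :func:`cross_val_score`, shown next, does for
you.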
This approach can be computationally expensive, but does not waste too much data (as is the case when fixing an arbitrary validation set), which is a major advantage in problems such as inverse inference where the number of samples
is very small. .. image:: ../images/grid\_search\_cross\_validation.png :width: 500px :height: 300px :alt: A depiction of a 5 fold cross validation on a training set, while holding out a test set. :align: center Computing cross-validated metrics ================================= The simplest way to use cross-validation is to call the :func:`cross\_val\_score` helper function on the estimator and the dataset. The following example demonstrates how to estimate the accuracy of a linear kernel support vector machine on the iris dataset by splitting the data, fitting a model and computing the score 5 consecutive times (with different splits each time):: >>> from sklearn.model\_selection import cross\_val\_score >>> clf = svm.SVC(kernel='linear', C=1, random\_state=42) >>> scores = cross\_val\_score(clf, X, y, cv=5) >>> scores array([0.96, 1. , 0.96, 0.96, 1. ]) The mean score and the standard deviation are hence given by:: >>> print("%0.2f accuracy with a standard deviation of %0.2f" % (scores.mean(), scores.std())) 0.98 accuracy with a standard deviation of 0.02 By default, the score computed at each CV iteration is the ``score`` method of the estimator. It is possible to change this by using the scoring parameter:: >>> from sklearn import metrics >>> scores = cross\_val\_score( ... clf, X, y, cv=5, scoring='f1\_macro') >>> scores array([0.96, 1., 0.96, 0.96, 1.]) See :ref:`scoring\_parameter` for details. In the case of the Iris dataset, the samples are balanced across target classes, hence the accuracy and the F1-score are almost equal. 
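The scoring parameter also accepts a callable built with :func:`make\_scorer`, not just a predefined scorer name. A minimal sketch; ``balanced_accuracy_score`` is an arbitrary illustrative choice of metric:

```python
# Passing a callable as the scoring parameter instead of a string name.
from sklearn import datasets, svm
from sklearn.metrics import balanced_accuracy_score, make_scorer
from sklearn.model_selection import cross_val_score

X, y = datasets.load_iris(return_X_y=True)
clf = svm.SVC(kernel="linear", C=1, random_state=42)

# make_scorer wraps a metric function into a scorer usable by CV helpers
scores = cross_val_score(clf, X, y, cv=5,
                         scoring=make_scorer(balanced_accuracy_score))
print(scores.shape)  # (5,)
```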
When the ``cv`` argument is an integer, :func:`cross\_val\_score` uses the :class:`KFold` or :class:`StratifiedKFold` strategies by default, the latter being used if the estimator derives from :class:`ClassifierMixin `. It is also possible to use other cross validation strategies by passing a cross validation iterator instead, for instance:: >>> from sklearn.model\_selection import ShuffleSplit >>> n\_samples = X.shape[0] >>> cv = ShuffleSplit(n\_splits=5, test\_size=0.3, random\_state=0) >>> cross\_val\_score(clf, X, y, cv=cv) array([0.977, 0.977, 1., 0.955, 1.]) Another option is to use an iterable yielding (train, test) splits as arrays of indices, for example:: >>> def custom\_cv\_2folds(X): ... n = X.shape[0] ... i = 1 ... while i <= 2: ... idx = np.arange(n \* (i - 1) / 2, n \* i / 2, dtype=int) ... yield idx, idx ... i += 1 ... >>> custom\_cv = custom\_cv\_2folds(X) >>> cross\_val\_score(clf, X, y, cv=custom\_cv) array([1. , 0.973]) .. dropdown:: Data transformation with held-out data Just as it is important to test a predictor on data held-out from training, preprocessing (such as standardization, feature selection, etc.) and similar :ref:`data transformations ` similarly should be learnt from a training set and applied to held-out data for prediction:: >>> from sklearn import preprocessing >>> X\_train, X\_test, y\_train, y\_test = train\_test\_split( ... 
X, y, test\_size=0.4, random\_state=0) >>> scaler = preprocessing.StandardScaler().fit(X\_train) >>> X\_train\_transformed = scaler.transform(X\_train) >>> clf = svm.SVC(C=1).fit(X\_train\_transformed, y\_train) >>> X\_test\_transformed = scaler.transform(X\_test) >>> clf.score(X\_test\_transformed, y\_test) 0.9333 A :class:`Pipeline ` makes it easier to compose estimators, providing this behavior under cross-validation:: >>> from sklearn.pipeline import make\_pipeline >>> clf = make\_pipeline(preprocessing.StandardScaler(), svm.SVC(C=1)) >>> cross\_val\_score(clf, X, y, cv=cv) array([0.977, 0.933, 0.955, 0.933, 0.977]) See :ref:`combining\_estimators`. .. \_multimetric\_cross\_validation: The cross\_validate function and multiple metric evaluation ---------------------------------------------------------- The :func:`cross\_validate` function differs from :func:`cross\_val\_score` in two ways: - It allows specifying multiple metrics for evaluation. - It returns a dict containing fit-times, score-times (and optionally training scores, fitted estimators, train-test split indices) in addition to the test score. For single metric evaluation, where the scoring parameter is a
string, callable or None, the keys will be - ``['test\_score', 'fit\_time', 'score\_time']`` And for multiple metric evaluation, the return value is a dict with the following keys - ``['test\_<scorer1\_name>', 'test\_<scorer2\_name>', 'test\_<scorer...>', 'fit\_time', 'score\_time']`` ``return\_train\_score`` is set to ``False`` by default to save computation time. To evaluate the scores on the training set as well, you need to set it to ``True``. You may also retain the estimator fitted on each training set by setting ``return\_estimator=True``. Similarly, you may set `return\_indices=True` to retain the training and testing indices used to split the dataset into train and test sets for each cv split. The multiple metrics can be specified either as a list, tuple or set of predefined scorer names:: >>> from sklearn.model\_selection import cross\_validate >>> from sklearn.metrics import recall\_score >>> scoring = ['precision\_macro', 'recall\_macro'] >>> clf = svm.SVC(kernel='linear', C=1, random\_state=0) >>> scores = cross\_validate(clf, X, y, scoring=scoring) >>> sorted(scores.keys()) ['fit\_time', 'score\_time', 'test\_precision\_macro', 'test\_recall\_macro'] >>> scores['test\_recall\_macro'] array([0.96, 1., 0.96, 0.96, 1.]) Or as a dict mapping scorer name to a predefined or custom scoring function:: >>> from sklearn.metrics import make\_scorer >>> scoring = {'prec\_macro': 'precision\_macro', ... 'rec\_macro': make\_scorer(recall\_score, average='macro')} >>> scores = cross\_validate(clf, X, y, scoring=scoring, ... 
cv=5, return\_train\_score=True) >>> sorted(scores.keys()) ['fit\_time', 'score\_time', 'test\_prec\_macro', 'test\_rec\_macro', 'train\_prec\_macro', 'train\_rec\_macro'] >>> scores['train\_rec\_macro'] array([0.97, 0.97, 0.99, 0.98, 0.98]) Here is an example of ``cross\_validate`` using a single metric:: >>> scores = cross\_validate(clf, X, y, ... scoring='precision\_macro', cv=5, ... return\_estimator=True) >>> sorted(scores.keys()) ['estimator', 'fit\_time', 'score\_time', 'test\_score'] Obtaining predictions by cross-validation ----------------------------------------- The function :func:`cross\_val\_predict` has a similar interface to :func:`cross\_val\_score`, but returns, for each element in the input, the prediction that was obtained for that element when it was in the test set. Only cross-validation strategies that assign all elements to a test set exactly once can be used (otherwise, an exception is raised). .. warning:: Note on inappropriate usage of cross\_val\_predict The result of :func:`cross\_val\_predict` may be different from those obtained using :func:`cross\_val\_score` as the elements are grouped in different ways. The function :func:`cross\_val\_score` takes an average over cross-validation folds, whereas :func:`cross\_val\_predict` simply returns the labels (or probabilities) from several distinct models undistinguished. Thus, :func:`cross\_val\_predict` is not an appropriate measure of generalization error. The function :func:`cross\_val\_predict` is appropriate for: - Visualization of predictions obtained from different models. - Model blending: When predictions of one supervised estimator are used to train another estimator in ensemble methods. The available cross validation iterators are introduced in the following section. .. 
rubric:: Examples \* :ref:`sphx\_glr\_auto\_examples\_model\_selection\_plot\_roc\_crossval.py`, \* :ref:`sphx\_glr\_auto\_examples\_feature\_selection\_plot\_rfe\_with\_cross\_validation.py`, \* :ref:`sphx\_glr\_auto\_examples\_model\_selection\_plot\_grid\_search\_digits.py`, \* :ref:`sphx\_glr\_auto\_examples\_model\_selection\_plot\_grid\_search\_text\_feature\_extraction.py`, \* :ref:`sphx\_glr\_auto\_examples\_model\_selection\_plot\_cv\_predict.py`, \* :ref:`sphx\_glr\_auto\_examples\_model\_selection\_plot\_nested\_cross\_validation\_iris.py`. Cross validation iterators ========================== The following sections list utilities to generate indices that can be used to generate dataset splits according to different cross validation strategies. .. \_iid\_cv: Cross-validation iterators for i.i.d. data ------------------------------------------ Assuming that some data is Independent and Identically Distributed (i.i.d.) is making the assumption that all samples stem from the same generative process and that the generative process is assumed to have no memory of past generated samples. The following cross-validators can be used in such cases. .. note:: While i.i.d. data is a common assumption in machine learning theory, it rarely holds in practice. If one knows that the samples have been generated using a time-dependent process, it is safer to use a :ref:`time-series aware cross-validation scheme `. Similarly, if we know that the generative process has a group structure
(samples collected from different subjects, experiments, measurement devices), it is safer to use :ref:`group-wise cross-validation `. .. \_k\_fold: K-fold ^^^^^^ :class:`KFold` divides all the samples into :math:`k` groups of samples, called folds (if :math:`k = n`, this is equivalent to the \*Leave One Out\* strategy), of equal sizes (if possible). The prediction function is learned using :math:`k - 1` folds, and the fold left out is used for test. Example of 2-fold cross-validation on a dataset with 4 samples:: >>> import numpy as np >>> from sklearn.model\_selection import KFold >>> X = ["a", "b", "c", "d"] >>> kf = KFold(n\_splits=2) >>> for train, test in kf.split(X): ... print("%s %s" % (train, test)) [2 3] [0 1] [0 1] [2 3] Here is a visualization of the cross-validation behavior. Note that :class:`KFold` is not affected by classes or groups. .. figure:: ../auto\_examples/model\_selection/images/sphx\_glr\_plot\_cv\_indices\_006.png :target: ../auto\_examples/model\_selection/plot\_cv\_indices.html :align: center :scale: 75% Each fold is constituted by two arrays: the first one is related to the \*training set\*, and the second one to the \*test set\*. Thus, one can create the training/test sets using numpy indexing:: >>> X = np.array([[0., 0.], [1., 1.], [-1., -1.], [2., 2.]]) >>> y = np.array([0, 1, 0, 1]) >>> X\_train, X\_test, y\_train, y\_test = X[train], X[test], y[train], y[test] .. \_repeated\_k\_fold: Repeated K-Fold ^^^^^^^^^^^^^^^ :class:`RepeatedKFold` repeats :class:`KFold` :math:`n` times, producing different splits in each repetition. 
Example of 2-fold K-Fold repeated 2 times:: >>> import numpy as np >>> from sklearn.model\_selection import RepeatedKFold >>> X = np.array([[1, 2], [3, 4], [1, 2], [3, 4]]) >>> random\_state = 12883823 >>> rkf = RepeatedKFold(n\_splits=2, n\_repeats=2, random\_state=random\_state) >>> for train, test in rkf.split(X): ... print("%s %s" % (train, test)) ... [2 3] [0 1] [0 1] [2 3] [0 2] [1 3] [1 3] [0 2] Similarly, :class:`RepeatedStratifiedKFold` repeats :class:`StratifiedKFold` :math:`n` times with different randomization in each repetition. .. \_leave\_one\_out: Leave One Out (LOO) ^^^^^^^^^^^^^^^^^^^ :class:`LeaveOneOut` (or LOO) is a simple cross-validation. Each learning set is created by taking all the samples except one, the test set being the sample left out. Thus, for :math:`n` samples, we have :math:`n` different training sets and :math:`n` different test sets. This cross-validation procedure does not waste much data as only one sample is removed from the training set:: >>> from sklearn.model\_selection import LeaveOneOut >>> X = [1, 2, 3, 4] >>> loo = LeaveOneOut() >>> for train, test in loo.split(X): ... print("%s %s" % (train, test)) [1 2 3] [0] [0 2 3] [1] [0 1 3] [2] [0 1 2] [3] Potential users of LOO for model selection should weigh a few known caveats. When compared with :math:`k`-fold cross validation, one builds :math:`n` models from :math:`n` samples instead of :math:`k` models, where :math:`n > k`. Moreover, each is trained on :math:`n - 1` samples rather than :math:`(k-1) n / k`. In both ways, assuming :math:`k` is not too large and :math:`k < n`, LOO is more computationally expensive than :math:`k`-fold cross validation. In terms of accuracy, LOO often results in high variance as an estimator for the test error. Intuitively, since :math:`n - 1` of the :math:`n` samples are used to build each model, models constructed from folds are virtually identical to each other and to the model built from the entire training set. 
However, if the learning curve is steep for the training size in
question, then 5 or 10-fold cross validation can overestimate the generalization error. As a general rule, most authors and empirical evidence suggest that 5 or 10-fold cross validation should be preferred to LOO. .. dropdown:: References \* T. Hastie, R. Tibshirani, J. Friedman, `The Elements of Statistical Learning `\_, Springer 2009; \* L. Breiman, P. Spector, `Submodel selection and evaluation in regression: The X-random case `\_, International Statistical Review 1992; \* R. Kohavi, `A Study of Cross-Validation and Bootstrap for Accuracy Estimation and Model Selection `\_, Intl. Jnt. Conf. AI 1995; \* R. Bharat Rao, G. Fung, R. Rosales, `On the Dangers of Cross-Validation. An Experimental Evaluation `\_, SIAM 2008; \* G. James, D. Witten, T. Hastie, R. Tibshirani, `An Introduction to Statistical Learning `\_, Springer 2013. .. \_leave\_p\_out: Leave P Out (LPO) ^^^^^^^^^^^^^^^^^ :class:`LeavePOut` is very similar to :class:`LeaveOneOut` as it creates all the possible training/test sets by removing :math:`p` samples from the complete set. For :math:`n` samples, this produces :math:`{n \choose p}` train-test pairs. Unlike :class:`LeaveOneOut` and :class:`KFold`, the test sets will overlap for :math:`p > 1`. Example of Leave-2-Out on a dataset with 4 samples:: >>> from sklearn.model\_selection import LeavePOut >>> X = np.ones(4) >>> lpo = LeavePOut(p=2) >>> for train, test in lpo.split(X): ... print("%s %s" % (train, test)) [2 3] [0 1] [1 3] [0 2] [1 2] [0 3] [0 3] [1 2] [0 2] [1 3] [0 1] [2 3] .. \_ShuffleSplit: Random permutations cross-validation a.k.a. 
Shuffle & Split ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ The :class:`ShuffleSplit` iterator will generate a user defined number of independent train / test dataset splits. Samples are first shuffled and then split into a pair of train and test sets. It is possible to control the randomness for reproducibility of the results by explicitly seeding the ``random\_state`` pseudo random number generator. Here is a usage example:: >>> from sklearn.model\_selection import ShuffleSplit >>> X = np.arange(10) >>> ss = ShuffleSplit(n\_splits=5, test\_size=0.25, random\_state=0) >>> for train\_index, test\_index in ss.split(X): ... print("%s %s" % (train\_index, test\_index)) [9 1 6 7 3 0 5] [2 8 4] [2 9 8 0 6 7 4] [3 5 1] [4 5 1 0 6 9 7] [2 3 8] [2 7 5 8 0 3 4] [6 1 9] [4 1 0 6 8 9 3] [5 2 7] Here is a visualization of the cross-validation behavior. Note that :class:`ShuffleSplit` is not affected by classes or groups. .. figure:: ../auto\_examples/model\_selection/images/sphx\_glr\_plot\_cv\_indices\_008.png :target: ../auto\_examples/model\_selection/plot\_cv\_indices.html :align: center :scale: 75% :class:`ShuffleSplit` is thus a good alternative to :class:`KFold` cross validation that allows a finer control on the number of iterations and the proportion of samples on each side of the train / test split. .. \_stratification: Cross-validation iterators with stratification based on class labels -------------------------------------------------------------------- Some classification tasks can naturally exhibit rare classes: for instance, there could be orders of magnitude more negative observations than positive observations (e.g. medical screening, fraud detection, etc). As a result, cross-validation splitting can generate train or validation folds without any occurrence of a particular class. This typically leads to undefined classification metrics (e.g. 
ROC AUC), exceptions raised when attempting to call :term:`fit` or missing columns in the output of the `predict\_proba` or `decision\_function` methods of multiclass classifiers trained on different folds. To mitigate such problems, splitters such as :class:`StratifiedKFold` and :class:`StratifiedShuffleSplit` implement stratified sampling to
ensure that relative class frequencies are approximately preserved in each fold. .. note:: Stratified sampling was introduced in scikit-learn to work around the aforementioned engineering problems rather than to solve a statistical one. Stratification makes cross-validation folds more homogeneous, and as a result hides some of the variability inherent to fitting models with a limited number of observations. Consequently, stratification can artificially shrink the spread of the metric measured across cross-validation iterations: the inter-fold variability no longer reflects the uncertainty in the performance of classifiers in the presence of rare classes. .. \_stratified\_k\_fold: Stratified K-fold ^^^^^^^^^^^^^^^^^ :class:`StratifiedKFold` is a variation of \*K-fold\* which returns \*stratified\* folds: each set contains approximately the same percentage of samples of each target class as the complete set. Here is an example of stratified 3-fold cross-validation on a dataset with 50 samples from two unbalanced classes. We show the number of samples in each class and compare with :class:`KFold`. >>> from sklearn.model\_selection import StratifiedKFold, KFold >>> import numpy as np >>> X, y = np.ones((50, 1)), np.hstack(([0] \* 45, [1] \* 5)) >>> skf = StratifiedKFold(n\_splits=3) >>> for train, test in skf.split(X, y): ... print('train - {} | test - {}'.format( ... 
np.bincount(y[train]), np.bincount(y[test]))) train - [30 3] | test - [15 2] train - [30 3] | test - [15 2] train - [30 4] | test - [15 1] >>> kf = KFold(n\_splits=3) >>> for train, test in kf.split(X, y): ... print('train - {} | test - {}'.format( ... np.bincount(y[train]), np.bincount(y[test]))) train - [28 5] | test - [17] train - [28 5] | test - [17] train - [34] | test - [11 5] We can see that :class:`StratifiedKFold` preserves the class ratios (approximately 1 / 10) in both train and test datasets. Here is a visualization of the cross-validation behavior. .. figure:: ../auto\_examples/model\_selection/images/sphx\_glr\_plot\_cv\_indices\_009.png :target: ../auto\_examples/model\_selection/plot\_cv\_indices.html :align: center :scale: 75% :class:`RepeatedStratifiedKFold` can be used to repeat Stratified K-Fold n times with different randomization in each repetition. .. \_stratified\_shuffle\_split: Stratified Shuffle Split ^^^^^^^^^^^^^^^^^^^^^^^^ :class:`StratifiedShuffleSplit` is a variation of \*ShuffleSplit\*, which returns stratified splits, \*i.e.\* which creates splits by preserving the same percentage for each target class as in the complete set. Here is a visualization of the cross-validation behavior. .. figure:: ../auto\_examples/model\_selection/images/sphx\_glr\_plot\_cv\_indices\_012.png :target: ../auto\_examples/model\_selection/plot\_cv\_indices.html :align: center :scale: 75% .. \_predefined\_split: Predefined fold-splits / Validation-sets ---------------------------------------- For some datasets, a pre-defined split of the data into training- and validation fold or into several cross-validation folds already exists. Using :class:`PredefinedSplit` it is possible to use these folds e.g. when searching for hyperparameters. For example, when using a validation set, set the ``test\_fold`` to 0 for all samples that are part of the validation set, and to -1 for all other samples. .. 
\_group\_cv: Cross-validation iterators for grouped data ------------------------------------------- The i.i.d. assumption is broken if the underlying generative process yields groups of dependent samples. Such a grouping of data is domain specific. An example would be when there is medical data collected from multiple patients, with multiple samples taken from each patient. And such data is likely to be dependent on the individual group. In our example, the patient id for each sample will be its group identifier. In this case we would like to know if a model trained on a particular set of groups generalizes well to the unseen groups. To measure
this, we need to ensure that all the samples in the validation fold come from groups that are not represented at all in the paired training fold. The following cross-validation splitters can be used to do that. The grouping identifier for the samples is specified via the ``groups`` parameter. .. \_group\_k\_fold: Group K-fold ^^^^^^^^^^^^ :class:`GroupKFold` is a variation of K-fold which ensures that the same group is not represented in both testing and training sets. For example, if the data is obtained from different subjects with several samples per subject, and if the model is flexible enough to learn from highly person-specific features, it could fail to generalize to new subjects. :class:`GroupKFold` makes it possible to detect this kind of overfitting situation. Imagine you have three subjects, each with an associated number from 1 to 3:: >>> from sklearn.model\_selection import GroupKFold >>> X = [0.1, 0.2, 2.2, 2.4, 2.3, 4.55, 5.8, 8.8, 9, 10] >>> y = ["a", "b", "b", "b", "c", "c", "c", "d", "d", "d"] >>> groups = [1, 1, 1, 2, 2, 2, 3, 3, 3, 3] >>> gkf = GroupKFold(n\_splits=3) >>> for train, test in gkf.split(X, y, groups=groups): ... print("%s %s" % (train, test)) [0 1 2 3 4 5] [6 7 8 9] [0 1 2 6 7 8 9] [3 4 5] [3 4 5 6 7 8 9] [0 1 2] Each subject is in a different testing fold, and the same subject is never in both testing and training. Notice that the folds do not have exactly the same size due to the imbalance in the data. If class proportions must be balanced across folds, :class:`StratifiedGroupKFold` is a better option. Here is a visualization of the cross-validation behavior. .. 
figure:: ../auto\_examples/model\_selection/images/sphx\_glr\_plot\_cv\_indices\_007.png :target: ../auto\_examples/model\_selection/plot\_cv\_indices.html :align: center :scale: 75% Similar to :class:`KFold`, the test sets from :class:`GroupKFold` will form a complete partition of all the data. While :class:`GroupKFold` attempts to place the same number of samples in each fold when ``shuffle=False``, when ``shuffle=True`` it attempts to place an equal number of distinct groups in each fold (but does not account for group sizes). .. \_stratified\_group\_k\_fold: StratifiedGroupKFold ^^^^^^^^^^^^^^^^^^^^ :class:`StratifiedGroupKFold` is a cross-validation scheme that combines both :class:`StratifiedKFold` and :class:`GroupKFold`. The idea is to try to preserve the distribution of classes in each split while keeping each group within a single split. That might be useful when you have an unbalanced dataset so that using just :class:`GroupKFold` might produce skewed splits. Example:: >>> from sklearn.model\_selection import StratifiedGroupKFold >>> X = list(range(18)) >>> y = [1] \* 6 + [0] \* 12 >>> groups = [1, 2, 3, 3, 4, 4, 1, 1, 2, 2, 3, 4, 5, 5, 5, 6, 6, 6] >>> sgkf = StratifiedGroupKFold(n\_splits=3) >>> for train, test in sgkf.split(X, y, groups=groups): ... print("%s %s" % (train, test)) [ 0 2 3 4 5 6 7 10 11 15 16 17] [ 1 8 9 12 13 14] [ 0 1 4 5 6 7 8 9 11 12 13 14] [ 2 3 10 15 16 17] [ 1 2 3 8 9 10 12 13 14 15 16 17] [ 0 4 5 6 7 11] .. dropdown:: Implementation notes - With the current implementation full shuffle is not possible in most scenarios. When shuffle=True, the following happens: 1. All groups are
shuffled. 2. Groups are sorted by standard deviation of classes using a stable sort. 3. Sorted groups are iterated over and assigned to folds. That means that only groups with the same standard deviation of class distribution will be shuffled, which might be useful when each group has only a single class. - The algorithm greedily assigns each group to one of n\_splits test sets, choosing the test set that minimises the variance in class distribution across test sets. Group assignment proceeds from groups with highest to lowest variance in class frequency, i.e. large groups peaked on one or few classes are assigned first. - This split is suboptimal in the sense that it might produce imbalanced splits even if perfect stratification is possible. If you have a relatively close distribution of classes in each group, using :class:`GroupKFold` is better. Here is a visualization of cross-validation behavior for uneven groups: .. figure:: ../auto\_examples/model\_selection/images/sphx\_glr\_plot\_cv\_indices\_005.png :target: ../auto\_examples/model\_selection/plot\_cv\_indices.html :align: center :scale: 75% .. \_leave\_one\_group\_out: Leave One Group Out ^^^^^^^^^^^^^^^^^^^ :class:`LeaveOneGroupOut` is a cross-validation scheme where each split holds out samples belonging to one specific group. Group information is provided via an array that encodes the group of each sample. Each training set is thus constituted by all the samples except the ones related to a specific group. This is the same as :class:`LeavePGroupsOut` with `n\_groups=1` and the same as :class:`GroupKFold` with `n\_splits` equal to the number of unique labels passed to the `groups` parameter. 
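The stated equivalence between :class:`LeaveOneGroupOut` and :class:`GroupKFold` can be checked on a small toy dataset (hypothetical data, chosen only for illustration):

```python
# With 4 distinct groups, LeaveOneGroupOut and GroupKFold(n_splits=4)
# hold out the same test sets, one group per split (order may differ,
# so we compare them as sets).
import numpy as np
from sklearn.model_selection import GroupKFold, LeaveOneGroupOut

X = np.arange(8).reshape(-1, 1)
y = np.array([0, 0, 1, 1, 0, 1, 0, 1])
groups = np.array([1, 1, 2, 2, 3, 3, 4, 4])

logo_tests = {tuple(test) for _, test in LeaveOneGroupOut().split(X, y, groups)}
gkf_tests = {tuple(test) for _, test in GroupKFold(n_splits=4).split(X, y, groups)}
print(logo_tests == gkf_tests)  # True
```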
For example, in the cases of multiple experiments, :class:`LeaveOneGroupOut` can be used to create a cross-validation based on the different experiments: we create a training set using the samples of all the experiments except one:: >>> from sklearn.model\_selection import LeaveOneGroupOut >>> X = [1, 5, 10, 50, 60, 70, 80] >>> y = [0, 1, 1, 2, 2, 2, 2] >>> groups = [1, 1, 2, 2, 3, 3, 3] >>> logo = LeaveOneGroupOut() >>> for train, test in logo.split(X, y, groups=groups): ... print("%s %s" % (train, test)) [2 3 4 5 6] [0 1] [0 1 4 5 6] [2 3] [0 1 2 3] [4 5 6] Another common application is to use time information: for instance the groups could be the year of collection of the samples and thus allow for cross-validation against time-based splits. .. \_leave\_p\_groups\_out: Leave P Groups Out ^^^^^^^^^^^^^^^^^^ :class:`LeavePGroupsOut` is similar to :class:`LeaveOneGroupOut`, but removes samples related to :math:`P` groups for each training/test set. All possible combinations of :math:`P` groups are left out, meaning test sets will overlap for :math:`P>1`. Example of Leave-2-Group Out:: >>> from sklearn.model\_selection import LeavePGroupsOut >>> X = np.arange(6) >>> y = [1, 1, 1, 2, 2, 2] >>> groups = [1, 1, 2, 2, 3, 3] >>> lpgo = LeavePGroupsOut(n\_groups=2) >>> for train, test in lpgo.split(X, y, groups=groups): ... print("%s %s" % (train, test)) [4 5] [0 1 2 3] [2 3] [0 1 4 5] [0 1] [2 3 4 5] .. \_group\_shuffle\_split: Group Shuffle Split ^^^^^^^^^^^^^^^^^^^ The :class:`GroupShuffleSplit` iterator behaves as a combination of :class:`ShuffleSplit` and :class:`LeavePGroupsOut`, and generates a sequence of randomized partitions in which a subset of groups are held out for each split. Each train/test split is performed independently meaning there is no guaranteed relationship between successive test sets. Here is a usage example:: >>> from sklearn.model\_selection import GroupShuffleSplit >>> X = [0.1, 0.2,
and generates a sequence of randomized partitions in which a subset of groups
are held out for each split. Each train/test split is performed independently,
meaning there is no guaranteed relationship between successive test sets.

Here is a usage example::

    >>> from sklearn.model_selection import GroupShuffleSplit

    >>> X = [0.1, 0.2, 2.2, 2.4, 2.3, 4.55, 5.8, 0.001]
    >>> y = ["a", "b", "b", "b", "c", "c", "c", "a"]
    >>> groups = [1, 1, 2, 2, 3, 3, 4, 4]
    >>> gss = GroupShuffleSplit(n_splits=4, test_size=0.5, random_state=0)
    >>> for train, test in gss.split(X, y, groups=groups):
    ...     print("%s %s" % (train, test))
    ...
    [0 1 2 3] [4 5 6 7]
    [2 3 6 7] [0 1 4 5]
    [2 3 4 5] [0 1 6 7]
    [4 5 6 7] [0 1 2 3]

Here is a visualization of the cross-validation behavior.

.. figure:: ../auto_examples/model_selection/images/sphx_glr_plot_cv_indices_011.png
   :target: ../auto_examples/model_selection/plot_cv_indices.html
   :align: center
   :scale: 75%

This class is useful when the behavior of :class:`LeavePGroupsOut` is
desired, but the number of groups is large enough that generating all
possible partitions with :math:`P` groups withheld would be prohibitively
expensive. In such a scenario, :class:`GroupShuffleSplit` provides a random
sample (with replacement) of the train / test splits generated by
:class:`LeavePGroupsOut`.

Using cross-validation iterators to split train and test
--------------------------------------------------------

The above group cross-validation functions may also be useful for splitting a
dataset into training and testing subsets. Note that the convenience function
:func:`train_test_split` is a wrapper around :func:`ShuffleSplit` and thus
only allows for stratified splitting (using the class labels) and cannot
account for groups.

To perform the train and test split, use the indices for the train and test
subsets yielded by the generator output by the `split()` method of the
cross-validation splitter.
For example::

    >>> import numpy as np
    >>> from sklearn.model_selection import GroupShuffleSplit

    >>> X = np.array([0.1, 0.2, 2.2, 2.4, 2.3, 4.55, 5.8, 0.001])
    >>> y = np.array(["a", "b", "b", "b", "c", "c", "c", "a"])
    >>> groups = np.array([1, 1, 2, 2, 3, 3, 4, 4])
    >>> train_indx, test_indx = next(
    ...     GroupShuffleSplit(random_state=7).split(X, y, groups)
    ... )
    >>> X_train, X_test, y_train, y_test = \
    ...     X[train_indx], X[test_indx], y[train_indx], y[test_indx]
    >>> X_train.shape, X_test.shape
    ((6,), (2,))
    >>> np.unique(groups[train_indx]), np.unique(groups[test_indx])
    (array([1, 2, 4]), array([3]))

.. _timeseries_cv:

Cross validation of time series data
------------------------------------

Time series data is characterized by the correlation between observations
that are near in time (*autocorrelation*). However, classical
cross-validation techniques such as :class:`KFold` and :class:`ShuffleSplit`
assume the samples are independent and identically distributed, and would
result in unreasonable correlation between training and testing instances
(yielding poor estimates of generalization error) on time series data.
Therefore, it is very important to evaluate our model for time series data on
the "future" observations least like those that are used to train the model.
To achieve this, one solution is provided by :class:`TimeSeriesSplit`.

.. _time_series_split:

Time Series Split
^^^^^^^^^^^^^^^^^

:class:`TimeSeriesSplit` is a variation of *k-fold* which returns the first
:math:`k` folds as train set and the :math:`(k+1)` th fold as test set. Note
that unlike standard cross-validation methods, successive training sets are
supersets of those that come before them. Also, it adds all surplus data to
the first training partition, which is always used to train the model.

This class can be used to cross-validate time series data samples that are
observed at fixed time intervals.
Indeed, the folds must represent the same duration, in order to have
comparable metrics across folds.
Example of 3-split time series cross-validation on a dataset with 6 samples::

    >>> from sklearn.model_selection import TimeSeriesSplit

    >>> X = np.array([[1, 2], [3, 4], [1, 2], [3, 4], [1, 2], [3, 4]])
    >>> y = np.array([1, 2, 3, 4, 5, 6])
    >>> tscv = TimeSeriesSplit(n_splits=3)
    >>> print(tscv)
    TimeSeriesSplit(gap=0, max_train_size=None, n_splits=3, test_size=None)
    >>> for train, test in tscv.split(X):
    ...     print("%s %s" % (train, test))
    [0 1 2] [3]
    [0 1 2 3] [4]
    [0 1 2 3 4] [5]

Here is a visualization of the cross-validation behavior.

.. figure:: ../auto_examples/model_selection/images/sphx_glr_plot_cv_indices_013.png
   :target: ../auto_examples/model_selection/plot_cv_indices.html
   :align: center
   :scale: 75%

A note on shuffling
===================

If the data ordering is not arbitrary (e.g. samples with the same class label
are contiguous), shuffling it first may be essential to get a meaningful
cross-validation result. However, the opposite may be true if the samples are
not independently and identically distributed. For example, if samples
correspond to news articles, and are ordered by their time of publication,
then shuffling the data will likely lead to a model that is overfit and an
inflated validation score: it will be tested on samples that are artificially
similar (close in time) to training samples.

Some cross validation iterators, such as :class:`KFold`, have an inbuilt
option to shuffle the data indices before splitting them. Note that:

* This consumes less memory than shuffling the data directly.

* By default no shuffling occurs, including for the (stratified) K fold
  cross-validation performed by specifying ``cv=some_integer`` to
  :func:`cross_val_score`, grid search, etc. Keep in mind that
  :func:`train_test_split` still returns a random split.
* The ``random_state`` parameter defaults to ``None``, meaning that the
  shuffling will be different every time ``KFold(..., shuffle=True)`` is
  iterated. However, ``GridSearchCV`` will use the same shuffling for each
  set of parameters validated by a single call to its ``fit`` method.

* To get identical results for each split, set ``random_state`` to an
  integer.

For more details on how to control the randomness of cv splitters and avoid
common pitfalls, see :ref:`randomness`.

Cross validation and model selection
====================================

Cross validation iterators can also be used to directly perform model
selection using Grid Search for the optimal hyperparameters of the model.
This is the topic of the next section: :ref:`grid_search`.

.. _permutation_test_score:

Permutation test score
======================

:func:`~sklearn.model_selection.permutation_test_score` offers another way
to evaluate the performance of a :term:`predictor`. It provides a
permutation-based p-value, which represents how likely an observed
performance of the estimator would be obtained by chance. The null
hypothesis in this test is that the estimator fails to leverage any
statistical dependency between the features and the targets to make correct
predictions on left-out data.

:func:`~sklearn.model_selection.permutation_test_score` generates a null
distribution by calculating `n_permutations` different permutations of the
data. In each permutation the target values are randomly shuffled, thereby
removing any dependency between the features and the targets. The p-value
output is the fraction of permutations whose cross-validation score is
better than or equal to the true score without permuting targets. For
reliable results ``n_permutations`` should typically be larger than 100 and
``cv`` between 3-10 folds.
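The mechanics described above can be seen on a small dataset with strong
structure; a brief sketch (the iris dataset and a linear SVC are arbitrary
choices for illustration):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import permutation_test_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# The true cross-validation score plus 100 scores on permuted targets.
score, perm_scores, pvalue = permutation_test_score(
    SVC(kernel="linear"), X, y, cv=5, n_permutations=100, random_state=0
)

# iris has real feature/target structure, so the true score should far
# exceed the null distribution and the p-value should be small.
assert score > perm_scores.mean()
assert pvalue < 0.05
```

Note that the smallest attainable p-value here is
``1 / (n_permutations + 1)``, since the true score is counted among the
permutation scores.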
A low p-value provides evidence that the dataset contains some real
dependency between features and targets **and** that the estimator was able
to utilize this dependency to obtain good results. A high p-value,
conversely, could be due to either
one of these:

- a lack of dependency between features and targets (i.e., there is no
  systematic relationship and any observed patterns are likely due to random
  chance),

- **or** because the estimator was not able to use the dependency in the
  data (for instance because it underfit).

In the latter case, using a more appropriate estimator that is able to use
the structure in the data would result in a lower p-value.

Cross-validation provides information about how well an estimator
generalizes by estimating the range of its expected scores. However, an
estimator trained on a high dimensional dataset with no structure may still
perform better than expected on cross-validation, just by chance. This can
typically happen with small datasets with less than a few hundred samples.
:func:`~sklearn.model_selection.permutation_test_score` provides information
on whether the estimator has found a real dependency between features and
targets and can help in evaluating the performance of the estimator.

It is important to note that this test has been shown to produce low
p-values even if there is only weak structure in the data, because in the
corresponding permuted datasets there is absolutely no structure. This test
is therefore only able to show whether the model reliably outperforms random
guessing.

Finally, :func:`~sklearn.model_selection.permutation_test_score` is computed
using brute force and internally fits ``(n_permutations + 1) * n_cv``
models. It is therefore only tractable with small datasets for which fitting
an individual model is very fast. Using the `n_jobs` parameter parallelizes
the computation and thus speeds it up.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_model_selection_plot_permutation_tests_for_classification.py`

.. dropdown:: References

  * Ojala and Garriga. `Permutation Tests for Studying Classifier
    Performance`_. J. Mach. Learn. Res. 2010.
.. currentmodule:: sklearn.preprocessing

.. _preprocessing_targets:

==========================================
Transforming the prediction target (``y``)
==========================================

These are transformers that are not intended to be used on features, only on
supervised learning targets. See also :ref:`transformed_target_regressor` if
you want to transform the prediction target for learning, but evaluate the
model in the original (untransformed) space.

Label binarization
==================

LabelBinarizer
--------------

:class:`LabelBinarizer` is a utility class to help create a :term:`label
indicator matrix` from a list of :term:`multiclass` labels::

    >>> from sklearn import preprocessing
    >>> lb = preprocessing.LabelBinarizer()
    >>> lb.fit([1, 2, 6, 4, 2])
    LabelBinarizer()
    >>> lb.classes_
    array([1, 2, 4, 6])
    >>> lb.transform([1, 6])
    array([[1, 0, 0, 0],
           [0, 0, 0, 1]])

Using this format can enable multiclass classification in estimators that
support the label indicator matrix format.

.. warning::

    LabelBinarizer is not needed if you are using an estimator that already
    supports :term:`multiclass` data.

For more information about multiclass classification, refer to
:ref:`multiclass_classification`.

.. _multilabelbinarizer:

MultiLabelBinarizer
-------------------

In :term:`multilabel` learning, the joint set of binary classification tasks
is expressed with a label binary indicator array: each sample is one row of a
2d array of shape (n_samples, n_classes) with binary values, where the ones,
i.e. the non-zero elements, correspond to the subset of labels for that
sample. An array such as ``np.array([[1, 0, 0], [0, 1, 1], [0, 0, 0]])``
represents label 0 in the first sample, labels 1 and 2 in the second sample,
and no labels in the third sample.

Producing multilabel data as a list of sets of labels may be more intuitive.
The :class:`MultiLabelBinarizer` transformer can be used to convert between a
collection of collections of labels and the indicator format::

    >>> from sklearn.preprocessing import MultiLabelBinarizer
    >>> y = [[2, 3, 4], [2], [0, 1, 3], [0, 1, 2, 3, 4], [0, 1, 2]]
    >>> MultiLabelBinarizer().fit_transform(y)
    array([[0, 0, 1, 1, 1],
           [0, 0, 1, 0, 0],
           [1, 1, 0, 1, 0],
           [1, 1, 1, 1, 1],
           [1, 1, 1, 0, 0]])

For more information about multilabel classification, refer to
:ref:`multilabel_classification`.

Label encoding
==============

:class:`LabelEncoder` is a utility class to help normalize labels such that
they contain only values between 0 and n_classes-1. This is sometimes useful
for writing efficient Cython routines. :class:`LabelEncoder` can be used as
follows::

    >>> from sklearn import preprocessing
    >>> le = preprocessing.LabelEncoder()
    >>> le.fit([1, 2, 2, 6])
    LabelEncoder()
    >>> le.classes_
    array([1, 2, 6])
    >>> le.transform([1, 1, 2, 6])
    array([0, 0, 1, 2])
    >>> le.inverse_transform([0, 0, 1, 2])
    array([1, 1, 2, 6])

It can also be used to transform non-numerical labels (as long as they are
hashable and comparable) to numerical labels::

    >>> le = preprocessing.LabelEncoder()
    >>> le.fit(["paris", "paris", "tokyo", "amsterdam"])
    LabelEncoder()
    >>> list(le.classes_)
    [np.str_('amsterdam'), np.str_('paris'), np.str_('tokyo')]
    >>> le.transform(["tokyo", "tokyo", "paris"])
    array([2, 2, 1])
    >>> list(le.inverse_transform([2, 2, 1]))
    [np.str_('tokyo'), np.str_('tokyo'), np.str_('paris')]
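Since :class:`LabelEncoder` only knows the classes observed during ``fit``,
passing an unseen label to ``transform`` raises an error rather than
silently assigning a code; a small illustration (the city names are
arbitrary):

```python
from sklearn.preprocessing import LabelEncoder

le = LabelEncoder().fit(["amsterdam", "paris", "tokyo"])

try:
    le.transform(["rome"])  # "rome" was never seen during fit
    raised = False
except ValueError:
    raised = True

# Unseen labels are rejected instead of being encoded.
assert raised
```

This is useful to catch label mismatches between training and inference data
early.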
.. _sgd:

===========================
Stochastic Gradient Descent
===========================

.. currentmodule:: sklearn.linear_model

**Stochastic Gradient Descent (SGD)** is a simple yet very efficient approach
to fitting linear classifiers and regressors under convex loss functions such
as (linear) Support Vector Machines and Logistic Regression. Even though SGD
has been around in the machine learning community for a long time, it has
received a considerable amount of attention just recently in the context of
large-scale learning.

SGD has been successfully applied to large-scale and sparse machine learning
problems often encountered in text classification and natural language
processing. Given that the data is sparse, the classifiers in this module
easily scale to problems with more than :math:`10^5` training examples and
more than :math:`10^5` features.

Strictly speaking, SGD is merely an optimization technique and does not
correspond to a specific family of machine learning models. It is only a
*way* to train a model. Often, an instance of :class:`SGDClassifier` or
:class:`SGDRegressor` will have an equivalent estimator in the scikit-learn
API, potentially using a different optimization technique. For example, using
`SGDClassifier(loss='log_loss')` results in logistic regression, i.e. a model
equivalent to :class:`~sklearn.linear_model.LogisticRegression` which is
fitted via SGD instead of being fitted by one of the other solvers in
:class:`~sklearn.linear_model.LogisticRegression`. Similarly,
`SGDRegressor(loss='squared_error', penalty='l2')` and
:class:`~sklearn.linear_model.Ridge` solve the same optimization problem, via
different means.

The advantages of Stochastic Gradient Descent are:

+ Efficiency.

+ Ease of implementation (lots of opportunities for code tuning).

The disadvantages of Stochastic Gradient Descent include:

+ SGD requires a number of hyperparameters such as the regularization
  parameter and the number of iterations.
+ SGD is sensitive to feature scaling.

.. warning::

  Make sure you permute (shuffle) your training data before fitting the model
  or use ``shuffle=True`` to shuffle after each iteration (used by default).
  Also, ideally, features should be standardized using e.g.
  `make_pipeline(StandardScaler(), SGDClassifier())` (see :ref:`Pipelines`).

Classification
==============

The class :class:`SGDClassifier` implements a plain stochastic gradient
descent learning routine which supports different loss functions and
penalties for classification. Below is the decision boundary of a
:class:`SGDClassifier` trained with the hinge loss, equivalent to a linear
SVM.

.. figure:: ../auto_examples/linear_model/images/sphx_glr_plot_sgd_separating_hyperplane_001.png
   :target: ../auto_examples/linear_model/plot_sgd_separating_hyperplane.html
   :align: center
   :scale: 75

As other classifiers, SGD has to be fitted with two arrays: an array `X` of
shape (n_samples, n_features) holding the training samples, and an array `y`
of shape (n_samples,) holding the target values (class labels) for the
training samples::

    >>> from sklearn.linear_model import SGDClassifier
    >>> X = [[0., 0.], [1., 1.]]
    >>> y = [0, 1]
    >>> clf = SGDClassifier(loss="hinge", penalty="l2", max_iter=5)
    >>> clf.fit(X, y)
    SGDClassifier(max_iter=5)

After being fitted, the model can then be used to predict new values::

    >>> clf.predict([[2., 2.]])
    array([1])

SGD fits a linear model to the training data. The ``coef_`` attribute holds
the model parameters::

    >>> clf.coef_
    array([[9.9, 9.9]])

The ``intercept_`` attribute holds the intercept (aka offset or bias)::

    >>> clf.intercept_
    array([-9.9])

Whether or not the model should use an intercept, i.e. a biased hyperplane,
is controlled by the parameter ``fit_intercept``.
The signed distance to the hyperplane (computed as the dot product between
the coefficients and the input sample, plus the intercept) is given by
:meth:`SGDClassifier.decision_function`::

    >>> clf.decision_function([[2., 2.]])
    array([29.6])

The concrete loss function can be set via the ``loss`` parameter.
:class:`SGDClassifier` supports the following loss functions:

* ``loss="hinge"``: (soft-margin) linear Support Vector Machine,

* ``loss="modified_huber"``: smoothed hinge loss,

* ``loss="log_loss"``: logistic regression,

* and all regression losses below. In this case the target is encoded as
  :math:`-1` or :math:`1`, and the problem is treated as a regression
  problem. The predicted class then corresponds to the sign of the predicted
  target.

Please refer to the :ref:`mathematical section below` for formulas.
The first two loss functions are lazy, they only update the model parameters
if an example violates the margin constraint, which makes training very
efficient and may result in sparser models (i.e. with more zero
coefficients), even when an :math:`L_2` penalty is used.

Using ``loss="log_loss"`` or ``loss="modified_huber"`` enables the
``predict_proba`` method, which gives a vector of probability estimates
:math:`P(y|x)` per sample :math:`x`::

    >>> clf = SGDClassifier(loss="log_loss", max_iter=5).fit(X, y)
    >>> clf.predict_proba([[1., 1.]])  # doctest: +SKIP
    array([[0.00, 0.99]])

The concrete penalty can be set via the ``penalty`` parameter. SGD supports
the following penalties:

* ``penalty="l2"``: :math:`L_2` norm penalty on ``coef_``.

* ``penalty="l1"``: :math:`L_1` norm penalty on ``coef_``.

* ``penalty="elasticnet"``: Convex combination of :math:`L_2` and
  :math:`L_1`; ``(1 - l1_ratio) * L2 + l1_ratio * L1``.

The default setting is ``penalty="l2"``. The :math:`L_1` penalty leads to
sparse solutions, driving most coefficients to zero. The Elastic Net [#5]_
solves some deficiencies of the :math:`L_1` penalty in the presence of
highly correlated attributes. The parameter ``l1_ratio`` controls the convex
combination of :math:`L_1` and :math:`L_2` penalty.

:class:`SGDClassifier` supports multi-class classification by combining
multiple binary classifiers in a "one versus all" (OVA) scheme. For each of
the :math:`K` classes, a binary classifier is learned that discriminates
between that and all other :math:`K-1` classes. At testing time, we compute
the confidence score (i.e.
the signed distances to the hyperplane) for each classifier and choose the
class with the highest confidence. The figure below illustrates the OVA
approach on the iris dataset. The dashed lines represent the three OVA
classifiers; the background colors show the decision surface induced by the
three classifiers.

.. figure:: ../auto_examples/linear_model/images/sphx_glr_plot_sgd_iris_001.png
   :target: ../auto_examples/linear_model/plot_sgd_iris.html
   :align: center
   :scale: 75

In the case of multi-class classification, ``coef_`` is a two-dimensional
array of shape (n_classes, n_features) and ``intercept_`` is a
one-dimensional array of shape (n_classes,). The :math:`i`-th row of
``coef_`` holds the weight vector of the OVA classifier for the :math:`i`-th
class; classes are indexed in ascending order (see attribute ``classes_``).
Note that, in principle, since they allow to create a probability model,
``loss="log_loss"`` and ``loss="modified_huber"`` are more suitable for
one-vs-all classification.

:class:`SGDClassifier` supports both weighted classes and weighted instances
via the fit parameters ``class_weight`` and ``sample_weight``. See the
examples below and the docstring of :meth:`SGDClassifier.fit` for further
information.

:class:`SGDClassifier` supports averaged SGD (ASGD) [#4]_. Averaging can be
enabled by setting `average=True`. ASGD performs the same updates as the
regular SGD (see :ref:`sgd_mathematical_formulation`), but instead of using
the last value of the coefficients as the `coef_` attribute (i.e. the values
of the last update), `coef_` is set instead to the **average** value of the
coefficients across all updates. The same is done for the `intercept_`
attribute. When using ASGD the learning rate can be larger and even constant,
leading on some datasets to a speed up in training time.
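Enabling ASGD only requires the `average` flag; a minimal sketch on synthetic
data (dataset and hyperparameters are illustrative assumptions):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=300, n_features=20, random_state=0)

# average=True: coef_ holds the average of the coefficients over all
# updates instead of the value after the last update.
asgd = SGDClassifier(average=True, max_iter=1000, tol=1e-3, random_state=0)
asgd.fit(X, y)

assert asgd.coef_.shape == (1, 20)
assert asgd.score(X, y) > 0.8
```

Passing an integer instead of ``True`` (e.g. ``average=10``) starts averaging
only once that many samples have been seen.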
For classification with a logistic loss, another variant of SGD with an
averaging strategy is available with the Stochastic Average Gradient (SAG)
algorithm, available as a solver in :class:`LogisticRegression`.

.. rubric:: Examples

- :ref:`sphx_glr_auto_examples_linear_model_plot_sgd_separating_hyperplane.py`
- :ref:`sphx_glr_auto_examples_linear_model_plot_sgd_iris.py`
- :ref:`sphx_glr_auto_examples_linear_model_plot_sgd_weighted_samples.py`
- :ref:`sphx_glr_auto_examples_svm_plot_separating_hyperplane_unbalanced.py`
  (See the Note in the example)
Regression
==========

The class :class:`SGDRegressor` implements a plain stochastic gradient
descent learning routine which supports different loss functions and
penalties to fit linear regression models. :class:`SGDRegressor` is well
suited for regression problems with a large number of training samples
(> 10,000); for other problems we recommend :class:`Ridge`, :class:`Lasso`,
or :class:`ElasticNet`.

The concrete loss function can be set via the ``loss`` parameter.
:class:`SGDRegressor` supports the following loss functions:

* ``loss="squared_error"``: Ordinary least squares,

* ``loss="huber"``: Huber loss for robust regression,

* ``loss="epsilon_insensitive"``: linear Support Vector Regression.

Please refer to the :ref:`mathematical section below` for formulas. The
Huber and epsilon-insensitive loss functions can be used for robust
regression. The width of the insensitive region has to be specified via the
parameter ``epsilon``. This parameter depends on the scale of the target
variables.

The `penalty` parameter determines the regularization to be used (see the
description above in the classification section).

:class:`SGDRegressor` also supports averaged SGD [#4]_ (here again, see the
description above in the classification section).

For regression with a squared loss and an :math:`L_2` penalty, another
variant of SGD with an averaging strategy is available with the Stochastic
Average Gradient (SAG) algorithm, available as a solver in :class:`Ridge`.

.. rubric:: Examples

- :ref:`sphx_glr_auto_examples_applications_plot_prediction_latency.py`

.. _sgd_online_one_class_svm:

Online One-Class SVM
====================

The class :class:`sklearn.linear_model.SGDOneClassSVM` implements an online
linear version of the One-Class SVM using a stochastic gradient descent.
Combined with kernel approximation techniques,
:class:`sklearn.linear_model.SGDOneClassSVM` can be used to approximate the
solution of a kernelized One-Class SVM, implemented in
:class:`sklearn.svm.OneClassSVM`, with a linear complexity in the number of
samples. Note that the complexity of a kernelized One-Class SVM is at best
quadratic in the number of samples.
:class:`sklearn.linear_model.SGDOneClassSVM` is thus well suited for datasets
with a large number of training samples (over 10,000), for which the SGD
variant can be several orders of magnitude faster.

.. dropdown:: Mathematical details

  Its implementation is based on the implementation of the stochastic
  gradient descent. Indeed, the original optimization problem of the
  One-Class SVM is given by

  .. math::

    \begin{aligned}
    \min_{w, \rho, \xi} & \quad \frac{1}{2}\Vert w \Vert^2 - \rho + \frac{1}{\nu n} \sum_{i=1}^n \xi_i \\
    \text{s.t.} & \quad \langle w, x_i \rangle \geq \rho - \xi_i \quad 1 \leq i \leq n \\
    & \quad \xi_i \geq 0 \quad 1 \leq i \leq n
    \end{aligned}

  where :math:`\nu \in (0, 1]` is the user-specified parameter controlling
  the proportion of outliers and the proportion of support vectors. Getting
  rid of the slack variables :math:`\xi_i`, this problem is equivalent to

  .. math::

    \min_{w, \rho} \frac{1}{2}\Vert w \Vert^2 - \rho + \frac{1}{\nu n} \sum_{i=1}^n \max(0, \rho - \langle w, x_i \rangle) \, .

  Multiplying by the constant :math:`\nu` and introducing the intercept
  :math:`b = 1 - \rho`, we obtain the following equivalent optimization
  problem

  .. math::

    \min_{w, b} \frac{\nu}{2}\Vert w \Vert^2 + b\nu + \frac{1}{n} \sum_{i=1}^n \max(0, 1 - (\langle w, x_i \rangle + b)) \, .

  This is similar to the optimization problems studied in section
  :ref:`sgd_mathematical_formulation` with :math:`y_i = 1, 1 \leq i \leq n`
  and :math:`\alpha = \nu`, :math:`L` being the hinge loss function and
  :math:`R` being the :math:`L_2` norm. We just need to add the term
  :math:`b\nu` in the optimization loop.
As :class:`SGDClassifier` and :class:`SGDRegressor`, :class:`SGDOneClassSVM`
supports averaged SGD. Averaging can be enabled by setting ``average=True``.

.. rubric:: Examples

- :ref:`sphx_glr_auto_examples_linear_model_plot_sgdocsvm_vs_ocsvm.py`

Stochastic Gradient Descent for sparse data
===========================================
.. note::

    The sparse implementation produces slightly different results from the
    dense implementation, due to a shrunk learning rate for the intercept.
    See :ref:`implementation_details`.

There is built-in support for sparse data given in any matrix in a format
supported by `scipy.sparse`. For maximum efficiency, however, use the CSR
matrix format as defined in `scipy.sparse.csr_matrix`.

.. rubric:: Examples

- :ref:`sphx_glr_auto_examples_text_plot_document_classification_20newsgroups.py`

Complexity
==========

The major advantage of SGD is its efficiency, which is basically linear in
the number of training examples. If :math:`X` is a matrix of size
:math:`n \times p` (with :math:`n` samples and :math:`p` features), training
has a cost of :math:`O(k n \bar p)`, where :math:`k` is the number of
iterations (epochs) and :math:`\bar p` is the average number of non-zero
attributes per sample.

Recent theoretical results, however, show that the runtime to get some
desired optimization accuracy does not increase as the training set size
increases.

Stopping criterion
==================

The classes :class:`SGDClassifier` and :class:`SGDRegressor` provide two
criteria to stop the algorithm when a given level of convergence is reached:

* With ``early_stopping=True``, the input data is split into a training set
  and a validation set. The model is then fitted on the training set, and the
  stopping criterion is based on the prediction score (using the `score`
  method) computed on the validation set. The size of the validation set can
  be changed with the parameter ``validation_fraction``.

* With ``early_stopping=False``, the model is fitted on the entire input
  data and the stopping criterion is based on the objective function computed
  on the training data.

In both cases, the criterion is evaluated once per epoch, and the algorithm
stops when the criterion does not improve ``n_iter_no_change`` times in a
row.
The improvement is evaluated with absolute tolerance ``tol``, and the
algorithm stops in any case after a maximum number of iterations ``max_iter``.
See :ref:`sphx_glr_auto_examples_linear_model_plot_sgd_early_stopping.py` for
an example of the effects of early stopping.

Tips on Practical Use
=====================

* Stochastic Gradient Descent is sensitive to feature scaling, so it is
  highly recommended to scale your data. For example, scale each attribute on
  the input vector :math:`X` to :math:`[0,1]` or :math:`[-1,1]`, or
  standardize it to have mean :math:`0` and variance :math:`1`. Note that the
  *same* scaling must be applied to the test vector to obtain meaningful
  results. This can be easily done using
  :class:`~sklearn.preprocessing.StandardScaler`::

      from sklearn.preprocessing import StandardScaler
      scaler = StandardScaler()
      scaler.fit(X_train)  # Don't cheat - fit only on training data
      X_train = scaler.transform(X_train)
      X_test = scaler.transform(X_test)  # apply same transformation to test data

      # Or better yet: use a pipeline!
      from sklearn.pipeline import make_pipeline
      est = make_pipeline(StandardScaler(), SGDClassifier())
      est.fit(X_train, y_train)
      est.predict(X_test)

  If your attributes have an intrinsic scale (e.g. word frequencies or
  indicator features) scaling is not needed.

* Finding a reasonable regularization term :math:`\alpha` is best done using
  automatic hyper-parameter search, e.g.
  :class:`~sklearn.model_selection.GridSearchCV` or
  :class:`~sklearn.model_selection.RandomizedSearchCV`, usually in the range
  ``10.0**-np.arange(1,7)``.

* Empirically, we found that SGD converges after observing approximately
  :math:`10^6` training samples. Thus, a reasonable first guess for the
  number of iterations is ``max_iter = np.ceil(10**6 / n)``, where ``n`` is
  the size of the training set.
* If you apply SGD to features extracted using PCA we found that it is often
  wise to scale the feature values by some constant `c` such that the average
  :math:`L_2` norm of the training data equals one.

* We found that Averaged SGD works best with a larger number of features and
  a higher `eta0`.

.. rubric:: References

* "Efficient BackProp"
  Y. LeCun, L. Bottou, G. Orr, K. Müller - In Neural Networks: Tricks of the
  Trade 1998.

.. _sgd_mathematical_formulation:

Mathematical formulation
========================

We describe here the mathematical details of the
SGD procedure. A good overview with convergence rates can be found in [#6]_.

Given a set of training examples :math:`\{(x_1, y_1), \ldots, (x_n, y_n)\}`
where :math:`x_i \in \mathbf{R}^m` and :math:`y_i \in \mathbf{R}`
(:math:`y_i \in \{-1, 1\}` for classification), our goal is to learn a linear
scoring function :math:`f(x) = w^T x + b` with model parameters
:math:`w \in \mathbf{R}^m` and intercept :math:`b \in \mathbf{R}`. In order
to make predictions for binary classification, we simply look at the sign of
:math:`f(x)`. To find the model parameters, we minimize the regularized
training error given by

.. math::

    E(w,b) = \frac{1}{n}\sum_{i=1}^{n} L(y_i, f(x_i)) + \alpha R(w)

where :math:`L` is a loss function that measures model (mis)fit and
:math:`R` is a regularization term (aka penalty) that penalizes model
complexity; :math:`\alpha > 0` is a hyperparameter that controls the
regularization strength.

.. dropdown:: Loss functions details

  Different choices for :math:`L` entail different classifiers or regressors:

  - Hinge (soft-margin): equivalent to Support Vector Classification.
    :math:`L(y_i, f(x_i)) = \max(0, 1 - y_i f(x_i))`.
  - Perceptron: :math:`L(y_i, f(x_i)) = \max(0, - y_i f(x_i))`.
  - Modified Huber: :math:`L(y_i, f(x_i)) = \max(0, 1 - y_i f(x_i))^2` if
    :math:`y_i f(x_i) > -1`, and :math:`L(y_i, f(x_i)) = -4 y_i f(x_i)`
    otherwise.
  - Log Loss: equivalent to Logistic Regression.
    :math:`L(y_i, f(x_i)) = \log(1 + \exp(-y_i f(x_i)))`.
  - Squared Error: Linear regression (Ridge or Lasso depending on
    :math:`R`). :math:`L(y_i, f(x_i)) = \frac{1}{2}(y_i - f(x_i))^2`.
  - Huber: less sensitive to outliers than least-squares. It is equivalent to
    least squares when :math:`|y_i - f(x_i)| \leq \varepsilon`, and
    :math:`L(y_i, f(x_i)) = \varepsilon |y_i - f(x_i)| - \frac{1}{2} \varepsilon^2`
    otherwise.
  - Epsilon-Insensitive: (soft-margin) equivalent to Support Vector
    Regression. :math:`L(y_i, f(x_i)) = \max(0, |y_i - f(x_i)| - \varepsilon)`.

  All of the above loss functions can be regarded as an upper bound on the
  misclassification error (zero-one loss) as shown in the figure below.

  .. figure:: ../auto_examples/linear_model/images/sphx_glr_plot_sgd_loss_functions_001.png
     :target: ../auto_examples/linear_model/plot_sgd_loss_functions.html
     :align: center
     :scale: 75

Popular choices for the regularization term :math:`R` (the `penalty`
parameter) include:

- :math:`L_2` norm: :math:`R(w) := \frac{1}{2} \sum_{j=1}^{m} w_j^2 = \frac{1}{2} ||w||_2^2`,
- :math:`L_1` norm: :math:`R(w) := \sum_{j=1}^{m} |w_j|`, which leads to
  sparse solutions.
- Elastic Net: :math:`R(w) := \frac{\rho}{2} \sum_{j=1}^{m} w_j^2 + (1-\rho) \sum_{j=1}^{m} |w_j|`,
  a convex combination of :math:`L_2` and :math:`L_1`, where :math:`\rho` is
  given by ``1 - l1_ratio``.

The figure below shows the contours of the different regularization terms in
a 2-dimensional parameter space (:math:`m=2`) when :math:`R(w) = 1`.

.. figure:: ../auto_examples/linear_model/images/sphx_glr_plot_sgd_penalties_001.png
   :target: ../auto_examples/linear_model/plot_sgd_penalties.html
   :align: center
   :scale: 75

SGD
---

Stochastic gradient descent is an optimization method for unconstrained
optimization problems. In contrast to (batch) gradient descent, SGD
approximates the true gradient of :math:`E(w,b)` by considering a single
training example at a time.

The class :class:`SGDClassifier` implements a first-order SGD learning
routine.
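The objective :math:`E(w,b)` can be written out directly; a minimal numpy
sketch (illustrative, not scikit-learn's implementation) evaluating the
hinge-loss objective with an :math:`L_2` penalty:

```python
import numpy as np

def sgd_objective(w, b, X, y, alpha):
    """Regularized training error E(w, b) with hinge loss and L2 penalty."""
    scores = X @ w + b                          # f(x_i) = w^T x_i + b
    hinge = np.maximum(0.0, 1.0 - y * scores)   # L(y_i, f(x_i))
    penalty = 0.5 * np.dot(w, w)                # R(w) = ||w||_2^2 / 2
    return hinge.mean() + alpha * penalty

X = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
y = np.array([1.0, 1.0, -1.0])
w = np.array([1.0, 1.0])
print(sgd_objective(w, 0.0, X, y, alpha=0.1))
```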
The algorithm iterates over the training examples and for each example
updates the model parameters according to the update rule given by

.. math::

    w \leftarrow w - \eta \left[\alpha \frac{\partial R(w)}{\partial w} +
    \frac{\partial L(w^T x_i + b, y_i)}{\partial w}\right]

where :math:`\eta` is the learning rate which controls the step-size in the
parameter space. The intercept :math:`b` is updated similarly but without
regularization (and with additional decay for sparse matrices, as detailed in
:ref:`implementation_details`).

The learning rate :math:`\eta` can be either constant or gradually decaying.
For classification, the default learning rate schedule
(``learning_rate='optimal'``) is given by
.. math::

    \eta^{(t)} = \frac{1}{\alpha (t_0 + t)}

where :math:`t` is the time step (there are a total of
`n_samples * n_iter` time steps), :math:`t_0` is determined based on a
heuristic proposed by Léon Bottou such that the expected initial updates are
comparable with the expected size of the weights (this assumes that the norm
of the training samples is approximately 1). The exact definition can be
found in ``_init_t`` in `BaseSGD`.

For regression the default learning rate schedule is inverse scaling
(``learning_rate='invscaling'``), given by

.. math::

    \eta^{(t)} = \frac{\eta_0}{t^{power\_t}}

where :math:`\eta_0` and :math:`power\_t` are hyperparameters chosen by the
user via ``eta0`` and ``power_t``, respectively.

For a constant learning rate use ``learning_rate='constant'`` and use
``eta0`` to specify the learning rate.

For an adaptively decreasing learning rate, use ``learning_rate='adaptive'``
and use ``eta0`` to specify the starting learning rate. When the stopping
criterion is reached, the learning rate is divided by 5, and the algorithm
does not stop. The algorithm stops when the learning rate goes below ``1e-6``.

The model parameters can be accessed through the ``coef_`` and ``intercept_``
attributes: ``coef_`` holds the weights :math:`w` and ``intercept_`` holds
:math:`b`. When using Averaged SGD (with the `average` parameter), ``coef_``
is set to the average weight across all updates:
``coef_`` :math:`= \frac{1}{T} \sum_{t=0}^{T-1} w^{(t)}`, where :math:`T` is
the total number of updates, found in the ``t_`` attribute.
.. _implementation_details:

Implementation details
======================

The implementation of SGD is influenced by the `Stochastic Gradient SVM` of
[#1]_. Similar to SvmSGD, the weight vector is represented as the product of
a scalar and a vector, which allows an efficient weight update in the case of
:math:`L_2` regularization. In the case of sparse input `X`, the intercept is
updated with a smaller learning rate (multiplied by 0.01) to account for the
fact that it is updated more frequently. Training examples are picked up
sequentially and the learning rate is lowered after each observed example. We
adopted the learning rate schedule from [#2]_. For multi-class
classification, a "one versus all" approach is used. We use the truncated
gradient algorithm proposed in [#3]_ for :math:`L_1` regularization (and the
Elastic Net). The code is written in Cython.

.. rubric:: References

.. [#1] "Stochastic Gradient Descent"
   L. Bottou - Website, 2010.

.. [#2] :doi:`"Pegasos: Primal estimated sub-gradient solver for svm"
   <10.1145/1273496.1273598>`
   S. Shalev-Shwartz, Y. Singer, N. Srebro - In Proceedings of ICML '07.

.. [#3] "Stochastic gradient descent training for l1-regularized log-linear
   models with cumulative penalty"
   Y. Tsuruoka, J. Tsujii, S. Ananiadou - In Proceedings of the AFNLP/ACL '09.

.. [#4] :arxiv:`"Towards Optimal One Pass Large Scale Learning with Averaged
   Stochastic Gradient Descent" <1107.2490v2>`. Xu, Wei (2011)

.. [#5] :doi:`"Regularization and variable selection via the elastic net"
   <10.1111/j.1467-9868.2005.00503.x>`
   H. Zou, T. Hastie - Journal of the Royal Statistical Society Series B,
   67 (2), 301-320.

.. [#6] :doi:`"Solving large scale linear prediction problems using
   stochastic gradient descent algorithms" <10.1145/1015330.1015332>`
   T. Zhang - In Proceedings of ICML '04.
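The plain per-example update rule from the mathematical formulation can be
sketched in a few lines of numpy (illustrative only; scikit-learn's actual
implementation is the optimized Cython routine described above):

```python
import numpy as np

def sgd_epoch(w, b, X, y, alpha, eta):
    """One pass of SGD with hinge loss and L2 penalty (plain update rule)."""
    for x_i, y_i in zip(X, y):
        margin = y_i * (np.dot(w, x_i) + b)
        # Gradient of the L2 penalty is alpha * w; the hinge loss contributes
        # -y_i * x_i when the margin is violated, and 0 otherwise.
        grad_w = alpha * w
        grad_b = 0.0
        if margin < 1.0:
            grad_w = grad_w - y_i * x_i
            grad_b = -y_i
        w = w - eta * grad_w
        b = b - eta * grad_b  # intercept: no regularization term
    return w, b

rng = np.random.RandomState(0)
X = rng.randn(200, 2)
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)  # linearly separable labels
w, b = np.zeros(2), 0.0
for _ in range(20):
    w, b = sgd_epoch(w, b, X, y, alpha=1e-4, eta=0.01)
acc = np.mean(np.sign(X @ w + b) == y)
```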
.. _semi_supervised:

===================================================
Semi-supervised learning
===================================================

.. currentmodule:: sklearn.semi_supervised

Semi-supervised learning is a situation in which some of the samples in your
training data are not labeled. The semi-supervised estimators in
:mod:`sklearn.semi_supervised` are able to make use of this additional
unlabeled data to better capture the shape of the underlying data
distribution and generalize better to new samples. These algorithms can
perform well when we have a very small amount of labeled points and a large
amount of unlabeled points.

.. topic:: Unlabeled entries in `y`

   It is important to assign an identifier to unlabeled points along with the
   labeled data when training the model with the ``fit`` method. The
   identifier that this implementation uses is the integer value :math:`-1`.
   Note that for string labels, the dtype of `y` should be object so that it
   can contain both strings and integers.

.. note::

   Semi-supervised algorithms need to make assumptions about the distribution
   of the dataset in order to achieve performance gains.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_semi_supervised_plot_semi_supervised_newsgroups.py`

.. _self_training:

Self Training
=============

This self-training implementation is based on Yarowsky's algorithm [1]_.
Using this algorithm, a given supervised classifier can function as a
semi-supervised classifier, allowing it to learn from unlabeled data.

:class:`SelfTrainingClassifier` can be called with any classifier that
implements `predict_proba`, passed as the parameter `estimator`. In each
iteration, the `estimator` predicts labels for the unlabeled samples and adds
a subset of these labels to the labeled dataset. The choice of this subset is
determined by the selection criterion.
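A minimal sketch of the ``-1`` unlabeled convention and a self-training fit
(the mask fraction and `threshold` value are illustrative):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Mask 70% of the labels: unlabeled points are marked with -1.
rng = np.random.RandomState(42)
y_semi = y.copy()
y_semi[rng.rand(len(y)) < 0.7] = -1

# Any classifier implementing predict_proba can serve as the base estimator.
base = SVC(probability=True, gamma="auto", random_state=42)
clf = SelfTrainingClassifier(base, threshold=0.75)
clf.fit(X, y_semi)
```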
This selection can be done using a `threshold` on the prediction
probabilities, or by choosing the `k_best` samples according to the
prediction probabilities. The labels used for the final fit as well as the
iteration in which each sample was labeled are available as attributes. The
optional `max_iter` parameter specifies how many times the loop is executed
at most. The `max_iter` parameter may be set to `None`, causing the algorithm
to iterate until all samples have labels or no new samples are selected in
that iteration.

.. note::

   When using the self-training classifier, the calibration of the
   classifier is important.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_semi_supervised_plot_self_training_varying_threshold.py`
* :ref:`sphx_glr_auto_examples_semi_supervised_plot_semi_supervised_versus_svm_iris.py`

.. rubric:: References

.. [1] :doi:`"Unsupervised word sense disambiguation rivaling supervised
   methods" <10.3115/981658.981684>`
   David Yarowsky, Proceedings of the 33rd annual meeting on Association for
   Computational Linguistics (ACL '95). Association for Computational
   Linguistics, Stroudsburg, PA, USA, 189-196.

.. _label_propagation:

Label Propagation
=================

Label propagation denotes a few variations of semi-supervised graph inference
algorithms.

A few features available in this model:

* Used for classification tasks
* Kernel methods to project data into alternate dimensional spaces

`scikit-learn` provides two label propagation models:
:class:`LabelPropagation` and :class:`LabelSpreading`. Both work by
constructing a similarity graph over all items in the input dataset.
.. figure:: ../auto_examples/semi_supervised/images/sphx_glr_plot_label_propagation_structure_001.png
   :target: ../auto_examples/semi_supervised/plot_label_propagation_structure.html
   :align: center
   :scale: 60%

   **An illustration of label-propagation:** *the structure of unlabeled
   observations is consistent with the class structure, and thus the class
   label can be propagated to the unlabeled observations of the training
   set.*

:class:`LabelPropagation` and :class:`LabelSpreading` differ in the
modifications they make to the similarity matrix of the graph and in the
clamping effect on the label distributions. Clamping allows the algorithm to
change the weight of the true ground labeled data to some degree. The
:class:`LabelPropagation` algorithm performs hard clamping of input labels,
which means :math:`\alpha=0`. This clamping factor can be relaxed, to say
:math:`\alpha=0.2`, which means that we will always retain 80 percent of our
original label distribution, but the algorithm gets to change its confidence
of the distribution within 20 percent.

:class:`LabelPropagation` uses the raw similarity matrix constructed from the
data with no modifications. In
contrast, :class:`LabelSpreading` minimizes a loss function that has
regularization properties, and as such it is often more robust to noise. The
algorithm iterates on a modified version of the original graph and normalizes
the edge weights by computing the normalized graph Laplacian matrix. This
procedure is also used in :ref:`spectral_clustering`.

Label propagation models have two built-in kernel methods. The choice of
kernel affects both scalability and performance of the algorithms. The
following are available:

* rbf (:math:`\exp(-\gamma |x-y|^2), \gamma > 0`). :math:`\gamma` is
  specified by the keyword ``gamma``.
* knn (:math:`1[x' \in kNN(x)]`). :math:`k` is specified by the keyword
  ``n_neighbors``.

The RBF kernel will produce a fully connected graph which is represented in
memory by a dense matrix. This matrix may be very large, and combined with
the cost of performing a full matrix multiplication for each iteration of the
algorithm, can lead to prohibitively long running times. On the other hand,
the KNN kernel will produce a much more memory-friendly sparse matrix which
can drastically reduce running times.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_semi_supervised_plot_semi_supervised_versus_svm_iris.py`
* :ref:`sphx_glr_auto_examples_semi_supervised_plot_label_propagation_structure.py`
* :ref:`sphx_glr_auto_examples_semi_supervised_plot_label_propagation_digits.py`
* :ref:`sphx_glr_auto_examples_semi_supervised_plot_label_propagation_digits_active_learning.py`

.. rubric:: References

[2] Yoshua Bengio, Olivier Delalleau, Nicolas Le Roux. In Semi-Supervised
Learning (2006), pp.
193-216

[3] Olivier Delalleau, Yoshua Bengio, Nicolas Le Roux. Efficient
Non-Parametric Function Induction in Semi-Supervised Learning. AISTATS 2005
https://www.gatsby.ucl.ac.uk/aistats/fullpapers/204.pdf
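A minimal sketch of the two graph-based models with the two built-in kernels
(parameter values are illustrative):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.semi_supervised import LabelPropagation, LabelSpreading

X, y = load_iris(return_X_y=True)
rng = np.random.RandomState(0)
y_semi = y.copy()
y_semi[rng.rand(len(y)) < 0.5] = -1   # unlabeled points are marked -1

# Dense, fully connected graph (RBF kernel).
lp = LabelPropagation(kernel="rbf", gamma=20)
lp.fit(X, y_semi)

# Sparse kNN graph: more memory-friendly on larger datasets.
ls = LabelSpreading(kernel="knn", n_neighbors=7, alpha=0.2)
ls.fit(X, y_semi)
```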
.. _decompositions:

=================================================================
Decomposing signals in components (matrix factorization problems)
=================================================================

.. currentmodule:: sklearn.decomposition

.. _PCA:

Principal component analysis (PCA)
==================================

Exact PCA and probabilistic interpretation
------------------------------------------

PCA is used to decompose a multivariate dataset in a set of successive
orthogonal components that explain a maximum amount of the variance. In
scikit-learn, :class:`PCA` is implemented as a *transformer* object that
learns :math:`n` components in its ``fit`` method, and can be used on new
data to project it on these components.

PCA centers but does not scale the input data for each feature before
applying the SVD. The optional parameter ``whiten=True`` makes it possible to
project the data onto the singular space while scaling each component to unit
variance. This is often useful if the models downstream make strong
assumptions on the isotropy of the signal: this is for example the case for
Support Vector Machines with the RBF kernel and the K-Means clustering
algorithm.

Below is an example of the iris dataset, which is comprised of 4 features,
projected on the 2 dimensions that explain the most variance:

.. figure:: ../auto_examples/decomposition/images/sphx_glr_plot_pca_vs_lda_001.png
   :target: ../auto_examples/decomposition/plot_pca_vs_lda.html
   :align: center
   :scale: 75%

The :class:`PCA` object also provides a probabilistic interpretation of the
PCA that can give a likelihood of data based on the amount of variance it
explains. As such it implements a :term:`score` method that can be used in
cross-validation:

.. figure:: ../auto_examples/decomposition/images/sphx_glr_plot_pca_vs_fa_model_selection_001.png
   :target: ../auto_examples/decomposition/plot_pca_vs_fa_model_selection.html
   :align: center
   :scale: 75%
.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_decomposition_plot_pca_iris.py`
* :ref:`sphx_glr_auto_examples_decomposition_plot_pca_vs_lda.py`
* :ref:`sphx_glr_auto_examples_decomposition_plot_pca_vs_fa_model_selection.py`

.. _IncrementalPCA:

Incremental PCA
---------------

The :class:`PCA` object is very useful, but has certain limitations for large
datasets. The biggest limitation is that :class:`PCA` only supports batch
processing, which means all of the data to be processed must fit in main
memory. The :class:`IncrementalPCA` object uses a different form of
processing and allows for partial computations which almost exactly match the
results of :class:`PCA` while processing the data in a minibatch fashion.
:class:`IncrementalPCA` makes it possible to implement out-of-core Principal
Component Analysis either by:

* Using its ``partial_fit`` method on chunks of data fetched sequentially
  from the local hard drive or a network database.
* Calling its ``fit`` method on a memory-mapped file using ``numpy.memmap``.

:class:`IncrementalPCA` only stores estimates of component and noise
variances, in order to update ``explained_variance_ratio_`` incrementally.
This is why memory usage depends on the number of samples per batch, rather
than the number of samples to be processed in the dataset.

As in :class:`PCA`, :class:`IncrementalPCA` centers but does not scale the
input data for each feature before applying the SVD.

.. figure:: ../auto_examples/decomposition/images/sphx_glr_plot_incremental_pca_001.png
   :target: ../auto_examples/decomposition/plot_incremental_pca.html
   :align: center
   :scale: 75%

.. figure:: ../auto_examples/decomposition/images/sphx_glr_plot_incremental_pca_002.png
   :target: ../auto_examples/decomposition/plot_incremental_pca.html
   :align: center
   :scale: 75%

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_decomposition_plot_incremental_pca.py`
.. _RandomizedPCA:

PCA using randomized SVD
------------------------

It is often interesting to project data to a lower-dimensional space that
preserves most of the variance, by dropping the singular vectors of
components associated with lower singular values.

For instance, if we work with 64x64 pixel gray-level pictures for face
recognition, the dimensionality of the data is 4096 and it is slow to train
an RBF support vector machine on such wide data. Furthermore we know that the
intrinsic dimensionality of the data is much lower than 4096 since all
pictures of human faces look somewhat alike. The samples lie on a manifold of
much lower dimension (say around 200 for instance). The PCA algorithm can be
used to linearly transform the data while both reducing the dimensionality
and preserving most of the explained variance at the same time.

The class :class:`PCA` used with the optional parameter
``svd_solver='randomized'`` is very useful in that case: since we are going
to drop most
of the singular vectors it is much more efficient to limit the computation to
an approximated estimate of the singular vectors we will keep to actually
perform the transform.

For instance, the following shows 16 sample portraits (centered around 0.0)
from the Olivetti dataset. On the right hand side are the first 16 singular
vectors reshaped as portraits. Since we only require the top 16 singular
vectors of a dataset with size :math:`n_{samples} = 400` and
:math:`n_{features} = 64 \times 64 = 4096`, the computation time is less than
1s:

.. |orig_img| image:: ../auto_examples/decomposition/images/sphx_glr_plot_faces_decomposition_001.png
   :target: ../auto_examples/decomposition/plot_faces_decomposition.html
   :scale: 60%

.. |pca_img| image:: ../auto_examples/decomposition/images/sphx_glr_plot_faces_decomposition_002.png
   :target: ../auto_examples/decomposition/plot_faces_decomposition.html
   :scale: 60%

.. centered:: |orig_img| |pca_img|

If we note :math:`n_{\max} = \max(n_{\mathrm{samples}}, n_{\mathrm{features}})`
and :math:`n_{\min} = \min(n_{\mathrm{samples}}, n_{\mathrm{features}})`, the
time complexity of the randomized :class:`PCA` is
:math:`O(n_{\max}^2 \cdot n_{\mathrm{components}})` instead of
:math:`O(n_{\max}^2 \cdot n_{\min})` for the exact method implemented in
:class:`PCA`.

The memory footprint of randomized :class:`PCA` is also proportional to
:math:`2 \cdot n_{\max} \cdot n_{\mathrm{components}}` instead of
:math:`n_{\max} \cdot n_{\min}` for the exact method.
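Selecting the randomized solver is a one-parameter change; a minimal sketch
(the data shape mirrors the Olivetti example above, but random data is used
here so the snippet is self-contained):

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-in for 400 face images of 64x64 = 4096 pixels each.
rng = np.random.RandomState(0)
X = rng.randn(400, 4096)

# Only the top 16 components are computed, not all min(400, 4096) of them.
pca = PCA(n_components=16, svd_solver="randomized", random_state=0)
X_reduced = pca.fit_transform(X)
print(X_reduced.shape)  # (400, 16)
```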
Note: the implementation of ``inverse_transform`` in :class:`PCA` with
``svd_solver='randomized'`` is not the exact inverse transform of
``transform`` even when ``whiten=False`` (default).

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_applications_plot_face_recognition.py`
* :ref:`sphx_glr_auto_examples_decomposition_plot_faces_decomposition.py`

.. rubric:: References

* Algorithm 4.3 in :arxiv:`"Finding structure with randomness: Stochastic
  algorithms for constructing approximate matrix decompositions" <0909.4061>`
  Halko, et al., 2009

* :arxiv:`"An implementation of a randomized algorithm for principal
  component analysis" <1412.3510>` A. Szlam et al. 2014

.. _SparsePCA:

Sparse principal components analysis (SparsePCA and MiniBatchSparsePCA)
-----------------------------------------------------------------------

:class:`SparsePCA` is a variant of PCA, with the goal of extracting the set
of sparse components that best reconstruct the data.

Mini-batch sparse PCA (:class:`MiniBatchSparsePCA`) is a variant of
:class:`SparsePCA` that is faster but less accurate. The increased speed is
reached by iterating over small chunks of the set of features, for a given
number of iterations.

Principal component analysis (:class:`PCA`) has the disadvantage that the
components extracted by this method have exclusively dense expressions, i.e.
they have non-zero coefficients when expressed as linear combinations of the
original variables. This can make interpretation difficult. In many cases,
the real underlying components can be more naturally imagined as sparse
vectors; for example in face recognition, components might naturally map to
parts of faces.

Sparse principal components yield a more parsimonious, interpretable
representation, clearly emphasizing which of the original features contribute
to the differences between samples.

The following example illustrates 16 components extracted using sparse PCA
from the Olivetti faces dataset.
It can be seen how the regularization term induces many zeros. Furthermore,
the natural structure of the data causes the non-zero coefficients to be
vertically adjacent. The model does not enforce this mathematically: each
component is a vector :math:`h \in \mathbf{R}^{4096}`, and there is no notion
of vertical adjacency except during the human-friendly visualization as 64x64
pixel images. The fact that the components shown below appear local is the
effect of the inherent structure of the data, which makes such local patterns
minimize reconstruction error. There exist sparsity-inducing norms that take
into account adjacency and different kinds of structure; see [Jen09]_ for a
review of such methods. For more details on how to use Sparse PCA, see the
Examples section below.

.. |spca_img| image:: ../auto_examples/decomposition/images/sphx_glr_plot_faces_decomposition_005.png
   :target: ../auto_examples/decomposition/plot_faces_decomposition.html
   :scale: 60%

.. centered:: |pca_img| |spca_img|

Note that there are many different formulations for the Sparse PCA
problem. The one implemented here is based on [Mrl09]_. The optimization
problem solved is a PCA problem (dictionary learning) with an
:math:`\ell_1` penalty on the components:

.. math::

   (U^*, V^*) = \underset{U, V}{\operatorname{arg\,min\,}} & \frac{1}{2}
                ||X-UV||_{\text{Fro}}^2 + \alpha||V||_{1,1} \\
                \text{subject to } & ||U_k||_2 \leq 1
                \text{ for all } 0 \leq k < n_{components}

:math:`||.||_{\text{Fro}}` stands for the Frobenius norm and
:math:`||.||_{1,1}` stands for the entry-wise matrix norm which is the sum of
the absolute values of all the entries in the matrix. The sparsity-inducing
:math:`||.||_{1,1}` matrix norm also prevents learning components from noise
when few training samples are available. The degree of penalization (and thus
sparsity) can be adjusted through the hyperparameter ``alpha``. Small values
lead to a gently regularized factorization, while larger values shrink many
coefficients to zero.

.. note::

   While in the spirit of an online algorithm, the class
   :class:`MiniBatchSparsePCA` does not implement ``partial_fit`` because the
   algorithm is online along the features direction, not the samples
   direction.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_decomposition_plot_faces_decomposition.py`

.. rubric:: References

.. [Mrl09] "Online Dictionary Learning for Sparse Coding"
   J. Mairal, F. Bach, J. Ponce, G. Sapiro, 2009

.. [Jen09] "Structured Sparse Principal Component Analysis"
   R. Jenatton, G. Obozinski, F. Bach, 2009
\_kernel\_PCA: Kernel Principal Component Analysis (kPCA) ========================================== Exact Kernel PCA ---------------- :class:`KernelPCA` is an extension of PCA which achieves non-linear dimensionality reduction through the use of kernels (see :ref:`metrics`) [Scholkopf1997]\_. It has many applications including denoising, compression and structured prediction (kernel dependency estimation). :class:`KernelPCA` supports both ``transform`` and ``inverse\_transform``. .. figure:: ../auto\_examples/decomposition/images/sphx\_glr\_plot\_kernel\_pca\_002.png :target: ../auto\_examples/decomposition/plot\_kernel\_pca.html :align: center :scale: 75% .. note:: :meth:`KernelPCA.inverse\_transform` relies on a kernel ridge to learn the function mapping samples from the PCA basis into the original feature space [Bakir2003]\_. Thus, the reconstruction obtained with :meth:`KernelPCA.inverse\_transform` is an approximation. See the example linked below for more details. .. rubric:: Examples \* :ref:`sphx\_glr\_auto\_examples\_decomposition\_plot\_kernel\_pca.py` \* :ref:`sphx\_glr\_auto\_examples\_applications\_plot\_digits\_denoising.py` .. rubric:: References .. [Scholkopf1997] Schölkopf, Bernhard, Alexander Smola, and Klaus-Robert Müller. `"Kernel principal component analysis." `\_ International conference on artificial neural networks. Springer, Berlin, Heidelberg, 1997. .. [Bakir2003] Bakır, Gökhan H., Jason Weston, and Bernhard Schölkopf. `"Learning to find pre-images." `\_ Advances in neural information processing systems 16 (2003): 449-456. .. \_kPCA\_Solvers: Choice of solver for Kernel PCA ------------------------------- While in :class:`PCA` the number of components is bounded by the number of features, in :class:`KernelPCA` the number of components is bounded by the number of samples. Many real-world datasets have a large number of samples! 
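As a minimal sketch of exact Kernel PCA with an approximate reconstruction, the snippet below fits :class:`KernelPCA` on a toy two-circles dataset and maps the projection back with ``inverse_transform``. The dataset, kernel choice, and ``gamma`` value are illustrative assumptions, not recommendations.

```python
# Sketch: non-linear dimensionality reduction with KernelPCA on toy data,
# then an approximate reconstruction via the learned inverse map.
from sklearn.datasets import make_circles
from sklearn.decomposition import KernelPCA

# Toy dataset chosen for illustration only.
X, _ = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

kpca = KernelPCA(
    n_components=2,
    kernel="rbf",
    gamma=10,                     # illustrative value, not tuned
    fit_inverse_transform=True,   # enables the kernel-ridge based inverse map
)
X_kpca = kpca.fit_transform(X)
X_back = kpca.inverse_transform(X_kpca)  # approximate pre-images

print(X_kpca.shape)  # (200, 2)
print(X_back.shape)  # (200, 2)
```

Because the inverse map is learned by kernel ridge regression, ``X_back`` is only an approximation of the original samples, as the note above explains.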
In these cases finding \*all\* the components with a full kPCA is a waste of computation time, as data is mostly described by the first few components (e.g. ``n\_components<=100``). In other words, the centered Gram matrix that is eigendecomposed in the Kernel PCA fitting process has an effective rank that is much smaller than its size. This is a situation where approximate eigensolvers can provide speedup with very low precision loss. .. dropdown:: Eigensolvers The optional parameter ``eigen\_solver='randomized'`` can be used to \*significantly\* reduce the computation time when the number of requested ``n\_components`` is small compared with the number of samples. It relies on randomized decomposition methods to find an approximate solution in a shorter time. The time complexity of the randomized :class:`KernelPCA` is :math:`O(n\_{\mathrm{samples}}^2 \cdot n\_{\mathrm{components}})` instead of :math:`O(n\_{\mathrm{samples}}^3)` for the exact method implemented with ``eigen\_solver='dense'``. The memory footprint of randomized :class:`KernelPCA` is also proportional to
https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/decomposition.rst
main
scikit-learn
:math:`2 \cdot n\_{\mathrm{samples}} \cdot n\_{\mathrm{components}}` instead of :math:`n\_{\mathrm{samples}}^2` for the exact method. Note: this technique is the same as in :ref:`RandomizedPCA`. In addition to the above two solvers, ``eigen\_solver='arpack'`` can be used as an alternate way to get an approximate decomposition. In practice, this method only provides reasonable execution times when the number of components to find is extremely small. It is enabled by default when the desired number of components is less than 10 (strict) and the number of samples is more than 200 (strict). See :class:`KernelPCA` for details. .. rubric:: References \* \*dense\* solver: `scipy.linalg.eigh documentation `\_ \* \*randomized\* solver: \* Algorithm 4.3 in :arxiv:`"Finding structure with randomness: Stochastic algorithms for constructing approximate matrix decompositions" <0909.4061>` Halko, et al. (2009) \* :arxiv:`"An implementation of a randomized algorithm for principal component analysis" <1412.3510>` A. Szlam et al. (2014) \* \*arpack\* solver: `scipy.sparse.linalg.eigsh documentation `\_ R. B. Lehoucq, D. C. Sorensen, and C. Yang, (1998) .. \_LSA: Truncated singular value decomposition and latent semantic analysis =================================================================== :class:`TruncatedSVD` implements a variant of singular value decomposition (SVD) that only computes the :math:`k` largest singular values, where :math:`k` is a user-specified parameter. 
:class:`TruncatedSVD` is very similar to :class:`PCA`, but differs in that the matrix :math:`X` does not need to be centered. When the columnwise (per-feature) means of :math:`X` are subtracted from the feature values, truncated SVD on the resulting matrix is equivalent to PCA. .. dropdown:: About truncated SVD and latent semantic analysis (LSA) When truncated SVD is applied to term-document matrices (as returned by :class:`~sklearn.feature\_extraction.text.CountVectorizer` or :class:`~sklearn.feature\_extraction.text.TfidfVectorizer`), this transformation is known as `latent semantic analysis `\_ (LSA), because it transforms such matrices to a "semantic" space of low dimensionality. In particular, LSA is known to combat the effects of synonymy and polysemy (both of which roughly mean there are multiple meanings per word), which cause term-document matrices to be overly sparse and exhibit poor similarity under measures such as cosine similarity. .. note:: LSA is also known as latent semantic indexing, LSI, though strictly that refers to its use in persistent indexes for information retrieval purposes. Mathematically, truncated SVD applied to training samples :math:`X` produces a low-rank approximation of :math:`X`: .. math:: X \approx X\_k = U\_k \Sigma\_k V\_k^\top After this operation, :math:`U\_k \Sigma\_k` is the transformed training set with :math:`k` features (called ``n\_components`` in the API). To also transform a test set :math:`X`, we multiply it with :math:`V\_k`: .. math:: X' = X V\_k .. note:: Most treatments of LSA in the natural language processing (NLP) and information retrieval (IR) literature swap the axes of the matrix :math:`X` so that it has shape ``(n\_features, n\_samples)``. We present LSA in a different way that matches the scikit-learn API better, but the singular values found are the same. 
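The LSA pipeline described above can be sketched on a tiny made-up corpus: vectorize with tf-idf, then project onto the top singular directions with :class:`TruncatedSVD`. The documents and ``n_components=2`` are illustrative assumptions.

```python
# Sketch: latent semantic analysis = tf-idf + TruncatedSVD.
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "cats and dogs are pets",
    "logs and mats are objects",
]
# sublinear_tf / use_idf as suggested for LSA settings.
X_tfidf = TfidfVectorizer(sublinear_tf=True, use_idf=True).fit_transform(corpus)

svd = TruncatedSVD(n_components=2, random_state=0)
X_lsa = svd.fit_transform(X_tfidf)  # U_k * Sigma_k: documents in "semantic" space

print(X_lsa.shape)  # (4, 2)
```

Note that ``fit_transform`` returns :math:`U_k \Sigma_k` directly, matching the equations above; new documents would be projected with ``svd.transform``.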
While the :class:`TruncatedSVD` transformer works with any feature matrix, using it on tf-idf matrices is recommended over raw frequency counts in an LSA/document processing setting. In particular, sublinear scaling and inverse document frequency should be turned on (``sublinear\_tf=True, use\_idf=True``) to bring the feature values closer to a Gaussian distribution, compensating for LSA's erroneous assumptions about textual data. .. rubric:: Examples \* :ref:`sphx\_glr\_auto\_examples\_text\_plot\_document\_clustering.py` .. rubric:: References \* Christopher D. Manning, Prabhakar Raghavan and Hinrich Schütze (2008), \*Introduction to Information Retrieval\*, Cambridge University Press, chapter 18: `Matrix decompositions & latent semantic indexing `\_ .. \_DictionaryLearning: Dictionary Learning ===================
.. \_SparseCoder: Sparse coding with a precomputed dictionary ------------------------------------------- The :class:`SparseCoder` object is an estimator that can be used to transform signals into sparse linear combinations of atoms from a fixed, precomputed dictionary such as a discrete wavelet basis. This object therefore does not implement a ``fit`` method. The transformation amounts to a sparse coding problem: finding a representation of the data as a linear combination of as few dictionary atoms as possible. All variations of dictionary learning implement the following transform methods, controllable via the ``transform\_method`` initialization parameter: \* Orthogonal matching pursuit (:ref:`omp`) \* Least-angle regression (:ref:`least\_angle\_regression`) \* Lasso computed by least-angle regression \* Lasso using coordinate descent (:ref:`lasso`) \* Thresholding Thresholding is very fast but it does not yield accurate reconstructions. Such thresholded representations have nevertheless been shown in the literature to be useful for classification tasks. For image reconstruction tasks, orthogonal matching pursuit yields the most accurate, unbiased reconstruction. The dictionary learning objects offer, via the ``split\_code`` parameter, the possibility to separate the positive and negative values in the results of sparse coding. 
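A minimal sketch of sparse coding against a fixed dictionary: here the dictionary is a random matrix standing in for a real precomputed basis (e.g. wavelets), and the atom count and sparsity level are arbitrary assumptions.

```python
# Sketch: SparseCoder with a fixed, precomputed dictionary (no fit step).
import numpy as np
from sklearn.decomposition import SparseCoder

rng = np.random.RandomState(0)

# 15 unit-norm atoms of dimension 8; rows of `dictionary` are the atoms.
dictionary = rng.randn(15, 8)
dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)

coder = SparseCoder(
    dictionary=dictionary,
    transform_algorithm="omp",     # orthogonal matching pursuit
    transform_n_nonzero_coefs=3,   # at most 3 active atoms per signal
)
signals = rng.randn(5, 8)
code = coder.transform(signals)    # sparse codes, shape (5, 15)

print(code.shape)  # (5, 15)
print((code != 0).sum(axis=1))     # non-zeros per sample, each <= 3
```

Each signal is thus approximated as ``code @ dictionary`` using only a few atoms, which is exactly the sparse coding problem stated above.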
This is useful when dictionary learning is used for extracting features that will be used for supervised learning, because it allows the learning algorithm to assign different weights to the negative loadings of a particular atom than to the corresponding positive loadings. The split code for a single sample has length ``2 \* n\_components`` and is constructed using the following rule: First, the regular code of length ``n\_components`` is computed. Then, the first ``n\_components`` entries of the ``split\_code`` are filled with the positive part of the regular code vector. The second half of the split code is filled with the negative part of the code vector, only with a positive sign. Therefore, the split\_code is non-negative. .. rubric:: Examples \* :ref:`sphx\_glr\_auto\_examples\_decomposition\_plot\_sparse\_coding.py` Generic dictionary learning --------------------------- Dictionary learning (:class:`DictionaryLearning`) is a matrix factorization problem that amounts to finding a (usually overcomplete) dictionary that will perform well at sparsely encoding the fitted data. Representing data as sparse combinations of atoms from an overcomplete dictionary is suggested to be the way the mammalian primary visual cortex works. Consequently, dictionary learning applied on image patches has been shown to give good results in image processing tasks such as image completion, inpainting and denoising, as well as for supervised recognition tasks. Dictionary learning is an optimization problem solved by alternately updating the sparse code, as a solution to multiple Lasso problems, considering the dictionary fixed, and then updating the dictionary to best fit the sparse code. .. math:: (U^\*, V^\*) = \underset{U, V}{\operatorname{arg\,min\,}} & \frac{1}{2} ||X-UV||\_{\text{Fro}}^2+\alpha||U||\_{1,1} \\ \text{subject to } & ||V\_k||\_2 \leq 1 \text{ for all } 0 \leq k < n\_{\mathrm{atoms}} .. 
|pca\_img2| image:: ../auto\_examples/decomposition/images/sphx\_glr\_plot\_faces\_decomposition\_002.png :target: ../auto\_examples/decomposition/plot\_faces\_decomposition.html :scale: 60% .. |dict\_img2| image:: ../auto\_examples/decomposition/images/sphx\_glr\_plot\_faces\_decomposition\_007.png :target: ../auto\_examples/decomposition/plot\_faces\_decomposition.html :scale: 60% .. centered:: |pca\_img2| |dict\_img2| :math:`||.||\_{\text{Fro}}` stands for the Frobenius norm and :math:`||.||\_{1,1}` stands for the entry-wise matrix norm which is the sum of the absolute values of all the entries in the matrix. After using such a procedure to fit the dictionary, the transform is simply a sparse coding step that shares the same implementation with all dictionary learning objects (see :ref:`SparseCoder`). It is also possible to constrain the dictionary and/or code to be positive to match constraints that may be present in the data. Below are the faces with different positivity constraints applied. Red indicates negative
values, blue indicates positive values, and white represents zeros. .. |dict\_img\_pos1| image:: ../auto\_examples/decomposition/images/sphx\_glr\_plot\_faces\_decomposition\_010.png :target: ../auto\_examples/decomposition/plot\_faces\_decomposition.html :scale: 60% .. |dict\_img\_pos2| image:: ../auto\_examples/decomposition/images/sphx\_glr\_plot\_faces\_decomposition\_011.png :target: ../auto\_examples/decomposition/plot\_faces\_decomposition.html :scale: 60% .. |dict\_img\_pos3| image:: ../auto\_examples/decomposition/images/sphx\_glr\_plot\_faces\_decomposition\_012.png :target: ../auto\_examples/decomposition/plot\_faces\_decomposition.html :scale: 60% .. |dict\_img\_pos4| image:: ../auto\_examples/decomposition/images/sphx\_glr\_plot\_faces\_decomposition\_013.png :target: ../auto\_examples/decomposition/plot\_faces\_decomposition.html :scale: 60% .. centered:: |dict\_img\_pos1| |dict\_img\_pos2| .. centered:: |dict\_img\_pos3| |dict\_img\_pos4| .. rubric:: References \* `"Online dictionary learning for sparse coding" `\_ J. Mairal, F. Bach, J. Ponce, G. Sapiro, 2009 .. \_MiniBatchDictionaryLearning: Mini-batch dictionary learning ------------------------------ :class:`MiniBatchDictionaryLearning` implements a faster, but less accurate version of the dictionary learning algorithm that is better suited for large datasets. By default, :class:`MiniBatchDictionaryLearning` divides the data into mini-batches and optimizes in an online manner by cycling over the mini-batches for the specified number of iterations. However, at the moment it does not implement a stopping condition. 
The estimator also implements ``partial\_fit``, which updates the dictionary by iterating only once over a mini-batch. This can be used for online learning when the data is not readily available from the start, or when the data does not fit into memory. .. currentmodule:: sklearn.cluster .. image:: ../auto\_examples/cluster/images/sphx\_glr\_plot\_dict\_face\_patches\_001.png :target: ../auto\_examples/cluster/plot\_dict\_face\_patches.html :scale: 50% :align: right .. topic:: \*\*Clustering for dictionary learning\*\* Note that when using dictionary learning to extract a representation (e.g. for sparse coding) clustering can be a good proxy to learn the dictionary. For instance the :class:`MiniBatchKMeans` estimator is computationally efficient and implements on-line learning with a ``partial\_fit`` method. Example: :ref:`sphx\_glr\_auto\_examples\_cluster\_plot\_dict\_face\_patches.py` .. currentmodule:: sklearn.decomposition The following image shows what a dictionary, learned from 4x4 pixel image patches extracted from part of the image of a raccoon face, looks like. .. figure:: ../auto\_examples/decomposition/images/sphx\_glr\_plot\_image\_denoising\_001.png :target: ../auto\_examples/decomposition/plot\_image\_denoising.html :align: center :scale: 50% .. rubric:: Examples \* :ref:`sphx\_glr\_auto\_examples\_decomposition\_plot\_image\_denoising.py` .. \_FA: Factor Analysis =============== In unsupervised learning we only have a dataset :math:`X = \{x\_1, x\_2, \dots, x\_n \}`. How can this dataset be described mathematically? A very simple `continuous latent variable` model for :math:`X` is .. math:: x\_i = W h\_i + \mu + \epsilon The vector :math:`h\_i` is called "latent" because it is unobserved. :math:`\epsilon` is considered a noise term distributed according to a Gaussian with mean 0 and covariance :math:`\Psi` (i.e. :math:`\epsilon \sim \mathcal{N}(0, \Psi)`), and :math:`\mu` is some arbitrary offset vector. 
Such a model is called "generative" as it describes how :math:`x\_i` is generated from :math:`h\_i`. If we use all the :math:`x\_i`'s as columns to form a matrix :math:`\mathbf{X}` and all the :math:`h\_i`'s as columns of a matrix :math:`\mathbf{H}` then we can write (with suitably defined :math:`\mathbf{M}` and :math:`\mathbf{E}`): .. math:: \mathbf{X} = W \mathbf{H} + \mathbf{M} + \mathbf{E} In other words, we \*decomposed\* matrix :math:`\mathbf{X}`. If :math:`h\_i` is given, the above equation automatically implies the following probabilistic interpretation: .. math:: p(x\_i|h\_i) = \mathcal{N}(Wh\_i + \mu, \Psi) For a complete probabilistic model we also need a prior distribution for the latent variable :math:`h`. The most straightforward assumption (based on the nice properties of the Gaussian distribution) is :math:`h \sim \mathcal{N}(0, \mathbf{I})`. This yields a Gaussian as the marginal distribution of :math:`x`: .. math:: p(x) = \mathcal{N}(\mu, WW^T + \Psi) Now, without any further assumptions the idea of having a latent variable :math:`h` would be superfluous -- :math:`x` can be completely modelled with a mean and a covariance. We need to impose some more specific structure on one of these two parameters. A simple additional assumption regards the structure of the
error covariance :math:`\Psi`: \* :math:`\Psi = \sigma^2 \mathbf{I}`: This assumption leads to the probabilistic model of :class:`PCA`. \* :math:`\Psi = \mathrm{diag}(\psi\_1, \psi\_2, \dots, \psi\_n)`: This model is called :class:`FactorAnalysis`, a classical statistical model. The matrix W is sometimes called the "factor loading matrix". Both models essentially estimate a Gaussian with a low-rank covariance matrix. Because both models are probabilistic they can be integrated in more complex models, e.g. Mixture of Factor Analysers. One gets very different models (e.g. :class:`FastICA`) if non-Gaussian priors on the latent variables are assumed. Factor analysis \*can\* produce similar components (the columns of its loading matrix) to :class:`PCA`. However, one cannot make any general statements about these components (e.g. whether they are orthogonal): .. |pca\_img3| image:: ../auto\_examples/decomposition/images/sphx\_glr\_plot\_faces\_decomposition\_002.png :target: ../auto\_examples/decomposition/plot\_faces\_decomposition.html :scale: 60% .. |fa\_img3| image:: ../auto\_examples/decomposition/images/sphx\_glr\_plot\_faces\_decomposition\_008.png :target: ../auto\_examples/decomposition/plot\_faces\_decomposition.html :scale: 60% .. centered:: |pca\_img3| |fa\_img3| The main advantage of Factor Analysis over :class:`PCA` is that it can model the variance in every direction of the input space independently (heteroscedastic noise): .. 
figure:: ../auto\_examples/decomposition/images/sphx\_glr\_plot\_faces\_decomposition\_009.png :target: ../auto\_examples/decomposition/plot\_faces\_decomposition.html :align: center :scale: 75% This allows better model selection than probabilistic PCA in the presence of heteroscedastic noise: .. figure:: ../auto\_examples/decomposition/images/sphx\_glr\_plot\_pca\_vs\_fa\_model\_selection\_002.png :target: ../auto\_examples/decomposition/plot\_pca\_vs\_fa\_model\_selection.html :align: center :scale: 75% Factor Analysis is often followed by a rotation of the factors (with the parameter `rotation`), usually to improve interpretability. For example, Varimax rotation maximizes the sum of the variances of the squared loadings, i.e., it tends to produce sparser factors, which are influenced by only a few features each (the "simple structure"). See e.g., the first example below. .. rubric:: Examples \* :ref:`sphx\_glr\_auto\_examples\_decomposition\_plot\_varimax\_fa.py` \* :ref:`sphx\_glr\_auto\_examples\_decomposition\_plot\_pca\_vs\_fa\_model\_selection.py` .. \_ICA: Independent component analysis (ICA) ==================================== Independent component analysis separates a multivariate signal into additive subcomponents that are maximally independent. It is implemented in scikit-learn using the :class:`Fast ICA ` algorithm. Typically, ICA is not used for reducing dimensionality but for separating superimposed signals. Since the ICA model does not include a noise term, for the model to be correct, whitening must be applied. This can be done internally using the `whiten` argument or manually using one of the PCA variants. It is classically used to separate mixed signals (a problem known as \*blind source separation\*), as in the example below: .. 
figure:: ../auto\_examples/decomposition/images/sphx\_glr\_plot\_ica\_blind\_source\_separation\_001.png :target: ../auto\_examples/decomposition/plot\_ica\_blind\_source\_separation.html :align: center :scale: 60% ICA can also be used as yet another non linear decomposition that finds components with some sparsity: .. |pca\_img4| image:: ../auto\_examples/decomposition/images/sphx\_glr\_plot\_faces\_decomposition\_002.png :target: ../auto\_examples/decomposition/plot\_faces\_decomposition.html :scale: 60% .. |ica\_img4| image:: ../auto\_examples/decomposition/images/sphx\_glr\_plot\_faces\_decomposition\_004.png :target: ../auto\_examples/decomposition/plot\_faces\_decomposition.html :scale: 60% .. centered:: |pca\_img4| |ica\_img4| .. rubric:: Examples \* :ref:`sphx\_glr\_auto\_examples\_decomposition\_plot\_ica\_blind\_source\_separation.py` \* :ref:`sphx\_glr\_auto\_examples\_decomposition\_plot\_ica\_vs\_pca.py` \* :ref:`sphx\_glr\_auto\_examples\_decomposition\_plot\_faces\_decomposition.py` .. \_NMF: Non-negative matrix factorization (NMF or NNMF) =============================================== NMF with the Frobenius norm --------------------------- :class:`NMF` [1]\_ is an alternative approach to decomposition that assumes that the data and the components are non-negative. :class:`NMF` can be plugged in instead of :class:`PCA` or its variants, in the cases where the data matrix does not contain negative values. It finds a decomposition of samples :math:`X` into two matrices :math:`W` and :math:`H` of non-negative elements, by optimizing the distance :math:`d` between :math:`X` and the matrix product :math:`WH`. The most widely used distance function is the squared Frobenius norm, which is an obvious extension of the Euclidean norm to matrices: .. math:: d\_{\mathrm{Fro}}(X, Y) = \frac{1}{2} ||X - Y||\_{\mathrm{Fro}}^2 = \frac{1}{2} \sum\_{i,j} (X\_{ij} - {Y}\_{ij})^2 Unlike :class:`PCA`, the representation of a vector is obtained in an additive
fashion, by superimposing the components, without subtracting. Such additive models are efficient for representing images and text. It has been observed in [Hoyer, 2004] [2]\_ that, when carefully constrained, :class:`NMF` can produce a parts-based representation of the dataset, resulting in interpretable models. The following example displays 16 sparse components found by :class:`NMF` from the images in the Olivetti faces dataset, in comparison with the PCA eigenfaces. .. |pca\_img5| image:: ../auto\_examples/decomposition/images/sphx\_glr\_plot\_faces\_decomposition\_002.png :target: ../auto\_examples/decomposition/plot\_faces\_decomposition.html :scale: 60% .. |nmf\_img5| image:: ../auto\_examples/decomposition/images/sphx\_glr\_plot\_faces\_decomposition\_003.png :target: ../auto\_examples/decomposition/plot\_faces\_decomposition.html :scale: 60% .. centered:: |pca\_img5| |nmf\_img5| The `init` parameter determines the initialization method applied, which has a great impact on the performance of the method. :class:`NMF` implements the method Nonnegative Double Singular Value Decomposition. NNDSVD [4]\_ is based on two SVD processes, one approximating the data matrix, the other approximating positive sections of the resulting partial SVD factors utilizing an algebraic property of unit rank matrices. The basic NNDSVD algorithm is better suited to sparse factorization. Its variants NNDSVDa (in which all zeros are set equal to the mean of all elements of the data), and NNDSVDar (in which the zeros are set to random perturbations less than the mean of the data divided by 100) are recommended in the dense case. 
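A minimal sketch comparing a few `init` choices on a small random non-negative matrix; the matrix, rank, and ``max_iter`` value are illustrative assumptions, and the relative reconstruction errors will depend on the data.

```python
# Sketch: effect of the `init` parameter on NMF on a toy matrix.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.RandomState(0)
X = np.abs(rng.randn(20, 10))  # NMF requires non-negative input

for init in ("nndsvd", "nndsvda", "random"):
    model = NMF(n_components=4, init=init, random_state=0, max_iter=500)
    W = model.fit_transform(X)       # shape (20, 4)
    H = model.components_            # shape (4, 10)
    err = np.linalg.norm(X - W @ H)  # Frobenius reconstruction error
    print(init, round(err, 3))
```

Here ``nndsvd`` fills many entries of ``W`` and ``H`` with exact zeros, while ``nndsvda`` replaces them with the data mean, which matters when combining the initialization with the 'mu' solver as noted below.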
Note that the Multiplicative Update ('mu') solver cannot update zeros present in the initialization, so it leads to poorer results when used jointly with the basic NNDSVD algorithm which introduces a lot of zeros; in this case, NNDSVDa or NNDSVDar should be preferred. :class:`NMF` can also be initialized with correctly scaled random non-negative matrices by setting `init="random"`. An integer seed or a ``RandomState`` can also be passed to `random\_state` to control reproducibility. In :class:`NMF`, L1 and L2 priors can be added to the loss function in order to regularize the model. The L2 prior uses the Frobenius norm, while the L1 prior uses an elementwise L1 norm. As in :class:`~sklearn.linear\_model.ElasticNet`, we control the combination of L1 and L2 with the `l1\_ratio` (:math:`\rho`) parameter, and the intensity of the regularization with the `alpha\_W` and `alpha\_H` (:math:`\alpha\_W` and :math:`\alpha\_H`) parameters. The priors are scaled by the number of samples (:math:`n\\_samples`) for `H` and the number of features (:math:`n\\_features`) for `W` to keep their impact balanced with respect to one another and to the data fit term as independent as possible of the size of the training set. The prior terms are then: .. math:: (\alpha\_W \rho ||W||\_1 + \frac{\alpha\_W(1-\rho)}{2} ||W||\_{\mathrm{Fro}} ^ 2) \* n\\_features + (\alpha\_H \rho ||H||\_1 + \frac{\alpha\_H(1-\rho)}{2} ||H||\_{\mathrm{Fro}} ^ 2) \* n\\_samples and the regularized objective function is: .. math:: d\_{\mathrm{Fro}}(X, WH) + (\alpha\_W \rho ||W||\_1 + \frac{\alpha\_W(1-\rho)}{2} ||W||\_{\mathrm{Fro}} ^ 2) \* n\\_features + (\alpha\_H \rho ||H||\_1 + \frac{\alpha\_H(1-\rho)}{2} ||H||\_{\mathrm{Fro}} ^ 2) \* n\\_samples NMF with a beta-divergence -------------------------- As described previously, the most widely used distance function is the squared Frobenius norm, which is an obvious extension of the Euclidean norm to matrices: .. 
math:: d\_{\mathrm{Fro}}(X, Y) = \frac{1}{2} ||X - Y||\_{Fro}^2 = \frac{1}{2} \sum\_{i,j} (X\_{ij} - {Y}\_{ij})^2 Other distance functions can be used in NMF as, for example, the (generalized) Kullback-Leibler (KL) divergence, also referred to as the I-divergence: .. math:: d\_{KL}(X, Y) = \sum\_{i,j} (X\_{ij} \log(\frac{X\_{ij}}{Y\_{ij}}) - X\_{ij} + Y\_{ij}) Or, the Itakura-Saito (IS) divergence: .. math:: d\_{IS}(X, Y) = \sum\_{i,j} (\frac{X\_{ij}}{Y\_{ij}} - \log(\frac{X\_{ij}}{Y\_{ij}}) - 1) These three distances are special cases of the beta-divergence family, 
with :math:`\beta = 2, 1, 0` respectively [6]\_. The beta-divergence is defined by: .. math:: d\_{\beta}(X, Y) = \sum\_{i,j} \frac{1}{\beta(\beta - 1)}(X\_{ij}^\beta + (\beta-1)Y\_{ij}^\beta - \beta X\_{ij} Y\_{ij}^{\beta - 1}) .. image:: ../images/beta\_divergence.png :align: center :scale: 75% Note that this definition is not valid if :math:`\beta \in (0; 1)`, yet it can be continuously extended to the definitions of :math:`d\_{KL}` and :math:`d\_{IS}` respectively. .. dropdown:: NMF implemented solvers :class:`NMF` implements two solvers, using Coordinate Descent ('cd') [5]\_, and Multiplicative Update ('mu') [6]\_. The 'mu' solver can optimize every beta-divergence, including of course the Frobenius norm (:math:`\beta=2`), the (generalized) Kullback-Leibler divergence (:math:`\beta=1`) and the Itakura-Saito divergence (:math:`\beta=0`). Note that for :math:`\beta \in (1; 2)`, the 'mu' solver is significantly faster than for other values of :math:`\beta`. Note also that with a negative (or 0, i.e. 'itakura-saito') :math:`\beta`, the input matrix cannot contain zero values. The 'cd' solver can only optimize the Frobenius norm. Due to the underlying non-convexity of NMF, the different solvers may converge to different minima, even when optimizing the same distance function. NMF is best used with the ``fit\_transform`` method, which returns the matrix W. 
The matrix H is stored into the fitted model in the ``components\_`` attribute; the method ``transform`` will decompose a new matrix X\_new based on these stored components::

    >>> import numpy as np
    >>> X = np.array([[1, 1], [2, 1], [3, 1.2], [4, 1], [5, 0.8], [6, 1]])
    >>> from sklearn.decomposition import NMF
    >>> model = NMF(n\_components=2, init='random', random\_state=0)
    >>> W = model.fit\_transform(X)
    >>> H = model.components\_
    >>> X\_new = np.array([[1, 0], [1, 6.1], [1, 0], [1, 4], [3.2, 1], [0, 4]])
    >>> W\_new = model.transform(X\_new)

.. rubric:: Examples \* :ref:`sphx\_glr\_auto\_examples\_decomposition\_plot\_faces\_decomposition.py` \* :ref:`sphx\_glr\_auto\_examples\_applications\_plot\_topics\_extraction\_with\_nmf\_lda.py` .. \_MiniBatchNMF: Mini-batch Non Negative Matrix Factorization -------------------------------------------- :class:`MiniBatchNMF` [7]\_ implements a faster, but less accurate version of the non negative matrix factorization (i.e. :class:`~sklearn.decomposition.NMF`), better suited for large datasets. By default, :class:`MiniBatchNMF` divides the data into mini-batches and optimizes the NMF model in an online manner by cycling over the mini-batches for the specified number of iterations. The ``batch\_size`` parameter controls the size of the batches. In order to speed up the mini-batch algorithm it is also possible to scale past batches, giving them less importance than newer batches. This is done by introducing a so-called forgetting factor controlled by the ``forget\_factor`` parameter. The estimator also implements ``partial\_fit``, which updates ``H`` by iterating only once over a mini-batch. This can be used for online learning when the data is not readily available from the start, or when the data does not fit into memory. .. rubric:: References .. [1] `"Learning the parts of objects by non-negative matrix factorization" `\_ D. Lee, S. Seung, 1999 .. [2] `"Non-negative Matrix Factorization with Sparseness Constraints" `\_ P. 
    Hoyer, 2004

.. [4] `"SVD based initialization: A head start for nonnegative matrix factorization" `_
    C. Boutsidis, E. Gallopoulos, 2008

.. [5] `"Fast local algorithms for large scale nonnegative matrix and tensor factorizations." `_
    A. Cichocki, A. Phan, 2009

.. [6] :arxiv:`"Algorithms for nonnegative matrix factorization with the beta-divergence" <1010.1763>`
    C. Fevotte, J. Idier, 2011

.. [7] :arxiv:`"Online algorithms for nonnegative matrix factorization with the Itakura-Saito divergence" <1106.4198>`
    A. Lefevre, F. Bach, C. Fevotte, 2011

.. _LatentDirichletAllocation:

Latent Dirichlet Allocation (LDA)
=================================
https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/decomposition.rst

Latent Dirichlet Allocation is a generative probabilistic model for
collections of discrete datasets such as text corpora. It is also a topic
model that is used for discovering abstract topics from a collection of
documents.

The graphical model of LDA is a three-level generative model:

.. image:: ../images/lda_model_graph.png
    :align: center

Note on notations presented in the graphical model above, which can be found
in Hoffman et al. (2013):

* The corpus is a collection of :math:`D` documents.
* A document :math:`d \in D` is a sequence of :math:`N_d` words.
* There are :math:`K` topics in the corpus.
* The boxes represent repeated sampling.

In the graphical model, each node is a random variable and has a role in the
generative process. A shaded node indicates an observed variable and an
unshaded node indicates a hidden (latent) variable. In this case, words in the
corpus are the only data that we observe. The latent variables determine the
random mixture of topics in the corpus and the distribution of words in the
documents. The goal of LDA is to use the observed words to infer the hidden
topic structure.

.. dropdown:: Details on modeling text corpora

    When modeling text corpora, the model assumes the following generative
    process for a corpus with :math:`D` documents and :math:`K` topics, with
    :math:`K` corresponding to `n_components` in the API:

    1. For each topic :math:`k \in K`, draw :math:`\beta_k \sim
       \mathrm{Dirichlet}(\eta)`. This provides a distribution over the words,
       i.e. the probability of a word appearing in topic :math:`k`.
       :math:`\eta` corresponds to `topic_word_prior`.

    2.
       For each document :math:`d \in D`, draw the topic proportions
       :math:`\theta_d \sim \mathrm{Dirichlet}(\alpha)`. :math:`\alpha`
       corresponds to `doc_topic_prior`.

    3. For each word :math:`n=1,\cdots,N_d` in document :math:`d`:

       a. Draw the topic assignment :math:`z_{dn} \sim
          \mathrm{Multinomial}(\theta_d)`
       b. Draw the observed word :math:`w_{dn} \sim
          \mathrm{Multinomial}(\beta_{z_{dn}})`

    For parameter estimation, the posterior distribution is:

    .. math::
        p(z, \theta, \beta |w, \alpha, \eta) =
            \frac{p(z, \theta, \beta|\alpha, \eta)}{p(w|\alpha, \eta)}

    Since the posterior is intractable, the variational Bayesian method uses a
    simpler distribution :math:`q(z,\theta,\beta | \lambda, \phi, \gamma)` to
    approximate it, and those variational parameters :math:`\lambda`,
    :math:`\phi`, :math:`\gamma` are optimized to maximize the Evidence Lower
    Bound (ELBO):

    .. math::
        \log\: P(w | \alpha, \eta) \geq L(w,\phi,\gamma,\lambda) \overset{\triangle}{=}
            E_{q}[\log\:p(w,z,\theta,\beta|\alpha,\eta)] - E_{q}[\log\:q(z, \theta, \beta)]

    Maximizing ELBO is equivalent to minimizing the Kullback-Leibler (KL)
    divergence between :math:`q(z,\theta,\beta)` and the true posterior
    :math:`p(z, \theta, \beta |w, \alpha, \eta)`.

:class:`LatentDirichletAllocation` implements the online variational Bayes
algorithm and supports both online and batch update methods. While the batch
method updates variational variables after each full pass through the data,
the online method updates variational variables from mini-batch data points.

.. note::

    Although the online method is guaranteed to converge to a local optimum
    point, the quality of the optimum point and the speed of convergence may
    depend on mini-batch size and attributes related to learning rate setting.

When :class:`LatentDirichletAllocation` is applied on a "document-term"
matrix, the matrix will be decomposed into a "topic-term" matrix and a
"document-topic" matrix.
While the "topic-term" matrix is stored as `components_` in the model, the
"document-topic" matrix can be calculated from the ``transform`` method.

:class:`LatentDirichletAllocation` also implements the ``partial_fit`` method.
This is used when data can be fetched sequentially.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_applications_plot_topics_extraction_with_nmf_lda.py`

.. rubric:: References

* `"Latent Dirichlet Allocation" `_
  D. Blei, A. Ng, M. Jordan, 2003

* `"Online Learning for Latent Dirichlet Allocation" `_
  M. Hoffman, D. Blei, F. Bach, 2010

* `"Stochastic Variational Inference" `_
  M. Hoffman, D. Blei, C. Wang, J. Paisley, 2013

* `"The varimax criterion for analytic rotation in factor analysis" `_
  H. F. Kaiser, 1958

See also :ref:`nca_dim_reduction` for dimensionality reduction with
Neighborhood Components Analysis.
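The LDA workflow described above (fit on a "document-term" matrix, read the "topic-term" matrix from `components_` and the "document-topic" matrix from ``transform``) can be sketched on a tiny synthetic count matrix (toy data, hypothetical topic count):

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

# Toy "document-term" matrix: 5 documents, 8 vocabulary terms (word counts).
rng = np.random.RandomState(0)
X = rng.randint(0, 5, size=(5, 8))

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topic = lda.fit_transform(X)   # the "document-topic" matrix

print(doc_topic.shape)             # (5, 2)
print(lda.components_.shape)       # (2, 8) -- the "topic-term" matrix
# Each row of doc_topic is a distribution over topics.
print(bool(np.allclose(doc_topic.sum(axis=1), 1.0)))  # True
```

In practice the count matrix would come from something like :class:`~sklearn.feature_extraction.text.CountVectorizer` applied to a corpus.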
.. _ensemble:

===========================================================================
Ensembles: Gradient boosting, random forests, bagging, voting, stacking
===========================================================================

.. currentmodule:: sklearn.ensemble

**Ensemble methods** combine the predictions of several base estimators built
with a given learning algorithm in order to improve generalizability /
robustness over a single estimator.

Two very famous examples of ensemble methods are :ref:`gradient-boosted trees `
and :ref:`random forests `.

More generally, ensemble models can be applied to any base learner beyond
trees, in averaging methods such as :ref:`Bagging methods `,
:ref:`model stacking `, or :ref:`Voting `, or in
boosting, as :ref:`AdaBoost `.

.. _gradient_boosting:

Gradient-boosted trees
======================

`Gradient Tree Boosting `_ or Gradient Boosted Decision Trees (GBDT) is a
generalization of boosting to arbitrary differentiable loss functions, see the
seminal work of [Friedman2001]_. GBDT is an excellent model for both
regression and classification, in particular for tabular data.

.. topic:: :class:`GradientBoostingClassifier` vs :class:`HistGradientBoostingClassifier`

    Scikit-learn provides two implementations of gradient-boosted trees:
    :class:`HistGradientBoostingClassifier` vs
    :class:`GradientBoostingClassifier` for classification, and the
    corresponding classes for regression. The former can be **orders of
    magnitude faster** than the latter when the number of samples is larger
    than tens of thousands of samples.

    Missing values and categorical data are natively supported by the Hist...
    version, removing the need for additional preprocessing such as
    imputation.

    :class:`GradientBoostingClassifier` and :class:`GradientBoostingRegressor`
    might be preferred for small sample sizes since binning may lead to split
    points that are too approximate in this setting.

.. _histogram_based_gradient_boosting:
Histogram-Based Gradient Boosting
---------------------------------

Scikit-learn 0.21 introduced two new implementations of gradient boosted
trees, namely :class:`HistGradientBoostingClassifier` and
:class:`HistGradientBoostingRegressor`, inspired by `LightGBM `__ (See
[LightGBM]_).

These histogram-based estimators can be **orders of magnitude faster** than
:class:`GradientBoostingClassifier` and :class:`GradientBoostingRegressor`
when the number of samples is larger than tens of thousands of samples.

They also have built-in support for missing values, which avoids the need for
an imputer.

These fast estimators first bin the input samples ``X`` into integer-valued
bins (typically 256 bins) which tremendously reduces the number of splitting
points to consider, and allows the algorithm to leverage integer-based data
structures (histograms) instead of relying on sorted continuous values when
building the trees. The API of these estimators is slightly different, and
some of the features from :class:`GradientBoostingClassifier` and
:class:`GradientBoostingRegressor` are not yet supported, for instance some
loss functions.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_inspection_plot_partial_dependence.py`
* :ref:`sphx_glr_auto_examples_ensemble_plot_forest_hist_grad_boosting_comparison.py`

Usage
^^^^^

Most of the parameters are unchanged from :class:`GradientBoostingClassifier`
and :class:`GradientBoostingRegressor`.
One exception is the ``max_iter`` parameter that replaces ``n_estimators``,
and controls the number of iterations of the boosting process::

    >>> from sklearn.ensemble import HistGradientBoostingClassifier
    >>> from sklearn.datasets import make_hastie_10_2

    >>> X, y = make_hastie_10_2(random_state=0)
    >>> X_train, X_test = X[:2000], X[2000:]
    >>> y_train, y_test = y[:2000], y[2000:]

    >>> clf = HistGradientBoostingClassifier(max_iter=100).fit(X_train, y_train)
    >>> clf.score(X_test, y_test)
    0.8965

Available losses for **regression** are:

- 'squared_error', which is the default loss;
- 'absolute_error', which is less sensitive to outliers than the squared
  error;
- 'gamma', which is well suited to model strictly positive outcomes;
- 'poisson', which is well suited to model counts and frequencies;
- 'quantile', which allows for estimating a conditional quantile that can
  later be used to obtain prediction intervals.

For **classification**, 'log_loss' is the only option. For binary
classification it uses the binary log loss, also known as binomial deviance or
binary cross-entropy. For `n_classes >= 3`, it uses the multi-class log loss
function, with multinomial deviance and categorical cross-entropy as
alternative names. The appropriate loss version is selected based on :term:`y`
passed to :term:`fit`.
https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/ensemble.rst

The size of the trees can be controlled through the ``max_leaf_nodes``,
``max_depth``, and ``min_samples_leaf`` parameters.

The number of bins used to bin the data is controlled with the ``max_bins``
parameter. Using fewer bins acts as a form of regularization. It is generally
recommended to use as many bins as possible (255), which is the default.

The ``l2_regularization`` parameter acts as a regularizer for the loss
function, and corresponds to :math:`\lambda` in the following expression (see
equation (2) in [XGBoost]_):

.. math::

    \mathcal{L}(\phi) = \sum_i l(\hat{y}_i, y_i) + \frac12 \sum_k \lambda ||w_k||^2

.. dropdown:: Details on l2 regularization

    It is important to notice that the loss term :math:`l(\hat{y}_i, y_i)`
    describes only half of the actual loss function except for the pinball
    loss and absolute error.

    The index :math:`k` refers to the k-th tree in the ensemble of trees. In
    the case of regression and binary classification, gradient boosting models
    grow one tree per iteration, then :math:`k` runs up to `max_iter`. In the
    case of multiclass classification problems, the maximal value of the index
    :math:`k` is `n_classes` :math:`\times` `max_iter`.

    If :math:`T_k` denotes the number of leaves in the k-th tree, then
    :math:`w_k` is a vector of length :math:`T_k`, which contains the leaf
    values of the form
    `w = -sum_gradient / (sum_hessian + l2_regularization)` (see equation (5)
    in [XGBoost]_).

    The leaf values :math:`w_k` are derived by dividing the sum of the
    gradients of the loss function by the combined sum of hessians. Adding the
    regularization to the denominator penalizes the leaves with small hessians
    (flat regions), resulting in smaller updates. Those :math:`w_k` values
    contribute then to the model's prediction for a given input that ends up
    in the corresponding leaf. The final prediction is the sum of the base
    prediction and the contributions from each tree.

    The result of that sum is then transformed by the inverse link function
    depending on the choice of the loss function (see
    :ref:`gradient_boosting_formulation`).

    Notice that the original paper [XGBoost]_ introduces a term
    :math:`\gamma\sum_k T_k` that penalizes the number of leaves (making it a
    smooth version of `max_leaf_nodes`) not presented here as it is not
    implemented in scikit-learn; whereas :math:`\lambda` penalizes the
    magnitude of the individual tree predictions before being rescaled by the
    learning rate, see :ref:`gradient_boosting_shrinkage`.

Note that **early-stopping is enabled by default if the number of samples is
larger than 10,000**. The early-stopping behaviour is controlled via the
``early_stopping``, ``scoring``, ``validation_fraction``,
``n_iter_no_change``, and ``tol`` parameters. It is possible to early-stop
using an arbitrary :term:`scorer`, or just the training or validation loss.
Note that for technical reasons, using a callable as a scorer is significantly
slower than using the loss. By default, early-stopping is performed if there
are at least 10,000 samples in the training set, using the validation loss.

.. _nan_support_hgbt:

Missing values support
^^^^^^^^^^^^^^^^^^^^^^

:class:`HistGradientBoostingClassifier` and
:class:`HistGradientBoostingRegressor` have built-in support for missing
values (NaNs).

During training, the tree grower learns at each split point whether samples
with missing values should go to the left or right child, based on the
potential gain.

When predicting, samples with missing values are assigned to the left or right
child consequently::

    >>> from sklearn.ensemble import HistGradientBoostingClassifier
    >>> import numpy as np

    >>> X = np.array([0, 1, 2, np.nan]).reshape(-1, 1)
    >>> y = [0, 0, 1, 1]

    >>> gbdt = HistGradientBoostingClassifier(min_samples_leaf=1).fit(X, y)
    >>> gbdt.predict(X)
    array([0, 0, 1, 1])

When the missingness pattern is predictive, the splits can be performed on
whether the feature value is missing or not::

    >>> X = np.array([0, np.nan, 1, 2, np.nan]).reshape(-1, 1)
    >>> y = [0, 1, 0, 0, 1]

    >>> gbdt = HistGradientBoostingClassifier(min_samples_leaf=1,
    ...                                       max_depth=2,
    ...                                       learning_rate=1,
    ...                                       max_iter=1).fit(X, y)
    >>> gbdt.predict(X)
    array([0, 1, 0, 0, 1])

If no missing values were encountered for a given feature during training,
then samples with missing values are mapped to whichever child has the most
samples.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_ensemble_plot_hgbt_regression.py`

.. _sw_hgbdt:

Sample weight support
^^^^^^^^^^^^^^^^^^^^^

:class:`HistGradientBoostingClassifier` and
:class:`HistGradientBoostingRegressor` support sample weights during
:term:`fit`.

The following toy example demonstrates that samples with a sample weight of
zero are ignored:

    >>> X = [[1, 0],
    ...      [1, 0],
    ...      [1, 0],
    ...      [0, 1]]
    >>> y = [0, 0, 1, 0]
    >>> # ignore the first 2 training samples by setting their weight to 0
    >>> sample_weight = [0, 0, 1, 1]
    >>> gb = HistGradientBoostingClassifier(min_samples_leaf=1)
    >>> gb.fit(X, y, sample_weight=sample_weight)
    HistGradientBoostingClassifier(...)
    >>> gb.predict([[1, 0]])
    array([1])
    >>> gb.predict_proba([[1, 0]])[0, 1]
    np.float64(0.999)

As you can see, the `[1, 0]` is comfortably classified as `1` since the first
two samples are ignored due to their sample weights.

Implementation detail: taking sample weights into account amounts to
multiplying the gradients (and the hessians) by the sample weights. Note that
the binning stage (specifically the quantiles computation) does not take the
weights into account.

.. _categorical_support_gbdt:

Categorical Features Support
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

:class:`HistGradientBoostingClassifier` and
:class:`HistGradientBoostingRegressor` have native support for categorical
features: they can consider splits on non-ordered, categorical data.

For datasets with categorical features, using the native categorical support
is often better than relying on one-hot encoding
(:class:`~sklearn.preprocessing.OneHotEncoder`), because one-hot encoding
requires more tree depth to achieve equivalent splits. It is also usually
better to rely on the native categorical support rather than to treat
categorical features as continuous (ordinal), which happens for
ordinal-encoded categorical data, since categories are nominal quantities
where order does not matter.

To enable categorical support, a boolean mask can be passed to the
`categorical_features` parameter, indicating which feature is categorical. In
the following, the first feature will be treated as categorical and the second
feature as numerical::

    >>> gbdt = HistGradientBoostingClassifier(categorical_features=[True, False])

Equivalently, one can pass a list of integers indicating the indices of the
categorical features::

    >>> gbdt = HistGradientBoostingClassifier(categorical_features=[0])

When the input is a DataFrame, it is also possible to pass a list of column
names::

    >>> gbdt = HistGradientBoostingClassifier(categorical_features=["site", "manufacturer"])

Finally, when the input is a DataFrame we can use
`categorical_features="from_dtype"` in which case all columns with a
categorical `dtype` will be treated as categorical features.

The cardinality of each categorical feature must be less than the `max_bins`
parameter. For an example using histogram-based gradient boosting on
categorical features, see
:ref:`sphx_glr_auto_examples_ensemble_plot_gradient_boosting_categorical.py`.

If there are missing values during training, the missing values will be
treated as a proper category.

If there are no missing values during training, then at prediction time,
missing values are mapped to the child node that has the most samples (just
like for continuous features). When predicting, categories that were not seen
during fit time will be treated as missing values.

.. dropdown:: Split finding with categorical features

    The canonical way of considering categorical splits in a tree is to
    consider all of the :math:`2^{K - 1} - 1` partitions, where :math:`K` is
    the number of categories. This can quickly become prohibitive when
    :math:`K` is large. Fortunately, since gradient boosting trees are always
    regression trees (even for classification problems), there exists a faster
    strategy that can yield equivalent splits.
    First, the categories of a feature are sorted according to the variance of
    the target, for each category `k`. Once the categories are sorted, one can
    consider *continuous partitions*, i.e. treat the categories as if they
    were ordered continuous values (see Fisher [Fisher1958]_ for a formal
    proof). As a result, only :math:`K - 1` splits need to be considered
    instead of :math:`2^{K - 1} - 1`. The initial sorting is a
    :math:`\mathcal{O}(K \log(K))` operation, leading to a total complexity of
    :math:`\mathcal{O}(K \log(K) + K)`, instead of :math:`\mathcal{O}(2^K)`.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_ensemble_plot_gradient_boosting_categorical.py`

.. _monotonic_cst_gbdt:

Monotonic Constraints
^^^^^^^^^^^^^^^^^^^^^

Depending on the problem at hand, you may have prior knowledge indicating that
a given feature should in general have a positive (or negative) effect on the
target value. For example, all else being equal, a higher credit score should
increase the probability of getting approved for a loan. Monotonic constraints
allow you to incorporate such prior knowledge into the model.

For a predictor :math:`F` with two features:

- a **monotonic increase constraint** is a constraint of the form:

  .. math::
      x_1 \leq x_1' \implies F(x_1, x_2) \leq F(x_1', x_2)

- a **monotonic decrease constraint** is a constraint of the form:

  .. math::
      x_1 \leq x_1' \implies F(x_1, x_2) \geq F(x_1', x_2)

You can specify a monotonic constraint on each feature using the
`monotonic_cst` parameter.
For each feature, a value of 0 indicates no constraint, while 1 and -1
indicate a monotonic increase and monotonic decrease constraint,
respectively::

    >>> from sklearn.ensemble import HistGradientBoostingRegressor

    ... # monotonic increase, monotonic decrease, and no constraint on the 3 features
    >>> gbdt = HistGradientBoostingRegressor(monotonic_cst=[1, -1, 0])

In a binary classification context, imposing a monotonic increase (decrease)
constraint means that higher values of the feature are supposed to have a
positive (negative) effect on the probability of samples to belong to the
positive class.

Nevertheless, monotonic constraints only marginally constrain feature effects
on the output. For instance, monotonic increase and decrease constraints
cannot be used to enforce the following modelling constraint:

.. math::
    x_1 \leq x_1' \implies F(x_1, x_2) \leq F(x_1', x_2')

Also, monotonic constraints are not supported for multiclass classification.

For a practical implementation of monotonic constraints with the
histogram-based gradient boosting, including how they can improve
generalization when domain knowledge is available, see
:ref:`sphx_glr_auto_examples_ensemble_plot_monotonic_constraints.py`.

.. note::
    Since categories are unordered quantities, it is not possible to enforce
    monotonic constraints on categorical features.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_ensemble_plot_hgbt_regression.py`

.. _interaction_cst_hgbt:

Interaction constraints
^^^^^^^^^^^^^^^^^^^^^^^

A priori, the histogram gradient boosted trees are allowed to use any feature
to split a node into child nodes. This creates so-called interactions between
features, i.e. usage of different features as split along a branch. Sometimes,
one wants to restrict the possible interactions, see [Mayer2022]_. This can be
done by the parameter ``interaction_cst``, where one can specify the indices
of features that are allowed to interact.
For instance, with 3 features in total, ``interaction_cst=[{0}, {1}, {2}]``
forbids all interactions. The constraints ``[{0, 1}, {1, 2}]`` specify two
groups of possibly interacting features. Features 0 and 1 may interact with
each other, as well as features 1 and 2. But note that features 0 and 2 are
forbidden to interact. The following depicts a tree and the possible splits of
the tree:

.. code-block:: none

       1      <- Both constraint groups could be applied from now on
      / \
     1   2   <- Left split still fulfills both constraint groups.
    / \ / \     Right split at feature 2 has only group {1, 2} from now on.

LightGBM uses the same logic for overlapping groups.

Note that features not listed in ``interaction_cst`` are automatically
assigned an interaction group for themselves. With again 3 features, this
means that ``[{0}]`` is equivalent to ``[{0}, {1, 2}]``.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_inspection_plot_partial_dependence.py`

.. rubric:: References

.. [Mayer2022] M. Mayer, S.C. Bourassa, M. Hoesli, and D.F. Scognamiglio.
    2022. :doi:`Machine Learning Applications to Land and Structure Valuation
    <10.3390/jrfm15050193>`. Journal of Risk and Financial Management 15,
    no. 5: 193

Low-level parallelism
^^^^^^^^^^^^^^^^^^^^^

:class:`HistGradientBoostingClassifier` and
:class:`HistGradientBoostingRegressor` use OpenMP for parallelization through
Cython. For more details on how to control the number of threads, please refer
to our :ref:`parallelism` notes.

The following parts are parallelized:

- mapping samples from real values to integer-valued bins (finding the bin
  thresholds is however sequential)
- building histograms is parallelized over features
- finding the best split point at a node is parallelized over features
- during fit, mapping samples into the left and right children is
  parallelized over samples
- gradient and hessians computations are parallelized over samples
- predicting is parallelized over samples

.. _Why_it's_faster:

Why it's faster
^^^^^^^^^^^^^^^

The bottleneck of a gradient boosting procedure is building the decision
trees.
Building a traditional decision tree (as in the other GBDTs
:class:`GradientBoostingClassifier` and :class:`GradientBoostingRegressor`)
requires sorting the samples at each node (for each feature). Sorting is
needed so that the potential gain of a split point can be computed
efficiently. Splitting a single node has thus a complexity of
:math:`\mathcal{O}(n_\text{features} \times n \log(n))` where :math:`n` is the
number of samples at the node.

:class:`HistGradientBoostingClassifier` and
:class:`HistGradientBoostingRegressor`, in contrast, do not require sorting
the feature values and instead use a data-structure called a histogram, where
the samples are implicitly ordered. Building a histogram has a
:math:`\mathcal{O}(n)` complexity, so the node splitting procedure has a
:math:`\mathcal{O}(n_\text{features} \times n)` complexity, much smaller than
the previous one. In addition, instead of considering :math:`n` split points,
we consider only ``max_bins`` split points, which might be much smaller.

In order to build histograms, the input data `X` needs to be binned into
integer-valued bins. This binning procedure does require sorting the feature
values, but it only happens once at the very beginning of the boosting process
(not at each node, like in :class:`GradientBoostingClassifier` and
:class:`GradientBoostingRegressor`).

Finally, many parts of the implementation of
:class:`HistGradientBoostingClassifier` and
:class:`HistGradientBoostingRegressor` are parallelized.

.. rubric:: References

.. [XGBoost] Tianqi Chen, Carlos Guestrin, :arxiv:`"XGBoost: A Scalable Tree
    Boosting System" <1603.02754>`

.. [LightGBM] Ke et. al. `"LightGBM: A Highly Efficient Gradient Boosting
    Decision Tree" `_

.. [Fisher1958] Fisher, W.D. (1958). `"On Grouping for Maximum Homogeneity" `_
    Journal of the American Statistical Association, 53, 789-798.

:class:`GradientBoostingClassifier` and :class:`GradientBoostingRegressor`
----------------------------------------------------------------------------

The usage and the parameters of :class:`GradientBoostingClassifier` and
:class:`GradientBoostingRegressor` are described below. The 2 most important
parameters of these estimators are `n_estimators` and `learning_rate`.

.. dropdown:: Classification

    :class:`GradientBoostingClassifier` supports both binary and multi-class
    classification. The following example shows how to fit a gradient boosting
    classifier with 100 decision stumps as weak learners::

        >>> from sklearn.datasets import make_hastie_10_2
        >>> from sklearn.ensemble import GradientBoostingClassifier

        >>> X, y = make_hastie_10_2(random_state=0)
        >>> X_train, X_test = X[:2000], X[2000:]
        >>> y_train, y_test = y[:2000], y[2000:]

        >>> clf = GradientBoostingClassifier(n_estimators=100, learning_rate=1.0,
        ...                                  max_depth=1, random_state=0).fit(X_train, y_train)
        >>> clf.score(X_test, y_test)
        0.913

    The number of weak learners (i.e. regression trees) is controlled by the
    parameter ``n_estimators``; :ref:`The size of each tree ` can be
    controlled either by setting the tree depth via ``max_depth`` or by
    setting the number of leaf nodes via ``max_leaf_nodes``. The
    ``learning_rate`` is a hyper-parameter in the range (0.0, 1.0] that
    controls overfitting via :ref:`shrinkage `.

    .. note::

        Classification with more than 2 classes requires the induction of
        ``n_classes`` regression trees at each iteration, thus, the total
        number of induced trees equals ``n_classes * n_estimators``. For
        datasets with a large number of classes we strongly recommend using
        :class:`HistGradientBoostingClassifier` as an alternative to
        :class:`GradientBoostingClassifier`.

.. dropdown:: Regression

    :class:`GradientBoostingRegressor` supports a number of :ref:`different
    loss functions ` for regression which can be specified via the argument
    ``loss``; the default loss function for regression is squared error
    (``'squared_error'``). ::

        >>> import numpy as np
        >>> from sklearn.metrics import mean_squared_error
        >>> from sklearn.datasets import make_friedman1
        >>> from sklearn.ensemble import GradientBoostingRegressor

        >>> X, y = make_friedman1(n_samples=1200, random_state=0, noise=1.0)
        >>> X_train, X_test = X[:200], X[200:]
        >>> y_train, y_test = y[:200], y[200:]

        >>> est = GradientBoostingRegressor(
n\_estimators=100, learning\_rate=0.1, max\_depth=1, random\_state=0, ... loss='squared\_error' ... ).fit(X\_train, y\_train) >>> mean\_squared\_error(y\_test, est.predict(X\_test)) 5.00 The figure below shows the results of applying :class:`GradientBoostingRegressor` with least squares loss and 500 base learners to the diabetes dataset (:func:`sklearn.datasets.load\_diabetes`). The plot shows the train and test error at each iteration. The train error at each iteration is stored in the `train\_score\_` attribute of the gradient boosting model. The test error at each iteration can be obtained via the :meth:`~GradientBoostingRegressor.staged\_predict` method which returns a generator that yields the predictions at each stage. Plots like these can be used to determine the optimal number of trees (i.e. ``n\_estimators``) by early stopping. .. figure:: ../auto\_examples/ensemble/images/sphx\_glr\_plot\_gradient\_boosting\_regression\_001.png :target: ../auto\_examples/ensemble/plot\_gradient\_boosting\_regression.html :align: center :scale: 75 .. rubric:: Examples \* :ref:`sphx\_glr\_auto\_examples\_ensemble\_plot\_gradient\_boosting\_regression.py` \* :ref:`sphx\_glr\_auto\_examples\_ensemble\_plot\_gradient\_boosting\_oob.py` .. \_gradient\_boosting\_warm\_start: Fitting additional weak-learners ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Both :class:`GradientBoostingRegressor` and :class:`GradientBoostingClassifier` support ``warm\_start=True`` which allows you to add more estimators to an already fitted model. :: >>> import numpy as np >>> from sklearn.metrics import mean\_squared\_error >>> from sklearn.datasets import make\_friedman1 >>> from sklearn.ensemble import GradientBoostingRegressor >>> X, y = make\_friedman1(n\_samples=1200, random\_state=0, noise=1.0) >>> X\_train, X\_test = X[:200], X[200:] >>> y\_train, y\_test = y[:200], y[200:] >>> est = GradientBoostingRegressor( ... n\_estimators=100, learning\_rate=0.1, max\_depth=1, random\_state=0, ... 
loss='squared\_error' ... ) >>> est = est.fit(X\_train, y\_train) # fit with 100 trees >>> mean\_squared\_error(y\_test, est.predict(X\_test)) 5.00 >>> \_ = est.set\_params(n\_estimators=200, warm\_start=True) # set warm\_start and increase num of trees >>> \_ = est.fit(X\_train, y\_train) # fit additional 100 trees to est >>> mean\_squared\_error(y\_test, est.predict(X\_test)) 3.84 .. \_gradient\_boosting\_tree\_size: Controlling the tree size ^^^^^^^^^^^^^^^^^^^^^^^^^^ The size of the regression tree base learners defines the level of variable interactions that can be captured by the gradient boosting model. In general, a tree of depth ``h`` can capture interactions of order ``h`` . There are two ways in which the size of the individual regression trees can be controlled. If you specify ``max\_depth=h`` then complete binary trees of depth ``h`` will be grown. Such trees will have (at most) ``2\*\*h`` leaf nodes and ``2\*\*h - 1`` split nodes. Alternatively, you can control the tree size by specifying the number of leaf nodes via the parameter ``max\_leaf\_nodes``. In this case, trees will be grown using best-first search where nodes
https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/ensemble.rst
with the highest improvement in impurity will be expanded first. A tree with
``max_leaf_nodes=k`` has ``k - 1`` split nodes and thus can model
interactions of up to order ``max_leaf_nodes - 1``.

We found that ``max_leaf_nodes=k`` gives comparable results to
``max_depth=k-1`` but is significantly faster to train at the expense of a
slightly higher training error. The parameter ``max_leaf_nodes`` corresponds
to the variable ``J`` in the chapter on gradient boosting in [Friedman2001]_
and is related to the parameter ``interaction.depth`` in R's gbm package
where ``max_leaf_nodes == interaction.depth + 1``.

.. _gradient_boosting_formulation:

Mathematical formulation
^^^^^^^^^^^^^^^^^^^^^^^^

We first present GBRT for regression, and then detail the classification
case.

.. dropdown:: Regression

    GBRT regressors are additive models whose prediction :math:`\hat{y}_i`
    for a given input :math:`x_i` is of the following form:

    .. math::

        \hat{y}_i = F_M(x_i) = \sum_{m=1}^{M} h_m(x_i)

    where the :math:`h_m` are estimators called *weak learners* in the
    context of boosting. Gradient Tree Boosting uses
    :ref:`decision tree regressors <tree>` of fixed size as weak learners.
    The constant M corresponds to the `n_estimators` parameter.

    Similar to other boosting algorithms, a GBRT is built in a greedy
    fashion:

    .. math::

        F_m(x) = F_{m-1}(x) + h_m(x),

    where the newly added tree :math:`h_m` is fitted in order to minimize a
    sum of losses :math:`L_m`, given the previous ensemble :math:`F_{m-1}`:

    .. math::

        h_m = \arg\min_{h} L_m = \arg\min_{h} \sum_{i=1}^{n}
        l(y_i, F_{m-1}(x_i) + h(x_i)),

    where :math:`l(y_i, F(x_i))` is defined by the `loss` parameter, detailed
    in the next section.

    By default, the initial model :math:`F_{0}` is chosen as the constant
    that minimizes the loss: for a least-squares loss, this is the empirical
    mean of the target values. The initial model can also be specified via
    the ``init`` argument.

    Using a first-order Taylor approximation, the value of :math:`l` can be
    approximated as follows:

    .. math::

        l(y_i, F_{m-1}(x_i) + h_m(x_i)) \approx
        l(y_i, F_{m-1}(x_i))
        + h_m(x_i)
        \left[ \frac{\partial l(y_i, F(x_i))}{\partial F(x_i)} \right]_{F=F_{m - 1}}.

    .. note::

        Briefly, a first-order Taylor approximation says that
        :math:`l(z) \approx l(a) + (z - a) \frac{\partial l}{\partial z}(a)`.
        Here, :math:`z` corresponds to :math:`F_{m - 1}(x_i) + h_m(x_i)`, and
        :math:`a` corresponds to :math:`F_{m-1}(x_i)`.

    The quantity :math:`\left[ \frac{\partial l(y_i, F(x_i))}{\partial
    F(x_i)} \right]_{F=F_{m - 1}}` is the derivative of the loss with respect
    to its second parameter, evaluated at :math:`F_{m-1}(x)`. It is easy to
    compute for any given :math:`F_{m - 1}(x_i)` in a closed form since the
    loss is differentiable. We will denote it by :math:`g_i`.

    Removing the constant terms, we have:

    .. math::

        h_m \approx \arg\min_{h} \sum_{i=1}^{n} h(x_i) g_i

    This is minimized if :math:`h(x_i)` is fitted to predict a value that is
    proportional to the negative gradient :math:`-g_i`. Therefore, at each
    iteration, **the estimator** :math:`h_m` **is fitted to predict the
    negative gradients of the samples**. The gradients are updated at each
    iteration. This can be considered as some kind of gradient descent in a
    functional space.

    .. note::

        For some losses, e.g. ``'absolute_error'`` where the gradients are
        :math:`\pm 1`, the values predicted by a fitted :math:`h_m` are not
        accurate enough: the tree can only output integer values. As a
        result, the leaves values of the tree :math:`h_m` are modified once
        the tree is fitted, such that the leaves values minimize the loss
        :math:`L_m`. The update is loss-dependent: for the absolute error
        loss, the value of a leaf is updated to the median of the samples in
        that leaf.
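The boosting recursion described above can be illustrated outside of
scikit-learn. Below is a didactic sketch for the squared error loss, where
the negative gradient :math:`-g_i` is simply the residual; it is not
scikit-learn's actual implementation, and the dataset and hyper-parameters
are illustrative assumptions:

```python
# Didactic sketch of gradient boosting with squared error loss:
# the negative gradient of 0.5 * (y - F(x))**2 w.r.t. F(x) is the
# residual y - F(x), so each new tree h_m is fitted to the residuals.
# This is NOT scikit-learn's implementation, only an illustration.
import numpy as np
from sklearn.datasets import make_friedman1
from sklearn.tree import DecisionTreeRegressor

X, y = make_friedman1(n_samples=200, random_state=0, noise=1.0)

F = np.full(y.shape, y.mean())  # F_0: the loss-minimizing constant
for m in range(100):
    residuals = y - F  # negative gradient -g_i for squared error
    h = DecisionTreeRegressor(max_depth=1, random_state=0).fit(X, residuals)
    F += h.predict(X)  # F_m = F_{m-1} + h_m

print(np.mean((y - F) ** 2))  # training MSE shrinks as trees are added
```

Since each stump is a least-squares fit of the current residuals, adding it
can only decrease the training error, mirroring the functional gradient
descent view above.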
.. dropdown:: Classification

    Gradient boosting for classification is very similar to the regression
    case. However, the sum of the trees :math:`F_M(x_i) = \sum_m h_m(x_i)` is
    not homogeneous to a prediction: it cannot be a class, since the trees
    predict continuous values.

    The mapping from the value :math:`F_M(x_i)` to a class or a probability
    is loss-dependent. For the log-loss, the probability that :math:`x_i`
    belongs to the positive class is modeled as
    :math:`p(y_i = 1 | x_i) = \sigma(F_M(x_i))` where :math:`\sigma` is the
    sigmoid or expit function.

    For multiclass classification, K trees (for K classes) are built at each
    of the :math:`M` iterations. The probability that :math:`x_i` belongs to
    class k is modeled as a softmax of the :math:`F_{M,k}(x_i)` values.

    Note that even for a classification task, the :math:`h_m` sub-estimator
    is still a regressor, not a classifier. This is because the
    sub-estimators are trained to predict (negative) *gradients*, which are
    always continuous quantities.

.. _gradient_boosting_loss:

Loss Functions
^^^^^^^^^^^^^^

The following loss functions are supported and can be specified using the
parameter ``loss``:

.. dropdown:: Regression

    * Squared error (``'squared_error'``): The natural choice for regression
      due to its superior computational properties. The initial model is
      given by the mean of the target values.
    * Absolute error (``'absolute_error'``): A robust loss function for
      regression. The initial model is given by the median of the target
      values.
    * Huber (``'huber'``): Another robust loss function that combines least
      squares and least absolute deviation; use ``alpha`` to control the
      sensitivity with regards to outliers (see [Friedman2001]_ for more
      details).
    * Quantile (``'quantile'``): A loss function for quantile regression.
      Use ``0 < alpha < 1`` to specify the quantile. This loss function can
      be used to create prediction intervals (see
      :ref:`sphx_glr_auto_examples_ensemble_plot_gradient_boosting_quantile.py`).

.. dropdown:: Classification

    * Binary log-loss (``'log-loss'``): The binomial negative log-likelihood
      loss function for binary classification. It provides probability
      estimates. The initial model is given by the log odds-ratio.
    * Multi-class log-loss (``'log-loss'``): The multinomial negative
      log-likelihood loss function for multi-class classification with
      ``n_classes`` mutually exclusive classes. It provides probability
      estimates. The initial model is given by the prior probability of each
      class. At each iteration ``n_classes`` regression trees have to be
      constructed which makes GBRT rather inefficient for data sets with a
      large number of classes.
    * Exponential loss (``'exponential'``): The same loss function as
      :class:`AdaBoostClassifier`. Less robust to mislabeled examples than
      ``'log-loss'``; can only be used for binary classification.

.. _gradient_boosting_shrinkage:

Shrinkage via learning rate
^^^^^^^^^^^^^^^^^^^^^^^^^^^

[Friedman2001]_ proposed a simple regularization strategy that scales the
contribution of each weak learner by a constant factor :math:`\nu`:

.. math::

    F_m(x) = F_{m-1}(x) + \nu h_m(x)

The parameter :math:`\nu` is also called the **learning rate** because it
scales the step length of the gradient descent procedure; it can be set via
the ``learning_rate`` parameter.

The parameter ``learning_rate`` strongly interacts with the parameter
``n_estimators``, the number of weak learners to fit. Smaller values of
``learning_rate`` require larger numbers of weak learners to maintain a
constant training error. Empirical evidence suggests that small values of
``learning_rate`` favor better test error. [HTF]_ recommend to set the
learning rate to a small constant (e.g. ``learning_rate <= 0.1``) and choose
``n_estimators`` large enough that early stopping applies; see
:ref:`sphx_glr_auto_examples_ensemble_plot_gradient_boosting_early_stopping.py`.
For a more detailed discussion of the interaction between ``learning_rate``
and ``n_estimators``, see [R2007]_.

Subsampling
^^^^^^^^^^^

[Friedman2002]_ proposed stochastic gradient boosting, which combines gradient
boosting with bootstrap averaging (bagging). At each iteration the base
classifier is trained on a fraction ``subsample`` of the available training
data. The subsample is drawn without replacement. A typical value of
``subsample`` is 0.5.

The figure below illustrates the effect of shrinkage and subsampling on the
goodness-of-fit of the model. We can clearly see that shrinkage outperforms
no-shrinkage. Subsampling with shrinkage can further increase the accuracy of
the model. Subsampling without shrinkage, on the other hand, does poorly.

.. figure:: ../auto_examples/ensemble/images/sphx_glr_plot_gradient_boosting_regularization_001.png
   :target: ../auto_examples/ensemble/plot_gradient_boosting_regularization.html
   :align: center
   :scale: 75

Another strategy to reduce the variance is by subsampling the features
analogous to the random splits in :class:`RandomForestClassifier`. The number
of subsampled features can be controlled via the ``max_features`` parameter.

.. note:: Using a small ``max_features`` value can significantly decrease
   the runtime.

Stochastic gradient boosting allows to compute out-of-bag estimates of the
test deviance by computing the improvement in deviance on the examples that
are not included in the bootstrap sample (i.e. the out-of-bag examples). The
improvements are stored in the attribute `oob_improvement_`.
``oob_improvement_[i]`` holds the improvement in terms of the loss on the OOB
samples if you add the i-th stage to the current predictions.
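A hedged sketch of reading these OOB improvements; the dataset,
hyper-parameters, and the cumulative-sum cut-off rule below are illustrative
assumptions, not a prescribed recipe:

```python
# Sketch: with subsample < 1, oob_improvement_[i] holds the OOB loss
# improvement contributed by stage i; the argmax of its cumulative sum
# suggests a candidate number of boosting iterations.
import numpy as np
from sklearn.datasets import make_friedman1
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_friedman1(n_samples=400, random_state=0, noise=1.0)
est = GradientBoostingRegressor(
    n_estimators=100, subsample=0.5, random_state=0
).fit(X, y)

cum_oob = np.cumsum(est.oob_improvement_)
n_best = int(np.argmax(cum_oob)) + 1  # candidate number of iterations
print(n_best)
```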
Out-of-bag estimates can be used for model selection, for example to
determine the optimal number of iterations. OOB estimates are usually very
pessimistic, thus we recommend to use cross-validation instead and only use
OOB if cross-validation is too time consuming.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_ensemble_plot_gradient_boosting_regularization.py`
* :ref:`sphx_glr_auto_examples_ensemble_plot_gradient_boosting_oob.py`
* :ref:`sphx_glr_auto_examples_ensemble_plot_ensemble_oob.py`

Interpretation with feature importance
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Individual decision trees can be interpreted easily by simply visualizing the
tree structure. Gradient boosting models, however, comprise hundreds of
regression trees, thus they cannot be easily interpreted by visual inspection
of the individual trees. Fortunately, a number of techniques have been
proposed to summarize and interpret gradient boosting models.

Often features do not contribute equally to predict the target response; in
many situations the majority of the features are in fact irrelevant. When
interpreting a model, the first question usually is: what are those important
features and how do they contribute in predicting the target response?

Individual decision trees intrinsically perform feature selection by
selecting appropriate split points. This information can be used to measure
the importance of each feature; the basic idea is: the more often a feature
is used in the split points of a tree, the more important that feature is.
This notion of importance can be extended to decision tree ensembles by
simply averaging the impurity-based feature importance of each tree (see
:ref:`random_forest_feature_importance` for more details).
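These averaged importances can also drive automatic feature selection. A
minimal sketch using :class:`~sklearn.feature_selection.SelectFromModel` (one
option among several; the dataset and the ``"mean"`` threshold are
assumptions for illustration):

```python
# Illustrative sketch: feeding impurity-based importances into
# SelectFromModel to keep the features above the mean importance.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

X, y = make_classification(n_samples=300, n_features=20, n_informative=3,
                           random_state=0)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

selector = SelectFromModel(clf, threshold="mean", prefit=True)
X_reduced = selector.transform(X)  # keeps only the more important columns
print(X_reduced.shape)
```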
The feature importance scores of a fit gradient boosting model can be
accessed via the ``feature_importances_`` property::

    >>> from sklearn.datasets import make_hastie_10_2
    >>> from sklearn.ensemble import GradientBoostingClassifier

    >>> X, y = make_hastie_10_2(random_state=0)
    >>> clf = GradientBoostingClassifier(n_estimators=100, learning_rate=1.0,
    ...     max_depth=1, random_state=0).fit(X, y)
    >>> clf.feature_importances_
    array([0.107, 0.105, 0.113, 0.0987, 0.0947, 0.107, 0.0916, 0.0972, 0.0958, 0.0906])

Note that this computation of feature importance is based on entropy, and it
is distinct from :func:`sklearn.inspection.permutation_importance` which is
based on permutation of the features.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_ensemble_plot_gradient_boosting_regression.py`

.. rubric:: References

.. [Friedman2001] Friedman, J.H. (2001). :doi:`Greedy function approximation: A gradient
   boosting machine <10.1214/aos/1013203451>`. Annals of Statistics, 29, 1189-1232.

.. [Friedman2002] Friedman, J.H. (2002). `Stochastic gradient boosting. `_
   Computational Statistics & Data Analysis, 38, 367-378.

.. [R2007] G. Ridgeway (2006). `Generalized Boosted Models: A guide to
   the gbm package `_

.. _forest:

Random forests and other randomized tree ensembles
===================================================

The :mod:`sklearn.ensemble` module includes two averaging algorithms based on
randomized :ref:`decision trees <tree>`: the RandomForest algorithm and the
Extra-Trees method. Both algorithms are perturb-and-combine techniques
[B1998]_ specifically designed for trees. This means a diverse set of
classifiers is created by introducing randomness in the classifier
construction. The prediction of the ensemble is given as the averaged
prediction of the individual classifiers.

As other classifiers, forest classifiers have to be fitted with two arrays: a
sparse or dense array X of shape ``(n_samples, n_features)`` holding the
training samples, and an array Y of shape ``(n_samples,)`` holding the target
values (class labels) for the training samples::

    >>> from sklearn.ensemble import RandomForestClassifier
    >>> X = [[0, 0], [1, 1]]
    >>> Y = [0, 1]
    >>> clf = RandomForestClassifier(n_estimators=10)
    >>> clf = clf.fit(X, Y)

Like :ref:`decision trees <tree>`, forests of trees also extend to
:ref:`multi-output problems <tree_multioutput>` (if Y is an array of shape
``(n_samples, n_outputs)``).

Random Forests
--------------

In random forests (see :class:`RandomForestClassifier` and
:class:`RandomForestRegressor` classes), each tree in the ensemble is built
from a sample drawn with replacement (i.e., a bootstrap sample) from the
training set.

During the construction of each tree in the forest, a random subset of the
features is considered. The size of this subset is controlled by the
`max_features` parameter; it may include either all input features or a
random subset of them (see the :ref:`parameter tuning guidelines
<random_forest_parameters>` for more details).

The purpose of these two sources of randomness (bootstrapping the samples and
randomly selecting features at each split) is to decrease the variance of the
forest estimator. Indeed, individual decision trees typically exhibit high
variance and tend to overfit. The injected randomness in forests yields
decision trees with somewhat decoupled prediction errors. By taking an
average of those predictions, some errors can cancel out. Random forests
achieve a reduced variance by combining diverse trees, sometimes at the cost
of a slight increase in bias. In practice the variance reduction is often
significant, hence yielding an overall better model.

When growing each tree in the forest, the "best" split (i.e. equivalent to
passing `splitter="best"` to the underlying decision trees) is chosen
according to the impurity criterion. See the :ref:`CART mathematical
formulation <tree_mathematical_formulation>` for more details.

In contrast to the original publication [B2001]_, the scikit-learn
implementation combines classifiers by averaging their probabilistic
prediction, instead of letting each classifier vote for a single class.

A competitive alternative to random forests are
:ref:`histogram_based_gradient_boosting` (HGBT) models:

- Building trees: Random forests typically rely on deep trees (that overfit
  individually) which uses much computational resources, as they require
  several splittings and evaluations of candidate splits. Boosting models
  build shallow trees (that underfit individually) which are faster to fit
  and predict.

- Sequential boosting: In HGBT, the decision trees are built sequentially,
  where each tree is trained to correct the errors made by the previous ones.
  This allows them to iteratively improve the model's performance using
  relatively few trees. In contrast, random forests use a majority vote to
  predict the outcome, which can require a larger number of trees to achieve
  the same level of accuracy.

- Efficient binning: HGBT uses an efficient binning algorithm that can handle
  large datasets with a high number of features. The binning algorithm
  can pre-process the data to speed up the subsequent tree construction (see
  "Why it's faster" under :ref:`histogram_based_gradient_boosting`). In
  contrast, the scikit-learn implementation of random forests does not use
  binning and relies on exact splitting, which can be computationally
  expensive.

Overall, the computational cost of HGBT versus RF depends on the specific
characteristics of the dataset and the modeling task. It's a good idea to try
both models and compare their performance and computational efficiency on
your specific problem to determine which model is the best fit.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_ensemble_plot_forest_hist_grad_boosting_comparison.py`

Extremely Randomized Trees
--------------------------

In extremely randomized trees (see :class:`ExtraTreesClassifier` and
:class:`ExtraTreesRegressor` classes), randomness goes one step further in
the way splits are computed. As in random forests, a random subset of
candidate features is used, but instead of looking for the most
discriminative thresholds, thresholds are drawn at random for each candidate
feature and the best of these randomly-generated thresholds is picked as the
splitting rule. This usually allows reducing the variance of the model a bit
more, at the expense of a slightly greater increase in bias::

    >>> from sklearn.model_selection import cross_val_score
    >>> from sklearn.datasets import make_blobs
    >>> from sklearn.ensemble import RandomForestClassifier
    >>> from sklearn.ensemble import ExtraTreesClassifier
    >>> from sklearn.tree import DecisionTreeClassifier

    >>> X, y = make_blobs(n_samples=10000, n_features=10, centers=100,
    ...     random_state=0)

    >>> clf = DecisionTreeClassifier(max_depth=None, min_samples_split=2,
    ...     random_state=0)
    >>> scores = cross_val_score(clf, X, y, cv=5)
    >>> scores.mean()
    np.float64(0.98)

    >>> clf = RandomForestClassifier(n_estimators=10, max_depth=None,
    ...     min_samples_split=2, random_state=0)
    >>> scores = cross_val_score(clf, X, y, cv=5)
    >>> scores.mean()
    np.float64(0.999)

    >>> clf = ExtraTreesClassifier(n_estimators=10, max_depth=None,
    ...     min_samples_split=2, random_state=0)
    >>> scores = cross_val_score(clf, X, y, cv=5)
    >>> scores.mean() > 0.999
    np.True_

.. figure:: ../auto_examples/ensemble/images/sphx_glr_plot_forest_iris_001.png
   :target: ../auto_examples/ensemble/plot_forest_iris.html
   :align: center
   :scale: 75%

.. _random_forest_parameters:

Parameters
----------

The main parameters to adjust when using these methods are ``n_estimators``
and ``max_features``. The former is the number of trees in the forest. The
larger the better, but also the longer it will take to compute. In addition,
note that results will stop getting significantly better beyond a critical
number of trees. The latter is the size of the random subsets of features to
consider when splitting a node. The lower the greater the reduction of
variance, but also the greater the increase in bias. Empirical good default
values are ``max_features=1.0`` or equivalently ``max_features=None`` (always
considering all features instead of a random subset) for regression problems,
and ``max_features="sqrt"`` (using a random subset of size
``sqrt(n_features)``) for classification tasks (where ``n_features`` is the
number of features in the data). The default value of ``max_features=1.0`` is
equivalent to bagged trees and more randomness can be achieved by setting
smaller values (e.g. 0.3 is a typical default in the literature).
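The guidance on ``max_features`` above can be explored empirically. A small
sketch comparing a few settings by cross-validation; the synthetic dataset
and candidate values are illustrative assumptions, not recommendations beyond
those in the text:

```python
# Illustrative comparison of max_features settings for a random forest.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=25, random_state=0)
results = {}
for mf in ("sqrt", 0.3, 1.0):  # max_features=1.0 corresponds to bagged trees
    clf = RandomForestClassifier(n_estimators=50, max_features=mf,
                                 random_state=0)
    results[mf] = cross_val_score(clf, X, y, cv=3).mean()
print(results)
```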
Good results are often achieved when setting ``max_depth=None`` in
combination with ``min_samples_split=2`` (i.e., when fully developing the
trees). Bear in mind though that these values are usually not optimal, and
might result in models that consume a lot of RAM. The best parameter values
should always be cross-validated.

In addition, note that in random forests, bootstrap samples are used by
default (``bootstrap=True``) while the default strategy for extra-trees is to
use the whole dataset (``bootstrap=False``). When using bootstrap sampling
the generalization error can be estimated on the left out or out-of-bag
samples. This can be enabled by setting ``oob_score=True``.

.. note::

    The size of the model with the default parameters is
    :math:`O( M * N * log (N) )`, where :math:`M` is the number of trees and
    :math:`N` is the number of samples. In order to reduce the size of the
    model, you can change these parameters: ``min_samples_split``,
    ``max_leaf_nodes``, ``max_depth`` and ``min_samples_leaf``.

Parallelization
---------------

Finally, this module also features the parallel construction of the trees and
the parallel computation of the predictions through the ``n_jobs`` parameter.
If ``n_jobs=k`` then computations are partitioned into ``k`` jobs, and run on
``k`` cores of the machine. If ``n_jobs=-1`` then all cores available on the
machine are used. Note that because of inter-process communication overhead,
the speedup might not be linear (i.e., using ``k`` jobs will unfortunately
not be ``k`` times as fast). Significant speedup can still be achieved though
when building a large number of trees, or when building a single tree
requires a fair amount of time (e.g., on large datasets).

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_ensemble_plot_forest_iris.py`
* :ref:`sphx_glr_auto_examples_miscellaneous_plot_multioutput_face_completion.py`

.. rubric:: References

.. [B2001] L. Breiman, "Random Forests", Machine Learning, 45(1), 5-32, 2001.

.. [B1998] L. Breiman, "Arcing Classifiers", Annals of Statistics 1998.

* P. Geurts, D. Ernst., and L. Wehenkel, "Extremely randomized trees",
  Machine Learning, 63(1), 3-42, 2006.

.. _random_forest_feature_importance:

Feature importance evaluation
-----------------------------

The relative rank (i.e. depth) of a feature used as a decision node in a tree
can be used to assess the relative importance of that feature with respect to
the predictability of the target variable. Features used at the top of the
tree contribute to the final prediction decision of a larger fraction of the
input samples. The **expected fraction of the samples** they contribute to
can thus be used as an estimate of the **relative importance of the
features**. In scikit-learn, the fraction of samples a feature contributes to
is combined with the decrease in impurity from splitting them to create a
normalized estimate of the predictive power of that feature.

By **averaging** the estimates of predictive ability over several randomized
trees one can **reduce the variance** of such an estimate and use it for
feature selection. This is known as the mean decrease in impurity, or MDI.
Refer to [L2014]_ for more information on MDI and feature importance
evaluation with Random Forests.

.. warning::

    The impurity-based feature importances computed on tree-based models
    suffer from two flaws that can lead to misleading conclusions. First they
    are computed on statistics derived from the training dataset and
    therefore **do not necessarily inform us on which features are most
    important to make good predictions on held-out dataset**. Secondly,
    **they favor high cardinality features**, that is features with many
    unique values. :ref:`permutation_importance` is an alternative to
    impurity-based feature importance that does not suffer from these flaws.
    These two methods of obtaining feature importance are explored in:
    :ref:`sphx_glr_auto_examples_inspection_plot_permutation_importance.py`.

In practice those estimates are stored as an attribute named
``feature_importances_`` on the fitted model. This is an array with shape
``(n_features,)`` whose values are positive and sum to 1.0. The higher the
value, the more important is the contribution of the matching feature to the
prediction function.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_ensemble_plot_forest_importances.py`

.. rubric:: References

.. [L2014] G. Louppe, :arxiv:`"Understanding Random Forests: From Theory to
   Practice" <1407.7502>`, PhD Thesis, U. of Liege, 2014.

.. _random_trees_embedding:

Totally Random Trees Embedding
------------------------------

:class:`RandomTreesEmbedding` implements an unsupervised transformation of
the data. Using a forest of completely
.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_ensemble_plot_forest_importances.py`

.. rubric:: References

.. [L2014] G. Louppe, :arxiv:`"Understanding Random Forests: From Theory to
   Practice" <1407.7502>`, PhD Thesis, U. of Liege, 2014.

.. _random_trees_embedding:

Totally Random Trees Embedding
------------------------------

:class:`RandomTreesEmbedding` implements an unsupervised transformation of the
data. Using a forest of completely random trees, :class:`RandomTreesEmbedding`
encodes the data by the indices of the leaves a data point ends up in. This
index is then encoded in a one-of-K manner, leading to a high dimensional,
sparse binary coding. This coding can be computed very efficiently and can
then be used as a basis for other learning tasks. The size and sparsity of the
code can be influenced by choosing the number of trees and the maximum depth
per tree. For each tree in the ensemble, the coding contains one entry of one.
The size of the coding is at most ``n_estimators * 2 ** max_depth``, the
maximum number of leaves in the forest.

As neighboring data points are more likely to lie within the same leaf of a
tree, the transformation performs an implicit, non-parametric density
estimation.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_ensemble_plot_random_forest_embedding.py`

* :ref:`sphx_glr_auto_examples_manifold_plot_lle_digits.py` compares
  non-linear dimensionality reduction techniques on handwritten digits.

* :ref:`sphx_glr_auto_examples_ensemble_plot_feature_transformation.py`
  compares supervised and unsupervised tree based feature transformations.

.. seealso::

   :ref:`manifold` techniques can also be useful to derive non-linear
   representations of feature space; these approaches focus on dimensionality
   reduction as well.

.. _tree_ensemble_warm_start:

Fitting additional trees
------------------------

RandomForest, Extra-Trees and :class:`RandomTreesEmbedding` estimators all
support ``warm_start=True`` which allows you to add more trees to an already
fitted model. ::

   >>> from sklearn.datasets import make_classification
   >>> from sklearn.ensemble import RandomForestClassifier

   >>> X, y = make_classification(n_samples=100, random_state=1)
   >>> clf = RandomForestClassifier(n_estimators=10)
   >>> clf = clf.fit(X, y)  # fit with 10 trees
   >>> len(clf.estimators_)
   10
   >>> # set warm_start and increase num of estimators
   >>> _ = clf.set_params(n_estimators=20, warm_start=True)
   >>> _ = clf.fit(X, y)  # fit additional 10 trees
   >>> len(clf.estimators_)
   20

When ``random_state`` is also set, the internal random state is also preserved
between ``fit`` calls. This means that training a model once with ``n``
estimators is the same as building the model iteratively via multiple ``fit``
calls, where the final number of estimators is equal to ``n``. ::

   >>> clf = RandomForestClassifier(n_estimators=20)  # set `n_estimators` to 10 + 10
   >>> _ = clf.fit(X, y)  # fit `estimators_` will be the same as `clf` above

Note that this differs from the usual behavior of :term:`random_state` in that
it does *not* result in the same result across different calls.

.. _bagging:

Bagging meta-estimator
======================

In ensemble algorithms, bagging methods form a class of algorithms which build
several instances of a black-box estimator on random subsets of the original
training set and then aggregate their individual predictions to form a final
prediction. These methods are used as a way to reduce the variance of a base
estimator (e.g., a decision tree), by introducing randomization into its
construction procedure and then making an ensemble out of it.
In many cases, bagging methods constitute a very simple way to improve with
respect to a single model, without making it necessary to adapt the underlying
base algorithm. As they provide a way to reduce overfitting, bagging methods
work best with strong and complex models (e.g., fully developed decision
trees), in contrast with boosting methods which usually work best with weak
models (e.g., shallow decision trees).
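A minimal sketch of the aggregation step with :class:`BaggingRegressor` (toy data and parameter values are illustrative): the ensemble prediction is the average of the individual estimators' predictions, each estimator seeing only its own bootstrap sample and feature subset.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

# Illustrative toy data
X, y = make_regression(n_samples=100, n_features=4, random_state=0)

# 10 trees, each fitted on a bootstrap sample of the training set
bag = BaggingRegressor(DecisionTreeRegressor(), n_estimators=10,
                       random_state=0).fit(X, y)

# Reproduce the ensemble prediction by averaging the individual trees,
# restricting each tree to the feature subset it was trained on
manual = np.mean(
    [est.predict(X[:, feats])
     for est, feats in zip(bag.estimators_, bag.estimators_features_)],
    axis=0,
)
assert np.allclose(bag.predict(X), manual)
```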
Bagging methods come in many flavours but mostly differ from each other by the
way they draw random subsets of the training set:

* When random subsets of the dataset are drawn as random subsets of the
  samples, then this algorithm is known as Pasting [B1999]_.

* When samples are drawn with replacement, then the method is known as
  Bagging [B1996]_.

* When random subsets of the dataset are drawn as random subsets of the
  features, then the method is known as Random Subspaces [H1998]_.

* Finally, when base estimators are built on subsets of both samples and
  features, then the method is known as Random Patches [LG2012]_.

In scikit-learn, bagging methods are offered as a unified
:class:`BaggingClassifier` meta-estimator (resp. :class:`BaggingRegressor`),
taking as input a user-specified estimator along with parameters specifying
the strategy to draw random subsets. In particular, ``max_samples`` and
``max_features`` control the size of the subsets (in terms of samples and
features), while ``bootstrap`` and ``bootstrap_features`` control whether
samples and features are drawn with or without replacement. When using a
subset of the available samples the generalization accuracy can be estimated
with the out-of-bag samples by setting ``oob_score=True``. As an example, the
snippet below illustrates how to instantiate a bagging ensemble of
:class:`~sklearn.neighbors.KNeighborsClassifier` estimators, each built on
random subsets of 50% of the samples and 50% of the features. ::

   >>> from sklearn.ensemble import BaggingClassifier
   >>> from sklearn.neighbors import KNeighborsClassifier
   >>> bagging = BaggingClassifier(KNeighborsClassifier(),
   ...                             max_samples=0.5, max_features=0.5)

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_ensemble_plot_bias_variance.py`

.. rubric:: References

.. [B1999] L. Breiman, "Pasting small votes for classification in large
   databases and on-line", Machine Learning, 36(1), 85-103, 1999.

.. [B1996] L. Breiman, "Bagging predictors", Machine Learning, 24(2),
   123-140, 1996.

.. [H1998] T. Ho, "The random subspace method for constructing decision
   forests", Pattern Analysis and Machine Intelligence, 20(8), 832-844, 1998.

.. [LG2012] G. Louppe and P. Geurts, "Ensembles on Random Patches", Machine
   Learning and Knowledge Discovery in Databases, 346-361, 2012.

.. _voting_classifier:

Voting Classifier
=================

The idea behind the :class:`VotingClassifier` is to combine conceptually
different machine learning classifiers and use a majority vote or the average
predicted probabilities (soft vote) to predict the class labels. Such a
classifier can be useful for a set of equally well performing models in order
to balance out their individual weaknesses.

Majority Class Labels (Majority/Hard Voting)
--------------------------------------------

In majority voting, the predicted class label for a particular sample is the
class label that represents the majority (mode) of the class labels predicted
by each individual classifier.

E.g., if the prediction for a given sample is

- classifier 1 -> class 1
- classifier 2 -> class 1
- classifier 3 -> class 2

the VotingClassifier (with ``voting='hard'``) would classify the sample as
"class 1" based on the majority class label.

In the case of a tie, the :class:`VotingClassifier` will select the class
based on the ascending sort order. E.g., in the following scenario

- classifier 1 -> class 2
- classifier 2 -> class 1

the class label 1 will be assigned to the sample.
Usage
-----

The following example shows how to fit the majority rule classifier::

   >>> from sklearn import datasets
   >>> from sklearn.model_selection import cross_val_score
   >>> from sklearn.linear_model import LogisticRegression
   >>> from sklearn.naive_bayes import GaussianNB
   >>> from sklearn.ensemble import RandomForestClassifier
   >>> from sklearn.ensemble import VotingClassifier
   >>> iris = datasets.load_iris()
   >>> X, y = iris.data[:, 1:3], iris.target
   >>> clf1 = LogisticRegression(random_state=1)
   >>> clf2 = RandomForestClassifier(n_estimators=50, random_state=1)
   >>> clf3 = GaussianNB()
   >>> eclf = VotingClassifier(
   ...     estimators=[('lr', clf1), ('rf', clf2), ('gnb', clf3)],
   ...     voting='hard')
   >>> for clf, label in zip([clf1, clf2, clf3, eclf],
   ...                       ['Logistic Regression', 'Random Forest',
   ...                        'naive Bayes', 'Ensemble']):
   ...     scores = cross_val_score(clf, X, y, scoring='accuracy', cv=5)
   ...     print("Accuracy: %0.2f (+/- %0.2f) [%s]"
   ...           % (scores.mean(), scores.std(), label))
   Accuracy: 0.95 (+/- 0.04) [Logistic Regression]
   Accuracy: 0.94 (+/- 0.04) [Random Forest]
   Accuracy: 0.91 (+/- 0.04) [naive Bayes]
   Accuracy: 0.95 (+/- 0.04) [Ensemble]

Weighted Average Probabilities (Soft Voting)
--------------------------------------------

In contrast to majority voting (hard voting), soft voting returns the class
label as argmax of the sum of predicted probabilities.

Specific weights can be assigned to each classifier via the ``weights``
parameter. When weights are provided, the predicted class probabilities for
each classifier are collected, multiplied by the classifier weight, and
averaged. The final class label is then derived from the class label with the
highest average probability.

To illustrate this with a simple example, let's assume we have 3 classifiers
and a 3-class classification problem where we assign equal weights to all
classifiers: w1=1, w2=1, w3=1.
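With the per-classifier probabilities used in this worked example, the weighted average and the resulting argmax can be reproduced directly with NumPy (a sketch of the computation, not scikit-learn internals):

```python
import numpy as np

# Predicted probabilities for one sample, one row per classifier
# (columns are classes 1, 2, 3)
probas = np.array([[0.2, 0.5, 0.3],
                   [0.6, 0.3, 0.1],
                   [0.3, 0.4, 0.3]])
weights = np.array([1, 1, 1])  # w1=1, w2=1, w3=1

avg = np.average(probas, axis=0, weights=weights)
# Class 2 (index 1) has the highest weighted average probability
assert np.argmax(avg) == 1
assert np.allclose(avg.round(2), [0.37, 0.4, 0.23])
```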
The weighted average probabilities for a sample would then be calculated as
follows:

================ ========== ========== ==========
classifier       class 1    class 2    class 3
================ ========== ========== ==========
classifier 1     w1 * 0.2   w1 * 0.5   w1 * 0.3
classifier 2     w2 * 0.6   w2 * 0.3   w2 * 0.1
classifier 3     w3 * 0.3   w3 * 0.4   w3 * 0.3
weighted average 0.37       0.4        0.23
================ ========== ========== ==========

Here, the predicted class label is 2, since it has the highest average
predicted probability. See the example on
:ref:`sphx_glr_auto_examples_ensemble_plot_voting_decision_regions.py` for a
demonstration of how the predicted class label can be obtained from the
weighted average of predicted probabilities.

The following figure illustrates how the decision regions may change when a
soft :class:`VotingClassifier` is trained with weights on three linear models:

.. figure:: ../auto_examples/ensemble/images/sphx_glr_plot_voting_decision_regions_002.png
   :target: ../auto_examples/ensemble/plot_voting_decision_regions.html
   :align: center
   :scale: 75%

Usage
-----

In order to predict the class labels based on the predicted
class-probabilities (scikit-learn estimators in the VotingClassifier must
support the ``predict_proba`` method)::

   >>> eclf = VotingClassifier(
   ...     estimators=[('lr', clf1), ('rf', clf2), ('gnb', clf3)],
   ...     voting='soft'
   ... )

Optionally, weights can be provided for the individual classifiers::

   >>> eclf = VotingClassifier(
   ...     estimators=[('lr', clf1), ('rf', clf2), ('gnb', clf3)],
   ...     voting='soft', weights=[2, 5, 1]
   ... )

.. dropdown:: Using the :class:`VotingClassifier` with :class:`~sklearn.model_selection.GridSearchCV`

   The :class:`VotingClassifier` can also be used together with
   :class:`~sklearn.model_selection.GridSearchCV` in order to tune the
   hyperparameters of the individual estimators::

      >>> from sklearn.model_selection import GridSearchCV
      >>> clf1 = LogisticRegression(random_state=1)
      >>> clf2 = RandomForestClassifier(random_state=1)
      >>> clf3 = GaussianNB()
      >>> eclf = VotingClassifier(
      ...     estimators=[('lr', clf1), ('rf', clf2), ('gnb', clf3)],
      ...     voting='soft'
      ... )
      >>> params = {'lr__C': [1.0, 100.0], 'rf__n_estimators': [20, 200]}
      >>> grid = GridSearchCV(estimator=eclf, param_grid=params, cv=5)
      >>> grid = grid.fit(iris.data, iris.target)

.. _voting_regressor:

Voting Regressor
================

The idea behind the :class:`VotingRegressor` is to combine conceptually
different machine learning regressors and return the average predicted
values. Such a regressor can be useful for a set of equally well performing
models in order to balance out their individual weaknesses.

Usage
-----

The following example shows how to fit the VotingRegressor::

   >>> from sklearn.datasets import load_diabetes
   >>> from sklearn.ensemble import GradientBoostingRegressor
   >>> from sklearn.ensemble import RandomForestRegressor
   >>> from sklearn.linear_model import LinearRegression
   >>> from sklearn.ensemble import VotingRegressor

   >>> # Loading some example data
   >>> X, y = load_diabetes(return_X_y=True)

   >>> # Training classifiers
   >>> reg1 = GradientBoostingRegressor(random_state=1)
   >>> reg2 = RandomForestRegressor(random_state=1)
   >>> reg3 = LinearRegression()
   >>> ereg = VotingRegressor(estimators=[('gb', reg1), ('rf', reg2), ('lr', reg3)])
   >>> ereg = ereg.fit(X, y)
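The averaging behaviour of :class:`VotingRegressor` can be verified with constant dummy regressors (an illustrative toy setup, not from the text):

```python
import numpy as np
from sklearn.dummy import DummyRegressor
from sklearn.ensemble import VotingRegressor

X = np.zeros((3, 1))
y = np.array([0.0, 1.0, 2.0])

# Two regressors that always predict 1.0 and 3.0 respectively
r1 = DummyRegressor(strategy="constant", constant=1.0)
r2 = DummyRegressor(strategy="constant", constant=3.0)

ereg = VotingRegressor(estimators=[("r1", r1), ("r2", r2)]).fit(X, y)

# The ensemble returns the average of the individual predictions: (1 + 3) / 2
assert np.allclose(ereg.predict(np.zeros((1, 1))), [2.0])
```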
.. figure:: ../auto_examples/ensemble/images/sphx_glr_plot_voting_regressor_001.png
   :target: ../auto_examples/ensemble/plot_voting_regressor.html
   :align: center
   :scale: 75%

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_ensemble_plot_voting_regressor.py`

.. _stacking:

Stacked generalization
======================

Stacked generalization is a method for combining estimators to reduce their
biases [W1992]_ [HTF]_. More precisely, the predictions of each individual
estimator are stacked together and used as input to a final estimator to
compute the prediction. This final estimator is trained through
cross-validation.

The :class:`StackingClassifier` and :class:`StackingRegressor` provide such
strategies which can be applied to classification and regression problems.

The `estimators` parameter corresponds to the list of the estimators which
are stacked together in parallel on the input data. It should be given as a
list of names and estimators::

   >>> from sklearn.linear_model import RidgeCV, LassoCV
   >>> from sklearn.neighbors import KNeighborsRegressor
   >>> estimators = [('ridge', RidgeCV()),
   ...               ('lasso', LassoCV(random_state=42)),
   ...               ('knr', KNeighborsRegressor(n_neighbors=20,
   ...                                           metric='euclidean'))]

The `final_estimator` will use the predictions of the `estimators` as input.
It needs to be a classifier or a regressor when using
:class:`StackingClassifier` or :class:`StackingRegressor`, respectively::

   >>> from sklearn.ensemble import GradientBoostingRegressor
   >>> from sklearn.ensemble import StackingRegressor
   >>> final_estimator = GradientBoostingRegressor(
   ...     n_estimators=25, subsample=0.5, min_samples_leaf=25, max_features=1,
   ...     random_state=42)
   >>> reg = StackingRegressor(
   ...     estimators=estimators,
   ...     final_estimator=final_estimator)

To train the `estimators` and `final_estimator`, the `fit` method needs to be
called on the training data::

   >>> from sklearn.datasets import load_diabetes
   >>> X, y = load_diabetes(return_X_y=True)
   >>> from sklearn.model_selection import train_test_split
   >>> X_train, X_test, y_train, y_test = train_test_split(X, y,
   ...                                                     random_state=42)
   >>> reg.fit(X_train, y_train)
   StackingRegressor(...)

During training, the `estimators` are fitted on the whole training data
`X_train`. They will be used when calling `predict` or `predict_proba`. To
generalize and avoid over-fitting, the `final_estimator` is trained on
out-of-sample predictions, using
:func:`sklearn.model_selection.cross_val_predict` internally.

For :class:`StackingClassifier`, note that the output of the ``estimators``
is controlled by the parameter `stack_method`, which is applied to each
estimator. This parameter is either a string naming an estimator method, or
`'auto'`, which will automatically identify an available method, tested in
the order of preference: `predict_proba`, `decision_function` and `predict`.
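The "trained on out-of-sample predictions" step can be sketched manually with :func:`~sklearn.model_selection.cross_val_predict` (an illustrative reconstruction of the idea, not the exact internal code):

```python
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression, RidgeCV
from sklearn.model_selection import cross_val_predict

X, y = load_diabetes(return_X_y=True)

# Out-of-fold predictions of one base estimator: each sample is predicted
# by a model that never saw it during fitting
oof = cross_val_predict(RidgeCV(), X, y, cv=5)
assert oof.shape == y.shape

# The final estimator is then fitted on these predictions (a single base
# estimator here, hence a single feature column)
final = LinearRegression().fit(oof.reshape(-1, 1), y)
```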
A :class:`StackingRegressor` and :class:`StackingClassifier` can be used as
any other regressor or classifier, exposing a `predict`, `predict_proba`, or
`decision_function` method, e.g.::

   >>> y_pred = reg.predict(X_test)
   >>> from sklearn.metrics import r2_score
   >>> print('R2 score: {:.2f}'.format(r2_score(y_test, y_pred)))
   R2 score: 0.53

Note that it is also possible to get the output of the stacked `estimators`
using the `transform` method::

   >>> reg.transform(X_test[:5])
   array([[142, 138, 146],
          [179, 182, 151],
          [139, 132, 158],
          [286, 292, 225],
          [126, 124, 164]])

In practice, a stacking predictor predicts as well as the best predictor of
the base layer and sometimes even outperforms it by combining the different
strengths of these predictors. However, training a stacking predictor is
computationally expensive.

.. note::

   For :class:`StackingClassifier`, when using `stack_method_='predict_proba'`,
   the first column is dropped when the problem is a binary classification
   problem. Indeed, both probability columns predicted by each estimator are
   perfectly collinear.

.. note::

   Multiple stacking layers can be achieved by assigning `final_estimator` to
   a :class:`StackingClassifier` or :class:`StackingRegressor`::

      >>> final_layer_rfr = RandomForestRegressor(
      ...     n_estimators=10, max_features=1, max_leaf_nodes=5, random_state=42)
      >>> final_layer_gbr = GradientBoostingRegressor(
      ...     n_estimators=10, max_features=1, max_leaf_nodes=5, random_state=42)
      >>> final_layer = StackingRegressor(
      ...     estimators=[('rf', final_layer_rfr),
      ...                 ('gbrt', final_layer_gbr)],
      ...     final_estimator=RidgeCV()
      ... )
      >>> multi_layer_regressor = StackingRegressor(
      ...     estimators=[('ridge', RidgeCV()),
      ...                 ('lasso', LassoCV(random_state=42)),
      ...                 ('knr', KNeighborsRegressor(n_neighbors=20,
      ...                                             metric='euclidean'))],
      ...     final_estimator=final_layer
      ... )
      >>> multi_layer_regressor.fit(X_train, y_train)
      StackingRegressor(...)
      >>> print('R2 score: {:.2f}'
      ...       .format(multi_layer_regressor.score(X_test, y_test)))
      R2 score: 0.53

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_ensemble_plot_stack_predictors.py`
.. rubric:: References

.. [W1992] Wolpert, David H. "Stacked generalization." Neural networks 5.2
   (1992): 241-259.

.. _adaboost:

AdaBoost
========

The module :mod:`sklearn.ensemble` includes the popular boosting algorithm
AdaBoost, introduced in 1995 by Freund and Schapire [FS1995]_.

The core principle of AdaBoost is to fit a sequence of weak learners (i.e.,
models that are only slightly better than random guessing, such as small
decision trees) on repeatedly modified versions of the data. The predictions
from all of them are then combined through a weighted majority vote (or sum)
to produce the final prediction. The data modifications at each so-called
boosting iteration consist of applying weights :math:`w_1`, :math:`w_2`, ...,
:math:`w_N` to each of the training samples. Initially, those weights are all
set to :math:`w_i = 1/N`, so that the first step simply trains a weak learner
on the original data. For each successive iteration, the sample weights are
individually modified and the learning algorithm is reapplied to the
reweighted data. At a given step, those training examples that were
incorrectly predicted by the boosted model induced at the previous step have
their weights increased, whereas the weights are decreased for those that
were predicted correctly.
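A schematic NumPy sketch of one reweighting step (the update below is the generic exponential reweighting idea; the exact AdaBoost.SAMME formula derives the estimator weight :math:`\alpha` from the weighted error rate, so the numbers here are purely illustrative):

```python
import numpy as np

y_true = np.array([0, 1, 1, 0, 1])
y_pred = np.array([0, 1, 0, 0, 0])   # two samples misclassified

n = len(y_true)
w = np.full(n, 1 / n)                # initial uniform weights w_i = 1/N
alpha = 1.0                          # illustrative estimator weight

# Misclassified samples have their weights increased, then renormalize
w = w * np.exp(alpha * (y_pred != y_true))
w = w / w.sum()

assert w[2] > w[0]                   # a misclassified sample now weighs more
assert np.isclose(w.sum(), 1.0)
```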
As iterations proceed, examples that are difficult to predict receive
ever-increasing influence. Each subsequent weak learner is thereby forced to
concentrate on the examples that are missed by the previous ones in the
sequence [HTF]_.

.. figure:: ../auto_examples/ensemble/images/sphx_glr_plot_adaboost_multiclass_001.png
   :target: ../auto_examples/ensemble/plot_adaboost_multiclass.html
   :align: center
   :scale: 75

AdaBoost can be used both for classification and regression problems:

- For multi-class classification, :class:`AdaBoostClassifier` implements
  AdaBoost.SAMME [ZZRH2009]_.

- For regression, :class:`AdaBoostRegressor` implements AdaBoost.R2 [D1997]_.

Usage
-----

The following example shows how to fit an AdaBoost classifier with 100 weak
learners::

   >>> from sklearn.model_selection import cross_val_score
   >>> from sklearn.datasets import load_iris
   >>> from sklearn.ensemble import AdaBoostClassifier

   >>> X, y = load_iris(return_X_y=True)
   >>> clf = AdaBoostClassifier(n_estimators=100)
   >>> scores = cross_val_score(clf, X, y, cv=5)
   >>> scores.mean()
   np.float64(0.95)

The number of weak learners is controlled by the parameter ``n_estimators``.
The ``learning_rate`` parameter controls the contribution of the weak
learners in the final combination. By default, weak learners are decision
stumps. Different weak learners can be specified through the ``estimator``
parameter. The main parameters to tune to obtain good results are
``n_estimators`` and the complexity of the base estimators (e.g., their depth
``max_depth`` or the minimum required number of samples to consider a split
``min_samples_split``).

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_ensemble_plot_adaboost_multiclass.py` shows
  the performance of AdaBoost on a multi-class problem.

* :ref:`sphx_glr_auto_examples_ensemble_plot_adaboost_twoclass.py` shows the
  decision boundary and decision function values for a non-linearly separable
  two-class problem using AdaBoost-SAMME.
* :ref:`sphx_glr_auto_examples_ensemble_plot_adaboost_regression.py`
  demonstrates regression with the AdaBoost.R2 algorithm.

.. rubric:: References

.. [FS1995] Y. Freund, and R. Schapire, "A Decision-Theoretic Generalization
   of On-Line Learning and an Application to Boosting", 1997.

.. [ZZRH2009] J. Zhu, H. Zou, S. Rosset, T. Hastie. "Multi-class AdaBoost",
   2009.

.. [D1997] H. Drucker. "Improving Regressors using Boosting Techniques",
   1997.

.. [HTF] T. Hastie, R. Tibshirani and J. Friedman, "Elements of Statistical
   Learning Ed. 2", Springer, 2009.
.. _array_api:

================================
Array API support (experimental)
================================

.. currentmodule:: sklearn

The `Array API <https://data-apis.org/array-api/latest/>`_ specification
defines a standard API for all array manipulation libraries with a NumPy-like
API. Scikit-learn vendors pinned copies of `array-api-compat
<https://github.com/data-apis/array-api-compat>`__ and `array-api-extra
<https://github.com/data-apis/array-api-extra>`__.

Scikit-learn's support for the array API standard requires the environment
variable `SCIPY_ARRAY_API` to be set to `1` before importing `scipy` and
`scikit-learn`:

.. prompt:: bash $

    export SCIPY_ARRAY_API=1

Please note that this environment variable is intended for temporary use.
For more details, refer to SciPy's Array API documentation.

Some scikit-learn estimators that primarily rely on NumPy (as opposed to
using Cython) to implement the algorithmic logic of their `fit`, `predict` or
`transform` methods can be configured to accept any Array API compatible
input data structures and automatically dispatch operations to the underlying
namespace instead of relying on NumPy.

At this stage, this support is **considered experimental** and must be
enabled explicitly with the `array_api_dispatch` configuration option. See
below for details.

.. note::

   Currently, only `array-api-strict`, `cupy`, and `PyTorch` are known to
   work with scikit-learn's estimators.

The following video provides an overview of the standard's design principles
and how it facilitates interoperability between array libraries:

- "Scikit-learn on GPUs with Array API" by Thomas Fan at PyData NYC 2023.

Enabling array API support
==========================

The configuration option `array_api_dispatch` needs to be set to `True` to
enable array API support. We recommend setting this configuration globally to
ensure consistent behaviour and prevent accidental mixing of array
namespaces.
Note that in the examples below, we use a context manager
(:func:`config_context`) to avoid having to reset it to `False` at the end of
every code snippet, so as to not affect the rest of the documentation.

Scikit-learn accepts :term:`array-like` inputs for all :mod:`metrics` and
some estimators. When `array_api_dispatch=False`, these inputs are converted
into NumPy arrays using :func:`numpy.asarray` (or :func:`numpy.array`). While
this will successfully convert some array API inputs (e.g., a JAX array), we
generally recommend setting `array_api_dispatch=True` when using array API
inputs. This is because NumPy conversion can often fail, e.g., for a torch
tensor allocated on a GPU.

Example usage
=============

The example code snippet below demonstrates how to use
`CuPy <https://cupy.dev/>`_ to run
:class:`~discriminant_analysis.LinearDiscriminantAnalysis` on a GPU::

    >>> from sklearn.datasets import make_classification
    >>> from sklearn import config_context
    >>> from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    >>> import cupy

    >>> X_np, y_np = make_classification(random_state=0)
    >>> X_cu = cupy.asarray(X_np)
    >>> y_cu = cupy.asarray(y_np)
    >>> X_cu.device
    <CUDA Device 0>

    >>> with config_context(array_api_dispatch=True):
    ...     lda = LinearDiscriminantAnalysis()
    ...     X_trans = lda.fit_transform(X_cu, y_cu)
    >>> X_trans.device
    <CUDA Device 0>

After the model is trained, fitted attributes that are arrays will also be
from the same Array API namespace as the training data. For example, if
CuPy's Array API namespace was used for training, then fitted attributes will
be on the GPU.
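For comparison, the same estimator runs on plain NumPy arrays with no configuration at all (NumPy inputs are always supported); the shape below follows from the ``make_classification`` defaults (100 samples, 2 classes, so LDA projects onto a single component):

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X_np, y_np = make_classification(random_state=0)

lda = LinearDiscriminantAnalysis()
X_trans = lda.fit_transform(X_np, y_np)

# With 2 classes, LDA keeps min(n_classes - 1, n_features) = 1 component
assert X_trans.shape == (100, 1)
```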
We provide an experimental `_estimator_with_converted_arrays` utility that
transfers an estimator's attributes from Array API to an ndarray::

    >>> from sklearn.utils._array_api import _estimator_with_converted_arrays
    >>> cupy_to_ndarray = lambda array: array.get()
    >>> lda_np = _estimator_with_converted_arrays(lda, cupy_to_ndarray)
    >>> X_trans = lda_np.transform(X_np)
    >>> type(X_trans)
    <class 'numpy.ndarray'>

PyTorch Support
---------------

PyTorch Tensors can also be passed directly::

    >>> import torch
    >>> X_torch = torch.asarray(X_np, device="cuda", dtype=torch.float32)
    >>> y_torch = torch.asarray(y_np, device="cuda", dtype=torch.float32)

    >>> with config_context(array_api_dispatch=True):
    ...     lda = LinearDiscriminantAnalysis()
    ...     X_trans = lda.fit_transform(X_torch, y_torch)
    >>> type(X_trans)
    <class 'torch.Tensor'>
    >>> X_trans.device.type
    'cuda'

.. _array_api_supported:

Support for `Array API`-compatible inputs
=========================================
compatible inputs. Estimators ---------- - :class:`decomposition.PCA` (with `svd\_solver="full"`, `svd\_solver="covariance\_eigh"`, or `svd\_solver="randomized"` (`svd\_solver="randomized"` only if `power\_iteration\_normalizer="QR"`)) - :class:`kernel\_approximation.Nystroem` - :class:`linear\_model.Ridge` (with `solver="svd"`) - :class:`linear\_model.RidgeCV` (with `solver="svd"`, see :ref:`device\_support\_for\_float64`) - :class:`linear\_model.RidgeClassifier` (with `solver="svd"`) - :class:`linear\_model.RidgeClassifierCV` (with `solver="svd"`, see :ref:`device\_support\_for\_float64`) - :class:`discriminant\_analysis.LinearDiscriminantAnalysis` (with `solver="svd"`) - :class:`naive\_bayes.GaussianNB` - :class:`preprocessing.Binarizer` - :class:`preprocessing.KernelCenterer` - :class:`preprocessing.LabelBinarizer` (with `sparse\_output=False`) - :class:`preprocessing.LabelEncoder` - :class:`preprocessing.MaxAbsScaler` - :class:`preprocessing.MinMaxScaler` - :class:`preprocessing.Normalizer` - :class:`preprocessing.PolynomialFeatures` - :class:`preprocessing.StandardScaler` (see :ref:`device\_support\_for\_float64`) - :class:`mixture.GaussianMixture` (with `init\_params="random"` or `init\_params="random\_from\_data"` and `warm\_start=False`) Meta-estimators --------------- Meta-estimators that accept Array API inputs conditioned on the fact that the base estimator also does: - :class:`calibration.CalibratedClassifierCV` (with `method="temperature"`) - :class:`model\_selection.GridSearchCV` - :class:`model\_selection.RandomizedSearchCV` - :class:`model\_selection.HalvingGridSearchCV` - :class:`model\_selection.HalvingRandomSearchCV` Metrics ------- - :func:`sklearn.metrics.accuracy\_score` - :func:`sklearn.metrics.balanced\_accuracy\_score` - :func:`sklearn.metrics.brier\_score\_loss` - :func:`sklearn.metrics.cluster.calinski\_harabasz\_score` - :func:`sklearn.metrics.cohen\_kappa\_score` - :func:`sklearn.metrics.confusion\_matrix` - 
- :func:`sklearn.metrics.d2_absolute_error_score`
- :func:`sklearn.metrics.d2_brier_score`
- :func:`sklearn.metrics.d2_log_loss_score`
- :func:`sklearn.metrics.d2_pinball_score`
- :func:`sklearn.metrics.d2_tweedie_score`
- :func:`sklearn.metrics.det_curve`
- :func:`sklearn.metrics.explained_variance_score`
- :func:`sklearn.metrics.f1_score`
- :func:`sklearn.metrics.fbeta_score`
- :func:`sklearn.metrics.hamming_loss`
- :func:`sklearn.metrics.jaccard_score`
- :func:`sklearn.metrics.log_loss`
- :func:`sklearn.metrics.max_error`
- :func:`sklearn.metrics.mean_absolute_error`
- :func:`sklearn.metrics.mean_absolute_percentage_error`
- :func:`sklearn.metrics.mean_gamma_deviance`
- :func:`sklearn.metrics.mean_pinball_loss`
- :func:`sklearn.metrics.mean_poisson_deviance` (requires enabling array API
  support for SciPy)
- :func:`sklearn.metrics.mean_squared_error`
- :func:`sklearn.metrics.mean_squared_log_error`
- :func:`sklearn.metrics.mean_tweedie_deviance`
- :func:`sklearn.metrics.median_absolute_error`
- :func:`sklearn.metrics.multilabel_confusion_matrix`
- :func:`sklearn.metrics.pairwise.additive_chi2_kernel`
- :func:`sklearn.metrics.pairwise.chi2_kernel`
- :func:`sklearn.metrics.pairwise.cosine_similarity`
- :func:`sklearn.metrics.pairwise.cosine_distances`
- :func:`sklearn.metrics.pairwise.pairwise_distances` (only supports "cosine",
  "euclidean", "manhattan" and "l2" metrics)
- :func:`sklearn.metrics.pairwise.euclidean_distances` (see
  :ref:`device_support_for_float64`)
- :func:`sklearn.metrics.pairwise.laplacian_kernel`
- :func:`sklearn.metrics.pairwise.linear_kernel`
- :func:`sklearn.metrics.pairwise.manhattan_distances`
- :func:`sklearn.metrics.pairwise.paired_cosine_distances`
- :func:`sklearn.metrics.pairwise.paired_euclidean_distances`
- :func:`sklearn.metrics.pairwise.paired_manhattan_distances`
- :func:`sklearn.metrics.pairwise.pairwise_kernels`
- :func:`sklearn.metrics.pairwise.polynomial_kernel`
- :func:`sklearn.metrics.pairwise.rbf_kernel` (see :ref:`device_support_for_float64`)
- :func:`sklearn.metrics.pairwise.sigmoid_kernel`
- :func:`sklearn.metrics.precision_score`
- :func:`sklearn.metrics.precision_recall_curve`
- :func:`sklearn.metrics.precision_recall_fscore_support`
- :func:`sklearn.metrics.r2_score`
- :func:`sklearn.metrics.recall_score`
- :func:`sklearn.metrics.roc_curve`
- :func:`sklearn.metrics.root_mean_squared_error`
- :func:`sklearn.metrics.root_mean_squared_log_error`
- :func:`sklearn.metrics.zero_one_loss`

Tools
-----

- :func:`preprocessing.label_binarize` (with `sparse_output=False`)
- :func:`model_selection.cross_val_predict`
- :func:`model_selection.train_test_split`
- :func:`utils.check_consistent_length`

Coverage is expected to grow over time. Please follow the dedicated meta-issue
on GitHub to track progress.

Input and output array type handling
====================================

Estimators and scoring functions can accept input arrays from different array
libraries and/or devices. When a mixed set of input arrays is passed,
scikit-learn converts arrays as needed to make them all consistent.

For estimators, the rule is **"everything follows** `X` **"**: mixed array
inputs are converted so that they all match the array library and device of
`X`. For scoring functions, the rule is **"everything follows** `y_pred`
**"**: mixed array inputs are converted so that they all match the array
library and device of `y_pred`.

When a function or method has been called with array API compatible inputs,
the convention is to return arrays from the same array library and on the same
device as the input data.

Estimators
----------

When an estimator is fitted with an array API compatible `X`, all other array
inputs, including constructor arguments (e.g., `y`, `sample_weight`), will be
converted to match the array library and device of `X`, if they do not
already.
This behaviour enables switching from processing on the CPU to processing on
the GPU at any point within a pipeline: estimators accept mixed input types,
so `X` can be moved to a different device within a pipeline without explicitly
moving `y`. Note that scikit-learn pipelines do not allow transformation of
`y` (to avoid leakage).

Take for example a pipeline where `X` and `y` both start on CPU, and go
through the following three steps:

* :class:`~sklearn.preprocessing.TargetEncoder`, which transforms categorical
  `X` but also requires `y`, meaning both `X` and `y` need to be on CPU.
* :class:`~sklearn.preprocessing.FunctionTransformer` with
  `func=partial(torch.asarray, device="cuda")`, which moves `X` to GPU, to
  improve performance in the next step.
* :class:`~sklearn.linear_model.Ridge`, whose performance can be improved when
  passed arrays on a GPU, as GPUs can handle large matrix operations very
  efficiently.

`X` initially contains categorical string data (thus needs to be on CPU),
which is target encoded to numerical values by
:class:`~sklearn.preprocessing.TargetEncoder`. `X` is then explicitly moved to
GPU to improve
the performance of :class:`~sklearn.linear_model.Ridge`. `y` cannot be
transformed by the pipeline (recall that scikit-learn pipelines do not allow
transformation of `y`), but as :class:`~sklearn.linear_model.Ridge` accepts
mixed input types, this is not a problem and the pipeline can be run.

The fitted attributes of an estimator fitted with an array API compatible `X`
will be arrays from the same library as the input and stored on the same
device. The `predict` and `transform` methods subsequently expect inputs from
the same array library and device as the data passed to the `fit` method.

Scoring functions
-----------------

When an array API compatible `y_pred` is passed to a scoring function, all
other array inputs (e.g., `y_true`, `sample_weight`) will be converted to
match the array library and device of `y_pred`, if they do not already.

This allows scoring functions to accept mixed input types, enabling them to be
used within a :term:`meta-estimator` (or function that accepts estimators)
with a pipeline that moves input arrays between devices (e.g., CPU to GPU).
For example, to use the pipeline described above within
:func:`~sklearn.model_selection.cross_validate` or
:class:`~sklearn.model_selection.GridSearchCV`, the scoring function called
internally needs to be able to accept mixed input types.

The output type of scoring functions depends on the number of output values.
When a scoring function returns a scalar value, it will return a Python scalar
(typically a `float` instance) instead of an array scalar value.
For scoring functions that support :term:`multiclass` or :term:`multioutput`,
an array from the same array library and device as `y_pred` will be returned
when multiple values need to be output.

Common estimator checks
=======================

Add the `array_api_support` tag to an estimator's set of tags to indicate that
it supports the array API. This will enable dedicated checks as part of the
common tests to verify that the estimators' results are the same when using
vanilla NumPy and array API inputs.

To run these checks you need to install array-api-strict in your test
environment. This allows you to run checks without having a GPU. To run the
full set of checks you also need to install PyTorch, CuPy and have a GPU.
Checks that cannot be executed or have missing dependencies will be
automatically skipped. Therefore it's important to run the tests with the `-v`
flag to see which checks are skipped:

.. prompt:: bash $

    pip install array-api-strict  # and other libraries as needed
    pytest -k "array_api" -v

Running the scikit-learn tests against `array-api-strict` should help reveal
most code problems related to handling multiple device inputs via the use of
simulated non-CPU devices. This allows for fast iterative development and
debugging of array API related code. However, to ensure full handling of
PyTorch or CuPy inputs allocated on actual GPU devices, it is necessary to run
the tests against those libraries and hardware. This can either be achieved by
using Google Colab or leveraging our CI infrastructure on pull requests
(manually triggered by maintainers for cost reasons).

.. _mps_support:

Note on MPS device support
--------------------------

On macOS, PyTorch can use the Metal Performance Shaders (MPS) to access
hardware accelerators (e.g. the internal GPU component of the M1 or M2 chips).
However, the MPS device support for PyTorch is incomplete at the time of
writing. See the following GitHub issue for
more details:

- https://github.com/pytorch/pytorch/issues/77764

To enable the MPS support in PyTorch, set the environment variable
`PYTORCH_ENABLE_MPS_FALLBACK=1` before running the tests:

.. prompt:: bash $

    PYTORCH_ENABLE_MPS_FALLBACK=1 pytest -k "array_api" -v

At the time of writing all scikit-learn tests should pass; however, the
computational speed is not necessarily better than with the CPU device.

.. _device_support_for_float64:

Note on device support for ``float64``
--------------------------------------

Certain operations within scikit-learn will automatically perform operations
on floating-point values with `float64` precision to prevent overflows and
ensure correctness (e.g., :func:`metrics.pairwise.euclidean_distances`,
:class:`preprocessing.StandardScaler`). However, certain combinations of array
namespaces and devices, such as PyTorch on MPS (see :ref:`mps_support`), do
not support the `float64` data type. In these cases, scikit-learn will revert
to using the `float32` data type instead. This can result in different
behavior (typically numerically unstable results) compared to not using array
API dispatching or using a device with `float64` support.
.. currentmodule:: sklearn.model_selection

.. _TunedThresholdClassifierCV:

==================================================
Tuning the decision threshold for class prediction
==================================================

Classification is best divided into two parts:

* the statistical problem of learning a model to predict, ideally, class
  probabilities;
* the decision problem to take concrete action based on those probability
  predictions.

Let's take a straightforward example related to weather forecasting: the first
point is related to answering "what is the chance that it will rain tomorrow?"
while the second point is related to answering "should I take an umbrella
tomorrow?".

When it comes to the scikit-learn API, the first point is addressed by
providing scores using :term:`predict_proba` or :term:`decision_function`. The
former returns conditional probability estimates :math:`P(y|X)` for each
class, while the latter returns a decision score for each class. The decision
corresponding to the labels is obtained with :term:`predict`. In binary
classification, a decision rule or action is then defined by thresholding the
scores, leading to the prediction of a single class label for each sample. For
binary classification in scikit-learn, class label predictions are obtained by
hard-coded cut-off rules: a positive class is predicted when the conditional
probability :math:`P(y|X)` is greater than 0.5 (obtained with
:term:`predict_proba`) or if the decision score is greater than 0 (obtained
with :term:`decision_function`).
Here, we show an example that illustrates the relationship between conditional
probability estimates :math:`P(y|X)` and class labels::

    >>> from sklearn.datasets import make_classification
    >>> from sklearn.tree import DecisionTreeClassifier
    >>> X, y = make_classification(random_state=0)
    >>> classifier = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
    >>> classifier.predict_proba(X[:4])
    array([[0.94  , 0.06  ],
           [0.94  , 0.06  ],
           [0.0416, 0.9583],
           [0.0416, 0.9583]])
    >>> classifier.predict(X[:4])
    array([0, 0, 1, 1])

While these hard-coded rules might at first seem reasonable as default
behavior, they are most certainly not ideal for most use cases. Let's
illustrate with an example.

Consider a scenario where a predictive model is being deployed to assist
physicians in detecting tumors. In this setting, physicians will most likely
be interested in identifying all patients with cancer and not missing anyone
with cancer so that they can provide them with the right treatment. In other
words, physicians prioritize achieving a high recall rate. This emphasis on
recall comes, of course, with the trade-off of potentially more false-positive
predictions, reducing the precision of the model. That is a risk physicians
are willing to take because the cost of a missed cancer is much higher than
the cost of further diagnostic tests. Consequently, when it comes to deciding
whether to classify a patient as having cancer or not, it may be more
beneficial to classify them as positive for cancer when the conditional
probability estimate is much lower than 0.5.

Post-tuning the decision threshold
==================================

One solution to address the problem stated in the introduction is to tune the
decision threshold of the classifier once the model has been trained. The
:class:`~sklearn.model_selection.TunedThresholdClassifierCV` tunes this
threshold using an internal cross-validation. The optimum threshold is chosen
to maximize a given metric.
The following image illustrates the tuning of the decision threshold for a
gradient boosting classifier. While the vanilla and tuned classifiers provide
the same :term:`predict_proba` outputs and thus the same Receiver Operating
Characteristic (ROC) and Precision-Recall curves, the class label predictions
differ because of the tuned decision threshold. The vanilla classifier
predicts the class of interest for a conditional probability greater than 0.5
while the tuned classifier predicts the class of interest for a very low
probability (around 0.02). This decision threshold optimizes a utility metric
defined by the business (in this case an insurance company).

.. figure:: ../auto_examples/model_selection/images/sphx_glr_plot_cost_sensitive_learning_002.png
   :target: ../auto_examples/model_selection/plot_cost_sensitive_learning.html
   :align: center

Options to tune the decision threshold
--------------------------------------

The decision threshold can be tuned through different strategies
controlled by the parameter `scoring`.

One way to tune the threshold is by maximizing a pre-defined scikit-learn
metric. These metrics can be found by calling the function
:func:`~sklearn.metrics.get_scorer_names`. By default, the balanced accuracy
is the metric used, but be aware that one should choose a meaningful metric
for their use case.

.. note::

    It is important to notice that these metrics come with default parameters,
    notably the label of the class of interest (i.e. `pos_label`). Thus, if
    this label is not the right one for your application, you need to define a
    scorer and pass the right `pos_label` (and additional parameters) using
    :func:`~sklearn.metrics.make_scorer`. Refer to :ref:`scoring_callable` to
    get information on defining your own scoring function. For instance, we
    show how to pass the information to the scorer that the label of interest
    is `0` when maximizing the :func:`~sklearn.metrics.f1_score`::

        >>> from sklearn.linear_model import LogisticRegression
        >>> from sklearn.model_selection import TunedThresholdClassifierCV
        >>> from sklearn.metrics import make_scorer, f1_score
        >>> X, y = make_classification(
        ...     n_samples=1_000, weights=[0.1, 0.9], random_state=0)
        >>> pos_label = 0
        >>> scorer = make_scorer(f1_score, pos_label=pos_label)
        >>> base_model = LogisticRegression()
        >>> model = TunedThresholdClassifierCV(base_model, scoring=scorer)
        >>> scorer(model.fit(X, y), X, y)
        0.88
        >>> # compare it with the internal score found by cross-validation
        >>> model.best_score_
        np.float64(0.86)

Important notes regarding the internal cross-validation
-------------------------------------------------------

By default, :class:`~sklearn.model_selection.TunedThresholdClassifierCV` uses
a 5-fold stratified cross-validation to tune the decision threshold. The
parameter `cv` allows controlling the cross-validation strategy. It is
possible to bypass cross-validation by setting `cv="prefit"` and providing a
fitted classifier. In this case, the decision threshold is tuned on the data
provided to the `fit` method.

However, you should be extremely careful when using this option. You should
never use the same data for training the classifier and tuning the decision
threshold, due to the risk of overfitting. Refer to the following example
section for more details (cf. :ref:`TunedThresholdClassifierCV_no_cv`). If you
have limited resources, consider using a float number for `cv` to limit to an
internal single train-test split.

The option `cv="prefit"` should only be used when the provided classifier was
already trained, and you just want to find the best decision threshold using a
new validation set.

.. _FixedThresholdClassifier:

Manually setting the decision threshold
---------------------------------------

The previous sections discussed strategies to find an optimal decision
threshold. It is also possible to manually set the decision threshold using
the class :class:`~sklearn.model_selection.FixedThresholdClassifier`.
If you do not want to refit the model when calling `fit`, wrap your
sub-estimator with a :class:`~sklearn.frozen.FrozenEstimator` and do
``FixedThresholdClassifier(FrozenEstimator(estimator), ...)``.

Examples
--------

- See the example entitled
  :ref:`sphx_glr_auto_examples_model_selection_plot_tuned_decision_threshold.py`
  to get insights on the post-tuning of the decision threshold.
- See the example entitled
  :ref:`sphx_glr_auto_examples_model_selection_plot_cost_sensitive_learning.py`
  to learn about cost-sensitive learning and decision threshold tuning.
.. _kernel_ridge:

=======================
Kernel ridge regression
=======================

.. currentmodule:: sklearn.kernel_ridge

Kernel ridge regression (KRR) [M2012]_ combines :ref:`ridge_regression`
(linear least squares with :math:`L_2`-norm regularization) with the kernel
trick. It thus learns a linear function in the space induced by the respective
kernel and the data. For non-linear kernels, this corresponds to a non-linear
function in the original space.

The form of the model learned by :class:`KernelRidge` is identical to support
vector regression (:class:`~sklearn.svm.SVR`). However, different loss
functions are used: KRR uses squared error loss while support vector
regression uses :math:`\epsilon`-insensitive loss, both combined with
:math:`L_2` regularization. In contrast to :class:`~sklearn.svm.SVR`, fitting
:class:`KernelRidge` can be done in closed form and is typically faster for
medium-sized datasets. On the other hand, the learned model is non-sparse and
is thus slower at prediction time than :class:`~sklearn.svm.SVR`, which learns
a sparse model for :math:`\epsilon > 0`.

The following figure compares :class:`KernelRidge` and
:class:`~sklearn.svm.SVR` on an artificial dataset, which consists of a
sinusoidal target function and strong noise added to every fifth datapoint.
The learned model of :class:`KernelRidge` and :class:`~sklearn.svm.SVR` is
plotted, where both complexity/regularization and bandwidth of the RBF kernel
have been optimized using grid-search. The learned functions are very similar;
however, fitting :class:`KernelRidge` is approximately seven times faster than
fitting :class:`~sklearn.svm.SVR` (both with grid-search). However, prediction
of 100,000 target values is more than three times faster with
:class:`~sklearn.svm.SVR` since it has learned a sparse model using only
approximately 1/3 of the 100 training datapoints as support vectors.

.. figure:: ../auto_examples/miscellaneous/images/sphx_glr_plot_kernel_ridge_regression_001.png
   :target: ../auto_examples/miscellaneous/plot_kernel_ridge_regression.html
   :align: center

The next figure compares the time for fitting and prediction of
:class:`KernelRidge` and :class:`~sklearn.svm.SVR` for different sizes of the
training set. Fitting :class:`KernelRidge` is faster than
:class:`~sklearn.svm.SVR` for medium-sized training sets (less than 1000
samples); however, for larger training sets :class:`~sklearn.svm.SVR` scales
better. With regard to prediction time, :class:`~sklearn.svm.SVR` is faster
than :class:`KernelRidge` for all sizes of the training set because of the
learned sparse solution. Note that the degree of sparsity and thus the
prediction time depends on the parameters :math:`\epsilon` and :math:`C` of
the :class:`~sklearn.svm.SVR`; :math:`\epsilon = 0` would correspond to a
dense model.

.. figure:: ../auto_examples/miscellaneous/images/sphx_glr_plot_kernel_ridge_regression_002.png
   :target: ../auto_examples/miscellaneous/plot_kernel_ridge_regression.html
   :align: center

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_miscellaneous_plot_kernel_ridge_regression.py`

.. rubric:: References

.. [M2012] "Machine Learning: A Probabilistic Perspective" Murphy, K. P. -
   chapter 14.4.3, pp. 492-493, The MIT Press, 2012
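As a runnable miniature of the KRR/SVR comparison above (the noisy-sinusoid
data mimics the figures, but the hyperparameter values here are illustrative,
not the grid-searched ones):

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 5, size=(100, 1)), axis=0)
y = np.sin(X).ravel()
y[::5] += 3 * (0.5 - rng.uniform(size=20))  # strong noise on every fifth datapoint

# same RBF kernel; KRR uses squared error loss, SVR epsilon-insensitive loss
krr = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.5).fit(X, y)
svr = SVR(kernel="rbf", C=1.0, gamma=0.5, epsilon=0.1).fit(X, y)

print(krr.predict(X[:3]).shape, svr.predict(X[:3]).shape)  # (3,) (3,)
```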
.. _neural_networks_supervised:

==================================
Neural network models (supervised)
==================================

.. currentmodule:: sklearn.neural_network

.. warning::

    This implementation is not intended for large-scale applications. In
    particular, scikit-learn offers no GPU support. For much faster, GPU-based
    implementations, as well as frameworks offering much more flexibility to
    build deep learning architectures, see :ref:`related_projects`.

.. _multilayer_perceptron:

Multi-layer Perceptron
======================

**Multi-layer Perceptron (MLP)** is a supervised learning algorithm that
learns a function :math:`f: R^m \rightarrow R^o` by training on a dataset,
where :math:`m` is the number of dimensions for input and :math:`o` is the
number of dimensions for output. Given a set of features :math:`X = \{x_1,
x_2, ..., x_m\}` and a target :math:`y`, it can learn a non-linear function
approximator for either classification or regression. It is different from
logistic regression, in that between the input and the output layer, there can
be one or more non-linear layers, called hidden layers. Figure 1 shows a one
hidden layer MLP with scalar output.

.. figure:: ../images/multilayerperceptron_network.png
   :align: center
   :scale: 60%

   **Figure 1 : One hidden layer MLP.**

The leftmost layer, known as the input layer, consists of a set of neurons
:math:`\{x_i | x_1, x_2, ..., x_m\}` representing the input features. Each
neuron in the hidden layer transforms the values from the previous layer with
a weighted linear summation :math:`w_1x_1 + w_2x_2 + ... + w_mx_m`, followed
by a non-linear activation function :math:`g(\cdot):R \rightarrow R` - like
the hyperbolic tan function. The output layer receives the values from the
last hidden layer and transforms them into output values.

The module contains the public attributes ``coefs_`` and ``intercepts_``.
``coefs_`` is a list of weight matrices, where the weight matrix at index
:math:`i` represents the weights between layer :math:`i` and layer
:math:`i+1`. ``intercepts_`` is a list of bias vectors, where the vector at
index :math:`i` represents the bias values added to layer :math:`i+1`.

.. dropdown:: Advantages and disadvantages of Multi-layer Perceptron

    The advantages of Multi-layer Perceptron are:

    + Capability to learn non-linear models.
    + Capability to learn models in real-time (on-line learning) using
      ``partial_fit``.

    The disadvantages of Multi-layer Perceptron (MLP) include:

    + MLP with hidden layers has a non-convex loss function where there exists
      more than one local minimum. Therefore, different random weight
      initializations can lead to different validation accuracy.
    + MLP requires tuning a number of hyperparameters such as the number of
      hidden neurons, layers, and iterations.
    + MLP is sensitive to feature scaling.

    Please see the Tips on Practical Use section that addresses some of these
    disadvantages.

Classification
==============

Class :class:`MLPClassifier` implements a multi-layer perceptron (MLP)
algorithm that trains using Backpropagation.

MLP trains on two arrays: array X of size (n_samples, n_features), which holds
the training samples represented as floating point feature vectors; and array
y of size (n_samples,), which holds the target values (class labels) for the
training samples::

    >>> from sklearn.neural_network import MLPClassifier
    >>> X = [[0., 0.], [1., 1.]]
    >>> y = [0, 1]
    >>> clf = MLPClassifier(solver='lbfgs', alpha=1e-5,
    ...                     hidden_layer_sizes=(5, 2), random_state=1)
    >>> clf.fit(X, y)
    MLPClassifier(alpha=1e-05, hidden_layer_sizes=(5, 2), random_state=1,
                  solver='lbfgs')

After fitting (training), the model can predict labels for new samples::

    >>> clf.predict([[2., 2.], [-1., -2.]])
    array([1, 0])

MLP can fit a non-linear model to the training data.
``clf.coefs_`` contains the weight matrices that constitute the model
parameters::

    >>> [coef.shape for coef in clf.coefs_]
    [(2, 5), (5, 2), (2, 1)]

Currently, :class:`MLPClassifier` supports only the Cross-Entropy loss
function, which allows probability estimates by running the ``predict_proba``
method.

MLP trains using Backpropagation. More precisely, it trains using some form of
gradient descent and the gradients are calculated using Backpropagation. For
classification, it minimizes the Cross-Entropy loss function, giving a vector
of probability estimates
:math:`P(y|x)` per sample :math:`x`::

    >>> clf.predict_proba([[2., 2.], [1., 2.]])
    array([[1.967e-04, 9.998e-01],
           [1.967e-04, 9.998e-01]])

:class:`MLPClassifier` supports multi-class classification by applying Softmax
as the output function.

Further, the model supports multi-label classification, in which a sample can
belong to more than one class. For each class, the raw output passes through
the logistic function. Values larger than or equal to `0.5` are rounded to
`1`, otherwise to `0`. For a predicted output of a sample, the indices where
the value is `1` represent the assigned classes of that sample::

    >>> X = [[0., 0.], [1., 1.]]
    >>> y = [[0, 1], [1, 1]]
    >>> clf = MLPClassifier(solver='lbfgs', alpha=1e-5,
    ...                     hidden_layer_sizes=(15,), random_state=1)
    >>> clf.fit(X, y)
    MLPClassifier(alpha=1e-05, hidden_layer_sizes=(15,), random_state=1,
                  solver='lbfgs')
    >>> clf.predict([[1., 2.]])
    array([[1, 1]])
    >>> clf.predict([[0., 0.]])
    array([[0, 1]])

See the examples below and the docstring of :meth:`MLPClassifier.fit` for
further information.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_neural_networks_plot_mlp_training_curves.py`
* See :ref:`sphx_glr_auto_examples_neural_networks_plot_mnist_filters.py` for
  visualized representation of trained weights.

Regression
==========

Class :class:`MLPRegressor` implements a multi-layer perceptron (MLP) that
trains using backpropagation with no activation function in the output layer,
which can also be seen as using the identity function as activation function.
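A minimal :class:`MLPRegressor` sketch (the data and network size are
illustrative):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

X = np.linspace(0, 1, 50).reshape(-1, 1)
y = np.sin(2 * np.pi * X).ravel()  # smooth non-linear target

# identity output activation: predictions are unbounded continuous values
reg = MLPRegressor(
    hidden_layer_sizes=(20,), solver="lbfgs", alpha=1e-4,
    max_iter=2000, random_state=1,
).fit(X, y)
print(reg.predict(X[:2]).shape)  # (2,)
```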
Therefore, it uses the squared error as the loss function, and the output is a
set of continuous values.

:class:`MLPRegressor` also supports multi-output regression, in which a sample
can have more than one target.

Regularization
==============

Both :class:`MLPRegressor` and :class:`MLPClassifier` use the parameter
``alpha`` for the regularization (L2 regularization) term, which helps avoid
overfitting by penalizing weights with large magnitudes. The following plot
displays the varying decision function for different values of alpha.

.. figure:: ../auto_examples/neural_networks/images/sphx_glr_plot_mlp_alpha_001.png
   :target: ../auto_examples/neural_networks/plot_mlp_alpha.html
   :align: center
   :scale: 75

See the examples below for further information.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_neural_networks_plot_mlp_alpha.py`

Algorithms
==========

MLP trains using Stochastic Gradient Descent, :arxiv:`Adam <1412.6980>`, or
L-BFGS. Stochastic Gradient Descent (SGD) updates parameters using the
gradient of the loss function with respect to a parameter that needs
adaptation, i.e.

.. math::

    w \leftarrow w - \eta \left[\alpha \frac{\partial R(w)}{\partial w} +
    \frac{\partial Loss}{\partial w}\right]

where :math:`\eta` is the learning rate, which controls the step-size in the
parameter space search. :math:`Loss` is the loss function used for the
network. More details can be found in the documentation of SGD.

Adam is similar to SGD in the sense that it is a stochastic optimizer, but it
can automatically adjust the amount to update parameters based on adaptive
estimates of lower-order moments.

With SGD or Adam, training supports online and mini-batch learning.

L-BFGS is a solver that approximates the Hessian matrix, which represents the
second-order partial derivative of a function. Further, it approximates the
inverse of the Hessian matrix to perform parameter updates. The implementation
uses the SciPy version of L-BFGS.
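The SGD update rule above can be written out directly. This toy sketch (all
names are illustrative, not scikit-learn internals) applies one update for a
squared-error loss with an L2 penalty :math:`R(w) = \tfrac{1}{2}\lVert w \rVert^2`:

```python
import numpy as np

def sgd_step(w, x, y, eta=0.1, alpha=1e-4):
    # w <- w - eta * (alpha * dR/dw + dLoss/dw)
    # with Loss = 0.5 * (w @ x - y)**2 and R(w) = 0.5 * ||w||^2
    grad_loss = (w @ x - y) * x  # dLoss/dw
    grad_penalty = w             # dR/dw
    return w - eta * (alpha * grad_penalty + grad_loss)

w = np.zeros(2)
w = sgd_step(w, x=np.array([1.0, 2.0]), y=1.0)
print(w)  # [0.1 0.2]
```

With `w = 0` the penalty gradient vanishes, so the step is just `-eta * grad_loss = -0.1 * [-1, -2] = [0.1, 0.2]`.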
If the selected solver is 'L-BFGS', training supports neither online nor mini-batch learning. Complexity ========== Suppose there are :math:`n` training samples, :math:`m` features, :math:`k` hidden layers, each containing :math:`h` neurons (for simplicity), and :math:`o` output neurons. The time complexity of backpropagation is :math:`O(i \cdot n \cdot (m \cdot h + (k - 1) \cdot h \cdot h + h \cdot o))`, where :math:`i` is the number of iterations. Since backpropagation has a high time complexity, it is advisable to start with a smaller number of hidden neurons and a few hidden layers for training. .. dropdown:: Mathematical formulation Given a set of training examples :math:`\{(x\_1,
y\_1), (x\_2, y\_2), \ldots, (x\_n, y\_n)\}` where :math:`x\_i \in \mathbf{R}^n` and :math:`y\_i \in \{0, 1\}`, a one-hidden-layer, one-hidden-neuron MLP learns the function :math:`f(x) = W\_2 g(W\_1^T x + b\_1) + b\_2` where :math:`W\_1 \in \mathbf{R}^m` and :math:`W\_2, b\_1, b\_2 \in \mathbf{R}` are model parameters. :math:`W\_1, W\_2` represent the weights of the input layer and hidden layer, respectively; and :math:`b\_1, b\_2` represent the bias added to the hidden layer and the output layer, respectively. :math:`g(\cdot) : R \rightarrow R` is the activation function, set by default as the hyperbolic tangent. It is given as, .. math:: g(z)= \frac{e^z-e^{-z}}{e^z+e^{-z}} For binary classification, :math:`f(x)` passes through the logistic function :math:`g(z)=1/(1+e^{-z})` to obtain output values between zero and one. A threshold, set to 0.5, assigns samples with outputs greater than or equal to 0.5 to the positive class, and the rest to the negative class. If there are more than two classes, :math:`f(x)` itself is a vector of size (n\_classes,). Instead of passing through the logistic function, it passes through the softmax function, which is written as, .. math:: \text{softmax}(z)\_i = \frac{\exp(z\_i)}{\sum\_{l=1}^K\exp(z\_l)} where :math:`z\_i` represents the :math:`i`-th element of the input to softmax, which corresponds to class :math:`i`, and :math:`K` is the number of classes. The result is a vector containing the probabilities that sample :math:`x` belongs to each class. The output is the class with the highest probability. In regression, the output remains as :math:`f(x)`; therefore, the output activation function is just the identity function. 
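The forward pass described above can be sketched in plain NumPy; the weights below are arbitrary illustrative values, not trained parameters:

```python
import numpy as np

def softmax(z):
    # subtract the max for numerical stability; result sums to 1
    e = np.exp(z - z.max())
    return e / e.sum()

# Arbitrary parameters for a 2-input, 3-hidden-unit, 3-class MLP.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 3)), rng.normal(size=3)
W2, b2 = rng.normal(size=(3, 3)), rng.normal(size=3)

x = np.array([0.5, -1.0])
hidden = np.tanh(x @ W1 + b1)   # g(W1^T x + b1), g = hyperbolic tangent
f_x = hidden @ W2 + b2          # f(x) before the output activation
probs = softmax(f_x)            # class probabilities, one per class
```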
MLP uses different loss functions depending on the problem type. The loss function for classification is Average Cross-Entropy, which in the binary case is given as, .. math:: Loss(\hat{y},y,W) = -\dfrac{1}{n}\sum\_{i=1}^n(y\_i \ln {\hat{y\_i}} + (1-y\_i) \ln{(1-\hat{y\_i})}) + \dfrac{\alpha}{2n} ||W||\_2^2 where :math:`\alpha ||W||\_2^2` is an L2-regularization term (aka penalty) that penalizes complex models; and :math:`\alpha > 0` is a positive hyperparameter that controls the magnitude of the penalty. For regression, MLP uses the Mean Square Error loss function; written as, .. math:: Loss(\hat{y},y,W) = \frac{1}{2n}\sum\_{i=1}^n||\hat{y}\_i - y\_i ||\_2^2 + \frac{\alpha}{2n} ||W||\_2^2 Starting from initial random weights, multi-layer perceptron (MLP) minimizes the loss function by repeatedly updating these weights. After computing the loss, a backward pass propagates it from the output layer to the previous layers, providing each weight parameter with an update value meant to decrease the loss. In gradient descent, the gradient :math:`\nabla Loss\_{W}` of the loss with respect to the weights is computed and subtracted from :math:`W`. More formally, this is expressed as, .. math:: W^{i+1} = W^i - \epsilon \nabla {Loss}\_{W}^{i} where :math:`i` is the iteration step, and :math:`\epsilon` is the learning rate with a value larger than 0. The algorithm stops when it reaches a preset maximum number of iterations; or when the improvement in loss is below a certain, small number. .. \_mlp\_tips: Tips on Practical Use ===================== \* Multi-layer Perceptron is sensitive to feature scaling, so it is highly recommended to scale your data. For example, scale each attribute on the input vector X to [0, 1] or [-1, +1], or standardize it to have mean 0 and variance 1. Note that you must apply the \*same\* scaling to the test set for meaningful results. You can use :class:`~sklearn.preprocessing.StandardScaler` for standardization. 
>>> from sklearn.preprocessing import StandardScaler # doctest: +SKIP >>> scaler = StandardScaler() # doctest: +SKIP >>> # Don't cheat -
fit only on training data >>> scaler.fit(X\_train) # doctest: +SKIP >>> X\_train = scaler.transform(X\_train) # doctest: +SKIP >>> # apply same transformation to test data >>> X\_test = scaler.transform(X\_test) # doctest: +SKIP An alternative and recommended approach is to use :class:`~sklearn.preprocessing.StandardScaler` in a :class:`~sklearn.pipeline.Pipeline`. \* Finding a reasonable regularization parameter :math:`\alpha` is best done using :class:`~sklearn.model\_selection.GridSearchCV`, usually in the range ``10.0 \*\* -np.arange(1, 7)``. \* Empirically, we observed that `L-BFGS` converges faster and with better solutions on small datasets. For relatively large datasets, however, `Adam` is very robust. It usually converges quickly and gives pretty good performance. `SGD` with momentum or Nesterov's momentum, on the other hand, can perform better than those two algorithms if the learning rate is correctly tuned. More control with warm\_start ============================ If you want more control over stopping criteria or learning rate in SGD, or want to do additional monitoring, using ``warm\_start=True`` and ``max\_iter=1`` and iterating yourself can be helpful:: >>> X = [[0., 0.], [1., 1.]] >>> y = [0, 1] >>> clf = MLPClassifier(hidden\_layer\_sizes=(15,), random\_state=1, max\_iter=1, warm\_start=True) >>> for i in range(10): ... clf.fit(X, y) ... # additional monitoring / inspection MLPClassifier(... .. dropdown:: References \* `"Learning representations by back-propagating errors." `\_ Rumelhart, David E., Geoffrey E. Hinton, and Ronald J. Williams. \* `"Stochastic Gradient Descent" `\_ L. Bottou - Website, 2010. 
\* `"Backpropagation" `\_ Andrew Ng, Jiquan Ngiam, Chuan Yu Foo, Yifan Mai, Caroline Suen - Website, 2011. \* `"Efficient BackProp" `\_ Y. LeCun, L. Bottou, G. Orr, K. Müller - In Neural Networks: Tricks of the Trade 1998. \* :arxiv:`"Adam: A method for stochastic optimization." <1412.6980>` Kingma, Diederik, and Jimmy Ba (2014)
.. \_linear\_model: ============= Linear Models ============= .. currentmodule:: sklearn.linear\_model The following are a set of methods intended for regression in which the target value is expected to be a linear combination of the features. In mathematical notation, the predicted value :math:`\hat{y}` can be written as: .. math:: \hat{y}(w, x) = w\_0 + w\_1 x\_1 + ... + w\_p x\_p Across the module, we designate the vector :math:`w = (w\_1, ..., w\_p)` as ``coef\_`` and :math:`w\_0` as ``intercept\_``. To perform classification with generalized linear models, see :ref:`Logistic\_regression`. .. \_ordinary\_least\_squares: Ordinary Least Squares ======================= :class:`LinearRegression` fits a linear model with coefficients :math:`w = (w\_1, ..., w\_p)` to minimize the residual sum of squares between the observed targets in the dataset, and the targets predicted by the linear approximation. Mathematically it solves a problem of the form: .. math:: \min\_{w} || X w - y||\_2^2 .. figure:: ../auto\_examples/linear\_model/images/sphx\_glr\_plot\_ols\_ridge\_001.png :target: ../auto\_examples/linear\_model/plot\_ols\_ridge.html :align: center :scale: 50% :class:`LinearRegression` takes in its ``fit`` method arguments ``X``, ``y``, ``sample\_weight`` and stores the coefficients :math:`w` of the linear model in its ``coef\_`` and ``intercept\_`` attributes:: >>> from sklearn import linear\_model >>> reg = linear\_model.LinearRegression() >>> reg.fit([[0, 0], [1, 1], [2, 2]], [0, 1, 2]) LinearRegression() >>> reg.coef\_ array([0.5, 0.5]) >>> reg.intercept\_ 0.0 The coefficient estimates for Ordinary Least Squares rely on the independence of the features. When features are correlated and some columns of the design matrix :math:`X` have an approximately linear dependence, the design matrix becomes close to singular and as a result, the least-squares estimate becomes highly sensitive to random errors in the observed target, producing a large variance. 
This situation of \*multicollinearity\* can arise, for example, when data are collected without an experimental design. .. rubric:: Examples \* :ref:`sphx\_glr\_auto\_examples\_linear\_model\_plot\_ols\_ridge.py` Non-Negative Least Squares -------------------------- It is possible to constrain all the coefficients to be non-negative, which may be useful when they represent some physical or naturally non-negative quantities (e.g., frequency counts or prices of goods). :class:`LinearRegression` accepts a boolean ``positive`` parameter: when set to `True`, `Non-Negative Least Squares `\_ is then applied. .. rubric:: Examples \* :ref:`sphx\_glr\_auto\_examples\_linear\_model\_plot\_nnls.py` Ordinary Least Squares Complexity --------------------------------- The least squares solution is computed using the singular value decomposition of :math:`X`. If :math:`X` is a matrix of shape `(n\_samples, n\_features)`, this method has a cost of :math:`O(n\_{\text{samples}} n\_{\text{features}}^2)`, assuming that :math:`n\_{\text{samples}} \geq n\_{\text{features}}`. .. \_ridge\_regression: Ridge regression and classification =================================== Regression ---------- :class:`Ridge` regression addresses some of the problems of :ref:`ordinary\_least\_squares` by imposing a penalty on the size of the coefficients. The ridge coefficients minimize a penalized residual sum of squares: .. math:: \min\_{w} || X w - y||\_2^2 + \alpha ||w||\_2^2 The complexity parameter :math:`\alpha \geq 0` controls the amount of shrinkage: the larger the value of :math:`\alpha`, the greater the amount of shrinkage and thus the coefficients become more robust to collinearity. .. 
figure:: ../auto\_examples/linear\_model/images/sphx\_glr\_plot\_ridge\_path\_001.png :target: ../auto\_examples/linear\_model/plot\_ridge\_path.html :align: center :scale: 50% As with other linear models, :class:`Ridge` will take in its ``fit`` method arrays ``X``, ``y`` and will store the coefficients :math:`w` of the linear model in its ``coef\_`` member:: >>> from sklearn import linear\_model >>> reg = linear\_model.Ridge(alpha=.5) >>> reg.fit([[0, 0], [0, 0], [1, 1]], [0, .1, 1]) Ridge(alpha=0.5) >>> reg.coef\_ array([0.34545455, 0.34545455]) >>> reg.intercept\_ np.float64(0.13636) Note that the class :class:`Ridge` allows the user to specify that the solver be automatically chosen by setting `solver="auto"`. When this option is specified, :class:`Ridge` will choose between the `"lbfgs"`, `"cholesky"`, and `"sparse\_cg"` solvers. :class:`Ridge` will begin checking the conditions shown in the following table from top to bottom. If the condition is true, the corresponding solver is chosen. +-------------+----------------------------------------------------+ | \*\*Solver\*\* | \*\*Condition\*\* | +-------------+----------------------------------------------------+ | 'lbfgs' | The ``positive=True`` option is
specified. | +-------------+----------------------------------------------------+ | 'cholesky' | The input array X is not sparse. | +-------------+----------------------------------------------------+ | 'sparse\_cg' | None of the above conditions are fulfilled. | +-------------+----------------------------------------------------+ .. rubric:: Examples \* :ref:`sphx\_glr\_auto\_examples\_linear\_model\_plot\_ols\_ridge.py` \* :ref:`sphx\_glr\_auto\_examples\_linear\_model\_plot\_ridge\_path.py` \* :ref:`sphx\_glr\_auto\_examples\_inspection\_plot\_linear\_model\_coefficient\_interpretation.py` \* :ref:`sphx\_glr\_auto\_examples\_linear\_model\_plot\_ridge\_coeffs.py` Classification -------------- The :class:`Ridge` regressor has a classifier variant: :class:`RidgeClassifier`. This classifier first converts binary targets to ``{-1, 1}`` and then treats the problem as a regression task, optimizing the same objective as above. The predicted class corresponds to the sign of the regressor's prediction. For multiclass classification, the problem is treated as multi-output regression, and the predicted class corresponds to the output with the highest value. It might seem questionable to use a (penalized) Least Squares loss to fit a classification model instead of the more traditional logistic or hinge losses. 
However, in practice, all those models can lead to similar cross-validation scores in terms of accuracy or precision/recall, while the penalized least squares loss used by the :class:`RidgeClassifier` allows for a very different choice of the numerical solvers with distinct computational performance profiles. The :class:`RidgeClassifier` can be significantly faster than e.g. :class:`LogisticRegression` with a high number of classes because it can compute the projection matrix :math:`(X^T X)^{-1} X^T` only once. This classifier is sometimes referred to as a `Least Squares Support Vector Machine `\_ with a linear kernel. .. rubric:: Examples \* :ref:`sphx\_glr\_auto\_examples\_text\_plot\_document\_classification\_20newsgroups.py` Ridge Complexity ---------------- This method has the same order of complexity as :ref:`ordinary\_least\_squares`. .. FIXME: .. Not completely true: OLS is solved by an SVD, while Ridge is solved by .. the method of normal equations (Cholesky), there is a big flop difference .. between these Setting the regularization parameter: leave-one-out Cross-Validation -------------------------------------------------------------------- :class:`RidgeCV` and :class:`RidgeClassifierCV` implement ridge regression/classification with built-in cross-validation of the alpha parameter. They work in the same way as :class:`~sklearn.model\_selection.GridSearchCV` except that they default to efficient Leave-One-Out :term:`cross-validation`. When using the default :term:`cross-validation`, alpha cannot be 0 due to the formulation used to calculate Leave-One-Out error. See [RL2007]\_ for details. 
Usage example:: >>> import numpy as np >>> from sklearn import linear\_model >>> reg = linear\_model.RidgeCV(alphas=np.logspace(-6, 6, 13)) >>> reg.fit([[0, 0], [0, 0], [1, 1]], [0, .1, 1]) RidgeCV(alphas=array([1.e-06, 1.e-05, 1.e-04, 1.e-03, 1.e-02, 1.e-01, 1.e+00, 1.e+01, 1.e+02, 1.e+03, 1.e+04, 1.e+05, 1.e+06])) >>> reg.alpha\_ np.float64(0.01) Specifying the value of the :term:`cv` attribute will trigger the use of cross-validation with :class:`~sklearn.model\_selection.GridSearchCV`, for example `cv=10` for 10-fold cross-validation, rather than Leave-One-Out Cross-Validation. .. dropdown:: References .. [RL2007] "Notes on Regularized Least Squares", Rifkin & Lippert (`technical report `\_, `course slides `\_). .. \_lasso: Lasso ===== The :class:`Lasso` is a linear model that estimates sparse coefficients, i.e., it is able to set coefficients exactly to zero. It is useful in some contexts due to its tendency to prefer solutions with fewer non-zero coefficients, effectively reducing the number of features upon which the given solution is dependent. For this reason, Lasso and its variants are fundamental to the field of compressed sensing. Under certain conditions, it can recover the exact set of non-zero coefficients (see :ref:`sphx\_glr\_auto\_examples\_applications\_plot\_tomography\_l1\_reconstruction.py`). Mathematically, it consists of a linear model with an added regularization term. The objective function to minimize is: .. math:: \min\_{w} P(w) = {\frac{1}{2n\_{\text{samples}}} ||X w - y||\_2 ^ 2 + \alpha ||w||\_1} The lasso estimate thus solves the least-squares with added penalty :math:`\alpha ||w||\_1`, where :math:`\alpha` is a constant and :math:`||w||\_1`
is the :math:`\ell\_1`-norm of the coefficient vector. The implementation in the class :class:`Lasso` uses coordinate descent as the algorithm to fit the coefficients. See :ref:`least\_angle\_regression` for another implementation:: >>> from sklearn import linear\_model >>> reg = linear\_model.Lasso(alpha=0.1) >>> reg.fit([[0, 0], [1, 1]], [0, 1]) Lasso(alpha=0.1) >>> reg.predict([[1, 1]]) array([0.8]) The function :func:`lasso\_path` is useful for lower-level tasks, as it computes the coefficients along the full path of possible values. .. rubric:: Examples \* :ref:`sphx\_glr\_auto\_examples\_linear\_model\_plot\_lasso\_and\_elasticnet.py` \* :ref:`sphx\_glr\_auto\_examples\_applications\_plot\_tomography\_l1\_reconstruction.py` \* :ref:`sphx\_glr\_auto\_examples\_inspection\_plot\_linear\_model\_coefficient\_interpretation.py` \* :ref:`sphx\_glr\_auto\_examples\_linear\_model\_plot\_lasso\_model\_selection.py` .. note:: \*\*Feature selection with Lasso\*\* As the Lasso regression yields sparse models, it can thus be used to perform feature selection, as detailed in :ref:`l1\_feature\_selection`. .. dropdown:: References The following references explain the origin of the Lasso as well as properties of the Lasso problem and the duality gap computation used for convergence control. \* :doi:`Robert Tibshirani. (1996) Regression Shrinkage and Selection Via the Lasso. J. R. Stat. Soc. Ser. B Stat. Methodol., 58(1):267-288 <10.1111/j.2517-6161.1996.tb02080.x>` \* "An Interior-Point Method for Large-Scale L1-Regularized Least Squares," S. J. Kim, K. Koh, M. Lustig, S. Boyd and D. 
Gorinevsky, in IEEE Journal of Selected Topics in Signal Processing, 2007 (`Paper `\_\_) .. \_coordinate\_descent: Coordinate Descent with Gap Safe Screening Rules ------------------------------------------------ Coordinate descent (CD) is a strategy to solve a minimization problem that considers a single feature :math:`j` at a time. This way, the optimization problem is reduced to a one-dimensional problem, which is easier to solve: .. math:: \min\_{w\_j} {\frac{1}{2n\_{\text{samples}}} ||x\_j w\_j + X\_{-j}w\_{-j} - y||\_2 ^ 2 + \alpha |w\_j|} with index :math:`-j` meaning all features but :math:`j`. The solution is .. math:: w\_j = \frac{S(x\_j^T (y - X\_{-j}w\_{-j}), \alpha)}{||x\_j||\_2^2} with the soft-thresholding function :math:`S(z, \alpha) = \operatorname{sign}(z) \max(0, |z|-\alpha)`. Note that the soft-thresholding function is exactly zero whenever :math:`\alpha \geq |z|`. The CD solver then loops over the features either in a cycle, picking one feature after the other in the order given by `X` (`selection="cyclic"`), or by randomly picking features (`selection="random"`). It stops if the duality gap is smaller than the provided tolerance `tol`. .. dropdown:: Mathematical details The duality gap :math:`G(w, v)` is an upper bound of the difference between the current primal objective function of the Lasso, :math:`P(w)`, and its minimum :math:`P(w^\star)`, i.e. :math:`G(w, v) \geq P(w) - P(w^\star)`. It is given by :math:`G(w, v) = P(w) - D(v)` with dual objective function .. math:: D(v) = \frac{1}{2n\_{\text{samples}}}(y^Tv - ||v||\_2^2) subject to :math:`||X^Tv||\_{\infty} \leq n\_{\text{samples}}\alpha`. At optimum, the duality gap is zero, :math:`G(w^\star, v^\star) = 0` (a property called strong duality). With (scaled) dual variable :math:`v = c r`, current residual :math:`r = y - Xw` and dual scaling .. 
math:: c = \begin{cases} 1, & ||X^Tr||\_{\infty} \leq n\_{\text{samples}}\alpha, \\ \frac{n\_{\text{samples}}\alpha}{||X^Tr||\_{\infty}}, & \text{otherwise} \end{cases} the stopping criterion is .. math:: G(w, cr) < \text{tol} \frac{||y||\_2^2}{n\_{\text{samples}}}\,. A clever method to speed up the coordinate descent algorithm is to screen features such that at optimum :math:`w\_j = 0`. Gap safe screening rules are such a tool. Anywhere during the optimization algorithm, they can tell which feature we can safely exclude, i.e., set to zero with certainty. .. dropdown:: References The first reference explains the coordinate descent solver used in scikit-learn, the others treat gap safe screening rules. \* :doi:`Friedman, Hastie & Tibshirani. (2010). Regularization Path For Generalized linear Models by Coordinate Descent. J Stat Softw 33(1), 1-22
<10.18637/jss.v033.i01>` \* :arxiv:`O. Fercoq, A. Gramfort, J. Salmon. (2015). Mind the duality gap: safer rules for the Lasso. Proceedings of Machine Learning Research 37:333-342, 2015. <1505.03410>` \* :arxiv:`E. Ndiaye, O. Fercoq, A. Gramfort, J. Salmon. (2017). Gap Safe Screening Rules for Sparsity Enforcing Penalties. Journal of Machine Learning Research 18(128):1-33, 2017. <1611.05780>` Setting regularization parameter -------------------------------- The ``alpha`` parameter controls the degree of sparsity of the estimated coefficients. Using cross-validation ^^^^^^^^^^^^^^^^^^^^^^^ scikit-learn exposes objects that set the Lasso ``alpha`` parameter by cross-validation: :class:`LassoCV` and :class:`LassoLarsCV`. :class:`LassoLarsCV` is based on the :ref:`least\_angle\_regression` algorithm explained below. For high-dimensional datasets with many collinear features, :class:`LassoCV` is most often preferable. However, :class:`LassoLarsCV` has the advantage of exploring more relevant values of the `alpha` parameter, and if the number of samples is very small compared to the number of features, it is often faster than :class:`LassoCV`. .. |lasso\_cv\_1| image:: ../auto\_examples/linear\_model/images/sphx\_glr\_plot\_lasso\_model\_selection\_002.png :target: ../auto\_examples/linear\_model/plot\_lasso\_model\_selection.html :scale: 48% .. |lasso\_cv\_2| image:: ../auto\_examples/linear\_model/images/sphx\_glr\_plot\_lasso\_model\_selection\_003.png :target: ../auto\_examples/linear\_model/plot\_lasso\_model\_selection.html :scale: 48% .. centered:: |lasso\_cv\_1| |lasso\_cv\_2| .. 
\_lasso\_lars\_ic: Information-criteria based model selection ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Alternatively, the estimator :class:`LassoLarsIC` proposes to use the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC). It is a computationally cheaper alternative to find the optimal value of alpha as the regularization path is computed only once instead of k+1 times when using k-fold cross-validation. Indeed, these criteria are computed on the in-sample training set. In short, they penalize the over-optimistic scores of the different Lasso models by their flexibility (cf. the "Mathematical details" section below). However, such criteria need a proper estimation of the degrees of freedom of the solution, are derived for large samples (asymptotic results), and assume the correct model is among the candidates under investigation. They also tend to break when the problem is badly conditioned (e.g. more features than samples). .. figure:: ../auto\_examples/linear\_model/images/sphx\_glr\_plot\_lasso\_lars\_ic\_001.png :target: ../auto\_examples/linear\_model/plot\_lasso\_lars\_ic.html :align: center :scale: 50% .. rubric:: Examples \* :ref:`sphx\_glr\_auto\_examples\_linear\_model\_plot\_lasso\_model\_selection.py` \* :ref:`sphx\_glr\_auto\_examples\_linear\_model\_plot\_lasso\_lars\_ic.py` .. \_aic\_bic: AIC and BIC criteria ^^^^^^^^^^^^^^^^^^^^ The definition of AIC (and thus BIC) might differ in the literature. In this section, we give more information regarding the criterion computed in scikit-learn. .. dropdown:: Mathematical details The AIC criterion is defined as: .. math:: AIC = -2 \log(\hat{L}) + 2 d where :math:`\hat{L}` is the maximum likelihood of the model and :math:`d` is the number of parameters (also referred to as degrees of freedom in the previous section). The definition of BIC replaces the constant :math:`2` by :math:`\log(N)`: .. math:: BIC = -2 \log(\hat{L}) + \log(N) d where :math:`N` is the number of samples. 
For a linear Gaussian model, the maximum log-likelihood is defined as: .. math:: \log(\hat{L}) = - \frac{n}{2} \log(2 \pi) - \frac{n}{2} \log(\sigma^2) - \frac{\sum\_{i=1}^{n} (y\_i - \hat{y}\_i)^2}{2\sigma^2} where :math:`\sigma^2` is an estimate of the noise variance, :math:`y\_i` and :math:`\hat{y}\_i` are respectively the true and predicted targets, and :math:`n` is the number of samples. Plugging the maximum log-likelihood in the AIC formula yields: .. math:: AIC = n \log(2 \pi \sigma^2) + \frac{\sum\_{i=1}^{n} (y\_i - \hat{y}\_i)^2}{\sigma^2} + 2 d The first term of the above expression is sometimes discarded since it is a constant when :math:`\sigma^2` is provided. In addition, it is sometimes stated that the AIC is equivalent to the :math:`C\_p` statistic [12]\_. In a strict sense, however, it is equivalent only up to some constant and a multiplicative factor. At last, we mentioned above that :math:`\sigma^2`
is an estimate of the noise variance. In :class:`LassoLarsIC` when the parameter `noise\_variance` is not provided (default), the noise variance is estimated via the unbiased estimator [13]\_ defined as: .. math:: \sigma^2 = \frac{\sum\_{i=1}^{n} (y\_i - \hat{y}\_i)^2}{n - p} where :math:`p` is the number of features and :math:`\hat{y}\_i` is the predicted target using an ordinary least squares regression. Note that this formula is valid only when `n\_samples > n\_features`. .. rubric:: References .. [12] :arxiv:`Zou, Hui, Trevor Hastie, and Robert Tibshirani. "On the degrees of freedom of the lasso." The Annals of Statistics 35.5 (2007): 2173-2192. <0712.0881.pdf>` .. [13] :doi:`Cherkassky, Vladimir, and Yunqian Ma. "Comparison of model selection for regression." Neural computation 15.7 (2003): 1691-1714. <10.1162/089976603321891864>` Comparison with the regularization parameter of SVM ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ The equivalence between ``alpha`` and the regularization parameter of SVM, ``C``, is given by ``alpha = 1 / C`` or ``alpha = 1 / (n\_samples \* C)``, depending on the estimator and the exact objective function optimized by the model. .. \_multi\_task\_lasso: Multi-task Lasso ================ The :class:`MultiTaskLasso` is a linear model that estimates sparse coefficients for multiple regression problems jointly: ``y`` is a 2D array, of shape ``(n\_samples, n\_tasks)``. The constraint is that the selected features are the same for all the regression problems, also called tasks. The following figure compares the location of the non-zero entries in the coefficient matrix W obtained with a simple Lasso or a MultiTaskLasso. 
The Lasso estimates yield scattered non-zeros while the non-zeros of the MultiTaskLasso are full columns. .. |multi\_task\_lasso\_1| image:: ../auto\_examples/linear\_model/images/sphx\_glr\_plot\_multi\_task\_lasso\_support\_001.png :target: ../auto\_examples/linear\_model/plot\_multi\_task\_lasso\_support.html :scale: 48% .. |multi\_task\_lasso\_2| image:: ../auto\_examples/linear\_model/images/sphx\_glr\_plot\_multi\_task\_lasso\_support\_002.png :target: ../auto\_examples/linear\_model/plot\_multi\_task\_lasso\_support.html :scale: 48% .. centered:: |multi\_task\_lasso\_1| |multi\_task\_lasso\_2| .. centered:: Fitting a time-series model, imposing that any active feature be active at all times. .. rubric:: Examples \* :ref:`sphx\_glr\_auto\_examples\_linear\_model\_plot\_multi\_task\_lasso\_support.py` .. dropdown:: Mathematical details Mathematically, it consists of a linear model trained with a mixed :math:`\ell\_1` :math:`\ell\_2`-norm for regularization. The objective function to minimize is: .. math:: \min\_{W} { \frac{1}{2n\_{\text{samples}}} ||X W - Y||\_{\text{Fro}} ^ 2 + \alpha ||W||\_{21}} where :math:`\text{Fro}` indicates the Frobenius norm .. math:: ||A||\_{\text{Fro}} = \sqrt{\sum\_{ij} a\_{ij}^2} and :math:`\ell\_1` :math:`\ell\_2` reads .. math:: ||A||\_{2 1} = \sum\_i \sqrt{\sum\_j a\_{ij}^2}. The implementation in the class :class:`MultiTaskLasso` uses coordinate descent as the algorithm to fit the coefficients. .. \_elastic\_net: Elastic-Net =========== :class:`ElasticNet` is a linear regression model trained with both :math:`\ell\_1` and :math:`\ell\_2`-norm regularization of the coefficients. This combination allows for learning a sparse model where few of the weights are non-zero like :class:`Lasso`, while still maintaining the regularization properties of :class:`Ridge`. We control the convex combination of :math:`\ell\_1` and :math:`\ell\_2` using the ``l1\_ratio`` parameter. 
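As a minimal usage sketch of the ``l1_ratio`` trade-off (the data here are arbitrary; ``l1_ratio=1`` recovers the Lasso penalty, while values near 0 behave more like Ridge):

```python
from sklearn.linear_model import ElasticNet

# l1_ratio controls the convex combination of the l1 and l2 penalties.
reg = ElasticNet(alpha=0.1, l1_ratio=0.7)
reg.fit([[0, 0], [1, 1], [2, 2]], [0, 1, 2])
print(reg.coef_)
```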
Elastic-net is useful when there are multiple features that are correlated
with one another. Lasso is likely to pick one of these at random, while
elastic-net is likely to pick both.

A practical advantage of trading-off between Lasso and Ridge is that it
allows Elastic-Net to inherit some of Ridge's stability under rotation.

The objective function to minimize is in this case

.. math::
    \min_{w} { \frac{1}{2n_{\text{samples}}} ||X w - y||_2 ^ 2 + \alpha \rho ||w||_1 +
    \frac{\alpha(1-\rho)}{2} ||w||_2 ^ 2}

.. figure:: ../auto_examples/linear_model/images/sphx_glr_plot_lasso_lasso_lars_elasticnet_path_002.png
    :target: ../auto_examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.html
    :align: center
    :scale: 50%

The class :class:`ElasticNetCV` can be used to set the parameters ``alpha``
(:math:`\alpha`) and ``l1_ratio`` (:math:`\rho`) by cross-validation.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_linear_model_plot_lasso_and_elasticnet.py`
* :ref:`sphx_glr_auto_examples_linear_model_plot_lasso_lasso_lars_elasticnet_path.py`
* :ref:`sphx_glr_auto_examples_linear_model_plot_elastic_net_precomputed_gram_matrix_with_weighted_samples.py`

.. dropdown:: References

    The following two
    references explain the iterations used in the coordinate descent solver
    of scikit-learn, as well as the duality gap computation used for
    convergence control.

    * "Regularization Path For Generalized linear Models by Coordinate
      Descent", Friedman, Hastie & Tibshirani, J Stat Softw, 2010
      (`Paper `__).
    * "An Interior-Point Method for Large-Scale L1-Regularized Least
      Squares," S. J. Kim, K. Koh, M. Lustig, S. Boyd and D. Gorinevsky,
      in IEEE Journal of Selected Topics in Signal Processing, 2007
      (`Paper `__)

.. _multi_task_elastic_net:

Multi-task Elastic-Net
======================

The :class:`MultiTaskElasticNet` is an elastic-net model that estimates
sparse coefficients for multiple regression problems jointly: ``Y`` is a 2D
array of shape ``(n_samples, n_tasks)``. The constraint is that the selected
features are the same for all the regression problems, also called tasks.

Mathematically, it consists of a linear model trained with a mixed
:math:`\ell_1` :math:`\ell_2`-norm and :math:`\ell_2`-norm for
regularization. The objective function to minimize is:
.. math::
    \min_{W} { \frac{1}{2n_{\text{samples}}} ||X W - Y||_{\text{Fro}}^2 + \alpha \rho ||W||_{2 1} +
    \frac{\alpha(1-\rho)}{2} ||W||_{\text{Fro}}^2}

The implementation in the class :class:`MultiTaskElasticNet` uses coordinate
descent as the algorithm to fit the coefficients.

The class :class:`MultiTaskElasticNetCV` can be used to set the parameters
``alpha`` (:math:`\alpha`) and ``l1_ratio`` (:math:`\rho`) by
cross-validation.

.. _least_angle_regression:

Least Angle Regression
======================

Least-angle regression (LARS) is a regression algorithm for high-dimensional
data, developed by Bradley Efron, Trevor Hastie, Iain Johnstone and Robert
Tibshirani. LARS is similar to forward stepwise regression. At each step, it
finds the feature most correlated with the target. When there are multiple
features having equal correlation, instead of continuing along the same
feature, it proceeds in a direction equiangular between the features.

The advantages of LARS are:

- It is numerically efficient in contexts where the number of features is
  significantly greater than the number of samples.
- It is computationally just as fast as forward selection and has the same
  order of complexity as ordinary least squares.
- It produces a full piecewise linear solution path, which is useful in
  cross-validation or similar attempts to tune the model.
- If two features are almost equally correlated with the target, then their
  coefficients should increase at approximately the same rate. The algorithm
  thus behaves as intuition would expect, and also is more stable.
- It is easily modified to produce solutions for other estimators, like the
  Lasso.

The disadvantages of the LARS method include:

- Because LARS is based upon an iterative refitting of the residuals, it
  would appear to be especially sensitive to the effects of noise. This
  problem is discussed in detail by Weisberg in the discussion section of
  the Efron et al. (2004) Annals of Statistics article.
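The piecewise-linear solution path listed among the advantages can be
inspected directly. A minimal sketch (the diabetes dataset is an arbitrary
illustrative choice):

```python
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.linear_model import lars_path

# The full coefficient path comes almost for free with LARS.
X, y = load_diabetes(return_X_y=True)
alphas, active, coefs = lars_path(X, y, method="lasso")

# One coefficient vector per kink of the piecewise-linear path;
# alphas decrease monotonically along the path.
print(coefs.shape, alphas[0] >= alphas[-1])
```

Each column of ``coefs`` is the solution at the corresponding value of
``alphas``, so the whole regularization path can be plotted or searched
without refitting.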
The LARS model can be used via the estimator :class:`Lars`, or its low-level
implementations :func:`lars_path` or :func:`lars_path_gram`.

LARS Lasso
==========

:class:`LassoLars` is a lasso model implemented using the LARS algorithm,
and unlike the implementation based on coordinate descent, this yields the
exact solution, which is piecewise linear as a function of the norm of its
coefficients.

.. figure:: ../auto_examples/linear_model/images/sphx_glr_plot_lasso_lasso_lars_elasticnet_path_001.png
    :target: ../auto_examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.html
    :align: center
    :scale: 50%

::

    >>> from sklearn import linear_model
    >>> reg = linear_model.LassoLars(alpha=.1)
    >>> reg.fit([[0, 0], [1, 1]], [0, 1])
    LassoLars(alpha=0.1)
    >>> reg.coef_
    array([0.6, 0. ])

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_linear_model_plot_lasso_lasso_lars_elasticnet_path.py`

The LARS algorithm provides the full path of the coefficients along the
regularization parameter almost for free, thus a common operation is to
retrieve the path with one of the functions :func:`lars_path` or
:func:`lars_path_gram`.

.. dropdown:: Mathematical formulation

    The algorithm is similar to forward stepwise regression, but instead of
    including features at each step, the estimated coefficients are
    increased in a direction equiangular to each one's correlations with the
    residual.

    Instead of giving a vector result, the LARS solution consists of a curve
    denoting the solution for each value of the :math:`\ell_1` norm of the
    parameter vector. The full coefficients path is stored in the array
    ``coef_path_`` of shape `(n_features, max_features + 1)`. The first
    column is always zero.

    .. rubric:: References

    * Original Algorithm is detailed in the paper
      `Least Angle Regression `_ by Hastie et al.

.. _omp:

Orthogonal Matching Pursuit (OMP)
=================================

:class:`OrthogonalMatchingPursuit` and :func:`orthogonal_mp` implement the
OMP algorithm for approximating the fit of a linear model with constraints
imposed on the number of non-zero coefficients (i.e. the :math:`\ell_0`
pseudo-norm).

Being a forward feature selection method like :ref:`least_angle_regression`,
orthogonal matching pursuit can approximate the optimum solution vector with
a fixed number of non-zero elements:

.. math::
    \underset{w}{\operatorname{arg\,min\,}} ||y - Xw||_2^2 \text{ subject to } ||w||_0 \leq n_{\text{nonzero\_coefs}}

Alternatively, orthogonal matching pursuit can target a specific error
instead of a specific number of non-zero coefficients.
This can be expressed as:

.. math::
    \underset{w}{\operatorname{arg\,min\,}} ||w||_0 \text{ subject to } ||y-Xw||_2^2 \leq \text{tol}

OMP is based on a greedy algorithm that includes at each step the atom most
highly correlated with the current residual. It is similar to the simpler
matching pursuit (MP) method, but better in that at each iteration, the
residual is recomputed using an orthogonal projection on the space of the
previously chosen dictionary elements.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_linear_model_plot_omp.py`

.. dropdown:: References

    * https://www.cs.technion.ac.il/~ronrubin/Publications/KSVD-OMP-v2.pdf
    * `Matching pursuits with time-frequency dictionaries `_,
      S. G. Mallat, Z. Zhang, 1993.

.. _bayesian_regression:

Bayesian Regression
===================

Bayesian regression techniques can be used to include regularization
parameters in the estimation procedure: the regularization parameter is not
set in a hard sense but tuned to the data at hand.

This can be done by introducing `uninformative priors `__ over the
hyperparameters of the model. The :math:`\ell_{2}` regularization used in
:ref:`ridge_regression` is equivalent to finding a maximum a posteriori
estimation under a Gaussian prior over the coefficients :math:`w` with
precision :math:`\lambda^{-1}`. Instead of setting :math:`\lambda` manually,
it is possible to treat it as a random variable to be estimated from the
data.

To obtain a fully probabilistic model, the output :math:`y` is assumed to be
Gaussian distributed around :math:`X w`:

.. math::
    p(y|X,w,\alpha) = \mathcal{N}(y|X w,\alpha^{-1})

where :math:`\alpha` is again treated as a random variable that is to be
estimated from the data.

The advantages of Bayesian Regression are:

- It adapts to the data at hand.
- It can be used to include regularization parameters in the estimation
  procedure.

The disadvantages of Bayesian regression include:

- Inference of the model can be time consuming.
.. dropdown:: References

    * A good introduction to Bayesian methods is given in
      `C. Bishop: Pattern Recognition and Machine Learning `__.
    * Original Algorithm is detailed in the book
      `Bayesian learning for neural networks `__ by Radford M. Neal.

.. _bayesian_ridge_regression:

Bayesian Ridge Regression
-------------------------

:class:`BayesianRidge` estimates a probabilistic model of the regression
problem as described above. The prior for the coefficient :math:`w` is given
by a spherical Gaussian:
.. math::
    p(w|\lambda) = \mathcal{N}(w|0,\lambda^{-1}\mathbf{I}_{p})

The priors over :math:`\alpha` and :math:`\lambda` are chosen to be
`gamma distributions `__, the conjugate prior for the precision of the
Gaussian. The resulting model is called *Bayesian Ridge Regression*, and is
similar to the classical :class:`Ridge`.

The parameters :math:`w`, :math:`\alpha` and :math:`\lambda` are estimated
jointly during the fit of the model, the regularization parameters
:math:`\alpha` and :math:`\lambda` being estimated by maximizing the
*log marginal likelihood*. The scikit-learn implementation is based on the
algorithm described in Appendix A of (Tipping, 2001) where the update of the
parameters :math:`\alpha` and :math:`\lambda` is done as suggested in
(MacKay, 1992). The initial value of the maximization procedure can be set
with the hyperparameters ``alpha_init`` and ``lambda_init``.

There are four more hyperparameters, :math:`\alpha_1`, :math:`\alpha_2`,
:math:`\lambda_1` and :math:`\lambda_2` of the gamma prior distributions
over :math:`\alpha` and :math:`\lambda`. These are usually chosen to be
*non-informative*. By default
:math:`\alpha_1 = \alpha_2 = \lambda_1 = \lambda_2 = 10^{-6}`.

Bayesian Ridge Regression is used for regression::

    >>> from sklearn import linear_model
    >>> X = [[0., 0.], [1., 1.], [2., 2.], [3., 3.]]
    >>> Y = [0., 1., 2., 3.]
    >>> reg = linear_model.BayesianRidge()
    >>> reg.fit(X, Y)
    BayesianRidge()

After being fitted, the model can then be used to predict new values::

    >>> reg.predict([[1, 0.]])
    array([0.50000013])

The coefficients :math:`w` of the model can be accessed::

    >>> reg.coef_
    array([0.49999993, 0.49999993])

Due to the Bayesian framework, the weights found are slightly different from
the ones found by :ref:`ordinary_least_squares`. However, Bayesian Ridge
Regression is more robust to ill-posed problems.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_linear_model_plot_bayesian_ridge_curvefit.py`

.. dropdown:: References

    * Section 3.3 in Christopher M. Bishop: Pattern Recognition and Machine
      Learning, 2006
    * David J. C. MacKay, `Bayesian Interpolation `_, 1992.
    * Michael E. Tipping, `Sparse Bayesian Learning and the Relevance Vector
      Machine `_, 2001.

.. _automatic_relevance_determination:

Automatic Relevance Determination - ARD
---------------------------------------

The Automatic Relevance Determination (as implemented in
:class:`ARDRegression`) is a kind of linear model which is very similar to
the `Bayesian Ridge Regression`_, but that leads to sparser coefficients
:math:`w` [1]_ [2]_.

:class:`ARDRegression` poses a different prior over :math:`w`: it drops the
spherical Gaussian distribution for a centered elliptic Gaussian
distribution. This means each coefficient :math:`w_{i}` can itself be drawn
from a Gaussian distribution, centered on zero and with a precision
:math:`\lambda_{i}`:

.. math::
    p(w|\lambda) = \mathcal{N}(w|0,A^{-1})

with :math:`A` being a positive definite diagonal matrix and
:math:`\text{diag}(A) = \lambda = \{\lambda_{1},...,\lambda_{p}\}`.

In contrast to the `Bayesian Ridge Regression`_, each coordinate of
:math:`w_{i}` has its own standard deviation :math:`\frac{1}{\lambda_i}`.
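A minimal usage sketch of :class:`ARDRegression` on illustrative synthetic
data (the dataset and its dimensions are arbitrary choices): coefficients
whose individual precisions :math:`\lambda_i` grow large are pruned toward
zero, while relevant ones are retained.

```python
import numpy as np
from sklearn.linear_model import ARDRegression

# Illustrative data: only the first of 5 features is relevant.
rng = np.random.RandomState(0)
X = rng.randn(100, 5)
y = 3.0 * X[:, 0] + 0.01 * rng.randn(100)

reg = ARDRegression().fit(X, y)

# The relevant coefficient is recovered; the irrelevant ones are
# driven toward zero via their individual precisions lambda_i.
print(reg.coef_.round(2))
```

This pruning behavior is what distinguishes ARD from the spherical prior of
:class:`BayesianRidge`, which shrinks all coefficients by the same amount.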
The prior over all :math:`\lambda_i` is chosen to be the same gamma
distribution given by the hyperparameters :math:`\lambda_1` and
:math:`\lambda_2`.

ARD is also known in the literature as *Sparse Bayesian Learning* and
*Relevance Vector Machine* [3]_ [4]_. See
:ref:`sphx_glr_auto_examples_linear_model_plot_ard.py` for a worked-out
comparison between ARD and `Bayesian Ridge Regression`_.

See :ref:`sphx_glr_auto_examples_linear_model_plot_lasso_and_elasticnet.py`
for a comparison between various methods - Lasso, ARD and ElasticNet - on
correlated data.

.. rubric:: References

.. [1] Christopher M. Bishop: Pattern Recognition and Machine Learning,
    Chapter 7.2.1

.. [2] David Wipf and Srikantan Nagarajan:
    `A New View of Automatic Relevance Determination `_

.. [3] Michael E. Tipping:
    `Sparse Bayesian Learning and the Relevance Vector Machine `_

.. [4] Tristan Fletcher: `Relevance Vector Machines Explained `_

.. _Logistic_regression:

Logistic regression
===================

Logistic regression is implemented in :class:`LogisticRegression`. Despite
its name, it is implemented as a linear
model for classification rather than regression in terms of the
scikit-learn/ML nomenclature. Logistic regression is also known in the
literature as logit regression, maximum-entropy classification (MaxEnt) or
the log-linear classifier. In this model, the probabilities describing the
possible outcomes of a single trial are modeled using a
`logistic function `_.

This implementation can fit binary, One-vs-Rest, or multinomial logistic
regression with optional :math:`\ell_1`, :math:`\ell_2` or Elastic-Net
regularization.

.. note:: **Regularization**

    Regularization is applied by default, which is common in machine
    learning but not in statistics. Another advantage of regularization is
    that it improves numerical stability. No regularization amounts to
    setting C to a very high value.

.. note:: **Logistic Regression as a special case of the Generalized Linear Models (GLM)**

    Logistic regression is a special case of
    :ref:`generalized_linear_models` with a Binomial / Bernoulli conditional
    distribution and a Logit link. The numerical output of the logistic
    regression, which is the predicted probability, can be used as a
    classifier by applying a threshold (by default 0.5) to it. This is how
    it is implemented in scikit-learn, so it expects a categorical target,
    making the Logistic Regression a classifier.
.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_linear_model_plot_logistic_l1_l2_sparsity.py`
* :ref:`sphx_glr_auto_examples_linear_model_plot_logistic_path.py`
* :ref:`sphx_glr_auto_examples_linear_model_plot_logistic_multinomial.py`
* :ref:`sphx_glr_auto_examples_linear_model_plot_sparse_logistic_regression_20newsgroups.py`
* :ref:`sphx_glr_auto_examples_linear_model_plot_sparse_logistic_regression_mnist.py`
* :ref:`sphx_glr_auto_examples_classification_plot_classification_probability.py`

Binary Case
-----------

For notational ease, we assume that the target :math:`y_i` takes values in
the set :math:`\{0, 1\}` for data point :math:`i`. Once fitted, the
:meth:`~sklearn.linear_model.LogisticRegression.predict_proba` method of
:class:`~sklearn.linear_model.LogisticRegression` predicts the probability
of the positive class :math:`P(y_i=1|X_i)` as

.. math::
    \hat{p}(X_i) = \operatorname{expit}(X_i w + w_0) = \frac{1}{1 + \exp(-X_i w - w_0)}.

As an optimization problem, binary class logistic regression with
regularization term :math:`r(w)` minimizes the following cost function:

.. math::
    :name: regularized-logistic-loss

    \min_{w} \frac{1}{S}\sum_{i=1}^n s_i
    \left(-y_i \log(\hat{p}(X_i)) - (1 - y_i) \log(1 - \hat{p}(X_i))\right)
    + \frac{r(w)}{S C}\,,

where :math:`{s_i}` corresponds to the weights assigned by the user to a
specific training sample (the vector :math:`s` is formed by element-wise
multiplication of the class weights and sample weights), and the sum
:math:`S = \sum_{i=1}^n s_i`.
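The relation between ``predict_proba`` and the expit of the decision
function can be checked directly. A minimal sketch on illustrative data:

```python
import numpy as np
from scipy.special import expit
from sklearn.linear_model import LogisticRegression

# Illustrative binary problem with a single feature.
X = np.array([[-2.0], [-1.0], [1.0], [2.0]])
y = np.array([0, 0, 1, 1])

clf = LogisticRegression().fit(X, y)

# predict_proba for the positive class equals expit(X w + w_0),
# i.e. the expit of the decision function.
p_hat = clf.predict_proba(X)[:, 1]
print(np.allclose(p_hat, expit(clf.decision_function(X))))
```

Thresholding this probability at 0.5 is equivalent to thresholding the
decision function at 0, which is what ``predict`` does.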
We currently provide four choices for the regularization or penalty term
:math:`r(w)` via the arguments `C` and `l1_ratio`:

+---------------------------------+--------------------------------------------------+
| penalty                         | :math:`r(w)`                                     |
+=================================+==================================================+
| none (`C=np.inf`)               | :math:`0`                                        |
+---------------------------------+--------------------------------------------------+
| :math:`\ell_1` (`l1_ratio=1`)   | :math:`\|w\|_1`                                  |
+---------------------------------+--------------------------------------------------+
| :math:`\ell_2` (`l1_ratio=0`)   | :math:`\frac{1}{2}\|w\|_2^2 = \frac{1}{2}w^T w`  |
+---------------------------------+--------------------------------------------------+
| ElasticNet (`0<l1_ratio<1`)     | :math:`\frac{1-\rho}{2}\|w\|_2^2 + \rho \|w\|_1` |
+---------------------------------+--------------------------------------------------+

For ElasticNet, :math:`\rho` (which corresponds to the `l1_ratio` parameter)
controls the strength of :math:`\ell_1` regularization vs. :math:`\ell_2`
regularization.

Note that the scale of the class weights and the sample weights influences
the optimization problem: multiplying the sample weights by a constant
:math:`b > 0` is equivalent to multiplying the (inverse) regularization
strength `C` by :math:`b`.

Multinomial Case
----------------

The binary case can be extended to :math:`K` classes leading to the
multinomial logistic regression, see also `log-linear model `_.

.. note::
    It is possible to parameterize a :math:`K`-class classification model
    using only :math:`K-1` weight vectors, leaving one class probability
    fully determined by the other class probabilities by leveraging the fact
    that all class probabilities must sum to one. We deliberately choose to
    overparameterize the model using :math:`K` weight vectors for ease of
    implementation and to preserve the symmetrical inductive bias regarding
    ordering of classes, see [16]_. This effect becomes especially important
    when using regularization. The choice of overparameterization can be
    detrimental for unpenalized models since then the solution may not be
    unique, as shown in [16]_.

.. dropdown:: Mathematical details

    Let :math:`y_i \in \{1, \ldots, K\}` be the label (ordinal) encoded
    target variable for observation :math:`i`. Instead of a single
    coefficient vector, we now have a matrix of coefficients :math:`W`
    where each row vector
    :math:`W_k` corresponds to class :math:`k`. We aim at predicting the
    class probabilities :math:`P(y_i=k|X_i)` via
    :meth:`~sklearn.linear_model.LogisticRegression.predict_proba` as:

    .. math::
        \hat{p}_k(X_i) = \frac{\exp(X_i W_k + W_{0, k})}{\sum_{l=0}^{K-1} \exp(X_i W_l + W_{0, l})}.

    The objective for the optimization becomes

    .. math::
        \min_W -\frac{1}{S}\sum_{i=1}^n \sum_{k=0}^{K-1} s_{ik} [y_i = k] \log(\hat{p}_k(X_i))
        + \frac{r(W)}{S C}\,,

    where :math:`[P]` represents the Iverson bracket which evaluates to
    :math:`0` if :math:`P` is false, otherwise it evaluates to :math:`1`.
    Again, :math:`s_{ik}` are the weights assigned by the user
    (multiplication of sample weights and class weights) with their sum
    :math:`S = \sum_{i=1}^n \sum_{k=0}^{K-1} s_{ik}`.
    We currently provide four choices for the regularization or penalty
    term :math:`r(W)` via the arguments `C` and `l1_ratio`, where :math:`m`
    is the number of features:

    +---------------------------------+----------------------------------------------------------------------------------+
    | penalty                         | :math:`r(W)`                                                                     |
    +=================================+==================================================================================+
    | none (`C=np.inf`)               | :math:`0`                                                                        |
    +---------------------------------+----------------------------------------------------------------------------------+
    | :math:`\ell_1` (`l1_ratio=1`)   | :math:`\|W\|_{1,1} = \sum_{i=1}^m\sum_{j=1}^{K}|W_{i,j}|`                        |
    +---------------------------------+----------------------------------------------------------------------------------+
    | :math:`\ell_2` (`l1_ratio=0`)   | :math:`\frac{1}{2}\|W\|_F^2 = \frac{1}{2}\sum_{i=1}^m\sum_{j=1}^{K} W_{i,j}^2`   |
    +---------------------------------+----------------------------------------------------------------------------------+
    | ElasticNet (`0<l1_ratio<1`)     | :math:`\frac{1-\rho}{2}\|W\|_F^2 + \rho \|W\|_{1,1}`                             |
    +---------------------------------+----------------------------------------------------------------------------------+

For `n_samples >> n_features`, "newton-cholesky" is a good choice and can
reach high precision (tiny `tol` values). For large datasets the "saga"
solver is usually faster (than "lbfgs"), in particular for low precision
(high `tol`). For large datasets, you may also consider using
:class:`SGDClassifier` with `loss="log_loss"`, which might be even faster
but requires more tuning.

.. _liblinear_differences:

Differences between solvers
^^^^^^^^^^^^^^^^^^^^^^^^^^^

There might be a difference in the scores obtained between
:class:`LogisticRegression` with ``solver=liblinear`` or
:class:`~sklearn.svm.LinearSVC` and the external liblinear library
directly, when ``fit_intercept=False`` and the fit ``coef_`` (or) the data
to be predicted are zeroes. This is because for the sample(s) with
``decision_function`` zero, :class:`LogisticRegression` and
:class:`~sklearn.svm.LinearSVC` predict the negative class, while liblinear
predicts the positive class.
Note that a model with ``fit_intercept=False`` that has many samples with
``decision_function`` zero is likely to be an underfit, bad model, and you
are advised to set ``fit_intercept=True`` and increase the
``intercept_scaling``.

.. dropdown:: Solvers' details

    * The solver "liblinear" uses a coordinate descent (CD) algorithm, and
      relies on the excellent C++ `LIBLINEAR library `_, which is shipped
      with scikit-learn. However, the CD algorithm implemented in liblinear
      cannot learn a true multinomial (multiclass) model. If you still want
      to use "liblinear" on multiclass problems, you can use a "one-vs-rest"
      scheme,
      ``OneVsRestClassifier(LogisticRegression(solver="liblinear"))``, see
      :class:`~sklearn.multiclass.OneVsRestClassifier`. Note that minimizing
      the multinomial loss is expected to give better calibrated results as
      compared to a "one-vs-rest" scheme. For :math:`\ell_1` regularization,
      :func:`sklearn.svm.l1_min_c` allows to calculate the lower bound for C
      in order to get a non "null" (all feature weights to zero) model.
    * The "lbfgs", "newton-cg", "newton-cholesky" and "sag" solvers only
      support :math:`\ell_2` regularization or no regularization, and are
      found to converge faster for some high-dimensional data. These solvers
      (and "saga") learn a true multinomial logistic regression model [5]_.
    * The "sag" solver uses Stochastic Average Gradient descent [6]_. It is
      faster than other solvers for large datasets, when both the number of
      samples and the number of features are large.
    * The "saga" solver [7]_ is a variant of "sag" that also supports the
      non-smooth :math:`\ell_1` penalty (`l1_ratio=1`). This is therefore
      the solver of choice for sparse multinomial logistic regression. It is
      also the only solver that supports Elastic-Net (`0 < l1_ratio < 1`).
    * The "lbfgs" is an optimization algorithm that approximates the
      Broyden–Fletcher–Goldfarb–Shanno algorithm [8]_, which belongs to
      quasi-Newton methods.
      As such, it can deal with a wide range of different training data and
      is therefore the default solver. Its performance, however, suffers on
      poorly scaled datasets and on datasets with one-hot encoded
      categorical features with rare categories.
    * The "newton-cholesky" solver is an exact Newton solver that calculates
      the Hessian matrix and solves the resulting linear system. It is a
      very good choice for `n_samples` >> `n_features` and can reach high
      precision (tiny values of `tol`), but has a few shortcomings: only
      :math:`\ell_2` regularization is supported. Furthermore, because the
      Hessian matrix is explicitly computed, the memory usage has a
      quadratic dependency on `n_features` as well as on `n_classes`.

    For a comparison of some of these solvers, see [9]_.

.. rubric:: References

.. [5] Christopher M. Bishop: Pattern Recognition and Machine Learning,
    Chapter 4.3.4

.. [6] Mark Schmidt, Nicolas Le Roux, and Francis Bach: `Minimizing Finite
    Sums with the Stochastic Average Gradient. `_

.. [7] Aaron Defazio, Francis Bach, Simon Lacoste-Julien: :arxiv:`SAGA: A
    Fast Incremental Gradient Method With Support for Non-Strongly Convex
    Composite Objectives. <1407.0202>`

.. [8] https://en.wikipedia.org/wiki/Broyden%E2%80%93Fletcher%E2%80%93Goldfarb%E2%80%93Shanno_algorithm

.. [9] Thomas P. Minka `"A comparison of numerical optimizers for logistic
    regression" `_

.. [16] :arxiv:`Simon, Noah, J. Friedman and T. Hastie. "A Blockwise
    Descent Algorithm for Group-penalized Multiresponse and Multinomial
    Regression." <1311.6529>`
.. note:: **Feature selection with sparse logistic regression**

    A logistic regression with :math:`\ell_1` penalty yields sparse models,
    and can thus be used to perform feature selection, as detailed in
    :ref:`l1_feature_selection`.

.. note:: **P-value estimation**

    It is possible to obtain the p-values and confidence intervals for
    coefficients in cases of regression without penalization. The
    `statsmodels package `_ natively supports this. Within sklearn, one
    could use bootstrapping instead as well.

:class:`LogisticRegressionCV` implements Logistic Regression with built-in
cross-validation support, to find the optimal `C` and `l1_ratio` parameters
according to the ``scoring`` attribute. The "newton-cg", "sag", "saga" and
"lbfgs" solvers are found to be faster for high-dimensional dense data, due
to warm-starting (see :term:`Glossary `).

.. _Generalized_linear_regression:

.. _Generalized_linear_models:

Generalized Linear Models
=========================

Generalized Linear Models (GLM) extend linear models in two ways [10]_.
First, the predicted values :math:`\hat{y}` are linked to a linear
combination of the input variables :math:`X` via an inverse link function
:math:`h` as

.. math::
    \hat{y}(w, X) = h(Xw).

Secondly, the squared loss function is replaced by the unit deviance
:math:`d` of a distribution in the exponential family (or more precisely, a
reproductive exponential dispersion model (EDM) [11]_).

The minimization problem becomes:

.. math::
    \min_{w} \frac{1}{2 n_{\text{samples}}} \sum_i d(y_i, \hat{y}_i) + \frac{\alpha}{2} ||w||_2^2,

where :math:`\alpha` is the L2 regularization penalty. When sample weights
are provided, the average becomes a weighted average.
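As a concrete instance of this objective, a Poisson GLM with log link
minimizes the Poisson unit deviance. A minimal sketch with illustrative
count data (the data-generating parameters are arbitrary):

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

# Illustrative count data: the target is non-negative integer valued.
rng = np.random.RandomState(0)
X = rng.uniform(size=(200, 2))
y = rng.poisson(lam=np.exp(1 + 2 * X[:, 0]))

# h is exp (log link); d is the Poisson unit deviance.
reg = PoissonRegressor(alpha=1e-3).fit(X, y)

# Predictions are always positive because of the exponential inverse link.
print(np.all(reg.predict(X) > 0))
```

The log link guarantees non-negative predictions, which matches the target
domain of the Poisson distribution.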
The following table lists some specific EDMs and their unit deviance:

================= ================================ ============================================
Distribution      Target Domain                    Unit Deviance :math:`d(y, \hat{y})`
================= ================================ ============================================
Normal            :math:`y \in (-\infty, \infty)`  :math:`(y-\hat{y})^2`
Bernoulli         :math:`y \in \{0, 1\}`           :math:`2({y}\log\frac{y}{\hat{y}}+({1}-{y})\log\frac{{1}-{y}}{{1}-\hat{y}})`
Categorical       :math:`y \in \{0, 1, ..., k\}`   :math:`2\sum_{i \in \{0, 1, ..., k\}} I(y = i) \log\frac{I(y = i)}{\hat{I}(y = i)}`
Poisson           :math:`y \in [0, \infty)`        :math:`2(y\log\frac{y}{\hat{y}}-y+\hat{y})`
Gamma             :math:`y \in (0, \infty)`        :math:`2(\log\frac{\hat{y}}{y}+\frac{y}{\hat{y}}-1)`
Inverse Gaussian  :math:`y \in (0, \infty)`        :math:`\frac{(y-\hat{y})^2}{y\hat{y}^2}`
================= ================================ ============================================

The Probability Density Functions (PDF) of these distributions are
illustrated in the following figure,

.. figure:: ./glm_data/poisson_gamma_tweedie_distributions.png
    :align: center
    :scale: 100%

    PDF of a random variable Y following Poisson, Tweedie (power=1.5) and
    Gamma distributions with different mean values (:math:`\mu`). Observe
    the point mass at :math:`Y=0` for the Poisson distribution and the
Tweedie (power=1.5) distribution, but not for the Gamma distribution which has a strictly positive target domain. The Bernoulli distribution is a discrete probability distribution modelling a Bernoulli trial - an event that has only two mutually exclusive outcomes. The Categorical distribution is a generalization of the Bernoulli distribution for a categorical random variable. While a random variable in a Bernoulli distribution has two possible outcomes, a Categorical random variable can take on one of K possible categories, with the probability of each category specified separately. The choice of the distribution depends on the problem at hand: \* If the target values :math:`y` are counts (non-negative integer valued) or relative frequencies (non-negative), you might use a Poisson distribution with a log-link. \* If the target values are positive valued and skewed, you might try a Gamma distribution with a log-link. \* If the target values seem to be heavier tailed than a Gamma distribution, you might try an Inverse Gaussian distribution (or even higher variance powers of the Tweedie family). \* If the target values :math:`y` are probabilities, you can use the Bernoulli distribution. The Bernoulli distribution with a logit link can be used for binary classification. The Categorical distribution with a softmax link can be used for multiclass classification. .. dropdown:: Examples of use cases \* Agriculture / weather modeling: number of rain events per year (Poisson), amount of rainfall per event (Gamma), total rainfall per year (Tweedie / Compound Poisson Gamma).
* Risk modeling / insurance policy pricing: number of claim events / policyholder per year (Poisson), cost per event (Gamma), total cost per policyholder per year (Tweedie / Compound Poisson Gamma).
* Credit Default: probability that a loan can't be paid back (Bernoulli).
* Fraud Detection: probability that a financial transaction like a cash transfer is a fraudulent transaction (Bernoulli).
* Predictive maintenance: number of production interruption events per year (Poisson), duration of interruption (Gamma), total interruption time per year (Tweedie / Compound Poisson Gamma).
* Medical Drug Testing: probability of curing a patient in a set of trials or probability that a patient will experience side effects (Bernoulli).
* News Classification: classification of news articles into three categories, namely Business News, Politics and Entertainment news (Categorical).

.. rubric:: References

.. [10] McCullagh, Peter; Nelder, John (1989). Generalized Linear Models, Second Edition. Boca Raton: Chapman and Hall/CRC. ISBN 0-412-31760-5.

.. [11] Jørgensen, B. (1992). The theory of exponential dispersion models and analysis of deviance. Monografias de matemática, no. 51. See also `Exponential dispersion model. `_

Usage
-----

:class:`TweedieRegressor` implements a generalized linear model for the Tweedie distribution, that allows to model any of the above mentioned distributions using the appropriate ``power`` parameter. In particular:

- ``power = 0``: Normal distribution. Specific estimators such as :class:`Ridge`, :class:`ElasticNet` are generally more appropriate in this case.
- ``power = 1``: Poisson distribution. :class:`PoissonRegressor` is exposed for convenience. However, it is strictly equivalent to `TweedieRegressor(power=1, link='log')`.
- ``power = 2``: Gamma distribution. :class:`GammaRegressor` is exposed for convenience. However, it is strictly equivalent to `TweedieRegressor(power=2, link='log')`.
- ``power = 3``: Inverse Gaussian distribution.
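The stated equivalence between :class:`PoissonRegressor` and
`TweedieRegressor(power=1, link='log')` can be checked directly; a minimal
sketch on toy data (invented here for illustration), where both estimators
minimize the same penalized deviance and therefore agree:

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor, TweedieRegressor

# Toy data, invented for illustration.
X = [[0, 0], [0, 1], [2, 2], [1, 3]]
y = [0, 1, 2, 3]

pois = PoissonRegressor(alpha=0.5).fit(X, y)
tweedie = TweedieRegressor(power=1, link='log', alpha=0.5).fit(X, y)

# Both minimize the same penalized Poisson deviance with the same
# solver, so the fitted coefficients agree up to solver tolerance.
same = np.allclose(pois.coef_, tweedie.coef_, atol=1e-4)
print(same)
```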
The link function is determined by the `link` parameter.

Usage example::

    >>> from sklearn.linear_model import TweedieRegressor
    >>> reg = TweedieRegressor(power=1, alpha=0.5, link='log')
    >>> reg.fit([[0, 0], [0, 1], [2, 2]], [0, 1, 2])
    TweedieRegressor(alpha=0.5, link='log', power=1)
    >>> reg.coef_
    array([0.2463, 0.4337])
    >>>
reg.intercept\_ np.float64(-0.7638) .. rubric:: Examples \* :ref:`sphx\_glr\_auto\_examples\_linear\_model\_plot\_poisson\_regression\_non\_normal\_loss.py` \* :ref:`sphx\_glr\_auto\_examples\_linear\_model\_plot\_tweedie\_regression\_insurance\_claims.py` .. dropdown:: Practical considerations The feature matrix `X` should be standardized before fitting. This ensures that the penalty treats features equally. Since the linear predictor :math:`Xw` can be negative and Poisson, Gamma and Inverse Gaussian distributions don't support negative values, it is necessary to apply an inverse link function that guarantees the non-negativeness. For example with `link='log'`, the inverse link function becomes :math:`h(Xw)=\exp(Xw)`. If you want to model a relative frequency, i.e. counts per exposure (time, volume, ...) you can do so by using a Poisson distribution and passing :math:`y=\frac{\mathrm{counts}}{\mathrm{exposure}}` as target values together with :math:`\mathrm{exposure}` as sample weights. For a concrete example see e.g. :ref:`sphx\_glr\_auto\_examples\_linear\_model\_plot\_tweedie\_regression\_insurance\_claims.py`. When performing cross-validation for the `power` parameter of `TweedieRegressor`, it is advisable to specify an explicit `scoring` function, because the default scorer :meth:`TweedieRegressor.score` is a function of `power` itself. Stochastic Gradient Descent - SGD ================================= Stochastic gradient descent is a simple yet very efficient approach to fit linear models.
It is particularly useful when the number of samples (and the number of features) is very large. The ``partial\_fit`` method allows online/out-of-core learning. The classes :class:`SGDClassifier` and :class:`SGDRegressor` provide functionality to fit linear models for classification and regression using different (convex) loss functions and different penalties. E.g., with ``loss="log"``, :class:`SGDClassifier` fits a logistic regression model, while with ``loss="hinge"`` it fits a linear support vector machine (SVM). You can refer to the dedicated :ref:`sgd` documentation section for more details. .. \_perceptron: Perceptron ---------- The :class:`Perceptron` is another simple classification algorithm suitable for large scale learning and derives from SGD. By default: - It does not require a learning rate. - It is not regularized (penalized). - It updates its model only on mistakes. The last characteristic implies that the Perceptron is slightly faster to train than SGD with the hinge loss and that the resulting models are sparser. In fact, the :class:`Perceptron` is a wrapper around the :class:`SGDClassifier` class using a perceptron loss and a constant learning rate. Refer to :ref:`mathematical section ` of the SGD procedure for more details. .. \_passive\_aggressive: Passive Aggressive Algorithms ----------------------------- The passive-aggressive (PA) algorithms are another family of 2 algorithms (PA-I and PA-II) for large-scale online learning that derive from SGD. They are similar to the Perceptron in that they do not require a learning rate. However, contrary to the Perceptron, they include a regularization parameter ``eta0`` (:math:`C` in the reference paper). For classification, :class:`SGDClassifier(loss="hinge", penalty=None, learning\_rate="pa1", eta0=1.0)` can be used for PA-I or with ``learning\_rate="pa2"`` for PA-II. 
For regression, :class:`SGDRegressor(loss="epsilon\_insensitive", penalty=None, learning\_rate="pa1", eta0=1.0)` can be used for PA-I or with ``learning\_rate="pa2"`` for PA-II. .. dropdown:: References \* `"Online Passive-Aggressive Algorithms" `\_ K. Crammer, O. Dekel, J. Keshat, S. Shalev-Shwartz, Y. Singer - JMLR 7 (2006) Robustness regression: outliers and modeling errors ===================================================== Robust regression aims to fit a regression model in the presence of corrupt data: either outliers, or error in the model. .. figure:: ../auto\_examples/linear\_model/images/sphx\_glr\_plot\_theilsen\_001.png :target: ../auto\_examples/linear\_model/plot\_theilsen.html :scale: 50% :align: center Different scenario and useful concepts ---------------------------------------- There are different things to keep in mind when dealing with data corrupted by outliers: .. |y\_outliers| image:: ../auto\_examples/linear\_model/images/sphx\_glr\_plot\_robust\_fit\_003.png :target: ../auto\_examples/linear\_model/plot\_robust\_fit.html :scale: 60% .. |X\_outliers| image:: ../auto\_examples/linear\_model/images/sphx\_glr\_plot\_robust\_fit\_002.png :target: ../auto\_examples/linear\_model/plot\_robust\_fit.html :scale: 60% .. |large\_y\_outliers| image:: ../auto\_examples/linear\_model/images/sphx\_glr\_plot\_robust\_fit\_005.png :target: ../auto\_examples/linear\_model/plot\_robust\_fit.html :scale: 60% \* \*\*Outliers in X or in y\*\*? ==================================== ==================================== Outliers in the y
direction Outliers in the X direction ==================================== ==================================== |y\_outliers| |X\_outliers| ==================================== ==================================== \* \*\*Fraction of outliers versus amplitude of error\*\* The number of outlying points matters, but also how much they are outliers. ==================================== ==================================== Small outliers Large outliers ==================================== ==================================== |y\_outliers| |large\_y\_outliers| ==================================== ==================================== An important notion of robust fitting is that of breakdown point: the fraction of data that can be outlying for the fit to start missing the inlying data. Note that in general, robust fitting in high-dimensional setting (large `n\_features`) is very hard. The robust models here will probably not work in these settings. .. topic:: Trade-offs: which estimator ? Scikit-learn provides 3 robust regression estimators: :ref:`RANSAC `, :ref:`Theil Sen ` and :ref:`HuberRegressor `. \* :ref:`HuberRegressor ` should be faster than :ref:`RANSAC ` and :ref:`Theil Sen ` unless the number of samples is very large, i.e.
``n\_samples`` >> ``n\_features``. This is because :ref:`RANSAC ` and :ref:`Theil Sen ` fit on smaller subsets of the data. However, both :ref:`Theil Sen ` and :ref:`RANSAC ` are unlikely to be as robust as :ref:`HuberRegressor ` for the default parameters. \* :ref:`RANSAC ` is faster than :ref:`Theil Sen ` and scales much better with the number of samples. \* :ref:`RANSAC ` will deal better with large outliers in the y direction (most common situation). \* :ref:`Theil Sen ` will cope better with medium-size outliers in the X direction, but this property will disappear in high-dimensional settings. When in doubt, use :ref:`RANSAC `. .. \_ransac\_regression: RANSAC: RANdom SAmple Consensus -------------------------------- RANSAC (RANdom SAmple Consensus) fits a model from random subsets of inliers from the complete data set. RANSAC is a non-deterministic algorithm producing only a reasonable result with a certain probability, which is dependent on the number of iterations (see `max\_trials` parameter). It is typically used for linear and non-linear regression problems and is especially popular in the field of photogrammetric computer vision. The algorithm splits the complete input sample data into a set of inliers, which may be subject to noise, and outliers, which are e.g. caused by erroneous measurements or invalid hypotheses about the data. The resulting model is then estimated only from the determined inliers. .. figure:: ../auto\_examples/linear\_model/images/sphx\_glr\_plot\_ransac\_001.png :target: ../auto\_examples/linear\_model/plot\_ransac.html :align: center :scale: 50% .. rubric:: Examples \* :ref:`sphx\_glr\_auto\_examples\_linear\_model\_plot\_ransac.py` \* :ref:`sphx\_glr\_auto\_examples\_linear\_model\_plot\_robust\_fit.py` .. dropdown:: Details of the algorithm Each iteration performs the following steps: 1. Select ``min\_samples`` random samples from the original data and check whether the set of data is valid (see ``is\_data\_valid``). 2. 
Fit a model to the random subset (``estimator.fit``) and check whether the estimated model is valid (see ``is\_model\_valid``). 3. Classify all data as inliers or outliers by calculating the residuals to the estimated model (``estimator.predict(X) - y``) - all data samples with absolute residuals smaller than or equal to the ``residual\_threshold`` are considered as inliers. 4. Save fitted model as best model if number of inlier samples is maximal. In case the current estimated model has the same number of inliers, it is only considered as the best model if it has better score. These steps are performed either a maximum number of times (``max\_trials``) or until one of the special stop criteria are met (see ``stop\_n\_inliers`` and ``stop\_score``). The final model is estimated using all inlier samples (consensus set) of the previously determined best model. The ``is\_data\_valid`` and ``is\_model\_valid`` functions allow to identify
and reject degenerate combinations of random sub-samples. If the estimated model is not needed for identifying degenerate cases, ``is\_data\_valid`` should be used as it is called prior to fitting the model and thus leading to better computational performance. .. dropdown:: References \* https://en.wikipedia.org/wiki/RANSAC \* `"Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography" `\_ Martin A. Fischler and Robert C. Bolles - SRI International (1981) \* `"Performance Evaluation of RANSAC Family" `\_ Sunglok Choi, Taemin Kim and Wonpil Yu - BMVC (2009) .. \_theil\_sen\_regression: Theil-Sen estimator: generalized-median-based estimator -------------------------------------------------------- The :class:`TheilSenRegressor` estimator uses a generalization of the median in multiple dimensions. It is thus robust to multivariate outliers. Note however that the robustness of the estimator decreases quickly with the dimensionality of the problem. It loses its robustness properties and becomes no better than an ordinary least squares in high dimension. .. rubric:: Examples \* :ref:`sphx\_glr\_auto\_examples\_linear\_model\_plot\_theilsen.py` \* :ref:`sphx\_glr\_auto\_examples\_linear\_model\_plot\_robust\_fit.py` .. dropdown:: Theoretical considerations :class:`TheilSenRegressor` is comparable to the :ref:`Ordinary Least Squares (OLS) ` in terms of asymptotic efficiency and as an unbiased estimator. In contrast to OLS, Theil-Sen is a non-parametric method which means it makes no assumption about the underlying distribution of the data.
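To illustrate this robustness, a small sketch (synthetic data and a loose
tolerance, both chosen for illustration) showing that the slope recovered by
:class:`TheilSenRegressor` barely moves when 10% of the targets are corrupted:

```python
import numpy as np
from sklearn.linear_model import TheilSenRegressor

rng = np.random.RandomState(0)
X = rng.uniform(0, 10, size=(50, 1))
y = 2.0 * X.ravel() + 1.0 + rng.normal(scale=0.1, size=50)
y[:5] += 30.0  # corrupt 10% of the targets with large outliers

reg = TheilSenRegressor(random_state=0).fit(X, y)
# The median-based fit keeps the slope near the true value of 2,
# despite the corrupted samples.
print(abs(reg.coef_[0] - 2.0) < 0.5)
```

An ordinary least squares fit on the same data would be pulled toward the
outliers instead.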
Since Theil-Sen is a median-based estimator, it is more robust against corrupted data aka outliers. In univariate setting, Theil-Sen has a breakdown point of about 29.3% in case of a simple linear regression which means that it can tolerate arbitrary corrupted data of up to 29.3%. .. figure:: ../auto\_examples/linear\_model/images/sphx\_glr\_plot\_theilsen\_001.png :target: ../auto\_examples/linear\_model/plot\_theilsen.html :align: center :scale: 50% The implementation of :class:`TheilSenRegressor` in scikit-learn follows a generalization to a multivariate linear regression model [#f1]\_ using the spatial median which is a generalization of the median to multiple dimensions [#f2]\_. In terms of time and space complexity, Theil-Sen scales according to .. math:: \binom{n\_{\text{samples}}}{n\_{\text{subsamples}}} which makes it infeasible to be applied exhaustively to problems with a large number of samples and features. Therefore, the magnitude of a subpopulation can be chosen to limit the time and space complexity by considering only a random subset of all possible combinations. .. rubric:: References .. [#f1] Xin Dang, Hanxiang Peng, Xueqin Wang and Heping Zhang: `Theil-Sen Estimators in a Multiple Linear Regression Model. `\_ .. [#f2] T. Kärkkäinen and S. Äyrämö: `On Computation of Spatial Median for Robust Data Mining. `\_ Also see the `Wikipedia page `\_ .. \_huber\_regression: Huber Regression ---------------- The :class:`HuberRegressor` is different from :class:`Ridge` because it applies a linear loss to samples that are defined as outliers by the `epsilon` parameter. A sample is classified as an inlier if the absolute error of that sample is less than the threshold `epsilon`. It differs from :class:`TheilSenRegressor` and :class:`RANSACRegressor` because it does not ignore the effect of the outliers but gives a lesser weight to them. .. 
figure:: /auto\_examples/linear\_model/images/sphx\_glr\_plot\_huber\_vs\_ridge\_001.png :target: ../auto\_examples/linear\_model/plot\_huber\_vs\_ridge.html :align: center :scale: 50% .. rubric:: Examples \* :ref:`sphx\_glr\_auto\_examples\_linear\_model\_plot\_huber\_vs\_ridge.py` .. dropdown:: Mathematical details :class:`HuberRegressor` minimizes .. math:: \min\_{w, \sigma} {\sum\_{i=1}^n\left(\sigma + H\_{\epsilon}\left(\frac{X\_{i}w - y\_{i}}{\sigma}\right)\sigma\right) + \alpha {||w||\_2}^2} where the loss function is given by .. math:: H\_{\epsilon}(z) = \begin{cases} z^2, & \text {if } |z| < \epsilon, \\ 2\epsilon|z| - \epsilon^2, & \text{otherwise} \end{cases} It is advised to set the parameter ``epsilon`` to 1.35 to achieve 95% statistical efficiency. .. rubric:: References \* Peter J. Huber, Elvezio M. Ronchetti: Robust Statistics, Concomitant scale
estimates, p. 172.

The :class:`HuberRegressor` differs from using :class:`SGDRegressor` with loss set to `huber` in the following ways.

- :class:`HuberRegressor` is scaling invariant. Once ``epsilon`` is set, scaling ``X`` and ``y`` down or up by different values would produce the same robustness to outliers as before, as compared to :class:`SGDRegressor` where ``epsilon`` has to be set again when ``X`` and ``y`` are scaled.
- :class:`HuberRegressor` should be more efficient to use on data with a small number of samples, while :class:`SGDRegressor` needs a number of passes on the training data to produce the same robustness.

Note that this estimator is different from the `R implementation of Robust Regression `_ because the R implementation does weighted least squares with weights given to each sample on the basis of how much the residual is greater than a certain threshold.

.. _quantile_regression:

Quantile Regression
===================

Quantile regression estimates the median or other quantiles of :math:`y` conditional on :math:`X`, while ordinary least squares (OLS) estimates the conditional mean.

Quantile regression may be useful if one is interested in predicting an interval instead of point prediction. Sometimes, prediction intervals are calculated based on the assumption that prediction error is distributed normally with zero mean and constant variance. Quantile regression provides sensible prediction intervals even for errors with non-constant (but predictable) variance or non-normal distribution. ..
figure:: /auto\_examples/linear\_model/images/sphx\_glr\_plot\_quantile\_regression\_002.png :target: ../auto\_examples/linear\_model/plot\_quantile\_regression.html :align: center :scale: 50% Based on minimizing the pinball loss, conditional quantiles can also be estimated by models other than linear models. For example, :class:`~sklearn.ensemble.GradientBoostingRegressor` can predict conditional quantiles if its parameter ``loss`` is set to ``"quantile"`` and parameter ``alpha`` is set to the quantile that should be predicted. See the example in :ref:`sphx\_glr\_auto\_examples\_ensemble\_plot\_gradient\_boosting\_quantile.py`. Most implementations of quantile regression are based on linear programming problem. The current implementation is based on :func:`scipy.optimize.linprog`. .. rubric:: Examples \* :ref:`sphx\_glr\_auto\_examples\_linear\_model\_plot\_quantile\_regression.py` .. dropdown:: Mathematical details As a linear model, the :class:`QuantileRegressor` gives linear predictions :math:`\hat{y}(w, X) = Xw` for the :math:`q`-th quantile, :math:`q \in (0, 1)`. The weights or coefficients :math:`w` are then found by the following minimization problem: .. math:: \min\_{w} {\frac{1}{n\_{\text{samples}}} \sum\_i PB\_q(y\_i - X\_i w) + \alpha ||w||\_1}. This consists of the pinball loss (also known as linear loss), see also :class:`~sklearn.metrics.mean\_pinball\_loss`, .. math:: PB\_q(t) = q \max(t, 0) + (1 - q) \max(-t, 0) = \begin{cases} q t, & t > 0, \\ 0, & t = 0, \\ (q-1) t, & t < 0 \end{cases} and the L1 penalty controlled by parameter ``alpha``, similar to :class:`Lasso`. As the pinball loss is only linear in the residuals, quantile regression is much more robust to outliers than squared error based estimation of the mean. Somewhat in between is the :class:`HuberRegressor`. .. dropdown:: References \* Koenker, R., & Bassett Jr, G. (1978). `Regression quantiles. `\_ Econometrica: journal of the Econometric Society, 33-50. \* Portnoy, S., & Koenker, R. 
(1997). :doi:`The Gaussian hare and the Laplacian tortoise: computability of squared-error versus absolute-error estimators. Statistical Science, 12, 279-300 <10.1214/ss/1030037960>`. \* Koenker, R. (2005). :doi:`Quantile Regression <10.1017/CBO9780511754098>`. Cambridge University Press. .. \_polynomial\_regression: Polynomial regression: extending linear models with basis functions =================================================================== .. currentmodule:: sklearn.preprocessing One common pattern within machine learning is to use linear models trained on nonlinear functions of the data. This approach maintains the generally fast performance of linear methods, while allowing them to fit a much wider
range of data. .. dropdown:: Mathematical details For example, a simple linear regression can be extended by constructing \*\*polynomial features\*\* from the coefficients. In the standard linear regression case, you might have a model that looks like this for two-dimensional data: .. math:: \hat{y}(w, x) = w\_0 + w\_1 x\_1 + w\_2 x\_2 If we want to fit a paraboloid to the data instead of a plane, we can combine the features in second-order polynomials, so that the model looks like this: .. math:: \hat{y}(w, x) = w\_0 + w\_1 x\_1 + w\_2 x\_2 + w\_3 x\_1 x\_2 + w\_4 x\_1^2 + w\_5 x\_2^2 The (sometimes surprising) observation is that this is \*still a linear model\*: to see this, imagine creating a new set of features .. math:: z = [x\_1, x\_2, x\_1 x\_2, x\_1^2, x\_2^2] With this re-labeling of the data, our problem can be written .. math:: \hat{y}(w, z) = w\_0 + w\_1 z\_1 + w\_2 z\_2 + w\_3 z\_3 + w\_4 z\_4 + w\_5 z\_5 We see that the resulting \*polynomial regression\* is in the same class of linear models we considered above (i.e. the model is linear in :math:`w`) and can be solved by the same techniques. By considering linear fits within a higher-dimensional space built with these basis functions, the model has the flexibility to fit a much broader range of data. Here is an example of applying this idea to one-dimensional data, using polynomial features of varying degrees: ..
figure:: ../auto_examples/linear_model/images/sphx_glr_plot_polynomial_interpolation_001.png :target: ../auto_examples/linear_model/plot_polynomial_interpolation.html :align: center :scale: 50%

This figure is created using the :class:`PolynomialFeatures` transformer, which transforms an input data matrix into a new data matrix of a given degree. It can be used as follows::

    >>> from sklearn.preprocessing import PolynomialFeatures
    >>> import numpy as np
    >>> X = np.arange(6).reshape(3, 2)
    >>> X
    array([[0, 1],
           [2, 3],
           [4, 5]])
    >>> poly = PolynomialFeatures(degree=2)
    >>> poly.fit_transform(X)
    array([[ 1.,  0.,  1.,  0.,  0.,  1.],
           [ 1.,  2.,  3.,  4.,  6.,  9.],
           [ 1.,  4.,  5., 16., 20., 25.]])

The features of ``X`` have been transformed from :math:`[x_1, x_2]` to :math:`[1, x_1, x_2, x_1^2, x_1 x_2, x_2^2]`, and can now be used within any linear model.

This sort of preprocessing can be streamlined with the :ref:`Pipeline ` tools. A single object representing a simple polynomial regression can be created and used as follows::

    >>> from sklearn.preprocessing import PolynomialFeatures
    >>> from sklearn.linear_model import LinearRegression
    >>> from sklearn.pipeline import Pipeline
    >>> import numpy as np
    >>> model = Pipeline([('poly', PolynomialFeatures(degree=3)),
    ...                   ('linear', LinearRegression(fit_intercept=False))])
    >>> # fit to an order-3 polynomial data
    >>> x = np.arange(5)
    >>> y = 3 - 2 * x + x ** 2 - x ** 3
    >>> model = model.fit(x[:, np.newaxis], y)
    >>> model.named_steps['linear'].coef_
    array([ 3., -2.,  1., -1.])

The linear model trained on polynomial features is able to exactly recover the input polynomial coefficients.

In some cases it's not necessary to include higher powers of any single feature, but only the so-called *interaction features* that multiply together at most :math:`d` distinct features. These can be gotten from :class:`PolynomialFeatures` with the setting ``interaction_only=True``.
For example, when dealing with boolean features, :math:`x\_i^n = x\_i` for all :math:`n` and is therefore useless; but :math:`x\_i x\_j` represents the conjunction of two booleans. This way, we can solve the XOR problem with a linear classifier:: >>> from sklearn.linear\_model
import Perceptron
    >>> from sklearn.preprocessing import PolynomialFeatures
    >>> import numpy as np
    >>> X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    >>> y = X[:, 0] ^ X[:, 1]
    >>> y
    array([0, 1, 1, 0])
    >>> X = PolynomialFeatures(interaction_only=True).fit_transform(X).astype(int)
    >>> X
    array([[1, 0, 0, 0],
           [1, 0, 1, 0],
           [1, 1, 0, 0],
           [1, 1, 1, 1]])
    >>> clf = Perceptron(fit_intercept=False, max_iter=10, tol=None,
    ...                  shuffle=False).fit(X, y)

And the classifier "predictions" are perfect::

    >>> clf.predict(X)
    array([0, 1, 1, 0])
    >>> clf.score(X, y)
    1.0
.. _clustering:

==========
Clustering
==========

Clustering of unlabeled data can be performed with the module
:mod:`sklearn.cluster`.

Each clustering algorithm comes in two variants: a class, that implements the
``fit`` method to learn the clusters on train data, and a function, that,
given train data, returns an array of integer labels corresponding to the
different clusters. For the class, the labels over the training data can be
found in the ``labels_`` attribute.

.. currentmodule:: sklearn.cluster

.. topic:: Input data

    One important thing to note is that the algorithms implemented in this
    module can take different kinds of matrix as input. All the methods
    accept standard data matrices of shape ``(n_samples, n_features)``.
    These can be obtained from the classes in the
    :mod:`sklearn.feature_extraction` module. For
    :class:`AffinityPropagation`, :class:`SpectralClustering` and
    :class:`DBSCAN` one can also input similarity matrices of shape
    ``(n_samples, n_samples)``. These can be obtained from the functions in
    the :mod:`sklearn.metrics.pairwise` module.

Overview of clustering methods
==============================

.. figure:: ../auto_examples/cluster/images/sphx_glr_plot_cluster_comparison_001.png
   :target: ../auto_examples/cluster/plot_cluster_comparison.html
   :align: center
   :scale: 50

   A comparison of the clustering algorithms in scikit-learn

.. list-table::
   :header-rows: 1
   :widths: 14 15 19 25 20

   * - Method name
     - Parameters
     - Scalability
     - Usecase
     - Geometry (metric used)
   * - :ref:`K-Means <k_means>`
     - number of clusters
     - Very large ``n_samples``, medium ``n_clusters`` with
       :ref:`MiniBatch code <mini_batch_kmeans>`
     - General-purpose, even cluster size, flat geometry,
       not too many clusters, inductive
     - Distances between points
   * - :ref:`Affinity propagation <affinity_propagation>`
     - damping, sample preference
     - Not scalable with ``n_samples``
     - Many clusters, uneven cluster size, non-flat geometry, inductive
     - Graph distance (e.g. nearest-neighbor graph)
   * - :ref:`Mean-shift <mean_shift>`
     - bandwidth
     - Not scalable with ``n_samples``
     - Many clusters, uneven cluster size, non-flat geometry, inductive
     - Distances between points
   * - :ref:`Spectral clustering <spectral_clustering>`
     - number of clusters
     - Medium ``n_samples``, small ``n_clusters``
     - Few clusters, even cluster size, non-flat geometry, transductive
     - Graph distance (e.g. nearest-neighbor graph)
   * - :ref:`Ward hierarchical clustering <hierarchical_clustering>`
     - number of clusters or distance threshold
     - Large ``n_samples`` and ``n_clusters``
     - Many clusters, possibly connectivity constraints, transductive
     - Distances between points
   * - :ref:`Agglomerative clustering <hierarchical_clustering>`
     - number of clusters or distance threshold, linkage type, distance
     - Large ``n_samples`` and ``n_clusters``
     - Many clusters, possibly connectivity constraints, non Euclidean
       distances, transductive
     - Any pairwise distance
   * - :ref:`DBSCAN <dbscan>`
     - neighborhood size
     - Very large ``n_samples``, medium ``n_clusters``
     - Non-flat geometry, uneven cluster sizes, outlier removal,
       transductive
     - Distances between nearest points
   * - :ref:`HDBSCAN <hdbscan>`
     - minimum cluster membership, minimum point neighbors
     - large ``n_samples``, medium ``n_clusters``
     - Non-flat geometry, uneven cluster sizes, outlier removal,
       transductive, hierarchical, variable cluster density
     - Distances between nearest points
   * - :ref:`OPTICS <optics>`
     - minimum cluster membership
     - Very large ``n_samples``, large ``n_clusters``
     - Non-flat geometry, uneven cluster sizes, variable cluster density,
       outlier removal, transductive
     - Distances between points
   * - :ref:`Gaussian mixtures <mixture>`
     - many
     - Not scalable
     - Flat geometry, good for density estimation, inductive
     - Mahalanobis distances to centers
   * - :ref:`BIRCH <birch>`
     - branching factor, threshold, optional global clusterer.
     - Large ``n_clusters`` and ``n_samples``
     - Large dataset, outlier removal, data reduction, inductive
     - Euclidean distance between points
   * - :ref:`Bisecting K-Means <bisect_k_means>`
     - number of clusters
     - Very large ``n_samples``, medium ``n_clusters``
     - General-purpose, even cluster size, flat geometry,
       no empty clusters, inductive, hierarchical
     - Distances between points

Non-flat geometry clustering is useful when the clusters have a specific
shape, i.e. a non-flat manifold, and the standard euclidean distance is not
the right metric. This case arises in the two top rows of the figure above.
Gaussian mixture models, useful for clustering, are described in
:ref:`another chapter of the documentation <mixture>`
https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/clustering.rst
dedicated to mixture models. KMeans can be seen as a special case of Gaussian
mixture model with equal covariance per component.

:term:`Transductive <transductive>` clustering methods (in contrast to
:term:`inductive` clustering methods) are not designed to be applied to new,
unseen data.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_cluster_plot_inductive_clustering.py`: An
  example of an inductive clustering model for handling new data.

.. _k_means:

K-means
=======

The :class:`KMeans` algorithm clusters data by trying to separate samples in
n groups of equal variance, minimizing a criterion known as the *inertia* or
within-cluster sum-of-squares (see below). This algorithm requires the number
of clusters to be specified. It scales well to large numbers of samples and
has been used across a large range of application areas in many different
fields.

The k-means algorithm divides a set of :math:`N` samples :math:`X` into
:math:`K` disjoint clusters :math:`C`, each described by the mean
:math:`\mu_j` of the samples in the cluster. The means are commonly called
the cluster "centroids"; note that they are not, in general, points from
:math:`X`, although they live in the same space.

The K-means algorithm aims to choose centroids that minimise the **inertia**,
or **within-cluster sum-of-squares criterion**:

.. math:: \sum_{i=0}^{n}\min_{\mu_j \in C}(||x_i - \mu_j||^2)

Inertia can be recognized as a measure of how internally coherent clusters
are. It suffers from various drawbacks:

- Inertia makes the assumption that clusters are convex and isotropic, which
  is not always the case.
  It responds poorly to elongated clusters, or manifolds with irregular
  shapes.
- Inertia is not a normalized metric: we just know that lower values are
  better and zero is optimal. But in very high-dimensional spaces, Euclidean
  distances tend to become inflated (this is an instance of the so-called
  "curse of dimensionality"). Running a dimensionality reduction algorithm
  such as :ref:`PCA` prior to k-means clustering can alleviate this problem
  and speed up the computations.

.. image:: ../auto_examples/cluster/images/sphx_glr_plot_kmeans_assumptions_002.png
   :target: ../auto_examples/cluster/plot_kmeans_assumptions.html
   :align: center
   :scale: 50

For more detailed descriptions of the issues shown above and how to address
them, refer to the examples
:ref:`sphx_glr_auto_examples_cluster_plot_kmeans_assumptions.py` and
:ref:`sphx_glr_auto_examples_cluster_plot_kmeans_silhouette_analysis.py`.

K-means is often referred to as Lloyd's algorithm. In basic terms, the
algorithm has three steps. The first step chooses the initial centroids, with
the most basic method being to choose :math:`k` samples from the dataset
:math:`X`. After initialization, K-means consists of looping between the two
other steps. The first step assigns each sample to its nearest centroid. The
second step creates new centroids by taking the mean value of all of the
samples assigned to each previous centroid. The difference between the old
and the new centroids is computed and the algorithm repeats these last two
steps until this value is less than a threshold. In other words, it repeats
until the centroids do not move significantly.

.. image:: ../auto_examples/cluster/images/sphx_glr_plot_kmeans_digits_001.png
   :target: ../auto_examples/cluster/plot_kmeans_digits.html
   :align: right
   :scale: 35

K-means is equivalent to the expectation-maximization algorithm with a small,
all-equal, diagonal covariance matrix.
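The three steps above can be sketched in plain NumPy. This is an illustrative toy implementation only (the function name and defaults are hypothetical, not scikit-learn's optimized code):

```python
import numpy as np

def lloyd_kmeans(X, k, n_iter=100, tol=1e-4, seed=0):
    """Toy Lloyd's algorithm: initialize, assign, update, repeat."""
    rng = np.random.default_rng(seed)
    # Step 1: choose k samples from X as the initial centroids.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Step 2: assign each sample to its nearest centroid.
        distances = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = distances.argmin(axis=1)
        # Step 3: move each centroid to the mean of its assigned samples.
        new_centroids = np.array(
            [X[labels == j].mean(axis=0) for j in range(k)])
        # Stop once the centroids no longer move significantly.
        shift = np.linalg.norm(new_centroids - centroids)
        centroids = new_centroids
        if shift < tol:
            break
    return centroids, labels
```

Empty clusters are not handled here (the mean of an empty cluster is undefined); scikit-learn's :class:`KMeans` deals with this, and with smarter initialization such as k-means++.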
The algorithm can also be understood through the concept of Voronoi diagrams.
First the Voronoi diagram of the points is calculated using the current
centroids. Each segment in the Voronoi diagram becomes a separate cluster.
Secondly, the centroids are updated to the mean of each segment. The
algorithm then repeats this until a stopping criterion is fulfilled. Usually,
the algorithm stops when the relative decrease in the objective function
between iterations is less than the
given tolerance value. This is not the case in this implementation:
iteration stops when centroids move less than the tolerance.

Given enough time, K-means will always converge, however this may be to a
local minimum. This is highly dependent on the initialization of the
centroids. As a result, the computation is often done several times, with
different initializations of the centroids. One method to help address this
issue is the k-means++ initialization scheme, which has been implemented in
scikit-learn (use the ``init='k-means++'`` parameter). This initializes the
centroids to be (generally) distant from each other, leading to probably
better results than random initialization, as shown in the reference. For
detailed examples of comparing different initialization schemes, refer to
:ref:`sphx_glr_auto_examples_cluster_plot_kmeans_digits.py` and
:ref:`sphx_glr_auto_examples_cluster_plot_kmeans_stability_low_dim_dense.py`.

K-means++ can also be called independently to select seeds for other
clustering algorithms, see :func:`sklearn.cluster.kmeans_plusplus` for
details and example usage.

The algorithm supports sample weights, which can be given by a parameter
``sample_weight``. This allows to assign more weight to some samples when
computing cluster centers and values of inertia. For example, assigning a
weight of 2 to a sample is equivalent to adding a duplicate of that sample to
the dataset :math:`X`.
.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_text_plot_document_clustering.py`: Document
  clustering using :class:`KMeans` and :class:`MiniBatchKMeans` based on
  sparse data
* :ref:`sphx_glr_auto_examples_cluster_plot_kmeans_plusplus.py`: Using
  K-means++ to select seeds for other clustering algorithms.

Low-level parallelism
---------------------

:class:`KMeans` benefits from OpenMP based parallelism through Cython. Small
chunks of data (256 samples) are processed in parallel, which in addition
yields a low memory footprint. For more details on how to control the number
of threads, please refer to our :ref:`parallelism` notes.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_cluster_plot_kmeans_assumptions.py`:
  Demonstrating when k-means performs intuitively and when it does not
* :ref:`sphx_glr_auto_examples_cluster_plot_kmeans_digits.py`: Clustering
  handwritten digits

.. dropdown:: References

  * "k-means++: The advantages of careful seeding", Arthur, David, and
    Sergei Vassilvitskii, *Proceedings of the eighteenth annual ACM-SIAM
    symposium on Discrete algorithms*, Society for Industrial and Applied
    Mathematics (2007)

.. _mini_batch_kmeans:

Mini Batch K-Means
------------------

The :class:`MiniBatchKMeans` is a variant of the :class:`KMeans` algorithm
which uses mini-batches to reduce the computation time, while still
attempting to optimise the same objective function. Mini-batches are subsets
of the input data, randomly sampled in each training iteration. These
mini-batches drastically reduce the amount of computation required to
converge to a local solution. In contrast to other algorithms that reduce the
convergence time of k-means, mini-batch k-means produces results that are
generally only slightly worse than the standard algorithm.

The algorithm iterates between two major steps, similar to vanilla k-means.
In the first step, :math:`b` samples are drawn randomly from the dataset, to
form a mini-batch.
These are then assigned to the nearest centroid. In the second step, the
centroids are updated. In contrast to k-means, this is done on a per-sample
basis. For each sample in the mini-batch, the assigned centroid is updated by
taking the streaming average of the sample and all previous samples assigned
to that centroid. This has the effect of decreasing the rate of change for a
centroid over time. These steps are performed until convergence or a
predetermined number of iterations is reached.

:class:`MiniBatchKMeans` converges faster than :class:`KMeans`, but the
quality of the results is reduced. In practice this difference in quality can
be quite small, as shown in the example and cited reference.

.. figure:: ../auto_examples/cluster/images/sphx_glr_plot_mini_batch_kmeans_001.png
   :target: ../auto_examples/cluster/plot_mini_batch_kmeans.html
   :align: center
   :scale: 100

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_cluster_plot_mini_batch_kmeans.py`:
  Comparison of :class:`KMeans` and :class:`MiniBatchKMeans`
* :ref:`sphx_glr_auto_examples_text_plot_document_clustering.py`: Document
  clustering using :class:`KMeans` and :class:`MiniBatchKMeans` based on
  sparse data
* :ref:`sphx_glr_auto_examples_cluster_plot_dict_face_patches.py`

.. dropdown:: References

  * "Web Scale K-Means clustering", D. Sculley, *Proceedings of the 19th
    international conference on World wide web* (2010).

.. _affinity_propagation:

Affinity Propagation
====================

:class:`AffinityPropagation` creates clusters by sending messages between
pairs of samples until convergence. A dataset is then described using a small
number of exemplars, which are identified as those most representative of
other samples. The messages sent between pairs represent the suitability for
one sample to be the exemplar of the other, which is updated in response to
the values from other pairs. This updating happens iteratively until
convergence, at which point the final exemplars are chosen, and hence the
final clustering is given.

.. figure:: ../auto_examples/cluster/images/sphx_glr_plot_affinity_propagation_001.png
   :target: ../auto_examples/cluster/plot_affinity_propagation.html
   :align: center
   :scale: 50

Affinity Propagation can be interesting as it chooses the number of clusters
based on the data provided.
For this purpose, the two important parameters are the *preference*, which
controls how many exemplars are used, and the *damping factor* which damps
the responsibility and availability messages to avoid numerical oscillations
when updating these messages.

The main drawback of Affinity Propagation is its complexity. The algorithm
has a time complexity of the order :math:`O(N^2 T)`, where :math:`N` is the
number of samples and :math:`T` is the number of iterations until
convergence. Further, the memory complexity is of the order :math:`O(N^2)` if
a dense similarity matrix is used, but reducible if a sparse similarity
matrix is used. This makes Affinity Propagation most appropriate for small to
medium sized datasets.

.. dropdown:: Algorithm description

  The messages sent between points belong to one of two categories. The first
  is the responsibility :math:`r(i, k)`, which is the accumulated evidence
  that sample :math:`k` should be the exemplar for sample :math:`i`. The
  second is the availability :math:`a(i, k)` which is the accumulated
  evidence that sample :math:`i` should choose sample :math:`k` to be its
  exemplar, and considers the values for all other samples that :math:`k`
  should be an exemplar. In this way, exemplars are chosen by samples if they
  are (1) similar enough to many samples and (2) chosen by many samples to be
  representative of themselves.

  More formally, the responsibility of a sample :math:`k` to be the exemplar
  of sample :math:`i` is given by:

  .. math:: r(i, k) \leftarrow s(i, k) - max [ a(i, k') + s(i, k') \forall k' \neq k ]

  Where :math:`s(i, k)` is the similarity between samples :math:`i` and
  :math:`k`. The availability of sample :math:`k` to be the exemplar of
  sample :math:`i` is given by:

  .. math:: a(i, k) \leftarrow min [0, r(k, k) + \sum_{i'~s.t.~i' \notin \{i, k\}}{r(i', k)}]

  To begin with, all values for :math:`r` and :math:`a` are set to zero, and
  the calculation of each iterates until convergence.
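  The two update rules above can be transcribed almost literally into NumPy. The sketch below is a hypothetical, unvectorized illustration that follows the simplified formulas given here; the full algorithm additionally clips the summed responsibilities at zero and uses a separate update for the self-availability :math:`a(k, k)`, and scikit-learn's implementation is vectorized:

  ```python
  import numpy as np

  def update_messages(S, R, A):
      """One unvectorized pass over the responsibility/availability updates."""
      n = S.shape[0]
      R_new = np.empty_like(R)
      A_new = np.zeros_like(A)  # a(k, k) is left at zero in this sketch
      for i in range(n):
          for k in range(n):
              # r(i, k) <- s(i, k) - max over k' != k of [a(i, k') + s(i, k')]
              mask = np.arange(n) != k
              R_new[i, k] = S[i, k] - np.max(A[i, mask] + S[i, mask])
      for i in range(n):
          for k in range(n):
              if i == k:
                  continue
              # a(i, k) <- min(0, r(k, k) + sum over i' not in {i, k} of r(i', k))
              others = np.setdiff1d(np.arange(n), [i, k])
              A_new[i, k] = min(0.0, R_new[k, k] + R_new[others, k].sum())
      return R_new, A_new
  ```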
  As discussed above, in order to avoid numerical oscillations when updating
  the messages, the damping factor :math:`\lambda` is introduced to the
  iteration process:

  .. math:: r_{t+1}(i, k) = \lambda\cdot r_{t}(i, k) + (1-\lambda)\cdot r_{t+1}(i, k)

  .. math:: a_{t+1}(i, k) = \lambda\cdot a_{t}(i, k) + (1-\lambda)\cdot a_{t+1}(i, k)

  where :math:`t` indicates the iteration times.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_cluster_plot_affinity_propagation.py`:
  Affinity Propagation on a synthetic 2D datasets with 3 classes
* :ref:`sphx_glr_auto_examples_applications_plot_stock_market.py`: Affinity
  Propagation on financial time series
  to find groups of companies

.. _mean_shift:

Mean Shift
==========

:class:`MeanShift` clustering aims to discover *blobs* in a smooth density of
samples. It is a centroid based algorithm, which works by updating candidates
for centroids to be the mean of the points within a given region. These
candidates are then filtered in a post-processing stage to eliminate
near-duplicates to form the final set of centroids.

.. dropdown:: Mathematical details

  The position of centroid candidates is iteratively adjusted using a
  technique called hill climbing, which finds local maxima of the estimated
  probability density. Given a candidate centroid :math:`x` for iteration
  :math:`t`, the candidate is updated according to the following equation:

  .. math:: x^{t+1} = x^t + m(x^t)

  Where :math:`m` is the *mean shift* vector that is computed for each
  centroid that points towards a region of the maximum increase in the
  density of points. To compute :math:`m` we define :math:`N(x)` as the
  neighborhood of samples within a given distance around :math:`x`. Then
  :math:`m` is computed using the following equation, effectively updating a
  centroid to be the mean of the samples within its neighborhood:

  .. math:: m(x) = \frac{1}{|N(x)|} \sum_{x_j \in N(x)}x_j - x

  In general, the equation for :math:`m` depends on a kernel used for density
  estimation. The generic formula is:
  .. math:: m(x) = \frac{\sum_{x_j \in N(x)}K(x_j - x)x_j}{\sum_{x_j \in N(x)}K(x_j - x)} - x

  In our implementation, :math:`K(x)` is equal to 1 if :math:`x` is small
  enough and is equal to 0 otherwise. Effectively :math:`K(y - x)` indicates
  whether :math:`y` is in the neighborhood of :math:`x`.

The algorithm automatically sets the number of clusters, instead of relying
on a parameter ``bandwidth``, which dictates the size of the region to search
through. This parameter can be set manually, but can be estimated using the
provided ``estimate_bandwidth`` function, which is called if the bandwidth is
not set.

The algorithm is not highly scalable, as it requires multiple nearest
neighbor searches during the execution of the algorithm. The algorithm is
guaranteed to converge, however the algorithm will stop iterating when the
change in centroids is small.

Labelling a new sample is performed by finding the nearest centroid for a
given sample.

.. figure:: ../auto_examples/cluster/images/sphx_glr_plot_mean_shift_001.png
   :target: ../auto_examples/cluster/plot_mean_shift.html
   :align: center
   :scale: 50

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_cluster_plot_mean_shift.py`: Mean Shift
  clustering on a synthetic 2D datasets with 3 classes.

.. dropdown:: References

  * :doi:`"Mean shift: A robust approach toward feature space analysis"
    <10.1109/34.1000236>` D. Comaniciu and P. Meer, *IEEE Transactions on
    Pattern Analysis and Machine Intelligence* (2002)

.. _spectral_clustering:

Spectral clustering
===================

:class:`SpectralClustering` performs a low-dimension embedding of the
affinity matrix between samples, followed by clustering, e.g., by KMeans, of
the components of the eigenvectors in the low dimensional space. It is
especially computationally efficient if the affinity matrix is sparse and the
`amg` solver is used for the eigenvalue problem (Note, the `amg` solver
requires that the `pyamg` module is installed.)
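A minimal usage sketch (the data and parameter values here are illustrative; ``affinity="nearest_neighbors"`` builds a sparse k-nearest-neighbor affinity graph):

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.datasets import make_moons

# Two interleaved half-moons: a classic non-flat geometry where
# distance-based methods such as k-means struggle but spectral
# clustering does well.
X, _ = make_moons(n_samples=200, noise=0.05, random_state=0)

# A sparse nearest-neighbor affinity keeps the eigenproblem cheap.
sc = SpectralClustering(n_clusters=2, affinity="nearest_neighbors",
                        n_neighbors=10, random_state=0)
labels = sc.fit_predict(X)
```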
The present version of SpectralClustering requires the number of clusters to
be specified in advance. It works well for a small number of clusters, but is
not advised for many clusters.

For two clusters, SpectralClustering solves a convex relaxation of the
normalized cuts problem on the similarity graph: cutting the graph in two so
that the weight of the edges cut is small compared to the weights of the
edges inside
each cluster. This criterion is especially interesting when working on
images, where graph vertices are pixels, and weights of the edges of the
similarity graph are computed using a function of a gradient of the image.

.. |noisy_img| image:: ../auto_examples/cluster/images/sphx_glr_plot_segmentation_toy_001.png
   :target: ../auto_examples/cluster/plot_segmentation_toy.html
   :scale: 50

.. |segmented_img| image:: ../auto_examples/cluster/images/sphx_glr_plot_segmentation_toy_002.png
   :target: ../auto_examples/cluster/plot_segmentation_toy.html
   :scale: 50

.. centered:: |noisy_img| |segmented_img|

.. warning:: Transforming distance to well-behaved similarities

    Note that if the values of your similarity matrix are not well
    distributed, e.g. with negative values or with a distance matrix rather
    than a similarity, the spectral problem will be singular and the problem
    not solvable. In which case it is advised to apply a transformation to
    the entries of the matrix. For instance, in the case of a signed distance
    matrix, it is common to apply a heat kernel::

        similarity = np.exp(-beta * distance / distance.std())

    See the examples for such an application.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_cluster_plot_segmentation_toy.py`: Segmenting
  objects from a noisy background using spectral clustering.
* :ref:`sphx_glr_auto_examples_cluster_plot_coin_segmentation.py`: Spectral
  clustering to split the image of coins in regions.

.. |coin_kmeans| image:: ../auto_examples/cluster/images/sphx_glr_plot_coin_segmentation_001.png
   :target: ../auto_examples/cluster/plot_coin_segmentation.html
   :scale: 35
.. |coin_discretize| image:: ../auto_examples/cluster/images/sphx_glr_plot_coin_segmentation_002.png
   :target: ../auto_examples/cluster/plot_coin_segmentation.html
   :scale: 35

.. |coin_cluster_qr| image:: ../auto_examples/cluster/images/sphx_glr_plot_coin_segmentation_003.png
   :target: ../auto_examples/cluster/plot_coin_segmentation.html
   :scale: 35

Different label assignment strategies
-------------------------------------

Different label assignment strategies can be used, corresponding to the
``assign_labels`` parameter of :class:`SpectralClustering`. ``"kmeans"``
strategy can match finer details, but can be unstable. In particular, unless
you control the ``random_state``, it may not be reproducible from run-to-run,
as it depends on random initialization. The alternative ``"discretize"``
strategy is 100% reproducible, but tends to create parcels of fairly even and
geometrical shape. The recently added ``"cluster_qr"`` option is a
deterministic alternative that tends to create the visually best partitioning
on the example application below.

================================ ================================ ================================
``assign_labels="kmeans"``       ``assign_labels="discretize"``   ``assign_labels="cluster_qr"``
================================ ================================ ================================
|coin_kmeans|                    |coin_discretize|                |coin_cluster_qr|
================================ ================================ ================================

.. dropdown:: References

  * "Multiclass spectral clustering", Stella X. Yu, Jianbo Shi, 2003
  * :doi:`"Simple, direct, and efficient multi-way spectral clustering"
    <10.1093/imaiai/iay008>` Anil Damle, Victor Minden, Lexing Ying, 2019

.. _spectral_clustering_graph:

Spectral Clustering Graphs
--------------------------

Spectral Clustering can also be used to partition graphs via their spectral
embeddings.
In this case, the affinity matrix is the adjacency matrix of the graph, and
SpectralClustering is initialized with `affinity='precomputed'`::

    >>> from sklearn.cluster import SpectralClustering
    >>> sc = SpectralClustering(3, affinity='precomputed', n_init=100,
    ...                         assign_labels='discretize')
    >>> sc.fit_predict(adjacency_matrix)  # doctest: +SKIP

.. dropdown:: References

  * :doi:`"A Tutorial on Spectral Clustering" <10.1007/s11222-007-9033-z>`
    Ulrike von Luxburg, 2007
  * :doi:`"Normalized cuts and image segmentation" <10.1109/34.868688>`
    Jianbo Shi, Jitendra Malik, 2000
  * "A Random Walks View of Spectral Segmentation", Marina Meila, Jianbo
    Shi, 2001
  * "On Spectral Clustering: Analysis and an algorithm", Andrew Y. Ng,
    Michael I. Jordan, Yair Weiss, 2001
  * :arxiv:`"Preconditioned Spectral Clustering for Stochastic Block
    Partition Streaming Graph Challenge" <1708.07481>` David Zhuzhunashvili,
    Andrew Knyazev

.. _hierarchical_clustering:

Hierarchical clustering
=======================

Hierarchical clustering is a general family of clustering algorithms that
build nested clusters by merging or splitting them successively. This
hierarchy of clusters is represented as a tree (or dendrogram). The root of
the tree is the unique cluster that gathers all the samples, the leaves being
the clusters with only one sample. See the Wikipedia page for more details.

The :class:`AgglomerativeClustering` object performs a hierarchical
clustering using a bottom up approach: each observation starts in its own
cluster, and clusters are successively merged
together. The linkage criterion determines the metric used for the merge
strategy:

- **Ward** minimizes the sum of squared differences within all clusters. It
  is a variance-minimizing approach and in this sense is similar to the
  k-means objective function but tackled with an agglomerative hierarchical
  approach.
- **Maximum** or **complete linkage** minimizes the maximum distance between
  observations of pairs of clusters.
- **Average linkage** minimizes the average of the distances between all
  observations of pairs of clusters.
- **Single linkage** minimizes the distance between the closest observations
  of pairs of clusters.

:class:`AgglomerativeClustering` can also scale to large number of samples
when it is used jointly with a connectivity matrix, but is computationally
expensive when no connectivity constraints are added between samples: it
considers at each step all the possible merges.

.. topic:: :class:`FeatureAgglomeration`

    The :class:`FeatureAgglomeration` uses agglomerative clustering to group
    together features that look very similar, thus decreasing the number of
    features. It is a dimensionality reduction tool, see
    :ref:`data_reduction`.

Different linkage type: Ward, complete, average, and single linkage
-------------------------------------------------------------------

:class:`AgglomerativeClustering` supports Ward, single, average, and complete
linkage strategies.
.. image:: ../auto_examples/cluster/images/sphx_glr_plot_linkage_comparison_001.png
   :target: ../auto_examples/cluster/plot_linkage_comparison.html
   :scale: 43

Agglomerative clustering has a "rich get richer" behavior that leads to
uneven cluster sizes. In this regard, single linkage is the worst strategy,
and Ward gives the most regular sizes. However, the affinity (or distance
used in clustering) cannot be varied with Ward, thus for non Euclidean
metrics, average linkage is a good alternative. Single linkage, while not
robust to noisy data, can be computed very efficiently and can therefore be
useful to provide hierarchical clustering of larger datasets. Single linkage
can also perform well on non-globular data.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_cluster_plot_digits_linkage.py`: exploration
  of the different linkage strategies in a real dataset.
* :ref:`sphx_glr_auto_examples_cluster_plot_linkage_comparison.py`:
  exploration of the different linkage strategies in toy datasets.

Visualization of cluster hierarchy
----------------------------------

It's possible to visualize the tree representing the hierarchical merging of
clusters as a dendrogram. Visual inspection can often be useful for
understanding the structure of the data, though more so in the case of small
sample sizes.

.. image:: ../auto_examples/cluster/images/sphx_glr_plot_agglomerative_dendrogram_001.png
   :target: ../auto_examples/cluster/plot_agglomerative_dendrogram.html
   :scale: 42

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_cluster_plot_agglomerative_dendrogram.py`

Adding connectivity constraints
-------------------------------

An interesting aspect of :class:`AgglomerativeClustering` is that
connectivity constraints can be added to this algorithm (only adjacent
clusters can be merged together), through a connectivity matrix that defines
for each sample the neighboring samples following a given structure of the
data.
For instance, in the Swiss-roll example below, the connectivity constraints
forbid the merging of points that are not adjacent on the Swiss roll, and thus
avoid forming clusters that extend across overlapping folds of the roll.

.. |unstructured| image:: ../auto_examples/cluster/images/sphx_glr_plot_ward_structured_vs_unstructured_001.png
   :target: ../auto_examples/cluster/plot_ward_structured_vs_unstructured.html
   :scale: 49

.. |structured| image:: ../auto_examples/cluster/images/sphx_glr_plot_ward_structured_vs_unstructured_002.png
   :target: ../auto_examples/cluster/plot_ward_structured_vs_unstructured.html
   :scale: 49

.. centered:: |unstructured| |structured|

These constraints are not only useful to impose a certain local structure, but
they also make the algorithm faster, especially when the number of samples is
high.

The connectivity constraints are imposed via a connectivity matrix: a scipy
sparse matrix that has elements only at the intersection of a row and a column
with indices of the dataset that should be connected. This matrix can be
constructed from a-priori information: for instance, you may wish to cluster
web pages by only merging pages with a link pointing from one to another. It can
also be learned from the data, for instance using
:func:`sklearn.neighbors.kneighbors_graph` to restrict merging to nearest
neighbors as in :ref:`this example `, or using
:func:`sklearn.feature_extraction.image.grid_to_graph` to enable only merging
of neighboring pixels on an image, as in the :ref:`coin ` example.

.. warning:: **Connectivity constraints with single, average and complete linkage**

   Connectivity constraints and single, complete or average linkage can
   enhance the 'rich getting richer' aspect of agglomerative clustering,
   particularly so if they are built with
   :func:`sklearn.neighbors.kneighbors_graph`. In the limit of a small number
   of clusters, they tend to give a few macroscopically occupied clusters and
   almost empty ones (see the discussion in
   :ref:`sphx_glr_auto_examples_cluster_plot_ward_structured_vs_unstructured.py`).
   Single linkage is the most brittle linkage option with regard to this issue.

.. image:: ../auto_examples/cluster/images/sphx_glr_plot_ward_structured_vs_unstructured_003.png
   :target: ../auto_examples/cluster/plot_ward_structured_vs_unstructured.html
   :scale: 38

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_cluster_plot_coin_ward_segmentation.py`: Ward
  clustering to split the image of coins in regions.
* :ref:`sphx_glr_auto_examples_cluster_plot_ward_structured_vs_unstructured.py`:
  Example of the Ward algorithm on a Swiss roll, comparison of structured
  approaches versus unstructured approaches.
* :ref:`sphx_glr_auto_examples_cluster_plot_feature_agglomeration_vs_univariate_selection.py`:
  Example of dimensionality reduction with feature agglomeration based on Ward
  hierarchical clustering.
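As a minimal sketch of the k-nearest-neighbors workflow described above (the
toy data and the choices of ``n_neighbors=5`` and ``n_clusters=4`` are
illustrative assumptions, not recommendations):

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.neighbors import kneighbors_graph

# Illustrative toy data: 100 points along a noisy circle.
rng = np.random.RandomState(0)
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
X = np.column_stack([np.cos(t), np.sin(t)]) + 0.05 * rng.randn(100, 2)

# Sparse connectivity matrix: merges are restricted to each
# sample's 5 nearest neighbors.
connectivity = kneighbors_graph(X, n_neighbors=5, include_self=False)

model = AgglomerativeClustering(
    n_clusters=4, connectivity=connectivity, linkage="ward"
)
labels = model.fit_predict(X)
```

The k-NN graph is directed; the estimator symmetrizes it internally and will
warn if the resulting graph is not fully connected.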
Varying the metric
------------------

Single, average and complete linkage can be used with a variety of distances
(or affinities), in particular Euclidean distance (*l2*), Manhattan distance
(or Cityblock, or *l1*), cosine distance, or any precomputed affinity matrix.

* *l1* distance is often good for sparse features, or sparse noise: i.e.,
  many of the features are zero, as in text mining using occurrences of rare
  words.
* *cosine* distance is interesting because it is invariant to global scalings
  of the signal.

The guideline for choosing a metric is to use one that maximizes the distance
between samples in different classes, and minimizes that within each class.

.. image:: ../auto_examples/cluster/images/sphx_glr_plot_agglomerative_clustering_metrics_005.png
   :target: ../auto_examples/cluster/plot_agglomerative_clustering_metrics.html
   :scale: 32

.. image:: ../auto_examples/cluster/images/sphx_glr_plot_agglomerative_clustering_metrics_006.png
   :target: ../auto_examples/cluster/plot_agglomerative_clustering_metrics.html
   :scale: 32

.. image:: ../auto_examples/cluster/images/sphx_glr_plot_agglomerative_clustering_metrics_007.png
   :target: ../auto_examples/cluster/plot_agglomerative_clustering_metrics.html
   :scale: 32

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_cluster_plot_agglomerative_clustering_metrics.py`

.. _bisect_k_means:

Bisecting K-Means
-----------------

The :class:`BisectingKMeans` is an iterative variant of :class:`KMeans`, using
divisive hierarchical clustering. Instead of creating all centroids at once,
centroids are picked progressively based on a previous clustering: a cluster
is split into two new clusters repeatedly until the target number of clusters
is reached.

:class:`BisectingKMeans` is more efficient than :class:`KMeans` when the
number of clusters is large, since it only works on a subset of the data at
each bisection, while :class:`KMeans` always works on the entire dataset.
Although :class:`BisectingKMeans` can't benefit from the advantages of the
`"k-means++"` initialization by design, it will still produce comparable
results to `KMeans(init="k-means++")` in terms of inertia at cheaper
computational costs, and will likely produce better results than `KMeans`
with a random initialization.

This variant is more efficient than agglomerative clustering if the number of
clusters is small compared to the number of data points. It also does not
produce empty clusters.

There exist two strategies for selecting the cluster to split:

- ``bisecting_strategy="largest_cluster"`` selects the cluster having the
  most points
- ``bisecting_strategy="biggest_inertia"`` selects the cluster with biggest
  inertia (cluster with biggest Sum of Squared Errors within)

Picking by largest amount of data points in most cases produces a result as
accurate as picking by inertia and is faster (especially for larger amounts
of data points, where calculating the error may be costly). Picking by
largest amount of data points will also likely produce clusters of similar
sizes, while `KMeans` is known to produce clusters of different sizes.
Difference between
Bisecting K-Means and regular K-Means can be seen in the example
:ref:`sphx_glr_auto_examples_cluster_plot_bisect_kmeans.py`. While the regular
K-Means algorithm tends to create non-related clusters, clusters from
Bisecting K-Means are well ordered and create quite a visible hierarchy.

.. dropdown:: References

   * `"A Comparison of Document Clustering Techniques" `_ Michael Steinbach,
     George Karypis and Vipin Kumar, Department of Computer Science and
     Engineering, University of Minnesota (June 2000)
   * `"Performance Analysis of K-Means and Bisecting K-Means Algorithms in
     Weblog Data" `_ K. Abirami and Dr. P. Mayilvahanan, International
     Journal of Emerging Technologies in Engineering Research (IJETER),
     Volume 4, Issue 8 (August 2016)
   * `"Bisecting K-means Algorithm Based on K-valued Self-determining and
     Clustering Center Optimization" `_ Jian Di and Xinyue Gou, School of
     Control and Computer Engineering, North China Electric Power University,
     Baoding, Hebei, China (August 2017)

.. _dbscan:

DBSCAN
======

The :class:`DBSCAN` algorithm views clusters as areas of high density
separated by areas of low density. Due to this rather generic view, clusters
found by DBSCAN can be any shape, as opposed to k-means which assumes that
clusters are convex shaped. The central component of DBSCAN is the concept of
*core samples*, which are samples that are in areas of high density. A cluster
is therefore a set of core samples, each close to each other (measured by some
distance measure), and a set of non-core samples that are close to a core
sample (but are not themselves core samples).
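As a minimal sketch of these concepts (the two-moons data and the values of
``eps`` and ``min_samples`` are illustrative assumptions, not tuned
recommendations):

```python
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons

# Two interleaving half-circles: non-convex clusters that k-means
# would split incorrectly.
X, _ = make_moons(n_samples=200, noise=0.05, random_state=0)

db = DBSCAN(eps=0.2, min_samples=5).fit(X)

# labels_ holds one cluster id per sample; -1 marks outliers ("noise").
# core_sample_indices_ lists the samples found to be core samples.
n_clusters = len(set(db.labels_)) - (1 if -1 in db.labels_ else 0)
print(n_clusters)
```

Because density is evaluated locally, each moon forms its own cluster even
though neither is convex.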
There are two parameters to the algorithm, ``min_samples`` and ``eps``, which
define formally what we mean when we say *dense*. Higher ``min_samples`` or
lower ``eps`` indicate the higher density necessary to form a cluster.

More formally, we define a core sample as being a sample in the dataset such
that there exist ``min_samples`` other samples within a distance of ``eps``,
which are defined as *neighbors* of the core sample. This tells us that the
core sample is in a dense area of the vector space. A cluster is a set of
core samples that can be built by recursively taking a core sample, finding
all of its neighbors that are core samples, finding all of *their* neighbors
that are core samples, and so on. A cluster also has a set of non-core
samples, which are samples that are neighbors of a core sample in the cluster
but are not themselves core samples. Intuitively, these samples are on the
fringes of a cluster.

Any core sample is part of a cluster, by definition. Any sample that is not a
core sample, and is at least ``eps`` in distance from any core sample, is
considered an outlier by the algorithm.

While the parameter ``min_samples`` primarily controls how tolerant the
algorithm is towards noise (on noisy and large data sets it may be desirable
to increase this parameter), the parameter ``eps`` is *crucial to choose
appropriately* for the data set and distance function, and usually cannot be
left at the default value. It controls the local neighborhood of the points.
When chosen too small, most data will not be clustered at all (and labeled as
``-1`` for "noise"). When chosen too large, it causes close clusters to be
merged into one cluster, and eventually the entire data set to be returned as
a single cluster. Some heuristics for choosing this parameter have been
discussed in the literature, for example based on a knee