sub-problem while now all of them are recorded. :pr:`21998` by :user:`Olivier Grisel`.

- |Fix| The property `family` of :class:`linear_model.TweedieRegressor` is not validated in `__init__` anymore. Instead, this (private) property is deprecated in :class:`linear_model.GammaRegressor`, :class:`linear_model.PoissonRegressor` and :class:`linear_model.TweedieRegressor`, and will be removed in 1.3. :pr:`22548` by :user:`Christian Lorentzen`.

- |Fix| The `coef_` and `intercept_` attributes of :class:`linear_model.LinearRegression` are now correctly computed in the presence of sample weights when the input is sparse. :pr:`22891` by :user:`Jérémie du Boisberranger`.

- |Fix| The `coef_` and `intercept_` attributes of :class:`linear_model.Ridge` with `solver="sparse_cg"` and `solver="lbfgs"` are now correctly computed in the presence of sample weights when the input is sparse. :pr:`22899` by :user:`Jérémie du Boisberranger`.

- |Fix| :class:`linear_model.SGDRegressor` and :class:`linear_model.SGDClassifier` now compute the validation error correctly when early stopping is enabled. :pr:`23256` by :user:`Zhehao Liu`.

- |API| :class:`linear_model.LassoLarsIC` now exposes `noise_variance` as a parameter in order to provide an estimate of the noise variance. This is particularly relevant when `n_features > n_samples` and the estimator of the noise variance cannot be computed. :pr:`21481` by :user:`Guillaume Lemaitre`.

:mod:`sklearn.manifold`
................................

- |Feature| :class:`manifold.Isomap` now supports radius-based neighbors via the `radius` argument. :pr:`19794` by :user:`Zhehao Liu`.

- |Enhancement| :func:`manifold.spectral_embedding` and :class:`manifold.SpectralEmbedding` support `np.float32` dtype and will preserve this dtype. :pr:`21534` by :user:`Andrew Knyazev`.

- |Enhancement| Adds :term:`get_feature_names_out` to :class:`manifold.Isomap` and :class:`manifold.LocallyLinearEmbedding`. :pr:`22254` by `Thomas Fan`_.
- |Enhancement| Added `metric_params` to the :class:`manifold.TSNE` constructor for additional parameters of the distance metric to use in optimization. :pr:`21805` by :user:`Jeanne Dionisi` and :pr:`22685` by :user:`Meekail Zain`.

- |Enhancement| :func:`manifold.trustworthiness` raises an error if `n_neighbors >= n_samples / 2` to ensure a correct support for the function. :pr:`18832` by :user:`Hong Shao Yang` and :pr:`23033` by :user:`Meekail Zain`.

- |Fix| :func:`manifold.spectral_embedding` now uses Gaussian instead of the previous uniform-on-[0, 1] random initial approximations to eigenvectors in eigen solvers `lobpcg` and `amg` to improve their numerical stability. :pr:`21565` by :user:`Andrew Knyazev`.

:mod:`sklearn.metrics`
................................

- |Feature| :func:`metrics.r2_score` and :func:`metrics.explained_variance_score` have a new `force_finite` parameter. Setting this parameter to `False` will return the actual non-finite score in case of perfect predictions or constant `y_true`, instead of the finite approximation (`1.0` and `0.0` respectively) currently returned by default. :pr:`17266` by :user:`Sylvain Marié`.

- |Feature| :func:`metrics.d2_pinball_score` and :func:`metrics.d2_absolute_error_score` calculate the :math:`D^2` regression score for the pinball loss and the absolute error respectively. :func:`metrics.d2_absolute_error_score` is a special case of :func:`metrics.d2_pinball_score` with a fixed quantile parameter `alpha=0.5` for ease of use and discovery. The :math:`D^2` scores are generalizations of the `r2_score` and can be interpreted as the fraction of deviance explained. :pr:`22118` by :user:`Ohad Michel`.

- |Enhancement| :func:`metrics.top_k_accuracy_score` raises an improved error message when `y_true` is binary and `y_score` is 2d. :pr:`22284` by `Thomas Fan`_.
- |Enhancement| :func:`metrics.roc_auc_score` now supports ``average=None`` in the multiclass case when ``multi_class='ovr'``, which will return the score per class. :pr:`19158` by :user:`Nicki Skafte`.

- |Enhancement| Adds the `im_kw` parameter to :meth:`metrics.ConfusionMatrixDisplay.from_estimator`, :meth:`metrics.ConfusionMatrixDisplay.from_predictions`, and :meth:`metrics.ConfusionMatrixDisplay.plot`. The `im_kw` parameter is passed to the `matplotlib.pyplot.imshow` call when plotting the confusion matrix. :pr:`20753` by `Thomas Fan`_.

- |Fix| :func:`metrics.silhouette_score` now supports integer input for precomputed distances. :pr:`22108` by `Thomas Fan`_.

- |Fix| Fixed a bug in :func:`metrics.normalized_mutual_info_score` which could return unbounded values. :pr:`22635` by :user:`Jérémie du Boisberranger`.

- |Fix| Fixes :func:`metrics.precision_recall_curve` and :func:`metrics.average_precision_score` when true labels are all negative. :pr:`19085` by :user:`Varun Agrawal`.

- |API| `metrics.SCORERS` is now deprecated and will be removed in 1.3. Please use :func:`metrics.get_scorer_names` to retrieve the names of all available scorers. :pr:`22866` by `Adrin Jalali`_.

- |API| Parameters ``sample_weight`` and ``multioutput`` of :func:`metrics.mean_absolute_percentage_error` are now keyword-only, in accordance with `SLEP009`_. A deprecation cycle was introduced. :pr:`21576` by :user:`Paul-Emile Dugnat
`.

- |API| The `"wminkowski"` metric of :class:`metrics.DistanceMetric` is deprecated and will be removed in version 1.3. Instead the existing `"minkowski"` metric now takes in an optional `w` parameter for weights. This deprecation aims at remaining consistent with the SciPy 1.8 convention. :pr:`21873` by :user:`Yar Khine Phyo`.

- |API| :class:`metrics.DistanceMetric` has been moved from :mod:`sklearn.neighbors` to :mod:`sklearn.metrics`. Using `neighbors.DistanceMetric` for imports is still valid for backward compatibility, but this alias will be removed in 1.3. :pr:`21177` by :user:`Julien Jerphanion`.

:mod:`sklearn.mixture`
................................

- |Enhancement| :class:`mixture.GaussianMixture` and :class:`mixture.BayesianGaussianMixture` can now be initialized using k-means++ and random data points. :pr:`20408` by :user:`Gordon Walsh`, :user:`Alberto Ceballos` and :user:`Andres Rios`.

- |Fix| Fix a bug that correctly initializes `precisions_cholesky_` in :class:`mixture.GaussianMixture` when providing `precisions_init` by taking its square root. :pr:`22058` by :user:`Guillaume Lemaitre`.

- |Fix| :class:`mixture.GaussianMixture` now normalizes `weights_` more safely, preventing rounding errors when calling :meth:`mixture.GaussianMixture.sample` with `n_components=1`. :pr:`23034` by :user:`Meekail Zain`.

:mod:`sklearn.model_selection`
................................
- |Enhancement| It is now possible to pass `scoring="matthews_corrcoef"` to all model selection tools with a `scoring` argument to use the Matthews correlation coefficient (MCC). :pr:`22203` by :user:`Olivier Grisel`.

- |Enhancement| Raise an error during cross-validation when the fits for all the splits failed. Similarly raise an error during grid-search when the fits for all the models and all the splits failed. :pr:`21026` by :user:`Loïc Estève`.

- |Fix| :class:`model_selection.GridSearchCV` and :class:`model_selection.HalvingGridSearchCV` now validate input parameters in `fit` instead of `__init__`. :pr:`21880` by :user:`Mrinal Tyagi`.

- |Fix| :func:`model_selection.learning_curve` now supports `partial_fit` with regressors. :pr:`22982` by `Thomas Fan`_.

:mod:`sklearn.multiclass`
................................

- |Enhancement| :class:`multiclass.OneVsRestClassifier` now supports a `verbose` parameter so progress on fitting can be seen. :pr:`22508` by :user:`Chris Combs`.

- |Fix| :meth:`multiclass.OneVsOneClassifier.predict` returns correct predictions when the inner classifier only has a :term:`predict_proba`. :pr:`22604` by `Thomas Fan`_.

:mod:`sklearn.neighbors`
................................

- |Enhancement| Adds :term:`get_feature_names_out` to :class:`neighbors.RadiusNeighborsTransformer`, :class:`neighbors.KNeighborsTransformer` and :class:`neighbors.NeighborhoodComponentsAnalysis`. :pr:`22212` by :user:`Meekail Zain`.

- |Fix| :class:`neighbors.KernelDensity` now validates input parameters in `fit` instead of `__init__`. :pr:`21430` by :user:`Desislava Vasileva` and :user:`Lucy Jimenez`.

- |Fix| :func:`neighbors.KNeighborsRegressor.predict` now works properly when given an array-like input if `KNeighborsRegressor` is first constructed with a callable passed to the `weights` parameter. :pr:`22687` by :user:`Meekail Zain`.

:mod:`sklearn.neural_network`
................................
- |Enhancement| :class:`neural_network.MLPClassifier` and :class:`neural_network.MLPRegressor` show error messages when optimizers produce non-finite parameter weights. :pr:`22150` by :user:`Christian Ritter` and :user:`Norbert Preining`.

- |Enhancement| Adds :term:`get_feature_names_out` to :class:`neural_network.BernoulliRBM`. :pr:`22248` by `Thomas Fan`_.

:mod:`sklearn.pipeline`
................................

- |Enhancement| Added support for "passthrough" in :class:`pipeline.FeatureUnion`. Setting a transformer to "passthrough" will pass the features unchanged. :pr:`20860` by :user:`Shubhraneel Pal`.

- |Fix| :class:`pipeline.Pipeline` now does not validate hyper-parameters in `__init__` but in `.fit()`. :pr:`21888` by :user:`iofall` and :user:`Arisa Y.`.

- |Fix| :class:`pipeline.FeatureUnion` does not validate hyper-parameters in `__init__`. Validation is now handled in `.fit()` and `.fit_transform()`. :pr:`21954` by :user:`iofall` and :user:`Arisa Y.`.

- |Fix| Defines `__sklearn_is_fitted__` in :class:`pipeline.FeatureUnion` to return correct result with :func:`utils.validation.check_is_fitted`. :pr:`22953` by :user:`randomgeek78`.

:mod:`sklearn.preprocessing`
................................

- |Feature| :class:`preprocessing.OneHotEncoder` now supports grouping infrequent categories into a single feature. Grouping infrequent categories is enabled by specifying how to select infrequent categories with `min_frequency` or `max_categories`. :pr:`16018` by `Thomas Fan`_.

- |Enhancement| Adds a `subsample` parameter to :class:`preprocessing.KBinsDiscretizer`. This allows specifying a maximum number of samples to be used while fitting the model. The option is only available when `strategy` is set to `quantile`. :pr:`21445` by
:user:`Felipe Bidu` and :user:`Amanda Dsouza`.

- |Enhancement| Adds `encoded_missing_value` to :class:`preprocessing.OrdinalEncoder` to configure the encoded value for missing data. :pr:`21988` by `Thomas Fan`_.

- |Enhancement| Added the `get_feature_names_out` method and a new parameter `feature_names_out` to :class:`preprocessing.FunctionTransformer`. You can set `feature_names_out` to 'one-to-one' to use the input feature names as the output feature names, or you can set it to a callable that returns the output feature names. This is especially useful when the transformer changes the number of features. If `feature_names_out` is None (which is the default), then `get_feature_names_out` is not defined. :pr:`21569` by :user:`Aurélien Geron`.

- |Enhancement| Adds :term:`get_feature_names_out` to :class:`preprocessing.Normalizer`, :class:`preprocessing.KernelCenterer`, :class:`preprocessing.OrdinalEncoder`, and :class:`preprocessing.Binarizer`. :pr:`21079` by `Thomas Fan`_.

- |Fix| :class:`preprocessing.PowerTransformer` with `method='yeo-johnson'` better supports significantly non-Gaussian data when searching for an optimal lambda. :pr:`20653` by `Thomas Fan`_.

- |Fix| :class:`preprocessing.LabelBinarizer` now validates input parameters in `fit` instead of `__init__`. :pr:`21434` by :user:`Krum Arnaudov`.

- |Fix| :class:`preprocessing.FunctionTransformer` with `check_inverse=True` now provides an informative error message when input has mixed dtypes. :pr:`19916` by :user:`Zhehao Liu`.
- |Fix| :class:`preprocessing.KBinsDiscretizer` handles bin edges more consistently now. :pr:`14975` by `Andreas Müller`_ and :pr:`22526` by :user:`Meekail Zain`.

- |Fix| Adds :meth:`preprocessing.KBinsDiscretizer.get_feature_names_out` support when `encode="ordinal"`. :pr:`22735` by `Thomas Fan`_.

:mod:`sklearn.random_projection`
................................

- |Enhancement| Adds an `inverse_transform` method and a `compute_inverse_transform` parameter to :class:`random_projection.GaussianRandomProjection` and :class:`random_projection.SparseRandomProjection`. When the parameter is set to True, the pseudo-inverse of the components is computed during `fit` and stored as `inverse_components_`. :pr:`21701` by :user:`Aurélien Geron`.

- |Enhancement| :class:`random_projection.SparseRandomProjection` and :class:`random_projection.GaussianRandomProjection` preserve dtype for `numpy.float32`. :pr:`22114` by :user:`Takeshi Oura`.

- |Enhancement| Adds :term:`get_feature_names_out` to all transformers in the :mod:`sklearn.random_projection` module: :class:`random_projection.GaussianRandomProjection` and :class:`random_projection.SparseRandomProjection`. :pr:`21330` by :user:`Loïc Estève`.

:mod:`sklearn.svm`
................................

- |Enhancement| :class:`svm.OneClassSVM`, :class:`svm.NuSVC`, :class:`svm.NuSVR`, :class:`svm.SVC` and :class:`svm.SVR` now expose `n_iter_`, the number of iterations of the libsvm optimization routine. :pr:`21408` by :user:`Juan Martín Loyola`.

- |Enhancement| :class:`svm.SVR`, :class:`svm.SVC`, :class:`svm.NuSVR`, :class:`svm.OneClassSVM` and :class:`svm.NuSVC` now raise an error when the dual-gap estimation produces non-finite parameter weights. :pr:`22149` by :user:`Christian Ritter` and :user:`Norbert Preining`.

- |Fix| :class:`svm.NuSVC`, :class:`svm.NuSVR`, :class:`svm.SVC`, :class:`svm.SVR` and :class:`svm.OneClassSVM` now validate input parameters in `fit` instead of `__init__`.
:pr:`21436` by :user:`Haidar Almubarak`.

:mod:`sklearn.tree`
................................

- |Enhancement| :class:`tree.DecisionTreeClassifier` and :class:`tree.ExtraTreeClassifier` have the new `criterion="log_loss"`, which is equivalent to `criterion="entropy"`. :pr:`23047` by :user:`Christian Lorentzen`.

- |Fix| Fixed a bug in the Poisson splitting criterion for :class:`tree.DecisionTreeRegressor`. :pr:`22191` by :user:`Christian Lorentzen`.

- |API| Changed the default value of `max_features` to 1.0 for :class:`tree.ExtraTreeRegressor` and to `"sqrt"` for :class:`tree.ExtraTreeClassifier`, which will not change the fit result. The original default value `"auto"` has been deprecated and will be removed in version 1.3. Setting `max_features` to `"auto"` is also deprecated for :class:`tree.DecisionTreeClassifier` and :class:`tree.DecisionTreeRegressor`. :pr:`22476` by :user:`Zhehao Liu`.

:mod:`sklearn.utils`
................................

- |Enhancement| :func:`utils.check_array` and :func:`utils.multiclass.type_of_target` now accept an `input_name` parameter to make the error message more informative when passed invalid input data (e.g. with NaN or infinite values). :pr:`21219` by :user:`Olivier Grisel`.

- |Enhancement| :func:`utils.check_array` returns a float ndarray with `np.nan` when passed a `Float32` or `Float64` pandas extension array with `pd.NA`. :pr:`21278` by `Thomas Fan`_.

- |Enhancement| :func:`utils.estimator_html_repr` shows a more helpful error message when running in a jupyter notebook that is not trusted. :pr:`21316` by `Thomas Fan`_.

- |Enhancement| :func:`utils.estimator_html_repr` displays an arrow on the top left corner of the HTML representation to show how the elements are clickable. :pr:`21298` by
`Thomas Fan`_.

- |Enhancement| :func:`utils.check_array` with `dtype=None` returns numeric arrays when passed a pandas DataFrame with mixed dtypes. `dtype="numeric"` will also better infer the dtype when the DataFrame has mixed dtypes. :pr:`22237` by `Thomas Fan`_.

- |Enhancement| :func:`utils.check_scalar` now has better messages when displaying the type. :pr:`22218` by `Thomas Fan`_.

- |Fix| Changes the error message of the `ValidationError` raised by :func:`utils.check_X_y` when y is None so that it is compatible with the `check_requires_y_none` estimator check. :pr:`22578` by :user:`Claudio Salvatore Arcidiacono`.

- |Fix| :func:`utils.class_weight.compute_class_weight` now only requires that all classes in `y` have a weight in `class_weight`. An error is still raised when a class is present in `y` but not in `class_weight`. :pr:`22595` by `Thomas Fan`_.

- |Fix| :func:`utils.estimator_html_repr` has an improved visualization for nested meta-estimators. :pr:`21310` by `Thomas Fan`_.

- |Fix| :func:`utils.check_scalar` raises an error when `include_boundaries={"left", "right"}` and the boundaries are not set. :pr:`22027` by :user:`Marie Lanternier`.

- |Fix| :func:`utils.metaestimators.available_if` correctly returns a bound method that can be pickled. :pr:`23077` by `Thomas Fan`_.

- |API| :func:`utils.estimator_checks.check_estimator`'s argument is now called `estimator` (previous name was `Estimator`). :pr:`22188` by :user:`Mathurin Massias`.

- |API| ``utils.metaestimators.if_delegate_has_method`` is deprecated and will be removed in version 1.3.
Use :func:`utils.metaestimators.available_if` instead. :pr:`22830` by :user:`Jérémie du Boisberranger`.

.. rubric:: Code and documentation contributors

Thanks to everyone who has contributed to the maintenance and improvement of the project since version 1.0, including:

2357juan, Abhishek Gupta, adamgonzo, Adam Li, adijohar, Aditya Kumawat, Aditya Raghuwanshi, Aditya Singh, Adrian Trujillo Duron, Adrin Jalali, ahmadjubair33, AJ Druck, aj-white, Alan Peixinho, Alberto Mario Ceballos-Arroyo, Alek Lefebvre, Alex, Alexandr, Alexandre Gramfort, alexanmv, almeidayoel, Amanda Dsouza, Aman Sharma, Amar pratap singh, Amit, amrcode, András Simon, Andreas Grivas, Andreas Mueller, Andrew Knyazev, Andriy, Angus L'Herrou, Ankit Sharma, Anne Ducout, Arisa, Arth, arthurmello, Arturo Amor, ArturoAmor, Atharva Patil, aufarkari, Aurélien Geron, avm19, Ayan Bag, baam, Bardiya Ak, Behrouz B, Ben3940, Benjamin Bossan, Bharat Raghunathan, Bijil Subhash, bmreiniger, Brandon Truth, Brenden Kadota, Brian Sun, cdrig, Chalmer Lowe, Chiara Marmo, Chitteti Srinath Reddy, Chloe-Agathe Azencott, Christian Lorentzen, Christian Ritter, christopherlim98, Christoph T. Weidemann, Christos Aridas, Claudio Salvatore Arcidiacono, combscCode, Daniela Fernandes, darioka, Darren Nguyen, Dave Eargle, David Gilbertson, David Poznik, Dea María Léon, Dennis Osei, DessyVV, Dev514, Dimitri Papadopoulos Orfanos, Diwakar Gupta, Dr. Felix M.
Riese, drskd, Emiko Sano, Emmanouil Gionanidis, EricEllwanger, Erich Schubert, Eric Larson, Eric Ndirangu, ErmolaevPA, Estefania Barreto-Ojeda, eyast, Fatima GASMI, Federico Luna, Felix Glushchenkov, fkaren27, Fortune Uwha, FPGAwesome, francoisgoupil, Frans Larsson, ftorres16, Gabor Berei, Gabor Kertesz, Gabriel Stefanini Vicente, Gabriel S Vicente, Gael Varoquaux, GAURAV CHOUDHARY, Gauthier I, genvalen, Geoffrey-Paris, Giancarlo Pablo, glennfrutiz, gpapadok, Guillaume Lemaitre, Guillermo Tomás Fernández Martín, Gustavo Oliveira, Haidar Almubarak, Hannah Bohle, Hansin Ahuja, Haoyin Xu, Haya, Helder Geovane Gomes de Lima, henrymooresc, Hideaki Imamura, Himanshu Kumar, Hind-M, hmasdev, hvassard, i-aki-y, iasoon, Inclusive Coding Bot, Ingela, iofall, Ishan Kumar, Jack Liu, Jake Cowton, jalexand3r, J Alexander, Jauhar, Jaya Surya Kommireddy, Jay Stanley, Jeff Hale, je-kr, JElfner, Jenny Vo, Jérémie du Boisberranger, Jihane, Jirka Borovec, Joel Nothman, Jon Haitz Legarreta Gorroño, Jordan Silke, Jorge Ciprián, Jorge Loayza, Joseph Chazalon, Joseph Schwartz-Messing, Jovan Stojanovic, JSchuerz, Juan Carlos Alfaro Jiménez, Juan Martin Loyola, Julien Jerphanion, katotten, Kaushik Roy Chowdhury, Ken4git, Kenneth Prabakaran, kernc, Kevin Doucet, KimAYoung, Koushik Joshi, Kranthi Sedamaki, krishna kumar, krumetoft, lesnee, Lisa Casino, Logan Thomas, Loic Esteve, Louis Wagner, LucieClair, Lucy Liu, Luiz
Eduardo Amaral, Magali, MaggieChege, Mai, mandjevant, Mandy Gu, Manimaran, MarcoM, Marco Wurps, Maren Westermann, Maria Boerner, MarieS-WiMLDS, Martel Corentin, martin-kokos, mathurinm, Matías, matjansen, Matteo Francia, Maxwell, Meekail Zain, Megabyte, Mehrdad Moradizadeh, melemo2, Michael I Chen, michalkrawczyk, Micky774, milana2, millawell, Ming-Yang Ho, Mitzi, miwojc, Mizuki, mlant, Mohamed Haseeb, Mohit Sharma, Moonkyung94, mpoemsl, MrinalTyagi, Mr. Leu, msabatier, murata-yu, N, Nadirhan Şahin, Naipawat Poolsawat, NartayXD, nastegiano, nathansquan, nat-salt, Nicki Skafte Detlefsen, Nicolas Hug, Niket Jain, Nikhil Suresh, Nikita Titov, Nikolay Kondratyev, Ohad Michel, Oleksandr Husak, Olivier Grisel, partev, Patrick Ferreira, Paul, pelennor, PierreAttard, Piet Brömmel, Pieter Gijsbers, Pinky, poloso, Pramod Anantharam, puhuk, Purna Chandra Mansingh, QuadV, Rahil Parikh, Randall Boyes, randomgeek78, Raz Hoshia, Reshama Shaikh, Ricardo Ferreira, Richard Taylor, Rileran, Rishabh, Robin Thibaut, Rocco Meli, Roman Feldbauer, Roman Yurchak, Ross Barnowski, rsnegrin, Sachin Yadav, sakinaOuisrani, Sam Adam Day, Sanjay Marreddi, Sebastian Pujalte, SEELE, SELEE, Seyedsaman (Sam) Emami, ShanDeng123, Shao Yang Hong, sharmadharmpal, shaymerNaturalint, Shuangchi He, Shubhraneel Pal, siavrez, slishak, Smile, spikebh, sply88, Srinath Kailasa, Stéphane Collot, Sultan Orazbayev, Sumit Saha, Sven Eschlbeck, Sven Stehle, Swapnil Jha, Sylvain Marié, Takeshi Oura, Tamires Santana, Tenavi, teunpe, Theis Ferré Hjortkjær, Thiruvenkadam, Thomas J.
Fan, t-jakubek, toastedyeast, Tom Dupré la Tour, Tom McTiernan, TONY GEORGE, Tyler Martin, Tyler Reddy, Udit Gupta, Ugo Marchand, Varun Agrawal, Venkatachalam N, Vera Komeyer, victoirelouis, Vikas Vishwakarma, Vikrant khedkar, Vladimir Chernyy, Vladimir Kim, WeijiaDu, Xiao Yuan, Yar Khine Phyo, Ying Xiong, yiyangq, Yosshi999, Yuki Koyama, Zach Deane-Mayer, Zeel B Patel, zempleni, zhenfisher, 赵丰 (Zhao Feng)
.. include:: _contributors.rst

.. currentmodule:: sklearn

============
Version 0.13
============

.. _changes_0_13_1:

Version 0.13.1
==============

**February 23, 2013**

The 0.13.1 release only fixes some bugs and does not add any new functionality.

Changelog
---------

- Fixed a testing error caused by the function `cross_validation.train_test_split` being interpreted as a test by `Yaroslav Halchenko`_.

- Fixed a bug in the reassignment of small clusters in the :class:`cluster.MiniBatchKMeans` by `Gael Varoquaux`_.

- Fixed default value of ``gamma`` in :class:`decomposition.KernelPCA` by `Lars Buitinck`_.

- Updated joblib to ``0.7.0d`` by `Gael Varoquaux`_.

- Fixed scaling of the deviance in :class:`ensemble.GradientBoostingClassifier` by `Peter Prettenhofer`_.

- Better tie-breaking in :class:`multiclass.OneVsOneClassifier` by `Andreas Müller`_.

- Other small improvements to tests and documentation.

People
------

List of contributors for release 0.13.1 by number of commits.

* 16 `Lars Buitinck`_
* 12 `Andreas Müller`_
* 8 `Gael Varoquaux`_
* 5 Robert Marchman
* 3 `Peter Prettenhofer`_
* 2 Hrishikesh Huilgolkar
* 1 Bastiaan van den Berg
* 1 Diego Molla
* 1 `Gilles Louppe`_
* 1 `Mathieu Blondel`_
* 1 `Nelle Varoquaux`_
* 1 Rafael Cunha de Almeida
* 1 Rolando Espinoza La fuente
* 1 `Vlad Niculae`_
* 1 `Yaroslav Halchenko`_

.. _changes_0_13:

Version 0.13
============

**January 21, 2013**

New Estimator Classes
---------------------

- :class:`dummy.DummyClassifier` and :class:`dummy.DummyRegressor`, two data-independent predictors by `Mathieu Blondel`_. Useful to sanity-check your estimators. See :ref:`dummy_estimators` in the user guide. Multioutput support added by `Arnaud Joly`_.

- :class:`decomposition.FactorAnalysis`, a transformer implementing the classical factor analysis, by `Christian Osendorfer`_ and `Alexandre Gramfort`_. See :ref:`FA` in the user guide.
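A data-independent baseline from :class:`dummy.DummyClassifier` can be sketched in a few lines (toy data, arbitrary strategy choice):

```python
from sklearn.dummy import DummyClassifier

X = [[0], [1], [2], [3]]
y = [0, 0, 0, 1]

# Ignores the features entirely: always predicts the most frequent class.
clf = DummyClassifier(strategy="most_frequent").fit(X, y)
assert list(clf.predict([[10], [20]])) == [0, 0]
```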
- :class:`feature_extraction.FeatureHasher`, a transformer implementing the "hashing trick" for fast, low-memory feature extraction from string fields by `Lars Buitinck`_ and :class:`feature_extraction.text.HashingVectorizer` for text documents by `Olivier Grisel`_. See :ref:`feature_hashing` and :ref:`hashing_vectorizer` for the documentation and sample usage.

- :class:`pipeline.FeatureUnion`, a transformer that concatenates results of several other transformers by `Andreas Müller`_. See :ref:`feature_union` in the user guide.

- :class:`random_projection.GaussianRandomProjection`, :class:`random_projection.SparseRandomProjection` and the function :func:`random_projection.johnson_lindenstrauss_min_dim`. The first two are transformers implementing Gaussian and sparse random projection matrices by `Olivier Grisel`_ and `Arnaud Joly`_. See :ref:`random_projection` in the user guide.

- :class:`kernel_approximation.Nystroem`, a transformer for approximating arbitrary kernels by `Andreas Müller`_. See :ref:`nystroem_kernel_approx` in the user guide.

- :class:`preprocessing.OneHotEncoder`, a transformer that computes binary encodings of categorical features by `Andreas Müller`_. See :ref:`preprocessing_categorical_features` in the user guide.

- :class:`linear_model.PassiveAggressiveClassifier` and :class:`linear_model.PassiveAggressiveRegressor`, predictors implementing an efficient stochastic optimization for linear models by `Rob Zinkov`_ and `Mathieu Blondel`_. See :ref:`passive_aggressive` in the user guide.

- :class:`ensemble.RandomTreesEmbedding`, a transformer for creating high-dimensional sparse representations using ensembles of totally random trees by `Andreas Müller`_. See :ref:`random_trees_embedding` in the user guide.

- :class:`manifold.SpectralEmbedding` and function :func:`manifold.spectral_embedding`, implementing the "laplacian eigenmaps" transformation for non-linear dimensionality reduction by Wei Li.
See :ref:`spectral_embedding` in the user guide.

- :class:`isotonic.IsotonicRegression` by `Fabian Pedregosa`_, `Alexandre Gramfort`_ and `Nelle Varoquaux`_.

Changelog
---------

- :func:`metrics.zero_one_loss` (formerly ``metrics.zero_one``) now has an option for normalized output that reports the fraction of misclassifications, rather than the raw number of misclassifications. By Kyle Beauchamp.

- :class:`tree.DecisionTreeClassifier` and all derived ensemble models now support sample weighting, by `Noel Dawe`_ and `Gilles Louppe`_.

- Speedup improvement when using bootstrap samples in forests of randomized trees, by `Peter Prettenhofer`_ and `Gilles Louppe`_.

- Partial dependence plots for :ref:`gradient_boosting` in `ensemble.partial_dependence.partial_dependence` by `Peter Prettenhofer`_. See :ref:`sphx_glr_auto_examples_inspection_plot_partial_dependence.py` for an example.

- The table of contents on the website has now been made expandable by `Jaques Grobler`_.

- :class:`feature_selection.SelectPercentile` now breaks ties deterministically instead of returning all equally ranked features.

- :class:`feature_selection.SelectKBest` and :class:`feature_selection.SelectPercentile` are more numerically stable since they use scores, rather than p-values, to rank results. This means that they might sometimes select different features than they did previously.
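The two output modes of :func:`metrics.zero_one_loss` mentioned above can be illustrated with made-up labels:

```python
from sklearn.metrics import zero_one_loss

y_true = [0, 1, 1, 0]
y_pred = [0, 1, 0, 0]

# Normalized (default): fraction of misclassifications.
assert zero_one_loss(y_true, y_pred) == 0.25
# normalize=False: raw number of misclassifications.
assert zero_one_loss(y_true, y_pred, normalize=False) == 1
```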
- Ridge regression and ridge classification fitting with ``sparse_cg`` solver no longer has quadratic memory complexity, by `Lars Buitinck`_ and `Fabian Pedregosa`_.

- Ridge regression and ridge classification now support a new fast solver called ``lsqr``, by `Mathieu Blondel`_.

- Speed up of :func:`metrics.precision_recall_curve` by Conrad Lee.

- Added support for reading/writing svmlight files with pairwise preference attribute (qid in svmlight file format) in :func:`datasets.dump_svmlight_file` and :func:`datasets.load_svmlight_file` by `Fabian Pedregosa`_.

- Faster and more robust :func:`metrics.confusion_matrix` and :ref:`clustering_evaluation` by Wei Li.

- `cross_validation.cross_val_score` now works with precomputed kernels and affinity matrices, by `Andreas Müller`_.

- LARS algorithm made more numerically stable with heuristics to drop regressors too correlated as well as to stop the path when numerical noise becomes predominant, by `Gael Varoquaux`_.

- Faster implementation of :func:`metrics.precision_recall_curve` by Conrad Lee.

- New kernel `metrics.chi2_kernel` by `Andreas Müller`_, often used in computer vision applications.

- Fix of longstanding bug in :class:`naive_bayes.BernoulliNB` fixed by Shaun Jackman.

- Implemented ``predict_proba`` in :class:`multiclass.OneVsRestClassifier`, by Andrew Winterman.
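The chi-squared kernel mentioned above applies to non-negative, histogram-like features; a minimal sketch with made-up values:

```python
import numpy as np
from sklearn.metrics.pairwise import chi2_kernel

# Histogram-like, non-negative features (typical in computer vision).
X = np.array([[0.2, 0.8],
              [0.5, 0.5]])
K = chi2_kernel(X, gamma=1.0)

assert K.shape == (2, 2)
# k(x, x) = exp(-gamma * 0) = 1 on the diagonal.
assert np.allclose(np.diag(K), 1.0)
```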
- Improve consistency in gradient boosting: estimators :class:`ensemble.GradientBoostingRegressor` and :class:`ensemble.GradientBoostingClassifier` use the estimator :class:`tree.DecisionTreeRegressor` instead of the `tree.\_tree.Tree` data structure by `Arnaud Joly`\_. - Fixed a floating point exception in the :ref:`decision trees ` module, by Seberg. - Fix :func:`metrics.roc\_curve` fails when y\_true has only one class by Wei Li. - Add the :func:`metrics.mean\_absolute\_error` function which computes the mean absolute error. The :func:`metrics.mean\_squared\_error`, :func:`metrics.mean\_absolute\_error` and :func:`metrics.r2\_score` metrics support multioutput by `Arnaud Joly`\_. - Fixed ``class\_weight`` support in :class:`svm.LinearSVC` and :class:`linear\_model.LogisticRegression` by `Andreas Müller`\_. The meaning of ``class\_weight`` was reversed as erroneously higher weight meant less positives of a given class in earlier releases. - Improve narrative documentation and consistency in :mod:`sklearn.metrics` for regression and classification metrics by `Arnaud Joly`\_. - Fixed a bug in :class:`sklearn.svm.SVC` when using csr-matrices with unsorted indices by Xinfan Meng and `Andreas Müller`\_. - :class:`cluster.MiniBatchKMeans`: Add random reassignment of cluster centers with little observations attached to them, by `Gael Varoquaux`\_. API changes summary ------------------- - Renamed all occurrences of ``n\_atoms`` to ``n\_components`` for consistency. This applies to :class:`decomposition.DictionaryLearning`, :class:`decomposition.MiniBatchDictionaryLearning`, :func:`decomposition.dict\_learning`, :func:`decomposition.dict\_learning\_online`. - Renamed all occurrences of ``max\_iters`` to ``max\_iter`` for consistency. This applies to `semi\_supervised.LabelPropagation` and `semi\_supervised.label\_propagation.LabelSpreading`. 
- Renamed all occurrences of ``learn\_rate`` to ``learning\_rate`` for consistency in `ensemble.BaseGradientBoosting` and :class:`ensemble.GradientBoostingRegressor`. - The module ``sklearn.linear\_model.sparse`` is gone. Sparse matrix support was already integrated into the "regular" linear models. - `sklearn.metrics.mean\_square\_error`, which incorrectly returned the accumulated error, was removed. Use :func:`metrics.mean\_squared\_error` instead. - Passing ``class\_weight`` parameters to ``fit`` methods is no longer supported. Pass them to estimator constructors instead. - GMMs no longer have ``decode`` and ``rvs`` methods. Use the ``score``, ``predict`` or ``sample`` methods instead. - The ``solver`` fit option in Ridge regression and classification is now deprecated and will be removed in v0.14. Use the constructor option instead. - `feature\_extraction.text.DictVectorizer` now returns sparse matrices in the CSR format, instead of COO. - Renamed ``k`` in `cross\_validation.KFold` and `cross\_validation.StratifiedKFold` to ``n\_folds``, renamed ``n\_bootstraps`` to ``n\_iter`` in ``cross\_validation.Bootstrap``. - Renamed all occurrences of ``n\_iterations`` to ``n\_iter`` for consistency. This applies to `cross\_validation.ShuffleSplit`, `cross\_validation.StratifiedShuffleSplit`, :func:`utils.extmath.randomized\_range\_finder` and :func:`utils.extmath.randomized\_svd`. - Replaced ``rho`` in :class:`linear\_model.ElasticNet` and :class:`linear\_model.SGDClassifier` by ``l1\_ratio``. The ``rho`` parameter had different meanings; ``l1\_ratio`` was introduced to avoid confusion. It has the same meaning as previously ``rho`` in :class:`linear\_model.ElasticNet` and ``(1-rho)`` in :class:`linear\_model.SGDClassifier`. - :class:`linear\_model.LassoLars` and :class:`linear\_model.Lars` now store a list of paths in the case of
https://github.com/scikit-learn/scikit-learn/blob/main//doc/whats_new/v0.13.rst
multiple targets, rather than an array of paths. - The attribute ``gmm`` of `hmm.GMMHMM` was renamed to ``gmm\_`` to adhere more strictly with the API. - `cluster.spectral\_embedding` was moved to :func:`manifold.spectral\_embedding`. - Renamed ``eig\_tol`` in :func:`manifold.spectral\_embedding`, :class:`cluster.SpectralClustering` to ``eigen\_tol``, renamed ``mode`` to ``eigen\_solver``. - Renamed ``mode`` in :func:`manifold.spectral\_embedding` and :class:`cluster.SpectralClustering` to ``eigen\_solver``. - ``classes\_`` and ``n\_classes\_`` attributes of :class:`tree.DecisionTreeClassifier` and all derived ensemble models are now flat in case of single output problems and nested in case of multi-output problems. - The ``estimators\_`` attribute of :class:`ensemble.GradientBoostingRegressor` and :class:`ensemble.GradientBoostingClassifier` is now an array of :class:`tree.DecisionTreeRegressor`. - Renamed ``chunk\_size`` to ``batch\_size`` in :class:`decomposition.MiniBatchDictionaryLearning` and :class:`decomposition.MiniBatchSparsePCA` for consistency. - :class:`svm.SVC` and :class:`svm.NuSVC` now provide a ``classes\_`` attribute and support arbitrary dtypes for labels ``y``. Also, the dtype returned by ``predict`` now reflects the dtype of ``y`` during ``fit`` (used to be ``np.float``). 
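The ``rho`` → ``l1_ratio`` rename described above can be sketched as follows (a hedged illustration using current parameter names; the penalty mixes L1 and L2 as ``l1_ratio * L1 + (1 - l1_ratio) * L2``):

```python
import numpy as np
from sklearn.linear_model import ElasticNet, SGDClassifier

X = np.array([[0.0, 1.0], [1.0, 2.0], [2.0, 0.0], [3.0, 1.0]])

# Formerly rho=0.7 in ElasticNet: same meaning as l1_ratio=0.7.
enet = ElasticNet(alpha=0.1, l1_ratio=0.7)
enet.fit(X, [0.0, 1.0, 2.0, 3.0])

# Formerly rho in SGDClassifier meant the opposite mix, i.e. rho = 1 - l1_ratio.
sgd = SGDClassifier(penalty="elasticnet", l1_ratio=0.7, random_state=0)
sgd.fit(X, [0, 1, 0, 1])

print(enet.coef_.shape, sgd.coef_.shape)  # (2,) (1, 2)
```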
- Changed default test\_size in `cross\_validation.train\_test\_split` to None, added possibility to infer ``test\_size`` from ``train\_size`` in `cross\_validation.ShuffleSplit` and `cross\_validation.StratifiedShuffleSplit`. - Renamed function `sklearn.metrics.zero\_one` to `sklearn.metrics.zero\_one\_loss`. Be aware that the default behavior in `sklearn.metrics.zero\_one\_loss` is different from `sklearn.metrics.zero\_one`: ``normalize=False`` is changed to ``normalize=True``. - Renamed function `metrics.zero\_one\_score` to :func:`metrics.accuracy\_score`. - :func:`datasets.make\_circles` now has the same number of inner and outer points. - In the Naive Bayes classifiers, the ``class\_prior`` parameter was moved from ``fit`` to ``\_\_init\_\_``. People ------ List of contributors for release 0.13 by number of commits. \* 364 `Andreas Müller`\_ \* 143 `Arnaud Joly`\_ \* 137 `Peter Prettenhofer`\_ \* 131 `Gael Varoquaux`\_ \* 117 `Mathieu Blondel`\_ \* 108 `Lars Buitinck`\_ \* 106 Wei Li \* 101 `Olivier Grisel`\_ \* 65 `Vlad Niculae`\_ \* 54 `Gilles Louppe`\_ \* 40 `Jaques Grobler`\_ \* 38 `Alexandre Gramfort`\_ \* 30 `Rob Zinkov`\_ \* 19 Aymeric Masurelle \* 18 Andrew Winterman \* 17 `Fabian Pedregosa`\_ \* 17 Nelle Varoquaux \* 16 `Christian Osendorfer`\_ \* 14 `Daniel Nouri`\_ \* 13 :user:`Virgile Fritsch ` \* 13 syhw \* 12 `Satrajit Ghosh`\_ \* 10 Corey Lynch \* 10 Kyle Beauchamp \* 9 Brian Cheung \* 9 Immanuel Bayer \* 9 mr.Shu \* 8 Conrad Lee \* 8 `James Bergstra`\_ \* 7 Tadej Janež \* 6 Brian Cajes \* 6 `Jake Vanderplas`\_ \* 6 Michael \* 6 Noel Dawe \* 6 Tiago Nunes \* 6 cow \* 5 Anze \* 5 Shiqiao Du \* 4 Christian Jauvin \* 4 Jacques Kvam \* 4 Richard T. 
Guy \* 4 `Robert Layton`\_ \* 3 Alexandre Abraham \* 3 Doug Coleman \* 3 Scott Dickerson \* 2 ApproximateIdentity \* 2 John Benediktsson \* 2 Mark Veronda \* 2 Matti Lyra \* 2 Mikhail Korobov \* 2 Xinfan Meng \* 1 Alejandro Weinstein \* 1 `Alexandre Passos`\_ \* 1 Christoph Deil \* 1 Eugene Nizhibitsky \* 1 Kenneth C. Arnold \* 1 Luis Pedro Coelho \* 1 Miroslav Batchkarov \* 1 Pavel \* 1 Sebastian Berg \* 1 Shaun Jackman \* 1 Subhodeep Moitra \* 1 bob \* 1 dengemann \* 1 emanuele \* 1 x006
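The ``zero_one`` → ``zero_one_loss`` rename noted in the API changes above also flipped the default from ``normalize=False`` to ``normalize=True``; a minimal sketch of the difference (using the current ``sklearn.metrics`` location):

```python
from sklearn.metrics import zero_one_loss, accuracy_score

y_true = [0, 1, 1, 0]
y_pred = [0, 1, 0, 0]

frac = zero_one_loss(y_true, y_pred)                    # default normalize=True: fraction misclassified
count = zero_one_loss(y_true, y_pred, normalize=False)  # old zero_one behaviour: raw count
acc = accuracy_score(y_true, y_pred)                    # formerly zero_one_score
print(frac, count, acc)  # 0.25 1 0.75
```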
.. include:: \_contributors.rst .. currentmodule:: sklearn ============ Version 0.15 ============ .. \_changes\_0\_15\_2: Version 0.15.2 ============== \*\*September 4, 2014\*\* Bug fixes --------- - Fixed handling of the ``p`` parameter of the Minkowski distance that was previously ignored in nearest neighbors models. By :user:`Nikolay Mayorov `. - Fixed duplicated alphas in :class:`linear\_model.LassoLars` with early stopping on 32 bit Python. By `Olivier Grisel`\_ and `Fabian Pedregosa`\_. - Fixed the build under Windows when scikit-learn is built with MSVC while NumPy is built with MinGW. By `Olivier Grisel`\_ and :user:`Federico Vaggi `. - Fixed an array index overflow bug in the coordinate descent solver. By `Gael Varoquaux`\_. - Better handling of numpy 1.9 deprecation warnings. By `Gael Varoquaux`\_. - Removed unnecessary data copy in :class:`cluster.KMeans`. By `Gael Varoquaux`\_. - Explicitly close open files to avoid ``ResourceWarnings`` under Python 3. By Calvin Giles. - The ``transform`` of :class:`discriminant\_analysis.LinearDiscriminantAnalysis` now projects the input on the most discriminant directions. By Martin Billinger. - Fixed potential overflow in ``\_tree.safe\_realloc`` by `Lars Buitinck`\_. - Performance optimization in :class:`isotonic.IsotonicRegression`. By Robert Bradshaw. - ``nose`` is no longer a runtime dependency to import ``sklearn``, only for running the tests. By `Joel Nothman`\_. - Many documentation and website fixes by `Joel Nothman`\_, `Lars Buitinck`\_, :user:`Matt Pico `, and others. .. \_changes\_0\_15\_1: Version 0.15.1 ============== \*\*August 1, 2014\*\* Bug fixes --------- - Made `cross\_validation.cross\_val\_score` use `cross\_validation.KFold` instead of `cross\_validation.StratifiedKFold` on multi-output classification problems. By :user:`Nikolay Mayorov `. - Support unseen labels in :class:`preprocessing.LabelBinarizer` to restore the default behavior of 0.14.1 for backward compatibility. By :user:`Hamzeh Alsalhi `. 
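The Minkowski ``p`` fix above can be demonstrated with a pair of points whose nearest neighbor differs under L1 and L2 distance (a sketch using the current ``sklearn.neighbors`` API):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

pts = np.array([[3.0, 0.0],    # L1 distance from origin: 3.0, L2 distance: 3.0
                [2.1, 2.1]])   # L1 distance from origin: 4.2, L2 distance: ~2.97
query = np.array([[0.0, 0.0]])

nearest = {}
for p in (1, 2):
    nn = NearestNeighbors(n_neighbors=1, metric="minkowski", p=p).fit(pts)
    _, idx = nn.kneighbors(query)
    nearest[p] = int(idx[0, 0])

print(nearest)  # {1: 0, 2: 1} -- p actually changes the result
```

Before the fix, ``p`` was silently ignored, so both queries would have returned the same neighbor.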
- Fixed the :class:`cluster.KMeans` stopping criterion that prevented early convergence detection. By Edward Raff and `Gael Varoquaux`\_. - Fixed the behavior of :class:`multiclass.OneVsOneClassifier` in case of ties at the per-class vote level by computing the correct per-class sum of prediction scores. By `Andreas Müller`\_. - Made `cross\_validation.cross\_val\_score` and `grid\_search.GridSearchCV` accept Python lists as input data. This is especially useful for cross-validation and model selection of text processing pipelines. By `Andreas Müller`\_. - Fixed data input checks of most estimators to accept input data that implements the NumPy ``\_\_array\_\_`` protocol. This is the case for ``pandas.Series`` and ``pandas.DataFrame`` in recent versions of pandas. By `Gael Varoquaux`\_. - Fixed a regression for :class:`linear\_model.SGDClassifier` with ``class\_weight="auto"`` on data with non-contiguous labels. By `Olivier Grisel`\_. .. \_changes\_0\_15: Version 0.15 ============ \*\*July 15, 2014\*\* Highlights ----------- - Many speed and memory improvements all across the code. - Huge speed and memory improvements to random forests (and extra trees) that also benefit better from parallel computing. - Incremental fit to :class:`BernoulliRBM ` - Added :class:`cluster.AgglomerativeClustering` for hierarchical agglomerative clustering with average linkage, complete linkage and ward strategies. - Added :class:`linear\_model.RANSACRegressor` for robust regression models. - Added dimensionality reduction with :class:`manifold.TSNE` which can be used to visualize high-dimensional data. Changelog --------- New features ............ - Added :class:`ensemble.BaggingClassifier` and :class:`ensemble.BaggingRegressor` meta-estimators for ensembling any kind of base estimator. See the :ref:`Bagging ` section of the user guide for details and examples. By `Gilles Louppe`\_. 
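The fix above that lets cross-validation accept Python lists is handy for text pipelines; a hedged sketch using the modern ``sklearn.model_selection`` location (the 0.15-era ``cross_validation`` module has since been replaced):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Plain Python lists of strings and labels, no arrays needed.
docs = ["good film", "great movie", "bad film", "awful movie"] * 3
labels = [1, 1, 0, 0] * 3

clf = make_pipeline(CountVectorizer(), MultinomialNB())
scores = cross_val_score(clf, docs, labels, cv=3)
print(scores)
```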
- New unsupervised feature selection algorithm :class:`feature\_selection.VarianceThreshold`, by `Lars Buitinck`\_. - Added :class:`linear\_model.RANSACRegressor` meta-estimator for the robust fitting of regression models. By :user:`Johannes Schönberger `. - Added :class:`cluster.AgglomerativeClustering` for hierarchical agglomerative clustering with average linkage, complete linkage and ward strategies, by `Nelle Varoquaux`\_ and `Gael Varoquaux`\_. - Shorthand constructors :func:`pipeline.make\_pipeline` and :func:`pipeline.make\_union` were added by `Lars Buitinck`\_. - Shuffle option for `cross\_validation.StratifiedKFold`. By :user:`Jeffrey Blackburne `. - Incremental learning (``partial\_fit``) for Gaussian Naive Bayes by Imran Haque. - Added ``partial\_fit`` to :class:`BernoulliRBM ` By :user:`Danny Sullivan `. - Added `learning\_curve` utility to chart performance with respect to training size. See :ref:`sphx\_glr\_auto\_examples\_model\_selection\_plot\_learning\_curve.py`. By Alexander Fabisch. - Add positive option in :class:`LassoCV
https://github.com/scikit-learn/scikit-learn/blob/main//doc/whats_new/v0.15.rst
` and :class:`ElasticNetCV `. By Brian Wignall and `Alexandre Gramfort`\_. - Added :class:`linear\_model.MultiTaskElasticNetCV` and :class:`linear\_model.MultiTaskLassoCV`. By `Manoj Kumar`\_. - Added :class:`manifold.TSNE`. By Alexander Fabisch. Enhancements ............ - Add sparse input support to :class:`ensemble.AdaBoostClassifier` and :class:`ensemble.AdaBoostRegressor` meta-estimators. By :user:`Hamzeh Alsalhi `. - Memory improvements of decision trees, by `Arnaud Joly`\_. - Decision trees can now be built in best-first manner by using ``max\_leaf\_nodes`` as the stopping criteria. Refactored the tree code to use either a stack or a priority queue for tree building. By `Peter Prettenhofer`\_ and `Gilles Louppe`\_. - Decision trees can now be fitted on fortran- and c-style arrays, and non-contiguous arrays without the need to make a copy. If the input array has a different dtype than ``np.float32``, a fortran-style copy will be made since fortran-style memory layout has speed advantages. By `Peter Prettenhofer`\_ and `Gilles Louppe`\_. - Speed improvement of regression trees by optimizing the computation of the mean square error criterion. This led to speed improvement of the tree, forest and gradient boosting tree modules. By `Arnaud Joly`\_ - The ``img\_to\_graph`` and ``grid\_to\_graph`` functions in :mod:`sklearn.feature\_extraction.image` now return ``np.ndarray`` instead of ``np.matrix`` when ``return\_as=np.ndarray``. See the Notes section for more information on compatibility. 
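Best-first tree building with ``max_leaf_nodes``, described above, can be sketched as follows (an illustration, not part of the original notes; ``get_n_leaves`` is the current accessor for the leaf count):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(0)
X = rng.rand(200, 3)
y = rng.rand(200)

# Grow the tree best-first (highest impurity decrease first)
# and stop once the leaf budget is exhausted.
tree = DecisionTreeRegressor(max_leaf_nodes=8, random_state=0).fit(X, y)
print(tree.get_n_leaves())  # never more than 8
```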
- Changed the internal storage of decision trees to use a struct array. This fixed some small bugs, while improving code and providing a small speed gain. By `Joel Nothman`\_. - Reduce memory usage and overhead when fitting and predicting with forests of randomized trees in parallel with ``n\_jobs != 1`` by leveraging new threading backend of joblib 0.8 and releasing the GIL in the tree fitting Cython code. By `Olivier Grisel`\_ and `Gilles Louppe`\_. - Speed improvement of the `sklearn.ensemble.gradient\_boosting` module. By `Gilles Louppe`\_ and `Peter Prettenhofer`\_. - Various enhancements to the `sklearn.ensemble.gradient\_boosting` module: a ``warm\_start`` argument to fit additional trees, a ``max\_leaf\_nodes`` argument to fit GBM style trees, a ``monitor`` fit argument to inspect the estimator during training, and refactoring of the verbose code. By `Peter Prettenhofer`\_. - Faster `sklearn.ensemble.ExtraTrees` by caching feature values. By `Arnaud Joly`\_. - Faster depth-based tree building algorithm such as decision tree, random forest, extra trees or gradient tree boosting (with depth based growing strategy) by avoiding trying to split on found constant features in the sample subset. By `Arnaud Joly`\_. - Add ``min\_weight\_fraction\_leaf`` pre-pruning parameter to tree-based methods: the minimum weighted fraction of the input samples required to be at a leaf node. By `Noel Dawe`\_. - Added :func:`metrics.pairwise\_distances\_argmin\_min`, by Philippe Gervais. - Added predict method to :class:`cluster.AffinityPropagation` and :class:`cluster.MeanShift`, by `Mathieu Blondel`\_. - Vector and matrix multiplications have been optimised throughout the library by `Denis Engemann`\_, and `Alexandre Gramfort`\_. In particular, they should take less memory with older NumPy versions (prior to 1.7.2). - Precision-recall and ROC examples now use train\_test\_split, and have more explanation of why these metrics are useful. 
By `Kyle Kastner`\_ - The training algorithm for :class:`decomposition.NMF` is faster for sparse matrices and has much lower memory complexity, meaning it will scale up gracefully to large datasets. By `Lars Buitinck`\_. - Added ``svd\_method`` option with default value "randomized" to :class:`decomposition.FactorAnalysis` to save memory and significantly speed up computation by `Denis Engemann`\_ and `Alexandre Gramfort`\_. - Changed `cross\_validation.StratifiedKFold` to try and preserve as much of the original ordering of samples as possible so as not to hide overfitting on datasets with
a non-negligible level of samples dependency. By `Daniel Nouri`\_ and `Olivier Grisel`\_. - Add multi-output support to :class:`gaussian\_process.GaussianProcessRegressor` by John Novak. - Support for precomputed distance matrices in nearest neighbor estimators by `Robert Layton`\_ and `Joel Nothman`\_. - Norm computations optimized for NumPy 1.6 and later versions by `Lars Buitinck`\_. In particular, the k-means algorithm no longer needs a temporary data structure the size of its input. - :class:`dummy.DummyClassifier` can now be used to predict a constant output value. By `Manoj Kumar`\_. - :class:`dummy.DummyRegressor` now has a ``strategy`` parameter which allows predicting the mean, the median of the training set, or a constant output value. By :user:`Maheshakya Wijewardena `. - Multi-label classification output in multilabel indicator format is now supported by :func:`metrics.roc\_auc\_score` and :func:`metrics.average\_precision\_score` by `Arnaud Joly`\_. - Significant performance improvements (more than 100x speedup for large problems) in :class:`isotonic.IsotonicRegression` by `Andrew Tulloch`\_. - Speed and memory usage improvements to the SGD algorithm for linear models: it now uses threads, not separate processes, when ``n\_jobs>1``. By `Lars Buitinck`\_. - Grid search and cross validation allow NaNs in the input arrays so that preprocessors such as `preprocessing.Imputer` can be trained within the cross validation loop, avoiding potentially skewed results. - Ridge regression can now deal with sample weights in feature space (previously only in sample space). By :user:`Michael Eickenberg `. 
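The new ``strategy`` parameter of :class:`dummy.DummyRegressor` mentioned above can be sketched quickly (features are ignored by design, which is the point of a dummy baseline):

```python
import numpy as np
from sklearn.dummy import DummyRegressor

X = np.zeros((4, 1))            # features are ignored
y = [1.0, 2.0, 3.0, 10.0]

mean_pred = DummyRegressor(strategy="mean").fit(X, y).predict(X)
median_pred = DummyRegressor(strategy="median").fit(X, y).predict(X)
const_pred = DummyRegressor(strategy="constant", constant=0.0).fit(X, y).predict(X)

print(mean_pred[0], median_pred[0], const_pred[0])  # 4.0 2.5 0.0
```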
Both solutions are provided by the Cholesky solver. - Several classification and regression metrics now support weighted samples with the new ``sample\_weight`` argument: :func:`metrics.accuracy\_score`, :func:`metrics.zero\_one\_loss`, :func:`metrics.precision\_score`, :func:`metrics.average\_precision\_score`, :func:`metrics.f1\_score`, :func:`metrics.fbeta\_score`, :func:`metrics.recall\_score`, :func:`metrics.roc\_auc\_score`, :func:`metrics.explained\_variance\_score`, :func:`metrics.mean\_squared\_error`, :func:`metrics.mean\_absolute\_error`, :func:`metrics.r2\_score`. By `Noel Dawe`\_. - Speed up of the sample generator :func:`datasets.make\_multilabel\_classification`. By `Joel Nothman`\_. Documentation improvements ........................... - The Working With Text Data tutorial has now been worked into the main documentation's tutorial section. Includes exercises and skeletons for tutorial presentation. Original tutorial created by several authors including `Olivier Grisel`\_, Lars Buitinck and many others. Tutorial integration into the scikit-learn documentation by `Jaques Grobler`\_. - Added :ref:`Computational Performance ` documentation. Discussion and examples of prediction latency / throughput and different factors that have influence over speed. Additional tips for building faster models and choosing a relevant compromise between speed and predictive power. By :user:`Eustache Diemert `. Bug fixes ......... - Fixed bug in :class:`decomposition.MiniBatchDictionaryLearning`: ``partial\_fit`` was not working properly. - Fixed bug in `linear\_model.stochastic\_gradient`: ``l1\_ratio`` was used as ``(1.0 - l1\_ratio)``. - Fixed bug in :class:`multiclass.OneVsOneClassifier` with string labels. - Fixed a bug in :class:`LassoCV ` and :class:`ElasticNetCV `: they would not pre-compute the Gram matrix with ``precompute=True`` or ``precompute="auto"`` and ``n\_samples > n\_features``. By `Manoj Kumar`\_. 
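The new ``sample_weight`` argument described above reweights each sample's contribution to a metric; a minimal sketch with :func:`metrics.accuracy_score`:

```python
from sklearn.metrics import accuracy_score

y_true = [0, 1, 1, 0]
y_pred = [0, 1, 0, 0]
w = [1.0, 1.0, 2.0, 1.0]  # the one misclassified sample counts double

plain = accuracy_score(y_true, y_pred)                      # 3 / 4
weighted = accuracy_score(y_true, y_pred, sample_weight=w)  # 3 / 5
print(plain, weighted)  # 0.75 0.6
```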
- Fixed incorrect estimation of the degrees of freedom in :func:`feature\_selection.f\_regression` when variates are not centered. By :user:`Virgile Fritsch `. - Fixed a race condition in parallel processing with ``pre\_dispatch != "all"`` (for instance, in ``cross\_val\_score``). By `Olivier Grisel`\_. - Raise error in :class:`cluster.FeatureAgglomeration` and `cluster.WardAgglomeration` when no samples are given, rather than returning meaningless clustering. - Fixed bug in `gradient\_boosting.GradientBoostingRegressor` with ``loss='huber'``: ``gamma`` might have not been initialized. - Fixed feature importances as computed with a forest of randomized trees when fit with ``sample\_weight != None`` and/or with ``bootstrap=True``. By `Gilles Louppe`\_. API changes summary ------------------- - `sklearn.hmm` is deprecated. Its removal is planned for the 0.17 release. - Use of `covariance.EllipticEnvelop` has now been removed after deprecation. Please use :class:`covariance.EllipticEnvelope` instead. -
`cluster.Ward` is deprecated. Use :class:`cluster.AgglomerativeClustering` instead. - `cluster.WardClustering` is deprecated. Use :class:`cluster.AgglomerativeClustering` instead. - `cross\_validation.Bootstrap` is deprecated. `cross\_validation.KFold` or `cross\_validation.ShuffleSplit` are recommended instead. - Direct support for the sequence of sequences (or list of lists) multilabel format is deprecated. To convert to and from the supported binary indicator matrix format, use :class:`preprocessing.MultiLabelBinarizer`. By `Joel Nothman`\_. - Add score method to :class:`decomposition.PCA` following the model of probabilistic PCA and deprecate `ProbabilisticPCA` model whose score implementation is not correct. The computation now also exploits the matrix inversion lemma for faster computation. By `Alexandre Gramfort`\_. - The score method of :class:`decomposition.FactorAnalysis` now returns the average log-likelihood of the samples. Use score\_samples to get log-likelihood of each sample. By `Alexandre Gramfort`\_. - Generating boolean masks (the setting ``indices=False``) from cross-validation generators is deprecated. Support for masks will be removed in 0.17. The generators have produced arrays of indices by default since 0.10. By `Joel Nothman`\_. - 1-d arrays containing strings with ``dtype=object`` (as used in Pandas) are now considered valid classification targets. This fixes a regression from version 0.13 in some classifiers. By `Joel Nothman`\_. - Fix wrong ``explained\_variance\_ratio\_`` attribute in `RandomizedPCA`. By `Alexandre Gramfort`\_. 
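The :class:`preprocessing.MultiLabelBinarizer` conversion recommended above, between sequence-of-sequences labels and the binary indicator matrix, looks like this in practice:

```python
from sklearn.preprocessing import MultiLabelBinarizer

# Sequence-of-sequences multilabel format (now deprecated as direct input).
seq_of_seqs = [["comedy", "drama"], ["drama"], ["action"]]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(seq_of_seqs)      # binary indicator matrix
print(list(mlb.classes_))               # ['action', 'comedy', 'drama']
print(Y.tolist())                       # [[0, 1, 1], [0, 0, 1], [1, 0, 0]]
print(mlb.inverse_transform(Y))         # back to tuples of labels
```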
- Fit alphas for each ``l1\_ratio`` instead of ``mean\_l1\_ratio`` in :class:`linear\_model.ElasticNetCV` and :class:`linear\_model.LassoCV`. This changes the shape of ``alphas\_`` from ``(n\_alphas,)`` to ``(n\_l1\_ratio, n\_alphas)`` if the ``l1\_ratio`` provided is a 1-D array-like object of length greater than one. By `Manoj Kumar`\_. - Fix :class:`linear\_model.ElasticNetCV` and :class:`linear\_model.LassoCV` when fitting intercept and input data is sparse. The automatic grid of alphas was not computed correctly and the scaling with normalize was wrong. By `Manoj Kumar`\_. - Fix wrong maximal number of features drawn (``max\_features``) at each split for decision trees, random forests and gradient tree boosting. Previously, the count for the number of drawn features started only after one non-constant feature was found in the split. This bug fix will affect computational and generalization performance of those algorithms in the presence of constant features. To get back previous generalization performance, you should modify the value of ``max\_features``. By `Arnaud Joly`\_. - Fix wrong maximal number of features drawn (``max\_features``) at each split for :class:`ensemble.ExtraTreesClassifier` and :class:`ensemble.ExtraTreesRegressor`. Previously, only non-constant features in the split were counted as drawn. Now constant features are counted as drawn. Furthermore, at least one feature must be non-constant in order to make a valid split. This bug fix will affect computational and generalization performance of extra trees in the presence of constant features. To get back previous generalization performance, you should modify the value of ``max\_features``. By `Arnaud Joly`\_. 
- Fix `cross\_validation.Bootstrap` to return ``ValueError`` when ``n\_train + n\_test > n``. By :user:`Ronald Phlypo `. People ------ List of contributors for release 0.15 by number of commits. \* 312 Olivier Grisel \* 275 Lars Buitinck \* 221 Gael Varoquaux \* 148 Arnaud Joly \* 134 Johannes Schönberger \* 119 Gilles Louppe \* 113 Joel Nothman \* 111 Alexandre Gramfort \* 95 Jaques Grobler \* 89 Denis Engemann \* 83 Peter Prettenhofer \* 83 Alexander Fabisch \* 62 Mathieu Blondel \* 60 Eustache Diemert \* 60 Nelle Varoquaux \* 49 Michael Bommarito \* 45 Manoj-Kumar-S \* 28 Kyle Kastner
\* 26 Andreas Mueller \* 22 Noel Dawe \* 21 Maheshakya Wijewardena \* 21 Brooke Osborn \* 21 Hamzeh Alsalhi \* 21 Jake VanderPlas \* 21 Philippe Gervais \* 19 Bala Subrahmanyam Varanasi \* 12 Ronald Phlypo \* 10 Mikhail Korobov \* 8 Thomas Unterthiner \* 8 Jeffrey Blackburne \* 8 eltermann \* 8 bwignall \* 7 Ankit Agrawal \* 7 CJ Carey \* 6 Daniel Nouri \* 6 Chen Liu \* 6 Michael Eickenberg \* 6 ugurthemaster \* 5 Aaron Schumacher \* 5 Baptiste Lagarde \* 5 Rajat Khanduja \* 5 Robert McGibbon \* 5 Sergio Pascual \* 4 Alexis Metaireau \* 4 Ignacio Rossi \* 4 Virgile Fritsch \* 4 Sebastian Säger \* 4 Ilambharathi Kanniah \* 4 sdenton4 \* 4 Robert Layton \* 4 Alyssa \* 4 Amos Waterland \* 3 Andrew Tulloch \* 3 murad \* 3 Steven Maude \* 3 Karol Pysniak \* 3 Jacques Kvam \* 3 cgohlke \* 3 cjlin \* 3 Michael Becker \* 3 hamzeh \* 3 Eric Jacobsen \* 3 john collins \* 3 kaushik94 \* 3 Erwin Marsi \* 2 csytracy \* 2 LK \* 2 Vlad Niculae \* 2 Laurent Direr \* 2 Erik Shilts \* 2 Raul Garreta \* 2 Yoshiki Vázquez Baeza \* 2 Yung Siang Liau \* 2 abhishek thakur \* 2 James Yu \* 2 Rohit Sivaprasad \* 2 Roland Szabo \* 2 amormachine \* 2 Alexis Mignon \* 2 Oscar Carlsson \* 2 Nantas Nardelli \* 2 jess010 \* 2 kowalski87 \* 2 Andrew Clegg \* 2 Federico Vaggi \* 2 Simon Frid \* 2 Félix-Antoine Fortin \* 1 Ralf Gommers \* 1 t-aft \* 1 Ronan Amicel \* 1 Rupesh Kumar Srivastava \* 1 Ryan Wang \* 1 Samuel Charron \* 1 Samuel St-Jean \* 1 Fabian Pedregosa \* 1 Skipper Seabold \* 1 Stefan Walk \* 1 Stefan van der Walt \* 1 Stephan Hoyer \* 1 Allen Riddell \* 1 Valentin Haenel \* 1 Vijay Ramesh \* 1 Will Myers \* 1 Yaroslav Halchenko \* 1 Yoni Ben-Meshulam \* 1 Yury V. 
Zaytsev \* 1 adrinjalali \* 1 ai8rahim \* 1 alemagnani \* 1 alex \* 1 benjamin wilson \* 1 chalmerlowe \* 1 dzikie drożdże \* 1 jamestwebber \* 1 matrixorz \* 1 popo \* 1 samuela \* 1 François Boulogne \* 1 Alexander Measure \* 1 Ethan White \* 1 Guilherme Trein \* 1 Hendrik Heuer \* 1 IvicaJovic \* 1 Jan Hendrik Metzen \* 1 Jean Michel Rouly \* 1 Eduardo Ariño de la Rubia \* 1 Jelle Zijlstra \* 1 Eddy L O Jansson \* 1 Denis \* 1 John \* 1 John Schmidt \* 1 Jorge Cañardo Alastuey \* 1 Joseph Perla \* 1 Joshua Vredevoogd \* 1 José Ricardo \* 1 Julien Miotte \* 1 Kemal Eren \* 1 Kenta Sato \* 1 David Cournapeau \* 1 Kyle Kelley \* 1 Daniele Medri \* 1 Laurent Luce \* 1 Laurent Pierron \* 1 Luis Pedro Coelho \* 1 DanielWeitzenfeld \* 1 Craig Thompson \* 1 Chyi-Kwei Yau \* 1 Matthew Brett \* 1 Matthias Feurer \* 1 Max Linke \* 1 Chris Filo Gorgolewski \* 1 Charles Earl \* 1 Michael Hanke \* 1 Michele Orrù \* 1 Bryan Lunt \* 1 Brian Kearns \* 1 Paul Butler \* 1 Paweł Mandera \* 1
https://github.com/scikit-learn/scikit-learn/blob/main//doc/whats_new/v0.15.rst
* 1 Chyi-Kwei Yau * 1 Matthew Brett * 1 Matthias Feurer * 1 Max Linke * 1 Chris Filo Gorgolewski * 1 Charles Earl * 1 Michael Hanke * 1 Michele Orrù * 1 Bryan Lunt * 1 Brian Kearns * 1 Paul Butler * 1 Paweł Mandera * 1 Peter * 1 Andrew Ash * 1 Pietro Zambelli * 1 staubda
.. include:: _contributors.rst

.. currentmodule:: sklearn

============
Version 0.14
============

.. _changes_0_14:

Version 0.14
===============

**August 7, 2013**

Changelog
---------

- Missing values with sparse and dense matrices can be imputed with the transformer `preprocessing.Imputer` by `Nicolas Trésegnie`_.
- The core implementation of decision trees has been rewritten from scratch, allowing for faster tree induction and lower memory consumption in all tree-based estimators. By `Gilles Louppe`_.
- Added :class:`ensemble.AdaBoostClassifier` and :class:`ensemble.AdaBoostRegressor`, by `Noel Dawe`_ and `Gilles Louppe`_. See the :ref:`AdaBoost ` section of the user guide for details and examples.
- Added `grid_search.RandomizedSearchCV` and `grid_search.ParameterSampler` for randomized hyperparameter optimization. By `Andreas Müller`_.
- Added :ref:`biclustering ` algorithms (`sklearn.cluster.bicluster.SpectralCoclustering` and `sklearn.cluster.bicluster.SpectralBiclustering`), data generation methods (:func:`sklearn.datasets.make_biclusters` and :func:`sklearn.datasets.make_checkerboard`), and scoring metrics (:func:`sklearn.metrics.consensus_score`). By `Kemal Eren`_.
- Added :ref:`Restricted Boltzmann Machines` (:class:`neural_network.BernoulliRBM`). By `Yann Dauphin`_.
- Python 3 support by :user:`Justin Vincent `, `Lars Buitinck`_, :user:`Subhodeep Moitra ` and `Olivier Grisel`_. All tests now pass under Python 3.3.
- Ability to pass one penalty (alpha value) per target in :class:`linear_model.Ridge`, by @eickenberg and `Mathieu Blondel`_.
- Fixed `sklearn.linear_model.stochastic_gradient.py` L2 regularization issue (minor practical significance). By :user:`Norbert Crombach ` and `Mathieu Blondel`_.
- Added an interactive version of `Andreas Müller`_'s `Machine Learning Cheat Sheet (for scikit-learn) `_ to the documentation. See :ref:`Choosing the right estimator `. By `Jaques Grobler`_.
- `grid_search.GridSearchCV` and `cross_validation.cross_val_score` now support the use of advanced scoring functions such as area under the ROC curve and f-beta scores. See :ref:`scoring_parameter` for details. By `Andreas Müller`_ and `Lars Buitinck`_. Passing a function from :mod:`sklearn.metrics` as ``score_func`` is deprecated.
- Multi-label classification output is now supported by :func:`metrics.accuracy_score`, :func:`metrics.zero_one_loss`, :func:`metrics.f1_score`, :func:`metrics.fbeta_score`, :func:`metrics.classification_report`, :func:`metrics.precision_score` and :func:`metrics.recall_score` by `Arnaud Joly`_.
- Two new metrics :func:`metrics.hamming_loss` and `metrics.jaccard_similarity_score` are added with multi-label support by `Arnaud Joly`_.
- Speed and memory usage improvements in :class:`feature_extraction.text.CountVectorizer` and :class:`feature_extraction.text.TfidfVectorizer`, by Jochen Wersdörfer and Roman Sinayev.
- The ``min_df`` parameter in :class:`feature_extraction.text.CountVectorizer` and :class:`feature_extraction.text.TfidfVectorizer`, which used to be 2, has been reset to 1 to avoid unpleasant surprises (empty vocabularies) for novice users who try it out on tiny document collections. A value of at least 2 is still recommended for practical use.
- :class:`svm.LinearSVC`, :class:`linear_model.SGDClassifier` and :class:`linear_model.SGDRegressor` now have a ``sparsify`` method that converts their ``coef_`` into a sparse matrix, meaning stored models trained using these estimators can be made much more compact.
- :class:`linear_model.SGDClassifier` now produces multiclass probability estimates when trained under log loss or modified Huber loss.
- Hyperlinks to documentation in example code on the website by :user:`Martin Luessi `.
- Fixed bug in :class:`preprocessing.MinMaxScaler` causing incorrect scaling of the features for non-default ``feature_range`` settings. By `Andreas Müller`_.
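The ``sparsify`` method mentioned above can be sketched as follows, on hypothetical toy data (not from the changelog): after fitting an L1-penalized linear model, ``coef_`` is converted in place to a scipy sparse matrix, which is far more compact when most weights are zero.

```python
# Minimal sketch of ``sparsify`` on SGDClassifier (toy data).
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.RandomState(0)
X = rng.rand(50, 10)
y = (X[:, 0] > 0.5).astype(int)

clf = SGDClassifier(penalty="l1", random_state=0).fit(X, y)
clf.sparsify()            # coef_ is now a scipy.sparse matrix
preds = clf.predict(X[:3])  # prediction still works on the sparsified model
```

Pickling the estimator after ``sparsify()`` stores only the non-zero coefficients.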
- ``max_features`` in :class:`tree.DecisionTreeClassifier`, :class:`tree.DecisionTreeRegressor` and all derived ensemble estimators now support percentage values. By `Gilles Louppe`_.
- Performance improvements in :class:`isotonic.IsotonicRegression` by `Nelle Varoquaux`_.
- :func:`metrics.accuracy_score` has an option normalize to return the fraction or the number of correctly classified samples by `Arnaud Joly`_.
- Added :func:`metrics.log_loss` that computes log loss, aka cross-entropy loss. By Jochen Wersdörfer and `Lars Buitinck`_.
- A bug that caused :class:`ensemble.AdaBoostClassifier` to output incorrect probabilities has been fixed.
- Feature selectors now share a mixin providing consistent ``transform``, ``inverse_transform`` and ``get_support`` methods. By `Joel Nothman`_.
- A fitted `grid_search.GridSearchCV` or `grid_search.RandomizedSearchCV` can now generally be pickled. By `Joel Nothman`_.
- Refactored and vectorized implementation of :func:`metrics.roc_curve` and :func:`metrics.precision_recall_curve`. By `Joel Nothman`_.
https://github.com/scikit-learn/scikit-learn/blob/main//doc/whats_new/v0.14.rst
- The new estimator :class:`sklearn.decomposition.TruncatedSVD` performs dimensionality reduction using SVD on sparse matrices, and can be used for latent semantic analysis (LSA). By `Lars Buitinck`_.
- Added self-contained example of out-of-core learning on text data :ref:`sphx_glr_auto_examples_applications_plot_out_of_core_classification.py`. By :user:`Eustache Diemert `.
- The default number of components for `sklearn.decomposition.RandomizedPCA` is now correctly documented to be ``n_features``. This was the default behavior, so programs using it will continue to work as they did.
- :class:`sklearn.cluster.KMeans` now fits several orders of magnitude faster on sparse data (the speedup depends on the sparsity). By `Lars Buitinck`_.
- Reduce memory footprint of FastICA by `Denis Engemann`_ and `Alexandre Gramfort`_.
- Verbose output in `sklearn.ensemble.gradient_boosting` now uses a column format and prints progress in decreasing frequency. It also shows the remaining time. By `Peter Prettenhofer`_.
- `sklearn.ensemble.gradient_boosting` provides out-of-bag improvement `oob_improvement_` rather than the OOB score for model selection. An example that shows how to use OOB estimates to select the number of trees was added. By `Peter Prettenhofer`_.
- Most metrics now support string labels for multiclass classification by `Arnaud Joly`_ and `Lars Buitinck`_.
- New OrthogonalMatchingPursuitCV class by `Alexandre Gramfort`_ and `Vlad Niculae`_.
- Fixed a bug in `sklearn.covariance.GraphLassoCV`: the 'alphas' parameter now works as expected when given a list of values. By Philippe Gervais.
- Fixed an important bug in `sklearn.covariance.GraphLassoCV` that prevented all folds provided by a CV object to be used (only the first 3 were used). When providing a CV object, execution time may thus increase significantly compared to the previous version (bug results are correct now). By Philippe Gervais.
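The LSA use of ``TruncatedSVD`` mentioned above can be sketched on a tiny, made-up corpus: the tf-idf matrix stays sparse and is reduced directly, without densification.

```python
# Minimal LSA sketch with TruncatedSVD on a sparse tf-idf matrix
# (illustrative corpus, not from the changelog).
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["the cat sat", "the dog sat", "cats and dogs", "linear algebra"]
X = TfidfVectorizer().fit_transform(docs)   # scipy sparse matrix
svd = TruncatedSVD(n_components=2, random_state=0)
X_lsa = svd.fit_transform(X)                # dense (n_docs, 2) embedding
```

Unlike PCA, ``TruncatedSVD`` does not center the data, which is what makes sparse input practical.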
- `cross_validation.cross_val_score` and the `grid_search` module are now tested with multi-output data by `Arnaud Joly`_.
- :func:`datasets.make_multilabel_classification` can now return the output in label indicator multilabel format by `Arnaud Joly`_.
- K-nearest neighbors, :class:`neighbors.KNeighborsRegressor` and :class:`neighbors.KNeighborsClassifier`, and radius neighbors, :class:`neighbors.RadiusNeighborsRegressor` and :class:`neighbors.RadiusNeighborsClassifier` support multioutput data by `Arnaud Joly`_.
- Random state in LibSVM-based estimators (:class:`svm.SVC`, :class:`svm.NuSVC`, :class:`svm.OneClassSVM`, :class:`svm.SVR`, :class:`svm.NuSVR`) can now be controlled. This is useful to ensure consistency in the probability estimates for the classifiers trained with ``probability=True``. By `Vlad Niculae`_.
- Out-of-core learning support for discrete naive Bayes classifiers :class:`sklearn.naive_bayes.MultinomialNB` and :class:`sklearn.naive_bayes.BernoulliNB` by adding the ``partial_fit`` method by `Olivier Grisel`_.
- New website design and navigation by `Gilles Louppe`_, `Nelle Varoquaux`_, Vincent Michel and `Andreas Müller`_.
- Improved documentation on :ref:`multi-class, multi-label and multi-output classification ` by `Yannick Schwartz`_ and `Arnaud Joly`_.
- Better input and error handling in the :mod:`sklearn.metrics` module by `Arnaud Joly`_ and `Joel Nothman`_.
- Speed optimization of the `hmm` module by :user:`Mikhail Korobov `
- Significant speed improvements for :class:`sklearn.cluster.DBSCAN` by `cleverless `_

API changes summary
-------------------

- The `auc_score` was renamed :func:`metrics.roc_auc_score`.
- Testing scikit-learn with ``sklearn.test()`` is deprecated. Use ``nosetests sklearn`` from the command line.
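The out-of-core ``partial_fit`` API mentioned above for the discrete naive Bayes classifiers can be sketched with made-up mini-batches: data is streamed in chunks, with the full set of classes declared up front.

```python
# Minimal out-of-core sketch: MultinomialNB.partial_fit on toy batches.
import numpy as np
from sklearn.naive_bayes import MultinomialNB

rng = np.random.RandomState(0)
clf = MultinomialNB()
for _ in range(3):                      # three mini-batches of 10 samples
    X_batch = rng.randint(0, 5, size=(10, 4))   # non-negative counts
    y_batch = rng.randint(0, 2, size=10)
    clf.partial_fit(X_batch, y_batch, classes=[0, 1])
pred = clf.predict(np.array([[1, 0, 2, 1]]))
```

Only the current batch ever needs to fit in memory, so the same loop works when batches are read from disk.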
- Feature importances in :class:`tree.DecisionTreeClassifier`, :class:`tree.DecisionTreeRegressor` and all derived ensemble estimators are now computed on the fly when accessing the ``feature_importances_`` attribute. Setting ``compute_importances=True`` is no longer required. By `Gilles Louppe`_.
- :class:`linear_model.lasso_path` and :class:`linear_model.enet_path` can return their results in the same format as that of :class:`linear_model.lars_path`. This is done by setting the ``return_models`` parameter to ``False``. By `Jaques Grobler`_ and `Alexandre Gramfort`_
- `grid_search.IterGrid` was renamed to `grid_search.ParameterGrid`.
- Fixed bug in `KFold` causing imperfect class balance in some cases. By `Alexandre Gramfort`_ and Tadej Janež.
- :class:`sklearn.neighbors.BallTree` has been refactored, and a :class:`sklearn.neighbors.KDTree` has been added which shares the same interface. The Ball Tree now works with a wide variety of distance metrics. Both classes have many new methods, including single-tree and dual-tree queries, breadth-first and depth-first searching, and more advanced queries such as kernel density estimation and 2-point correlation functions. By `Jake Vanderplas`_
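The kernel density queries mentioned above are exposed through the :class:`sklearn.neighbors.KernelDensity` estimator added in this release; a minimal sketch on made-up 1-D samples:

```python
# Minimal KernelDensity sketch: fit a Gaussian KDE and evaluate the
# log-density on a grid (toy data).
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.RandomState(0)
X = rng.normal(0, 1, size=(200, 1))          # samples from N(0, 1)

kde = KernelDensity(kernel="gaussian", bandwidth=0.5).fit(X)
grid = np.linspace(-3, 3, 7)[:, None]        # points to evaluate
log_dens = kde.score_samples(grid)           # log p(x) at each grid point
```

``score_samples`` returns log-densities; exponentiate with ``np.exp`` to plot the estimated density itself.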
- Support for scipy.spatial.cKDTree within neighbors queries has been removed, and the functionality replaced with the new :class:`sklearn.neighbors.KDTree` class.
- :class:`sklearn.neighbors.KernelDensity` has been added, which performs efficient kernel density estimation with a variety of kernels.
- :class:`sklearn.decomposition.KernelPCA` now always returns output with ``n_components`` components, unless the new parameter ``remove_zero_eig`` is set to ``True``. This new behavior is consistent with the way kernel PCA was always documented; previously, the removal of components with zero eigenvalues was tacitly performed on all data.
- ``gcv_mode="auto"`` no longer tries to perform SVD on a densified sparse matrix in :class:`sklearn.linear_model.RidgeCV`.
- Sparse matrix support in `sklearn.decomposition.RandomizedPCA` is now deprecated in favor of the new ``TruncatedSVD``.
- `cross_validation.KFold` and `cross_validation.StratifiedKFold` now enforce `n_folds >= 2` otherwise a ``ValueError`` is raised. By `Olivier Grisel`_.
- :func:`datasets.load_files`'s ``charset`` and ``charset_errors`` parameters were renamed ``encoding`` and ``decode_errors``.
- Attribute ``oob_score_`` in :class:`sklearn.ensemble.GradientBoostingRegressor` and :class:`sklearn.ensemble.GradientBoostingClassifier` is deprecated and has been replaced by ``oob_improvement_``.
- Attributes in OrthogonalMatchingPursuit have been deprecated (copy_X, Gram, ...) and precompute_gram renamed precompute for consistency. See #2224.
- :class:`sklearn.preprocessing.StandardScaler` now converts integer input to float, and raises a warning.
  Previously it rounded for dense integer input.

- :class:`sklearn.multiclass.OneVsRestClassifier` now has a ``decision_function`` method. This will return the distance of each sample from the decision boundary for each class, as long as the underlying estimators implement the ``decision_function`` method. By `Kyle Kastner`_.
- Better input validation, warning on unexpected shapes for y.

People
------

List of contributors for release 0.14 by number of commits.

* 277 Gilles Louppe * 245 Lars Buitinck * 187 Andreas Mueller * 124 Arnaud Joly * 112 Jaques Grobler * 109 Gael Varoquaux * 107 Olivier Grisel * 102 Noel Dawe * 99 Kemal Eren * 79 Joel Nothman * 75 Jake VanderPlas * 73 Nelle Varoquaux * 71 Vlad Niculae * 65 Peter Prettenhofer * 64 Alexandre Gramfort * 54 Mathieu Blondel * 38 Nicolas Trésegnie * 35 eustache * 27 Denis Engemann * 25 Yann N. Dauphin * 19 Justin Vincent * 17 Robert Layton * 15 Doug Coleman * 14 Michael Eickenberg * 13 Robert Marchman * 11 Fabian Pedregosa * 11 Philippe Gervais * 10 Jim Holmström * 10 Tadej Janež * 10 syhw * 9 Mikhail Korobov * 9 Steven De Gryze * 8 sergeyf * 7 Ben Root * 7 Hrishikesh Huilgolkar * 6 Kyle Kastner * 6 Martin Luessi * 6 Rob Speer * 5 Federico Vaggi * 5 Raul Garreta * 5 Rob Zinkov * 4 Ken Geis * 3 A. Flaxman * 3 Denton Cockburn * 3 Dougal Sutherland * 3 Ian Ozsvald * 3 Johannes Schönberger * 3 Robert McGibbon * 3 Roman Sinayev * 3 Szabo Roland * 2 Diego Molla * 2 Imran Haque * 2 Jochen Wersdörfer * 2 Sergey Karayev * 2 Yannick Schwartz * 2 jamestwebber * 1 Abhijeet Kolhe * 1 Alexander Fabisch * 1 Bastiaan van den Berg * 1 Benjamin Peterson * 1 Daniel Velkov * 1 Fazlul Shahriar * 1 Felix Brockherde
* 1 Félix-Antoine Fortin * 1 Harikrishnan S * 1 Jack Hale * 1 JakeMick * 1 James McDermott * 1 John Benediktsson * 1 John Zwinck * 1 Joshua Vredevoogd * 1 Justin Pati * 1 Kevin Hughes * 1 Kyle Kelley * 1 Matthias Ekman * 1 Miroslav Shubernetskiy * 1 Naoki Orii * 1 Norbert Crombach * 1 Rafael Cunha de Almeida * 1 Rolando Espinoza La fuente * 1 Seamus Abshere * 1 Sergey Feldman * 1 Sergio Medina * 1 Stefano Lattarini * 1 Steve Koch * 1 Sturla Molden * 1 Thomas Jarosch * 1 Yaroslav Halchenko
.. include:: _contributors.rst

.. currentmodule:: sklearn

.. _release_notes_1_9:

===========
Version 1.9
===========

..
   -- UNCOMMENT WHEN 1.9.0 IS RELEASED --
   For a short description of the main highlights of the release, please refer to :ref:`sphx_glr_auto_examples_release_highlights_plot_release_highlights_1_9_0.py`.

..
   DELETE WHEN 1.9.0 IS RELEASED
   Since October 2024, DO NOT add your changelog entry in this file.

..
   Instead, create a file named `..rst` in the relevant sub-folder in `doc/whats_new/upcoming_changes/`. For full details, see: https://github.com/scikit-learn/scikit-learn/blob/main/doc/whats_new/upcoming_changes/README.md

.. include:: changelog_legend.inc

.. towncrier release notes start

.. rubric:: Code and documentation contributors

Thanks to everyone who has contributed to the maintenance and improvement of the project since version 1.8, including: TODO: update at the time of the release.
https://github.com/scikit-learn/scikit-learn/blob/main//doc/whats_new/v1.9.rst
.. include:: _contributors.rst

.. currentmodule:: sklearn

============
Version 0.16
============

.. _changes_0_16_1:

Version 0.16.1
===============

**April 14, 2015**

Changelog
---------

Bug fixes
.........

- Allow input data larger than ``block_size`` in :class:`covariance.LedoitWolf` by `Andreas Müller`_.
- Fix a bug in :class:`isotonic.IsotonicRegression` deduplication that caused unstable result in :class:`calibration.CalibratedClassifierCV` by `Jan Hendrik Metzen`_.
- Fix sorting of labels in :func:`preprocessing.label_binarize` by Michael Heilman.
- Fix several stability and convergence issues in :class:`cross_decomposition.CCA` and :class:`cross_decomposition.PLSCanonical` by `Andreas Müller`_
- Fix a bug in :class:`cluster.KMeans` when ``precompute_distances=False`` on fortran-ordered data.
- Fix a speed regression in :class:`ensemble.RandomForestClassifier`'s ``predict`` and ``predict_proba`` by `Andreas Müller`_.
- Fix a regression where ``utils.shuffle`` converted lists and dataframes to arrays, by `Olivier Grisel`_

.. _changes_0_16:

Version 0.16
============

**March 26, 2015**

Highlights
-----------

- Speed improvements (notably in :class:`cluster.DBSCAN`), reduced memory requirements, bug-fixes and better default settings.
- Multinomial Logistic regression and a path algorithm in :class:`linear_model.LogisticRegressionCV`.
- Out-of-core learning of PCA via :class:`decomposition.IncrementalPCA`.
- Probability calibration of classifiers using :class:`calibration.CalibratedClassifierCV`.
- :class:`cluster.Birch` clustering method for large-scale datasets.
- Scalable approximate nearest neighbors search with Locality-sensitive hashing forests in `neighbors.LSHForest`.
- Improved error messages and better validation when using malformed input data.
- More robust integration with pandas dataframes.

Changelog
---------

New features
............
- The new `neighbors.LSHForest` implements locality-sensitive hashing for approximate nearest neighbors search. By :user:`Maheshakya Wijewardena`.
- Added :class:`svm.LinearSVR`. This class uses the liblinear implementation of Support Vector Regression which is much faster for large sample sizes than :class:`svm.SVR` with linear kernel. By `Fabian Pedregosa`_ and Qiang Luo.
- Incremental fit for :class:`GaussianNB `.
- Added ``sample_weight`` support to :class:`dummy.DummyClassifier` and :class:`dummy.DummyRegressor`. By `Arnaud Joly`_.
- Added the :func:`metrics.label_ranking_average_precision_score` metrics. By `Arnaud Joly`_.
- Add the :func:`metrics.coverage_error` metrics. By `Arnaud Joly`_.
- Added :class:`linear_model.LogisticRegressionCV`. By `Manoj Kumar`_, `Fabian Pedregosa`_, `Gael Varoquaux`_ and `Alexandre Gramfort`_.
- Added ``warm_start`` constructor parameter to make it possible for any trained forest model to grow additional trees incrementally. By :user:`Laurent Direr`.
- Added ``sample_weight`` support to :class:`ensemble.GradientBoostingClassifier` and :class:`ensemble.GradientBoostingRegressor`. By `Peter Prettenhofer`_.
- Added :class:`decomposition.IncrementalPCA`, an implementation of the PCA algorithm that supports out-of-core learning with a ``partial_fit`` method. By `Kyle Kastner`_.
- Averaged SGD for :class:`SGDClassifier ` and :class:`SGDRegressor ` By :user:`Danny Sullivan `.
- Added `cross_val_predict` function which computes cross-validated estimates. By `Luis Pedro Coelho`_
- Added :class:`linear_model.TheilSenRegressor`, a robust generalized-median-based estimator. By :user:`Florian Wilhelm `.
- Added :func:`metrics.median_absolute_error`, a robust metric. By `Gael Varoquaux`_ and :user:`Florian Wilhelm `.
- Add :class:`cluster.Birch`, an online clustering algorithm. By `Manoj Kumar`_, `Alexandre Gramfort`_ and `Joel Nothman`_.
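The out-of-core ``partial_fit`` pattern of the new ``IncrementalPCA`` mentioned above can be sketched with made-up chunks; each ``partial_fit`` call updates the components without ever holding the full dataset in memory.

```python
# Minimal IncrementalPCA sketch: stream data in chunks (toy data).
import numpy as np
from sklearn.decomposition import IncrementalPCA

rng = np.random.RandomState(0)
ipca = IncrementalPCA(n_components=2)
for _ in range(5):                   # five chunks of 20 samples, 6 features
    ipca.partial_fit(rng.rand(20, 6))
X_new = ipca.transform(rng.rand(3, 6))   # project new samples to 2 dims
```

Each chunk must contain at least ``n_components`` samples; in practice the chunks would come from disk or a database cursor.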
- Added shrinkage support to :class:`discriminant_analysis.LinearDiscriminantAnalysis` using two new solvers. By :user:`Clemens Brunner ` and `Martin Billinger`_.
- Added :class:`kernel_ridge.KernelRidge`, an implementation of kernelized ridge regression. By `Mathieu Blondel`_ and `Jan Hendrik Metzen`_.
- All solvers in :class:`linear_model.Ridge` now support `sample_weight`. By `Mathieu Blondel`_.
- Added `cross_validation.PredefinedSplit` cross-validation for fixed user-provided cross-validation folds. By :user:`Thomas Unterthiner `.
- Added :class:`calibration.CalibratedClassifierCV`, an approach for calibrating the predicted probabilities of a classifier. By `Alexandre Gramfort`_, `Jan Hendrik Metzen`_, `Mathieu Blondel`_ and :user:`Balazs Kegl `.

Enhancements
............

- Add option ``return_distance`` in `hierarchical.ward_tree` to return distances between nodes for both structured and unstructured versions of the algorithm. By `Matteo Visconti di Oleggio Castello`_. The same option was added in `hierarchical.linkage_tree`. By `Manoj Kumar`_
- Add support for sample weights in scorer objects. Metrics with sample weight support will automatically benefit from it. By `Noel Dawe`_ and `Vlad Niculae`_.
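The ``CalibratedClassifierCV`` wrapper mentioned above can be sketched on hypothetical toy data (present-day API): ``LinearSVC`` has no ``predict_proba``, but the calibrated wrapper exposes one built from internally cross-validated sigmoid fits.

```python
# Minimal probability-calibration sketch with CalibratedClassifierCV.
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=200, random_state=0)
calibrated = CalibratedClassifierCV(LinearSVC(), cv=3)
calibrated.fit(X, y)
proba = calibrated.predict_proba(X[:5])   # each row sums to 1
```

Calibration is most useful when the downstream decision depends on the probability values themselves rather than only on the predicted label.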
https://github.com/scikit-learn/scikit-learn/blob/main//doc/whats_new/v0.16.rst
- Added ``newton-cg`` and `lbfgs` solver support in :class:`linear_model.LogisticRegression`. By `Manoj Kumar`_.
- Add ``selection="random"`` parameter to implement stochastic coordinate descent for :class:`linear_model.Lasso`, :class:`linear_model.ElasticNet` and related. By `Manoj Kumar`_.
- Add ``sample_weight`` parameter to `metrics.jaccard_similarity_score` and :func:`metrics.log_loss`. By :user:`Jatin Shah `.
- Support sparse multilabel indicator representation in :class:`preprocessing.LabelBinarizer` and :class:`multiclass.OneVsRestClassifier` (by :user:`Hamzeh Alsalhi ` with thanks to Rohit Sivaprasad), as well as evaluation metrics (by `Joel Nothman`_).
- Add ``sample_weight`` parameter to `metrics.jaccard_similarity_score`. By `Jatin Shah`.
- Add support for multiclass in `metrics.hinge_loss`. Added ``labels=None`` as optional parameter. By `Saurabh Jha`.
- Add ``sample_weight`` parameter to `metrics.hinge_loss`. By `Saurabh Jha`.
- Add ``multi_class="multinomial"`` option in :class:`linear_model.LogisticRegression` to implement a Logistic Regression solver that minimizes the cross-entropy or multinomial loss instead of the default One-vs-Rest setting. Supports `lbfgs` and `newton-cg` solvers. By `Lars Buitinck`_ and `Manoj Kumar`_. Solver option `newton-cg` by Simon Wu.
- ``DictVectorizer`` can now perform ``fit_transform`` on an iterable in a single pass, when giving the option ``sort=False``. By :user:`Dan Blanchard `.
- :class:`model_selection.GridSearchCV` and :class:`model_selection.RandomizedSearchCV` can now be configured to work with estimators that may fail and raise errors on individual folds. This option is controlled by the `error_score` parameter. This does not affect errors raised on re-fit. By :user:`Michal Romaniuk `.
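The ``sample_weight`` support added to :func:`metrics.log_loss` can be sketched with made-up predictions: the loss becomes a weighted average of per-sample losses, so zeroing a sample's weight drops its contribution entirely.

```python
# Minimal sketch of sample_weight in log_loss (toy predictions).
from sklearn.metrics import log_loss

y_true = [0, 1, 1]
y_prob = [[0.9, 0.1], [0.2, 0.8], [0.4, 0.6]]

unweighted = log_loss(y_true, y_prob)
# Zero out the worst-predicted sample; the weighted loss drops.
weighted = log_loss(y_true, y_prob, sample_weight=[1.0, 1.0, 0.0])
```

The same keyword now flows through scorer objects, so weighted metrics can be used directly inside grid search.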
- Add ``digits`` parameter to `metrics.classification_report` to allow report to show different precision of floating point numbers. By :user:`Ian Gilmore `.
- Add a quantile prediction strategy to the :class:`dummy.DummyRegressor`. By :user:`Aaron Staple `.
- Add ``handle_unknown`` option to :class:`preprocessing.OneHotEncoder` to handle unknown categorical features more gracefully during transform. By `Manoj Kumar`_.
- Added support for sparse input data to decision trees and their ensembles. By `Fares Hedyati`_ and `Arnaud Joly`_.
- Optimized :class:`cluster.AffinityPropagation` by reducing the number of memory allocations of large temporary data-structures. By `Antony Lee`_.
- Parallelization of the computation of feature importances in random forest. By `Olivier Grisel`_ and `Arnaud Joly`_.
- Add ``n_iter_`` attribute to estimators that accept a ``max_iter`` attribute in their constructor. By `Manoj Kumar`_.
- Added decision function for :class:`multiclass.OneVsOneClassifier` By `Raghav RV`_ and :user:`Kyle Beauchamp `.
- `neighbors.kneighbors_graph` and `radius_neighbors_graph` support non-Euclidean metrics. By `Manoj Kumar`_
- Parameter ``connectivity`` in :class:`cluster.AgglomerativeClustering` and family now accept callables that return a connectivity matrix. By `Manoj Kumar`_.
- Sparse support for :func:`metrics.pairwise.paired_distances`. By `Joel Nothman`_.
- :class:`cluster.DBSCAN` now supports sparse input and sample weights and has been optimized: the inner loop has been rewritten in Cython and radius neighbors queries are now computed in batch. By `Joel Nothman`_ and `Lars Buitinck`_.
- Add ``class_weight`` parameter to automatically weight samples by class frequency for :class:`ensemble.RandomForestClassifier`, :class:`tree.DecisionTreeClassifier`, :class:`ensemble.ExtraTreesClassifier` and :class:`tree.ExtraTreeClassifier`. By `Trevor Stephens`_.
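The ``handle_unknown`` behaviour mentioned above can be sketched with the present-day string-category API (the 0.16-era encoder worked on integer-coded features): a category unseen during ``fit`` is encoded as an all-zeros row instead of raising an error.

```python
# Minimal sketch of OneHotEncoder(handle_unknown="ignore"), toy categories.
from sklearn.preprocessing import OneHotEncoder

enc = OneHotEncoder(handle_unknown="ignore")
enc.fit([["red"], ["green"]])                # two known categories
row = enc.transform([["blue"]]).toarray()    # unseen category -> all zeros
```

Without ``handle_unknown="ignore"``, the same ``transform`` call raises on the unseen category.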
- `grid_search.RandomizedSearchCV` now does sampling without replacement if all parameters are given as lists. By `Andreas Müller`_.
- Parallelized calculation of :func:`metrics.pairwise_distances` is now supported for scipy metrics and custom callables. By `Joel Nothman`_.
- Allow the fitting and scoring of all clustering algorithms in :class:`pipeline.Pipeline`. By `Andreas Müller`_.
- More robust seeding and improved error messages in :class:`cluster.MeanShift` by `Andreas Müller`_.
- Make the stopping criterion for `mixture.GMM`, `mixture.DPGMM` and `mixture.VBGMM` less dependent on the number of samples by thresholding the average log-likelihood change instead of its sum over all samples. By `Hervé Bredin`_.
- The outcome of :func:`manifold.spectral_embedding` was made deterministic by flipping the sign of eigenvectors. By :user:`Hasil Sharma `.
- Significant performance and memory usage improvements in :class:`preprocessing.PolynomialFeatures`. By `Eric Martin`_.
- Numerical stability improvements for :class:`preprocessing.StandardScaler` and :func:`preprocessing.scale`. By `Nicolas Goix`_
- :class:`svm.SVC` fitted on sparse input now implements ``decision_function``. By `Rob Zinkov`_ and `Andreas Müller`_.
- `cross_validation.train_test_split` now preserves the input type, instead of converting to numpy arrays.

Documentation improvements
..........................

- Added example of using :class:`pipeline.FeatureUnion` for heterogeneous input. By :user:`Matt Terry `
- Documentation on scorers was improved, to highlight the handling of loss functions. By :user:`Matt Pico `.
- A discrepancy between liblinear output and scikit-learn's wrappers is now noted. By `Manoj Kumar`_.
- Improved documentation generation: examples referring to a class or function are now shown in a gallery on the class/function's API reference page. By `Joel Nothman`_.
- More explicit documentation of sample generators and of data transformation. By `Joel Nothman`_.
- :class:`sklearn.neighbors.BallTree` and :class:`sklearn.neighbors.KDTree` used to point to empty pages stating that they are aliases of BinaryTree. This has been fixed to show the correct class docs. By `Manoj Kumar`_.
- Added silhouette plots for analysis of KMeans clustering using :func:`metrics.silhouette_samples` and :func:`metrics.silhouette_score`. See :ref:`sphx_glr_auto_examples_cluster_plot_kmeans_silhouette_analysis.py`

Bug fixes
.........

- Metaestimators now support ducktyping for the presence of ``decision_function``, ``predict_proba`` and other methods. This fixes behavior of `grid_search.GridSearchCV`, `grid_search.RandomizedSearchCV`, :class:`pipeline.Pipeline`, :class:`feature_selection.RFE`, :class:`feature_selection.RFECV` when nested.
  By `Joel Nothman`\_.
- The ``scoring`` attribute of grid-search and cross-validation methods is no longer ignored when a `grid\_search.GridSearchCV` is given as a base estimator or the base estimator doesn't have predict.
- The function `hierarchical.ward\_tree` now returns the children in the same order for both the structured and unstructured versions. By `Matteo Visconti di Oleggio Castello`\_.
- :class:`feature\_selection.RFECV` now correctly handles cases when ``step`` is not equal to 1. By :user:`Nikolay Mayorov `.
- :class:`decomposition.PCA` now undoes whitening in its ``inverse\_transform``. Also, its ``components\_`` now always have unit length. By :user:`Michael Eickenberg `.
- Fix incomplete download of the dataset when `datasets.download\_20newsgroups` is called. By `Manoj Kumar`\_.
- Various fixes to the Gaussian processes subpackage by Vincent Dubourg and Jan Hendrik Metzen.
- Calling ``partial\_fit`` with ``class\_weight=='auto'`` now throws an appropriate error message and suggests a workaround. By :user:`Danny Sullivan `.
- :class:`RBFSampler ` with ``gamma=g`` formerly approximated :func:`rbf\_kernel ` with ``gamma=g/2.``; the definition of ``gamma`` is now consistent, which may substantially change your results if you use a fixed value. (If you cross-validated over ``gamma``, it probably doesn't matter too much.) By :user:`Dougal Sutherland `.
- Pipeline objects now delegate the ``classes\_`` attribute to the underlying estimator. This allows, for instance, making bagging of a pipeline object. By `Arnaud Joly`\_.
- :class:`neighbors.NearestCentroid` now uses the median as the centroid when the metric is set to ``manhattan``. It was using the mean before. By `Manoj Kumar`\_.
- Fix numerical stability issues in :class:`linear\_model.SGDClassifier` and :class:`linear\_model.SGDRegressor` by clipping large gradients and ensuring that weight decay rescaling is always positive (for large l2 regularization and large learning rate values).
  By `Olivier Grisel`\_.
- When ``compute\_full\_tree`` is set to "auto", the full tree was built when ``n\_clusters`` was high and early stopping was applied when ``n\_clusters`` was low, while the behavior should be vice versa in :class:`cluster.AgglomerativeClustering` (and friends). This has been fixed. By `Manoj Kumar`\_.
- Fix lazy centering of data in :func:`linear\_model.enet\_path` and :func:`linear\_model.lasso\_path`. It was centered around one; it has been changed to be centered around the origin. By `Manoj Kumar`\_.
- Fix handling of precomputed affinity matrices in :class:`cluster.AgglomerativeClustering` when using connectivity constraints. By :user:`Cathy Deng `.
- Correct ``partial\_fit`` handling of ``class\_prior`` for :class:`sklearn.naive\_bayes.MultinomialNB` and :class:`sklearn.naive\_bayes.BernoulliNB`. By `Trevor Stephens`\_.
- Fixed a crash in :func:`metrics.precision\_recall\_fscore\_support` when using unsorted ``labels``
  in the multi-label setting. By `Andreas Müller`\_.
- Avoid skipping the first nearest neighbor in the methods ``radius\_neighbors``, ``kneighbors``, ``kneighbors\_graph`` and ``radius\_neighbors\_graph`` in :class:`sklearn.neighbors.NearestNeighbors` and family, when the query data is not the same as fit data. By `Manoj Kumar`\_.
- Fix log-density calculation in the `mixture.GMM` with tied covariance. By `Will Dawson`\_.
- Fixed a scaling error in :class:`feature\_selection.SelectFdr` where a factor ``n\_features`` was missing. By `Andrew Tulloch`\_.
- Fix zero division in :class:`neighbors.KNeighborsRegressor` and related classes when using distance weighting and having identical data points. By `Garret-R `\_.
- Fixed round-off errors with non-positive-definite covariance matrices in GMM. By :user:`Alexis Mignon `.
- Fixed an error in the computation of conditional probabilities in :class:`naive\_bayes.BernoulliNB`. By `Hanna Wallach`\_.
- Make the method ``radius\_neighbors`` of :class:`neighbors.NearestNeighbors` return the samples lying on the boundary for ``algorithm='brute'``. By `Yan Yi`\_.
- Flip sign of ``dual\_coef\_`` of :class:`svm.SVC` to make it consistent with the documentation and ``decision\_function``. By Artem Sobolev.
- Fixed handling of ties in :class:`isotonic.IsotonicRegression`. We now use the weighted average of targets (secondary method). By `Andreas Müller`\_ and `Michael Bommarito `\_.
API changes summary
-------------------

- `GridSearchCV`, `cross\_val\_score` and other meta-estimators don't convert pandas DataFrames into arrays any more, allowing DataFrame-specific operations in custom estimators.
- `multiclass.fit\_ovr`, `multiclass.predict\_ovr`, `predict\_proba\_ovr`, `multiclass.fit\_ovo`, `multiclass.predict\_ovo`, `multiclass.fit\_ecoc` and `multiclass.predict\_ecoc` are deprecated. Use the underlying estimators instead.
- Nearest neighbors estimators used to take arbitrary keyword arguments and pass these to their distance metric. This will no longer be supported in scikit-learn 0.18; use the ``metric\_params`` argument instead.
- The `n\_jobs` parameter of the fit method shifted to the constructor of the LinearRegression class.
- The ``predict\_proba`` method of :class:`multiclass.OneVsRestClassifier` now returns two probabilities per sample in the multiclass case; this is consistent with other estimators and with the method's documentation, but previous versions accidentally returned only the positive probability. Fixed by Will Lamond and `Lars Buitinck`\_.
- Change default value of ``precompute`` in :class:`linear\_model.ElasticNet` and :class:`linear\_model.Lasso` to False. Setting precompute to "auto" was found to be slower when n\_samples > n\_features since the computation of the Gram matrix is computationally expensive and outweighs the benefit of fitting the Gram for just one alpha. ``precompute="auto"`` is now deprecated and will be removed in 0.18. By `Manoj Kumar`\_.
- Expose the ``positive`` option in :func:`linear\_model.enet\_path` and :func:`linear\_model.lasso\_path`, which constrains coefficients to be positive. By `Manoj Kumar`\_.
- Users should now supply an explicit ``average`` parameter to :func:`sklearn.metrics.f1\_score`, :func:`sklearn.metrics.fbeta\_score`, :func:`sklearn.metrics.recall\_score` and :func:`sklearn.metrics.precision\_score` when performing multiclass or multilabel (i.e. not binary) classification.
  By `Joel Nothman`\_.
- The `scoring` parameter for cross validation now accepts `'f1\_micro'`, `'f1\_macro'` or `'f1\_weighted'`. `'f1'` is now for binary classification only. Similar changes apply to `'precision'` and `'recall'`. By `Joel Nothman`\_.
- The ``fit\_intercept``, ``normalize`` and ``return\_models`` parameters in :func:`linear\_model.enet\_path` and :func:`linear\_model.lasso\_path` have been removed. They were deprecated since 0.14.
- From now onwards, all estimators will uniformly raise ``NotFittedError`` when any of the ``predict``-like methods are called before the model is fit. By `Raghav RV`\_.
- Input data validation was refactored for more consistent input validation. The ``check\_arrays`` function was replaced by ``check\_array`` and ``check\_X\_y``. By `Andreas Müller`\_.
- Allow ``X=None`` in the methods ``radius\_neighbors``, ``kneighbors``, ``kneighbors\_graph`` and ``radius\_neighbors\_graph`` in :class:`sklearn.neighbors.NearestNeighbors` and family. If set to None, then for every sample this avoids setting the sample itself as the first nearest neighbor.
  By `Manoj Kumar`\_.
- Add parameter ``include\_self`` in :func:`neighbors.kneighbors\_graph` and :func:`neighbors.radius\_neighbors\_graph`, which has to be explicitly set by the user. If set to True, then the sample itself is considered as the first nearest neighbor.
- The `thresh` parameter is deprecated in favor of the new `tol` parameter in `GMM`, `DPGMM` and `VBGMM`. See the `Enhancements` section for details. By `Hervé Bredin`\_.
- Estimators will treat input with dtype object as numeric when possible. By `Andreas Müller`\_.
- Estimators now raise `ValueError` consistently when fitted on empty data (less than 1 sample or less than 1 feature for 2D input). By `Olivier Grisel`\_.
- The ``shuffle`` option of :class:`linear\_model.SGDClassifier`, :class:`linear\_model.SGDRegressor`, :class:`linear\_model.Perceptron`, :class:`linear\_model.PassiveAggressiveClassifier` and :class:`linear\_model.PassiveAggressiveRegressor` now defaults to ``True``.
- :class:`cluster.DBSCAN` now uses a deterministic initialization. The `random\_state` parameter is deprecated. By :user:`Erich Schubert `.

Code Contributors
-----------------

A.
Flaxman, Aaron Schumacher, Aaron Staple, abhishek thakur, Akshay, akshayah3, Aldrian Obaja, Alexander Fabisch, Alexandre Gramfort, Alexis Mignon, Anders Aagaard, Andreas Mueller, Andreas van Cranenburgh, Andrew Tulloch, Andrew Walker, Antony Lee, Arnaud Joly, banilo, Barmaley.exe, Ben Davies, Benedikt Koehler, bhsu, Boris Feld, Borja Ayerdi, Boyuan Deng, Brent Pedersen, Brian Wignall, Brooke Osborn, Calvin Giles, Cathy Deng, Celeo, cgohlke, chebee7i, Christian Stade-Schuldt, Christof Angermueller, Chyi-Kwei Yau, CJ Carey, Clemens Brunner, Daiki Aminaka, Dan Blanchard, danfrankj, Danny Sullivan, David Fletcher, Dmitrijs Milajevs, Dougal J. Sutherland, Erich Schubert, Fabian Pedregosa, Florian Wilhelm, floydsoft, Félix-Antoine Fortin, Gael Varoquaux, Garrett-R, Gilles Louppe, gpassino, gwulfs, Hampus Bengtsson, Hamzeh Alsalhi, Hanna Wallach, Harry Mavroforakis, Hasil Sharma, Helder, Herve Bredin, Hsiang-Fu Yu, Hugues SALAMIN, Ian Gilmore, Ilambharathi Kanniah, Imran Haque, isms, Jake VanderPlas, Jan Dlabal, Jan Hendrik Metzen, Jatin Shah, Javier López Peña, jdcaballero, Jean Kossaifi, Jeff Hammerbacher, Joel Nothman, Jonathan Helmus, Joseph, Kaicheng Zhang, Kevin Markham, Kyle Beauchamp, Kyle Kastner, Lagacherie Matthieu, Lars Buitinck, Laurent Direr, leepei, Loic Esteve, Luis Pedro Coelho, Lukas Michelbacher, maheshakya, Manoj Kumar, Manuel, Mario Michael Krell, Martin, Martin Billinger, Martin Ku, Mateusz Susik, Mathieu Blondel, Matt Pico, Matt Terry, Matteo Visconti dOC, Matti Lyra, Max Linke, Mehdi Cherti, Michael Bommarito, Michael Eickenberg, Michal Romaniuk, MLG, mr.Shu, Nelle Varoquaux, Nicola Montecchio, Nicolas, Nikolay Mayorov, Noel Dawe, Okal Billy, Olivier Grisel, Óscar Nájera, Paolo Puggioni, Peter Prettenhofer, Pratap Vardhan, pvnguyen, queqichao, Rafael Carrascosa, Raghav R V, Rahiel Kasim, Randall Mason, Rob Zinkov, Robert Bradshaw, Saket Choudhary, Sam Nicholls, Samuel Charron, Saurabh Jha, sethdandridge, sinhrks, snuderl, Stefan Otte, Stefan van 
der Walt, Steve Tjoa, swu, Sylvain Zimmer, tejesh95, terrycojones, Thomas Delteil, Thomas Unterthiner, Tomas Kazmar, trevorstephens, tttthomasssss, Tzu-Ming Kuo, ugurcaliskan, ugurthemaster, Vinayak Mehta, Vincent Dubourg, Vjacheslav Murashkin, Vlad Niculae, wadawson, Wei Xue, Will Lamond, Wu Jiang, x0l, Xinfan Meng, Yan Yi, Yu-Chin
.. include:: \_contributors.rst

.. currentmodule:: sklearn

==============
Older Versions
==============

.. \_changes\_0\_12.1:

Version 0.12.1
===============

\*\*October 8, 2012\*\*

The 0.12.1 release is a bug-fix release with no additional features; it is instead a set of bug fixes.

Changelog
----------

- Improved numerical stability in spectral embedding by `Gael Varoquaux`\_
- Doctest under windows 64bit by `Gael Varoquaux`\_
- Documentation fixes for elastic net by `Andreas Müller`\_ and `Alexandre Gramfort`\_
- Proper behavior with fortran-ordered NumPy arrays by `Gael Varoquaux`\_
- Make GridSearchCV work with non-CSR sparse matrices by `Lars Buitinck`\_
- Fix parallel computing in MDS by `Gael Varoquaux`\_
- Fix Unicode support in count vectorizer by `Andreas Müller`\_
- Fix MinCovDet breaking with X.shape = (3, 1) by :user:`Virgile Fritsch `
- Fix clone of SGD objects by `Peter Prettenhofer`\_
- Stabilize GMM by :user:`Virgile Fritsch `

People
------

\* 14 `Peter Prettenhofer`\_
\* 12 `Gael Varoquaux`\_
\* 10 `Andreas Müller`\_
\* 5 `Lars Buitinck`\_
\* 3 :user:`Virgile Fritsch `
\* 1 `Alexandre Gramfort`\_
\* 1 `Gilles Louppe`\_
\* 1 `Mathieu Blondel`\_

.. \_changes\_0\_12:

Version 0.12
============

\*\*September 4, 2012\*\*

Changelog
---------

- Various speed improvements of the :ref:`decision trees ` module, by `Gilles Louppe`\_.
- :class:`~ensemble.GradientBoostingRegressor` and :class:`~ensemble.GradientBoostingClassifier` now support feature subsampling via the ``max\_features`` argument, by `Peter Prettenhofer`\_.
- Added Huber and Quantile loss functions to :class:`~ensemble.GradientBoostingRegressor`, by `Peter Prettenhofer`\_.
- :ref:`Decision trees ` and :ref:`forests of randomized trees ` now support multi-output classification and regression problems, by `Gilles Louppe`\_.
- Added :class:`~preprocessing.LabelEncoder`, a simple utility class to normalize labels or transform non-numerical labels, by `Mathieu Blondel`\_.
- Added the epsilon-insensitive loss and the ability to make probabilistic predictions with the modified huber loss in :ref:`sgd`, by `Mathieu Blondel`\_.
- Added :ref:`multidimensional\_scaling`, by Nelle Varoquaux.
- SVMlight file format loader now detects compressed (gzip/bzip2) files and decompresses them on the fly, by `Lars Buitinck`\_.
- SVMlight file format serializer now preserves double precision floating point values, by `Olivier Grisel`\_.
- A common testing framework for all estimators was added, by `Andreas Müller`\_.
- Understandable error messages for estimators that do not accept sparse input, by `Gael Varoquaux`\_.
- Speedups in hierarchical clustering by `Gael Varoquaux`\_. In particular building the tree now supports early stopping. This is useful when the number of clusters is not small compared to the number of samples.
- Add MultiTaskLasso and MultiTaskElasticNet for joint feature selection, by `Alexandre Gramfort`\_.
- Added `metrics.auc\_score` and :func:`metrics.average\_precision\_score` convenience functions, by `Andreas Müller`\_.
- Improved sparse matrix support in the :ref:`feature\_selection` module, by `Andreas Müller`\_.
- New word-boundaries-aware character n-gram analyzer for the :ref:`text\_feature\_extraction` module, by :user:`@kernc `.
- Fixed bug in spectral clustering that led to single point clusters, by `Andreas Müller`\_.
- In :class:`~feature\_extraction.text.CountVectorizer`, added an option to ignore infrequent words, ``min\_df``, by `Andreas Müller`\_.
- Add support for multiple targets in some linear models (ElasticNet, Lasso and OrthogonalMatchingPursuit), by `Vlad Niculae`\_ and `Alexandre Gramfort`\_.
- Fixes in `decomposition.ProbabilisticPCA` score function, by Wei Li.
- Fixed feature importance computation in :ref:`gradient\_boosting`.

API changes summary
-------------------

- The old ``scikits.learn`` package has disappeared; all code should import from ``sklearn`` instead, which was introduced in 0.9.
- In :func:`metrics.roc\_curve`, the ``thresholds`` array is now returned with its order reversed, in order to keep it consistent with the order of the returned ``fpr`` and ``tpr``.
- In `hmm` objects, like `hmm.GaussianHMM`, `hmm.MultinomialHMM`, etc., all parameters must be passed to the object when initialising it and not through ``fit``. Now ``fit`` will only accept the data as an input parameter.
- For all SVM classes, a faulty behavior of ``gamma`` was fixed. Previously, the default gamma value was
  only computed the first time ``fit`` was called and then stored. It is now recalculated on every call to ``fit``.
- All ``Base`` classes are now abstract meta classes so that they can not be instantiated.
- :func:`cluster.ward\_tree` now also returns the parent array. This is necessary for early-stopping, in which case the tree is not completely built.
- In :class:`~feature\_extraction.text.CountVectorizer` the parameters ``min\_n`` and ``max\_n`` were joined to the parameter ``n\_gram\_range`` to enable grid-searching both at once.
- In :class:`~feature\_extraction.text.CountVectorizer`, words that appear only in one document are now ignored by default. To reproduce the previous behavior, set ``min\_df=1``.
- Fixed API inconsistency: :meth:`linear\_model.SGDClassifier.predict\_proba` now returns a 2d array when fit on two classes.
- Fixed API inconsistency: :meth:`discriminant\_analysis.QuadraticDiscriminantAnalysis.decision\_function` and :meth:`discriminant\_analysis.LinearDiscriminantAnalysis.decision\_function` now return 1d arrays when fit on two classes.
- The grid of alphas used for fitting :class:`~linear\_model.LassoCV` and :class:`~linear\_model.ElasticNetCV` is now stored in the attribute ``alphas\_`` rather than overriding the init parameter ``alphas``.
- Linear models, when alpha is estimated by cross-validation, store the estimated value in the ``alpha\_`` attribute rather than just ``alpha`` or ``best\_alpha``.
- :class:`~ensemble.GradientBoostingClassifier` now supports :meth:`~ensemble.GradientBoostingClassifier.staged\_predict\_proba` and :meth:`~ensemble.GradientBoostingClassifier.staged\_predict`.
- `svm.sparse.SVC` and other sparse SVM classes are now deprecated. All classes in the :ref:`svm` module now automatically select the sparse or dense representation based on the input.
- All clustering algorithms now interpret the array ``X`` given to ``fit`` as input data, in particular :class:`~cluster.SpectralClustering` and :class:`~cluster.AffinityPropagation`, which previously expected affinity matrices.
- For clustering algorithms that take the desired number of clusters as a parameter, this parameter is now called ``n\_clusters``.

People
------

\* 267 `Andreas Müller`\_
\* 94 `Gilles Louppe`\_
\* 89 `Gael Varoquaux`\_
\* 79 `Peter Prettenhofer`\_
\* 60 `Mathieu Blondel`\_
\* 57 `Alexandre Gramfort`\_
\* 52 `Vlad Niculae`\_
\* 45 `Lars Buitinck`\_
\* 44 Nelle Varoquaux
\* 37 `Jaques Grobler`\_
\* 30 Alexis Mignon
\* 30 Immanuel Bayer
\* 27 `Olivier Grisel`\_
\* 16 Subhodeep Moitra
\* 13 Yannick Schwartz
\* 12 :user:`@kernc `
\* 11 :user:`Virgile Fritsch `
\* 9 Daniel Duckworth
\* 9 `Fabian Pedregosa`\_
\* 9 `Robert Layton`\_
\* 8 John Benediktsson
\* 7 Marko Burjek
\* 5 `Nicolas Pinto`\_
\* 4 Alexandre Abraham
\* 4 `Jake Vanderplas`\_
\* 3 `Brian Holt`\_
\* 3 `Edouard Duchesnay`\_
\* 3 Florian Hoenig
\* 3 flyingimmidev
\* 2 Francois Savard
\* 2 Hannes Schulz
\* 2 Peter Welinder
\* 2 `Yaroslav Halchenko`\_
\* 2 Wei Li
\* 1 Alex Companioni
\* 1 Brandyn A. White
\* 1 Bussonnier Matthias
\* 1 Charles-Pierre Astolfi
\* 1 Dan O'Huiginn
\* 1 David Cournapeau
\* 1 Keith Goodman
\* 1 Ludwig Schwardt
\* 1 Olivier Hervieu
\* 1 Sergio Medina
\* 1 Shiqiao Du
\* 1 Tim Sheerman-Chase
\* 1 buguen

.. \_changes\_0\_11:

Version 0.11
============

\*\*May 7, 2012\*\*

Changelog
---------

Highlights
.............

- Gradient boosted regression trees (:ref:`gradient\_boosting`) for classification and regression by `Peter Prettenhofer`\_ and `Scott White`\_.
- Simple dict-based feature loader with support for categorical variables (:class:`~feature\_extraction.DictVectorizer`) by `Lars Buitinck`\_.
- Added Matthews correlation coefficient (:func:`metrics.matthews\_corrcoef`) and added macro and micro average options to :func:`~metrics.precision\_score`, :func:`metrics.recall\_score` and :func:`~metrics.f1\_score` by `Satrajit Ghosh`\_.
- :ref:`out\_of\_bag` of generalization error for :ref:`ensemble` by `Andreas Müller`\_.
- Randomized sparse linear models for feature selection, by `Alexandre Gramfort`\_ and `Gael Varoquaux`\_.
- :ref:`label\_propagation` for semi-supervised learning, by Clay Woolam. \*\*Note\*\* the semi-supervised API is still work
  in progress, and may change.
- Added BIC/AIC model selection to classical :ref:`gmm` and unified the API with the remainder of scikit-learn, by `Bertrand Thirion`\_.
- Added `sklearn.cross\_validation.StratifiedShuffleSplit`, which is a `sklearn.cross\_validation.ShuffleSplit` with balanced splits, by Yannick Schwartz.
- :class:`~sklearn.neighbors.NearestCentroid` classifier added, along with a ``shrink\_threshold`` parameter, which implements \*\*shrunken centroid classification\*\*, by `Robert Layton`\_.

Other changes
..............

- Merged dense and sparse implementations of the :ref:`sgd` module and exposed utility extension types for sequential datasets ``seq\_dataset`` and weight vectors ``weight\_vector`` by `Peter Prettenhofer`\_.
- Added ``partial\_fit`` (support for online/minibatch learning) and ``warm\_start`` to the :ref:`sgd` module by `Mathieu Blondel`\_.
- Dense and sparse implementations of :ref:`svm` classes and :class:`~linear\_model.LogisticRegression` merged by `Lars Buitinck`\_.
- Regressors can now be used as base estimators in the :ref:`multiclass` module by `Mathieu Blondel`\_.
- Added the ``n\_jobs`` option to :func:`metrics.pairwise\_distances` and :func:`metrics.pairwise.pairwise\_kernels` for parallel computation, by `Mathieu Blondel`\_.
- :ref:`k\_means` can now be run in parallel, using the ``n\_jobs`` argument to either :ref:`k\_means` or :class:`cluster.KMeans`, by `Robert Layton`\_.
- Improved :ref:`cross\_validation` and :ref:`grid\_search` documentation and introduced the new `cross\_validation.train\_test\_split` helper function, by `Olivier Grisel`\_.
- :class:`~svm.SVC` members ``coef\_`` and ``intercept\_`` changed sign for consistency with ``decision\_function``; for ``kernel==linear``, ``coef\_`` was fixed in the one-vs-one case, by `Andreas Müller`\_.
- Performance improvements to efficient leave-one-out cross-validated Ridge regression, especially for the ``n\_samples > n\_features`` case, in :class:`~linear\_model.RidgeCV`, by Reuben Fletcher-Costin.
- Refactoring and simplification of the :ref:`text\_feature\_extraction` API and fixed a bug that caused possible negative IDF, by `Olivier Grisel`\_.
- The beam pruning option in the `\_BaseHMM` module has been removed since it is difficult to Cythonize. If you are interested in contributing a Cython version, you can use the python version in the git history as a reference.
- Classes in :ref:`neighbors` now support arbitrary Minkowski metrics for nearest neighbors searches. The metric can be specified by the argument ``p``.

API changes summary
-------------------

- `covariance.EllipticEnvelop` is now deprecated. Please use :class:`~covariance.EllipticEnvelope` instead.
- ``NeighborsClassifier`` and ``NeighborsRegressor`` are gone in the module :ref:`neighbors`. Use the classes :class:`~neighbors.KNeighborsClassifier`, :class:`~neighbors.RadiusNeighborsClassifier`, :class:`~neighbors.KNeighborsRegressor` and/or :class:`~neighbors.RadiusNeighborsRegressor` instead.
- Sparse classes in the :ref:`sgd` module are now deprecated.
- In `mixture.GMM`, `mixture.DPGMM` and `mixture.VBGMM`, parameters must be passed to an object when initialising it and not through ``fit``. Now ``fit`` will only accept the data as an input parameter.
- The methods ``rvs`` and ``decode`` in the `GMM` module are now deprecated. ``sample`` and ``score`` or ``predict`` should be used instead.
- The attributes ``\_scores`` and ``\_pvalues`` in univariate feature selection objects are now deprecated. ``scores\_`` or ``pvalues\_`` should be used instead.
- In :class:`~linear\_model.LogisticRegression`, :class:`~svm.LinearSVC`, :class:`~svm.SVC` and :class:`~svm.NuSVC`, the ``class\_weight`` parameter is now an initialization parameter, not a parameter to fit. This makes grid searches over this parameter possible.
- LFW ``data`` is now always shape ``(n\_samples, n\_features)`` to be consistent with the Olivetti faces dataset. Use the ``images`` and ``pairs`` attributes to access the natural image shapes instead.
- In :class:`~svm.LinearSVC`, the meaning of the ``multi\_class`` parameter changed. Options now are ``'ovr'`` and ``'crammer\_singer'``, with ``'ovr'`` being the default. This does not change the default behavior but hopefully is less confusing.
- Class `feature\_extraction.text.Vectorizer` is deprecated and replaced by `feature\_extraction.text.TfidfVectorizer`.
- The preprocessor / analyzer nested structure for text feature extraction has been removed. All those features are now directly passed as flat constructor arguments to `feature\_extraction.text.TfidfVectorizer` and `feature\_extraction.text.CountVectorizer`; in particular the following parameters are
  now used:

  - ``analyzer`` can be ``'word'`` or ``'char'`` to switch the default analysis scheme, or use a specific python callable (as previously).
  - ``tokenizer`` and ``preprocessor`` have been introduced to make it still possible to customize those steps with the new API.
  - ``input`` explicitly controls how to interpret the sequence passed to ``fit`` and ``predict``: filenames, file objects or direct (byte or Unicode) strings.
  - charset decoding is explicit and strict by default.
  - the ``vocabulary``, fitted or not, is now stored in the ``vocabulary\_`` attribute to be consistent with the project conventions.

- Class `feature\_extraction.text.TfidfVectorizer` now derives directly from `feature\_extraction.text.CountVectorizer` to make grid search trivial.
- The methods ``rvs`` in the `\_BaseHMM` module are now deprecated. ``sample`` should be used instead.
- The beam pruning option in the `\_BaseHMM` module was removed since it is difficult to Cythonize. If you are interested, you can look at the history in git.
- The SVMlight format loader now supports files with both zero-based and one-based column indices, since both occur "in the wild".
- Arguments in class :class:`~model\_selection.ShuffleSplit` are now consistent with :class:`~model\_selection.StratifiedShuffleSplit`. Arguments ``test\_fraction`` and ``train\_fraction`` are deprecated and renamed to ``test\_size`` and ``train\_size`` and can accept both ``float`` and ``int``.
- Arguments in class `Bootstrap` are now consistent with :class:`~model\_selection.StratifiedShuffleSplit`. Arguments ``n\_test`` and ``n\_train`` are deprecated and renamed to ``test\_size`` and ``train\_size`` and can accept both ``float`` and ``int``.
- The argument ``p`` was added to classes in :ref:`neighbors` to specify an arbitrary Minkowski metric for nearest neighbors searches.

People
------

\* 282 `Andreas Müller`\_
\* 239 `Peter Prettenhofer`\_
\* 198 `Gael Varoquaux`\_
\* 129 `Olivier Grisel`\_
\* 114 `Mathieu Blondel`\_
\* 103 Clay Woolam
\* 96 `Lars Buitinck`\_
\* 88 `Jaques Grobler`\_
\* 82 `Alexandre Gramfort`\_
\* 50 `Bertrand Thirion`\_
\* 42 `Robert Layton`\_
\* 28 flyingimmidev
\* 26 `Jake Vanderplas`\_
\* 26 Shiqiao Du
\* 21 `Satrajit Ghosh`\_
\* 17 `David Marek`\_
\* 17 `Gilles Louppe`\_
\* 14 `Vlad Niculae`\_
\* 11 Yannick Schwartz
\* 10 `Fabian Pedregosa`\_
\* 9 fcostin
\* 7 Nick Wilson
\* 5 Adrien Gaidon
\* 5 `Nicolas Pinto`\_
\* 4 `David Warde-Farley`\_
\* 5 Nelle Varoquaux
\* 5 Emmanuelle Gouillart
\* 3 Joonas Sillanpää
\* 3 Paolo Losi
\* 2 Charles McCarthy
\* 2 Roy Hyunjin Han
\* 2 Scott White
\* 2 ibayer
\* 1 Brandyn White
\* 1 Carlos Scheidegger
\* 1 Claire Revillet
\* 1 Conrad Lee
\* 1 `Edouard Duchesnay`\_
\* 1 Jan Hendrik Metzen
\* 1 Meng Xinfan
\* 1 `Rob Zinkov`\_
\* 1 Shiqiao
\* 1 Udi Weinsberg
\* 1 Virgile Fritsch
\* 1 Xinfan Meng
\* 1 Yaroslav Halchenko
\* 1 jansoe
\* 1 Leon Palafox

.. \_changes\_0\_10:

Version 0.10
============

\*\*January 11, 2012\*\*

Changelog
---------

- Python 2.5 compatibility was dropped; the minimum Python version needed to use scikit-learn is now 2.6.
- :ref:`sparse\_inverse\_covariance` estimation using the graph Lasso, with associated cross-validated estimator, by `Gael Varoquaux`\_.
- New :ref:`Tree ` module by `Brian Holt`\_, `Peter Prettenhofer`\_, `Satrajit Ghosh`\_ and `Gilles Louppe`\_. The module comes with complete documentation and examples.
- Fixed a bug in the RFE module by `Gilles Louppe`\_ (issue #378). - Fixed a memory leak in :ref:`svm` module by `Brian Holt`\_ (issue #367). - Faster tests by `Fabian Pedregosa`\_ and others. - Silhouette Coefficient cluster analysis evaluation metric added as :func:`~sklearn.metrics.silhouette\_score` by Robert Layton.
https://github.com/scikit-learn/scikit-learn/blob/main//doc/whats_new/older_versions.rst
- Fixed a bug in :ref:`k\_means` in the handling of the ``n\_init`` parameter: the clustering algorithm used to be run ``n\_init`` times but the last solution was retained instead of the best one; fixed by `Olivier Grisel`\_. - Minor refactoring in :ref:`sgd` module; consolidated dense and sparse predict methods; enhanced test time performance by converting model parameters to Fortran-style arrays after fitting (only multi-class). - Adjusted Mutual Information metric added as :func:`~sklearn.metrics.adjusted\_mutual\_info\_score` by Robert Layton. - Models like SVC/SVR/LinearSVC/LogisticRegression from libsvm/liblinear now support scaling of the C regularization parameter by the number of samples by `Alexandre Gramfort`\_. - New :ref:`Ensemble Methods ` module by `Gilles Louppe`\_ and `Brian Holt`\_. The module comes with the random forest algorithm and the extra-trees method, along with documentation and examples. - :ref:`outlier\_detection`: outlier and novelty detection, by :user:`Virgile Fritsch `. - :ref:`kernel\_approximation`: a transform implementing kernel approximation for fast SGD on non-linear kernels by `Andreas Müller`\_. - Fixed a bug due to atom swapping in :ref:`OMP` by `Vlad Niculae`\_. - :ref:`SparseCoder` by `Vlad Niculae`\_. - :ref:`mini\_batch\_kmeans` performance improvements by `Olivier Grisel`\_. - :ref:`k\_means` support for sparse matrices by `Mathieu Blondel`\_. - Improved documentation for developers and for the :mod:`sklearn.utils` module, by `Jake Vanderplas`\_. - Vectorized 20newsgroups dataset loader (:func:`~sklearn.datasets.fetch\_20newsgroups\_vectorized`) by `Mathieu Blondel`\_.
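The Adjusted Mutual Information metric added above is corrected for chance and invariant to how cluster labels are named, which is the point of using it for cluster evaluation. A quick sketch with the current ``sklearn.metrics`` API:

```python
from sklearn.metrics import adjusted_mutual_info_score

# AMI ignores label naming: a relabeled perfect clustering still scores 1.0
labels_true = [0, 0, 1, 1]
labels_pred = [1, 1, 0, 0]  # same partition, permuted label names
score = adjusted_mutual_info_score(labels_true, labels_pred)
print(score)  # 1.0
```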
- :ref:`multiclass` by `Lars Buitinck`\_. - Utilities for fast computation of mean and variance for sparse matrices by `Mathieu Blondel`\_. - Make :func:`~sklearn.preprocessing.scale` and `sklearn.preprocessing.Scaler` work on sparse matrices by `Olivier Grisel`\_ - Feature importances using decision trees and/or forest of trees, by `Gilles Louppe`\_. - Parallel implementation of forests of randomized trees by `Gilles Louppe`\_. - `sklearn.cross\_validation.ShuffleSplit` can subsample the train sets as well as the test sets by `Olivier Grisel`\_. - Errors in the build of the documentation fixed by `Andreas Müller`\_. API changes summary ------------------- Here are the code migration instructions when upgrading from scikit-learn version 0.9: - Some estimators that may overwrite their inputs to save memory previously had ``overwrite\_`` parameters; these have been replaced with ``copy\_`` parameters with exactly the opposite meaning. This particularly affects some of the estimators in :mod:`~sklearn.linear\_model`. The default behavior is still to copy everything passed in. - The SVMlight dataset loader :func:`~sklearn.datasets.load\_svmlight\_file` no longer supports loading two files at once; use ``load\_svmlight\_files`` instead. Also, the (unused) ``buffer\_mb`` parameter is gone. - Sparse estimators in the :ref:`sgd` module use dense parameter vector ``coef\_`` instead of ``sparse\_coef\_``. This significantly improves test time performance. - The :ref:`covariance` module now has a robust estimator of covariance, the Minimum Covariance Determinant estimator. - Cluster evaluation metrics in :mod:`~sklearn.metrics.cluster` have been refactored but the changes are backwards compatible. They have been moved to the `metrics.cluster.supervised`, along with `metrics.cluster.unsupervised` which contains the Silhouette Coefficient. - The ``permutation\_test\_score`` function now behaves the same way as ``cross\_val\_score`` (i.e. uses the mean score across the folds.) 
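The ``overwrite_`` to ``copy_`` rename described in the API changes above is still the convention today, and :func:`preprocessing.scale` still accepts it. A minimal sketch (dense input for brevity, though the entry above concerns sparse support):

```python
import numpy as np
from sklearn.preprocessing import scale

X = np.array([[1.0, 2.0], [3.0, 4.0]])

# copy=True (the default) leaves the input untouched, mirroring the
# overwrite_ -> copy_ parameter rename with the opposite meaning
Xs = scale(X, copy=True)

print(Xs)       # columns centered to mean 0, scaled to unit variance
print(X[0, 0])  # 1.0 -- original unchanged
```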
- Cross Validation generators now use integer indices (``indices=True``) by default instead of boolean masks. This makes it more intuitive to use with sparse matrix data. - The functions used for sparse coding, ``sparse\_encode`` and ``sparse\_encode\_parallel`` have been combined into :func:`~sklearn.decomposition.sparse\_encode`, and the shapes of the arrays have been transposed for consistency with the matrix factorization setting, as opposed to the regression setting. - Fixed an off-by-one error in the SVMlight/LibSVM file format handling; files generated using :func:`~sklearn.datasets.dump\_svmlight\_file` should be re-generated. (They should continue to work, but accidentally
had one extra column of zeros prepended.) - ``BaseDictionaryLearning`` class replaced by ``SparseCodingMixin``. - `sklearn.utils.extmath.fast\_svd` has been renamed :func:`~sklearn.utils.extmath.randomized\_svd` and the default oversampling is now fixed to 10 additional random vectors instead of doubling the number of components to extract. The new behavior follows the reference paper. People ------ The following people contributed to scikit-learn since last release: \* 246 `Andreas Müller`\_ \* 242 `Olivier Grisel`\_ \* 220 `Gilles Louppe`\_ \* 183 `Brian Holt`\_ \* 166 `Gael Varoquaux`\_ \* 144 `Lars Buitinck`\_ \* 73 `Vlad Niculae`\_ \* 65 `Peter Prettenhofer`\_ \* 64 `Fabian Pedregosa`\_ \* 60 Robert Layton \* 55 `Mathieu Blondel`\_ \* 52 `Jake Vanderplas`\_ \* 44 Noel Dawe \* 38 `Alexandre Gramfort`\_ \* 24 :user:`Virgile Fritsch ` \* 23 `Satrajit Ghosh`\_ \* 3 Jan Hendrik Metzen \* 3 Kenneth C. Arnold \* 3 Shiqiao Du \* 3 Tim Sheerman-Chase \* 3 `Yaroslav Halchenko`\_ \* 2 Bala Subrahmanyam Varanasi \* 2 DraXus \* 2 Michael Eickenberg \* 1 Bogdan Trach \* 1 Félix-Antoine Fortin \* 1 Juan Manuel Caicedo Carvajal \* 1 Nelle Varoquaux \* 1 `Nicolas Pinto`\_ \* 1 Tiziano Zito \* 1 Xinfan Meng .. \_changes\_0\_9: Version 0.9 =========== \*\*September 21, 2011\*\* scikit-learn 0.9 was released on September 2011, three months after the 0.8 release and includes the new modules :ref:`manifold`, :ref:`dirichlet\_process` as well as several new algorithms and documentation improvements.
This release also includes the dictionary-learning work developed by `Vlad Niculae`\_ as part of the `Google Summer of Code `\_ program. .. |banner1| image:: ../auto\_examples/manifold/images/thumb/sphx\_glr\_plot\_compare\_methods\_thumb.png :target: ../auto\_examples/manifold/plot\_compare\_methods.html .. |banner2| image:: ../auto\_examples/linear\_model/images/thumb/sphx\_glr\_plot\_omp\_thumb.png :target: ../auto\_examples/linear\_model/plot\_omp.html .. |banner3| image:: ../auto\_examples/decomposition/images/thumb/sphx\_glr\_plot\_kernel\_pca\_thumb.png :target: ../auto\_examples/decomposition/plot\_kernel\_pca.html .. |center-div| raw:: html .. |end-div| raw:: html |center-div| |banner2| |banner1| |banner3| |end-div| Changelog --------- - New :ref:`manifold` module by `Jake Vanderplas`\_ and `Fabian Pedregosa`\_. - New :ref:`Dirichlet Process ` Gaussian Mixture Model by `Alexandre Passos`\_ - :ref:`neighbors` module refactoring by `Jake Vanderplas`\_ : general refactoring, support for sparse matrices in input, speed and documentation improvements. See the next section for a full list of API changes. - Improvements on the :ref:`feature\_selection` module by `Gilles Louppe`\_ : refactoring of the RFE classes, documentation rewrite, increased efficiency and minor API changes. - :ref:`SparsePCA` by `Vlad Niculae`\_, `Gael Varoquaux`\_ and `Alexandre Gramfort`\_ - Printing an estimator now behaves independently of architectures and Python version thanks to :user:`Jean Kossaifi `. - :ref:`Loader for libsvm/svmlight format ` by `Mathieu Blondel`\_ and `Lars Buitinck`\_ - Documentation improvements: thumbnails in example gallery by `Fabian Pedregosa`\_. - Important bugfixes in :ref:`svm` module (segfaults, bad performance) by `Fabian Pedregosa`\_. 
- Added :ref:`multinomial\_naive\_bayes` and :ref:`bernoulli\_naive\_bayes` by `Lars Buitinck`\_ - Text feature extraction optimizations by Lars Buitinck - Chi-Square feature selection (:func:`feature\_selection.chi2`) by `Lars Buitinck`\_. - :ref:`sample\_generators` module refactoring by `Gilles Louppe`\_ - :ref:`multiclass` by `Mathieu Blondel`\_ - Ball tree rewrite by `Jake Vanderplas`\_ - Implementation of :ref:`dbscan` algorithm by Robert Layton - Kmeans predict and transform by Robert Layton - Preprocessing module refactoring by `Olivier Grisel`\_ - Faster mean shift by Conrad Lee - New ``Bootstrap``, :ref:`ShuffleSplit` and various other improvements in cross validation schemes by `Olivier Grisel`\_ and `Gael Varoquaux`\_ - Adjusted Rand index and V-Measure clustering evaluation metrics by `Olivier Grisel`\_ - Added :class:`Orthogonal Matching Pursuit ` by `Vlad Niculae`\_ - Added 2D-patch extractor utilities in the :ref:`feature\_extraction` module by `Vlad Niculae`\_ - Implementation of :class:`~linear\_model.LassoLarsCV` (cross-validated Lasso solver using the Lars algorithm) and
:class:`~linear\_model.LassoLarsIC` (BIC/AIC model selection in Lars) by `Gael Varoquaux`\_ and `Alexandre Gramfort`\_ - Scalability improvements to :func:`metrics.roc\_curve` by Olivier Hervieu - Distance helper functions :func:`metrics.pairwise\_distances` and :func:`metrics.pairwise.pairwise\_kernels` by Robert Layton - :class:`Mini-Batch K-Means ` by Nelle Varoquaux and Peter Prettenhofer. - mldata utilities by Pietro Berkes. - :ref:`olivetti\_faces\_dataset` by `David Warde-Farley`\_. API changes summary ------------------- Here are the code migration instructions when upgrading from scikit-learn version 0.8: - The ``scikits.learn`` package was renamed ``sklearn``. There is still a ``scikits.learn`` package alias for backward compatibility. Third-party projects with a dependency on scikit-learn 0.9+ should upgrade their codebase. For instance, under Linux / MacOSX just run (make a backup first!):: find -name "\*.py" | xargs sed -i 's/\bscikits.learn\b/sklearn/g' - Estimators no longer accept model parameters as ``fit`` arguments: instead, all parameters must only be passed as constructor arguments or via the now public ``set\_params`` method inherited from :class:`~base.BaseEstimator`. Some estimators can still accept keyword arguments on ``fit``, but this is restricted to data-dependent values (e.g. a Gram matrix or an affinity matrix that is precomputed from the ``X`` data matrix). - The ``cross\_val`` package has been renamed to ``cross\_validation`` although there is also a ``cross\_val`` package alias in place for backward compatibility.
Third-party projects with a dependency on scikit-learn 0.9+ should upgrade their codebase. For instance, under Linux / MacOSX just run (make a backup first!):: find -name "\*.py" | xargs sed -i 's/\bcross\_val\b/cross\_validation/g' - The ``score\_func`` argument of the ``sklearn.cross\_validation.cross\_val\_score`` function is now expected to accept ``y\_test`` and ``y\_predicted`` as the only arguments for classification and regression tasks, or ``X\_test`` for unsupervised estimators. - The ``gamma`` parameter for support vector machine algorithms is set to ``1 / n\_features`` by default, instead of ``1 / n\_samples``. - The ``sklearn.hmm`` module has been marked as orphaned: it will be removed from scikit-learn in version 0.11 unless someone steps up to contribute documentation, examples and fix lurking numerical stability issues. - ``sklearn.neighbors`` has been made into a submodule. The two previously available estimators, ``NeighborsClassifier`` and ``NeighborsRegressor``, have been marked as deprecated. Their functionality has been divided among five new classes: ``NearestNeighbors`` for unsupervised neighbors searches, ``KNeighborsClassifier`` & ``RadiusNeighborsClassifier`` for supervised classification problems, and ``KNeighborsRegressor`` & ``RadiusNeighborsRegressor`` for supervised regression problems. - ``sklearn.ball\_tree.BallTree`` has been moved to ``sklearn.neighbors.BallTree``. Using the former will generate a warning. - ``sklearn.linear\_model.LARS()`` and related classes (LassoLARS, LassoLARSCV, etc.) have been renamed to ``sklearn.linear\_model.Lars()``. - All distance metrics and kernels in ``sklearn.metrics.pairwise`` now have a Y parameter, which by default is None. If not given, the result is the distance (or kernel similarity) between each pair of samples in X. If given, the result is the pairwise distance (or kernel similarity) between the samples in X and the samples in Y.
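The Y-parameter convention for pairwise metrics introduced here is unchanged in current scikit-learn. A short sketch using :func:`metrics.pairwise.euclidean_distances`:

```python
import numpy as np
from sklearn.metrics.pairwise import euclidean_distances

X = np.array([[0.0, 0.0], [3.0, 4.0]])

# Y omitted: pairwise distances among the samples of X itself
D = euclidean_distances(X)
print(D[0, 1])  # 5.0

# Y given: distances from each sample in X to each sample in Y
Y = np.array([[0.0, 1.0]])
D2 = euclidean_distances(X, Y)
print(D2.shape)  # (2, 1)
```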
- ``sklearn.metrics.pairwise.l1\_distance`` is now called ``manhattan\_distance``, and by default returns the pairwise distance. For the component wise distance, set the parameter ``sum\_over\_features`` to ``False``. Backward compatibility package aliases and other deprecated classes and functions will be removed in version 0.11. People ------ 38 people contributed to this release. - 387 `Vlad Niculae`\_ - 320 `Olivier Grisel`\_ - 192 `Lars Buitinck`\_ - 179 `Gael Varoquaux`\_ - 168 `Fabian Pedregosa`\_ (`INRIA`\_, `Parietal Team`\_) - 127 `Jake Vanderplas`\_ - 120 `Mathieu Blondel`\_ - 85 `Alexandre Passos`\_ - 67 `Alexandre Gramfort`\_ - 57 `Peter Prettenhofer`\_ - 56 `Gilles Louppe`\_ - 42 Robert Layton - 38 Nelle Varoquaux - 32 :user:`Jean Kossaifi
` - 30 Conrad Lee - 22 Pietro Berkes - 18 andy - 17 David Warde-Farley - 12 Brian Holt - 11 Robert - 8 Amit Aides - 8 :user:`Virgile Fritsch ` - 7 `Yaroslav Halchenko`\_ - 6 Salvatore Masecchia - 5 Paolo Losi - 4 Vincent Schut - 3 Alexis Metaireau - 3 Bryan Silverthorn - 3 `Andreas Müller`\_ - 2 Minwoo Jake Lee - 1 Emmanuelle Gouillart - 1 Keith Goodman - 1 Lucas Wiman - 1 `Nicolas Pinto`\_ - 1 Thouis (Ray) Jones - 1 Tim Sheerman-Chase .. \_changes\_0\_8: Version 0.8 =========== \*\*May 11, 2011\*\* scikit-learn 0.8 was released on May 2011, one month after the first "international" `scikit-learn coding sprint `\_ and is marked by the inclusion of important modules: :ref:`hierarchical\_clustering`, :ref:`cross\_decomposition`, :ref:`NMF`, initial support for Python 3 and by important enhancements and bug fixes. Changelog --------- Several new modules were introduced during this release: - New :ref:`hierarchical\_clustering` module by Vincent Michel, `Bertrand Thirion`\_, `Alexandre Gramfort`\_ and `Gael Varoquaux`\_. - :ref:`kernel\_pca` implementation by `Mathieu Blondel`\_ - :ref:`labeled\_faces\_in\_the\_wild\_dataset` by `Olivier Grisel`\_. - New :ref:`cross\_decomposition` module by `Edouard Duchesnay`\_. - :ref:`NMF` module `Vlad Niculae`\_ - Implementation of the :ref:`oracle\_approximating\_shrinkage` algorithm by :user:`Virgile Fritsch ` in the :ref:`covariance` module. Some other modules benefited from significant improvements or cleanups. - Initial support for Python 3: builds and imports cleanly, some modules are usable while others have failing tests by `Fabian Pedregosa`\_.
- :class:`~decomposition.PCA` is now usable from the Pipeline object by `Olivier Grisel`\_. - Guide :ref:`performance-howto` by `Olivier Grisel`\_. - Fixes for memory leaks in libsvm bindings, 64-bit safer BallTree by Lars Buitinck. - bug and style fixing in :ref:`k\_means` algorithm by Jan Schlüter. - Add attribute converged to Gaussian Mixture Models by Vincent Schut. - Implemented ``transform``, ``predict\_log\_proba`` in :class:`~discriminant\_analysis.LinearDiscriminantAnalysis` By `Mathieu Blondel`\_. - Refactoring in the :ref:`svm` module and bug fixes by `Fabian Pedregosa`\_, `Gael Varoquaux`\_ and Amit Aides. - Refactored SGD module (removed code duplication, better variable naming), added interface for sample weight by `Peter Prettenhofer`\_. - Wrapped BallTree with Cython by Thouis (Ray) Jones. - Added function :func:`svm.l1\_min\_c` by Paolo Losi. - Typos, doc style, etc. by `Yaroslav Halchenko`\_, `Gael Varoquaux`\_, `Olivier Grisel`\_, Yann Malet, `Nicolas Pinto`\_, Lars Buitinck and `Fabian Pedregosa`\_. People ------- People that made this release possible preceded by number of commits: - 159 `Olivier Grisel`\_ - 96 `Gael Varoquaux`\_ - 96 `Vlad Niculae`\_ - 94 `Fabian Pedregosa`\_ - 36 `Alexandre Gramfort`\_ - 32 Paolo Losi - 31 `Edouard Duchesnay`\_ - 30 `Mathieu Blondel`\_ - 25 `Peter Prettenhofer`\_ - 22 `Nicolas Pinto`\_ - 11 :user:`Virgile Fritsch ` - 7 Lars Buitinck - 6 Vincent Michel - 5 `Bertrand Thirion`\_ - 4 Thouis (Ray) Jones - 4 Vincent Schut - 3 Jan Schlüter - 2 Julien Miotte - 2 `Matthieu Perrot`\_ - 2 Yann Malet - 2 `Yaroslav Halchenko`\_ - 1 Amit Aides - 1 `Andreas Müller`\_ - 1 Feth Arezki - 1 Meng Xinfan .. \_changes\_0\_7: Version 0.7 =========== \*\*March 2, 2011\*\* scikit-learn 0.7 was released in March 2011, roughly three months after the 0.6 release. 
This release is marked by the speed improvements in existing algorithms like k-Nearest Neighbors and K-Means algorithm and by the inclusion of an efficient algorithm for computing the Ridge Generalized Cross Validation solution. Unlike the preceding release, no new modules were added to this release. Changelog ---------
- Performance improvements for Gaussian Mixture Model sampling [Jan Schlüter]. - Implementation of efficient leave-one-out cross-validated Ridge in :class:`~linear\_model.RidgeCV` [`Mathieu Blondel`\_] - Better handling of collinearity and early stopping in :func:`linear\_model.lars\_path` [`Alexandre Gramfort`\_ and `Fabian Pedregosa`\_]. - Fixes for liblinear ordering of labels and sign of coefficients [Dan Yamins, Paolo Losi, `Mathieu Blondel`\_ and `Fabian Pedregosa`\_]. - Performance improvements for Nearest Neighbors algorithm in high-dimensional spaces [`Fabian Pedregosa`\_]. - Performance improvements for :class:`~cluster.KMeans` [`Gael Varoquaux`\_ and `James Bergstra`\_]. - Sanity checks for SVM-based classes [`Mathieu Blondel`\_]. - Refactoring of `neighbors.NeighborsClassifier` and :func:`neighbors.kneighbors\_graph`: added different algorithms for the k-Nearest Neighbor Search and implemented a more stable algorithm for finding barycenter weights. Also added some developer documentation for this module, see `notes\_neighbors `\_ for more information [`Fabian Pedregosa`\_]. - Documentation improvements: added `pca.RandomizedPCA` and :class:`~linear\_model.LogisticRegression` to the class reference. Also added references of matrices used for clustering and other fixes [`Gael Varoquaux`\_, `Fabian Pedregosa`\_, `Mathieu Blondel`\_, `Olivier Grisel`\_, Virgile Fritsch, Emmanuelle Gouillart] - Bound ``decision\_function`` in classes that make use of liblinear\_, dense and sparse variants, like :class:`~svm.LinearSVC` or :class:`~linear\_model.LogisticRegression` [`Fabian Pedregosa`\_].
- Performance and API improvements to :func:`metrics.pairwise.euclidean\_distances` and to `pca.RandomizedPCA` [`James Bergstra`\_]. - Fix compilation issues under NetBSD [Kamel Ibn Hassen Derouiche] - Allow input sequences of different lengths in `hmm.GaussianHMM` [`Ron Weiss`\_]. - Fix bug in affinity propagation caused by incorrect indexing [Xinfan Meng] People ------ People that made this release possible preceded by number of commits: - 85 `Fabian Pedregosa`\_ - 67 `Mathieu Blondel`\_ - 20 `Alexandre Gramfort`\_ - 19 `James Bergstra`\_ - 14 Dan Yamins - 13 `Olivier Grisel`\_ - 12 `Gael Varoquaux`\_ - 4 `Edouard Duchesnay`\_ - 4 `Ron Weiss`\_ - 2 Satrajit Ghosh - 2 Vincent Dubourg - 1 Emmanuelle Gouillart - 1 Kamel Ibn Hassen Derouiche - 1 Paolo Losi - 1 VirgileFritsch - 1 `Yaroslav Halchenko`\_ - 1 Xinfan Meng .. \_changes\_0\_6: Version 0.6 =========== \*\*December 21, 2010\*\* scikit-learn 0.6 was released on December 2010. It is marked by the inclusion of several new modules and a general renaming of old ones. It is also marked by the inclusion of new example, including applications to real-world datasets. Changelog --------- - New `stochastic gradient `\_ descent module by Peter Prettenhofer. The module comes with complete documentation and examples. - Improved svm module: memory consumption has been reduced by 50%, heuristic to automatically set class weights, possibility to assign weights to samples (see :ref:`sphx\_glr\_auto\_examples\_svm\_plot\_weighted\_samples.py` for an example). - New :ref:`gaussian\_process` module by Vincent Dubourg. This module also has great documentation and some very neat examples. See example\_gaussian\_process\_plot\_gp\_regression.py or example\_gaussian\_process\_plot\_gp\_probabilistic\_classification\_after\_regression.py for a taste of what can be done. 
- It is now possible to use liblinear's Multi-class SVC (option multi\_class in :class:`~svm.LinearSVC`) - New features and performance improvements of text feature extraction. - Improved sparse matrix support, both in the main classes (:class:`~model\_selection.GridSearchCV`) and in the modules sklearn.svm.sparse and sklearn.linear\_model.sparse. - Lots of cool new examples and a new section that uses real-world datasets was created. These include: :ref:`sphx\_glr\_auto\_examples\_applications\_plot\_face\_recognition.py`, :ref:`sphx\_glr\_auto\_examples\_applications\_plot\_species\_distribution\_modeling.py`, :ref:`sphx\_glr\_auto\_examples\_applications\_wikipedia\_principal\_eigenvector.py` and others. - Faster :ref:`least\_angle\_regression` algorithm. It is now 2x faster than the R version in the worst case and up to 10x faster in some cases. - Faster coordinate descent algorithm. In particular, the full path version of lasso (:func:`linear\_model.lasso\_path`) is more than 200x faster than before. - It is now possible to get
probability estimates from a :class:`~linear\_model.LogisticRegression` model. - module renaming: the glm module has been renamed to linear\_model, the gmm module has been included into the more general mixture model and the sgd module has been included in linear\_model. - Lots of bug fixes and documentation improvements. People ------ People that made this release possible preceded by number of commits: \* 207 `Olivier Grisel`\_ \* 167 `Fabian Pedregosa`\_ \* 97 `Peter Prettenhofer`\_ \* 68 `Alexandre Gramfort`\_ \* 59 `Mathieu Blondel`\_ \* 55 `Gael Varoquaux`\_ \* 33 Vincent Dubourg \* 21 `Ron Weiss`\_ \* 9 Bertrand Thirion \* 3 `Alexandre Passos`\_ \* 3 Anne-Laure Fouque \* 2 Ronan Amicel \* 1 `Christian Osendorfer`\_ .. \_changes\_0\_5: Version 0.5 =========== \*\*October 11, 2010\*\* Changelog --------- New classes ----------- - Support for sparse matrices in some classifiers of modules ``svm`` and ``linear\_model`` (see `svm.sparse.SVC`, `svm.sparse.SVR`, `svm.sparse.LinearSVC`, `linear\_model.sparse.Lasso`, `linear\_model.sparse.ElasticNet`) - New :class:`~pipeline.Pipeline` object to compose different estimators. - Recursive Feature Elimination routines in module :ref:`feature\_selection`. - Addition of various classes capable of cross validation in the linear\_model module (:class:`~linear\_model.LassoCV`, :class:`~linear\_model.ElasticNetCV`, etc.). - New, more efficient LARS algorithm implementation. The Lasso variant of the algorithm is also implemented. See :class:`~linear\_model.lars\_path`, :class:`~linear\_model.Lars` and :class:`~linear\_model.LassoLars`.
- New Hidden Markov Models module (see classes `hmm.GaussianHMM`, `hmm.MultinomialHMM`, `hmm.GMMHMM`) - New module feature\_extraction (see :ref:`class reference `) - New FastICA algorithm in module sklearn.fastica Documentation ------------- - Improved documentation for many modules, now separating narrative documentation from the class reference. As an example, see `documentation for the SVM module `\_ and the complete `class reference `\_. Fixes ----- - API changes: adhere variable names to PEP-8, give more meaningful names. - Fixes for svm module to run on a shared memory context (multiprocessing). - It is again possible to generate latex (and thus PDF) from the sphinx docs. Examples -------- - new examples using some of the mlcomp datasets: ``sphx\_glr\_auto\_examples\_mlcomp\_sparse\_document\_classification.py`` (since removed) and :ref:`sphx\_glr\_auto\_examples\_text\_plot\_document\_classification\_20newsgroups.py` - Many more examples. `See here `\_ the full list of examples. External dependencies --------------------- - Joblib is now a dependency of this package, although it is shipped with (sklearn.externals.joblib). Removed modules --------------- - Module ann (Artificial Neural Networks) has been removed from the distribution. Users wanting this sort of algorithms should take a look into pybrain. Misc ---- - New sphinx theme for the web page. 
Authors ------- The following is a list of authors for this release, preceded by number of commits: \* 262 Fabian Pedregosa \* 240 Gael Varoquaux \* 149 Alexandre Gramfort \* 116 Olivier Grisel \* 40 Vincent Michel \* 38 Ron Weiss \* 23 Matthieu Perrot \* 10 Bertrand Thirion \* 7 Yaroslav Halchenko \* 9 VirgileFritsch \* 6 Edouard Duchesnay \* 4 Mathieu Blondel \* 1 Ariel Rokem \* 1 Matthieu Brucher Version 0.4 =========== \*\*August 26, 2010\*\* Changelog --------- Major changes in this release include: - Coordinate Descent algorithm (Lasso, ElasticNet) refactoring & speed improvements (roughly 100x faster). - Coordinate Descent Refactoring (and bug fixing) for consistency with R's package GLMNET. - New metrics module. - New GMM module contributed by Ron Weiss. - Implementation of the LARS algorithm (without Lasso variant for now). - feature\_selection module redesign. - Migration to GIT as version control system. - Removal of obsolete attrselect module. - Rename of private compiled extensions (added underscore). - Removal of legacy unmaintained code. - Documentation improvements
(both docstring and rst). - Improvement of the build system to (optionally) link with MKL. Also, provide a lite BLAS implementation in case no system-wide BLAS is found. - Lots of new examples. - Many, many bug fixes ... Authors ------- The committer list for this release is the following (preceded by number of commits): \* 143 Fabian Pedregosa \* 35 Alexandre Gramfort \* 34 Olivier Grisel \* 11 Gael Varoquaux \* 5 Yaroslav Halchenko \* 2 Vincent Michel \* 1 Chris Filo Gorgolewski Earlier versions ================ Earlier versions included contributions by Fred Mailhot, David Cooke, David Huard, Dave Morrill, Ed Schofield, Travis Oliphant, Pearu Peterson.
.. include:: _contributors.rst

.. currentmodule:: sklearn

============
Version 0.17
============

.. _changes_0_17_1:

Version 0.17.1
==============

**February 18, 2016**

Changelog
---------

Bug fixes
.........

- Upgraded vendored joblib to version 0.9.4, which fixes an important bug in ``joblib.Parallel`` that could silently yield wrong results when working on datasets larger than 1MB: https://github.com/joblib/joblib/blob/0.9.4/CHANGES.rst
- Fixed reading of Bunch pickles generated with scikit-learn version <= 0.16. This can affect users who have already downloaded a dataset with scikit-learn 0.16 and are loading it with scikit-learn 0.17. See :issue:`6196` for how this affected :func:`datasets.fetch_20newsgroups`. By `Loic Esteve`_.
- Fixed a bug that prevented using the ROC AUC score to perform grid search on several CPUs / cores on large arrays. See :issue:`6147`. By `Olivier Grisel`_.
- Fixed a bug that prevented properly setting the ``presort`` parameter in :class:`ensemble.GradientBoostingRegressor`. See :issue:`5857`. By Andrew McCulloh.
- Fixed a joblib error when evaluating the perplexity of a :class:`decomposition.LatentDirichletAllocation` model. See :issue:`6258`. By Chyi-Kwei Yau.

.. _changes_0_17:

Version 0.17
============

**November 5, 2015**

Changelog
---------

New features
............

- All the Scaler classes but :class:`preprocessing.RobustScaler` can be fitted online by calling `partial_fit`. By :user:`Giorgio Patrini`.
- The new class :class:`ensemble.VotingClassifier` implements a "majority rule" / "soft voting" ensemble classifier to combine estimators for classification. By `Sebastian Raschka`_.
- The new class :class:`preprocessing.RobustScaler` provides an alternative to :class:`preprocessing.StandardScaler` for feature-wise centering and range normalization that is robust to outliers. By :user:`Thomas Unterthiner`.
- The new class :class:`preprocessing.MaxAbsScaler` provides an alternative to :class:`preprocessing.MinMaxScaler` for feature-wise range normalization when the data is already centered or sparse. By :user:`Thomas Unterthiner`.
- The new class :class:`preprocessing.FunctionTransformer` turns a Python function into a ``Pipeline``-compatible transformer object. By Joe Jevnik.
- The new classes `cross_validation.LabelKFold` and `cross_validation.LabelShuffleSplit` generate train-test folds, respectively similar to `cross_validation.KFold` and `cross_validation.ShuffleSplit`, except that the folds are conditioned on a label array. By `Brian McFee`_, :user:`Jean Kossaifi` and `Gilles Louppe`_.
- :class:`decomposition.LatentDirichletAllocation` implements the Latent Dirichlet Allocation topic model with online variational inference. By :user:`Chyi-Kwei Yau`, with code based on an implementation by Matt Hoffman. (:issue:`3659`)
- The new solver ``sag`` implements a Stochastic Average Gradient descent and is available in both :class:`linear_model.LogisticRegression` and :class:`linear_model.Ridge`. This solver is very efficient for large datasets. By :user:`Danny Sullivan` and `Tom Dupre la Tour`_. (:issue:`4738`)
- The new solver ``cd`` implements Coordinate Descent in :class:`decomposition.NMF`. The previous solver, based on Projected Gradient, is still available by setting the new parameter ``solver`` to ``pg``, but is deprecated and will be removed in 0.19, along with `decomposition.ProjectedGradientNMF` and the parameters ``sparseness``, ``eta``, ``beta`` and ``nls_max_iter``. New parameters ``alpha`` and ``l1_ratio`` control L1 and L2 regularization, and ``shuffle`` adds a shuffling step in the ``cd`` solver. By `Tom Dupre la Tour`_ and `Mathieu Blondel`_.

Enhancements
............

- :class:`manifold.TSNE` now supports approximate optimization via the Barnes-Hut method, leading to much faster fitting. By Christopher Erick Moody.
  (:issue:`4025`)
- :class:`cluster.MeanShift` now supports parallel execution, as implemented in the ``mean_shift`` function. By :user:`Martino Sorbaro`.
- :class:`naive_bayes.GaussianNB` now supports fitting with ``sample_weight``. By `Jan Hendrik Metzen`_.
- :class:`dummy.DummyClassifier` now supports a prior fitting strategy. By `Arnaud Joly`_.
- Added a ``fit_predict`` method for `mixture.GMM` and subclasses. By :user:`Cory Lorenz`.
- Added the :func:`metrics.label_ranking_loss` metric. By `Arnaud Joly`_.
- Added the :func:`metrics.cohen_kappa_score` metric.
- Added a ``warm_start`` constructor parameter to the bagging ensemble models to increase the size of the ensemble. By :user:`Tim Head`.
- Added the option to use multi-output regression metrics without averaging. By Konstantin Shmelkov and :user:`Michael Eickenberg`.
- Added the ``stratify`` option to `cross_validation.train_test_split` for stratified splitting. By Miroslav Batchkarov.
- The :func:`tree.export_graphviz` function now supports aesthetic improvements for :class:`tree.DecisionTreeClassifier` and :class:`tree.DecisionTreeRegressor`, including
  options for coloring nodes by their majority class or impurity, showing variable names, and using node proportions instead of raw sample counts. By `Trevor Stephens`_.
- Improved the speed of the ``newton-cg`` solver in :class:`linear_model.LogisticRegression` by avoiding loss computation. By `Mathieu Blondel`_ and `Tom Dupre la Tour`_.
- The ``class_weight="auto"`` heuristic in classifiers supporting ``class_weight`` was deprecated and replaced by the ``class_weight="balanced"`` option, which has a simpler formula and interpretation. By `Hanna Wallach`_ and `Andreas Müller`_.
- Added a ``class_weight`` parameter to automatically weight samples by class frequency for :class:`linear_model.PassiveAggressiveClassifier`. By `Trevor Stephens`_.
- Added backlinks from the API reference pages to the user guide. By `Andreas Müller`_.
- The ``labels`` parameter to :func:`sklearn.metrics.f1_score`, :func:`sklearn.metrics.fbeta_score`, :func:`sklearn.metrics.recall_score` and :func:`sklearn.metrics.precision_score` has been extended. It is now possible to ignore one or more labels, such as where a multiclass problem has a majority class to ignore. By `Joel Nothman`_.
- Added ``sample_weight`` support to :class:`linear_model.RidgeClassifier`. By `Trevor Stephens`_.
- Provide an option for sparse output from :func:`sklearn.metrics.pairwise.cosine_similarity`. By :user:`Jaidev Deshpande`.
- Added :func:`preprocessing.minmax_scale` to provide a function interface for :class:`preprocessing.MinMaxScaler`.
  By :user:`Thomas Unterthiner`.
- ``dump_svmlight_file`` now handles multi-label datasets. By Chih-Wei Chang.
- RCV1 dataset loader (:func:`sklearn.datasets.fetch_rcv1`). By `Tom Dupre la Tour`_.
- The "Wisconsin Breast Cancer" classical two-class classification dataset is now included in scikit-learn, available with :func:`datasets.load_breast_cancer`.
- Upgraded to joblib 0.9.3 to benefit from the new automatic batching of short tasks. This makes it possible for scikit-learn to benefit from parallelism when many very short tasks are executed in parallel, for instance by the `grid_search.GridSearchCV` meta-estimator with ``n_jobs > 1`` used with a large grid of parameters on a small dataset. By `Vlad Niculae`_, `Olivier Grisel`_ and `Loic Esteve`_.
- For more details about changes in joblib 0.9.3 see the release notes: https://github.com/joblib/joblib/blob/master/CHANGES.rst#release-093
- Improved speed (3 times per iteration) of `decomposition.DictLearning` with the coordinate descent method from :class:`linear_model.Lasso`. By :user:`Arthur Mensch`.
- Parallel processing (threaded) for queries of nearest neighbors (using the ball-tree). By Nikolay Mayorov.
- Allow :func:`datasets.make_multilabel_classification` to output a sparse ``y``. By Kashif Rasul.
- :class:`cluster.DBSCAN` now accepts a sparse matrix of precomputed distances, allowing memory-efficient distance precomputation. By `Joel Nothman`_.
- :class:`tree.DecisionTreeClassifier` now exposes an ``apply`` method for retrieving the leaf indices samples are predicted as. By :user:`Daniel Galvez` and `Gilles Louppe`_.
- Speed up decision tree regressors, random forest regressors, extra trees regressors and gradient boosting estimators by computing a proxy of the impurity improvement during the tree growth. The proxy quantity is such that the split that maximizes this value also maximizes the impurity improvement. By `Arnaud Joly`_, :user:`Jacob Schreiber` and `Gilles Louppe`_.
- Speed up tree-based methods by reducing the number of computations needed when computing the impurity measure, taking into account linear relationships of the computed statistics. The effect is particularly visible with extra trees and on datasets with categorical or sparse features. By `Arnaud Joly`_.
- :class:`ensemble.GradientBoostingRegressor` and :class:`ensemble.GradientBoostingClassifier` now expose an ``apply`` method for retrieving the leaf indices each sample ends up in under each tree. By :user:`Jacob Schreiber`.
- Added ``sample_weight`` support to :class:`linear_model.LinearRegression`. By Sonny Hu. (:issue:`4881`)
- Added ``n_iter_without_progress`` to :class:`manifold.TSNE` to control the stopping criterion. By Santi Villalba. (:issue:`5186`)
- Added an optional parameter ``random_state`` in :class:`linear_model.Ridge` to set the seed of the pseudo random generator used in the ``sag`` solver. By `Tom
  Dupre la Tour`_.
- Added an optional parameter ``warm_start`` in :class:`linear_model.LogisticRegression`. If set to True, the solvers ``lbfgs``, ``newton-cg`` and ``sag`` will be initialized with the coefficients computed in the previous fit. By `Tom Dupre la Tour`_.
- Added ``sample_weight`` support to :class:`linear_model.LogisticRegression` for the ``lbfgs``, ``newton-cg``, and ``sag`` solvers. By `Valentin Stolbunov`_. Support added to the ``liblinear`` solver. By `Manoj Kumar`_.
- Added an optional parameter ``presort`` to :class:`ensemble.GradientBoostingRegressor` and :class:`ensemble.GradientBoostingClassifier`, keeping default behavior the same. This allows gradient boosters to turn off presorting when building deep trees or using sparse data. By :user:`Jacob Schreiber`.
- Altered :func:`metrics.roc_curve` to drop unnecessary thresholds by default. By :user:`Graham Clenaghan`.
- Added the :class:`feature_selection.SelectFromModel` meta-transformer, which can be used along with estimators that have a `coef_` or `feature_importances_` attribute to select important features of the input data. By :user:`Maheshakya Wijewardena`, `Joel Nothman`_ and `Manoj Kumar`_.
- Added :func:`metrics.pairwise.laplacian_kernel`. By `Clyde Fare`_.
- `covariance.GraphLasso` allows separate control of the convergence criterion for the Elastic-Net subproblem via the ``enet_tol`` parameter.
- Improved verbosity in :class:`decomposition.DictionaryLearning`.
- :class:`ensemble.RandomForestClassifier` and :class:`ensemble.RandomForestRegressor` no longer explicitly store the samples used in bagging, resulting in a much reduced memory footprint for storing random forest models.
- Added a ``positive`` option to :class:`linear_model.Lars` and :func:`linear_model.lars_path` to force coefficients to be positive. (:issue:`5131`)
- Added the ``X_norm_squared`` parameter to :func:`metrics.pairwise.euclidean_distances` to provide precomputed squared norms for ``X``.
- Added the ``fit_predict`` method to :class:`pipeline.Pipeline`.
- Added the :func:`preprocessing.minmax_scale` function.

Bug fixes
.........

- Fixed non-determinism in :class:`dummy.DummyClassifier` with sparse multi-label output. By `Andreas Müller`_.
- Fixed the output shape of :class:`linear_model.RANSACRegressor` to ``(n_samples, )``. By `Andreas Müller`_.
- Fixed a bug in `decomposition.DictLearning` when ``n_jobs < 0``. By `Andreas Müller`_.
- Fixed a bug where `grid_search.RandomizedSearchCV` could consume a lot of memory for large discrete grids. By `Joel Nothman`_.
- Fixed a bug in :class:`linear_model.LogisticRegressionCV` where `penalty` was ignored in the final fit. By `Manoj Kumar`_.
- Fixed a bug in `ensemble.forest.ForestClassifier` when computing ``oob_score`` with ``X`` a ``sparse.csc_matrix``. By :user:`Ankur Ankan`.
- All regressors now consistently handle and warn when given ``y`` that is of shape ``(n_samples, 1)``. By `Andreas Müller`_ and Henry Lin. (:issue:`5431`)
- Fix in :class:`cluster.KMeans` cluster reassignment for sparse input. By `Lars Buitinck`_.
- Fixed a bug in :class:`discriminant_analysis.LinearDiscriminantAnalysis` that could cause asymmetric covariance matrices when using shrinkage. By `Martin Billinger`_.
- Fixed `cross_validation.cross_val_predict` for estimators with sparse predictions. By Buddha Prakash.
- Fixed the ``predict_proba`` method of :class:`linear_model.LogisticRegression` to use soft-max instead of one-vs-rest normalization. By `Manoj Kumar`_. (:issue:`5182`)
- Fixed the `partial_fit` method of :class:`linear_model.SGDClassifier` when called with ``average=True``. By :user:`Andrew Lamb`. (:issue:`5282`)
- Dataset fetchers use different filenames under Python 2 and Python 3 to avoid pickling compatibility issues. By `Olivier Grisel`_. (:issue:`5355`)
- Fixed a bug in :class:`naive_bayes.GaussianNB` which caused classification results to depend on scale. By `Jake Vanderplas`_.
- Fixed temporarily :class:`linear_model.Ridge`, which was incorrect when fitting the intercept in the case of sparse data. The fix automatically changes the solver to 'sag' in this case. :issue:`5360` by `Tom Dupre la Tour`_.
- Fixed a performance bug in `decomposition.RandomizedPCA` on data with a large number of features and fewer samples. (:issue:`4478`) By `Andreas Müller`_, `Loic Esteve`_ and :user:`Giorgio Patrini`.
- Fixed a bug in `cross_decomposition.PLS` that yielded unstable and platform dependent output, and failed on `fit_transform`. By :user:`Arthur Mensch`.
- Fixes to
  the ``Bunch`` class used to store datasets.
- Fixed `ensemble.plot_partial_dependence` ignoring the ``percentiles`` parameter.
- Providing a ``set`` as vocabulary in ``CountVectorizer`` no longer leads to inconsistent results when pickling.
- Fixed the conditions on when a precomputed Gram matrix needs to be recomputed in :class:`linear_model.LinearRegression`, :class:`linear_model.OrthogonalMatchingPursuit`, :class:`linear_model.Lasso` and :class:`linear_model.ElasticNet`.
- Fixed inconsistent memory layout in the coordinate descent solver that affected `linear_model.DictionaryLearning` and `covariance.GraphLasso`. (:issue:`5337`) By `Olivier Grisel`_.
- :class:`manifold.LocallyLinearEmbedding` no longer ignores the ``reg`` parameter.
- Nearest Neighbor estimators with custom distance metrics can now be pickled. (:issue:`4362`)
- Fixed a bug in :class:`pipeline.FeatureUnion` where ``transformer_weights`` were not properly handled when performing grid-searches.
- Fixed a bug in :class:`linear_model.LogisticRegression` and :class:`linear_model.LogisticRegressionCV` when using ``class_weight='balanced'`` or ``class_weight='auto'``. By `Tom Dupre la Tour`_.
- Fixed bug :issue:`5495` when doing OVR(SVC(decision_function_shape="ovr")). Fixed by :user:`Elvis Dohmatob`.

API changes summary
-------------------

- The attributes `data_min`, `data_max` and `data_range` in :class:`preprocessing.MinMaxScaler` are deprecated and won't be available from 0.19. Instead, the class now exposes `data_min_`, `data_max_` and `data_range_`. By :user:`Giorgio Patrini`.
- All Scaler classes now have a `scale_` attribute, the feature-wise rescaling applied by their `transform` methods. The old attribute `std_` in :class:`preprocessing.StandardScaler` is deprecated and superseded by `scale_`; it won't be available in 0.19. By :user:`Giorgio Patrini`.
- :class:`svm.SVC` and :class:`svm.NuSVC` now have a ``decision_function_shape`` parameter to make their decision function of shape ``(n_samples, n_classes)`` by setting ``decision_function_shape='ovr'``. This will be the default behavior starting in 0.19. By `Andreas Müller`_.
- Passing 1D data arrays as input to estimators is now deprecated, as it caused confusion in how the array elements should be interpreted as features or as samples. All data arrays are now expected to be explicitly shaped ``(n_samples, n_features)``. By :user:`Vighnesh Birodkar`.
- `lda.LDA` and `qda.QDA` have been moved to :class:`discriminant_analysis.LinearDiscriminantAnalysis` and :class:`discriminant_analysis.QuadraticDiscriminantAnalysis`.
- The ``store_covariance`` and ``tol`` parameters have been moved from the fit method to the constructor in :class:`discriminant_analysis.LinearDiscriminantAnalysis`, and the ``store_covariances`` and ``tol`` parameters have been moved from the fit method to the constructor in :class:`discriminant_analysis.QuadraticDiscriminantAnalysis`.
- Models inheriting from ``_LearntSelectorMixin`` will no longer support the transform methods (i.e. RandomForests, GradientBoosting, LogisticRegression, DecisionTrees, SVMs and SGD related models). Wrap these models around the meta-transformer :class:`feature_selection.SelectFromModel` to remove features (according to `coefs_` or `feature_importances_`) which are below a certain threshold value instead.
- :class:`cluster.KMeans` re-runs cluster-assignments in case of non-convergence, to ensure consistency of ``predict(X)`` and ``labels_``. By :user:`Vighnesh Birodkar`.
- Classifier and Regressor models are now tagged as such using the ``_estimator_type`` attribute.
- Cross-validation iterators always provide indices into training and test set, not boolean masks.
- The ``decision_function`` on all regressors was deprecated and will be removed in 0.19. Use ``predict`` instead.
- `datasets.load_lfw_pairs` is deprecated and will be removed in 0.19. Use :func:`datasets.fetch_lfw_pairs` instead.
- The deprecated ``hmm`` module was removed.
- The deprecated ``Bootstrap`` cross-validation iterator was removed.
- The deprecated ``Ward`` and ``WardAgglomerative`` classes have been removed. Use :class:`cluster.AgglomerativeClustering` instead.
- `cross_validation.check_cv` is now a public function.
- The property ``residues_`` of :class:`linear_model.LinearRegression` is deprecated and will be removed in 0.19.
- The deprecated ``n_jobs`` parameter of :class:`linear_model.LinearRegression` has been moved to the constructor.
- Removed deprecated ``class_weight`` parameter from :class:`linear_model.SGDClassifier`'s ``fit`` method. Use the construction parameter instead.
- The deprecated support for the sequence of sequences (or list of lists) multilabel format was removed. To convert to and from the supported binary indicator matrix format, use :class:`MultiLabelBinarizer`.
- The behavior of calling the ``inverse_transform`` method of ``Pipeline.pipeline`` will change in 0.19. It will no longer reshape one-dimensional input to two-dimensional input.
- The deprecated attributes ``indicator_matrix_``, ``multilabel_`` and ``classes_`` of :class:`preprocessing.LabelBinarizer` were removed.
- Using ``gamma=0`` in :class:`svm.SVC` and :class:`svm.SVR` to automatically set the gamma to ``1. / n_features`` is deprecated and will be removed in 0.19. Use ``gamma="auto"`` instead.

Code Contributors
-----------------

Aaron Schumacher, Adithya Ganesh, akitty, Alexandre Gramfort, Alexey Grigorev, Ali Baharev, Allen Riddell, Ando Saabas, Andreas Mueller, Andrew Lamb, Anish Shah, Ankur Ankan, Anthony Erlinger, Ari Rouvinen, Arnaud Joly, Arnaud Rachez, Arthur Mensch, banilo, Barmaley.exe, benjaminirving, Boyuan Deng, Brett Naul, Brian McFee, Buddha Prakash, Chi Zhang, Chih-Wei Chang, Christof Angermueller, Christoph Gohlke, Christophe Bourguignat, Christopher Erick Moody, Chyi-Kwei Yau, Cindy Sridharan, CJ Carey, Clyde-fare, Cory Lorenz, Dan Blanchard, Daniel Galvez, Daniel Kronovet, Danny Sullivan, Data1010, David, David D Lowe, David Dotson, djipey, Dmitry Spikhalskiy, Donne Martin, Dougal J. Sutherland, Dougal Sutherland, edson duarte, Eduardo Caro, Eric Larson, Eric Martin, Erich Schubert, Fernando Carrillo, Frank C. Eckert, Frank Zalkow, Gael Varoquaux, Ganiev Ibraim, Gilles Louppe, Giorgio Patrini, giorgiop, Graham Clenaghan, Gryllos Prokopis, gwulfs, Henry Lin, Hsuan-Tien Lin, Immanuel Bayer, Ishank Gulati, Jack Martin, Jacob Schreiber, Jaidev Deshpande, Jake Vanderplas, Jan Hendrik Metzen, Jean Kossaifi, Jeffrey04, Jeremy, jfraj, Jiali Mei, Joe Jevnik, Joel Nothman, John Kirkham, John Wittenauer, Joseph, Joshua Loyal, Jungkook Park, KamalakerDadi, Kashif Rasul, Keith Goodman, Kian Ho, Konstantin Shmelkov, Kyler Brown, Lars Buitinck, Lilian Besson, Loic Esteve, Louis Tiao, maheshakya, Maheshakya Wijewardena, Manoj Kumar, MarkTab marktab.net, Martin Ku, Martin Spacek, MartinBpr, martinosorb, MaryanMorel, Masafumi Oyamada, Mathieu Blondel, Matt Krump, Matti Lyra, Maxim Kolganov, mbillinger, mhg, Michael Heilman, Michael Patterson, Miroslav Batchkarov, Nelle Varoquaux, Nicolas, Nikolay Mayorov, Olivier Grisel, Omer Katz, Óscar Nájera, Pauli Virtanen, Peter Fischer, Peter Prettenhofer, Phil Roth, pianomania, Preston Parry, Raghav RV, Rob Zinkov, Robert Layton, Rohan Ramanath, Saket Choudhary, Sam Zhang, santi, saurabh.bansod, scls19fr, Sebastian Raschka, Sebastian Saeger, Shivan Sornarajah, SimonPL, sinhrks, Skipper Seabold, Sonny Hu, sseg, Stephen Hoover, Steven De Gryze, Steven Seguin, Theodore Vasiloudis, Thomas Unterthiner, Tiago Freitas Pereira, Tian Wang, Tim Head, Timothy Hopper, tokoroten, Tom Dupré la Tour, Trevor Stephens, Valentin Stolbunov, Vighnesh Birodkar, Vinayak Mehta, Vincent, Vincent Michel, vstolbunov, wangz10, Wei Xue, Yucheng Low, Yury Zhauniarovich, Zac Stewart, zhai_pro, Zichen Wang
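Several of the 0.17 additions above (soft voting in ``VotingClassifier``, ``RobustScaler``, the ``stratify`` split option and ``load_breast_cancer``) can be combined in a short sketch. Note it uses modern import paths; in 0.17 itself `train_test_split` lived in the `cross_validation` module, which later became `model_selection`:

```python
# Sketch combining 0.17-era features with modern import paths.
from sklearn.datasets import load_breast_cancer        # dataset added in 0.17
from sklearn.ensemble import VotingClassifier          # added in 0.17
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler         # added in 0.17
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
# ``stratify`` (added to train_test_split in 0.17) preserves class ratios.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# "Soft" voting averages predicted class probabilities across estimators.
clf = VotingClassifier(
    estimators=[
        ("lr", make_pipeline(RobustScaler(), LogisticRegression(max_iter=1000))),
        ("tree", DecisionTreeClassifier(max_depth=3, random_state=0)),
    ],
    voting="soft",
)
clf.fit(X_tr, y_tr)
print(round(clf.score(X_te, y_te), 3))
```

Soft voting requires every estimator to implement `predict_proba`, which both the scaled logistic regression pipeline and the decision tree do.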
.. include:: _contributors.rst

.. currentmodule:: sklearn

.. _release_notes_1_8:

===========
Version 1.8
===========

For a short description of the main highlights of the release, please refer to :ref:`sphx_glr_auto_examples_release_highlights_plot_release_highlights_1_8_0.py`.

.. include:: changelog_legend.inc

.. towncrier release notes start

.. _changes_1_8_0:

Version 1.8.0
=============

**December 2025**

Changes impacting many modules
------------------------------

- |Efficiency| Improved CPU and memory usage in estimators and metric functions that rely on weighted percentiles, and better match the NumPy and SciPy (un-weighted) implementations of percentiles. By :user:`Lucy Liu` :pr:`31775`

Support for Array API
---------------------

Additional estimators and functions have been updated to include support for all `Array API`_ compliant inputs. See :ref:`array_api` for more details.

- |Feature| :class:`sklearn.preprocessing.StandardScaler` now supports Array API compliant inputs. By :user:`Alexander Fabisch`, :user:`Edoardo Abati`, :user:`Olivier Grisel` and :user:`Charles Hill`. :pr:`27113`
- |Feature| :class:`linear_model.RidgeCV`, :class:`linear_model.RidgeClassifier` and :class:`linear_model.RidgeClassifierCV` now support array API compatible inputs with `solver="svd"`. By :user:`Jérôme Dockès`. :pr:`27961`
- |Feature| :func:`metrics.pairwise.pairwise_kernels` for any kernel except "laplacian" and :func:`metrics.pairwise_distances` for metrics "cosine", "euclidean" and "l2" now support array API inputs. By :user:`Emily Chen` and :user:`Lucy Liu` :pr:`29822`
- |Feature| :func:`sklearn.metrics.confusion_matrix` now supports Array API compatible inputs. By :user:`Stefanie Senger` :pr:`30562`
- |Feature| :class:`sklearn.mixture.GaussianMixture` with `init_params="random"` or `init_params="random_from_data"` and `warm_start=False` now supports Array API compatible inputs.
  By :user:`Stefanie Senger` and :user:`Loïc Estève` :pr:`30777`
- |Feature| :func:`sklearn.metrics.roc_curve` now supports Array API compatible inputs. By :user:`Thomas Li` :pr:`30878`
- |Feature| :class:`preprocessing.PolynomialFeatures` now supports array API compatible inputs. By :user:`Omar Salman` :pr:`31580`
- |Feature| :class:`calibration.CalibratedClassifierCV` now supports array API compatible inputs with `method="temperature"` and when the underlying `estimator` also supports the array API. By :user:`Omar Salman` :pr:`32246`
- |Feature| :func:`sklearn.metrics.precision_recall_curve` now supports array API compatible inputs. By :user:`Lucy Liu` :pr:`32249`
- |Feature| :func:`sklearn.model_selection.cross_val_predict` now supports array API compatible inputs. By :user:`Omar Salman` :pr:`32270`
- |Feature| :func:`sklearn.metrics.brier_score_loss`, :func:`sklearn.metrics.log_loss`, :func:`sklearn.metrics.d2_brier_score` and :func:`sklearn.metrics.d2_log_loss_score` now support array API compatible inputs. By :user:`Omar Salman` :pr:`32422`
- |Feature| :class:`naive_bayes.GaussianNB` now supports array API compatible inputs. By :user:`Omar Salman` :pr:`32497`
- |Feature| :class:`preprocessing.LabelBinarizer` and :func:`preprocessing.label_binarize` now support numeric array API compatible inputs with `sparse_output=False`. By :user:`Virgil Chan`. :pr:`32582`
- |Feature| :func:`sklearn.metrics.det_curve` now supports Array API compliant inputs. By :user:`Josef Affourtit`. :pr:`32586`
- |Feature| :func:`sklearn.metrics.pairwise.manhattan_distances` now supports array API compatible inputs. By :user:`Omar Salman`. :pr:`32597`
- |Feature| :func:`sklearn.metrics.calinski_harabasz_score` now supports Array API compliant inputs. By :user:`Josef Affourtit`. :pr:`32600`
- |Feature| :func:`sklearn.metrics.balanced_accuracy_score` now supports array API compatible inputs. By :user:`Omar Salman`.
  :pr:`32604`
- |Feature| :func:`sklearn.metrics.pairwise.laplacian_kernel` now supports array API compatible inputs. By :user:`Zubair Shakoor`. :pr:`32613`
- |Feature| :func:`sklearn.metrics.cohen_kappa_score` now supports array API compatible inputs. By :user:`Omar Salman`. :pr:`32619`
- |Feature| :func:`sklearn.metrics.cluster.davies_bouldin_score` now supports Array API compliant inputs. By :user:`Josef Affourtit`. :pr:`32693`
- |Fix| Estimators with array API support no longer reject dataframe inputs when array API support is enabled. By :user:`Tim Head` :pr:`32838`

Metadata routing
----------------

Refer to the :ref:`Metadata Routing User Guide` for more details.

- |Fix| Fixed an issue where passing `sample_weight` to a :class:`Pipeline` inside a :class:`GridSearchCV` would raise an error with metadata routing enabled. By `Adrin Jalali`_. :pr:`31898`

Free-threaded CPython 3.14 support
----------------------------------

scikit-learn has support for free-threaded CPython; in particular, free-threaded wheels are available for all of our supported platforms on Python 3.14. Free-threaded (also known as nogil) CPython is a version of CPython that aims at enabling efficient multi-threaded use cases by removing the Global Interpreter Lock (GIL). If you want to try out free-threaded Python, the recommendation is to use Python 3.14, which has fixed a number of issues compared to Python 3.13. Feel free to try free-threaded on your use case and report
any issues! For more details about free-threaded CPython see the `py-free-threading doc`_, in particular `how to install a free-threaded CPython`_ and `Ecosystem compatibility tracking`_. By :user:`Loïc Estève` and :user:`Olivier Grisel` and many other people in the wider Scientific Python and CPython ecosystem, for example :user:`Nathan Goldbaum`, :user:`Ralf Gommers`, :user:`Edgar Andrés Margffoy Tuay`. :pr:`32079`

:mod:`sklearn.base`
-------------------

- |Feature| Refactored :meth:`dir` in :class:`BaseEstimator` to recognize the condition check in :meth:`available_if`. By :user:`John Hendricks` and :user:`Miguel Parece`. :pr:`31928`
- |Fix| Fixed the handling of pandas missing values in the HTML display of all estimators. By :user:`Dea María Léon`. :pr:`32341`

:mod:`sklearn.calibration`
--------------------------

- |Feature| Added temperature scaling method in :class:`calibration.CalibratedClassifierCV`. By :user:`Virgil Chan` and :user:`Christian Lorentzen`. :pr:`31068`

:mod:`sklearn.cluster`
----------------------

- |Efficiency| :func:`cluster.kmeans_plusplus` now uses `np.cumsum` directly, without extra numerical stability checks and without casting to `np.float64`. By :user:`Tiziano Zito` :pr:`31991`
- |Fix| The default value of the `copy` parameter in :class:`cluster.HDBSCAN` will change from `False` to `True` in 1.10 to avoid data modification and maintain consistency with other estimators. By :user:`Sarthak Puri`. :pr:`31973`

:mod:`sklearn.compose`
----------------------

- |Fix| The :class:`compose.ColumnTransformer` now correctly fits on data provided as a `polars.DataFrame` when any transformer has a sparse output. By :user:`Phillipp Gnan`.
:pr:`32188` :mod:`sklearn.covariance` ------------------------- - |Efficiency| :class:`sklearn.covariance.GraphicalLasso`, :class:`sklearn.covariance.GraphicalLassoCV` and :func:`sklearn.covariance.graphical\_lasso` with `mode="cd"` profit from the fit time performance improvement of :class:`sklearn.linear\_model.Lasso` by means of gap safe screening rules. By :user:`Christian Lorentzen `. :pr:`31987` - |Fix| Fixed uncontrollable randomness in :class:`sklearn.covariance.GraphicalLasso`, :class:`sklearn.covariance.GraphicalLassoCV` and :func:`sklearn.covariance.graphical\_lasso`. For `mode="cd"`, they now use cyclic coordinate descent. Before, it was random coordinate descent with uncontrollable random number seeding. By :user:`Christian Lorentzen `. :pr:`31987` - |Fix| Added correction to :class:`covariance.MinCovDet` to adjust for consistency at the normal distribution. This reduces the bias present when applying this method to data that is normally distributed. By :user:`Daniel Herrera-Esposito ` :pr:`32117` :mod:`sklearn.decomposition` ---------------------------- - |Efficiency| :class:`sklearn.decomposition.DictionaryLearning` and :class:`sklearn.decomposition.MiniBatchDictionaryLearning` with `fit\_algorithm="cd"`, :class:`sklearn.decomposition.SparseCoder` with `transform\_algorithm="lasso\_cd"`, :class:`sklearn.decomposition.MiniBatchSparsePCA`, :class:`sklearn.decomposition.SparsePCA`, :func:`sklearn.decomposition.dict\_learning` and :func:`sklearn.decomposition.dict\_learning\_online` with `method="cd"`, :func:`sklearn.decomposition.sparse\_encode` with `algorithm="lasso\_cd"` all profit from the fit time performance improvement of :class:`sklearn.linear\_model.Lasso` by means of gap safe screening rules. By :user:`Christian Lorentzen `. :pr:`31987` - |Enhancement| :class:`decomposition.SparseCoder` now follows the transformer API of scikit-learn. In addition, the :meth:`fit` method now validates the input and parameters. By :user:`François Paugam `. 
:pr:`32077` - |Fix| Add input checks to the `inverse\_transform` method of :class:`decomposition.PCA` and :class:`decomposition.IncrementalPCA`. By :user:`Ian Faust `. :pr:`29310` :mod:`sklearn.discriminant\_analysis` ------------------------------------ - |Feature| Added `solver`, `covariance\_estimator` and `shrinkage` in :class:`discriminant\_analysis.QuadraticDiscriminantAnalysis`. The resulting class is more similar to :class:`discriminant\_analysis.LinearDiscriminantAnalysis` and allows for more flexibility in the estimation of the covariance matrices. By :user:`Daniel Herrera-Esposito `. :pr:`32108` :mod:`sklearn.ensemble` ----------------------- - |Fix| :class:`ensemble.BaggingClassifier`, :class:`ensemble.BaggingRegressor` and :class:`ensemble.IsolationForest` now use `sample\_weight` to draw the samples instead of forwarding them multiplied by a uniformly sampled mask to the underlying estimators. Furthermore, when `max\_samples` is a float, it is now interpreted as a fraction of `sample\_weight.sum()` instead of `X.shape[0]`. The new default `max\_samples=None` draws `X.shape[0]` samples, irrespective of `sample\_weight`. By :user:`Antoine Baker `. :pr:`31414` and :pr:`32825` :mod:`sklearn.feature\_selection` -------------------------------- - |Enhancement| :class:`feature\_selection.SelectFromModel` now does not force `max\_features` to be less than or equal to the number of input features. By :user:`Thibault ` :pr:`31939` :mod:`sklearn.gaussian\_process` ------------------------------- - |Efficiency| Make :class:`GaussianProcessRegressor.predict` faster when `return\_cov` and `return\_std` are both `False`. By :user:`Rafael Ayllón Gavilán `. :pr:`31431` :mod:`sklearn.linear\_model` --------------------------- - |Efficiency| :class:`linear\_model.ElasticNet` and :class:`linear\_model.Lasso` with
`precompute=False` use less memory for dense `X` and are a bit faster. Previously, they used twice the memory of `X` even for Fortran-contiguous `X`. By :user:`Christian Lorentzen ` :pr:`31665` - |Efficiency| :class:`linear\_model.ElasticNet` and :class:`linear\_model.Lasso` avoid double input checking and are therefore a bit faster. By :user:`Christian Lorentzen `. :pr:`31848` - |Efficiency| :class:`linear\_model.ElasticNet`, :class:`linear\_model.ElasticNetCV`, :class:`linear\_model.Lasso`, :class:`linear\_model.LassoCV`, :class:`linear\_model.MultiTaskElasticNet`, :class:`linear\_model.MultiTaskElasticNetCV`, :class:`linear\_model.MultiTaskLasso` and :class:`linear\_model.MultiTaskLassoCV` are faster to fit by avoiding a BLAS level 1 (axpy) call in the innermost loop. Same for functions :func:`linear\_model.enet\_path` and :func:`linear\_model.lasso\_path`. By :user:`Christian Lorentzen ` :pr:`31956` and :pr:`31880` - |Efficiency| :class:`linear\_model.ElasticNetCV`, :class:`linear\_model.LassoCV`, :class:`linear\_model.MultiTaskElasticNetCV` and :class:`linear\_model.MultiTaskLassoCV` avoid an additional copy of `X` with default `copy\_X=True`. By :user:`Christian Lorentzen `.
:pr:`31946` - |Efficiency| :class:`linear\_model.ElasticNet`, :class:`linear\_model.ElasticNetCV`, :class:`linear\_model.Lasso`, :class:`linear\_model.LassoCV`, :class:`linear\_model.MultiTaskElasticNet`, :class:`linear\_model.MultiTaskElasticNetCV`, :class:`linear\_model.MultiTaskLasso`, :class:`linear\_model.MultiTaskLassoCV` as well as :func:`linear\_model.lasso\_path` and :func:`linear\_model.enet\_path` now implement gap safe screening rules in the coordinate descent solver for dense and sparse `X`. The fit-time speedup is particularly pronounced (10x is possible) when computing regularization paths like the \*CV-variants of the above estimators do. There is now an additional check of the stopping criterion before entering the main loop of descent steps. As the stopping criterion requires the computation of the dual gap, the screening happens whenever the dual gap is computed. By :user:`Christian Lorentzen ` :pr:`31882`, :pr:`31986`, :pr:`31987` and :pr:`32014` - |Enhancement| :class:`linear\_model.ElasticNet`, :class:`linear\_model.ElasticNetCV`, :class:`linear\_model.Lasso`, :class:`linear\_model.LassoCV`, :class:`MultiTaskElasticNet`, :class:`MultiTaskElasticNetCV`, :class:`MultiTaskLasso`, :class:`MultiTaskLassoCV`, as well as :func:`linear\_model.enet\_path` and :func:`linear\_model.lasso\_path` now use `dual gap <= tol` instead of `dual gap < tol` as the stopping criterion. The resulting coefficients might differ from previous versions of scikit-learn in rare cases. By :user:`Christian Lorentzen `. :pr:`31906` - |Fix| Fix the convergence criteria for SGD models, to avoid premature convergence when `tol != None`. This primarily impacts :class:`SGDOneClassSVM` but also affects :class:`SGDClassifier` and :class:`SGDRegressor`. Before this fix, only the loss function without penalty was used as the convergence check, whereas now the full objective with regularization is used.
By :user:`Guillaume Lemaitre ` and :user:`kostayScr ` :pr:`31856` - |Fix| The allowed parameter range for the initial learning rate `eta0` in :class:`linear\_model.SGDClassifier`, :class:`linear\_model.SGDOneClassSVM`, :class:`linear\_model.SGDRegressor` and :class:`linear\_model.Perceptron` changed from non-negative numbers to strictly positive numbers. As a consequence, the default `eta0` of :class:`linear\_model.SGDClassifier` and :class:`linear\_model.SGDOneClassSVM` changed from 0 to 0.01. But note that `eta0` is not used by the default learning rate "optimal" of those two estimators. By :user:`Christian Lorentzen `. :pr:`31933` - |Fix| :class:`linear\_model.LogisticRegressionCV` is able to handle CV splits where some class labels are missing in some folds. Before, it raised an error whenever a class label was missing in a fold. By :user:`Christian Lorentzen `. :pr:`32747` - |API| :class:`linear\_model.PassiveAggressiveClassifier` and :class:`linear\_model.PassiveAggressiveRegressor` are deprecated and will be removed in 1.10. Equivalent estimators are available with :class:`linear\_model.SGDClassifier` and :class:`SGDRegressor`, both of which expose the options `learning\_rate="pa1"` and `"pa2"`. The parameter `eta0` can be used to specify the aggressiveness parameter of the Passive-Aggressive algorithms, called C in the reference paper. By :user:`Christian Lorentzen ` :pr:`31932` and :pr:`29097` - |API| :class:`linear\_model.SGDClassifier`, :class:`linear\_model.SGDRegressor`, and :class:`linear\_model.SGDOneClassSVM` now deprecate negative values for the `power\_t` parameter. Using a negative value will raise a warning in version 1.8 and will raise an error in version 1.10. A value in the range [0.0, inf) must be used instead. By :user:`Ritvi Alagusankar ` :pr:`31474` - |API| An error is now raised in :class:`sklearn.linear\_model.LogisticRegression` when the liblinear solver is used and input X values are larger than 1e30, as the liblinear solver freezes otherwise.
By :user:`Shruti Nath `. :pr:`31888` - |API| :class:`linear\_model.LogisticRegressionCV` got a new parameter `use\_legacy\_attributes` to control the types and shapes of the fitted attributes `C\_`, `l1\_ratio\_`, `coefs\_paths\_`, `scores\_` and `n\_iter\_`. The current default value `True` keeps the legacy behaviour. If `False` then: - ``C\_`` is a float. - ``l1\_ratio\_`` is a float. - ``coefs\_paths\_`` is an ndarray of shape (n\_folds, n\_l1\_ratios, n\_cs, n\_classes, n\_features). For binary problems (n\_classes=2), the second-to-last dimension is 1. - ``scores\_`` is an ndarray of shape (n\_folds, n\_l1\_ratios, n\_cs). - ``n\_iter\_`` is an ndarray of shape (n\_folds, n\_l1\_ratios, n\_cs). In version 1.10, the default will change to `False` and `use\_legacy\_attributes` will be deprecated. In 1.12 `use\_legacy\_attributes` will be removed. By :user:`Christian Lorentzen `. :pr:`32114` - |API| Parameter `penalty` of :class:`linear\_model.LogisticRegression` and :class:`linear\_model.LogisticRegressionCV` is deprecated and will be removed in version 1.10. The equivalent behaviour can be obtained as follows: - for :class:`linear\_model.LogisticRegression` - use `l1\_ratio=0` instead of `penalty="l2"` - use `l1\_ratio=1` instead of `penalty="l1"` - use `0 < l1\_ratio < 1` instead of `penalty="elasticnet"`. :pr:`32659` - |API| The `n\_jobs` parameter of :class:`linear\_model.LogisticRegression` is deprecated and will be removed in 1.10. It has no effect since 1.8. By :user:`Loïc Estève `.
:pr:`32742` :mod:`sklearn.manifold` ----------------------- - |MajorFeature| :class:`manifold.ClassicalMDS` was implemented to perform classical MDS (eigendecomposition of the double-centered distance matrix). By :user:`Dmitry Kobak ` and :user:`Meekail Zain ` :pr:`31322` - |Feature| :class:`manifold.MDS` now supports arbitrary distance metrics (via `metric` and `metric\_params` parameters) and initialization via classical MDS (via `init` parameter). The `dissimilarity` parameter was deprecated. The old `metric` parameter was renamed into `metric\_mds`. By :user:`Dmitry Kobak ` :pr:`32229` - |Feature| :class:`manifold.TSNE` now supports PCA initialization with sparse input matrices. By :user:`Arturo Amor `. :pr:`32433` :mod:`sklearn.metrics` ---------------------- - |Feature| :func:`metrics.d2\_brier\_score` has been added which calculates the D^2 for the Brier score. By :user:`Omar Salman `. :pr:`28971` - |Feature| Add :func:`metrics.confusion\_matrix\_at\_thresholds` function that returns the number of true negatives, false positives, false negatives and true positives per threshold. By :user:`Success Moses `. :pr:`30134` - |Efficiency| Avoid redundant input validation in :func:`metrics.d2\_log\_loss\_score` leading to a 1.2x speedup in large scale benchmarks. By :user:`Olivier Grisel ` and :user:`Omar Salman ` :pr:`32356` - |Enhancement| :func:`metrics.median\_absolute\_error` now supports Array API compatible inputs. By :user:`Lucy Liu `. :pr:`31406` - |Enhancement| Improved the error message for sparse inputs for the following metrics: :func:`metrics.accuracy\_score`, :func:`metrics.multilabel\_confusion\_matrix`, :func:`metrics.jaccard\_score`, :func:`metrics.zero\_one\_loss`, :func:`metrics.f1\_score`, :func:`metrics.fbeta\_score`, :func:`metrics.precision\_recall\_fscore\_support`, :func:`metrics.class\_likelihood\_ratios`, :func:`metrics.precision\_score`, :func:`metrics.recall\_score`, :func:`metrics.classification\_report`, :func:`metrics.hamming\_loss`. 
By :user:`Lucy Liu `. :pr:`32047` - |Fix| :func:`metrics.median\_absolute\_error` now uses `\_averaged\_weighted\_percentile` instead of `\_weighted\_percentile` to calculate the median when `sample\_weight` is not `None`. This is equivalent to using the "averaged\_inverted\_cdf" instead of the "inverted\_cdf" quantile method, which gives results equivalent to `numpy.median` if equal weights are used. By :user:`Lucy Liu ` :pr:`30787` - |Fix| Additional `sample\_weight` checking has been added to :func:`metrics.accuracy\_score`, :func:`metrics.balanced\_accuracy\_score`, :func:`metrics.brier\_score\_loss`, :func:`metrics.class\_likelihood\_ratios`, :func:`metrics.classification\_report`, :func:`metrics.cohen\_kappa\_score`, :func:`metrics.confusion\_matrix`, :func:`metrics.f1\_score`, :func:`metrics.fbeta\_score`, :func:`metrics.hamming\_loss`, :func:`metrics.jaccard\_score`, :func:`metrics.matthews\_corrcoef`, :func:`metrics.multilabel\_confusion\_matrix`, :func:`metrics.precision\_recall\_fscore\_support`, :func:`metrics.precision\_score`, :func:`metrics.recall\_score` and :func:`metrics.zero\_one\_loss`. `sample\_weight` must be 1D, consistent in length with `y\_true` and `y\_pred`, and all values must be finite and not complex. By :user:`Lucy Liu `. :pr:`31701` - |Fix| `y\_pred` is deprecated in favour of `y\_score` in :func:`metrics.DetCurveDisplay.from\_predictions` and :func:`metrics.PrecisionRecallDisplay.from\_predictions`. `y\_pred` will be removed in v1.10. By :user:`Luis ` :pr:`31764` - |Fix| `repr` on a scorer which has been created with a `partial` `score\_func` now correctly works and uses the `repr` of the given `partial` object. By `Adrin Jalali`\_. :pr:`31891` - |Fix| kwargs specified in the `curve\_kwargs` parameter of :meth:`metrics.RocCurveDisplay.from\_cv\_results` now only overwrite their
corresponding default value before being passed to Matplotlib's `plot`. Previously, passing any `curve\_kwargs` would overwrite all default kwargs. By :user:`Lucy Liu `. :pr:`32313` - |Fix| Registered named scorer objects for :func:`metrics.d2\_brier\_score` and :func:`metrics.d2\_log\_loss\_score` and updated their input validation to be consistent with related metric functions. By :user:`Olivier Grisel ` and :user:`Omar Salman ` :pr:`32356` - |Fix| :meth:`metrics.RocCurveDisplay.from\_cv\_results` will now infer `pos\_label` as `estimator.classes\_[-1]`, using the estimator from `cv\_results`, when `pos\_label=None`. Previously, an error was raised when `pos\_label=None`. By :user:`Lucy Liu `. :pr:`32372` - |Fix| All classification metrics now raise a `ValueError` when required input arrays (`y\_pred`, `y\_true`, `y1`, `y2`, `pred\_decision`, or `y\_proba`) are empty. Previously, `accuracy\_score`, `class\_likelihood\_ratios`, `classification\_report`, `confusion\_matrix`, `hamming\_loss`, `jaccard\_score`, `matthews\_corrcoef`, `multilabel\_confusion\_matrix`, and `precision\_recall\_fscore\_support` did not raise this error consistently. By :user:`Stefanie Senger `. :pr:`32549` - |API| :func:`metrics.cluster.entropy` is deprecated and will be removed in v1.10. By :user:`Lucy Liu ` :pr:`31294` - |API| The `estimator\_name` parameter is deprecated in favour of `name` in :class:`metrics.PrecisionRecallDisplay` and will be removed in 1.10. By :user:`Lucy Liu `.
:pr:`32310` :mod:`sklearn.model\_selection` ------------------------------ - |Enhancement| :class:`model\_selection.StratifiedShuffleSplit` now specifies which classes have fewer than 2 members when raising a ``ValueError``, which makes it easier to identify the classes causing the error. By :user:`Marc Bresson ` :pr:`32265` - |Fix| Fix shuffle behaviour in :class:`model\_selection.StratifiedGroupKFold`. Now stratification among folds is also preserved when `shuffle=True`. By :user:`Pau Folch `. :pr:`32540` :mod:`sklearn.multiclass` ------------------------- - |Fix| Fix tie-breaking behavior in :class:`multiclass.OneVsRestClassifier` to match `np.argmax` tie-breaking behavior. By :user:`Lakshmi Krishnan `. :pr:`15504` :mod:`sklearn.naive\_bayes` -------------------------- - |Fix| :class:`naive\_bayes.GaussianNB` preserves the dtype of the fitted attributes according to the dtype of `X`. By :user:`Omar Salman ` :pr:`32497` :mod:`sklearn.preprocessing` ---------------------------- - |Enhancement| :class:`preprocessing.SplineTransformer` can now handle missing values with the parameter `handle\_missing`. By :user:`Stefanie Senger `. :pr:`28043` - |Enhancement| :class:`preprocessing.PowerTransformer` now raises a warning when NaN values are encountered in `inverse\_transform`, typically caused by extremely skewed data. By :user:`Roberto Mourao ` :pr:`29307` - |Enhancement| :class:`preprocessing.MaxAbsScaler` can now clip out-of-range values in held-out data with the parameter `clip`. By :user:`Hleb Levitski `. :pr:`31790` - |Fix| Fixed a bug in :class:`preprocessing.OneHotEncoder` where `handle\_unknown='warn'` incorrectly behaved like `'ignore'` instead of `'infrequent\_if\_exist'`.
By :user:`Nithurshen ` :pr:`32592` :mod:`sklearn.semi\_supervised` ------------------------------ - |Fix| User-written kernel results are now normalized in :class:`semi\_supervised.LabelPropagation` so that all row sums equal 1 even if the kernel gives asymmetric or non-uniform row sums. By :user:`Dan Schult `. :pr:`31924` :mod:`sklearn.tree` ------------------- - |Efficiency| :class:`tree.DecisionTreeRegressor` with `criterion="absolute\_error"` now runs much faster: O(n log n) complexity instead of the previous O(n^2), allowing it to scale to millions of data points, even hundreds of millions. By :user:`Arthur Lacote ` :pr:`32100` - |Fix| Make :func:`tree.export\_text` thread-safe. By :user:`Olivier Grisel `. :pr:`30041` - |Fix| :func:`~sklearn.tree.export\_graphviz` now raises a `ValueError` if given feature names are not all strings. By :user:`Guilherme Peixoto ` :pr:`31036` - |Fix| :class:`tree.DecisionTreeRegressor` with `criterion="absolute\_error"` would sometimes make sub-optimal splits (i.e. splits that don't minimize the absolute error). This is now fixed, hence retrained trees might give slightly different results. By :user:`Arthur Lacote ` :pr:`32100` - |Fix| Fixed a regression in :ref:`decision trees ` where almost constant features were not handled properly. By :user:`Sercan Turkmen `. :pr:`32259` - |Fix| Fixed splitting logic during training in :class:`tree.DecisionTree\*` (and consequently in :class:`ensemble.RandomForest\*`) for nodes containing near-constant feature values and missing values. Beforehand, trees were cut short if a constant feature was found, even if there was
more splitting that could be done on the basis of missing values. By :user:`Arthur Lacote ` :pr:`32274` - |Fix| Fix handling of missing values in method :func:`decision\_path` of trees (:class:`tree.DecisionTreeClassifier`, :class:`tree.DecisionTreeRegressor`, :class:`tree.ExtraTreeClassifier` and :class:`tree.ExtraTreeRegressor`). By :user:`Arthur Lacote `. :pr:`32280` - |Fix| Fix decision tree splitting with missing values present in some features. In some cases the last non-missing sample would not be partitioned correctly. By :user:`Tim Head ` and :user:`Arthur Lacote `. :pr:`32351` :mod:`sklearn.utils` -------------------- - |Efficiency| The function :func:`sklearn.utils.extmath.safe\_sparse\_dot` was improved by a dedicated Cython routine for the case of `a @ b` with sparse 2-dimensional `a` and `b` and when a dense output is required, i.e., `dense\_output=True`. This improves several algorithms in scikit-learn when dealing with sparse arrays (or matrices). By :user:`Christian Lorentzen `. :pr:`31952` - |Enhancement| The parameter table in the HTML representation of all scikit-learn estimators and more generally of estimators inheriting from :class:`base.BaseEstimator` now displays the parameter description as a tooltip and has a link to the online documentation for each parameter. By :user:`Dea María Léon `. :pr:`31564` - |Enhancement| ``sklearn.utils.\_check\_sample\_weight`` now raises a clearer error message when the provided weights are neither a scalar nor a 1-D array-like of the same size as the input data. By :user:`Kapil Parekh `.
:pr:`31873` - |Enhancement| :func:`sklearn.utils.estimator\_checks.parametrize\_with\_checks` now lets you configure strict mode for xfailing checks. Tests that unexpectedly pass will lead to a test failure. The default behaviour is unchanged. By :user:`Tim Head `. :pr:`31951` - |Enhancement| Fixed the alignment of the "?" and "i" symbols and improved the color style of the HTML representation of estimators. By :user:`Guillaume Lemaitre `. :pr:`31969` - |Fix| Changes the way colors are chosen when displaying an estimator as an HTML representation. Colors are no longer adapted to the user's theme, but chosen based on the color scheme (light or dark) declared by the theme for VSCode and JupyterLab. If the theme does not declare a color scheme, the scheme is chosen according to the default text color of the page, falling back to a media query if that fails. By :user:`Matt J. `. :pr:`32330` - |API| :func:`utils.extmath.stable\_cumsum` is deprecated and will be removed in v1.10. Use `np.cumulative\_sum` with the desired dtype directly instead. By :user:`Tiziano Zito `. :pr:`32258` .. rubric:: Code and documentation contributors Thanks to everyone who has contributed to the maintenance and improvement of the project since version 1.7, including: $id, 4hm3d, Acciaro Gennaro Daniele, achyuthan.s, Adam J. Stewart, Adriano Leão, Adrien Linares, Adrin Jalali, Aitsaid Azzedine Idir, Alexander Fabisch, Alexandre Abraham, Andrés H. Zapke, Anne Beyer, Anthony Gitter, AnthonyPrudent, antoinebaker, Arpan Mukherjee, Arthur, Arthur Lacote, Arturo Amor, ayoub.agouzoul, Ayrat, Ayush, Ayush Tanwar, Basile Jezequel, Bhavya Patwa, BRYANT MUSI BABILA, Casey Heath, Chems Ben, Christian Lorentzen, Christian Veenhuis, Christine P. Chai, cstec, C.
Titus Brown, Daniel Herrera-Esposito, Dan Schult, dbXD320, Dea María Léon, Deepyaman Datta, dependabot[bot], Dhyey Findoriya, Dimitri Papadopoulos Orfanos, Dipak Dhangar, Dmitry Kobak, elenafillo, Elham Babaei, EmilyXinyi, Emily (Xinyi) Chen, Eugen-Bleck, Evgeni Burovski, fabarca, Fabrizio Damicelli, Faizan-Ul Huda, François Goupil, François Paugam, Gaetan, GaetandeCast, Gesa Loof, Gonçalo Guiomar, Gordon Grey, Gowtham Kumar K., Guilherme Peixoto, Guillaume Lemaitre, hakan çanakçı, Harshil Sanghvi, Henri Bonamy, Hleb Levitski, HulusiOzy, hvtruong, Ian Faust, Imad Saddik, Jérémie du Boisberranger, Jérôme Dockès, John Hendricks, Joris Van den Bossche, Josef Affourtit, Josh, jshn9515, Junaid, KALLA GANASEKHAR, Kapil Parekh, Kenneth Enevoldsen, Kian Eliasi, kostayScr, Krishnan Vignesh, kryggird, Kyle S, Lakshmi
Krishnan, Leomax, Loic Esteve, Luca Bittarello, Lucas Colley, Lucy Liu, Luigi Giugliano, Luis, Mahdi Abid, Mahi Dhiman, Maitrey Talware, Mamduh Zabidi, Manikandan Gobalakrishnan, Marc Bresson, Marco Edward Gorelli, Marek Pokropiński, Maren Westermann, Marie Sacksick, Marija Vlajic, Matt J., Mayank Raj, Michael Burkhart, Michael Šimáček, Miguel Fernandes, Miro Hrončok, Mohamed DHIFALLAH, Muhammad Waseem, MUHAMMED SINAN D, Natalia Mokeeva, Nicholas Farr, Nicolas Bolle, Nicolas Hug, nithish-74, Nithurshen, Nitin Pratap Singh, NotAceNinja, Olivier Grisel, omahs, Omar Salman, Patrick Walsh, Peter Holzer, pfolch, ph-ll-pp, Prashant Bansal, Quan H. Nguyen, Radovenchyk, Rafael Ayllón Gavilán, Raghvender, Ranjodh Singh, Ravichandranayakar, Remi Gau, Reshama Shaikh, Richard Harris, RishiP2006, Ritvi Alagusankar, Roberto Mourao, Robert Pollak, Roshangoli, roychan, R Sagar Shresti, Sarthak Puri, saskra, scikit-learn-bot, Scott Huberty, Sercan Turkmen, Sergio P, Shashank S, Shaurya Bisht, Shivam, Shruti Nath, SIKAI ZHANG, sisird864, SiyuJin-1, S. M. Mohiuddin Khan Shiam, Somdutta Banerjee, sotagg, Sota Goto, Spencer Bradkin, Stefan, Stefanie Senger, Steffen Rehberg, Steven Hur, Success Moses, Sylvain Combettes, ThibaultDECO, Thomas J. Fan, Thomas Li, Thomas S., Tim Head, Tingwei Zhu, Tiziano Zito, TJ Norred, Username46786, Utsab Dahal, Vasanth K, Veghit, VirenPassi, Virgil Chan, Vivaan Nanavati, Xiao Yuan, xuzhang0327, Yaroslav Halchenko, Yaswanth Kumar, Zijun yi, zodchi94, Zubair Shakoor
https://github.com/scikit-learn/scikit-learn/blob/main//doc/whats_new/v1.8.rst
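The new :class:`manifold.ClassicalMDS` entry above describes the classical (Torgerson) algorithm: an eigendecomposition of the double-centered distance matrix. Below is a minimal NumPy sketch of that textbook procedure, not scikit-learn's implementation; the three-point example data is made up for illustration:

```python
import numpy as np

def classical_mds(D, n_components=2):
    """Classical (Torgerson) MDS from a pairwise distance matrix D.

    Double-center the squared distances, then embed with the top
    eigenvectors of the resulting Gram matrix.
    """
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)  # eigenvalues in ascending order
    idx = np.argsort(eigvals)[::-1][:n_components]
    scale = np.sqrt(np.clip(eigvals[idx], 0.0, None))
    return eigvecs[:, idx] * scale

# Three collinear points: their distances embed exactly in one dimension.
X = np.array([[0.0], [1.0], [3.0]])
D = np.abs(X - X.T)
emb = classical_mds(D, n_components=1)
D_emb = np.abs(emb - emb.T)  # pairwise distances of the embedding match D
```

For Euclidean input distances the reconstruction is exact (up to rotation and sign); for non-Euclidean dissimilarities, negative eigenvalues are clipped, which is where classical MDS becomes an approximation.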
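The temperature scaling method added to :class:`calibration.CalibratedClassifierCV` above rests on a standard idea: divide the logits by one learned scalar temperature before the softmax. The sketch below shows the generic technique with made-up logits, not scikit-learn's implementation (which fits the temperature on held-out data by minimizing the log loss):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # stabilize the exponentials
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def temperature_scale(logits, T):
    # One scalar T > 0 rescales the logits: T > 1 softens overconfident
    # probabilities, T < 1 sharpens them. The argmax (predicted class)
    # is unchanged, only the confidence is recalibrated.
    return softmax(logits / T)

logits = np.array([[2.0, 0.5, -1.0]])     # hypothetical classifier logits
p_sharp = temperature_scale(logits, 0.5)  # more confident
p_soft = temperature_scale(logits, 2.0)   # less confident
```

Because a single scalar is fit, temperature scaling cannot change which class wins, which is exactly why it is a popular post-hoc calibration method.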
- :class:`ensemble.RandomForestClassifier`, :class:`ensemble.RandomForestRegressor`, :class:`ensemble.ExtraTreesClassifier` and :class:`ensemble.ExtraTreesRegressor` now use `sample\_weight` to draw the samples instead of forwarding them multiplied by a uniformly sampled mask to the underlying estimators. Furthermore, when `max\_samples` is a float, it is now interpreted as a fraction of `sample\_weight.sum()` instead of `X.shape[0]`. As sampling is done with replacement, a float `max\_samples` greater than `1.0` is now allowed, as well as an integer `max\_samples` greater than `X.shape[0]`. The default `max\_samples=None` draws `X.shape[0]` samples, irrespective of `sample\_weight`. By :user:`Antoine Baker `.
https://github.com/scikit-learn/scikit-learn/blob/main//doc/whats_new/upcoming_changes/sklearn.ensemble/31529.fix.rst
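The change of `max\_samples` semantics in the note above can be illustrated with plain arithmetic; the weights below are a made-up example, not data from the referenced PRs:

```python
import numpy as np

# Hypothetical per-row weights for n = 4 training rows.
sample_weight = np.array([1.0, 2.0, 2.0, 5.0])
max_samples = 0.5  # a float max_samples

# New interpretation: a fraction of sample_weight.sum() ...
n_draws_new = int(round(max_samples * sample_weight.sum()))
# ... instead of the old interpretation, a fraction of X.shape[0].
n_draws_old = int(round(max_samples * sample_weight.shape[0]))
# With max_samples=None, X.shape[0] samples are drawn regardless of weights.
n_draws_default = sample_weight.shape[0]
```

With heavy-tailed weights the two interpretations can differ substantially (5 draws versus 2 in this example), which is why retrained weighted ensembles may change after upgrading.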
:orphan: .. title:: Testimonials .. \_testimonials: ========================== Who is using scikit-learn? ========================== `J.P.Morgan `\_ ---------------------------------------- .. div:: sk-text-image-grid-large .. div:: text-box Scikit-learn is an indispensable part of the Python machine learning toolkit at JPMorgan. It is very widely used across all parts of the bank for classification, predictive analytics, and very many other machine learning tasks. Its straightforward API, its breadth of algorithms, and the quality of its documentation combine to make scikit-learn simultaneously very approachable and very powerful. .. rst-class:: annotation Stephen Simmons, VP, Athena Research, JPMorgan .. div:: image-box .. image:: images/jpmorgan.png :target: https://www.jpmorgan.com `Spotify `\_ ------------------------------------ .. div:: sk-text-image-grid-large .. div:: text-box Scikit-learn provides a toolbox with solid implementations of a bunch of state-of-the-art models and makes it easy to plug them into existing applications. We've been using it quite a lot for music recommendations at Spotify and I think it's the most well-designed ML package I've seen so far. .. rst-class:: annotation Erik Bernhardsson, Engineering Manager Music Discovery & Machine Learning, Spotify .. div:: image-box .. image:: images/spotify.png :target: https://www.spotify.com `Inria `\_ -------------------------------- .. div:: sk-text-image-grid-large .. div:: text-box At INRIA, we use scikit-learn to support leading-edge basic research in many teams: `Parietal `\_ for neuroimaging, `Lear `\_ for computer vision, `Visages `\_ for medical image analysis, `Privatics `\_ for security. The project is a fantastic tool to address difficult applications of machine learning in an academic environment as it is performant and versatile, but all easy-to-use and well documented, which makes it well suited to grad students. .. rst-class:: annotation Gaël Varoquaux, research at Parietal .. 
div:: image-box .. image:: images/inria.png :target: https://www.inria.fr/ `betaworks `\_ ------------------------------------ .. div:: sk-text-image-grid-large .. div:: text-box Betaworks is a NYC-based startup studio that builds new products, grows companies, and invests in others. Over the past 8 years we've launched a handful of social data analytics-driven services, such as Bitly, Chartbeat, digg and Scale Model. Consistently the betaworks data science team uses Scikit-learn for a variety of tasks. From exploratory analysis, to product development, it is an essential part of our toolkit. Recent uses are included in `digg's new video recommender system `\_, and Poncho's `dynamic heuristic subspace clustering `\_. .. rst-class:: annotation Gilad Lotan, Chief Data Scientist .. div:: image-box .. image:: images/betaworks.png :target: https://betaworks.com `Hugging Face `\_ ---------------------------------------- .. div:: sk-text-image-grid-large .. div:: text-box At Hugging Face we're using NLP and probabilistic models to generate conversational Artificial intelligences that are fun to chat with. Despite using deep neural nets for `a few `\_ of our `NLP tasks `\_, scikit-learn is still the bread-and-butter of our daily machine learning routine. The ease of use and predictability of the interface, as well as the straightforward mathematical explanations that are here when you need them, is the killer feature. We use a variety of scikit-learn models in production and they are also operationally very pleasant to work with. .. rst-class:: annotation Julien Chaumond, Chief Technology Officer .. div:: image-box .. image:: images/huggingface.png :target: https://huggingface.co `Evernote `\_ ---------------------------------- .. div:: sk-text-image-grid-large .. 
div:: text-box Building a classifier is typically an iterative process of exploring the data, selecting the features (the attributes of the data believed to be predictive in some way), training the models, and finally evaluating them. For many of these tasks, we relied on the excellent scikit-learn package for Python.
`Read more `\_ .. rst-class:: annotation Mark Ayzenshtat, VP, Augmented Intelligence .. div:: image-box .. image:: images/evernote.png :target: https://evernote.com `Télécom ParisTech `\_ -------------------------------------------------------- .. div:: sk-text-image-grid-large .. div:: text-box At Telecom ParisTech, scikit-learn is used for hands-on sessions and home assignments in introductory and advanced machine learning courses. The classes are for undergrads and master's students. The great benefit of scikit-learn is its fast learning curve that allows students to quickly start working on interesting and motivating problems. .. rst-class:: annotation Alexandre Gramfort, Assistant Professor .. div:: image-box .. image:: images/telecomparistech.jpg :target: https://www.telecom-paristech.fr/ `Booking.com `\_ ---------------------------------------- .. div:: sk-text-image-grid-large .. div:: text-box At Booking.com, we use machine learning algorithms for many different applications, such as recommending hotels and destinations to our customers, detecting fraudulent reservations, or scheduling our customer service agents. Scikit-learn is one of the tools we use when implementing standard algorithms for prediction tasks. Its API and documentation are excellent and make it easy to use. The scikit-learn developers do a great job of incorporating state of the art implementations and new algorithms into the package. Thus, scikit-learn provides convenient access to a wide spectrum of algorithms, and allows us to readily find the right tool for the right job. .. rst-class:: annotation Melanie Mueller, Data Scientist .. div:: image-box .. image:: images/booking.png :target: https://www.booking.com `AWeber `\_ ----------------------------------- .. div:: sk-text-image-grid-large .. div:: text-box The scikit-learn toolkit is indispensable for the Data Analysis and Management team at AWeber.
It allows us to do AWesome stuff we would not otherwise have the time or resources to accomplish. The documentation is excellent, allowing new engineers to quickly evaluate and apply many different algorithms to our data. The text feature extraction utilities are useful when working with the large volume of email content we have at AWeber. The RandomizedPCA implementation, along with Pipelining and FeatureUnions, allows us to develop complex machine learning algorithms efficiently and reliably. Anyone interested in learning more about how AWeber deploys scikit-learn in a production environment should check out talks from PyData Boston by AWeber's Michael Becker available at https://github.com/mdbecker/pydata\_2013. .. rst-class:: annotation Michael Becker, Software Engineer, Data Analysis and Management Ninjas .. div:: image-box .. image:: images/aweber.png :target: https://www.aweber.com `Yhat `\_ ------------------------------ .. div:: sk-text-image-grid-large .. div:: text-box The combination of consistent APIs, thorough documentation, and top notch implementation make scikit-learn our favorite machine learning package in Python. scikit-learn makes doing advanced analysis in Python accessible to anyone. At Yhat, we make it easy to integrate these models into your production applications. Thus eliminating the unnecessary dev time encountered productionizing analytical work. .. rst-class:: annotation Greg Lamp, Co-founder .. div:: image-box .. image:: images/yhat.png :target: https://www.yhat.com `Rangespan `\_ --------------------------------------- .. div:: sk-text-image-grid-large .. div:: text-box The Python scikit-learn toolkit is a core tool in the data science group at Rangespan. Its large collection of well documented models and algorithms allow our team of data scientists to prototype fast and quickly iterate to find the right solution to our learning problems. 
We find that scikit-learn is not only the right tool for prototyping, but its careful and well tested implementation give us the confidence to run scikit-learn models in production. .. rst-class:: annotation Jurgen Van Gael, Data Science Director .. div:: image-box .. image:: images/rangespan.png :target: http://www.rangespan.com `Birchbox `\_ -------------------------------------- .. div:: sk-text-image-grid-large .. div:: text-box At Birchbox, we face a range of machine learning problems typical to E-commerce: product recommendation, user clustering, inventory prediction, trends detection, etc. Scikit-learn lets us experiment with many models, especially in the exploration phase of a new project: the data can be passed around in a consistent way; models are easy
to save and reuse; updates keep us informed of new developments from the pattern discovery research community. Scikit-learn is an important tool for our team, built the right way in the right language. .. rst-class:: annotation Thierry Bertin-Mahieux, Data Scientist .. div:: image-box .. image:: images/birchbox.jpg :target: https://www.birchbox.com `Bestofmedia Group `\_ ------------------------------------------------- .. div:: sk-text-image-grid-large .. div:: text-box Scikit-learn is our #1 toolkit for all things machine learning at Bestofmedia. We use it for a variety of tasks (e.g. spam fighting, ad click prediction, various ranking models) thanks to the varied, state-of-the-art algorithm implementations packaged into it. In the lab it accelerates prototyping of complex pipelines. In production I can say it has proven to be robust and efficient enough to be deployed for business critical components. .. rst-class:: annotation Eustache Diemert, Lead Scientist .. div:: image-box .. image:: images/bestofmedia-logo.png :target: http://www.bestofmedia.com `Change.org `\_ -------------------------------------- .. div:: sk-text-image-grid-large .. div:: text-box At change.org we automate the use of scikit-learn's RandomForestClassifier in our production systems to drive email targeting that reaches millions of users across the world each week. In the lab, scikit-learn's ease-of-use, performance, and overall variety of algorithms implemented has proved invaluable in giving us a single reliable source to turn to for our machine-learning needs. .. rst-class:: annotation Vijay Ramesh, Software Engineer in Data/science at Change.org ..
div:: image-box .. image:: images/change-logo.png :target: https://www.change.org `PHIMECA Engineering `\_ --------------------------------------------------------- .. div:: sk-text-image-grid-large .. div:: text-box At PHIMECA Engineering, we use scikit-learn estimators as surrogates for expensive-to-evaluate numerical models (mostly but not exclusively finite-element mechanical models) for speeding up the intensive post-processing operations involved in our simulation-based decision making framework. Scikit-learn's fit/predict API together with its efficient cross-validation tools considerably eases the task of selecting the best-fit estimator. We are also using scikit-learn for illustrating concepts in our training sessions. Trainees are always impressed by the ease-of-use of scikit-learn despite the apparent theoretical complexity of machine learning. .. rst-class:: annotation Vincent Dubourg, PHIMECA Engineering, PhD Engineer .. div:: image-box .. image:: images/phimeca.png :target: https://www.phimeca.com/?lang=en `HowAboutWe `\_ ------------------------------------------- .. div:: sk-text-image-grid-large .. div:: text-box At HowAboutWe, scikit-learn lets us implement a wide array of machine learning techniques in analysis and in production, despite having a small team. We use scikit-learn's classification algorithms to predict user behavior, enabling us to (for example) estimate the value of leads from a given traffic source early in the lead's tenure on our site. Also, our users' profiles consist of primarily unstructured data (answers to open-ended questions), so we use scikit-learn's feature extraction and dimensionality reduction tools to translate these unstructured data into inputs for our matchmaking system. .. rst-class:: annotation Daniel Weitzenfeld, Senior Data Scientist at HowAboutWe .. div:: image-box .. 
image:: images/howaboutwe.png :target: https://www.howaboutwe.com/ `PeerIndex `\_ ------------------------------------------------------------------ .. div:: sk-text-image-grid-large .. div:: text-box At PeerIndex we use scientific methodology to build the Influence Graph - a unique dataset that allows us to identify who's really influential and in which context. To do this, we have to tackle a range of machine learning and predictive modeling problems. Scikit-learn has emerged as our primary tool for developing prototypes and making quick progress. From predicting missing data and classifying tweets to clustering communities of social media users, scikit-learn proved useful in a variety of applications. Its very intuitive interface and excellent compatibility with other python tools
makes it an indispensable tool in our daily research efforts. .. rst-class:: annotation Ferenc Huszar, Senior Data Scientist at Peerindex .. div:: image-box .. image:: images/peerindex.png :target: https://www.brandwatch.com/peerindex-and-brandwatch `DataRobot `\_ ---------------------------------------- .. div:: sk-text-image-grid-large .. div:: text-box DataRobot is building next generation predictive analytics software to make data scientists more productive, and scikit-learn is an integral part of our system. The variety of machine learning techniques in combination with the solid implementations that scikit-learn offers makes it a one-stop-shopping library for machine learning in Python. Moreover, its consistent API, well-tested code and permissive licensing allow us to use it in a production environment. Scikit-learn has literally saved us years of work we would have had to do ourselves to bring our product to market. .. rst-class:: annotation Jeremy Achin, CEO & Co-founder DataRobot Inc. .. div:: image-box .. image:: images/datarobot.png :target: https://www.datarobot.com `OkCupid `\_ ------------------------------------- .. div:: sk-text-image-grid-large .. div:: text-box We're using scikit-learn at OkCupid to evaluate and improve our matchmaking system. The range of features it has, especially preprocessing utilities, means we can use it for a wide variety of projects, and it's performant enough to handle the volume of data that we need to sort through. The documentation is really thorough, as well, which makes the library quite easy to use. ..
rst-class:: annotation David Koh - Senior Data Scientist at OkCupid .. div:: image-box .. image:: images/okcupid.png :target: https://www.okcupid.com `Lovely `\_ ----------------------------------- .. div:: sk-text-image-grid-large .. div:: text-box At Lovely, we strive to deliver the best apartment marketplace, with respect to our users and our listings. From understanding user behavior, improving data quality, and detecting fraud, scikit-learn is a regular tool for gathering insights, predictive modeling and improving our product. The easy-to-read documentation and intuitive architecture of the API makes machine learning both explorable and accessible to a wide range of python developers. I'm constantly recommending that more developers and scientists try scikit-learn. .. rst-class:: annotation Simon Frid - Data Scientist, Lead at Lovely .. div:: image-box .. image:: images/lovely.png :target: https://livelovely.com `Data Publica `\_ ---------------------------------------------- .. div:: sk-text-image-grid-large .. div:: text-box Data Publica builds a new predictive sales tool for commercial and marketing teams called C-Radar. We extensively use scikit-learn to build segmentations of customers through clustering, and to predict future customers based on past partnerships success or failure. We also categorize companies using their website communication thanks to scikit-learn and its machine learning algorithm implementations. Eventually, machine learning makes it possible to detect weak signals that traditional tools cannot see. All these complex tasks are performed in an easy and straightforward way thanks to the great quality of the scikit-learn framework. .. rst-class:: annotation Guillaume Lebourgeois & Samuel Charron - Data Scientists at Data Publica .. div:: image-box .. image:: images/datapublica.png :target: http://www.data-publica.com/ `Machinalis `\_ ------------------------------------------- .. div:: sk-text-image-grid-large .. 
div:: text-box Scikit-learn is the cornerstone of all the machine learning projects carried out at Machinalis. It has a consistent API, a wide selection of algorithms and lots of auxiliary tools to deal with the boilerplate. We have used it in production environments on a variety of projects including click-through rate prediction, `information extraction `\_, and even counting sheep! In fact, we use it so much that we've started to freeze our common use cases into Python packages, some of them open-sourced, like `FeatureForge `\_.
Scikit-learn in one word: Awesome. .. rst-class:: annotation Rafael Carrascosa, Lead developer .. div:: image-box .. image:: images/machinalis.png :target: https://www.machinalis.com/ `solido `\_ ----------------------------------------- .. div:: sk-text-image-grid-large .. div:: text-box Scikit-learn is helping to drive Moore's Law, via Solido. Solido creates computer-aided design tools used by the majority of top-20 semiconductor companies and fabs, to design the bleeding-edge chips inside smartphones, automobiles, and more. Scikit-learn helps to power Solido's algorithms for rare-event estimation, worst-case verification, optimization, and more. At Solido, we are particularly fond of scikit-learn's libraries for Gaussian Process models, large-scale regularized linear regression, and classification. Scikit-learn has increased our productivity, because for many ML problems we no longer need to “roll our own” code. `This PyData 2014 talk `\_ has details. .. rst-class:: annotation Trent McConaghy, founder, Solido Design Automation Inc. .. div:: image-box .. image:: images/solido\_logo.png :target: https://www.solidodesign.com/ `INFONEA `\_ ---------------------------------------- .. div:: sk-text-image-grid-large .. div:: text-box We employ scikit-learn for rapid prototyping and custom-made Data Science solutions within our in-memory based Business Intelligence Software INFONEA®. As a well-documented and comprehensive collection of state-of-the-art algorithms and pipelining methods, scikit-learn enables us to provide flexible and scalable scientific analysis solutions.
Thus, scikit-learn is immensely valuable in realizing a powerful integration of Data Science technology within self-service business analytics. .. rst-class:: annotation Thorsten Kranz, Data Scientist, Comma Soft AG. .. div:: image-box .. image:: images/infonea.jpg :target: https://www.infonea.com/en/ `Dataiku `\_ ------------------------------------- .. div:: sk-text-image-grid-large .. div:: text-box Our software, Data Science Studio (DSS), enables users to create data services that combine `ETL `\_ with Machine Learning. Our Machine Learning module integrates many scikit-learn algorithms. The scikit-learn library is a perfect integration with DSS because it offers algorithms for virtually all business cases. Our goal is to offer a transparent and flexible tool that makes it easier to optimize time consuming aspects of building a data service, preparing data, and training machine learning algorithms on all types of data. .. rst-class:: annotation Florian Douetteau, CEO, Dataiku .. div:: image-box .. image:: images/dataiku\_logo.png :target: https://www.dataiku.com/ `Otto Group `\_ -------------------------------------- .. div:: sk-text-image-grid-large .. div:: text-box Here at Otto Group, one of the global Big Five B2C online retailers, we are using scikit-learn in all aspects of our daily work from data exploration to development of machine learning applications to the productive deployment of those services. It helps us to tackle machine learning problems ranging from e-commerce to logistics. Its consistent APIs enabled us to build the `Palladium REST-API framework `\_ around it and continuously deliver scikit-learn based services. .. rst-class:: annotation Christian Rammig, Head of Data Science, Otto Group .. div:: image-box .. image:: images/ottogroup\_logo.png :target: https://ottogroup.com `Zopa `\_ --------------------------- .. div:: sk-text-image-grid-large ..
div:: text-box At Zopa, the first ever Peer-to-Peer lending platform, we extensively use scikit-learn to run the business and optimize our users' experience. It powers our Machine Learning models involved in credit risk, fraud risk, marketing, and pricing, and has been used for originating at least 1 billion GBP worth of Zopa loans. It is very well documented, powerful, and simple to use. We are grateful for the capabilities it has provided, and for allowing us to deliver on our mission of making money simple and fair. .. rst-class:: annotation Vlasios Vasileiou, Head of Data Science, Zopa .. div:: image-box .. image:: images/zopa.png :target: https://zopa.com `MARS `\_ ------------------------------------- .. div::
sk-text-image-grid-large .. div:: text-box Scikit-Learn is integral to the Machine Learning Ecosystem at Mars. Whether we're designing better recipes for petfood or closely analysing our cocoa supply chain, Scikit-Learn is used as a tool for rapidly prototyping ideas and taking them to production. This allows us to better understand and meet the needs of our consumers worldwide. Scikit-Learn's feature-rich toolset is easy to use and equips our associates with the capabilities they need to solve the business challenges they face every day. .. rst-class:: annotation Michael Fitzke, Next Generation Technologies Sr Leader, Mars Inc. .. div:: image-box .. image:: images/mars.png :target: https://www.mars.com/global `BNP Paribas Cardif `\_ --------------------------------------------------------- .. div:: sk-text-image-grid-large .. div:: text-box BNP Paribas Cardif uses scikit-learn for several of its machine learning models in production. Our internal community of developers and data scientists has been using scikit-learn since 2015, for several reasons: the quality of the developments, documentation and contribution governance, and the sheer size of the contributing community. We even explicitly mention the use of scikit-learn's pipelines in our internal model risk governance as one of our good practices to decrease operational risks and overfitting risk. As a way to support open source software development and in particular the scikit-learn project, we have participated in scikit-learn's consortium at La Fondation Inria since its creation in 2018. ..
rst-class:: annotation Sébastien Conort, Chief Data Scientist, BNP Paribas Cardif .. div:: image-box .. image:: images/bnp\_paribas\_cardif.png :target: https://www.bnpparibascardif.com/
The scikit-learn machine learning cheat sheet was originally created by Andreas Mueller:
https://peekaboo-vision.blogspot.de/2013/01/machine-learning-cheat-sheet-for-scikit.html

The current version of the chart is located at `doc/images/ml_map.svg` in SVG+XML format, created using `draw.io <https://draw.io/>`_. To edit the chart, open the file in draw.io, make changes, and save. This should update the chart in-place. Another option would be to re-export the chart as SVG and replace the existing file. The options used for exporting the chart are:

- Zoom: 100%
- Border width: 15
- Size: Diagram
- Transparent Background: False
- Appearance: Light

Note that estimator nodes are clickable and should go to the estimator documentation. After updating or re-exporting the SVG with draw.io, the links may be prefixed with e.g. `https://app.diagrams.net/`. Remember to check and remove them, for instance by replacing all occurrences of `https://app.diagrams.net/./` with `./` with the following command:

.. prompt:: bash

   perl -pi -e 's@https://app.diagrams.net/\./@./@g' doc/images/ml_map.svg
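If perl is not at hand, the same substitution can be sketched with a short Python snippet. This is an illustrative equivalent of the command above, not part of the repository tooling; the SVG fragment in the demo is made up:

```python
import re

def strip_drawio_prefix(svg_text: str) -> str:
    """Replace draw.io-injected link prefixes 'https://app.diagrams.net/./' with './'."""
    return re.sub(r"https://app\.diagrams\.net/\./", "./", svg_text)

# Demo on a minimal, made-up SVG fragment:
fragment = '<a xlink:href="https://app.diagrams.net/./modules/svm.html">'
print(strip_drawio_prefix(fragment))  # -> <a xlink:href="./modules/svm.html">
```

To apply it in place, read `doc/images/ml_map.svg`, pass the text through the function, and write the result back.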
.. _loading_other_datasets:

Loading other datasets
======================

.. currentmodule:: sklearn.datasets

.. _sample_images:

Sample images
-------------

Scikit-learn also embeds a couple of sample JPEG images published under Creative Commons license by their authors. Those images can be useful to test algorithms and pipelines on 2D data.

.. autosummary::

   load_sample_images
   load_sample_image

.. plot::
   :context: close-figs
   :scale: 30
   :align: right
   :include-source: False

   import matplotlib.pyplot as plt
   from sklearn.datasets import load_sample_image

   china = load_sample_image("china.jpg")
   plt.imshow(china)
   plt.axis('off')
   plt.tight_layout()
   plt.show()

.. warning::

   The default coding of images is based on the ``uint8`` dtype to spare memory. Often machine learning algorithms work best if the input is converted to a floating point representation first. Also, if you plan to use ``matplotlib.pyplot.imshow``, don't forget to scale to the range 0 - 1 as done in the following example.

.. _libsvm_loader:

Datasets in svmlight / libsvm format
------------------------------------

scikit-learn includes utility functions for loading datasets in the svmlight / libsvm format. In this format, each line takes the form ``<label> <feature-id>:<value> <feature-id>:<value> ...``. This format is especially suitable for sparse datasets. In this module, scipy sparse CSR matrices are used for ``X`` and numpy arrays are used for ``y``.

You may load a dataset like this as follows::

    >>> from sklearn.datasets import load_svmlight_file
    >>> X_train, y_train = load_svmlight_file("/path/to/train_dataset.txt")
    ...                                                         # doctest: +SKIP

You may also load two (or more) datasets at once::

    >>> X_train, y_train, X_test, y_test = load_svmlight_files(
    ...     ("/path/to/train_dataset.txt", "/path/to/test_dataset.txt"))
    ...                                                         # doctest: +SKIP

In this case, ``X_train`` and ``X_test`` are guaranteed to have the same number of features.
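To make the line format concrete, here is a minimal pure-Python sketch of a parser for a single svmlight line. It is illustrative only (it ignores comments and ``qid`` fields, which the real format supports; use ``load_svmlight_file`` in practice), and the example data is made up:

```python
def parse_svmlight_line(line: str) -> tuple[float, dict[int, float]]:
    """Parse one '<label> <feature-id>:<value> ...' line into (label, sparse features)."""
    parts = line.split()
    label = float(parts[0])
    # Remaining tokens are 'feature-id:value' pairs; absent features are implicitly zero.
    features = {}
    for token in parts[1:]:
        idx, value = token.split(":")
        features[int(idx)] = float(value)
    return label, features

# Two made-up lines in svmlight format: label first, then sparse feature:value pairs.
data = "1 3:0.5 7:1.2\n-1 1:2.0"
parsed = [parse_svmlight_line(line) for line in data.splitlines()]
print(parsed)  # [(1.0, {3: 0.5, 7: 1.2}), (-1.0, {1: 2.0})]
```

The dict-of-indices representation mirrors why the format suits sparse data: only non-zero entries are stored.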
Another way to achieve the same result is to fix the number of features::

    >>> X_test, y_test = load_svmlight_file(
    ...     "/path/to/test_dataset.txt", n_features=X_train.shape[1])
    ...                                                         # doctest: +SKIP

.. rubric:: Related links

- `Public datasets in svmlight / libsvm format`: https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets
- `Faster API-compatible implementation`: https://github.com/mblondel/svmlight-loader

.. For doctests:

    >>> import numpy as np
    >>> import os

.. _openml:

Downloading datasets from the openml.org repository
---------------------------------------------------

`openml.org `_ is a public repository for machine learning data and experiments, that allows everybody to upload open datasets.

The ``sklearn.datasets`` package is able to download datasets from the repository using the function :func:`sklearn.datasets.fetch_openml`.

For example, to download a dataset of gene expressions in mice brains::

    >>> from sklearn.datasets import fetch_openml
    >>> mice = fetch_openml(name='miceprotein', version=4)

To fully specify a dataset, you need to provide a name and a version, though the version is optional, see :ref:`openml_versions` below. The dataset contains a total of 1080 examples belonging to 8 different classes::

    >>> mice.data.shape
    (1080, 77)
    >>> mice.target.shape
    (1080,)
    >>> np.unique(mice.target)
    array(['c-CS-m', 'c-CS-s', 'c-SC-m', 'c-SC-s', 't-CS-m', 't-CS-s', 't-SC-m', 't-SC-s'], dtype=object)

You can get more information on the dataset by looking at the ``DESCR`` and ``details`` attributes::

    >>> print(mice.DESCR) # doctest: +SKIP
    **Author**: Clara Higuera, Katheleen J. Gardiner, Krzysztof J. Cios
    **Source**: [UCI](https://archive.ics.uci.edu/ml/datasets/Mice+Protein+Expression) - 2015
    **Please cite**: Higuera C, Gardiner KJ, Cios KJ (2015) Self-Organizing
    Feature Maps Identify Proteins Critical to Learning in a Mouse Model of Down
    Syndrome. PLoS ONE 10(6): e0129126...
    >>> mice.details # doctest: +SKIP
    {'id': '40966', 'name': 'MiceProtein', 'version': '4', 'format': 'ARFF',
    'upload_date': '2017-11-08T16:00:15', 'licence': 'Public',
    'url': 'https://www.openml.org/data/v1/download/17928620/MiceProtein.arff',
    'file_id': '17928620', 'default_target_attribute': 'class',
    'row_id_attribute': 'MouseID',
    'ignore_attribute': ['Genotype', 'Treatment', 'Behavior'],
    'tag': ['OpenML-CC18', 'study_135', 'study_98', 'study_99'],
    'visibility': 'public', 'status': 'active',
    'md5_checksum': '3c479a6885bfa0438971388283a1ce32'}

The ``DESCR`` contains a free-text description of the data, while ``details`` contains a dictionary of meta-data stored by openml, like the dataset id. For more details, see the `OpenML documentation `_.

The ``data_id`` of the mice protein dataset is 40966, and you can use this (or the name) to get more information on the dataset on the openml website::

    >>> mice.url
    'https://www.openml.org/d/40966'

The ``data_id`` also uniquely identifies a dataset from OpenML::

    >>> mice =
The ``data_id`` also uniquely identifies a dataset from OpenML::

  >>> mice = fetch_openml(data_id=40966)
  >>> mice.details # doctest: +SKIP
  {'id': '4550', 'name': 'MiceProtein', 'version': '1', 'format': 'ARFF',
  'creator': ...,
  'upload_date': '2016-02-17T14:32:49', 'licence': 'Public',
  'url': 'https://www.openml.org/data/v1/download/1804243/MiceProtein.ARFF',
  'file_id': '1804243', 'default_target_attribute': 'class',
  'citation': 'Higuera C, Gardiner KJ, Cios KJ (2015) Self-Organizing Feature
  Maps Identify Proteins Critical to Learning in a Mouse Model of Down
  Syndrome. PLoS ONE 10(6): e0129126. [Web Link] journal.pone.0129126',
  'tag': ['OpenML100', 'study_14', 'study_34'],
  'visibility': 'public', 'status': 'active',
  'md5_checksum': '3c479a6885bfa0438971388283a1ce32'}

.. _openml_versions:

Dataset Versions
~~~~~~~~~~~~~~~~

A dataset is uniquely specified by its ``data_id``, but not necessarily by its
name. Several different "versions" of a dataset with the same name can exist
which can contain entirely different datasets.
If a particular version of a dataset has been found to contain significant
issues, it might be deactivated. Using a name to specify a dataset will yield
the earliest version of a dataset that is still active. That means that
``fetch_openml(name="miceprotein")`` can yield different results at different
times if earlier versions become inactive.
You can see that the dataset with ``data_id`` 40966 that we fetched above is
the first version of the "miceprotein" dataset::

  >>> mice.details['version']  #doctest: +SKIP
  '1'

In fact, this dataset only has one version.
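The name-resolution rule described above (the earliest still-active version wins) can be sketched offline. The `datasets` list and `resolve_by_name` helper below are hypothetical stand-ins for OpenML's metadata and `fetch_openml`'s internal lookup, not the actual API:

```python
# Hypothetical metadata records standing in for OpenML's version listing;
# the "deactivated" entry and its data_id are made up for illustration.
datasets = [
    {"name": "miceprotein", "version": 1, "data_id": 40966, "status": "active"},
    {"name": "iris", "version": 1, "data_id": 61, "status": "active"},
    {"name": "iris", "version": 2, "data_id": 900, "status": "deactivated"},
    {"name": "iris", "version": 3, "data_id": 969, "status": "active"},
]

def resolve_by_name(name):
    """Return the earliest still-active version of a dataset, mimicking what
    happens when only a name (no version) is given."""
    candidates = [d for d in datasets
                  if d["name"] == name and d["status"] == "active"]
    return min(candidates, key=lambda d: d["version"])

print(resolve_by_name("iris")["data_id"])  # 61: version 1 is the earliest active one
```

If version 1 were later deactivated, the same call would silently resolve to version 3, which is why pinning the ``data_id`` is the safest option.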
The iris dataset on the other hand has multiple versions::

  >>> iris = fetch_openml(name="iris")
  >>> iris.details['version']  #doctest: +SKIP
  '1'
  >>> iris.details['id']  #doctest: +SKIP
  '61'

  >>> iris_61 = fetch_openml(data_id=61)
  >>> iris_61.details['version']
  '1'
  >>> iris_61.details['id']
  '61'

  >>> iris_969 = fetch_openml(data_id=969)
  >>> iris_969.details['version']
  '3'
  >>> iris_969.details['id']
  '969'

Specifying the dataset by the name "iris" yields the lowest version, version 1,
with the ``data_id`` 61. To make sure you always get this exact dataset, it is
safest to specify it by the dataset ``data_id``. The other dataset, with
``data_id`` 969, is version 3 (version 2 has become inactive), and contains a
binarized version of the data::

  >>> np.unique(iris_969.target)
  array(['N', 'P'], dtype=object)

You can also specify both the name and the version, which also uniquely
identifies the dataset::

  >>> iris_version_3 = fetch_openml(name="iris", version=3)
  >>> iris_version_3.details['version']
  '3'
  >>> iris_version_3.details['id']
  '969'

.. rubric:: References

* :arxiv:`Vanschoren, van Rijn, Bischl and Torgo. "OpenML: networked science
  in machine learning" ACM SIGKDD Explorations Newsletter, 15(2), 49-60, 2014.
  <1407.7722>`

.. _openml_parser:

ARFF parser
~~~~~~~~~~~

From version 1.2, scikit-learn provides a new keyword argument `parser` that
provides several options to parse the ARFF files provided by OpenML. The
legacy parser (i.e. `parser="liac-arff"`) is based on the project
`LIAC-ARFF `_. This parser is however slow and consumes more memory than
required. A new parser based on pandas (i.e. `parser="pandas"`) is both faster
and more memory efficient. However, this parser does not support sparse data.
Therefore, we recommend using `parser="auto"`, which will use the best parser
available for the requested dataset.

The `"pandas"` and `"liac-arff"` parsers can lead to different data types in
the output.
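The flavor of these dtype differences can be illustrated offline with pandas alone. This is a sketch of the inference behavior, not the actual ARFF parsers; the CSV column names are made up:

```python
import io
import pandas as pd

csv = io.StringIO("label,width\n0,2.5\n1,3.0\n0,1.5\n")

# Roughly like parser="pandas": types are inferred while reading, so
# integer-looking categories come back as integers.
inferred = pd.read_csv(csv)
print(inferred["label"].tolist())

# Roughly like parser="liac-arff": categorical values are kept as str objects.
csv.seek(0)
as_str = pd.read_csv(csv, dtype={"label": str})
print(as_str["label"].tolist())  # ['0', '1', '0']
```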
The notable differences are the following:

- The `"liac-arff"` parser always encodes categorical features as `str`
  objects. To the contrary, the `"pandas"` parser instead infers the type
  while reading and numerical categories will be cast to integers whenever
  possible.
- The `"liac-arff"` parser uses float64 to encode numerical features tagged
  as 'REAL' and 'NUMERICAL' in the metadata. The `"pandas"` parser instead
  infers if these numerical features correspond to integers and uses pandas'
  Integer extension dtype.
- In particular, classification datasets with integer categories are
  typically loaded as such `(0, 1, ...)` with the `"pandas"` parser while
  `"liac-arff"` will force the use of string encoded class labels such as
  `"0"`, `"1"` and so on.
- The `"pandas"` parser will not strip single quotes - i.e. `'` - from string
  columns. For instance, a string `'my string'` will be kept as is while the
  `"liac-arff"` parser will strip the single quotes. For categorical columns,
  the single quotes are stripped from the values.

In addition, when `as_frame=False` is used, the `"liac-arff"` parser returns
ordinally encoded data where the categories are provided in the attribute
`categories` of the `Bunch` instance. Instead, `"pandas"` returns a NumPy
array where the categories are denoted as strings. Then it's up to the user to
design a feature engineering pipeline with an instance of `OneHotEncoder` or
`OrdinalEncoder` typically wrapped in a `ColumnTransformer` to preprocess the
categorical columns explicitly. See for instance:
:ref:`sphx_glr_auto_examples_compose_plot_column_transformer_mixed_types.py`.

.. _external_datasets:

Loading from external datasets
------------------------------

scikit-learn works on any numeric data stored as numpy arrays or scipy sparse
matrices. Other types that are convertible to numeric arrays such as pandas
DataFrame are also acceptable.

Here are some recommended ways to load standard columnar data into a
format usable by scikit-learn:

* `pandas.io `_ provides tools to read data from common formats including
  CSV, Excel, JSON and SQL. DataFrames may also be constructed from lists of
  tuples or dicts. Pandas handles heterogeneous data smoothly and provides
  tools for manipulation and conversion into a numeric array suitable for
  scikit-learn.
* `scipy.io `_ specializes in binary formats often used in scientific
  computing contexts such as .mat and .arff
* `numpy/routines.io `_ for standard loading of columnar data into numpy
  arrays
* scikit-learn's :func:`load_svmlight_file` for the svmlight or libSVM sparse
  format
* scikit-learn's :func:`load_files` for directories of text files where the
  name of each directory is the name of each category and each file inside of
  each directory corresponds to one sample from that category

For some miscellaneous data such as images, videos, and audio, you may wish
to refer to:

* `skimage.io `_ or `Imageio `_ for loading images and videos into numpy
  arrays
* `scipy.io.wavfile.read `_ for reading WAV files into a numpy array

Categorical (or nominal) features stored as strings (common in pandas
DataFrames) will need converting to numerical features using
:class:`~sklearn.preprocessing.OneHotEncoder` or
:class:`~sklearn.preprocessing.OrdinalEncoder` or similar. See
:ref:`preprocessing`.

Note: if you manage your own numerical data it is recommended to use an
optimized file format such as HDF5 to reduce data load times. Various
libraries such as H5Py, PyTables and pandas provide a Python interface for
reading and writing data in that format.
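The string-to-numeric conversion mentioned above can be sketched end to end; the DataFrame columns here are hypothetical stand-ins for data read with `pandas.read_csv`:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder

# Hypothetical heterogeneous data, as it might come out of pandas.read_csv.
df = pd.DataFrame({
    "height": [1.2, 0.8, 1.1],
    "color": ["red", "blue", "red"],
})

# One-hot encode the string column, pass the numeric one through unchanged.
ct = ColumnTransformer(
    [("onehot", OneHotEncoder(), ["color"])],
    remainder="passthrough",
)
X = ct.fit_transform(df)
print(X.shape)  # (3, 3): two one-hot columns plus the numeric column
```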
.. _real_world_datasets:

Real world datasets
===================

.. currentmodule:: sklearn.datasets

scikit-learn provides tools to load larger datasets, downloading them if
necessary.

They can be loaded using the following functions:

.. autosummary::

   fetch_olivetti_faces
   fetch_20newsgroups
   fetch_20newsgroups_vectorized
   fetch_lfw_people
   fetch_lfw_pairs
   fetch_covtype
   fetch_rcv1
   fetch_kddcup99
   fetch_california_housing
   fetch_species_distributions

.. include:: ../../sklearn/datasets/descr/olivetti_faces.rst
.. include:: ../../sklearn/datasets/descr/twenty_newsgroups.rst
.. include:: ../../sklearn/datasets/descr/lfw.rst
.. include:: ../../sklearn/datasets/descr/covtype.rst
.. include:: ../../sklearn/datasets/descr/rcv1.rst
.. include:: ../../sklearn/datasets/descr/kddcup99.rst
.. include:: ../../sklearn/datasets/descr/california_housing.rst
.. include:: ../../sklearn/datasets/descr/species_distributions.rst
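Each of these fetchers returns a :class:`~sklearn.utils.Bunch` with the same general layout as the toy loaders. Because the `fetch_*` functions download data on first call (by default into `~/scikit_learn_data`), the sketch below uses the offline :func:`load_iris` only to show the shared fields:

```python
from sklearn.datasets import load_iris

# fetch_covtype, fetch_california_housing, etc. return the same kind of
# Bunch, but download their data on first use.
bunch = load_iris()
print(bunch.data.shape, bunch.target.shape)  # (150, 4) (150,)
print(bunch.DESCR.splitlines()[0])           # first line of the description
```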
https://github.com/scikit-learn/scikit-learn/blob/main//doc/datasets/real_world.rst
.. _sample_generators:

Generated datasets
==================

.. currentmodule:: sklearn.datasets

In addition, scikit-learn includes various random sample generators that can
be used to build artificial datasets of controlled size and complexity.

Generators for classification and clustering
--------------------------------------------

These generators produce a matrix of features and corresponding discrete
targets.

Single label
~~~~~~~~~~~~

:func:`make_blobs` creates a multiclass dataset by allocating each class to
one normally-distributed cluster of points. It provides control over the
centers and standard deviations of each cluster. This dataset is used to
demonstrate clustering.

.. plot::
   :context: close-figs
   :scale: 70
   :align: center

   import matplotlib.pyplot as plt
   from sklearn.datasets import make_blobs

   X, y = make_blobs(centers=3, cluster_std=0.5, random_state=0)
   plt.scatter(X[:, 0], X[:, 1], c=y)
   plt.title("Three normally-distributed clusters")
   plt.show()

:func:`make_classification` also creates multiclass datasets but specializes
in introducing noise by way of: correlated, redundant and uninformative
features; multiple Gaussian clusters per class; and linear transformations of
the feature space.

.. plot::
   :context: close-figs
   :scale: 70
   :align: center

   import matplotlib.pyplot as plt
   from sklearn.datasets import make_classification

   fig, axs = plt.subplots(1, 3, figsize=(12, 4), sharey=True, sharex=True)
   titles = ["Two classes,\none informative feature,\none cluster per class",
             "Two classes,\ntwo informative features,\ntwo clusters per class",
             "Three classes,\ntwo informative features,\none cluster per class"]
   params = [
       {"n_informative": 1, "n_clusters_per_class": 1, "n_classes": 2},
       {"n_informative": 2, "n_clusters_per_class": 2, "n_classes": 2},
       {"n_informative": 2, "n_clusters_per_class": 1, "n_classes": 3},
   ]
   for i, param in enumerate(params):
       X, Y = make_classification(n_features=2, n_redundant=0, random_state=1,
                                  **param)
       axs[i].scatter(X[:, 0], X[:, 1], c=Y)
       axs[i].set_title(titles[i])
   plt.tight_layout()
   plt.show()

:func:`make_gaussian_quantiles` divides a single Gaussian cluster into
near-equal-size classes separated by concentric hyperspheres.

.. plot::
   :context: close-figs
   :scale: 70
   :align: center

   import matplotlib.pyplot as plt
   from sklearn.datasets import make_gaussian_quantiles

   X, Y = make_gaussian_quantiles(n_features=2, n_classes=3, random_state=0)
   plt.scatter(X[:, 0], X[:, 1], c=Y)
   plt.title("Gaussian divided into three quantiles")
   plt.show()

:func:`make_hastie_10_2` generates a similar binary, 10-dimensional problem.

:func:`make_circles` and :func:`make_moons` generate 2D binary classification
datasets that are challenging to certain algorithms (e.g. centroid-based
clustering or linear classification), including optional Gaussian noise. They
are useful for visualization. :func:`make_circles` produces Gaussian data with
a spherical decision boundary for binary classification, while
:func:`make_moons` produces two interleaving half-circles.

.. plot::
   :context: close-figs
   :scale: 70
   :align: center

   import matplotlib.pyplot as plt
   from sklearn.datasets import make_circles, make_moons

   fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, figsize=(8, 4))

   X, Y = make_circles(noise=0.1, factor=0.3, random_state=0)
   ax1.scatter(X[:, 0], X[:, 1], c=Y)
   ax1.set_title("make_circles")

   X, Y = make_moons(noise=0.1, random_state=0)
   ax2.scatter(X[:, 0], X[:, 1], c=Y)
   ax2.set_title("make_moons")
   plt.tight_layout()
   plt.show()

Multilabel
~~~~~~~~~~

:func:`make_multilabel_classification` generates random samples with multiple
labels, reflecting a bag of words drawn from a mixture of topics. The number
of topics for each document is drawn from a Poisson distribution, and the
topics themselves are drawn from a fixed random distribution. Similarly, the
number of words is drawn from Poisson, with words drawn from a multinomial,
where each topic defines a probability distribution over words.
Simplifications with respect to true bag-of-words mixtures include:

* Per-topic word distributions are independently drawn, where in reality all
  would be affected by a sparse base distribution, and would be correlated.
* For a document generated from multiple topics, all topics are weighted
  equally in generating its bag of words.
* Documents without labels draw words at random, rather than from a base
  distribution.

.. image:: ../auto_examples/datasets/images/sphx_glr_plot_random_multilabel_dataset_001.png
   :target: ../auto_examples/datasets/plot_random_multilabel_dataset.html
   :scale: 50
   :align: center

Biclustering
~~~~~~~~~~~~

.. autosummary::

   make_biclusters
   make_checkerboard

Generators for regression
-------------------------

:func:`make_regression` produces regression targets as an optionally-sparse
random linear combination of random features, with noise. Its informative
features may be uncorrelated, or low rank (few features account for most of
the variance).
https://github.com/scikit-learn/scikit-learn/blob/main//doc/datasets/sample_generators.rst
Other regression generators generate functions deterministically from
randomized features. :func:`make_sparse_uncorrelated` produces a target as a
linear combination of four features with fixed coefficients. Others encode
explicitly non-linear relations: :func:`make_friedman1` is related by
polynomial and sine transforms; :func:`make_friedman2` includes feature
multiplication and reciprocation; and :func:`make_friedman3` is similar with
an arctan transformation on the target.

Generators for manifold learning
--------------------------------

.. autosummary::

   make_s_curve
   make_swiss_roll

Generators for decomposition
----------------------------

.. autosummary::

   make_low_rank_matrix
   make_sparse_coded_signal
   make_spd_matrix
   make_sparse_spd_matrix
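The regression generators above can be sketched in a few lines; the shapes and the ground-truth `coef` returned by :func:`make_regression` are the parts worth noting:

```python
from sklearn.datasets import make_friedman1, make_regression

# Sparse random linear model with Gaussian noise; coef=True also returns
# the ground-truth coefficients (only 3 of the 10 are non-zero here).
X, y, coef = make_regression(n_samples=100, n_features=10, n_informative=3,
                             noise=1.0, coef=True, random_state=0)
print(X.shape, y.shape, coef.shape)  # (100, 10) (100,) (10,)

# Deterministic non-linear target built from the first five features.
X, y = make_friedman1(n_samples=100, n_features=10, random_state=0)
print(X.shape, y.shape)  # (100, 10) (100,)
```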
.. _toy_datasets:

Toy datasets
============

.. currentmodule:: sklearn.datasets

scikit-learn comes with a few small standard datasets that do not require
downloading any file from an external website.

They can be loaded using the following functions:

.. autosummary::

   load_iris
   load_diabetes
   load_digits
   load_linnerud
   load_wine
   load_breast_cancer

These datasets are useful to quickly illustrate the behavior of the various
algorithms implemented in scikit-learn. They are however often too small to
be representative of real world machine learning tasks.

.. include:: ../../sklearn/datasets/descr/iris.rst
.. include:: ../../sklearn/datasets/descr/diabetes.rst
.. include:: ../../sklearn/datasets/descr/digits.rst
.. include:: ../../sklearn/datasets/descr/linnerud.rst
.. include:: ../../sklearn/datasets/descr/wine_data.rst
.. include:: ../../sklearn/datasets/descr/breast_cancer.rst
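All of these loaders accept `return_X_y=True` to get the raw arrays directly instead of a Bunch, which is convenient for quick experiments; a minimal sketch (the choice of estimator is arbitrary):

```python
from sklearn.datasets import load_wine
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Skip the Bunch and grab the arrays directly.
X, y = load_wine(return_X_y=True)
print(X.shape)  # (178, 13)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print(clf.score(X_test, y_test))
```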
https://github.com/scikit-learn/scikit-learn/blob/main//doc/datasets/toy_dataset.rst
{{ objname | escape | underline(line="=") }}

{% if objtype == "module" -%}

.. automodule:: {{ fullname }}

{%- elif objtype == "function" -%}

.. currentmodule:: {{ module }}

.. autofunction:: {{ objname }}

.. minigallery:: {{ module }}.{{ objname }}
   :add-heading: Gallery examples
   :heading-level: -

{%- elif objtype == "class" -%}

.. currentmodule:: {{ module }}

.. autoclass:: {{ objname }}
   :members:
   :inherited-members:
   :special-members: __call__

.. minigallery:: {{ module }}.{{ objname }} {% for meth in methods %}{{ module }}.{{ objname }}.{{ meth }} {% endfor %}
   :add-heading: Gallery examples
   :heading-level: -

{%- else -%}

.. currentmodule:: {{ module }}

.. auto{{ objtype }}:: {{ objname }}

{%- endif -%}
https://github.com/scikit-learn/scikit-learn/blob/main//doc/templates/base.rst
Parallelism, resource management, and configuration
===================================================

.. _parallelism:

Parallelism
-----------

Some scikit-learn estimators and utilities parallelize costly operations
using multiple CPU cores.

Depending on the type of estimator and sometimes the values of the
constructor parameters, this is either done:

- with higher-level parallelism via `joblib `_.
- with lower-level parallelism via OpenMP, used in C or Cython code.
- with lower-level parallelism via BLAS, used by NumPy and SciPy for generic
  operations on arrays.

The `n_jobs` parameter of estimators always controls the amount of
parallelism managed by joblib (processes or threads depending on the joblib
backend). The thread-level parallelism managed by OpenMP in scikit-learn's
own Cython code or by BLAS & LAPACK libraries used by NumPy and SciPy
operations in scikit-learn is always controlled by environment variables or
`threadpoolctl` as explained below. Note that some estimators can leverage
all three kinds of parallelism at different points of their training and
prediction methods.

We describe these 3 types of parallelism in the following subsections in more
detail.

Higher-level parallelism with joblib
....................................

When the underlying implementation uses joblib, the number of workers
(threads or processes) that are spawned in parallel can be controlled via the
``n_jobs`` parameter.

.. note::

   Where (and how) parallelization happens in the estimators using joblib by
   specifying `n_jobs` is currently poorly documented. Please help us by
   improving our docs and tackle `issue 14228 `_!

Joblib is able to support both multi-processing and multi-threading. Whether
joblib chooses to spawn a thread or a process depends on the **backend** that
it's using.

scikit-learn generally relies on the ``loky`` backend, which is joblib's
default backend. Loky is a multi-processing backend.
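Under the hood, `n_jobs` is forwarded to :class:`joblib.Parallel`. The sketch below calls joblib directly to show the worker model; with the default loky backend each task runs in a separate worker process:

```python
from joblib import Parallel, delayed

def costly(x):
    # Stand-in for a costly, independent sub-task, e.g. fitting one
    # cross-validation fold or one tree of a forest.
    return x * x

# n_jobs=2 spawns two workers; results come back in input order.
results = Parallel(n_jobs=2)(delayed(costly)(i) for i in range(8))
print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```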
When doing multi-processing, in order to avoid duplicating the memory in each
process (which isn't reasonable with big datasets), joblib will create a
`memmap `_ that all processes can share, when the data is bigger than 1MB.

In some specific cases (when the code that is run in parallel releases the
GIL), scikit-learn will indicate to ``joblib`` that a multi-threading backend
is preferable.

As a user, you may control the backend that joblib will use (regardless of
what scikit-learn recommends) by using a context manager::

    from joblib import parallel_backend

    with parallel_backend('threading', n_jobs=2):
        # Your scikit-learn code here

Please refer to the `joblib's docs `_ for more details.

In practice, whether parallelism is helpful at improving runtime depends on
many factors. It is usually a good idea to experiment rather than assuming
that increasing the number of workers is always a good thing. In some cases
it can be highly detrimental to performance to run multiple copies of some
estimators or functions in parallel (see :ref:`oversubscription` below).

.. _lower-level-parallelism-with-openmp:

Lower-level parallelism with OpenMP
...................................

OpenMP is used to parallelize code written in Cython or C, relying on
multi-threading exclusively. By default, the implementations using OpenMP
will use as many threads as possible, i.e. as many threads as logical cores.

You can control the exact number of threads that are used either:

- via the ``OMP_NUM_THREADS`` environment variable, for instance when
  running a python script:

  .. prompt:: bash $

     OMP_NUM_THREADS=4 python my_script.py

- or via `threadpoolctl` as explained by `this piece of documentation `_.

Parallel NumPy and SciPy routines from numerical libraries
..........................................................
scikit-learn relies heavily on NumPy and SciPy, which internally call
multi-threaded linear algebra routines (BLAS & LAPACK)
https://github.com/scikit-learn/scikit-learn/blob/main//doc/computing/parallelism.rst
implemented in libraries such as MKL, OpenBLAS or BLIS.

You can control the exact number of threads used by BLAS for each library
using environment variables, namely:

- ``MKL_NUM_THREADS`` sets the number of threads MKL uses,
- ``OPENBLAS_NUM_THREADS`` sets the number of threads OpenBLAS uses, and
- ``BLIS_NUM_THREADS`` sets the number of threads BLIS uses.

Note that BLAS & LAPACK implementations can also be impacted by
`OMP_NUM_THREADS`. To check whether this is the case in your environment, you
can inspect how the number of threads effectively used by those libraries is
affected when running the following command in a bash or zsh terminal for
different values of `OMP_NUM_THREADS`:

.. prompt:: bash $

   OMP_NUM_THREADS=2 python -m threadpoolctl -i numpy scipy

.. note::

   At the time of writing (2022), NumPy and SciPy packages which are
   distributed on pypi.org (i.e. the ones installed via ``pip install``) and
   on the conda-forge channel (i.e. the ones installed via ``conda install
   --channel conda-forge``) are linked with OpenBLAS, while NumPy and SciPy
   packages shipped on the ``defaults`` conda channel from Anaconda.org (i.e.
   the ones installed via ``conda install``) are linked by default with MKL.

.. _oversubscription:

Oversubscription: spawning too many threads
...........................................

It is generally recommended to avoid using significantly more processes or
threads than the number of CPUs on a machine. Over-subscription happens when
a program is running too many threads at the same time.

Suppose you have a machine with 8 CPUs. Consider a case where you're running
a :class:`~sklearn.model_selection.GridSearchCV` (parallelized with joblib)
with ``n_jobs=8`` over a
:class:`~sklearn.ensemble.HistGradientBoostingClassifier` (parallelized with
OpenMP). Each instance of
:class:`~sklearn.ensemble.HistGradientBoostingClassifier` will spawn 8
threads (since you have 8 CPUs).
That's a total of ``8 * 8 = 64`` threads, which leads to oversubscription of
threads for physical CPU resources and thus to scheduling overhead.

Oversubscription can arise in the exact same fashion with parallelized
routines from MKL, OpenBLAS or BLIS that are nested in joblib calls.

Starting from ``joblib >= 0.14``, when the ``loky`` backend is used (which is
the default), joblib will tell its child **processes** to limit the number of
threads they can use, so as to avoid oversubscription. In practice the
heuristic that joblib uses is to tell the processes to use ``max_threads =
n_cpus // n_jobs``, via their corresponding environment variable. Back to our
example from above, since the joblib backend of
:class:`~sklearn.model_selection.GridSearchCV` is ``loky``, each process will
only be able to use 1 thread instead of 8, thus mitigating the
oversubscription issue.

Note that:

- Manually setting one of the environment variables (``OMP_NUM_THREADS``,
  ``MKL_NUM_THREADS``, ``OPENBLAS_NUM_THREADS``, or ``BLIS_NUM_THREADS``)
  will take precedence over what joblib tries to do. The total number of
  threads will be ``n_jobs * <LIB>_NUM_THREADS``. Note that setting this
  limit will also impact your computations in the main process, which will
  only use ``<LIB>_NUM_THREADS``. Joblib exposes a context manager for finer
  control over the number of threads in its workers (see joblib docs linked
  below).
- When joblib is configured to use the ``threading`` backend, there is no
  mechanism to avoid oversubscription when calling into parallel native
  libraries in the joblib-managed threads.
- All scikit-learn estimators that explicitly rely on OpenMP in their Cython
  code always use `threadpoolctl` internally to automatically adapt the
  numbers of threads used by OpenMP and potentially nested BLAS calls so as
  to avoid oversubscription.

You will find additional details about joblib mitigation of oversubscription
in `joblib documentation `_.
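`threadpoolctl` (which scikit-learn depends on) can also be used directly from Python. A sketch of inspecting and capping the native thread pools, assuming NumPy is linked against one of the BLAS implementations above:

```python
import numpy as np
from threadpoolctl import threadpool_info, threadpool_limits

# List the native thread pools (OpenMP, OpenBLAS, MKL, ...) loaded so far.
for pool in threadpool_info():
    print(pool["user_api"], pool.get("num_threads"))

# Temporarily cap every native pool at one thread, e.g. around a code
# section that is already parallelized at a higher level by joblib.
with threadpool_limits(limits=1):
    a = np.random.rand(200, 200)
    b = a @ a  # runs single-threaded regardless of environment variables
```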
You will find additional details about parallelism in numerical python
libraries in `this document from Thomas J. Fan `_.

Configuration switches
----------------------

Python API
..........

:func:`sklearn.set_config` and :func:`sklearn.config_context` can be used to
change parameters of the configuration which control aspects of parallelism.
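For example, the `assume_finite` setting (which skips validation for NaN and inf values) can be changed globally with :func:`sklearn.set_config` or temporarily with the :func:`sklearn.config_context` context manager; a minimal sketch:

```python
import sklearn

# Inspect the current configuration.
print(sklearn.get_config()["assume_finite"])  # False by default

# Temporarily skip finiteness checks; the setting is restored on exit.
with sklearn.config_context(assume_finite=True):
    print(sklearn.get_config()["assume_finite"])  # True inside the block

print(sklearn.get_config()["assume_finite"])  # False again
```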
.. _environment_variable:

Environment variables
.....................

These environment variables should be set before importing scikit-learn.

`SKLEARN_ASSUME_FINITE`
~~~~~~~~~~~~~~~~~~~~~~~

Sets the default value for the `assume_finite` argument of
:func:`sklearn.set_config`.

`SKLEARN_WORKING_MEMORY`
~~~~~~~~~~~~~~~~~~~~~~~~

Sets the default value for the `working_memory` argument of
:func:`sklearn.set_config`.

`SKLEARN_SEED`
~~~~~~~~~~~~~~

Sets the seed of the global random generator when running the tests, for
reproducibility.

Note that scikit-learn tests are expected to run deterministically with
explicit seeding of their own independent RNG instances instead of relying on
the numpy or Python standard library RNG singletons to make sure that test
results are independent of the test execution order. However, some tests
might forget to use explicit seeding and this variable is a way to control
the initial state of the aforementioned singletons.

`SKLEARN_TESTS_GLOBAL_RANDOM_SEED`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Controls the seeding of the random number generator used in tests that rely
on the `global_random_seed` fixture.

All tests that use this fixture accept the contract that they should
deterministically pass for any seed value from 0 to 99 included.

In nightly CI builds, the `SKLEARN_TESTS_GLOBAL_RANDOM_SEED` environment
variable is drawn randomly in the above range and all fixtured tests will run
for that specific seed.
The goal is to ensure that, over time, our CI will run all tests with
different seeds while keeping the test duration of a single run of the full
test suite limited. This will check that the assertions of tests written to
use this fixture are not dependent on a specific seed value.

The range of admissible seed values is limited to [0, 99] because it is often
not possible to write a test that can work for any possible seed and we want
to avoid having tests that randomly fail on the CI.

Valid values for `SKLEARN_TESTS_GLOBAL_RANDOM_SEED`:

- `SKLEARN_TESTS_GLOBAL_RANDOM_SEED="42"`: run tests with a fixed seed of 42
- `SKLEARN_TESTS_GLOBAL_RANDOM_SEED="40-42"`: run the tests with all seeds
  between 40 and 42 included
- `SKLEARN_TESTS_GLOBAL_RANDOM_SEED="all"`: run the tests with all seeds
  between 0 and 99 included. This can take a long time: only use for
  individual tests, not the full test suite!

If the variable is not set, then 42 is used as the global seed in a
deterministic manner. This ensures that, by default, the scikit-learn test
suite is as deterministic as possible to avoid disrupting our friendly
third-party package maintainers. Similarly, this variable should not be set
in the CI config of pull-requests to make sure that our friendly contributors
are not the first people to encounter a seed-sensitivity regression in a test
unrelated to the changes of their own PR. Only the scikit-learn maintainers
who watch the results of the nightly builds are expected to be annoyed by
this.

When writing a new test function that uses this fixture, please use the
following command to make sure that it passes deterministically for all
admissible seeds on your local machine:

.. prompt:: bash $

   SKLEARN_TESTS_GLOBAL_RANDOM_SEED="all" pytest -v -k test_your_test_name

`SKLEARN_SKIP_NETWORK_TESTS`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When this environment variable is set to a non zero value, the tests that
need network access are skipped.
When this environment variable is not set then network tests are skipped.

`SKLEARN_RUN_FLOAT32_TESTS`
~~~~~~~~~~~~~~~~~~~~~~~~~~~

When this environment variable is set to '1', the tests using the `global_dtype` fixture are also run on float32 data. When this environment variable is not set, the tests are only run on float64 data.

`SKLEARN_ENABLE_DEBUG_CYTHON_DIRECTIVES`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When
this environment variable is set to a non zero value, the Cython directive `boundscheck` is set to `True`. This is useful for finding segfaults.

`SKLEARN_BUILD_ENABLE_DEBUG_SYMBOLS`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When this environment variable is set to a non zero value, the debug symbols will be included in the compiled C extensions. Only debug symbols for POSIX systems are configured.

`SKLEARN_PAIRWISE_DIST_CHUNK_SIZE`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This sets the chunk size used by the underlying `PairwiseDistancesReductions` implementations. The default value is `256`, which has been shown to be adequate on most machines. Users looking for the best performance might want to tune this variable using powers of 2 so as to get the best parallelism behavior for their hardware, especially with respect to their caches' sizes.

`SKLEARN_WARNINGS_AS_ERRORS`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This environment variable is used to turn warnings into errors in tests and documentation build.

Some CI (Continuous Integration) builds set `SKLEARN_WARNINGS_AS_ERRORS=1`, for example to make sure that we catch deprecation warnings from our dependencies and that we adapt our code. To locally run with the same "warnings as errors" setting as in these CI builds you can set `SKLEARN_WARNINGS_AS_ERRORS=1`.

By default, warnings are not turned into errors. This is the case if `SKLEARN_WARNINGS_AS_ERRORS` is unset, or `SKLEARN_WARNINGS_AS_ERRORS=0`.
This environment variable uses specific warning filters to ignore some warnings, since sometimes warnings originate from third-party libraries and there is not much we can do about it. You can see the warning filters in the `_get_warnings_filters_info_list` function in `sklearn/utils/_testing.py`.

Note that for documentation build, `SKLEARN_WARNINGS_AS_ERRORS=1` checks that the documentation build, in particular running examples, does not produce any warnings. This is different from the `-W` `sphinx-build` argument that catches syntax warnings in the rst files.
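To recap the switches above, here is a minimal sketch of both configuration mechanisms. The `SKLEARN_ASSUME_FINITE` value is illustrative; note that environment variables only take effect if set before scikit-learn is first imported:

```python
import os

# Environment variables are read when scikit-learn is imported,
# so they must be set beforehand.
os.environ["SKLEARN_ASSUME_FINITE"] = "1"

import sklearn

# The same switches are available at runtime through the Python API.
sklearn.set_config(assume_finite=False)
with sklearn.config_context(assume_finite=True):
    # Finiteness validation is skipped inside this block only.
    print(sklearn.get_config()["assume_finite"])
print(sklearn.get_config()["assume_finite"])
```

`sklearn.get_config` returns the current configuration as a dict, which is a convenient way to verify which settings are in effect.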
.. \_computational\_performance: .. currentmodule:: sklearn Computational Performance ========================= For some applications the performance (mainly latency and throughput at prediction time) of estimators is crucial. It may also be of interest to consider the training throughput but this is often less important in a production setup (where it often takes place offline). We will review here the orders of magnitude you can expect from a number of scikit-learn estimators in different contexts and provide some tips and tricks for overcoming performance bottlenecks. Prediction latency is measured as the elapsed time necessary to make a prediction (e.g. in microseconds). Latency is often viewed as a distribution and operations engineers often focus on the latency at a given percentile of this distribution (e.g. the 90th percentile). Prediction throughput is defined as the number of predictions the software can deliver in a given amount of time (e.g. in predictions per second). An important aspect of performance optimization is also that it can hurt prediction accuracy. Indeed, simpler models (e.g. linear instead of non-linear, or with fewer parameters) often run faster but are not always able to take into account the same exact properties of the data as more complex ones. Prediction Latency ------------------ One of the most straightforward concerns one may have when using/choosing a machine learning toolkit is the latency at which predictions can be made in a production environment. The main factors that influence the prediction latency are 1. Number of features 2. Input data representation and sparsity 3. Model complexity 4. Feature extraction A last major parameter is also the possibility to do predictions in bulk or one-at-a-time mode. Bulk versus Atomic mode ........................ 
In general doing predictions in bulk (many instances at the same time) is more efficient for a number of reasons (branching predictability, CPU cache, linear algebra libraries optimizations etc.). Here we see on a setting with few features that independently of estimator choice the bulk mode is always faster, and for some of them by 1 to 2 orders of magnitude: .. |atomic\_prediction\_latency| image:: ../auto\_examples/applications/images/sphx\_glr\_plot\_prediction\_latency\_001.png :target: ../auto\_examples/applications/plot\_prediction\_latency.html :scale: 80 .. centered:: |atomic\_prediction\_latency| .. |bulk\_prediction\_latency| image:: ../auto\_examples/applications/images/sphx\_glr\_plot\_prediction\_latency\_002.png :target: ../auto\_examples/applications/plot\_prediction\_latency.html :scale: 80 .. centered:: |bulk\_prediction\_latency| To benchmark different estimators for your case you can simply change the ``n\_features`` parameter in this example: :ref:`sphx\_glr\_auto\_examples\_applications\_plot\_prediction\_latency.py`. This should give you an estimate of the order of magnitude of the prediction latency. Configuring Scikit-learn for reduced validation overhead ......................................................... Scikit-learn does some validation on data that increases the overhead per call to ``predict`` and similar functions. In particular, checking that features are finite (not NaN or infinite) involves a full pass over the data. If you ensure that your data is acceptable, you may suppress checking for finiteness by setting the environment variable ``SKLEARN\_ASSUME\_FINITE`` to a non-empty string before importing scikit-learn, or configure it in Python with :func:`set\_config`. For more control than these global settings, a :func:`config\_context` allows you to set this configuration within a specified context:: >>> import sklearn >>> with sklearn.config\_context(assume\_finite=True): ... 
pass  # do learning/prediction here with reduced validation

Note that this will affect all uses of :func:`~utils.assert_all_finite` within the context.

Influence of the Number of Features
...................................

Obviously when the number of features increases so does the memory consumption of each example. Indeed, for a matrix of :math:`M` instances with :math:`N` features, the space complexity is in :math:`O(NM)`. From a computing perspective it also means that the number of basic operations (e.g., multiplications for vector-matrix products in linear models) increases too. Here is a graph of the evolution of the prediction latency with the number of features:

.. |influence_of_n_features_on_latency| image:: ../auto_examples/applications/images/sphx_glr_plot_prediction_latency_003.png
   :target: ../auto_examples/applications/plot_prediction_latency.html
   :scale: 80

.. centered:: |influence_of_n_features_on_latency|

Overall
you can expect the prediction time to increase at least linearly with the number of features (non-linear cases can happen depending on the global memory footprint and estimator).

Influence of the Input Data Representation
..........................................

Scipy provides sparse matrix data structures which are optimized for storing sparse data. The main feature of sparse formats is that you don't store zeros, so if your data is sparse then you use much less memory. A non-zero value in a sparse (`CSR or CSC `_) representation will only take on average one 32-bit integer position + the 64-bit floating point value + an additional 32 bits per row or column in the matrix. Using sparse input on a dense (or sparse) linear model can speed up prediction by quite a bit as only the non-zero valued features impact the dot product and thus the model predictions. Hence if you have 100 non-zeros in a 1e6 dimensional space, you only need 100 multiply and add operations instead of 1e6.

Calculation over a dense representation, however, may leverage highly optimized vector operations and multithreading in BLAS, and tends to result in fewer CPU cache misses. So the sparsity should typically be quite high (10% non-zeros max, to be checked depending on the hardware) for the sparse input representation to be faster than the dense input representation on a machine with many CPUs and an optimized BLAS implementation.
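The multiply-and-add argument above can be sketched with NumPy/SciPy; array sizes are illustrative, and both representations yield the same prediction:

```python
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)
n_features = 100_000

# Dense vector with only 100 non-zero entries.
x = np.zeros(n_features)
nonzero_idx = rng.choice(n_features, size=100, replace=False)
x[nonzero_idx] = rng.normal(size=100)

w = rng.normal(size=n_features)  # coefficients of a dense linear model

x_sparse = sp.csr_matrix(x)       # stores only the 100 non-zero values
dense_pred = x @ w                # ~1e5 multiply-add operations
sparse_pred = x_sparse.dot(w)[0]  # ~100 multiply-add operations

assert np.isclose(dense_pred, sparse_pred)
```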
Here is sample code to test the sparsity of your input::

    def sparsity_ratio(X):
        return 1.0 - np.count_nonzero(X) / float(X.shape[0] * X.shape[1])
    print("input sparsity ratio:", sparsity_ratio(X))

As a rule of thumb you can consider that if the sparsity ratio is greater than 90% you can probably benefit from sparse formats. Check Scipy's sparse matrix formats `documentation `_ for more information on how to build (or convert your data to) sparse matrix formats. Most of the time the ``CSR`` and ``CSC`` formats work best.

Influence of the Model Complexity
.................................

Generally speaking, when model complexity increases, predictive power and latency are supposed to increase. Increasing predictive power is usually interesting, but for many applications we would better not increase prediction latency too much. We will now review this idea for different families of supervised models.

For :mod:`sklearn.linear_model` (e.g. Lasso, ElasticNet, SGDClassifier/Regressor, Ridge & RidgeClassifier, LinearSVC, LogisticRegression...) the decision function that is applied at prediction time is the same (a dot product), so latency should be equivalent.

Here is an example using :class:`~linear_model.SGDClassifier` with the ``elasticnet`` penalty. The regularization strength is globally controlled by the ``alpha`` parameter. With a sufficiently high ``alpha``, one can then increase the ``l1_ratio`` parameter of ``elasticnet`` to enforce various levels of sparsity in the model coefficients. Higher sparsity here is interpreted as less model complexity as we need fewer coefficients to describe it fully. Of course sparsity influences in turn the prediction time as the sparse dot-product takes time roughly proportional to the number of non-zero coefficients.

..
|en\_model\_complexity| image:: ../auto\_examples/applications/images/sphx\_glr\_plot\_model\_complexity\_influence\_001.png :target: ../auto\_examples/applications/plot\_model\_complexity\_influence.html :scale: 80 .. centered:: |en\_model\_complexity| For the :mod:`sklearn.svm` family of algorithms with a non-linear kernel, the latency is tied to the number of support vectors (the fewer the faster). Latency and throughput should (asymptotically) grow linearly with the number of support vectors in an SVC or SVR model. The kernel will also influence the latency as it is used to compute the projection
of the input vector once per support vector. In the following graph the ``nu`` parameter of :class:`~svm.NuSVR` was used to influence the number of support vectors.

.. |nusvr_model_complexity| image:: ../auto_examples/applications/images/sphx_glr_plot_model_complexity_influence_002.png
   :target: ../auto_examples/applications/plot_model_complexity_influence.html
   :scale: 80

.. centered:: |nusvr_model_complexity|

For :mod:`sklearn.ensemble` of trees (e.g. RandomForest, GBT, ExtraTrees, etc.) the number of trees and their depth play the most important role. Latency and throughput should scale linearly with the number of trees. In this case we used directly the ``n_estimators`` parameter of :class:`~ensemble.GradientBoostingRegressor`.

.. |gbt_model_complexity| image:: ../auto_examples/applications/images/sphx_glr_plot_model_complexity_influence_003.png
   :target: ../auto_examples/applications/plot_model_complexity_influence.html
   :scale: 80

.. centered:: |gbt_model_complexity|

In any case be warned that decreasing model complexity can hurt accuracy as mentioned above. For instance a non-linearly separable problem can be handled with a speedy linear model but prediction power will very likely suffer in the process.

Feature Extraction Latency
..........................

Most scikit-learn models are usually pretty fast as they are implemented either with compiled Cython extensions or optimized computing libraries. On the other hand, in many real world applications the feature extraction process (i.e. turning raw data like database rows or network packets into numpy arrays) governs the overall prediction time.
For example on the Reuters text classification task the whole preparation (reading and parsing SGML files, tokenizing the text and hashing it into a common vector space) is taking 100 to 500 times more time than the actual prediction code, depending on the chosen model. .. |prediction\_time| image:: ../auto\_examples/applications/images/sphx\_glr\_plot\_out\_of\_core\_classification\_004.png :target: ../auto\_examples/applications/plot\_out\_of\_core\_classification.html :scale: 80 .. centered:: |prediction\_time| In many cases it is thus recommended to carefully time and profile your feature extraction code as it may be a good place to start optimizing when your overall latency is too slow for your application. Prediction Throughput ---------------------- Another important metric to care about when sizing production systems is the throughput i.e. the number of predictions you can make in a given amount of time. Here is a benchmark from the :ref:`sphx\_glr\_auto\_examples\_applications\_plot\_prediction\_latency.py` example that measures this quantity for a number of estimators on synthetic data: .. |throughput\_benchmark| image:: ../auto\_examples/applications/images/sphx\_glr\_plot\_prediction\_latency\_004.png :target: ../auto\_examples/applications/plot\_prediction\_latency.html :scale: 80 .. centered:: |throughput\_benchmark| These throughputs are achieved on a single process. An obvious way to increase the throughput of your application is to spawn additional instances (usually processes in Python because of the `GIL `\_) that share the same model. One might also add machines to spread the load. A detailed explanation on how to achieve this is beyond the scope of this documentation though. Tips and Tricks ---------------- Linear algebra libraries ......................... As scikit-learn relies heavily on Numpy/Scipy and linear algebra in general it makes sense to take explicit care of the versions of these libraries. 
Basically, you ought to make sure that Numpy is built using an optimized `BLAS `_ / `LAPACK `_ library.

Not all models benefit from optimized BLAS and Lapack implementations. For instance models based on (randomized) decision trees typically do not rely on BLAS calls in their inner loops, nor do kernel SVMs (``SVC``, ``SVR``, ``NuSVC``, ``NuSVR``). On the other hand a linear model implemented with a BLAS DGEMM call (via ``numpy.dot``) will typically benefit hugely from a tuned BLAS implementation and lead to orders of magnitude speedup over a non-optimized BLAS.

You can display the BLAS / LAPACK implementation used by your NumPy / SciPy / scikit-learn install with the following command::

    python -c "import sklearn; sklearn.show_versions()"

Optimized
BLAS / LAPACK implementations include:

- Atlas (need hardware specific tuning by rebuilding on the target machine)
- OpenBLAS
- MKL
- Apple Accelerate and vecLib frameworks (OSX only)

More information can be found on the `NumPy install page `_ and in this `blog post `_ from Daniel Nouri which has some nice step by step install instructions for Debian / Ubuntu.

.. _working_memory:

Limiting Working Memory
.......................

Some calculations when implemented using standard numpy vectorized operations involve using a large amount of temporary memory. This may potentially exhaust system memory. Where computations can be performed in fixed-memory chunks, we attempt to do so, and allow the user to hint at the maximum size of this working memory (defaulting to 1GB) using :func:`set_config` or :func:`config_context`. The following limits temporary working memory to 128 MiB::

    >>> import sklearn
    >>> with sklearn.config_context(working_memory=128):
    ...     pass  # do chunked work here

An example of a chunked operation adhering to this setting is :func:`~metrics.pairwise_distances_chunked`, which facilitates computing row-wise reductions of a pairwise distance matrix.

Model Compression
.................

Model compression in scikit-learn only concerns linear models for the moment. In this context it means that we want to control the model sparsity (i.e. the number of non-zero coordinates in the model vectors). It is generally a good idea to combine model sparsity with sparse input data representation.
Here is sample code that illustrates the use of the ``sparsify()`` method::

    clf = SGDRegressor(penalty='elasticnet', l1_ratio=0.25)
    clf.fit(X_train, y_train).sparsify()
    clf.predict(X_test)

In this example we prefer the ``elasticnet`` penalty as it is often a good compromise between model compactness and prediction power. One can also further tune the ``l1_ratio`` parameter (in combination with the regularization strength ``alpha``) to control this tradeoff.

A typical `benchmark `_ on synthetic data yields a >30% decrease in latency when both the model and input are sparse (with 0.000024 and 0.027400 non-zero coefficients ratio respectively). Your mileage may vary depending on the sparsity and size of your data and model. Furthermore, sparsifying can be very useful to reduce the memory usage of predictive models deployed on production servers.

Model Reshaping
...............

Model reshaping consists of selecting only a portion of the available features to fit a model. In other words, if a model discards features during the learning phase, we can then strip those from the input. This has several benefits. Firstly it reduces the memory (and therefore time) overhead of the model itself. It also allows to discard explicit feature selection components in a pipeline once we know which features to keep from a previous run. Finally, it can help reduce processing time and I/O usage upstream in the data access and feature extraction layers by not collecting and building features that are discarded by the model. For instance if the raw data come from a database, it is possible to write simpler and faster queries or reduce I/O usage by making the queries return lighter records. At the moment, reshaping needs to be performed manually in scikit-learn. In the case of sparse input (particularly in ``CSR`` format), it is generally sufficient to not generate the relevant features, leaving their columns empty.

Links
.....
- :ref:`scikit-learn developer performance documentation ` - `Scipy sparse matrix formats documentation `\_
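As an appendix to the Model Compression section above, here is a self-contained version of the ``sparsify()`` snippet; the data and hyperparameters are illustrative only:

```python
import numpy as np
import scipy.sparse as sp
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 50))
y_train = X_train[:, :5].sum(axis=1)  # only 5 informative features
X_test = rng.normal(size=(10, 50))

clf = SGDRegressor(penalty="elasticnet", alpha=0.1, l1_ratio=0.25)
clf.fit(X_train, y_train).sparsify()

# After sparsify(), coef_ is stored as a scipy.sparse matrix, so
# prediction uses the sparse dot-product discussed above.
assert sp.issparse(clf.coef_)
pred = clf.predict(X_test)
```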
.. \_scaling\_strategies: Strategies to scale computationally: bigger data ================================================= For some applications the amount of examples, features (or both) and/or the speed at which they need to be processed are challenging for traditional approaches. In these cases scikit-learn has a number of options you can consider to make your system scale. Scaling with instances using out-of-core learning -------------------------------------------------- Out-of-core (or "external memory") learning is a technique used to learn from data that cannot fit in a computer's main memory (RAM). Here is a sketch of a system designed to achieve this goal: 1. a way to stream instances 2. a way to extract features from instances 3. an incremental algorithm Streaming instances .................... Basically, 1. may be a reader that yields instances from files on a hard drive, a database, from a network stream etc. However, details on how to achieve this are beyond the scope of this documentation. Extracting features ................... \2. could be any relevant way to extract features among the different :ref:`feature extraction ` methods supported by scikit-learn. However, when working with data that needs vectorization and where the set of features or values is not known in advance one should take explicit care. A good example is text classification where unknown terms are likely to be found during training. It is possible to use a stateful vectorizer if making multiple passes over the data is reasonable from an application point of view. Otherwise, one can turn up the difficulty by using a stateless feature extractor. Currently the preferred way to do this is to use the so-called :ref:`hashing trick` as implemented by :class:`sklearn.feature\_extraction.FeatureHasher` for datasets with categorical variables represented as list of Python dicts or :class:`sklearn.feature\_extraction.text.HashingVectorizer` for text documents. 
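A stateless extractor such as :class:`sklearn.feature_extraction.FeatureHasher` can be sketched as follows; the feature names and `n_features` value are illustrative:

```python
from sklearn.feature_extraction import FeatureHasher

# Stateless: no vocabulary to fit, so keys never seen during training
# pose no problem at prediction time.
hasher = FeatureHasher(n_features=2**10)
batch = [{"user": "alice", "clicks": 3}, {"user": "bob", "clicks": 1}]
X = hasher.transform(batch)  # sparse matrix of shape (2, 1024)
```

Each dict in the stream is hashed into a fixed-width sparse row, so the output dimensionality is known in advance regardless of the input.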
Incremental learning
....................

Finally, for 3. we have a number of options inside scikit-learn. Although not all algorithms can learn incrementally (i.e. without seeing all the instances at once), all estimators implementing the ``partial_fit`` API are candidates. Actually, the ability to learn incrementally from a mini-batch of instances (sometimes called "online learning") is key to out-of-core learning as it guarantees that at any given time there will be only a small amount of instances in the main memory. Choosing a good size for the mini-batch that balances relevancy and memory footprint could involve some tuning [1]_.

Here is a list of incremental estimators for different tasks:

- Classification
    + :class:`sklearn.naive_bayes.MultinomialNB`
    + :class:`sklearn.naive_bayes.BernoulliNB`
    + :class:`sklearn.linear_model.Perceptron`
    + :class:`sklearn.linear_model.SGDClassifier`
    + :class:`sklearn.neural_network.MLPClassifier`
- Regression
    + :class:`sklearn.linear_model.SGDRegressor`
    + :class:`sklearn.neural_network.MLPRegressor`
- Clustering
    + :class:`sklearn.cluster.MiniBatchKMeans`
    + :class:`sklearn.cluster.Birch`
- Decomposition / feature Extraction
    + :class:`sklearn.decomposition.MiniBatchDictionaryLearning`
    + :class:`sklearn.decomposition.IncrementalPCA`
    + :class:`sklearn.decomposition.LatentDirichletAllocation`
    + :class:`sklearn.decomposition.MiniBatchNMF`
- Preprocessing
    + :class:`sklearn.preprocessing.StandardScaler`
    + :class:`sklearn.preprocessing.MinMaxScaler`
    + :class:`sklearn.preprocessing.MaxAbsScaler`

For classification, a somewhat important thing to note is that although a stateless feature extraction routine may be able to cope with new/unseen attributes, the incremental learner itself may be unable to cope with new/unseen target classes. In this case you have to pass all the possible classes to the first ``partial_fit`` call using the ``classes=`` parameter.
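The ``partial_fit`` loop with an upfront ``classes=`` declaration can be sketched as follows; the synthetic mini-batches stand in for a real instance stream:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
classes = np.array([0, 1, 2])  # all classes, declared upfront
clf = SGDClassifier(random_state=0)

for _ in range(5):  # stream of mini-batches
    X_batch = rng.normal(size=(50, 10))
    y_batch = rng.integers(0, 3, size=50)
    # classes= is required on the first call; on later calls it is
    # optional but must match if given.
    clf.partial_fit(X_batch, y_batch, classes=classes)

pred = clf.predict(rng.normal(size=(4, 10)))
```

Only one mini-batch is ever held in memory, which is the point of the out-of-core approach.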
Another aspect to consider when choosing a proper algorithm is that not all of them put the same importance on each example over time. Namely, the ``Perceptron`` is still sensitive to badly labeled examples even after many examples whereas the ``SGD\*`` family is more robust to this kind of artifacts. Conversely, the latter also tend to give less importance to remarkably different, yet properly labeled examples when they come late in the stream as their learning rate decreases over time. Examples .......... Finally, we have a full-fledged example of :ref:`sphx\_glr\_auto\_examples\_applications\_plot\_out\_of\_core\_classification.py`. It is aimed at providing a starting point for people wanting to build out-of-core learning systems and demonstrates most
of the notions discussed above. Furthermore, it also shows the evolution of the performance of different algorithms with the number of processed examples.

.. |accuracy_over_time| image:: ../auto_examples/applications/images/sphx_glr_plot_out_of_core_classification_001.png
   :target: ../auto_examples/applications/plot_out_of_core_classification.html
   :scale: 80

.. centered:: |accuracy_over_time|

Now looking at the computation time of the different parts, we see that the vectorization is much more expensive than learning itself. Of the different algorithms, ``MultinomialNB`` is the most expensive, but its overhead can be mitigated by increasing the size of the mini-batches (exercise: change ``minibatch_size`` to 100 and 10000 in the program and compare).

.. |computation_time| image:: ../auto_examples/applications/images/sphx_glr_plot_out_of_core_classification_003.png
   :target: ../auto_examples/applications/plot_out_of_core_classification.html
   :scale: 80

.. centered:: |computation_time|

Notes
.....

.. [1] Depending on the algorithm the mini-batch size can influence results or not. SGD*, and discrete NaiveBayes are truly online and are not affected by batch size. Conversely, MiniBatchKMeans convergence rate is affected by the batch size. Also, its memory footprint can vary dramatically with batch size.
.. \_kernel\_approximation: Kernel Approximation ==================== This submodule contains functions that approximate the feature mappings that correspond to certain kernels, as they are used for example in support vector machines (see :ref:`svm`). The following feature functions perform non-linear transformations of the input, which can serve as a basis for linear classification or other algorithms. .. currentmodule:: sklearn.linear\_model The advantage of using approximate explicit feature maps compared to the `kernel trick `\_, which makes use of feature maps implicitly, is that explicit mappings can be better suited for online learning and can significantly reduce the cost of learning with very large datasets. Standard kernelized SVMs do not scale well to large datasets, but using an approximate kernel map it is possible to use much more efficient linear SVMs. In particular, the combination of kernel map approximations with :class:`SGDClassifier` can make non-linear learning on large datasets possible. Since there has not been much empirical work using approximate embeddings, it is advisable to compare results against exact kernel methods when possible. .. seealso:: :ref:`polynomial\_regression` for an exact polynomial transformation. .. currentmodule:: sklearn.kernel\_approximation .. \_nystroem\_kernel\_approx: Nystroem Method for Kernel Approximation ---------------------------------------- The Nystroem method, as implemented in :class:`Nystroem` is a general method for reduced rank approximations of kernels. It achieves this by subsampling without replacement rows/columns of the data on which the kernel is evaluated. 
While the computational complexity of the exact method is
:math:`\mathcal{O}(n^3_{\text{samples}})`, the complexity of the
approximation is
:math:`\mathcal{O}(n^2_{\text{components}} \cdot n_{\text{samples}})`, where
one can set :math:`n_{\text{components}} \ll n_{\text{samples}}` without a
significant decrease in performance [WS2001]_.

We can construct the eigendecomposition of the kernel matrix :math:`K`, based
on the features of the data, and then split it into sampled and unsampled
data points.

.. math::

   K = U \Lambda U^T
     = \begin{bmatrix} U_1 \\ U_2 \end{bmatrix} \Lambda \begin{bmatrix} U_1 \\ U_2 \end{bmatrix}^T
     = \begin{bmatrix} U_1 \Lambda U_1^T & U_1 \Lambda U_2^T \\ U_2 \Lambda U_1^T & U_2 \Lambda U_2^T \end{bmatrix}
     \equiv \begin{bmatrix} K_{11} & K_{12} \\ K_{21} & K_{22} \end{bmatrix}

where:

* :math:`U` is orthonormal
* :math:`\Lambda` is the diagonal matrix of eigenvalues
* :math:`U_1` is the orthonormal matrix of samples that were chosen
* :math:`U_2` is the orthonormal matrix of samples that were not chosen

Given that :math:`U_1 \Lambda U_1^T` can be obtained by orthonormalization of
the matrix :math:`K_{11}`, and :math:`U_2 \Lambda U_1^T` can be evaluated (as
well as its transpose), the only remaining term to elucidate is
:math:`U_2 \Lambda U_2^T`. To do this we can express it in terms of the
already evaluated matrices:

.. math::

   \begin{align}
   U_2 \Lambda U_2^T &= \left(K_{21} U_1 \Lambda^{-1}\right) \Lambda \left(K_{21} U_1 \Lambda^{-1}\right)^T
   \\&= K_{21} U_1 (\Lambda^{-1} \Lambda) \Lambda^{-1} U_1^T K_{21}^T
   \\&= K_{21} U_1 \Lambda^{-1} U_1^T K_{21}^T
   \\&= K_{21} K_{11}^{-1} K_{21}^T
   \\&= \left( K_{21} K_{11}^{-\frac12} \right) \left( K_{21} K_{11}^{-\frac12} \right)^T
   .\end{align}

During ``fit``, the class :class:`Nystroem` evaluates the basis :math:`U_1`,
and computes the normalization constant :math:`K_{11}^{-\frac12}`.
Later, during ``transform``, the kernel matrix is determined between the
basis (given by the ``components_`` attribute) and the new data points,
``X``. This matrix is then multiplied by the ``normalization_`` matrix for
the final result.

By default :class:`Nystroem` uses the ``rbf`` kernel, but it can use any
kernel function or a precomputed kernel matrix. The number of samples used -
which is also the dimensionality of the features computed - is given by the
parameter ``n_components``.

.. rubric:: Examples

* See the example entitled
  :ref:`sphx_glr_auto_examples_applications_plot_cyclical_feature_engineering.py`,
  which shows an efficient machine learning pipeline that uses a
  :class:`Nystroem` kernel.
* See :ref:`sphx_glr_auto_examples_miscellaneous_plot_kernel_approximation.py`
  for a comparison of the :class:`Nystroem` kernel with :class:`RBFSampler`.

.. _rbf_kernel_approx:

Radial Basis Function Kernel
----------------------------

The :class:`RBFSampler` constructs an approximate mapping for the radial
basis function kernel, also known as *Random Kitchen Sinks* [RR2007]_.
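In practice the fitted map is simply placed in front of a linear model. A
minimal sketch, where the ``gamma`` and ``n_components`` values and the toy
dataset are arbitrary choices for illustration:

```python
from sklearn.datasets import make_classification
from sklearn.kernel_approximation import Nystroem
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import make_pipeline

# toy data; any (n_samples, n_features) array works
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# approximate RBF feature map using 100 landmark samples
feature_map = Nystroem(kernel="rbf", gamma=0.2, n_components=100,
                       random_state=0)

# linear classifier trained in the approximate feature space
clf = make_pipeline(feature_map, SGDClassifier(max_iter=1000, random_state=0))
clf.fit(X, y)
train_acc = clf.score(X, y)
```

The resulting pipeline is a non-linear classifier whose training cost scales
like that of a linear model.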
https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/kernel_approximation.rst
This transformation can be used to explicitly model a kernel map, prior to
applying a linear algorithm, for example a linear SVM::

    >>> from sklearn.kernel_approximation import RBFSampler
    >>> from sklearn.linear_model import SGDClassifier
    >>> X = [[0, 0], [1, 1], [1, 0], [0, 1]]
    >>> y = [0, 0, 1, 1]
    >>> rbf_feature = RBFSampler(gamma=1, random_state=1)
    >>> X_features = rbf_feature.fit_transform(X)
    >>> clf = SGDClassifier(max_iter=5)
    >>> clf.fit(X_features, y)
    SGDClassifier(max_iter=5)
    >>> clf.score(X_features, y)
    1.0

The mapping relies on a Monte Carlo approximation to the kernel values. The
``fit`` function performs the Monte Carlo sampling, whereas the ``transform``
method performs the mapping of the data. Because of the inherent randomness
of the process, results may vary between different calls to the ``fit``
function.

Two parameters control the approximation: ``n_components``, which is the
target dimensionality of the feature transform, and ``gamma``, the parameter
of the RBF kernel. A higher ``n_components`` will result in a better
approximation of the kernel and will yield results more similar to those
produced by a kernel SVM. Note that "fitting" the feature function does not
actually depend on the data given to the ``fit`` function. Only the
dimensionality of the data is used. Details on the method can be found in
[RR2007]_.
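The effect of ``n_components`` on approximation quality can be seen by
comparing inner products of the mapped features against the exact kernel
values from :func:`sklearn.metrics.pairwise.rbf_kernel`. A sketch, with
arbitrary data and ``gamma``:

```python
import numpy as np
from sklearn.kernel_approximation import RBFSampler
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.RandomState(0)
X = rng.randn(50, 4)

# exact RBF kernel matrix
K_exact = rbf_kernel(X, gamma=0.5)

# mean absolute error of Z @ Z.T for two feature-space sizes
errors = {}
for n in (100, 2000):
    Z = RBFSampler(gamma=0.5, n_components=n,
                   random_state=0).fit_transform(X)
    errors[n] = np.abs(Z @ Z.T - K_exact).mean()
```

With more Monte Carlo features the approximate Gram matrix moves closer to
the exact one.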
For a given value of ``n_components`` :class:`RBFSampler` is often less
accurate than :class:`Nystroem`. :class:`RBFSampler` is cheaper to compute,
though, making use of larger feature spaces more efficient.

.. figure:: ../auto_examples/miscellaneous/images/sphx_glr_plot_kernel_approximation_002.png
   :target: ../auto_examples/miscellaneous/plot_kernel_approximation.html
   :scale: 50%
   :align: center

   Comparing an exact RBF kernel (left) with the approximation (right)

.. rubric:: Examples

* See :ref:`sphx_glr_auto_examples_miscellaneous_plot_kernel_approximation.py`
  for a comparison of the :class:`Nystroem` kernel with :class:`RBFSampler`.

.. _additive_chi_kernel_approx:

Additive Chi Squared Kernel
---------------------------

The additive chi squared kernel is a kernel on histograms, often used in
computer vision. The additive chi squared kernel as used here is given by

.. math::

   k(x, y) = \sum_i \frac{2x_iy_i}{x_i+y_i}

This is not exactly the same as
:func:`sklearn.metrics.pairwise.additive_chi2_kernel`. The authors of
[VZ2010]_ prefer the version above as it is always positive definite. Since
the kernel is additive, it is possible to treat all components :math:`x_i`
separately for embedding. This makes it possible to sample the Fourier
transform at regular intervals, instead of approximating using Monte Carlo
sampling.

The class :class:`AdditiveChi2Sampler` implements this component-wise
deterministic sampling. Each component is sampled :math:`n` times, yielding
:math:`2n+1` dimensions per input dimension (the factor of two stems from the
real and imaginary parts of the Fourier transform). In the literature,
:math:`n` is usually chosen to be 1 or 2, transforming the dataset to size
``n_samples * 5 * n_features`` (in the case of :math:`n=2`).
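A quick way to see the dimensionality expansion is to transform a small
histogram-like matrix. This is a sketch with arbitrary non-negative data; in
scikit-learn the number of sampling points is set via the ``sample_steps``
parameter, and each input feature is widened into several deterministic
Fourier-sampled components:

```python
import numpy as np
from sklearn.kernel_approximation import AdditiveChi2Sampler

rng = np.random.RandomState(0)
X = rng.rand(4, 3)  # non-negative, histogram-like input

sampler = AdditiveChi2Sampler(sample_steps=2)
X_t = sampler.fit_transform(X)

# X_t has the same number of rows, but each of the 3 input features
# has been expanded into a fixed-size block of columns
```

Because the sampling is deterministic, repeated calls to ``fit_transform``
give identical results, unlike the Monte Carlo samplers above.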
The approximate feature map provided by :class:`AdditiveChi2Sampler` can be
combined with the approximate feature map provided by :class:`RBFSampler` to
yield an approximate feature map for the exponentiated chi squared kernel.
See [VZ2010]_ for details and [VVZ2010]_ for combination with the
:class:`RBFSampler`.

.. _skewed_chi_kernel_approx:

Skewed Chi Squared Kernel
-------------------------

The skewed chi squared kernel is given by:

.. math::

   k(x,y) = \prod_i \frac{2\sqrt{x_i+c}\sqrt{y_i+c}}{x_i + y_i + 2c}

It has properties that are similar to the exponentiated chi squared kernel
often used in computer vision, but allows for a simple Monte Carlo
approximation of the feature map.

The usage of the :class:`SkewedChi2Sampler` is the same as the usage
described above for the :class:`RBFSampler`. The only difference is in the
free parameter, which is called :math:`c`.
For a motivation for this mapping and the mathematical details see
[LS2010]_.

.. _polynomial_kernel_approx:

Polynomial Kernel Approximation via Tensor Sketch
-------------------------------------------------

The polynomial kernel is a popular type of kernel function given by:

.. math::

   k(x, y) = (\gamma x^\top y + c_0)^d

where:

* :math:`x`, :math:`y` are the input vectors
* :math:`d` is the kernel degree
* :math:`\gamma` is a scale parameter and :math:`c_0` a constant offset

Intuitively, the feature space of the polynomial kernel of degree :math:`d`
consists of all possible degree-:math:`d` products among input features,
which enables learning algorithms using this kernel to account for
interactions between features.

The TensorSketch [PP2013]_ method, as implemented in
:class:`PolynomialCountSketch`, is a scalable, input data independent method
for polynomial kernel approximation. It is based on the concept of Count
sketch [WIKICS]_ [CCF2002]_, a dimensionality reduction technique similar to
feature hashing, which instead uses several independent hash functions.
TensorSketch obtains a Count Sketch of the outer product of two vectors (or a
vector with itself), which can be used as an approximation of the polynomial
kernel feature space. In particular, instead of explicitly computing the
outer product, TensorSketch computes the Count Sketch of the vectors and then
uses polynomial multiplication via the Fast Fourier Transform to compute the
Count Sketch of their outer product.

Conveniently, the training phase of TensorSketch simply consists of
initializing some random variables. It is thus independent of the input data,
i.e. it only depends on the number of input features, but not the data
values.
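As with the other samplers, the quality of the sketch can be judged by
comparing inner products of the mapped features against the exact polynomial
kernel. A sketch with arbitrary data and parameter values:

```python
import numpy as np
from sklearn.kernel_approximation import PolynomialCountSketch
from sklearn.metrics.pairwise import polynomial_kernel

rng = np.random.RandomState(0)
X = rng.randn(40, 5)

# exact degree-2 polynomial kernel
K_exact = polynomial_kernel(X, degree=2, gamma=1.0, coef0=0)

# TensorSketch approximation: Gram matrix of the sketched features
ps = PolynomialCountSketch(degree=2, gamma=1.0, coef0=0,
                           n_components=500, random_state=0)
Z = ps.fit_transform(X)
K_approx = Z @ Z.T

# relative mean absolute error of the approximation
rel_err = np.abs(K_approx - K_exact).mean() / np.abs(K_exact).mean()
```

Since fitting only initializes random hash functions, the same fitted sketch
can be reused on any data with the same number of features.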
In addition, this method can transform samples in
:math:`\mathcal{O}(n_{\text{samples}}(n_{\text{features}} + n_{\text{components}} \log(n_{\text{components}})))`
time, where :math:`n_{\text{components}}` is the desired output dimension,
determined by ``n_components``.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_kernel_approximation_plot_scalable_poly_kernels.py`

.. _tensor_sketch_kernel_approx:

Mathematical Details
--------------------

Kernel methods like support vector machines or kernelized PCA rely on a
property of reproducing kernel Hilbert spaces. For any positive definite
kernel function :math:`k` (a so-called Mercer kernel), it is guaranteed that
there exists a mapping :math:`\phi` into a Hilbert space
:math:`\mathcal{H}`, such that

.. math::

   k(x,y) = \langle \phi(x), \phi(y) \rangle

where :math:`\langle \cdot, \cdot \rangle` denotes the inner product in the
Hilbert space.

If an algorithm, such as a linear support vector machine or PCA, relies only
on the scalar product of data points :math:`x_i`, one may use the value of
:math:`k(x_i, x_j)`, which corresponds to applying the algorithm to the
mapped data points :math:`\phi(x_i)`. The advantage of using :math:`k` is
that the mapping :math:`\phi` never has to be calculated explicitly,
allowing for arbitrarily large (even infinite-dimensional) feature spaces.

One drawback of kernel methods is that it might be necessary to store many
kernel values :math:`k(x_i, x_j)` during optimization. If a kernelized
classifier is applied to new data :math:`y_j`, :math:`k(x_i, y_j)` needs to
be computed to make predictions, possibly for many different :math:`x_i` in
the training set.

The classes in this submodule make it possible to approximate the embedding
:math:`\phi`, thereby working explicitly with the representations
:math:`\phi(x_i)`, which obviates the need to apply the kernel or store
training examples.

.. rubric:: References

..
   [WS2001] "Using the Nyström method to speed up kernel machines"
   Williams, C.K.I. and Seeger, M. - 2001.

.. [RR2007] "Random features for large-scale kernel machines"
   Rahimi, A. and Recht, B. - Advances in Neural Information Processing
   Systems 2007.

.. [LS2010] "Random Fourier approximations for skewed multiplicative
   histogram kernels" Li, F., Ionescu, C., and Sminchisescu, C. - Pattern
   Recognition, DAGM 2010, Lecture Notes in Computer Science.
.. [VZ2010] "Efficient additive kernels via explicit feature maps"
   Vedaldi, A. and Zisserman, A. - Computer Vision and Pattern Recognition
   2010.

.. [VVZ2010] "Generalized RBF feature maps for Efficient Detection"
   Vempati, S., Vedaldi, A., Zisserman, A., and Jawahar, C.V. - 2010.

.. [PP2013] :doi:`"Fast and scalable polynomial kernels via explicit feature
   maps" <10.1145/2487575.2487591>` Pham, N. and Pagh, R. - 2013.

.. [CCF2002] "Finding frequent items in data streams"
   Charikar, M., Chen, K., and Farach-Colton, M. - 2002.

.. [WIKICS] "Wikipedia: Count sketch"
.. _combining_estimators:

==================================
Pipelines and composite estimators
==================================

To build a composite estimator, transformers are usually combined with other
transformers or with :term:`predictors` (such as classifiers or regressors).
The most common tool used for composing estimators is a
:ref:`Pipeline <pipeline>`. Pipelines require all steps except the last to be
a :term:`transformer`. The last step can be anything: a transformer, a
:term:`predictor`, or a clustering estimator, which may or may not have a
`.predict(...)` method. A pipeline exposes all methods provided by the last
estimator: if the last step provides a `transform` method, then the pipeline
has a `transform` method and behaves like a transformer. If the last step
provides a `predict` method, then the pipeline exposes that method, and,
given data :term:`X`, uses all steps except the last to transform the data,
and then gives that transformed data to the `predict` method of the last
step of the pipeline.

The class :class:`Pipeline` is often used in combination with
:ref:`ColumnTransformer <column_transformer>` or
:ref:`FeatureUnion <feature_union>`, which concatenate the output of
transformers into a composite feature space.
:ref:`TransformedTargetRegressor <transformed_target_regressor>` deals with
transforming the :term:`target` (i.e. log-transforming :term:`y`).

.. _pipeline:

Pipeline: chaining estimators
=============================

.. currentmodule:: sklearn.pipeline

:class:`Pipeline` can be used to chain multiple estimators into one. This is
useful as there is often a fixed sequence of steps in processing the data,
for example feature selection, normalization and classification.
:class:`Pipeline` serves multiple purposes here:

Convenience and encapsulation
    You only have to call :term:`fit` and :term:`predict` once on your data
    to fit a whole sequence of estimators.

Joint parameter selection
    You can grid search over parameters of all estimators in the pipeline at
    once.
Safety
    Pipelines help avoid leaking statistics from your test data into the
    trained model in cross-validation, by ensuring that the same samples are
    used to train the transformers and predictors.

All estimators in a pipeline, except the last one, must be transformers (i.e.
must have a :term:`transform` method). The last estimator may be of any type
(transformer, classifier, etc.).

.. note::

    Calling ``fit`` on the pipeline is the same as calling ``fit`` on each
    estimator in turn, transforming the input and passing it on to the next
    step. The pipeline has all the methods that the last estimator in the
    pipeline has, i.e. if the last estimator is a classifier, the
    :class:`Pipeline` can be used as a classifier. If the last estimator is
    a transformer, again, so is the pipeline.

Usage
-----

Build a pipeline
................

The :class:`Pipeline` is built using a list of ``(key, value)`` pairs, where
the ``key`` is a string containing the name you want to give this step and
``value`` is an estimator object::

    >>> from sklearn.pipeline import Pipeline
    >>> from sklearn.svm import SVC
    >>> from sklearn.decomposition import PCA
    >>> estimators = [('reduce_dim', PCA()), ('clf', SVC())]
    >>> pipe = Pipeline(estimators)
    >>> pipe
    Pipeline(steps=[('reduce_dim', PCA()), ('clf', SVC())])

.. dropdown:: Shorthand version using :func:`make_pipeline`

    The utility function :func:`make_pipeline` is a shorthand for
    constructing pipelines; it takes a variable number of estimators and
    returns a pipeline, filling in the names automatically::

        >>> from sklearn.pipeline import make_pipeline
        >>> make_pipeline(PCA(), SVC())
        Pipeline(steps=[('pca', PCA()), ('svc', SVC())])

Access pipeline steps
.....................

The estimators of a pipeline are stored as a list in the ``steps``
attribute. A sub-pipeline can be extracted using the slicing notation
commonly used for Python sequences such as lists or strings (although only a
step of 1 is permitted).
This is convenient for performing only some of the transformations (or their
inverse)::

    >>> pipe[:1]
    Pipeline(steps=[('reduce_dim', PCA())])
    >>> pipe[-1:]
    Pipeline(steps=[('clf', SVC())])

.. dropdown:: Accessing a step by name or position
https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/compose.rst
    A specific step can also be accessed by index or name by indexing (with
    ``[idx]``) the pipeline::

        >>> pipe.steps[0]
        ('reduce_dim', PCA())
        >>> pipe[0]
        PCA()
        >>> pipe['reduce_dim']
        PCA()

    `Pipeline`'s `named_steps` attribute allows accessing steps by name with
    tab completion in interactive environments::

        >>> pipe.named_steps.reduce_dim is pipe['reduce_dim']
        True

Tracking feature names in a pipeline
....................................

To enable model inspection, :class:`~sklearn.pipeline.Pipeline` has a
``get_feature_names_out()`` method, just like all transformers. You can use
pipeline slicing to get the feature names going into each step::

    >>> from sklearn.datasets import load_iris
    >>> from sklearn.linear_model import LogisticRegression
    >>> from sklearn.feature_selection import SelectKBest
    >>> iris = load_iris()
    >>> pipe = Pipeline(steps=[
    ...    ('select', SelectKBest(k=2)),
    ...    ('clf', LogisticRegression())])
    >>> pipe.fit(iris.data, iris.target)
    Pipeline(steps=[('select', SelectKBest(...)), ('clf', LogisticRegression(...))])
    >>> pipe[:-1].get_feature_names_out()
    array(['x2', 'x3'], ...)

.. dropdown:: Customize feature names

    You can also provide custom feature names for the input data using
    ``get_feature_names_out``::

        >>> pipe[:-1].get_feature_names_out(iris.feature_names)
        array(['petal length (cm)', 'petal width (cm)'], ...)

.. _pipeline_nested_parameters:

Access to nested parameters
...........................

It is common to adjust the parameters of an estimator within a pipeline.
This parameter is therefore nested because it belongs to a particular
sub-step.
Parameters of the estimators in the pipeline are accessible using the
``<estimator>__<parameter>`` syntax::

    >>> pipe = Pipeline(steps=[("reduce_dim", PCA()), ("clf", SVC())])
    >>> pipe.set_params(clf__C=10)
    Pipeline(steps=[('reduce_dim', PCA()), ('clf', SVC(C=10))])

.. dropdown:: When does it matter?

    This is particularly important for doing grid searches::

        >>> from sklearn.model_selection import GridSearchCV
        >>> param_grid = dict(reduce_dim__n_components=[2, 5, 10],
        ...                   clf__C=[0.1, 10, 100])
        >>> grid_search = GridSearchCV(pipe, param_grid=param_grid)

    Individual steps may also be replaced as parameters, and non-final steps
    may be ignored by setting them to ``'passthrough'``::

        >>> param_grid = dict(reduce_dim=['passthrough', PCA(5), PCA(10)],
        ...                   clf=[SVC(), LogisticRegression()],
        ...                   clf__C=[0.1, 10, 100])
        >>> grid_search = GridSearchCV(pipe, param_grid=param_grid)

.. seealso::

    * :ref:`composite_grid_search`

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_feature_selection_plot_feature_selection_pipeline.py`
* :ref:`sphx_glr_auto_examples_model_selection_plot_grid_search_text_feature_extraction.py`
* :ref:`sphx_glr_auto_examples_compose_plot_digits_pipe.py`
* :ref:`sphx_glr_auto_examples_miscellaneous_plot_kernel_approximation.py`
* :ref:`sphx_glr_auto_examples_svm_plot_svm_anova.py`
* :ref:`sphx_glr_auto_examples_compose_plot_compare_reduction.py`
* :ref:`sphx_glr_auto_examples_miscellaneous_plot_pipeline_display.py`

.. _pipeline_cache:

Caching transformers: avoid repeated computation
------------------------------------------------

.. currentmodule:: sklearn.pipeline

Fitting transformers may be computationally expensive. With its ``memory``
parameter set, :class:`Pipeline` will cache each transformer after calling
``fit``. This feature is used to avoid recomputing fitted transformers
within a pipeline when the parameters and input data are identical.
A typical example is the case of a grid search in which the transformers can
be fitted only once and reused for each configuration. The last step will
never be cached, even if it is a transformer.

The parameter ``memory`` is needed in order to cache the transformers.
``memory`` can be either a string containing the directory where to cache
the transformers or a ``joblib.Memory`` object::

    >>> from tempfile import mkdtemp
    >>> from shutil import rmtree
    >>> from sklearn.decomposition import PCA
    >>> from sklearn.svm import SVC
    >>> from sklearn.pipeline import Pipeline
    >>> estimators = [('reduce_dim', PCA()), ('clf', SVC())]
    >>> cachedir = mkdtemp()
    >>> pipe = Pipeline(estimators, memory=cachedir)
    >>> pipe
    Pipeline(memory=..., steps=[('reduce_dim', PCA()), ('clf', SVC())])
    >>> # Clear the cache directory when you don't need it anymore
    >>> rmtree(cachedir)

.. dropdown:: Side effect of caching transformers
    :color: warning

    Using a :class:`Pipeline` without cache enabled, it is possible to
    inspect the original instance such as::

        >>> from sklearn.datasets import load_digits
        >>> X_digits, y_digits = load_digits(return_X_y=True)
        >>> pca1 = PCA(n_components=10)
        >>> svm1 = SVC()
        >>> pipe = Pipeline([('reduce_dim', pca1), ('clf', svm1)])
        >>> pipe.fit(X_digits, y_digits)
        Pipeline(steps=[('reduce_dim', PCA(n_components=10)), ('clf', SVC())])
        >>> # The pca instance can be inspected directly
        >>> pca1.components_.shape
        (10, 64)

    Enabling caching triggers a clone of the transformers before fitting.
    Therefore, the transformer instance given to the pipeline cannot be
    inspected directly. In the following example, accessing the
    :class:`~sklearn.decomposition.PCA` instance ``pca2`` will raise an
    ``AttributeError`` since ``pca2`` will be an unfitted transformer.
    Instead, use the attribute ``named_steps`` to inspect estimators within
    the pipeline::

        >>> cachedir = mkdtemp()
        >>> pca2 = PCA(n_components=10)
        >>> svm2 = SVC()
        >>> cached_pipe = Pipeline([('reduce_dim', pca2), ('clf', svm2)],
        ...                        memory=cachedir)
        >>> cached_pipe.fit(X_digits, y_digits)
        Pipeline(memory=..., steps=[('reduce_dim', PCA(n_components=10)), ('clf', SVC())])
        >>> cached_pipe.named_steps['reduce_dim'].components_.shape
        (10, 64)
        >>> # Remove the cache directory
        >>> rmtree(cachedir)

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_compose_plot_compare_reduction.py`

.. _transformed_target_regressor:

Transforming target in regression
=================================

:class:`~sklearn.compose.TransformedTargetRegressor` transforms the targets
``y`` before fitting a regression model. The predictions are mapped back to
the original space via an inverse transform.
It takes as an argument the regressor that will be used for prediction, and
the transformer that will be applied to the target variable::

    >>> import numpy as np
    >>> from sklearn.datasets import make_regression
    >>> from sklearn.compose import TransformedTargetRegressor
    >>> from sklearn.preprocessing import QuantileTransformer
    >>> from sklearn.linear_model import LinearRegression
    >>> from sklearn.model_selection import train_test_split
    >>> # create a synthetic dataset
    >>> X, y = make_regression(n_samples=20640,
    ...                        n_features=8,
    ...                        noise=100.0,
    ...                        random_state=0)
    >>> y = np.exp(1 + (y - y.min()) * (4 / (y.max() - y.min())))
    >>> X, y = X[:2000, :], y[:2000]  # select a subset of data
    >>> transformer = QuantileTransformer(output_distribution='normal')
    >>> regressor = LinearRegression()
    >>> regr = TransformedTargetRegressor(regressor=regressor,
    ...                                   transformer=transformer)
    >>> X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    >>> regr.fit(X_train, y_train)
    TransformedTargetRegressor(...)
    >>> print(f"R2 score: {regr.score(X_test, y_test):.2f}")
    R2 score: 0.67
    >>> raw_target_regr = LinearRegression().fit(X_train, y_train)
    >>> print(f"R2 score: {raw_target_regr.score(X_test, y_test):.2f}")
    R2 score: 0.64

For simple transformations, instead of a Transformer object, a pair of
functions can be passed, defining the transformation and its inverse
mapping::

    >>> def func(x):
    ...     return np.log(x)
    >>> def inverse_func(x):
    ...     return np.exp(x)

Subsequently, the object is created as::

    >>> regr = TransformedTargetRegressor(regressor=regressor,
    ...                                   func=func,
    ...                                   inverse_func=inverse_func)
    >>> regr.fit(X_train, y_train)
    TransformedTargetRegressor(...)
    >>> print(f"R2 score: {regr.score(X_test, y_test):.2f}")
    R2 score: 0.67

By default, the provided functions are checked at each fit to be the inverse
of each other.
However, it is possible to bypass this check by setting ``check_inverse`` to
``False``::

    >>> def inverse_func(x):
    ...     return x
    >>> regr = TransformedTargetRegressor(regressor=regressor,
    ...                                   func=func,
    ...                                   inverse_func=inverse_func,
    ...                                   check_inverse=False)
    >>> regr.fit(X_train, y_train)
    TransformedTargetRegressor(...)
    >>> print(f"R2 score: {regr.score(X_test, y_test):.2f}")
    R2 score: -3.02

.. note::

    The transformation can be triggered by setting either ``transformer`` or
    the pair of functions ``func`` and ``inverse_func``. However, setting
    both options will raise an error.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_compose_plot_transformed_target.py`

.. _feature_union:

FeatureUnion: composite feature spaces
======================================

.. currentmodule:: sklearn.pipeline

:class:`FeatureUnion` combines several transformer objects into a new
transformer that combines their output. A :class:`FeatureUnion` takes a list
of transformer objects. During fitting, each of these is fit to the data
independently. The transformers are applied in parallel, and the feature
matrices they output are concatenated side-by-side into a larger matrix.

When you want to apply different transformations to each field of the data,
see the related class :class:`~sklearn.compose.ColumnTransformer` (see the
:ref:`user guide <column_transformer>`).

:class:`FeatureUnion` serves the same purposes as :class:`Pipeline`:
convenience and joint parameter estimation and validation.
:class:`FeatureUnion` and :class:`Pipeline` can be combined to create
complex models.
(A :class:`FeatureUnion` has no way of checking whether two transformers
might produce identical features. It only produces a union when the feature
sets are disjoint, and making sure they are is the caller's responsibility.)

Usage
-----

A :class:`FeatureUnion` is built using a list of ``(key, value)`` pairs,
where the ``key`` is the name you want to give to a given transformation (an
arbitrary string; it only serves as an identifier) and ``value`` is an
estimator object::

    >>> from sklearn.pipeline import FeatureUnion
    >>> from sklearn.decomposition import PCA
    >>> from sklearn.decomposition import KernelPCA
    >>> estimators = [('linear_pca', PCA()), ('kernel_pca', KernelPCA())]
    >>> combined = FeatureUnion(estimators)
    >>> combined
    FeatureUnion(transformer_list=[('linear_pca', PCA()),
                                   ('kernel_pca', KernelPCA())])

Like pipelines, feature unions have a shorthand constructor called
:func:`make_union` that does not require explicit naming of the components.

Like ``Pipeline``, individual steps may be replaced using ``set_params``,
and ignored by setting them to ``'drop'``::

    >>> combined.set_params(kernel_pca='drop')
    FeatureUnion(transformer_list=[('linear_pca', PCA()),
                                   ('kernel_pca', 'drop')])

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_compose_plot_feature_union.py`

.. _column_transformer:

ColumnTransformer for heterogeneous data
========================================

Many datasets contain features of different types, say text, floats, and
dates, where each type of feature requires separate preprocessing or feature
extraction steps. Often it is easiest to preprocess data before applying
scikit-learn methods, for example using pandas. Processing your data before
passing it to scikit-learn might be problematic for one of the following
reasons:
Incorporating statistics from test data into the preprocessors makes cross-validation scores unreliable (known as \*data leakage\*), for example in the case of scalers or imputing missing values. 2. You may want to include the parameters of the preprocessors in a :ref:`parameter search `. The :class:`~sklearn.compose.ColumnTransformer` helps to perform different transformations for different columns of the data, within a :class:`~sklearn.pipeline.Pipeline` that is safe from data leakage and that can be parametrized. :class:`~sklearn.compose.ColumnTransformer` works on arrays, sparse matrices, and `pandas DataFrames `\_\_. To each column, a different transformation can be applied, such as preprocessing or a specific feature extraction method:: >>> import pandas as pd >>> X = pd.DataFrame( ... {'city': ['London', 'London', 'Paris', 'Sallisaw'], ... 'title': ["His Last Bow", "How Watson Learned the Trick", ... "A Moveable Feast", "The Grapes of Wrath"], ... 'expert\_rating': [5, 3, 4, 5], ... 'user\_rating': [4, 5, 4, 3]}) For this data, we might want to encode the ``'city'`` column as a categorical variable using :class:`~sklearn.preprocessing.OneHotEncoder` but apply a :class:`~sklearn.feature\_extraction.text.CountVectorizer` to the ``'title'`` column. As we might use multiple feature extraction methods on the same column, we give each transformer a unique name, say ``'categories'`` and ``'title\_bow'``. By default, the remaining rating columns are ignored (``remainder='drop'``):: >>> from sklearn.compose import ColumnTransformer >>> from sklearn.feature\_extraction.text import CountVectorizer >>> from sklearn.preprocessing import OneHotEncoder >>> column\_trans = ColumnTransformer( ... [('categories', OneHotEncoder(dtype='int'), ['city']), ... ('title\_bow', CountVectorizer(), 'title')], ... 
remainder='drop', verbose\_feature\_names\_out=False) >>> column\_trans.fit(X) ColumnTransformer(transformers=[('categories', OneHotEncoder(dtype='int'), ['city']), ('title\_bow', CountVectorizer(), 'title')], verbose\_feature\_names\_out=False) >>> column\_trans.get\_feature\_names\_out() array(['city\_London', 'city\_Paris', 'city\_Sallisaw', 'bow', 'feast', 'grapes', 'his', 'how', 'last', 'learned', 'moveable', 'of', 'the', 'trick', 'watson', 'wrath'], ...) >>> column\_trans.transform(X).toarray() array([[1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0], [0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0], [0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1]]...) In the above example, the :class:`~sklearn.feature\_extraction.text.CountVectorizer` expects a 1D array as input and therefore the columns were specified as a string (``'title'``). However, :class:`~sklearn.preprocessing.OneHotEncoder`, like most other transformers, expects 2D data; therefore, in that
case you need to specify the column as a list of strings (``['city']``). Apart from a scalar or a single item list, the column selection can be specified as a list of multiple items, an integer array, a slice, a boolean mask, or with a :func:`~sklearn.compose.make\_column\_selector`. The :func:`~sklearn.compose.make\_column\_selector` is used to select columns based on data type or column name:: >>> import numpy as np >>> from sklearn.preprocessing import StandardScaler >>> from sklearn.compose import make\_column\_selector >>> ct = ColumnTransformer([ ... ('scale', StandardScaler(), ... make\_column\_selector(dtype\_include=np.number)), ... ('onehot', ... OneHotEncoder(), ... make\_column\_selector(pattern='city', dtype\_include=[object, "string"]))]) >>> ct.fit\_transform(X) array([[ 0.904, 0. , 1. , 0. , 0. ], [-1.507, 1.414, 1. , 0. , 0. ], [-0.301, 0. , 0. , 1. , 0. ], [ 0.904, -1.414, 0. , 0. , 1. ]]) Strings can reference columns if the input is a DataFrame, while integers are always interpreted as positional columns. We can keep the remaining rating columns by setting ``remainder='passthrough'``. The values are appended to the end of the transformation:: >>> column\_trans = ColumnTransformer( ... [('city\_category', OneHotEncoder(dtype='int'), ['city']), ... ('title\_bow', CountVectorizer(), 'title')], ... remainder='passthrough') >>> column\_trans.fit\_transform(X) array([[1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 5, 4], [1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 3, 5], [0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 4, 4], [0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 5, 3]]...) 
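The equivalent column-selection specifications mentioned above - a list of names, an integer array, a slice, or a boolean mask - can be sketched as follows (toy frame and column names are illustrative only, not from the example above):

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import StandardScaler

# toy frame; the column names are hypothetical
X = pd.DataFrame({"a": [1.0, 2.0], "b": [3.0, 4.0], "c": [5.0, 6.0]})

# four equivalent ways to select the first two columns
selections = [["a", "b"], [0, 1], slice(0, 2), np.array([True, True, False])]
for cols in selections:
    ct = ColumnTransformer([("scale", StandardScaler(), cols)], remainder="drop")
    # each selection picks out the same two columns
    assert ct.fit_transform(X).shape == (2, 2)
```

All four runs produce the same transformed output, since each selector resolves to the same pair of columns.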
The ``remainder`` parameter can be set to an estimator to transform the remaining rating columns. The transformed values are appended to the end of the transformation:: >>> from sklearn.preprocessing import MinMaxScaler >>> column\_trans = ColumnTransformer( ... [('city\_category', OneHotEncoder(), ['city']), ... ('title\_bow', CountVectorizer(), 'title')], ... remainder=MinMaxScaler()) >>> column\_trans.fit\_transform(X)[:, -2:] array([[1. , 0.5], [0. , 1. ], [0.5, 0.5], [1. , 0. ]]) .. \_make\_column\_transformer: The :func:`~sklearn.compose.make\_column\_transformer` function is available to more easily create a :class:`~sklearn.compose.ColumnTransformer` object. Specifically, the names will be given automatically. The equivalent for the above example would be:: >>> from sklearn.compose import make\_column\_transformer >>> column\_trans = make\_column\_transformer( ... (OneHotEncoder(), ['city']), ... (CountVectorizer(), 'title'), ... remainder=MinMaxScaler()) >>> column\_trans ColumnTransformer(remainder=MinMaxScaler(), transformers=[('onehotencoder', OneHotEncoder(), ['city']), ('countvectorizer', CountVectorizer(), 'title')]) If :class:`~sklearn.compose.ColumnTransformer` is fitted with a dataframe and the dataframe only has string column names, then transforming a dataframe will use the column names to select the columns:: >>> ct = ColumnTransformer( ... [("scale", StandardScaler(), ["expert\_rating"])]).fit(X) >>> X\_new = pd.DataFrame({"expert\_rating": [5, 6, 1], ... "ignored\_new\_col": [1.2, 0.3, -0.1]}) >>> ct.transform(X\_new) array([[ 0.9], [ 2.1], [-3.9]]) .. \_visualizing\_composite\_estimators: Visualizing Composite Estimators ================================ Estimators are displayed with an HTML representation when shown in a jupyter notebook. This is useful to diagnose or visualize a Pipeline with many estimators. 
This visualization is activated by default:: >>> column\_trans # doctest: +SKIP It can be deactivated by setting the `display` option in :func:`~sklearn.set\_config` to 'text':: >>> from sklearn import set\_config >>> set\_config(display='text') # doctest: +SKIP >>> # displays text representation in a jupyter context >>> column\_trans # doctest: +SKIP An example of the HTML output can be seen in the \*\*HTML representation of Pipeline\*\* section of :ref:`sphx\_glr\_auto\_examples\_compose\_plot\_column\_transformer\_mixed\_types.py`. As an alternative, the HTML can be written to a file using :func:`~sklearn.utils.estimator\_html\_repr`:: >>> from sklearn.utils import estimator\_html\_repr >>> with open('my\_estimator.html', 'w') as f: # doctest: +SKIP ... f.write(estimator\_html\_repr(clf)) .. rubric:: Examples
\* :ref:`sphx\_glr\_auto\_examples\_compose\_plot\_column\_transformer.py` \* :ref:`sphx\_glr\_auto\_examples\_compose\_plot\_column\_transformer\_mixed\_types.py`
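As mentioned earlier, one motivation for doing the preprocessing inside a :class:`~sklearn.compose.ColumnTransformer` is that its inner parameters become reachable from a surrounding :class:`~sklearn.pipeline.Pipeline` in a parameter search via the usual ``__`` syntax. A minimal sketch (the toy target ``y`` and the parameter grid are made up for illustration):

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

X = pd.DataFrame({
    "city": ["London", "London", "Paris", "Sallisaw"],
    "title": ["His Last Bow", "How Watson Learned the Trick",
              "A Moveable Feast", "The Grapes of Wrath"],
})
y = [0, 1, 0, 1]  # hypothetical labels, for illustration only

pipe = Pipeline([
    ("pre", ColumnTransformer([
        ("city", OneHotEncoder(handle_unknown="ignore"), ["city"]),
        ("bow", CountVectorizer(), "title"),
    ])),
    ("clf", LogisticRegression()),
])

# step name, transformer name and parameter compose with '__'
grid = GridSearchCV(pipe, {"pre__bow__lowercase": [True, False]}, cv=2)
grid.fit(X, y)
```

The grid key ``pre__bow__lowercase`` reaches through the pipeline step ``pre`` into the transformer named ``bow`` and tunes its ``lowercase`` parameter, so preprocessing choices are cross-validated without leaking test data into ``fit``.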
.. \_isotonic: =================== Isotonic regression =================== .. currentmodule:: sklearn.isotonic The class :class:`IsotonicRegression` fits a non-decreasing real function to 1-dimensional data. It solves the following problem: .. math:: \min \sum\_i w\_i (y\_i - \hat{y}\_i)^2 subject to :math:`\hat{y}\_i \le \hat{y}\_j` whenever :math:`X\_i \le X\_j`, where the weights :math:`w\_i` are strictly positive, and both `X` and `y` are arbitrary real quantities. The `increasing` parameter changes the constraint to :math:`\hat{y}\_i \ge \hat{y}\_j` whenever :math:`X\_i \le X\_j`. Setting it to 'auto' will automatically choose the constraint based on `Spearman's rank correlation coefficient `\_. :class:`IsotonicRegression` produces a series of predictions :math:`\hat{y}\_i` for the training data which are the closest to the targets :math:`y` in terms of mean squared error. These predictions are interpolated for predicting to unseen data. The predictions of :class:`IsotonicRegression` thus form a function that is piecewise linear: .. figure:: ../auto\_examples/miscellaneous/images/sphx\_glr\_plot\_isotonic\_regression\_001.png :target: ../auto\_examples/miscellaneous/plot\_isotonic\_regression.html :align: center .. rubric:: Examples \* :ref:`sphx\_glr\_auto\_examples\_miscellaneous\_plot\_isotonic\_regression.py`
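The fit-and-interpolate behaviour described above can be sketched directly (the data here is made up for illustration):

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

X = np.arange(10, dtype=float)
y = np.array([1.0, 2.0, 1.5, 3.0, 2.5, 4.0, 5.0, 4.5, 6.0, 7.0])

iso = IsotonicRegression(increasing=True)
y_fit = iso.fit_transform(X, y)      # closest non-decreasing fit in MSE
assert np.all(np.diff(y_fit) >= 0)   # predictions never decrease

# unseen inputs are handled by piecewise-linear interpolation
mid = iso.predict([2.5])[0]
assert min(y_fit[2], y_fit[3]) <= mid <= max(y_fit[2], y_fit[3])
```

Where the raw targets decrease (e.g. 2.0 followed by 1.5), the fit pools the violating neighbours into a constant block, which is what makes the resulting function piecewise linear.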
.. \_neural\_networks\_unsupervised: ==================================== Neural network models (unsupervised) ==================================== .. currentmodule:: sklearn.neural\_network .. \_rbm: Restricted Boltzmann machines ============================= Restricted Boltzmann machines (RBM) are unsupervised nonlinear feature learners based on a probabilistic model. The features extracted by an RBM or a hierarchy of RBMs often give good results when fed into a linear classifier such as a linear SVM or a perceptron. The model makes assumptions regarding the distribution of inputs. At the moment, scikit-learn only provides :class:`BernoulliRBM`, which assumes the inputs are either binary values or values between 0 and 1, each encoding the probability that the specific feature would be turned on. The RBM tries to maximize the likelihood of the data using a particular graphical model. The parameter learning algorithm used (:ref:`Stochastic Maximum Likelihood `) prevents the representations from straying far from the input data, which makes them capture interesting regularities, but makes the model less useful for small datasets, and usually not useful for density estimation. The method gained popularity for initializing deep neural networks with the weights of independent RBMs. This method is known as unsupervised pre-training. .. figure:: ../auto\_examples/neural\_networks/images/sphx\_glr\_plot\_rbm\_logistic\_classification\_001.png :target: ../auto\_examples/neural\_networks/plot\_rbm\_logistic\_classification.html :align: center :scale: 100% .. rubric:: Examples \* :ref:`sphx\_glr\_auto\_examples\_neural\_networks\_plot\_rbm\_logistic\_classification.py` Graphical model and parametrization ----------------------------------- The graphical model of an RBM is a fully-connected bipartite graph. .. image:: ../images/rbm\_graph.png :align: center The nodes are random variables whose states depend on the state of the other nodes they are connected to. 
The model is therefore parameterized by the weights of the connections, as well as one intercept (bias) term for each visible and hidden unit, omitted from the image for simplicity. The energy function measures the quality of a joint assignment: .. math:: E(\mathbf{v}, \mathbf{h}) = -\sum\_i \sum\_j w\_{ij}v\_ih\_j - \sum\_i b\_iv\_i - \sum\_j c\_jh\_j In the formula above, :math:`\mathbf{b}` and :math:`\mathbf{c}` are the intercept vectors for the visible and hidden layers, respectively. The joint probability of the model is defined in terms of the energy: .. math:: P(\mathbf{v}, \mathbf{h}) = \frac{e^{-E(\mathbf{v}, \mathbf{h})}}{Z} The word \*restricted\* refers to the bipartite structure of the model, which prohibits direct interaction between hidden units, or between visible units. This means that the following conditional independencies are assumed: .. math:: h\_i \bot h\_j | \mathbf{v} \\ v\_i \bot v\_j | \mathbf{h} The bipartite structure allows for the use of efficient block Gibbs sampling for inference. Bernoulli Restricted Boltzmann machines --------------------------------------- In the :class:`BernoulliRBM`, all units are binary stochastic units. This means that the input data should either be binary, or real-valued between 0 and 1 signifying the probability that the visible unit would turn on or off. This is a good model for character recognition, where the interest is on which pixels are active and which aren't. For images of natural scenes it no longer fits because of background, depth and the tendency of neighbouring pixels to take the same values. The conditional probability distribution of each unit is given by the logistic sigmoid activation function of the input it receives: .. math:: P(v\_i=1|\mathbf{h}) = \sigma(\sum\_j w\_{ij}h\_j + b\_i) \\ P(h\_j=1|\mathbf{v}) = \sigma(\sum\_i w\_{ij}v\_i + c\_j) where :math:`\sigma` is the logistic sigmoid function: .. math:: \sigma(x) = \frac{1}{1 + e^{-x}} .. 
\_sml: Stochastic Maximum Likelihood learning -------------------------------------- The training algorithm implemented in :class:`BernoulliRBM` is known as Stochastic Maximum Likelihood (SML) or Persistent Contrastive Divergence (PCD). Optimizing maximum likelihood directly is infeasible because of the form of the data likelihood: .. math:: \log P(v) = \log \sum\_h e^{-E(v, h)} - \log \sum\_{x, y} e^{-E(x, y)} For simplicity the equation above is written for a single training example. The gradient with respect to the weights is formed of two terms corresponding to the
ones above. They are usually known as the positive gradient and the negative gradient, because of their respective signs. In this implementation, the gradients are estimated over mini-batches of samples. In maximizing the log-likelihood, the positive gradient makes the model prefer hidden states that are compatible with the observed training data. Because of the bipartite structure of RBMs, it can be computed efficiently. The negative gradient, however, is intractable. Its goal is to lower the energy of joint states that the model prefers, therefore making it stay true to the data. It can be approximated by Markov chain Monte Carlo using block Gibbs sampling by iteratively sampling each of :math:`v` and :math:`h` given the other, until the chain mixes. Samples generated in this way are sometimes referred to as fantasy particles. This is inefficient and it is difficult to determine whether the Markov chain mixes. The Contrastive Divergence method suggests stopping the chain after a small number of iterations, :math:`k`, usually even 1. This method is fast and has low variance, but the samples are far from the model distribution. Persistent Contrastive Divergence addresses this. Instead of starting a new chain each time the gradient is needed, and performing only one Gibbs sampling step, in PCD we keep a number of chains (fantasy particles) that are updated :math:`k` Gibbs steps after each weight update. This allows the particles to explore the space more thoroughly. .. rubric:: References \* `"A fast learning algorithm for deep belief nets" `\_, G. Hinton, S. Osindero, Y.-W. Teh, 2006 \* `"Training Restricted Boltzmann Machines using Approximations to the Likelihood Gradient" `\_, T. 
Tieleman, 2008
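The pieces described above - binary visible units, a learned latent representation, and block Gibbs sampling - can be exercised with :class:`BernoulliRBM` directly. A minimal sketch (random binary data, hyperparameters chosen arbitrarily for illustration):

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

rng = np.random.RandomState(0)
X = rng.randint(0, 2, size=(100, 16)).astype(np.float64)  # binary inputs

rbm = BernoulliRBM(n_components=8, learning_rate=0.05, n_iter=20, random_state=0)
H = rbm.fit_transform(X)                  # P(h=1 | v) for each sample
assert H.shape == (100, 8)
assert rbm.components_.shape == (8, 16)   # weight matrix W

V = rbm.gibbs(X[:5])                      # one step of block Gibbs sampling
assert V.shape == (5, 16)
```

`fit_transform` returns the hidden-unit activation probabilities, which is the representation one would feed to a downstream linear classifier, while `gibbs` performs one visible-hidden-visible sampling step of the kind PCD chains are built from.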
.. \_learning\_curves: ===================================================== Validation curves: plotting scores to evaluate models ===================================================== .. currentmodule:: sklearn.model\_selection Every estimator has its advantages and drawbacks. Its generalization error can be decomposed in terms of bias, variance and noise. The \*\*bias\*\* of an estimator is its average error for different training sets. The \*\*variance\*\* of an estimator indicates how sensitive it is to varying training sets. Noise is a property of the data. In the following plot, we see a function :math:`f(x) = \cos (\frac{3}{2} \pi x)` and some noisy samples from that function. We use three different estimators to fit the function: linear regression with polynomial features of degree 1, 4 and 15. We see that the first estimator can at best provide only a poor fit to the samples and the true function because it is too simple (high bias), the second estimator approximates it almost perfectly and the last estimator approximates the training data perfectly but does not fit the true function very well, i.e. it is very sensitive to varying training data (high variance). .. figure:: ../auto\_examples/model\_selection/images/sphx\_glr\_plot\_underfitting\_overfitting\_001.png :target: ../auto\_examples/model\_selection/plot\_underfitting\_overfitting.html :align: center :scale: 50% Bias and variance are inherent properties of estimators and we usually have to select learning algorithms and hyperparameters so that both bias and variance are as low as possible (see `Bias-variance dilemma `\_). Another way to reduce the variance of a model is to use more training data. However, you should only collect more training data if the true function is too complex to be approximated by an estimator with a lower variance. In the simple one-dimensional problem that we have seen in the example it is easy to see whether the estimator suffers from bias or variance. 
However, in high-dimensional spaces, models can become very difficult to visualize. For this reason, it is often helpful to use the tools described below. .. rubric:: Examples \* :ref:`sphx\_glr\_auto\_examples\_model\_selection\_plot\_underfitting\_overfitting.py` \* :ref:`sphx\_glr\_auto\_examples\_model\_selection\_plot\_train\_error\_vs\_test\_error.py` \* :ref:`sphx\_glr\_auto\_examples\_model\_selection\_plot\_learning\_curve.py` .. \_validation\_curve: Validation curve ================ To validate a model we need a scoring function (see :ref:`model\_evaluation`), for example accuracy for classifiers. The proper way of choosing multiple hyperparameters of an estimator is of course grid search or similar methods (see :ref:`grid\_search`) that select the hyperparameter with the maximum score on a validation set or multiple validation sets. Note that if we optimize the hyperparameters based on a validation score the validation score is biased and not a good estimate of the generalization any longer. To get a proper estimate of the generalization we have to compute the score on another test set. However, it is sometimes helpful to plot the influence of a single hyperparameter on the training score and the validation score to find out whether the estimator is overfitting or underfitting for some hyperparameter values. The function :func:`validation\_curve` can help in this case:: >>> import numpy as np >>> from sklearn.model\_selection import validation\_curve >>> from sklearn.datasets import load\_iris >>> from sklearn.svm import SVC >>> np.random.seed(0) >>> X, y = load\_iris(return\_X\_y=True) >>> indices = np.arange(y.shape[0]) >>> np.random.shuffle(indices) >>> X, y = X[indices], y[indices] >>> train\_scores, valid\_scores = validation\_curve( ... SVC(kernel="linear"), X, y, param\_name="C", param\_range=np.logspace(-7, 3, 3), ... 
) >>> train\_scores array([[0.90, 0.94, 0.91, 0.89, 0.92], [0.9 , 0.92, 0.93, 0.92, 0.93], [0.97, 1. , 0.98, 0.97, 0.99]]) >>> valid\_scores array([[0.9, 0.9 , 0.9 , 0.96, 0.9 ], [0.9, 0.83, 0.96, 0.96, 0.93], [1. , 0.93, 1. , 1. , 0.9 ]]) If you intend to plot the validation curves only, the class :class:`~sklearn.model\_selection.ValidationCurveDisplay` is more direct than using matplotlib manually on the results of a call to :func:`validation\_curve`. You can use the method :meth:`~sklearn.model\_selection.ValidationCurveDisplay.from\_estimator` similarly to :func:`validation\_curve` to generate and plot the validation curve: .. plot::
:context: close-figs :align: center import numpy as np from sklearn.datasets import load\_iris from sklearn.model\_selection import ValidationCurveDisplay from sklearn.svm import SVC from sklearn.utils import shuffle X, y = load\_iris(return\_X\_y=True) X, y = shuffle(X, y, random\_state=0) ValidationCurveDisplay.from\_estimator( SVC(kernel="linear"), X, y, param\_name="C", param\_range=np.logspace(-7, 3, 10) ) If the training score and the validation score are both low, the estimator is underfitting. If the training score is high and the validation score is low, the estimator is overfitting; otherwise it is working very well. A low training score and a high validation score is usually not possible. .. \_learning\_curve: Learning curve ============== A learning curve shows the validation and training score of an estimator for varying numbers of training samples. It is a tool to find out how much we benefit from adding more training data and whether the estimator suffers more from a variance error or a bias error. Consider the following example where we plot the learning curve of a naive Bayes classifier and an SVM. For the naive Bayes, both the validation score and the training score converge to a value that is quite low with increasing size of the training set. Thus, we will probably not benefit much from more training data. In contrast, for small amounts of data, the training score of the SVM is much greater than the validation score. Adding more training samples will most likely increase generalization. .. 
figure:: ../auto\_examples/model\_selection/images/sphx\_glr\_plot\_learning\_curve\_001.png :target: ../auto\_examples/model\_selection/plot\_learning\_curve.html :align: center :scale: 50% We can use the function :func:`learning\_curve` to generate the values that are required to plot such a learning curve (number of samples that have been used, the average scores on the training sets and the average scores on the validation sets):: >>> from sklearn.model\_selection import learning\_curve >>> from sklearn.svm import SVC >>> train\_sizes, train\_scores, valid\_scores = learning\_curve( ... SVC(kernel='linear'), X, y, train\_sizes=[50, 80, 110], cv=5) >>> train\_sizes array([ 50, 80, 110]) >>> train\_scores array([[0.98, 0.98 , 0.98, 0.98, 0.98], [0.98, 1. , 0.98, 0.98, 0.98], [0.98, 1. , 0.98, 0.98, 0.99]]) >>> valid\_scores array([[1. , 0.93, 1. , 1. , 0.96], [1. , 0.96, 1. , 1. , 0.96], [1. , 0.96, 1. , 1. , 0.96]]) If you intend to plot the learning curves only, the class :class:`~sklearn.model\_selection.LearningCurveDisplay` will be easier to use. You can use the method :meth:`~sklearn.model\_selection.LearningCurveDisplay.from\_estimator` similarly to :func:`learning\_curve` to generate and plot the learning curve: .. plot:: :context: close-figs :align: center from sklearn.datasets import load\_iris from sklearn.model\_selection import LearningCurveDisplay from sklearn.svm import SVC from sklearn.utils import shuffle X, y = load\_iris(return\_X\_y=True) X, y = shuffle(X, y, random\_state=0) LearningCurveDisplay.from\_estimator( SVC(kernel="linear"), X, y, train\_sizes=[50, 80, 110], cv=5) .. rubric:: Examples \* See :ref:`sphx\_glr\_auto\_examples\_model\_selection\_plot\_learning\_curve.py` for an example of using learning curves to check the scalability of a predictive model.
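The bias/variance reading of a learning curve can also be done numerically rather than from the plot: the gap between mean training and validation scores indicates variance, while two low, converged scores indicate bias. A sketch (training sizes and the choice of :class:`~sklearn.naive_bayes.GaussianNB` are arbitrary, for illustration):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import learning_curve
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
sizes, train_scores, valid_scores = learning_curve(
    GaussianNB(), X, y, train_sizes=[30, 60, 90], cv=5,
    shuffle=True, random_state=0)

# average over the five CV folds at each training-set size
gap = train_scores.mean(axis=1) - valid_scores.mean(axis=1)
# a gap that stays large as the size grows suggests high variance (overfitting);
# low, converged train and validation scores suggest high bias (underfitting)
```

Here `gap` has one entry per training size, so its trend over increasing sizes is what matters, not any single value.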
.. \_svm: ======================= Support Vector Machines ======================= .. TODO: Describe tol parameter .. TODO: Describe max\_iter parameter .. currentmodule:: sklearn.svm \*\*Support vector machines (SVMs)\*\* are a set of supervised learning methods used for :ref:`classification `, :ref:`regression ` and :ref:`outliers detection `. The advantages of support vector machines are: - Effective in high dimensional spaces. - Still effective in cases where the number of dimensions is greater than the number of samples. - Uses a subset of training points in the decision function (called support vectors), so it is also memory efficient. - Versatile: different :ref:`svm\_kernels` can be specified for the decision function. Common kernels are provided, but it is also possible to specify custom kernels. The disadvantages of support vector machines include: - If the number of features is much greater than the number of samples, avoiding over-fitting when choosing :ref:`svm\_kernels` and a regularization term is crucial. - SVMs do not directly provide probability estimates; these are calculated using an expensive five-fold cross-validation (see :ref:`Scores and probabilities `, below). The support vector machines in scikit-learn support both dense (``numpy.ndarray`` and convertible to that by ``numpy.asarray``) and sparse (any ``scipy.sparse``) sample vectors as input. However, to use an SVM to make predictions for sparse data, it must have been fit on such data. For optimal performance, use C-ordered ``numpy.ndarray`` (dense) or ``scipy.sparse.csr\_matrix`` (sparse) with ``dtype=float64``. .. \_svm\_classification: Classification ============== :class:`SVC`, :class:`NuSVC` and :class:`LinearSVC` are classes capable of performing binary and multi-class classification on a dataset. .. 
figure:: ../auto\_examples/svm/images/sphx\_glr\_plot\_iris\_svc\_001.png :target: ../auto\_examples/svm/plot\_iris\_svc.html :align: center :class:`SVC` and :class:`NuSVC` are similar methods, but accept slightly different sets of parameters and have different mathematical formulations (see section :ref:`svm\_mathematical\_formulation`). On the other hand, :class:`LinearSVC` is another (faster) implementation of Support Vector Classification for the case of a linear kernel. It also lacks some of the attributes of :class:`SVC` and :class:`NuSVC`, like `support\_`. :class:`LinearSVC` uses `squared\_hinge` loss and due to its implementation in `liblinear` it also regularizes the intercept, if considered. This effect can however be reduced by carefully fine tuning its `intercept\_scaling` parameter, which allows the intercept term to have a different regularization behavior compared to the other features. The classification results and score can therefore differ from the other two classifiers. As other classifiers, :class:`SVC`, :class:`NuSVC` and :class:`LinearSVC` take as input two arrays: an array `X` of shape `(n\_samples, n\_features)` holding the training samples, and an array `y` of class labels (strings or integers), of shape `(n\_samples)`:: >>> from sklearn import svm >>> X = [[0, 0], [1, 1]] >>> y = [0, 1] >>> clf = svm.SVC() >>> clf.fit(X, y) SVC() After being fitted, the model can then be used to predict new values:: >>> clf.predict([[2., 2.]]) array([1]) SVMs decision function (detailed in the :ref:`svm\_mathematical\_formulation`) depends on some subset of the training data, called the support vectors. Some properties of these support vectors can be found in attributes ``support\_vectors\_``, ``support\_`` and ``n\_support\_``:: >>> # get support vectors >>> clf.support\_vectors\_ array([[0., 0.], [1., 1.]]) >>> # get indices of support vectors >>> clf.support\_ array([0, 1]...) 
>>> # get number of support vectors for each class >>> clf.n\_support\_ array([1, 1]...) .. rubric:: Examples \* :ref:`sphx\_glr\_auto\_examples\_svm\_plot\_separating\_hyperplane.py` \* :ref:`sphx\_glr\_auto\_examples\_svm\_plot\_svm\_anova.py` \* :ref:`sphx\_glr\_auto\_examples\_classification\_plot\_classification\_probability.py` .. \_svm\_multi\_class: Multi-class classification -------------------------- :class:`SVC` and :class:`NuSVC` implement the "one-versus-one" ("ovo") approach for multi-class classification, which constructs ``n\_classes \* (n\_classes - 1) / 2`` classifiers, each trained on data from two classes. Internally, the solver always uses this "ovo" strategy to train the models. However, by default, the `decision\_function\_shape` parameter is set to `"ovr"` ("one-vs-rest"), to have a consistent interface with other classifiers by monotonically transforming the "ovo" decision function into an "ovr" decision function of shape ``(n\_samples, n\_classes)``.
https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/svm.rst
>>> X = [[0], [1], [2], [3]] >>> Y = [0, 1, 2, 3] >>> clf = svm.SVC(decision\_function\_shape='ovo') >>> clf.fit(X, Y) SVC(decision\_function\_shape='ovo') >>> dec = clf.decision\_function([[1]]) >>> dec.shape[1] # 6 classes: 4\*3/2 = 6 6 >>> clf.decision\_function\_shape = "ovr" >>> dec = clf.decision\_function([[1]]) >>> dec.shape[1] # 4 classes 4 On the other hand, :class:`LinearSVC` implements a "one-vs-rest" ("ovr") multi-class strategy, thus training `n\_classes` models. >>> lin\_clf = svm.LinearSVC() >>> lin\_clf.fit(X, Y) LinearSVC() >>> dec = lin\_clf.decision\_function([[1]]) >>> dec.shape[1] 4 See :ref:`svm\_mathematical\_formulation` for a complete description of the decision function. .. dropdown:: Details on multi-class strategies Note that the :class:`LinearSVC` also implements an alternative multi-class strategy, the so-called multi-class SVM formulated by Crammer and Singer [#8]\_, by using the option ``multi\_class='crammer\_singer'``. In practice, one-vs-rest classification is usually preferred, since the results are mostly similar, but the runtime is significantly less. For "one-vs-rest" :class:`LinearSVC` the attributes ``coef\_`` and ``intercept\_`` have the shape ``(n\_classes, n\_features)`` and ``(n\_classes,)`` respectively. Each row of the coefficients corresponds to one of the ``n\_classes`` "one-vs-rest" classifiers and similar for the intercepts, in the order of the "one" class. In the case of "one-vs-one" :class:`SVC` and :class:`NuSVC`, the layout of the attributes is a little more involved. 
In the case of a linear kernel, the attributes ``coef\_`` and ``intercept\_`` have the shape ``(n\_classes \* (n\_classes - 1) / 2, n\_features)`` and ``(n\_classes \* (n\_classes - 1) / 2)`` respectively. This is similar to the layout for :class:`LinearSVC` described above, with each row now corresponding to a binary classifier. The order for classes 0 to n is "0 vs 1", "0 vs 2" , ... "0 vs n", "1 vs 2", "1 vs 3", "1 vs n", . . . "n-1 vs n". The shape of ``dual\_coef\_`` is ``(n\_classes-1, n\_SV)`` with a somewhat hard to grasp layout. The columns correspond to the support vectors involved in any of the ``n\_classes \* (n\_classes - 1) / 2`` "one-vs-one" classifiers. Each support vector ``v`` has a dual coefficient in each of the ``n\_classes - 1`` classifiers comparing the class of ``v`` against another class. Note that some, but not all, of these dual coefficients, may be zero. The ``n\_classes - 1`` entries in each column are these dual coefficients, ordered by the opposing class. This might be clearer with an example: consider a three class problem with class 0 having three support vectors :math:`v^{0}\_0, v^{1}\_0, v^{2}\_0` and class 1 and 2 having two support vectors :math:`v^{0}\_1, v^{1}\_1` and :math:`v^{0}\_2, v^{1}\_2` respectively. For each support vector :math:`v^{j}\_i`, there are two dual coefficients. Let's call the coefficient of support vector :math:`v^{j}\_i` in the classifier between classes :math:`i` and :math:`k` :math:`\alpha^{j}\_{i,k}`. 
Then ``dual\_coef\_`` looks like this: +------------------------+------------------------+------------------------+------------------------+------------------------+------------------------+------------------------+ |:math:`\alpha^{0}\_{0,1}`|:math:`\alpha^{1}\_{0,1}`|:math:`\alpha^{2}\_{0,1}`|:math:`\alpha^{0}\_{1,0}`|:math:`\alpha^{1}\_{1,0}`|:math:`\alpha^{0}\_{2,0}`|:math:`\alpha^{1}\_{2,0}`| +------------------------+------------------------+------------------------+------------------------+------------------------+------------------------+------------------------+ |:math:`\alpha^{0}\_{0,2}`|:math:`\alpha^{1}\_{0,2}`|:math:`\alpha^{2}\_{0,2}`|:math:`\alpha^{0}\_{1,2}`|:math:`\alpha^{1}\_{1,2}`|:math:`\alpha^{0}\_{2,1}`|:math:`\alpha^{1}\_{2,1}`| +------------------------+------------------------+------------------------+------------------------+------------------------+------------------------+------------------------+ |Coefficients |Coefficients |Coefficients | |for SVs of class 0 |for SVs of class 1 |for SVs of class 2 | +--------------------------------------------------------------------------+-------------------------------------------------+-------------------------------------------------+ .. rubric:: Examples \* :ref:`sphx\_glr\_auto\_examples\_svm\_plot\_iris\_svc.py` .. \_scores\_probabilities: Scores and probabilities ------------------------ The ``decision\_function`` method of :class:`SVC` and :class:`NuSVC` gives per-class scores for each sample (or a single score per sample in the binary case). When the constructor option ``probability`` is set to ``True``, class membership probability estimates (from the methods ``predict\_proba`` and ``predict\_log\_proba``) are enabled. In the binary case, the probabilities are calibrated using Platt scaling [#1]\_: logistic regression on the SVM's scores, fit by an additional cross-validation on the training data. In
the multiclass case, this is extended as per [#2]\_. .. note:: The same probability calibration procedure is available for all estimators via the :class:`~sklearn.calibration.CalibratedClassifierCV` (see :ref:`calibration`). In the case of :class:`SVC` and :class:`NuSVC`, this procedure is builtin to `libsvm`\_ which is used under the hood, so it does not rely on scikit-learn's :class:`~sklearn.calibration.CalibratedClassifierCV`. The cross-validation involved in Platt scaling is an expensive operation for large datasets. In addition, the probability estimates may be inconsistent with the scores: - the "argmax" of the scores may not be the argmax of the probabilities - in binary classification, a sample may be labeled by ``predict`` as belonging to the positive class even if the output of `predict\_proba` is less than 0.5; and similarly, it could be labeled as negative even if the output of `predict\_proba` is more than 0.5. Platt's method is also known to have theoretical issues. If confidence scores are required, but these do not have to be probabilities, then it is advisable to set ``probability=False`` and use ``decision\_function`` instead of ``predict\_proba``. Please note that when ``decision\_function\_shape='ovr'`` and ``n\_classes > 2``, unlike ``decision\_function``, the ``predict`` method does not try to break ties by default. You can set ``break\_ties=True`` for the output of ``predict`` to be the same as ``np.argmax(clf.decision\_function(...), axis=1)``, otherwise the first class among the tied classes will always be returned; but have in mind that it comes with a computational cost. 
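This tie-breaking behaviour can be verified directly (a minimal sketch; the synthetic three-class dataset below is arbitrary):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Arbitrary three-class toy data
X, y = make_classification(n_samples=60, n_classes=3, n_informative=4,
                           random_state=0)

clf = SVC(decision_function_shape="ovr", break_ties=True).fit(X, y)

# With break_ties=True, predict agrees with the argmax of the ovr scores
pred = clf.predict(X)
argmax = np.argmax(clf.decision_function(X), axis=1)
assert (pred == argmax).all()
```

With ``break\_ties=False`` (the default), ``predict`` would instead return the first class among the tied classes.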
See :ref:`sphx\_glr\_auto\_examples\_svm\_plot\_svm\_tie\_breaking.py` for an example on tie breaking. Unbalanced problems -------------------- In problems where it is desired to give more importance to certain classes or certain individual samples, the parameters ``class\_weight`` and ``sample\_weight`` can be used. :class:`SVC` (but not :class:`NuSVC`) implements the parameter ``class\_weight`` in the ``fit`` method. It's a dictionary of the form ``{class\_label : value}``, where value is a floating point number > 0 that sets the parameter ``C`` of class ``class\_label`` to ``C \* value``. The figure below illustrates the decision boundary of an unbalanced problem, with and without weight correction. .. figure:: ../auto\_examples/svm/images/sphx\_glr\_plot\_separating\_hyperplane\_unbalanced\_001.png :target: ../auto\_examples/svm/plot\_separating\_hyperplane\_unbalanced.html :align: center :scale: 75 :class:`SVC`, :class:`NuSVC`, :class:`SVR`, :class:`NuSVR`, :class:`LinearSVC`, :class:`LinearSVR` and :class:`OneClassSVM` implement also weights for individual samples in the `fit` method through the ``sample\_weight`` parameter. Similar to ``class\_weight``, this sets the parameter ``C`` for the i-th example to ``C \* sample\_weight[i]``, which will encourage the classifier to get these samples right. The figure below illustrates the effect of sample weighting on the decision boundary. The size of the circles is proportional to the sample weights: .. figure:: ../auto\_examples/svm/images/sphx\_glr\_plot\_weighted\_samples\_001.png :target: ../auto\_examples/svm/plot\_weighted\_samples.html :align: center :scale: 75 .. rubric:: Examples \* :ref:`sphx\_glr\_auto\_examples\_svm\_plot\_separating\_hyperplane\_unbalanced.py` \* :ref:`sphx\_glr\_auto\_examples\_svm\_plot\_weighted\_samples.py` .. \_svm\_regression: Regression ========== The method of Support Vector Classification can be extended to solve regression problems. 
This method is called Support Vector Regression. The model produced by support vector classification (as described above) depends only on a subset of the training data, because the cost function for building the model does not care about training points that lie beyond the margin. Analogously, the model produced by Support Vector Regression depends only on a subset of the training data, because the cost function ignores samples whose prediction is close to their target. There are three different implementations of Support Vector Regression: :class:`SVR`, :class:`NuSVR` and :class:`LinearSVR`. :class:`LinearSVR` provides a faster implementation than :class:`SVR` but only considers the linear kernel, while :class:`NuSVR` implements a slightly different formulation than :class:`SVR`
and :class:`LinearSVR`. Due to its implementation in `liblinear` :class:`LinearSVR` also regularizes the intercept, if considered. This effect can however be reduced by carefully fine tuning its `intercept\_scaling` parameter, which allows the intercept term to have a different regularization behavior compared to the other features. The prediction results and score can therefore differ from the other two estimators. See :ref:`svm\_implementation\_details` for further details. As with classification classes, the fit method will take as argument vectors X, y, only that in this case y is expected to have floating point values instead of integer values:: >>> from sklearn import svm >>> X = [[0, 0], [2, 2]] >>> y = [0.5, 2.5] >>> regr = svm.SVR() >>> regr.fit(X, y) SVR() >>> regr.predict([[1, 1]]) array([1.5]) .. rubric:: Examples \* :ref:`sphx\_glr\_auto\_examples\_svm\_plot\_svm\_regression.py` .. \_svm\_outlier\_detection: Density estimation, novelty detection ======================================= The class :class:`OneClassSVM` implements a One-Class SVM which is used in outlier detection. See :ref:`outlier\_detection` for the description and usage of OneClassSVM. Complexity ========== Support Vector Machines are powerful tools, but their compute and storage requirements increase rapidly with the number of training vectors. The core of an SVM is a quadratic programming problem (QP), separating support vectors from the rest of the training data. 
The QP solver used by the `libsvm`\_-based implementation scales between :math:`O(n\_{features} \times n\_{samples}^2)` and :math:`O(n\_{features} \times n\_{samples}^3)` depending on how efficiently the `libsvm`\_ cache is used in practice (dataset dependent). If the data is very sparse :math:`n\_{features}` should be replaced by the average number of non-zero features in a sample vector. For the linear case, the algorithm used in :class:`LinearSVC` by the `liblinear`\_ implementation is much more efficient than its `libsvm`\_-based :class:`SVC` counterpart and can scale almost linearly to millions of samples and/or features. Tips on Practical Use ===================== \* \*\*Avoiding data copy\*\*: For :class:`SVC`, :class:`SVR`, :class:`NuSVC` and :class:`NuSVR`, if the data passed to certain methods is not C-ordered contiguous and double precision, it will be copied before calling the underlying C implementation. You can check whether a given numpy array is C-contiguous by inspecting its ``flags`` attribute. For :class:`LinearSVC` (and :class:`LogisticRegression `) any input passed as a numpy array will be copied and converted to the `liblinear`\_ internal sparse data representation (double precision floats and int32 indices of non-zero components). If you want to fit a large-scale linear classifier without copying a dense numpy C-contiguous double precision array as input, we suggest to use the :class:`SGDClassifier ` class instead. The objective function can be configured to be almost the same as the :class:`LinearSVC` model. \* \*\*Kernel cache size\*\*: For :class:`SVC`, :class:`SVR`, :class:`NuSVC` and :class:`NuSVR`, the size of the kernel cache has a strong impact on run times for larger problems. If you have enough RAM available, it is recommended to set ``cache\_size`` to a higher value than the default of 200(MB), such as 500(MB) or 1000(MB). \* \*\*Setting C\*\*: ``C`` is ``1`` by default and it's a reasonable default choice. 
If you have a lot of noisy observations you should decrease it: decreasing C corresponds to more regularization. :class:`LinearSVC` and :class:`LinearSVR` are less sensitive to ``C`` when it becomes large, and prediction results stop improving after a certain threshold. Meanwhile, larger ``C`` values will take more time to train, sometimes up to 10 times longer, as shown in [#3]\_. \* Support Vector Machine algorithms are not scale invariant, so \*\*it
is highly recommended to scale your data\*\*. For example, scale each attribute on the input vector X to [0,1] or [-1,+1], or standardize it to have mean 0 and variance 1. Note that the \*same\* scaling must be applied to the test vector to obtain meaningful results. This can be done easily by using a :class:`~sklearn.pipeline.Pipeline`:: >>> from sklearn.pipeline import make\_pipeline >>> from sklearn.preprocessing import StandardScaler >>> from sklearn.svm import SVC >>> clf = make\_pipeline(StandardScaler(), SVC()) See section :ref:`preprocessing` for more details on scaling and normalization. .. \_shrinking\_svm: \* Regarding the `shrinking` parameter, quoting [#4]\_: \*We found that if the number of iterations is large, then shrinking can shorten the training time. However, if we loosely solve the optimization problem (e.g., by using a large stopping tolerance), the code without using shrinking may be much faster\* \* Parameter ``nu`` in :class:`NuSVC`/:class:`OneClassSVM`/:class:`NuSVR` approximates the fraction of training errors and support vectors. \* In :class:`SVC`, if the data is unbalanced (e.g. many positive and few negative), set ``class\_weight='balanced'`` and/or try different penalty parameters ``C``. \* \*\*Randomness of the underlying implementations\*\*: The underlying implementations of :class:`SVC` and :class:`NuSVC` use a random number generator only to shuffle the data for probability estimation (when ``probability`` is set to ``True``). This randomness can be controlled with the ``random\_state`` parameter. 
If ``probability`` is set to ``False`` these estimators are not random and ``random\_state`` has no effect on the results. The underlying :class:`OneClassSVM` implementation is similar to the ones of :class:`SVC` and :class:`NuSVC`. As no probability estimation is provided for :class:`OneClassSVM`, it is not random. The underlying :class:`LinearSVC` implementation uses a random number generator to select features when fitting the model with a dual coordinate descent (i.e. when ``dual`` is set to ``True``). It is thus not uncommon to have slightly different results for the same input data. If that happens, try with a smaller `tol` parameter. This randomness can also be controlled with the ``random\_state`` parameter. When ``dual`` is set to ``False`` the underlying implementation of :class:`LinearSVC` is not random and ``random\_state`` has no effect on the results. \* Using L1 penalization as provided by ``LinearSVC(penalty='l1', dual=False)`` yields a sparse solution, i.e. only a subset of feature weights is different from zero and contribute to the decision function. Increasing ``C`` yields a more complex model (more features are selected). The ``C`` value that yields a "null" model (all weights equal to zero) can be calculated using :func:`l1\_min\_c`. .. \_svm\_kernels: Kernel functions ================ The \*kernel function\* can be any of the following: \* linear: :math:`\langle x, x'\rangle`. \* polynomial: :math:`(\gamma \langle x, x'\rangle + r)^d`, where :math:`d` is specified by parameter ``degree``, :math:`r` by ``coef0``. \* rbf: :math:`\exp(-\gamma \|x-x'\|^2)`, where :math:`\gamma` is specified by parameter ``gamma``, must be greater than 0. \* sigmoid :math:`\tanh(\gamma \langle x,x'\rangle + r)`, where :math:`r` is specified by ``coef0``. 
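These kernel definitions can be checked numerically against the pairwise kernel helpers in ``sklearn.metrics.pairwise`` (a sketch; the toy vectors and the values chosen for ``gamma``, ``coef0`` and ``degree`` are arbitrary):

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

X = np.array([[0.0, 1.0], [1.0, 2.0]])
gamma, r, d = 0.5, 1.0, 3

# rbf: exp(-gamma * ||x - x'||^2)
K_rbf = rbf_kernel(X, X, gamma=gamma)
sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
assert np.allclose(K_rbf, np.exp(-gamma * sq_dists))

# polynomial: (gamma * <x, x'> + r)^d
K_poly = polynomial_kernel(X, X, degree=d, gamma=gamma, coef0=r)
assert np.allclose(K_poly, (gamma * X @ X.T + r) ** d)
```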
Different kernels are specified by the `kernel` parameter:: >>> linear\_svc = svm.SVC(kernel='linear') >>> linear\_svc.kernel 'linear' >>> rbf\_svc = svm.SVC(kernel='rbf') >>> rbf\_svc.kernel 'rbf' See also :ref:`kernel\_approximation` for a solution to use RBF kernels that is much faster and more scalable. Parameters of the RBF Kernel ---------------------------- When training an SVM with the \*Radial Basis Function\* (RBF) kernel, two parameters must be considered: ``C`` and ``gamma``. The parameter ``C``, common to all SVM kernels, trades off misclassification of training examples against simplicity of the decision surface.
A low ``C`` makes the decision surface smooth, while a high ``C`` aims at classifying all training examples correctly. ``gamma`` defines how much influence a single training example has. The larger ``gamma`` is, the closer other examples must be to be affected. Proper choice of ``C`` and ``gamma`` is critical to the SVM's performance. One is advised to use :class:`~sklearn.model\_selection.GridSearchCV` with ``C`` and ``gamma`` spaced exponentially far apart to choose good values. .. rubric:: Examples \* :ref:`sphx\_glr\_auto\_examples\_svm\_plot\_rbf\_parameters.py` \* :ref:`sphx\_glr\_auto\_examples\_svm\_plot\_svm\_scale\_c.py` Custom Kernels -------------- You can define your own kernels by either giving the kernel as a python function or by precomputing the Gram matrix. Classifiers with custom kernels behave the same way as any other classifiers, except that: \* Field ``support\_vectors\_`` is now empty, only indices of support vectors are stored in ``support\_`` \* A reference (and not a copy) of the first argument in the ``fit()`` method is stored for future reference. If that array changes between the use of ``fit()`` and ``predict()`` you will have unexpected results. .. dropdown:: Using Python functions as kernels You can use your own defined kernels by passing a function to the ``kernel`` parameter. Your kernel must take as arguments two matrices of shape ``(n\_samples\_1, n\_features)``, ``(n\_samples\_2, n\_features)`` and return a kernel matrix of shape ``(n\_samples\_1, n\_samples\_2)``. 
The following code defines a linear kernel and creates a classifier instance that will use that kernel:: >>> import numpy as np >>> from sklearn import svm >>> def my\_kernel(X, Y): ... return np.dot(X, Y.T) ... >>> clf = svm.SVC(kernel=my\_kernel) .. dropdown:: Using the Gram matrix You can pass pre-computed kernels by using the ``kernel='precomputed'`` option. You should then pass Gram matrix instead of X to the `fit` and `predict` methods. The kernel values between \*all\* training vectors and the test vectors must be provided: >>> import numpy as np >>> from sklearn.datasets import make\_classification >>> from sklearn.model\_selection import train\_test\_split >>> from sklearn import svm >>> X, y = make\_classification(n\_samples=10, random\_state=0) >>> X\_train , X\_test , y\_train, y\_test = train\_test\_split(X, y, random\_state=0) >>> clf = svm.SVC(kernel='precomputed') >>> # linear kernel computation >>> gram\_train = np.dot(X\_train, X\_train.T) >>> clf.fit(gram\_train, y\_train) SVC(kernel='precomputed') >>> # predict on training examples >>> gram\_test = np.dot(X\_test, X\_train.T) >>> clf.predict(gram\_test) array([0, 1, 0]) .. rubric:: Examples \* :ref:`sphx\_glr\_auto\_examples\_svm\_plot\_custom\_kernel.py` .. \_svm\_mathematical\_formulation: Mathematical formulation ======================== A support vector machine constructs a hyper-plane or set of hyper-planes in a high or infinite dimensional space, which can be used for classification, regression or other tasks. Intuitively, a good separation is achieved by the hyper-plane that has the largest distance to the nearest training data points of any class (so-called functional margin), since in general the larger the margin the lower the generalization error of the classifier. The figure below shows the decision function for a linearly separable problem, with three samples on the margin boundaries, called "support vectors": .. 
figure:: ../auto\_examples/svm/images/sphx\_glr\_plot\_separating\_hyperplane\_001.png :align: center :scale: 75 In general, when the problem isn't linearly separable, the support vectors are the samples \*within\* the margin boundaries. We recommend [#5]\_ and [#6]\_ as good references for the theory and practicalities of SVMs. SVC --- Given training vectors :math:`x\_i \in \mathbb{R}^p`, i=1,..., n, in two classes, and a vector :math:`y \in \{1, -1\}^n`, our goal is to find :math:`w \in \mathbb{R}^p` and :math:`b \in \mathbb{R}` such that the prediction given by :math:`\text{sign} (w^T\phi(x)
+ b)` is correct for most samples. SVC solves the following primal problem: .. math:: \min\_ {w, b, \zeta} \frac{1}{2} w^T w + C \sum\_{i=1}^{n} \zeta\_i \textrm {subject to } & y\_i (w^T \phi (x\_i) + b) \geq 1 - \zeta\_i,\\ & \zeta\_i \geq 0, i=1, ..., n Intuitively, we're trying to maximize the margin (by minimizing :math:`||w||^2 = w^Tw`), while incurring a penalty when a sample is misclassified or within the margin boundary. Ideally, the value :math:`y\_i (w^T \phi (x\_i) + b)` would be :math:`\geq 1` for all samples, which indicates a perfect prediction. But problems are usually not perfectly separable with a hyperplane, so we allow some samples to be at a distance :math:`\zeta\_i` from their correct margin boundary. The penalty term `C` controls the strength of this penalty, and as a result, acts as an inverse regularization parameter (see note below). The dual problem to the primal is .. math:: \min\_{\alpha} \frac{1}{2} \alpha^T Q \alpha - e^T \alpha \textrm {subject to } & y^T \alpha = 0\\ & 0 \leq \alpha\_i \leq C, i=1, ..., n where :math:`e` is the vector of all ones, and :math:`Q` is an :math:`n` by :math:`n` positive semidefinite matrix, :math:`Q\_{ij} \equiv y\_i y\_j K(x\_i, x\_j)`, where :math:`K(x\_i, x\_j) = \phi (x\_i)^T \phi (x\_j)` is the kernel. The terms :math:`\alpha\_i` are called the dual coefficients, and they are upper-bounded by :math:`C`. This dual representation highlights the fact that training vectors are implicitly mapped into a higher (maybe infinite) dimensional space by the function :math:`\phi`: see `kernel trick `\_. 
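The dual coefficients can be inspected on a fitted model: ``dual\_coef\_`` stores the products :math:`y\_i \alpha\_i`, and for a binary :class:`SVC` the decision function is their kernel-weighted sum over the support vectors plus the intercept. A minimal check (the toy data and the fixed ``gamma`` are arbitrary):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

# Arbitrary binary toy data; fix gamma so the kernel can be recomputed by hand
X, y = make_classification(n_samples=40, random_state=0)
clf = SVC(kernel="rbf", gamma=0.1).fit(X, y)

# decision_function(x) = sum_i dual_coef_[i] * K(sv_i, x) + intercept_
K = rbf_kernel(clf.support_vectors_, X, gamma=0.1)
manual = clf.dual_coef_ @ K + clf.intercept_
assert np.allclose(manual.ravel(), clf.decision_function(X))
```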
Once the optimization problem is solved, the output of :term:`decision\_function` for a given sample :math:`x` becomes: .. math:: \sum\_{i\in SV} y\_i \alpha\_i K(x\_i, x) + b, and the predicted class corresponds to its sign. We only need to sum over the support vectors (i.e. the samples that lie within the margin) because the dual coefficients :math:`\alpha\_i` are zero for the other samples. These parameters can be accessed through the attributes ``dual\_coef\_`` which holds the product :math:`y\_i \alpha\_i`, ``support\_vectors\_`` which holds the support vectors, and ``intercept\_`` which holds the independent term :math:`b`. .. note:: While SVM models derived from `libsvm`\_ and `liblinear`\_ use ``C`` as regularization parameter, most other estimators use ``alpha``. The exact equivalence between the amount of regularization of two models depends on the exact objective function optimized by the model. For example, when the estimator used is :class:`~sklearn.linear\_model.Ridge` regression, the relation between them is given as :math:`C = \frac{1}{\alpha}`. .. dropdown:: LinearSVC The primal problem can be equivalently formulated as .. math:: \min\_ {w, b} \frac{1}{2} w^T w + C \sum\_{i=1}^{n}\max(0, 1 - y\_i (w^T \phi(x\_i) + b)), where we make use of the `hinge loss `\_. This is the form that is directly optimized by :class:`LinearSVC`, but unlike the dual form, this one does not involve inner products between samples, so the famous kernel trick cannot be applied. This is why only the linear kernel is supported by :class:`LinearSVC` (:math:`\phi` is the identity function). .. \_nu\_svc: .. dropdown:: NuSVC The :math:`\nu`-SVC formulation [#7]\_ is a reparameterization of the :math:`C`-SVC and therefore mathematically equivalent. We introduce a new parameter :math:`\nu` (instead of :math:`C`) which controls the number of support vectors and \*margin errors\*: :math:`\nu \in (0, 1]` is an upper bound on the fraction of margin errors and a lower bound of
the fraction of support vectors. A margin error corresponds to a sample that lies on the wrong side of its margin boundary: it is either misclassified, or it is correctly classified but does not lie beyond the margin. SVR --- Given training vectors :math:`x\_i \in \mathbb{R}^p`, i=1,..., n, and a vector :math:`y \in \mathbb{R}^n` :math:`\varepsilon`-SVR solves the following primal problem: .. math:: \min\_ {w, b, \zeta, \zeta^\*} \frac{1}{2} w^T w + C \sum\_{i=1}^{n} (\zeta\_i + \zeta\_i^\*) \textrm {subject to } & y\_i - w^T \phi (x\_i) - b \leq \varepsilon + \zeta\_i,\\ & w^T \phi (x\_i) + b - y\_i \leq \varepsilon + \zeta\_i^\*,\\ & \zeta\_i, \zeta\_i^\* \geq 0, i=1, ..., n Here, we are penalizing samples whose prediction is at least :math:`\varepsilon` away from their true target. These samples penalize the objective by :math:`\zeta\_i` or :math:`\zeta\_i^\*`, depending on whether their predictions lie above or below the :math:`\varepsilon` tube. The dual problem is .. math:: \min\_{\alpha, \alpha^\*} \frac{1}{2} (\alpha - \alpha^\*)^T Q (\alpha - \alpha^\*) + \varepsilon e^T (\alpha + \alpha^\*) - y^T (\alpha - \alpha^\*) \textrm {subject to } & e^T (\alpha - \alpha^\*) = 0\\ & 0 \leq \alpha\_i, \alpha\_i^\* \leq C, i=1, ..., n where :math:`e` is the vector of all ones, :math:`Q` is an :math:`n` by :math:`n` positive semidefinite matrix, :math:`Q\_{ij} \equiv K(x\_i, x\_j) = \phi (x\_i)^T \phi (x\_j)` is the kernel. Here training vectors are implicitly mapped into a higher (maybe infinite) dimensional space by the function :math:`\phi`. The prediction is: .. 
math:: \sum\_{i \in SV}(\alpha\_i - \alpha\_i^\*) K(x\_i, x) + b These parameters can be accessed through the attributes ``dual\_coef\_`` which holds the difference :math:`\alpha\_i - \alpha\_i^\*`, ``support\_vectors\_`` which holds the support vectors, and ``intercept\_`` which holds the independent term :math:`b` .. dropdown:: LinearSVR The primal problem can be equivalently formulated as .. math:: \min\_ {w, b} \frac{1}{2} w^T w + C \sum\_{i=1}^{n}\max(0, |y\_i - (w^T \phi(x\_i) + b)| - \varepsilon), where we make use of the epsilon-insensitive loss, i.e. errors of less than :math:`\varepsilon` are ignored. This is the form that is directly optimized by :class:`LinearSVR`. .. \_svm\_implementation\_details: Implementation details ====================== Internally, we use `libsvm`\_ [#4]\_ and `liblinear`\_ [#3]\_ to handle all computations. These libraries are wrapped using C and Cython. For a description of the implementation and details of the algorithms used, please refer to their respective papers. .. \_`libsvm`: https://www.csie.ntu.edu.tw/~cjlin/libsvm/ .. \_`liblinear`: https://www.csie.ntu.edu.tw/~cjlin/liblinear/ .. rubric:: References .. [#1] Platt `"Probabilistic outputs for SVMs and comparisons to regularized likelihood methods" `\_. .. [#2] Wu, Lin and Weng, `"Probability estimates for multi-class classification by pairwise coupling" `\_, JMLR 5:975-1005, 2004. .. [#3] Fan, Rong-En, et al., `"LIBLINEAR: A library for large linear classification." `\_, Journal of machine learning research 9.Aug (2008): 1871-1874. .. [#4] Chang and Lin, `LIBSVM: A Library for Support Vector Machines `\_. .. [#5] Bishop, `Pattern recognition and machine learning `\_, chapter 7 Sparse Kernel Machines. .. [#6] :doi:`"A Tutorial on Support Vector Regression" <10.1023/B:STCO.0000035301.49549.88>` Alex J. Smola, Bernhard Schölkopf - Statistics and Computing archive Volume 14 Issue 3, August 2004, p. 199-222. .. [#7] Schölkopf et. 
al `New Support Vector Algorithms `\_, Neural Computation 12, 1207-1245 (2000). .. [#8] Crammer and Singer `On the Algorithmic Implementation of Multiclass Kernel-based Vector Machines `\_, JMLR 2001.
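To make the epsilon-insensitive loss in the :class:`LinearSVR` objective
above concrete, here is a minimal NumPy sketch. The helper names are
illustrative only and are not part of scikit-learn:

```python
import numpy as np


def epsilon_insensitive_loss(y_true, y_pred, epsilon=0.1):
    """max(0, |y - f(x)| - epsilon): errors inside the tube cost nothing."""
    return np.maximum(0.0, np.abs(y_true - y_pred) - epsilon)


def linear_svr_objective(w, b, X, y, C=1.0, epsilon=0.1):
    """Primal objective: (1/2) w^T w + C * sum of epsilon-insensitive losses."""
    residual_loss = epsilon_insensitive_loss(y, X @ w + b, epsilon)
    return 0.5 * w @ w + C * residual_loss.sum()


# A prediction within epsilon of the target incurs zero loss; one outside
# the tube is penalized linearly in the distance beyond the tube boundary.
print(epsilon_insensitive_loss(np.array([1.0, 1.0]),
                               np.array([1.05, 1.5]), epsilon=0.1))
```

The first prediction (off by 0.05, inside the 0.1 tube) contributes no
loss, while the second (off by 0.5) contributes 0.4.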
.. _density_estimation:

==================
Density Estimation
==================

.. sectionauthor:: Jake Vanderplas

Density estimation walks the line between unsupervised learning, feature
engineering, and data modeling. Some of the most popular and useful
density estimation techniques are mixture models such as Gaussian Mixtures
(:class:`~sklearn.mixture.GaussianMixture`), and neighbor-based approaches
such as the kernel density estimate
(:class:`~sklearn.neighbors.KernelDensity`). Gaussian Mixtures are
discussed more fully in the context of :ref:`clustering`, because the
technique is also useful as an unsupervised clustering scheme.

Density estimation is a very simple concept, and most people are already
familiar with one common density estimation technique: the histogram.

Density Estimation: Histograms
==============================

A histogram is a simple visualization of data where bins are defined, and
the number of data points within each bin is tallied. An example of a
histogram can be seen in the upper-left panel of the following figure:

.. |hist_to_kde| image:: ../auto_examples/neighbors/images/sphx_glr_plot_kde_1d_001.png
   :target: ../auto_examples/neighbors/plot_kde_1d.html
   :scale: 80

.. centered:: |hist_to_kde|

A major problem with histograms, however, is that the choice of binning can
have a disproportionate effect on the resulting visualization. Consider the
upper-right panel of the above figure. It shows a histogram over the same
data, with the bins shifted right. The results of the two visualizations
look entirely different, and might lead to different interpretations of the
data.

Intuitively, one can also think of a histogram as a stack of blocks, one
block per point. By stacking the blocks in the appropriate grid space, we
recover the histogram. But what if, instead of stacking the blocks on a
regular grid, we center each block on the point it represents, and sum the
total height at each location? This idea leads to the lower-left
visualization. It is perhaps not as clean as a histogram, but the fact that
the data drive the block locations means that it is a much better
representation of the underlying data.

This visualization is an example of a *kernel density estimation*, in this
case with a top-hat kernel (i.e. a square block at each point). We can
recover a smoother distribution by using a smoother kernel. The
bottom-right plot shows a Gaussian kernel density estimate, in which each
point contributes a Gaussian curve to the total. The result is a smooth
density estimate which is derived from the data, and functions as a
powerful non-parametric model of the distribution of points.

.. _kernel_density:

Kernel Density Estimation
=========================

Kernel density estimation in scikit-learn is implemented in the
:class:`~sklearn.neighbors.KernelDensity` estimator, which uses the Ball
Tree or KD Tree for efficient queries (see :ref:`neighbors` for a
discussion of these). Though the above example uses a 1D data set for
simplicity, kernel density estimation can be performed in any number of
dimensions, though in practice the curse of dimensionality causes its
performance to degrade in high dimensions.

In the following figure, 100 points are drawn from a bimodal distribution,
and the kernel density estimates are shown for three choices of kernels:

.. |kde_1d_distribution| image:: ../auto_examples/neighbors/images/sphx_glr_plot_kde_1d_003.png
   :target: ../auto_examples/neighbors/plot_kde_1d.html
   :scale: 80

.. centered:: |kde_1d_distribution|

It's clear how the kernel shape affects the smoothness of the resulting
distribution. The scikit-learn kernel density estimator can be used as
follows:

    >>> from sklearn.neighbors import KernelDensity
    >>> import numpy as np
    >>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
    >>> kde = KernelDensity(kernel='gaussian', bandwidth=0.2).fit(X)
    >>> kde.score_samples(X)
    array([-0.41075698, -0.41075698, -0.41076071, -0.41075698, -0.41075698,
           -0.41076071])

Here we have used ``kernel='gaussian'``, as seen above. Mathematically, a
kernel is a positive function :math:`K(x;h)` which is controlled by the
bandwidth parameter :math:`h`. Given this kernel form, the density estimate
at a point :math:`y` within a group of points :math:`x_i; i=1, \cdots, N`
is given
by:

.. math::

    \rho_K(y) = \sum_{i=1}^{N} K(y - x_i; h)

The bandwidth here acts as a smoothing parameter, controlling the tradeoff
between bias and variance in the result. A large bandwidth leads to a very
smooth (i.e. high-bias) density distribution. A small bandwidth leads to an
unsmooth (i.e. high-variance) density distribution.

The parameter `bandwidth` controls this smoothing. One can either set this
parameter manually or use Scott's or Silverman's estimation methods.

:class:`~sklearn.neighbors.KernelDensity` implements several common kernel
forms, which are shown in the following figure:

.. |kde_kernels| image:: ../auto_examples/neighbors/images/sphx_glr_plot_kde_1d_002.png
   :target: ../auto_examples/neighbors/plot_kde_1d.html
   :scale: 80

.. centered:: |kde_kernels|

.. dropdown:: Kernels' mathematical expressions

  The form of these kernels is as follows:

  * Gaussian kernel (``kernel = 'gaussian'``)

    :math:`K(x; h) \propto \exp(- \frac{x^2}{2h^2} )`

  * Tophat kernel (``kernel = 'tophat'``)

    :math:`K(x; h) \propto 1` if :math:`x < h`

  * Epanechnikov kernel (``kernel = 'epanechnikov'``)

    :math:`K(x; h) \propto 1 - \frac{x^2}{h^2}`

  * Exponential kernel (``kernel = 'exponential'``)

    :math:`K(x; h) \propto \exp(-x/h)`

  * Linear kernel (``kernel = 'linear'``)

    :math:`K(x; h) \propto 1 - x/h` if :math:`x < h`

  * Cosine kernel (``kernel = 'cosine'``)

    :math:`K(x; h) \propto \cos(\frac{\pi x}{2h})` if :math:`x < h`

The kernel density estimator can be used with any of the valid distance
metrics (see :class:`~sklearn.metrics.DistanceMetric` for a list of
available metrics), though the results are properly normalized only for the
Euclidean metric. One particularly useful metric is the Haversine distance,
which measures the angular distance between points on a sphere. Here is an
example of using a kernel density estimate for a visualization of
geospatial data, in this case the distribution of observations of two
different species on the South American continent:

.. |species_kde| image:: ../auto_examples/neighbors/images/sphx_glr_plot_species_kde_001.png
   :target: ../auto_examples/neighbors/plot_species_kde.html
   :scale: 80

.. centered:: |species_kde|

One other useful application of kernel density estimation is to learn a
non-parametric generative model of a dataset in order to efficiently draw
new samples from this generative model. Here is an example of using this
process to create a new set of hand-written digits, using a Gaussian kernel
learned on a PCA projection of the data:

.. |digits_kde| image:: ../auto_examples/neighbors/images/sphx_glr_plot_digits_kde_sampling_001.png
   :target: ../auto_examples/neighbors/plot_digits_kde_sampling.html
   :scale: 80

.. centered:: |digits_kde|

The "new" data consists of linear combinations of the input data, with
weights probabilistically drawn given the KDE model.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_neighbors_plot_kde_1d.py`: computation of
  simple kernel density estimates in one dimension.

* :ref:`sphx_glr_auto_examples_neighbors_plot_digits_kde_sampling.py`: an
  example of using Kernel Density estimation to learn a generative model of
  the hand-written digits data, and drawing new samples from this model.

* :ref:`sphx_glr_auto_examples_neighbors_plot_species_kde.py`: an example
  of Kernel Density estimation using the Haversine distance metric to
  visualize geospatial data.
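The density formula above can also be sketched directly in NumPy. This is
an illustrative re-implementation of the normalized Gaussian case, not
scikit-learn's code; note that ``KernelDensity.score_samples`` returns the
*log* of the density, whereas the sketch below returns the density itself:

```python
import numpy as np


def gaussian_kde(query_points, data, h):
    """Normalized Gaussian KDE: rho(y) = (1/N) * sum_i K(y - x_i; h)."""
    data = np.atleast_2d(data)
    query_points = np.atleast_2d(query_points)
    n_samples, n_features = data.shape
    # Pairwise squared Euclidean distances between queries and data points.
    sq_dists = ((query_points[:, None, :] - data[None, :, :]) ** 2).sum(axis=-1)
    # Gaussian kernel normalization constant in n_features dimensions.
    norm = (2 * np.pi * h**2) ** (n_features / 2)
    return np.exp(-sq_dists / (2 * h**2)).sum(axis=1) / (n_samples * norm)


X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]], dtype=float)
density = gaussian_kde(X, X, h=0.2)
# Taking the log reproduces the score_samples output shown earlier.
print(np.log(density))
```

With ``h=0.2`` each point's density is dominated by its own kernel, so the
log-densities are all close to :math:`-\log(N (2\pi h^2)^{d/2})`, matching
the ``KernelDensity`` example above.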
https://github.com/scikit-learn/scikit-learn/blob/main//doc/modules/density.rst
main
scikit-learn
[ -0.029126450419425964, -0.019366109743714333, -0.016197891905903816, 0.04169966280460358, 0.008377635851502419, -0.0913023129105568, 0.11341582983732224, -0.015903817489743233, -0.0013147902209311724, 0.018154002726078033, -0.0026350338011980057, 0.002513428218662739, 0.08557278662919998, ...
0.046548