information about the input as well, about which you can read more
`here`__::

    from skl2onnx import to_onnx

    onx = to_onnx(clf, X[:1].astype(numpy.float32), target_opset=12)
    with open("filename.onnx", "wb") as f:
        f.write(onx.SerializeToString())

You can load the model in Python and use the `ONNX` runtime to get
predictions::

    from onnxruntime import InferenceSession

    with open("filename.onnx", "rb") as f:
        onx = f.read()
    sess = InferenceSession(onx, providers=["CPUExecutionProvider"])
    pred_ort = sess.run(None, {"X": X_test.astype(numpy.float32)})[0]

.. _skops_persistence:

`skops.io`
----------

:mod:`skops.io` avoids using :mod:`pickle` and only loads files which have
types and references to functions which are trusted either by default or by
the user. Therefore it provides a more secure format than :mod:`pickle`,
:mod:`joblib`, and `cloudpickle`_.

.. dropdown:: Using skops

  The API is very similar to :mod:`pickle`, and you can persist your models
  as explained in the `documentation`__ using :func:`skops.io.dump` and
  :func:`skops.io.dumps`::

      import skops.io as sio

      obj = sio.dump(clf, "filename.skops")

  And you can load them back using :func:`skops.io.load` and
  :func:`skops.io.loads`. However, you need to specify the types which are
  trusted by you. You can get existing unknown types in a dumped object /
  file using :func:`skops.io.get_untrusted_types`, and after checking its
  contents, pass it to the load function::

      unknown_types = sio.get_untrusted_types(file="filename.skops")
      # investigate the contents of unknown_types, and only load if you
      # trust everything you see.
      clf = sio.load("filename.skops", trusted=unknown_types)

  Please report issues and feature requests related to this format on the
  `skops issue tracker`__.

.. _pickle_persistence:
`pickle`, `joblib`, and `cloudpickle`
-------------------------------------

These three modules / packages use the `pickle` protocol under the hood, but
come with slight variations:

- :mod:`pickle` is a module from the Python Standard Library. It can
  serialize and deserialize any Python object, including custom Python
  classes and objects.
- :mod:`joblib` is more efficient than `pickle` when working with large
  machine learning models or large numpy arrays.
- `cloudpickle`_ can serialize certain objects which cannot be serialized by
  :mod:`pickle` or :mod:`joblib`, such as user defined functions and lambda
  functions. This can happen, for instance, when using a
  :class:`~sklearn.preprocessing.FunctionTransformer` and using a custom
  function to transform the data.

.. dropdown:: Using `pickle`, `joblib`, or `cloudpickle`

  Depending on your use-case, you can choose one of these three methods to
  persist and load your scikit-learn model, and they all follow the same
  API::

      # Here you can replace pickle with joblib or cloudpickle
      from pickle import dump

      with open("filename.pkl", "wb") as f:
          dump(clf, f, protocol=5)

  Using `protocol=5` is recommended to reduce memory usage and make it
  faster to store and load any large NumPy array stored as a fitted
  attribute in the model. You can alternatively pass
  `protocol=pickle.HIGHEST_PROTOCOL` which is equivalent to `protocol=5`
  in Python 3.8 and later (at the time of writing).

  And later when needed, you can load the same object from the persisted
  file::

      # Here you can replace pickle with joblib or cloudpickle
      from pickle import load

      with open("filename.pkl", "rb") as f:
          clf = load(f)

.. _persistence_limitations:

Security & Maintainability Limitations
--------------------------------------

:mod:`pickle` (and :mod:`joblib` and :mod:`cloudpickle` by extension), has
many documented security vulnerabilities by design and should only be used
if the artifact, i.e. the pickle-file, is coming from a trusted and verified
source. You should never load a pickle file from an untrusted source,
similarly to how you should never execute code from an untrusted source.

Also note that arbitrary computations can be represented using the `ONNX`
format, and it is therefore recommended to serve models using `ONNX` in a
sandboxed environment to safeguard against computational and memory
exploits.
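The cloudpickle point above can be made concrete with a standard-library-only
sketch: plain :mod:`pickle` serializes functions by reference to an
importable name, so it fails on a lambda such as one you might hand to a
:class:`~sklearn.preprocessing.FunctionTransformer`. The `shift` name below
is purely illustrative; calling `cloudpickle.dumps` on the same object would
succeed because cloudpickle serializes the function body itself::

```python
import pickle

# A stand-in for a custom transformation you might pass to a
# FunctionTransformer; as a lambda, it has no importable module-level name.
shift = lambda x: x + 1

try:
    pickle.dumps(shift)
    pickled_ok = True
except pickle.PicklingError:
    # pickle stores functions by reference and cannot look up "<lambda>"
    pickled_ok = False

print(pickled_ok)  # False: this is the case where cloudpickle helps
```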
https://github.com/scikit-learn/scikit-learn/blob/main//doc/model_persistence.rst
Also note that there are no supported ways to load a model trained with a
different version of scikit-learn. While using :mod:`skops.io`,
:mod:`joblib`, :mod:`pickle`, or `cloudpickle`_, models saved using one
version of scikit-learn might load in other versions, however, this is
entirely unsupported and inadvisable. It should also be kept in mind that
operations performed on such data could give different and unexpected
results, or even crash your Python process.

In order to rebuild a similar model with future versions of scikit-learn,
additional metadata should be saved along the pickled model:

* The training data, e.g. a reference to an immutable snapshot
* The Python source code used to generate the model
* The versions of scikit-learn and its dependencies
* The cross validation score obtained on the training data

This should make it possible to check that the cross-validation score is in
the same range as before.

Aside from a few exceptions, persisted models should be portable across
operating systems and hardware architectures assuming the same versions of
dependencies and Python are used. If you encounter an estimator that is not
portable, please open an issue on GitHub. Persisted models are often
deployed in production using containers like Docker, in order to freeze the
environment and dependencies.

If you want to know more about these issues, please refer to these talks:

- `Adrin Jalali: Let's exploit pickle, and skops to the rescue! | PyData
  Amsterdam 2023`__.
- `Alex Gaynor: Pickles are for Delis, not Software - PyCon 2014`__.

.. _serving_environment:

Replicating the training environment in production
..................................................

If the versions of the dependencies used may differ from training to
production, it may result in unexpected behaviour and errors while using the
trained model.
To prevent such situations it is recommended to use the same dependencies
and versions in both the training and production environment. These
transitive dependencies can be pinned with the help of package management
tools like `pip`, `mamba`, `conda`, `poetry`, `conda-lock`, `pixi`, etc.

It is not always possible to load a model trained with older versions of the
scikit-learn library and its dependencies in an updated software
environment. Instead, you might need to retrain the model with the new
versions of all the libraries. So when training a model, it is important to
record the training recipe (e.g. a Python script) and training set
information, and metadata about all the dependencies to be able to
automatically reconstruct the same training environment for the updated
software.

.. dropdown:: InconsistentVersionWarning

  When an estimator is loaded with a scikit-learn version that is
  inconsistent with the version the estimator was pickled with, an
  :class:`~sklearn.exceptions.InconsistentVersionWarning` is raised. This
  warning can be caught to obtain the original version the estimator was
  pickled with::

      import pickle
      import warnings

      from sklearn.exceptions import InconsistentVersionWarning

      warnings.simplefilter("error", InconsistentVersionWarning)

      try:
          with open("model_from_previous_version.pickle", "rb") as f:
              est = pickle.load(f)
      except InconsistentVersionWarning as w:
          print(w.original_sklearn_version)

Serving the model artifact
..........................

The last step after training a scikit-learn model is serving the model. Once
the trained model is successfully loaded, it can be served to manage
different prediction requests. This can involve deploying the model as a web
service using containerization, or other model deployment strategies,
according to the specifications.
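The advice above to record dependency metadata alongside the persisted model
can be sketched with the standard library alone. The `training_environment`
helper and its default package list are illustrative assumptions, not a
scikit-learn API::

```python
import platform
from importlib import metadata


def training_environment(packages=("scikit-learn", "numpy", "scipy", "joblib")):
    """Collect version metadata to store next to the model artifact."""
    versions = {"python": platform.python_version()}
    for name in packages:
        try:
            versions[name] = metadata.version(name)
        except metadata.PackageNotFoundError:
            versions[name] = None  # not installed in this environment
    return versions


# store this dict (e.g. as JSON) together with the persisted model
env = training_environment()
```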
Summarizing the key points
--------------------------

Based on the different approaches for model persistence, the key points for
each approach can be summarized as follows:
* `ONNX`: It provides a uniform format for persisting any machine learning
  or deep learning model (other than scikit-learn) and is useful for model
  inference (predictions). It can, however, result in compatibility issues
  with different frameworks.

* :mod:`skops.io`: Trained scikit-learn models can be easily shared and put
  into production using :mod:`skops.io`. It is more secure compared to
  alternate approaches based on :mod:`pickle` because it does not load
  arbitrary code unless explicitly asked for by the user. Such code needs to
  be packaged and importable in the target Python environment.

* :mod:`joblib`: Efficient memory mapping techniques make it faster when
  using the same persisted model in multiple Python processes when using
  `mmap_mode="r"`. It also gives easy shortcuts to compress and decompress
  the persisted object without the need for extra code. However, it may
  trigger the execution of malicious code when loading a model from an
  untrusted source, as any other pickle-based persistence mechanism.

* :mod:`pickle`: It is native to Python and most Python objects can be
  serialized and deserialized using :mod:`pickle`, including custom Python
  classes and functions as long as they are defined in a package that can be
  imported in the target environment. While :mod:`pickle` can be used to
  easily save and load scikit-learn models, it may trigger the execution of
  malicious code while loading a model from an untrusted source.
  :mod:`pickle` can also be very efficient memorywise if the model was
  persisted with `protocol=5`, but it does not support memory mapping.

* `cloudpickle`_: It has comparable loading efficiency as :mod:`pickle` and
  :mod:`joblib` (without memory mapping), but offers additional flexibility
  to serialize custom Python code such as lambda expressions and
  interactively defined functions and classes.
  It might be a last resort to persist pipelines with custom Python
  components such as a :class:`sklearn.preprocessing.FunctionTransformer`
  that wraps a function defined in the training script itself or more
  generally outside of any importable Python package. Note that
  `cloudpickle`_ offers no forward compatibility guarantees and you might
  need the same version of `cloudpickle`_ to load the persisted model along
  with the same version of all the libraries used to define the model. As
  the other pickle-based persistence mechanisms, it may trigger the
  execution of malicious code while loading a model from an untrusted
  source.

.. _cloudpickle: https://github.com/cloudpipe/cloudpickle
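The `protocol=5` point above can be illustrated with the standard library
alone: protocol 5 (PEP 574) lets large buffers travel out-of-band instead of
being copied into the pickle byte stream, which is what makes round-tripping
large fitted arrays cheaper. The `Blob` class below is a hypothetical
stand-in for a model holding a large fitted buffer, not a scikit-learn
type::

```python
import pickle


class Blob:
    """Hypothetical container standing in for a model with a large array."""

    def __init__(self, data):
        self.data = bytearray(data)

    def __reduce_ex__(self, protocol):
        if protocol >= 5:
            # expose the raw buffer so pickle can send it out-of-band
            return (Blob, (pickle.PickleBuffer(self.data),))
        return (Blob, (bytes(self.data),))


blob = Blob(b"x" * 10)

buffers = []
# with a buffer_callback, protocol 5 keeps the buffer out of the byte stream
data = pickle.dumps(blob, protocol=5, buffer_callback=buffers.append)

# the same buffers must be handed back, in order, when loading
restored = pickle.loads(data, buffers=buffers)
```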
.. _governance:

===========================================
Scikit-learn governance and decision-making
===========================================

The purpose of this document is to formalize the governance process used by
the scikit-learn project, to clarify how decisions are made and how the
various elements of our community interact. This document establishes a
decision-making structure that takes into account feedback from all members
of the community and strives to find consensus, while avoiding any
deadlocks.

This is a meritocratic, consensus-based community project. Anyone with an
interest in the project can join the community, contribute to the project
design and participate in the decision making process. This document
describes how that participation takes place and how to set about earning
merit within the project community.

Roles And Responsibilities
==========================

We distinguish between contributors, core contributors, and the technical
committee. A key distinction between them is their voting rights:
contributors have no voting rights, whereas the other two groups all have
voting rights, as well as permissions to the tools relevant to their roles.

Contributors
------------

Contributors are community members who contribute in concrete ways to the
project. Anyone can become a contributor, and contributions can take many
forms – not only code – as detailed in the :ref:`contributors guide`. There
is no process to become a contributor: once somebody contributes to the
project in any way, they are a contributor.

Core Contributors
-----------------

All core contributor members have the same voting rights and right to
propose new members to any of the roles listed below. Their membership is
represented as being an organization member on the scikit-learn `GitHub
organization`_. They are also welcome to join our `monthly core contributor
meetings`_. New members can be nominated by any existing member.
Once they have been nominated, there will be a vote by the current core
contributors. Voting on new members is one of the few activities that takes
place on the project's private mailing list. While it is expected that most
votes will be unanimous, a two-thirds majority of the cast votes is enough.
The vote needs to be open for at least 1 week.

Core contributors that have not contributed to the project, corresponding to
their role, in the past 12 months will be asked if they want to become
emeritus members and recant their rights until they become active again. The
list of members, active and emeritus (with dates at which they became
active) is public on the scikit-learn website. It is the responsibility of
the active core contributors to send such a yearly reminder email.

The following teams form the core contributors group:

* **Contributor Experience Team**
  The contributor experience team improves the experience of contributors by
  helping with the triage of issues and pull requests, as well as noticing
  any repeating patterns where people might struggle, and to help with
  improving those aspects of the project. To this end, they have the
  required permissions on GitHub to label and close issues. :ref:`Their
  work` is crucial to improve the communication in the project and limit the
  crowding of the issue tracker.

  .. _communication_team:

* **Communication Team**
  Members of the communication team help with outreach and communication for
  scikit-learn. The goal of the team is to develop public awareness of
  scikit-learn, of its features and usage, as well as branding. For this,
  they can operate the scikit-learn accounts on various social networks and
  produce materials. They also have the required rights to our blog
  repository and other relevant accounts and platforms.
https://github.com/scikit-learn/scikit-learn/blob/main//doc/governance.rst
* **Documentation Team**
  Members of the documentation team engage with the documentation of the
  project among other things. They might also be involved in other aspects
  of the project, but their reviews on documentation contributions are
  considered authoritative, and can merge such contributions. To this end,
  they have permissions to merge pull requests in scikit-learn's repository.

* **Maintainers Team**
  Maintainers are community members who have shown that they are dedicated
  to the continued development of the project through ongoing engagement
  with the community. They have shown they can be trusted to maintain
  scikit-learn with care. Being a maintainer allows contributors to more
  easily carry on with their project related activities by giving them
  direct access to the project's repository. Maintainers are expected to
  review code contributions, merge approved pull requests, cast votes for
  and against merging a pull-request, and to be involved in deciding major
  changes to the API.

Technical Committee
-------------------

The Technical Committee (TC) members are maintainers who have additional
responsibilities to ensure the smooth running of the project. TC members are
expected to participate in strategic planning, and approve changes to the
governance model. The purpose of the TC is to ensure a smooth progress from
the big-picture perspective. Indeed changes that impact the full project
require a synthetic analysis and a consensus that is both explicit and
informed. In cases that the core contributor community (which includes the
TC members) fails to reach such a consensus in the required time frame, the
TC is the entity to resolve the issue. Membership of the TC is by nomination
by a core contributor.
A nomination will result in discussion which cannot take more than a month
and then a vote by the core contributors which will stay open for a week. TC
membership votes are subject to a two-third majority of all cast votes as
well as a simple majority approval of all the current TC members. TC members
who do not actively engage with the TC duties are expected to resign.

The Technical Committee of scikit-learn consists of :user:`Thomas Fan`,
:user:`Alexandre Gramfort`, :user:`Olivier Grisel`, :user:`Adrin Jalali`,
:user:`Andreas Müller`, :user:`Joel Nothman` and :user:`Gaël Varoquaux`.

Decision Making Process
=======================

Decisions about the future of the project are made through discussion with
all members of the community. All non-sensitive project management
discussion takes place on the project contributors' `mailing list`_ and the
`issue tracker`_. Occasionally, sensitive discussion occurs on a private
list.

Scikit-learn uses a "consensus seeking" process for making decisions. The
group tries to find a resolution that has no open objections among core
contributors. At any point during the discussion, any core contributor can
call for a vote, which will conclude one month from the call for the vote.
Most votes have to be backed by a :ref:`SLEP`. If no option can gather two
thirds of the votes cast, the decision is escalated to the TC, which in turn
will use consensus seeking with the fallback option of a simple majority
vote if no consensus can be found within a month. This is what we hereafter
may refer to as "**the decision making process**".
Decisions (in addition to adding core contributors and TC membership as
above) are made according to the following rules:

* **Minor code and documentation changes**, such as small maintenance
  changes without modification of code logic, typo fixes, or addition /
  correction of a sentence, but no change of the ``scikit-learn.org``
  landing page or the “about” page: Requires +1 by a core contributor, no -1
  by a core contributor (lazy consensus), happens on the issue or pull
  request page. Core contributors are expected to give “reasonable time” to
  others to give their opinion on the pull request if they're not confident
  others would agree.

* **Code changes and major documentation changes** require +1 by two core
  contributors, no -1 by a core contributor (lazy consensus), happens on the
  issue or pull-request page.

* **Changes to the API principles and changes to dependencies or supported
  versions** follow the decision-making process outlined above. In
  particular changes to API principles are backed via a :ref:`slep`. Smaller
  decisions like supported versions can happen on a GitHub issue or pull
  request.

* **Changes to the governance model** follow the process outlined in
  `SLEP020`__.

If a veto -1 vote is cast on a lazy consensus, the proposer can appeal to
the community and maintainers and the change can be approved or rejected
using the decision making procedure outlined above.

Governance Model Changes
------------------------

Governance model changes occur through an enhancement proposal or a GitHub
Pull Request. An enhancement proposal will go through "**the decision-making
process**" described in the previous section. Alternatively, an author may
propose a change directly to the governance model with a GitHub Pull
Request. Logistically, an author can open a Draft Pull Request for feedback
and follow up with a new revised Pull Request for voting.
Once that author is happy with the state of the Pull Request, they can call
for a vote on the public mailing list. During the one-month voting period,
the Pull Request can not change. A Pull Request Approval will count as a
positive vote, and a "Request Changes" review will count as a negative vote.
If two-thirds of the cast votes are positive, then the governance model
change is accepted.

.. _slep:

Enhancement proposals (SLEPs)
=============================

For all votes, a proposal must have been made public and discussed before
the vote. Such proposal must be a consolidated document, in the form of a
"Scikit-Learn Enhancement Proposal" (SLEP), rather than a long discussion on
an issue. A SLEP must be submitted as a pull-request to `enhancement
proposals`_ using the `SLEP template`_. `SLEP000`__ describes the process in
more detail.
.. _visualizations:

==============
Visualizations
==============

Scikit-learn defines a simple API for creating visualizations for machine
learning. The key feature of this API is to allow for quick plotting and
visual adjustments without recalculation. We provide `Display` classes that
expose two methods for creating plots: `from_estimator` and
`from_predictions`.

The `from_estimator` method generates a `Display` object from a fitted
estimator, input data (`X`, `y`), and a plot. The `from_predictions` method
creates a `Display` object from true and predicted values (`y_test`,
`y_pred`), and a plot. Using `from_predictions` avoids having to recompute
predictions, but the user needs to take care that the prediction values
passed correspond to the `pos_label`. For :term:`predict_proba`, select the
column corresponding to the `pos_label` class while for
:term:`decision_function`, revert the score (i.e. multiply by -1) if
`pos_label` is not the last class in the `classes_` attribute of your
estimator.

The `Display` object stores the computed values (e.g., metric values or
feature importance) required for plotting with Matplotlib. These values are
the results derived from the raw predictions passed to `from_predictions`,
or an estimator and `X` passed to `from_estimator`.

Display objects have a plot method that creates a matplotlib plot once the
display object has been initialized (note that we recommend that display
objects are created via `from_estimator` or `from_predictions` instead of
initialized directly). The plot method allows adding to an existing plot by
passing the existing plot's :class:`matplotlib.axes.Axes` to the `ax`
parameter.

In the following example, we plot a ROC curve for a fitted Logistic
Regression model using `from_estimator`:
.. plot::
   :context: close-figs
   :align: center

   from sklearn.model_selection import train_test_split
   from sklearn.linear_model import LogisticRegression
   from sklearn.metrics import RocCurveDisplay
   from sklearn.datasets import load_iris

   X, y = load_iris(return_X_y=True)
   y = y == 2  # make binary
   X_train, X_test, y_train, y_test = train_test_split(
       X, y, test_size=.8, random_state=42
   )
   clf = LogisticRegression(random_state=42, C=.01)
   clf.fit(X_train, y_train)
   clf_disp = RocCurveDisplay.from_estimator(clf, X_test, y_test)

If you already have the prediction values, you could instead use
`from_predictions` to do the same thing (and save on compute):

.. plot::
   :context: close-figs
   :align: center

   from sklearn.model_selection import train_test_split
   from sklearn.linear_model import LogisticRegression
   from sklearn.metrics import RocCurveDisplay
   from sklearn.datasets import load_iris

   X, y = load_iris(return_X_y=True)
   y = y == 2  # make binary
   X_train, X_test, y_train, y_test = train_test_split(
       X, y, test_size=.8, random_state=42
   )
   clf = LogisticRegression(random_state=42, C=.01)
   clf.fit(X_train, y_train)

   # select the probability of the class that we considered to be the
   # positive label
   y_pred = clf.predict_proba(X_test)[:, 1]
   clf_disp = RocCurveDisplay.from_predictions(y_test, y_pred)

The returned `clf_disp` object allows us to add another curve to the already
computed ROC curve. In this case, the `clf_disp` is a
:class:`~sklearn.metrics.RocCurveDisplay` that stores the computed values as
attributes called `roc_auc`, `fpr`, and `tpr`.

Next, we train a random forest classifier and plot the previously computed
ROC curve again by using the `plot` method of the `Display` object.
.. plot::
   :context: close-figs
   :align: center

   import matplotlib.pyplot as plt

   from sklearn.ensemble import RandomForestClassifier

   rfc = RandomForestClassifier(n_estimators=10, random_state=42)
   rfc.fit(X_train, y_train)

   ax = plt.gca()
   rfc_disp = RocCurveDisplay.from_estimator(
       rfc, X_test, y_test, ax=ax, curve_kwargs={"alpha": 0.8}
   )
   clf_disp.plot(ax=ax, curve_kwargs={"alpha": 0.8})

Notice that we pass `alpha=0.8` to the plot functions to adjust the alpha
values of the curves.

.. rubric:: Examples

* :ref:`sphx_glr_auto_examples_miscellaneous_plot_roc_curve_visualization_api.py`
* :ref:`sphx_glr_auto_examples_miscellaneous_plot_partial_dependence_visualization_api.py`
* :ref:`sphx_glr_auto_examples_miscellaneous_plot_display_object_visualization.py`
* :ref:`sphx_glr_auto_examples_calibration_plot_compare_calibration.py`

Available Plotting Utilities
============================

Display Objects
---------------

.. currentmodule:: sklearn

.. autosummary::

   calibration.CalibrationDisplay
   inspection.PartialDependenceDisplay
   inspection.DecisionBoundaryDisplay
   metrics.ConfusionMatrixDisplay
   metrics.DetCurveDisplay
   metrics.PrecisionRecallDisplay
   metrics.PredictionErrorDisplay
   metrics.RocCurveDisplay
   model_selection.LearningCurveDisplay
   model_selection.ValidationCurveDisplay
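To make the `fpr` / `tpr` values stored by the ROC displays above concrete,
here is an illustrative pure-Python sketch of how such points arise from
scores. This is not the scikit-learn implementation; in practice use
:func:`sklearn.metrics.roc_curve`, which additionally drops collinear
points::

```python
def roc_points(y_true, scores):
    """Sweep a decision threshold over the scores, recording one
    (fpr, tpr) point per sample from most to least confident."""
    pos = sum(y_true)
    neg = len(y_true) - pos
    tp = fp = 0
    fpr, tpr = [0.0], [0.0]
    for score, label in sorted(zip(scores, y_true), reverse=True):
        if label:
            tp += 1  # this sample is now predicted positive and is positive
        else:
            fp += 1  # predicted positive but actually negative
        fpr.append(fp / neg)
        tpr.append(tp / pos)
    return fpr, tpr


fpr, tpr = roc_points([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])
```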
https://github.com/scikit-learn/scikit-learn/blob/main//doc/visualizations.rst
.. _related_projects:

=====================================
Related Projects
=====================================

Projects implementing the scikit-learn estimator API are encouraged to use
the `scikit-learn-contrib template`_ which facilitates best practices for
testing and documenting estimators. The `scikit-learn-contrib GitHub
organization`_ also accepts high-quality contributions of repositories
conforming to this template.

Below is a list of sister-projects, extensions and domain specific packages.

Interoperability and framework enhancements
-------------------------------------------

These tools adapt scikit-learn for use with other technologies or otherwise
enhance the functionality of scikit-learn's estimators.

**Auto-ML**

- `auto-sklearn`_
  An automated machine learning toolkit and a drop-in replacement for a
  scikit-learn estimator.
- `autoviml`_
  Automatically Build Multiple Machine Learning Models with a Single Line of
  Code. Designed as a faster way to use scikit-learn models without having
  to preprocess data.
- `TPOT`_
  An automated machine learning toolkit that optimizes a series of
  scikit-learn operators to design a machine learning pipeline, including
  data and feature preprocessors as well as the estimators. Works as a
  drop-in replacement for a scikit-learn estimator.
- `Featuretools`_
  A framework to perform automated feature engineering. It can be used for
  transforming temporal and relational datasets into feature matrices for
  machine learning.
- `EvalML`_
  An AutoML library which builds, optimizes, and evaluates machine learning
  pipelines using domain-specific objective functions. It incorporates
  multiple modeling libraries under one API, and the objects that EvalML
  creates use an sklearn-compatible API.
- `MLJAR AutoML`_
  A Python package for AutoML on Tabular Data with Feature Engineering,
  Hyper-Parameters Tuning, Explanations and Automatic Documentation.
**Experimentation and model registry frameworks**

- `MLFlow`_
  An open source platform to manage the ML lifecycle, including
  experimentation, reproducibility, deployment, and a central model
  registry.
- `Neptune`_
  A metadata store for MLOps, built for teams that run a lot of experiments.
  It gives you a single place to log, store, display, organize, compare, and
  query all your model building metadata.
- `Sacred`_
  A tool to help you configure, organize, log and reproduce experiments.
- `Scikit-Learn Laboratory`_
  A command-line wrapper around scikit-learn that makes it easy to run
  machine learning experiments with multiple learners and large feature
  sets.

**Model inspection and visualization**

- `dtreeviz`_
  A Python library for decision tree visualization and model interpretation.
- `model-diagnostics`_
  Tools for diagnostics and assessment of (machine learning) models (in
  Python).
- `sklearn-evaluation`_
  Machine learning model evaluation made easy: plots, tables, HTML reports,
  experiment tracking and Jupyter notebook analysis. Visual analysis, model
  selection, evaluation and diagnostics.
- `yellowbrick`_
  A suite of custom matplotlib visualizers for scikit-learn estimators to
  support visual feature analysis, model selection, evaluation, and
  diagnostics.

**Model export for production**

- `sklearn-onnx`_
  Serialization of many Scikit-learn pipelines to `ONNX`_ for interchange
  and prediction.
- `skops.io`__
  A persistence model more secure than pickle, which can be used instead of
  pickle in most common cases.
- `sklearn2pmml`_
  Serialization of a wide variety of scikit-learn estimators and
  transformers into PMML with the help of the `JPMML-SkLearn`_ library.
- `treelite`_
  Compiles tree-based ensemble models into C code for minimizing prediction
  latency.
- `emlearn`_
  Implements scikit-learn estimators in C99 for embedded devices and
  microcontrollers. Supports several classifier, regression and outlier
  detection models.
\*\*Model throughput\*\* - `Intel(R) Extension for scikit-learn `\_ Mostly on high end Intel(R) hardware, accelerates some scikit-learn models for both training and inference under certain circumstances. This project is maintained by Intel(R) and scikit-learn's maintainers are not involved in the development of this project. Also note that in some cases using the tools and estimators under ``scikit-learn-intelex`` would give different results than ``scikit-learn`` itself. If you encounter issues while using this project, make sure you report potential
  issues in their respective repositories.

**Interface to R with genomic applications**

- `BiocSklearn `_
  Exposes a small number of dimension reduction facilities as an
  illustration of the basilisk protocol for interfacing Python with R.
  Intended as a springboard for more complete interop.

Other estimators and tasks
--------------------------

Not everything belongs to or is mature enough for the central scikit-learn
project. The following are projects providing interfaces similar to
scikit-learn for additional learning algorithms, infrastructures and tasks.

**Time series and forecasting**

- `aeon `_
  A scikit-learn compatible toolbox for machine learning with time series
  (fork of `sktime`_).

- `Darts `_
  A Python library for user-friendly forecasting and anomaly detection on
  time series. It contains a variety of models, from classics such as ARIMA
  to deep neural networks. The forecasting models can all be used in the
  same way, using fit() and predict() functions, similar to scikit-learn.

- `sktime `_
  A scikit-learn compatible toolbox for machine learning with time series,
  including time series classification/regression and (supervised/panel)
  forecasting.

- `skforecast `_
  A Python library that eases using scikit-learn regressors as multi-step
  forecasters. It also works with any regressor compatible with the
  scikit-learn API.

- `tslearn `_
  A machine learning library for time series that offers tools for
  pre-processing and feature extraction as well as dedicated models for
  clustering, classification and regression.

**Gradient (tree) boosting**

Note that scikit-learn has its own modern gradient boosting estimators,
:class:`~sklearn.ensemble.HistGradientBoostingClassifier` and
:class:`~sklearn.ensemble.HistGradientBoostingRegressor`.

- `XGBoost `_
  XGBoost is an optimized distributed gradient boosting library designed to
  be highly efficient, flexible and portable.

- `LightGBM `_
  LightGBM is a gradient boosting framework that uses tree based learning
  algorithms. It is designed to be distributed and efficient.

**Structured learning**

- `HMMLearn `_
  Implementation of hidden Markov models that was previously part of
  scikit-learn.

- `pomegranate `_
  Probabilistic modelling for Python, with an emphasis on hidden Markov
  models.

**Deep neural networks etc.**

- `skorch `_
  A scikit-learn compatible neural network library that wraps PyTorch.

- `scikeras `_
  Provides a wrapper around Keras to interface it with scikit-learn.
  SciKeras is the successor of `tf.keras.wrappers.scikit_learn`.

**Federated Learning**

- `Flower `_
  A friendly federated learning framework with a unified approach that can
  federate any workload, any ML framework, and any programming language.

**Privacy Preserving Machine Learning**

- `Concrete ML `_
  A privacy preserving ML framework built on top of `Concrete `_, with
  bindings to traditional ML frameworks, thanks to fully homomorphic
  encryption. APIs of so-called Concrete ML built-in models are very close
  to scikit-learn APIs.

**Broad scope**

- `mlxtend `_
  Includes a number of additional estimators as well as model visualization
  utilities.

- `scikit-lego `_
  A number of scikit-learn compatible custom transformers, models and
  metrics, focusing on solving practical industry tasks.

**Other regression and classification**

- `gplearn `_
  Genetic Programming for symbolic regression tasks.

- `scikit-multilearn `_
  Multi-label classification with focus on label space manipulation.

**Decomposition and clustering**

- `lda `_
  Fast implementation of latent Dirichlet allocation in Cython which uses
  `Gibbs sampling `_ to sample from the true posterior distribution.
  (scikit-learn's :class:`~sklearn.decomposition.LatentDirichletAllocation`
  implementation uses `variational inference `_ to sample from a tractable
  approximation of a topic model's posterior distribution.)

- `kmodes `_
  k-modes clustering algorithm for categorical data, and several of its
  variations.

- `hdbscan `_
  HDBSCAN and Robust Single Linkage clustering algorithms for robust
  variable density clustering. As of scikit-learn version 1.3.0, there is
  :class:`~sklearn.cluster.HDBSCAN`.

**Pre-processing**

- `categorical-encoding `_
  A library of sklearn compatible categorical variable encoders. As of
  scikit-learn version 1.3.0, there is
  :class:`~sklearn.preprocessing.TargetEncoder`.

- `skrub `_
  Facilitates learning on dataframes, with sklearn compatible encoders (of
  categories, dates, strings) and more.

- `imbalanced-learn `_
  Various methods to under- and over-sample datasets.

- `Feature-engine `_
  A library of sklearn compatible transformers for missing data imputation,
  categorical encoding, variable transformation, discretization, outlier
  handling and more. Feature-engine allows the application of preprocessing
  steps to selected groups of variables and it is fully compatible with the
  scikit-learn Pipeline.

**Topological Data Analysis**

- `giotto-tda `_
  A library for `Topological Data Analysis `_ aiming to provide a
  scikit-learn compatible API. It offers tools to transform data inputs
  (point clouds, graphs, time series, images) into forms suitable for
  computations of topological summaries, and components dedicated to
  extracting sets of scalar features of topological origin, which can be
  used alongside other feature extraction methods in scikit-learn.

Statistical learning with Python
--------------------------------

Other packages useful for data analysis and machine learning.

- `Pandas `_
  Tools for working with heterogeneous and columnar data, relational
  queries, time series and basic statistics.

- `statsmodels `_
  Estimating and analysing statistical models. More focused on statistical
  tests and less on prediction than scikit-learn.

- `PyMC `_
  Bayesian statistical models and fitting algorithms.

- `Seaborn `_
  A visualization library based on matplotlib. It provides a high-level
  interface for drawing attractive statistical graphics.

- `scikit-survival `_
  A library implementing models to learn from censored time-to-event data
  (also called survival analysis). Models are fully compatible with
  scikit-learn.

Recommendation Engine packages
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- `implicit `_
  Library for implicit feedback datasets.

- `lightfm `_
  A Python/Cython implementation of a hybrid recommender system.

- `Surprise Lib `_
  Library for explicit feedback datasets.

Domain specific packages
~~~~~~~~~~~~~~~~~~~~~~~~

- `scikit-network `_
  Machine learning on graphs.

- `scikit-image `_
  Image processing and computer vision in Python.

- `Natural language toolkit (nltk) `_
  Natural language processing and some machine learning.

- `gensim `_
  A library for topic modelling, document indexing and similarity
  retrieval.

- `NiLearn `_
  Machine learning for neuro-imaging.

- `AstroML `_
  Machine learning for astronomy.

Translations of scikit-learn documentation
------------------------------------------

Translations exist to ease reading and understanding in languages other
than English. They aim to help people who do not understand English or have
doubts about its interpretation. Additionally, some people prefer to read
documentation in their native language, but please bear in mind that the
only official documentation is the English one [#f1]_.

Those translation efforts are community initiatives and we have no control
over them. If you want to contribute or report an issue with a translation,
please contact the authors of that translation. Some available translations
are linked here to improve their dissemination and promote community
efforts.

- `Chinese translation `_ (`source `__)
- `Persian translation `_ (`source `__)
- `Spanish translation `_ (`source `__)
- `Korean translation `_ (`source `__)

.. rubric:: Footnotes

.. [#f1] following the `linux documentation Disclaimer `__
.. currentmodule:: sklearn

.. _metadata_routing:

Metadata Routing
================

.. note::
  The Metadata Routing API is experimental, and is not yet implemented for
  all estimators. Please refer to the :ref:`list of supported and
  unsupported models ` for more information. It may change without the
  usual deprecation cycle. By default this feature is not enabled. You can
  enable it by setting the ``enable_metadata_routing`` flag to ``True``::

    >>> import sklearn
    >>> sklearn.set_config(enable_metadata_routing=True)

  Note that the methods and requirements introduced in this document are
  only relevant if you want to pass :term:`metadata` (e.g.
  ``sample_weight``) to a method. If you're only passing ``X`` and ``y``
  and no other parameter / metadata to methods such as :term:`fit`,
  :term:`transform`, etc., then you don't need to set anything.

This guide demonstrates how :term:`metadata` can be routed and passed
between objects in scikit-learn. If you are developing a scikit-learn
compatible estimator or meta-estimator, you can check our related developer
guide: :ref:`sphx_glr_auto_examples_miscellaneous_plot_metadata_routing.py`.

Metadata is data that an estimator, scorer, or CV splitter takes into
account if the user explicitly passes it as a parameter. For instance,
:class:`~cluster.KMeans` accepts `sample_weight` in its `fit()` method and
considers it to calculate its centroids. `classes` are consumed by some
classifiers and `groups` are used in some splitters, but any data that is
passed into an object's methods apart from X and y can be considered as
metadata. Prior to scikit-learn version 1.3, there was no single API for
passing metadata like that if these objects were used in conjunction with
other objects, e.g. a scorer accepting `sample_weight` inside a
:class:`~model_selection.GridSearchCV`.

With the Metadata Routing API, we can transfer metadata to estimators,
scorers, and CV splitters using :term:`meta-estimators` (such as
:class:`~pipeline.Pipeline` or :class:`~model_selection.GridSearchCV`) or
functions such as :func:`~model_selection.cross_validate` which route data
to other objects. In order to pass metadata to a method like ``fit`` or
``score``, the object consuming the metadata must *request* it. This is
done via `set_{method}_request()` methods, where `{method}` is substituted
by the name of the method that requests the metadata. For instance,
estimators that use the metadata in their `fit()` method would use
`set_fit_request()`, and scorers would use `set_score_request()`. These
methods allow us to specify which metadata to request, for instance
`set_fit_request(sample_weight=True)`.

For grouped splitters such as :class:`~model_selection.GroupKFold`, a
``groups`` parameter is requested by default. This is best demonstrated by
the following examples.

Usage Examples
**************

Here we present a few examples to show some common use-cases. Our goal is
to pass `sample_weight` and `groups` through
:func:`~model_selection.cross_validate`, which routes the metadata to
:class:`~linear_model.LogisticRegressionCV` and to a custom scorer made
with :func:`~metrics.make_scorer`, both of which *can* use the metadata in
their methods. In these examples we want to individually set whether to use
the metadata within the different :term:`consumers `.

The examples in this section require the following imports and data::

  >>> import numpy as np
  >>> from sklearn.metrics import make_scorer, accuracy_score
  >>> from sklearn.linear_model import LogisticRegressionCV, LogisticRegression
  >>> from sklearn.model_selection import cross_validate, GridSearchCV, GroupKFold
  >>> from sklearn.feature_selection import SelectKBest
  >>> from sklearn.pipeline import make_pipeline
  >>> n_samples, n_features = 100, 4
  >>> rng = np.random.RandomState(42)
  >>> X = rng.rand(n_samples, n_features)
  >>> y = rng.randint(0, 2, size=n_samples)
  >>> my_groups = rng.randint(0, 10, size=n_samples)
  >>> my_weights = rng.rand(n_samples)
  >>> my_other_weights = rng.rand(n_samples)

Weighted scoring and fitting
----------------------------

The splitter used internally in
:class:`~linear_model.LogisticRegressionCV`,
:class:`~model_selection.GroupKFold`, requests ``groups`` by default.
However, we need to explicitly request `sample_weight` for it and for our
custom scorer by specifying `sample_weight=True` in
:class:`~linear_model.LogisticRegressionCV`'s `set_fit_request()` method
and in :func:`~metrics.make_scorer`'s `set_score_request()` method. Both
:term:`consumers ` know how to use ``sample_weight`` in their `fit()` or
`score()` methods. We can then pass the metadata in
:func:`~model_selection.cross_validate` which will route it to any active
consumers::

  >>> weighted_acc = make_scorer(accuracy_score).set_score_request(sample_weight=True)
  >>> lr = LogisticRegressionCV(
  ...     cv=GroupKFold(),
  ...     scoring=weighted_acc,
  ...     use_legacy_attributes=False,
  ... ).set_fit_request(sample_weight=True)
  >>> cv_results = cross_validate(
  ...     lr,
  ...     X,
  ...     y,
  ...     params={"sample_weight": my_weights, "groups": my_groups},
  ...     cv=GroupKFold(),
  ...     scoring=weighted_acc,
  ... )

Note that in this example, :func:`~model_selection.cross_validate` routes
``my_weights`` to both the scorer and
:class:`~linear_model.LogisticRegressionCV`.

If we were to pass `sample_weight` in the params of
:func:`~model_selection.cross_validate`, but not set any object to request
it, `UnsetMetadataPassedError` would be raised, hinting to us that we need
to explicitly set where to route it. The same applies if
``params={"sample_weights": my_weights, ...}`` were passed (note the typo,
i.e. ``weights`` instead of ``weight``), since ``sample_weights`` was not
requested by any of its underlying objects.

Weighted scoring and unweighted fitting
---------------------------------------

When passing metadata such as ``sample_weight`` into a :term:`router`
(:term:`meta-estimators` or routing function), all ``sample_weight``
:term:`consumers ` require weights to be either explicitly requested or
explicitly not requested (i.e. ``True`` or ``False``). Thus, to perform an
unweighted fit, we need to configure
:class:`~linear_model.LogisticRegressionCV` to not request sample weights,
so that :func:`~model_selection.cross_validate` does not pass the weights
along::

  >>> weighted_acc = make_scorer(accuracy_score).set_score_request(sample_weight=True)
  >>> lr = LogisticRegressionCV(
  ...     cv=GroupKFold(), scoring=weighted_acc, use_legacy_attributes=False
  ... ).set_fit_request(sample_weight=False)
  >>> cv_results = cross_validate(
  ...     lr,
  ...     X,
  ...     y,
  ...     cv=GroupKFold(),
  ...     params={"sample_weight": my_weights, "groups": my_groups},
  ...     scoring=weighted_acc,
  ... )

If :meth:`linear_model.LogisticRegressionCV.set_fit_request` had not been
called, :func:`~model_selection.cross_validate` would raise an error
because ``sample_weight`` is passed but
:class:`~linear_model.LogisticRegressionCV` would not be explicitly
configured to recognize the weights.

Unweighted feature selection
----------------------------

Routing metadata is only possible if the object's method knows how to use
the metadata, which in most cases means it has the metadata as an explicit
parameter. Only then can we set request values for metadata using
`set_fit_request(sample_weight=True)`, for instance. This makes the object
a :term:`consumer `.

Unlike :class:`~linear_model.LogisticRegressionCV`,
:class:`~feature_selection.SelectKBest` can't consume weights, therefore no
request value for ``sample_weight`` is set on its instance and
``sample_weight`` is not routed to it::

  >>> weighted_acc = make_scorer(accuracy_score).set_score_request(sample_weight=True)
  >>> lr = LogisticRegressionCV(
  ...     cv=GroupKFold(), scoring=weighted_acc, use_legacy_attributes=False
  ... ).set_fit_request(sample_weight=True)
  >>> sel = SelectKBest(k=2)
  >>> pipe = make_pipeline(sel, lr)
  >>> cv_results = cross_validate(
  ...     pipe,
  ...     X,
  ...     y,
  ...     cv=GroupKFold(),
  ...     params={"sample_weight": my_weights, "groups": my_groups},
  ...     scoring=weighted_acc,
  ... )

Different scoring and fitting weights
-------------------------------------

Despite :func:`~metrics.make_scorer` and
:class:`~linear_model.LogisticRegressionCV` both expecting the key
``sample_weight``, we can use aliases to pass different weights to
different consumers. In this example, we pass ``scoring_weight`` to the
scorer, and ``fitting_weight`` to
:class:`~linear_model.LogisticRegressionCV`::

  >>> weighted_acc = make_scorer(accuracy_score).set_score_request(
  ...     sample_weight="scoring_weight"
  ... )
  >>> lr = LogisticRegressionCV(
  ...     cv=GroupKFold(), scoring=weighted_acc, use_legacy_attributes=False
  ... ).set_fit_request(sample_weight="fitting_weight")
  >>> cv_results = cross_validate(
  ...     lr,
  ...     X,
  ...     y,
  ...     cv=GroupKFold(),
  ...     params={
  ...         "scoring_weight": my_weights,
  ...         "fitting_weight": my_other_weights,
  ...         "groups": my_groups,
  ...     },
  ...     scoring=weighted_acc,
  ... )

API Interface
*************

A :term:`consumer` is an object (estimator, meta-estimator, scorer,
splitter) which accepts and uses some :term:`metadata` in at least one of
its methods (for instance ``fit``, ``predict``, ``inverse_transform``,
``transform``, ``score``, ``split``). Meta-estimators which only forward
the metadata to other objects (child estimators, scorers, or splitters) and
don't use the metadata themselves are not consumers. (Meta-)Estimators
which route metadata to other objects are :term:`routers `. A
(meta-)estimator can be a :term:`consumer` and a :term:`router` at the same
time.

(Meta-)Estimators and splitters expose a `set_{method}_request` method for
each method which accepts at least one metadata. For instance, if an
estimator supports ``sample_weight`` in ``fit`` and ``score``,
it exposes ``estimator.set_fit_request(sample_weight=value)`` and
``estimator.set_score_request(sample_weight=value)``. Here ``value`` can
be:

- ``True``: method requests a ``sample_weight``. This means if the metadata
  is provided, it will be used, otherwise no error is raised.
- ``False``: method does not request a ``sample_weight``.
- ``None``: router will raise an error if ``sample_weight`` is passed. This
  is in almost all cases the default value when an object is instantiated
  and ensures the user sets the metadata requests explicitly when a
  metadata is passed. The only exception are ``Group*Fold`` splitters.
- ``"param_name"``: alias for ``sample_weight`` if we want to pass
  different weights to different consumers. If aliasing is used, the
  meta-estimator should not forward ``"param_name"`` to the consumer, but
  ``sample_weight`` instead, because the consumer will expect a param
  called ``sample_weight``. This means the mapping between the metadata
  required by the object, e.g. ``sample_weight``, and the variable name
  provided by the user, e.g. ``my_weights``, is done at the router level,
  and not by the consuming object itself.

Metadata are requested in the same way for scorers using
``set_score_request``.

If a metadata, e.g. ``sample_weight``, is passed by the user, the metadata
request for all objects which potentially can consume ``sample_weight``
should be set by the user, otherwise an error is raised by the router
object. For example, the following code raises an error, since it hasn't
been explicitly specified whether ``sample_weight`` should be passed to the
estimator's scorer or not::

  >>> param_grid = {"C": [0.1, 1]}
  >>> lr = LogisticRegression().set_fit_request(sample_weight=True)
  >>> try:
  ...     GridSearchCV(
  ...         estimator=lr, param_grid=param_grid
  ...     ).fit(X, y, sample_weight=my_weights)
  ... except ValueError as e:
  ...     print(e)
  [sample_weight] are passed but are not explicitly set as requested or not
  requested for LogisticRegression.score, which is used within
  GridSearchCV.fit. Call `LogisticRegression.set_score_request({metadata}=True/False)`
  for each metadata you want to request/ignore. See the Metadata Routing
  User guide for more information.

The issue can be fixed by explicitly setting the request value::

  >>> lr = LogisticRegression().set_fit_request(
  ...     sample_weight=True
  ... ).set_score_request(sample_weight=False)

At the end of the **Usage Examples** section, we disable the configuration
flag for metadata routing::

  >>> sklearn.set_config(enable_metadata_routing=False)

.. _metadata_routing_models:

Metadata Routing Support Status
*******************************

All consumers (i.e. simple estimators which only consume metadata and don't
route them) support metadata routing, meaning they can be used inside
meta-estimators which support metadata routing. However, development of
support for metadata routing for meta-estimators is in progress, and here
is a list of meta-estimators and tools which support and don't yet support
metadata routing.

Meta-estimators and functions supporting metadata routing:

- :class:`sklearn.calibration.CalibratedClassifierCV`
- :class:`sklearn.compose.ColumnTransformer`
- :class:`sklearn.compose.TransformedTargetRegressor`
- :class:`sklearn.covariance.GraphicalLassoCV`
- :class:`sklearn.ensemble.StackingClassifier`
- :class:`sklearn.ensemble.StackingRegressor`
- :class:`sklearn.ensemble.VotingClassifier`
- :class:`sklearn.ensemble.VotingRegressor`
- :class:`sklearn.ensemble.BaggingClassifier`
- :class:`sklearn.ensemble.BaggingRegressor`
- :class:`sklearn.feature_selection.RFE`
- :class:`sklearn.feature_selection.RFECV`
- :class:`sklearn.feature_selection.SelectFromModel`
- :class:`sklearn.feature_selection.SequentialFeatureSelector`
- :class:`sklearn.impute.IterativeImputer`
- :class:`sklearn.linear_model.ElasticNetCV`
- :class:`sklearn.linear_model.LarsCV`
- :class:`sklearn.linear_model.LassoCV`
- :class:`sklearn.linear_model.LassoLarsCV`
- :class:`sklearn.linear_model.LogisticRegressionCV`
- :class:`sklearn.linear_model.MultiTaskElasticNetCV`
- :class:`sklearn.linear_model.MultiTaskLassoCV`
- :class:`sklearn.linear_model.OrthogonalMatchingPursuitCV`
- :class:`sklearn.linear_model.RANSACRegressor`
- :class:`sklearn.linear_model.RidgeClassifierCV`
- :class:`sklearn.linear_model.RidgeCV`
- :class:`sklearn.model_selection.GridSearchCV`
- :class:`sklearn.model_selection.HalvingGridSearchCV`
- :class:`sklearn.model_selection.HalvingRandomSearchCV`
- :class:`sklearn.model_selection.RandomizedSearchCV`
- :func:`sklearn.model_selection.permutation_test_score`
- :func:`sklearn.model_selection.cross_validate`
- :func:`sklearn.model_selection.cross_val_score`
- :func:`sklearn.model_selection.cross_val_predict`
- :func:`sklearn.model_selection.learning_curve`
- :func:`sklearn.model_selection.validation_curve`
- :class:`sklearn.multiclass.OneVsOneClassifier`
- :class:`sklearn.multiclass.OneVsRestClassifier`
- :class:`sklearn.multiclass.OutputCodeClassifier`
- :class:`sklearn.multioutput.ClassifierChain`
- :class:`sklearn.multioutput.MultiOutputClassifier`
- :class:`sklearn.multioutput.MultiOutputRegressor`
- :class:`sklearn.multioutput.RegressorChain`
- :class:`sklearn.pipeline.FeatureUnion`
- :class:`sklearn.pipeline.Pipeline`
- :class:`sklearn.semi_supervised.SelfTrainingClassifier`

Meta-estimators and tools not supporting metadata routing yet:

- :class:`sklearn.ensemble.AdaBoostClassifier`
- :class:`sklearn.ensemble.AdaBoostRegressor`
- Mathieu Blondel
- Joris Van den Bossche
- Matthieu Brucher
- Lars Buitinck
- David Cournapeau
- Noel Dawe
- Vincent Dubourg
- Edouard Duchesnay
- Alexander Fabisch
- Virgile Fritsch
- Satrajit Ghosh
- Angel Soler Gollonet
- Chris Gorgolewski
- Jaques Grobler
- Yaroslav Halchenko
- Brian Holt
- Nicolas Hug
- Arnaud Joly
- Thouis (Ray) Jones
- Kyle Kastner
- Manoj Kumar
- Robert Layton
- Wei Li
- Paolo Losi
- Gilles Louppe
- Jan Hendrik Metzen
- Vincent Michel
- Jarrod Millman
- Vlad Niculae
- Alexandre Passos
- Fabian Pedregosa
- Peter Prettenhofer
- Hanmin Qin
- (Venkat) Raghav, Rajagopalan
- Jacob Schreiber
- 杜世橋 Du Shiqiao
- Bertrand Thirion
- Tom Dupré la Tour
- Jake Vanderplas
- Nelle Varoquaux
- David Warde-Farley
- Ron Weiss
- Roman Yurchak
.. _external_resources:

===========================================
External Resources, Videos and Talks
===========================================

The scikit-learn MOOC
=====================

If you are new to scikit-learn, or looking to strengthen your
understanding, we highly recommend the **scikit-learn MOOC (Massive Open
Online Course)**.

The MOOC, created and maintained by some of the scikit-learn
core-contributors, is **free of charge** and is designed to help learners
of all levels master machine learning using scikit-learn. It covers topics
from the fundamental machine learning concepts to more advanced areas like
predictive modeling pipelines and model evaluation.

The course materials are available on the `scikit-learn MOOC website `_.

This course is also hosted on the `FUN platform `_, which additionally
makes the content interactive without the need to install anything, and
gives access to a discussion forum.

The videos are available on the `Inria Learning Lab channel `_ in a
`playlist `__.

.. _videos:

Videos
======

- The `scikit-learn YouTube channel `_ features a `playlist `__ of videos
  showcasing talks by maintainers and community members.

New to Scientific Python?
=========================

For those that are still new to the scientific Python ecosystem, we highly
recommend the `Python Scientific Lecture Notes `_. This will help you find
your footing a bit and will definitely improve your scikit-learn
experience. A basic understanding of NumPy arrays is recommended to make
the most of scikit-learn.

External Tutorials
==================

There are several online tutorials available which are geared toward
specific subject areas:

- `Machine Learning for NeuroImaging in Python `_
- `Machine Learning for Astronomical Data Analysis `_
.. _data-transforms:

Dataset transformations
-----------------------

scikit-learn provides a library of transformers, which may clean (see
:ref:`preprocessing`), reduce (see :ref:`data_reduction`), expand (see
:ref:`kernel_approximation`) or generate (see :ref:`feature_extraction`)
feature representations.

Like other estimators, these are represented by classes with a ``fit``
method, which learns model parameters (e.g. mean and standard deviation for
normalization) from a training set, and a ``transform`` method which
applies this transformation model to unseen data. ``fit_transform`` may be
more convenient and efficient for modelling and transforming the training
data simultaneously.

Combining such transformers, either in parallel or series, is covered in
:ref:`combining_estimators`. :ref:`metrics` covers transforming feature
spaces into affinity matrices, while :ref:`preprocessing_targets` considers
transformations of the target space (e.g. categorical labels) for use in
scikit-learn.

.. toctree::
    :maxdepth: 2

    modules/compose
    modules/feature_extraction
    modules/preprocessing
    modules/impute
    modules/unsupervised_reduction
    modules/random_projection
    modules/kernel_approximation
    modules/metrics
    modules/preprocessing_targets
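The ``fit``/``transform`` contract described above can be illustrated with any standard transformer. Here is a minimal sketch using :class:`~sklearn.preprocessing.StandardScaler`; the tiny arrays are made up purely for illustration:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Toy data, made up for illustration.
X_train = np.array([[0.0, 1.0], [2.0, 3.0], [4.0, 5.0]])
X_test = np.array([[2.0, 3.0]])

scaler = StandardScaler()
scaler.fit(X_train)                         # learns mean_ and scale_ from the training set
X_train_scaled = scaler.transform(X_train)  # applies the learned transformation
X_test_scaled = scaler.transform(X_test)    # same statistics applied to unseen data

# fit_transform combines both steps on the training data.
X_alt = scaler.fit_transform(X_train)
print(np.allclose(X_train_scaled, X_alt))  # True
```

Because the statistics are learned only on the training set, the same transformation can be applied consistently to any later data, which is what makes transformers composable inside pipelines.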
.. \_datasets:

=========================
Dataset loading utilities
=========================

.. currentmodule:: sklearn.datasets

The ``sklearn.datasets`` package embeds some small toy datasets and provides helpers to fetch larger datasets commonly used by the machine learning community to benchmark algorithms on data that comes from the 'real world'. To evaluate the impact of the scale of the dataset (``n\_samples`` and ``n\_features``) while controlling the statistical properties of the data (typically the correlation and informativeness of the features), it is also possible to generate synthetic data.

\*\*General dataset API.\*\* There are three main kinds of dataset interfaces that can be used to get datasets depending on the desired type of dataset.

\*\*The dataset loaders.\*\* They can be used to load small standard datasets, described in the :ref:`toy\_datasets` section.

\*\*The dataset fetchers.\*\* They can be used to download and load larger datasets, described in the :ref:`real\_world\_datasets` section.

Both loaders and fetchers functions return a :class:`~sklearn.utils.Bunch` object holding at least two items: an array of shape ``n\_samples`` \* ``n\_features`` with key ``data`` (except for 20newsgroups) and a numpy array of length ``n\_samples``, containing the target values, with key ``target``. The Bunch object is a dictionary that exposes its keys as attributes. For more information about Bunch objects, see :class:`~sklearn.utils.Bunch`. It's also possible for almost all of these functions to constrain the output to be a tuple containing only the data and the target, by setting the ``return\_X\_y`` parameter to ``True``.

The datasets also contain a full description in their ``DESCR`` attribute and some contain ``feature\_names`` and ``target\_names``. See the dataset descriptions below for details.

\*\*The dataset generation functions.\*\* They can be used to generate controlled synthetic datasets, described in the :ref:`sample\_generators` section. These functions return a tuple ``(X, y)`` consisting of a ``n\_samples`` \* ``n\_features`` numpy array ``X`` and an array of length ``n\_samples`` containing the targets ``y``.

In addition, there are also miscellaneous tools to load datasets of other formats or from other locations, described in the :ref:`loading\_other\_datasets` section.

.. toctree::
   :maxdepth: 2

   datasets/toy\_dataset
   datasets/real\_world
   datasets/sample\_generators
   datasets/loading\_other\_datasets
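As an illustration of the Bunch interface and the ``return_X_y`` switch described above (using the built-in iris loader; the shapes and names below are standard for that dataset):

```python
from sklearn.datasets import load_iris

# Loaders return a Bunch: a dict whose keys are also attributes
iris = load_iris()
print(iris.data.shape)    # (150, 4) -- n_samples x n_features
print(iris.target.shape)  # (150,)
print(iris.target_names)  # the three class names
print(iris.DESCR[:40])    # full dataset description lives in DESCR

# return_X_y=True returns only the (data, target) tuple instead of a Bunch
X, y = load_iris(return_X_y=True)
print(X.shape, y.shape)   # (150, 4) (150,)
```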
.. currentmodule:: sklearn

.. include:: whats\_new/\_contributors.rst

Release History
===============

Changelogs and release notes for all scikit-learn releases are linked in this page.

.. tip::

   `Subscribe to scikit-learn releases `\_\_ on libraries.io to be notified when new versions are released.

.. toctree::
   :maxdepth: 2

   whats\_new/v1.9.rst
   whats\_new/v1.8.rst
   whats\_new/v1.7.rst
   whats\_new/v1.6.rst
   whats\_new/v1.5.rst
   whats\_new/v1.4.rst
   whats\_new/v1.3.rst
   whats\_new/v1.2.rst
   whats\_new/v1.1.rst
   whats\_new/v1.0.rst
   whats\_new/v0.24.rst
   whats\_new/v0.23.rst
   whats\_new/v0.22.rst
   whats\_new/v0.21.rst
   whats\_new/v0.20.rst
   whats\_new/v0.19.rst
   whats\_new/v0.18.rst
   whats\_new/v0.17.rst
   whats\_new/v0.16.rst
   whats\_new/v0.15.rst
   whats\_new/v0.14.rst
   whats\_new/v0.13.rst
   whats\_new/older\_versions.rst
.. raw:: html

   <style>
     /* h3 headings on this page are the questions; make them rubric-like */
     h3 {
       font-size: 1rem;
       font-weight: bold;
       padding-bottom: 0.2rem;
       margin: 2rem 0 1.15rem 0;
       border-bottom: 1px solid var(--pst-color-border);
     }

     /* Increase top margin for first question in each section */
     h2 + section > h3 {
       margin-top: 2.5rem;
     }

     /* Make the headerlinks a bit more visible */
     h3 > a.headerlink {
       font-size: 0.9rem;
     }

     /* Remove the backlink decoration on the titles */
     h2 > a.toc-backref,
     h3 > a.toc-backref {
       text-decoration: none;
     }
   </style>

.. \_faq:

==========================
Frequently Asked Questions
==========================

.. currentmodule:: sklearn

Here we try to give some answers to questions that regularly pop up on the mailing list.

.. contents:: Table of Contents
   :local:
   :depth: 2

About the project
-----------------

What is the project name (a lot of people get it wrong)?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

scikit-learn, but not scikit or SciKit nor sci-kit learn. Also not scikits.learn or scikits-learn, which were previously used.

How do you pronounce the project name?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

sy-kit learn. sci stands for science!

Why scikit?
^^^^^^^^^^^

There are multiple scikits, which are scientific toolboxes built around SciPy. Apart from scikit-learn, another popular one is `scikit-image `\_.

Do you support PyPy?
^^^^^^^^^^^^^^^^^^^^

Due to limited maintainer resources and small number of users, using scikit-learn with `PyPy `\_ (an alternative Python implementation with a built-in just-in-time compiler) is not officially supported.

How can I obtain permission to use the images in scikit-learn for my work?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The images contained in the `scikit-learn repository `\_ and the images generated within the `scikit-learn documentation `\_ can be used via the `BSD 3-Clause License `\_ for your work.
Citations of scikit-learn are highly encouraged and appreciated. See :ref:`citing scikit-learn `. However, the scikit-learn logo is subject to some terms and conditions. See :ref:`branding-and-logos`.

Implementation decisions
------------------------

Why is there no support for deep or reinforcement learning? Will there be such support in the future?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Deep learning and reinforcement learning both require a rich vocabulary to define an architecture, with deep learning additionally requiring GPUs for efficient computing. However, neither of these fit within the design constraints of scikit-learn. As a result, deep learning and reinforcement learning are currently out of scope for what scikit-learn seeks to achieve. You can find more information about the addition of GPU support at `Will you add GPU support?`\_.

Note that scikit-learn currently implements a simple multilayer perceptron in :mod:`sklearn.neural\_network`. We will only accept bug fixes for this module. If you want to implement more complex deep learning models, please turn to popular deep learning frameworks such as `tensorflow `\_, `keras `\_, and `pytorch `\_.

.. \_adding\_graphical\_models:

Will you add graphical models or sequence prediction to scikit-learn?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Not in the foreseeable future. scikit-learn tries to provide a unified API for the basic tasks in machine learning, with pipelines and meta-algorithms like grid search to tie everything together. The required concepts, APIs, algorithms and expertise required for structured learning are different from what scikit-learn has to offer. If we started doing arbitrary structured learning, we'd need to redesign the whole package and the project would likely collapse under its own weight.
There are two projects with API similar to scikit-learn that do structured prediction:

\* `pystruct `\_ handles general structured learning (focuses on SSVMs on arbitrary graph structures with approximate inference; defines the notion of sample as an instance of the graph structure).

\* `seqlearn `\_ handles sequences only (focuses on exact inference; has HMMs, but mostly for the sake of completeness; treats a feature vector as a sample and uses an offset encoding for the dependencies between feature vectors).

Why did you remove HMMs from scikit-learn?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

See :ref:`adding\_graphical\_models`.

Will you add GPU support?
^^^^^^^^^^^^^^^^^^^^^^^^^

Adding GPU support by default would introduce heavy hardware-specific software dependencies and existing algorithms would need to be reimplemented. This would make it both harder for the average user to install scikit-learn and harder for the developers to maintain the code.

However, since 2023, a limited but growing :ref:`list of scikit-learn estimators ` can already run on GPUs if the input data is provided as a PyTorch or CuPy array and if scikit-learn has been configured to accept such inputs as explained in :ref:`array\_api`. This Array API support allows scikit-learn to run on GPUs without introducing heavy and hardware-specific software dependencies to the main package.

Most estimators that rely on NumPy for their computationally intensive operations can be considered for Array API support and therefore GPU support. However, not all scikit-learn estimators are amenable to efficiently running on GPUs via the Array API for fundamental algorithmic reasons. For instance, tree-based models currently implemented with Cython in scikit-learn are fundamentally not array-based algorithms. Other algorithms such as k-means or k-nearest neighbors rely on array-based algorithms but are also implemented in Cython.
Cython is used to manually interleave consecutive array operations to avoid introducing performance killing memory access to large intermediate arrays: this low-level algorithmic rewrite is called "kernel fusion" and cannot be expressed via the Array API for the foreseeable future.

Adding efficient GPU support to estimators that cannot be efficiently implemented with the Array API would require designing and adopting a more flexible extension system for scikit-learn. This possibility is being considered in the following GitHub issue (under discussion):

- https://github.com/scikit-learn/scikit-learn/issues/22438

Why do categorical variables need preprocessing in scikit-learn, compared to other tools?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Most of scikit-learn assumes data is in NumPy arrays or SciPy sparse matrices of a single numeric dtype. These do not explicitly represent categorical variables at present. Thus, unlike R's ``data.frames`` or :class:`pandas.DataFrame`, we require explicit conversion of categorical features to numeric values, as discussed in :ref:`preprocessing\_categorical\_features`. See also :ref:`sphx\_glr\_auto\_examples\_compose\_plot\_column\_transformer\_mixed\_types.py` for an example of working with heterogeneous (e.g. categorical and numeric) data.

Note that recently, :class:`~sklearn.ensemble.HistGradientBoostingClassifier` and :class:`~sklearn.ensemble.HistGradientBoostingRegressor` gained native support for categorical features through the option `categorical\_features="from\_dtype"`. This option relies on inferring which columns of the data are categorical based on the :class:`pandas.CategoricalDtype` and :class:`polars.datatypes.Categorical` dtypes.

Does scikit-learn work natively with various types of dataframes?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Scikit-learn has limited support for :class:`pandas.DataFrame` and :class:`polars.DataFrame`.
Scikit-learn estimators can accept both these dataframe types as input, and scikit-learn transformers can output dataframes using the `set\_output` API. For more details, refer to :ref:`sphx\_glr\_auto\_examples\_miscellaneous\_plot\_set\_output.py`.

However, the internal computations in scikit-learn estimators rely on numerical operations that are more efficiently performed on homogeneous data structures such as NumPy arrays or SciPy sparse matrices. As a result, most scikit-learn estimators will internally convert dataframe inputs into these homogeneous data structures. Similarly, dataframe outputs are generated from these homogeneous data structures.

Also note that :class:`~sklearn.compose.ColumnTransformer` makes it convenient to handle heterogeneous pandas dataframes by mapping homogeneous subsets of dataframe columns selected by name or dtype to dedicated scikit-learn transformers. Therefore :class:`~sklearn.compose.ColumnTransformer` are often used in the first step of scikit-learn pipelines when dealing with heterogeneous dataframes (see :ref:`pipeline` for more details). See also :ref:`sphx\_glr\_auto\_examples\_compose\_plot\_column\_transformer\_mixed\_types.py` for an example of working with heterogeneous (e.g. categorical and numeric) data.

Do you plan to implement transform for target ``y`` in a pipeline?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Currently transform only works for features ``X`` in a pipeline. There's a long-standing discussion about not being able to transform ``y`` in a pipeline. Follow on GitHub issue :issue:`4143`. Meanwhile, you can check out :class:`~compose.TransformedTargetRegressor`, `pipegraph `\_, and `imbalanced-learn `\_. Note that scikit-learn solved for the case where ``y`` has an invertible transformation applied before training and inverted after prediction. scikit-learn intends to solve for use cases where ``y`` should be transformed at training time and not at test time, for resampling and similar uses, like at `imbalanced-learn `\_. In general, these use cases can be solved with a custom meta estimator rather than a :class:`~pipeline.Pipeline`.

Why are there so many different estimators for linear models?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Usually, there is one classifier and one regressor per model type, e.g. :class:`~ensemble.GradientBoostingClassifier` and :class:`~ensemble.GradientBoostingRegressor`. Both have similar options and both have the parameter `loss`, which is especially useful in the regression case as it enables the estimation of conditional mean as well as conditional quantiles.

For linear models, there are many estimator classes which are very close to each other.
Let us have a look at

- :class:`~linear\_model.LinearRegression`, no penalty
- :class:`~linear\_model.Ridge`, L2 penalty
- :class:`~linear\_model.Lasso`, L1 penalty (sparse models)
- :class:`~linear\_model.ElasticNet`, L1 + L2 penalty (less sparse models)
- :class:`~linear\_model.SGDRegressor` with `loss="squared\_loss"`

\*\*Maintainer perspective:\*\* They all do in principle the same and are different only by the penalty they impose. This, however, has a large impact on the way the underlying optimization problem is solved. In the end, this amounts to usage of different methods and tricks from linear algebra. A special case is :class:`~linear\_model.SGDRegressor` which comprises all 4 previous models and is different by the optimization procedure. A further side effect is that the different estimators favor different data layouts (`X` C-contiguous or F-contiguous, sparse csr or csc). This complexity of the seemingly simple linear models is the reason for having different estimator classes for different penalties.

\*\*User perspective:\*\* First, the current design is inspired by the scientific literature where linear regression models with different regularization/penalty were given different names, e.g. \*ridge regression\*. Having different model classes with according names makes it easier for users to find those regression models. Secondly, if all the 5 above mentioned linear models were unified into a single class, there would be parameters with a lot of options like the ``solver`` parameter. On top of that, there would be a lot of exclusive interactions between different parameters. For example, the possible options of the parameters ``solver``, ``precompute`` and ``selection`` would depend on the chosen values of the penalty parameters ``alpha`` and ``l1\_ratio``.

Contributing
------------

How can I contribute to scikit-learn?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

See :ref:`contributing`.
Before wanting to add a new algorithm, which is usually a major and lengthy undertaking, it is recommended to start with :ref:`known issues `. Please do not contact the contributors of scikit-learn directly regarding contributing to scikit-learn.

Why is my pull request not getting any attention?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The scikit-learn review process takes a significant amount of time, and contributors should not be discouraged by a lack of activity or review on their pull request. We care a lot about getting things right the first time, as maintenance and later change comes at a high cost. We rarely release any "experimental" code, so all of our contributions will be subject to high use immediately and should be of the highest quality possible initially.

Beyond that, scikit-learn is limited in its reviewing bandwidth; many of the reviewers and core developers are working on scikit-learn on their own time. If a review of your pull request comes slowly, it is likely because the reviewers are busy. We ask for your understanding and request that you not close your pull request or discontinue your work solely because of this reason. For tips on how to make your pull request easier to review and more likely to be reviewed quickly, see :ref:`improve\_issue\_pr`.

.. \_improve\_issue\_pr:

How do I improve my issue or pull request?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To help your issue receive attention or improve the likelihood of your pull request being reviewed, you can try:

\* follow our :ref:`contribution guidelines `, in particular :ref:`automated\_contributions\_policy`, :ref:`filing\_bugs`, :ref:`stalled\_pull\_request` and :ref:`stalled\_unclaimed\_issues`,
\* complete all sections of the issue or pull request template provided by GitHub, including a clear description of the issue or motivation and thought process behind the pull request,
\* ensure the title clearly describes the issue or pull request and does not include an issue number.
For your pull requests specifically, the following will make it easier to review:

\* ensure your PR addresses an issue for which there is clear consensus on the solution (see :ref:`issues\_tagged\_needs\_triage`),
\* ensure the PR satisfies all items in the :ref:`Pull request checklist `,
\* ensure the changes are minimal and directly relevant to the described issue.

What does the "spam" label for issues or pull requests mean?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The "spam" label is an indication for reviewers that the issue or pull request may not have received sufficient effort or preparation from the author for a productive review. The maintainers are using this label as a way to deal with the increase of low value PRs and issues.

If an issue or PR was labeled as spam and simultaneously closed, the decision is final. A common reason for this happening is when people open a PR for an issue that is still under discussion. Please wait for the discussion to converge before opening a PR. If your issue or PR was labeled as spam and not closed, see :ref:`improve\_issue\_pr` for tips on improving your issue or pull request and increasing the likelihood of the label being removed.

.. \_new\_algorithms\_inclusion\_criteria:

What are the inclusion criteria for new algorithms?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

We only consider well-established algorithms for inclusion. A rule of thumb is at least 3 years since publication, 200+ citations, and wide use and usefulness. A technique that provides a clear-cut improvement (e.g. an enhanced data structure or a more efficient approximation technique) on a widely-used method will also be considered for inclusion.

From the algorithms or techniques that meet the above criteria, only those which fit well within the current API of scikit-learn, that is a ``fit``, ``predict/transform`` interface and ordinarily having input/output that is a numpy array or sparse matrix, are accepted.
The contributor should support the importance of the proposed addition with research papers and/or implementations in other similar packages, demonstrate its usefulness via common use-cases/applications and corroborate performance improvements, if any, with benchmarks and/or plots. It is expected that the proposed algorithm should outperform the methods that are already implemented in scikit-learn at least in some areas.

Please do not propose algorithms you (your best friend, colleague or boss) created. scikit-learn is not a good venue for advertising your own work.

Inclusion of a new algorithm speeding up an existing model is easier if:

- it does not introduce new hyper-parameters (as it makes the library more future-proof),
- it is easy to document clearly when the contribution improves the speed and when it does not, for instance, "when ``n\_features >> n\_samples``",
- benchmarks clearly show a speed up.

Also, note that your implementation need not be in scikit-learn to be used together with scikit-learn tools. You can implement your favorite algorithm in a scikit-learn compatible way, upload it to GitHub and let us know. We will be happy to list it under :ref:`related\_projects`. If you already have a package on GitHub following the scikit-learn API, you may also be interested to look at `scikit-learn-contrib `\_.

.. \_selectiveness:

Why are you so selective on what algorithms you include in scikit-learn?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Code comes with maintenance cost, and we need to balance the amount of code we have with the size of the team (and add to this the fact that complexity scales non linearly with the number of features). The package relies on core developers using their free time to fix bugs, maintain code and review contributions. Any algorithm that is added needs future attention by the developers, at which point the original author might long have lost interest.
See also :ref:`new\_algorithms\_inclusion\_criteria`. For a great read about long-term maintenance issues in open-source software, look at `the Executive Summary of Roads and Bridges `\_.

Using scikit-learn
------------------

How do I get started with scikit-learn?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you are new to scikit-learn, or looking to strengthen your understanding, we highly recommend the \*\*scikit-learn MOOC (Massive Open Online Course)\*\*. See our :ref:`External Resources, Videos and Talks page ` for more details.

What's the best way to get help on scikit-learn usage?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

\* General machine learning questions: use `Cross Validated `\_ with the ``[machine-learning]`` tag.
\* scikit-learn usage questions: use `Stack Overflow `\_ with the ``[scikit-learn]`` and ``[python]`` tags. You can alternatively use the `mailing list `\_.

Please make sure to include a minimal reproduction code snippet (ideally shorter than 10 lines) that highlights your problem on a toy dataset (for instance from :mod:`sklearn.datasets` or randomly generated with functions of ``numpy.random`` with a fixed random seed). Please remove any line of code that is not necessary to reproduce your problem. The problem should be reproducible by simply copy-pasting your code snippet in a Python shell with scikit-learn installed. Do not forget to include the import statements. More guidance to write good reproduction code snippets can be found at: https://stackoverflow.com/help/mcve.

If your problem raises an exception that you do not understand (even after googling it), please make sure to include the full traceback that you obtain when running the reproduction script. For bug reports or feature requests, please make use of the `issue tracker on GitHub `\_.

.. warning::

   Please do not email any authors directly to ask for assistance, report bugs, or for any other issue related to scikit-learn.
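For illustration (this snippet is not from the FAQ itself, and the estimator and dataset are arbitrary choices), a reproduction script following the guidance above — toy data, a fixed random seed, and all imports included — could look like:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Small synthetic dataset with a fixed random seed, so the behavior is reproducible
X, y = make_classification(n_samples=20, n_features=5, random_state=0)

clf = LogisticRegression().fit(X, y)
print(clf.predict(X[:3]))  # the call that triggers the behavior being reported
```

Anyone can paste this into a Python shell with scikit-learn installed and observe the same behavior, which is exactly what makes a report actionable.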
How should I save, export or deploy estimators for production?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

See :ref:`model\_persistence`.

How can I create a bunch object?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Bunch objects are sometimes used as an output for functions and methods. They extend dictionaries by enabling values to be accessed by key, `bunch["value\_key"]`, or by an attribute, `bunch.value\_key`. They should not be used as an input. Therefore you almost never need to create a :class:`~utils.Bunch` object, unless you are extending scikit-learn's API.

How can I load my own datasets into a format usable by scikit-learn?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Generally, scikit-learn works on any numeric data stored as numpy arrays or scipy sparse matrices. Other types that are convertible to numeric arrays such as :class:`pandas.DataFrame` are also acceptable. For more information on loading your data files into these usable data structures, please refer to :ref:`loading external datasets `.

How do I deal with string data (or trees, graphs...)?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

scikit-learn estimators assume you'll feed them real-valued feature vectors. This assumption is hard-coded in pretty much all of the library. However, you can feed non-numerical inputs to estimators in several ways.

If you have text documents, you can use term frequency features; see :ref:`text\_feature\_extraction` for the built-in \*text vectorizers\*. For more general feature extraction from any kind of data, see :ref:`dict\_feature\_extraction` and :ref:`feature\_hashing`.

Another common case is when you have non-numerical data and a custom distance (or similarity) metric on these data. Examples include strings with edit distance (aka.
Levenshtein distance), for instance, DNA or RNA sequences. These can be encoded as numbers, but doing so is painful and error-prone. Working with distance metrics on arbitrary data can be done in two ways.

Firstly, many estimators take precomputed distance/similarity matrices, so if the dataset is not too large, you can compute distances for all pairs of inputs. If the dataset is large, you can use feature vectors with only one "feature", which is an index into a separate data structure, and supply a custom metric function that looks up the actual data in this data structure. For instance, to use :class:`~cluster.dbscan` with Levenshtein distances::

    >>> import numpy as np
    >>> from leven import levenshtein       # doctest: +SKIP
    >>> from sklearn.cluster import dbscan
    >>> data = ["ACCTCCTAGAAG", "ACCTACTAGAAGTT", "GAATATTAGGCCGA"]
    >>> def lev_metric(x, y):
    ...     i, j = int(x[0]), int(y[0])     # extract indices
    ...     return levenshtein(data[i], data[j])
    ...
    >>> X = np.arange(len(data)).reshape(-1, 1)
    >>> X
    array([[0],
           [1],
           [2]])
    >>> # We need to specify algorithm='brute' as the default assumes
    >>> # a continuous feature space.
    >>> dbscan(X, metric=lev_metric, eps=5, min_samples=2, algorithm='brute')  # doctest: +SKIP
    (array([0, 1]), array([ 0,  0, -1]))

Note that the example above uses the third-party edit distance package `leven `\_. Similar tricks can be used, with some care, for tree kernels, graph kernels, etc.

Why do I sometimes get a crash/freeze with ``n\_jobs > 1`` under OSX or Linux?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Several scikit-learn tools such as :class:`~model\_selection.GridSearchCV` and :class:`~model\_selection.cross\_val\_score` rely internally on Python's :mod:`multiprocessing` module to parallelize execution onto several Python processes by passing ``n\_jobs > 1`` as an argument.
The problem is that Python :mod:`multiprocessing` does a ``fork`` system call without following it with an ``exec`` system call for performance reasons. Many libraries like (some versions of) Accelerate or vecLib under OSX, (some versions of) MKL, the OpenMP runtime of GCC, nvidia's Cuda (and probably many others), manage their own internal thread pool. Upon a call to `fork`, the thread pool state in the child process is corrupted: the thread
pool believes it has many threads while only the main thread state has been
forked. It is possible to change the libraries to make them detect when a
fork happens and reinitialize the thread pool in that case: we did that for
OpenBLAS (merged upstream in main since 0.2.10) and we contributed a
`patch `_ to GCC's OpenMP runtime (not yet reviewed).

But in the end the real culprit is Python's :mod:`multiprocessing` that does
``fork`` without ``exec`` to reduce the overhead of starting and using new
Python processes for parallel computing. Unfortunately this is a violation of
the POSIX standard and therefore some software editors like Apple refuse to
consider the lack of fork-safety in Accelerate and vecLib as a bug.

In Python 3.4+ it is now possible to configure :mod:`multiprocessing` to use
the ``"forkserver"`` or ``"spawn"`` start methods (instead of the default
``"fork"``) to manage the process pools. To work around this issue when
using scikit-learn, you can set the ``JOBLIB_START_METHOD`` environment
variable to ``"forkserver"``. However the user should be aware that using
the ``"forkserver"`` method prevents :class:`joblib.Parallel` from calling
functions interactively defined in a shell session.

If you have custom code that uses :mod:`multiprocessing` directly instead of
using it via :mod:`joblib` you can enable the ``"forkserver"`` mode globally
for your program. Insert the following instructions in your main script::

    import multiprocessing

    # other imports, custom code, load data, define model...
    if __name__ == "__main__":
        multiprocessing.set_start_method("forkserver")

        # call scikit-learn utils with n_jobs > 1 here

You can find more details on the new start methods in the `multiprocessing
documentation `_.

.. _faq_mkl_threading:

Why does my job use more cores than specified with ``n_jobs``?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This is because ``n_jobs`` only controls the number of jobs for routines that
are parallelized with :mod:`joblib`, but parallel code can come from other
sources:

- some routines may be parallelized with OpenMP (for code written in C or
  Cython),
- scikit-learn relies a lot on numpy, which in turn may rely on numerical
  libraries like MKL, OpenBLAS or BLIS which can provide parallel
  implementations.

For more details, please refer to our :ref:`notes on parallelism `.

How do I set a ``random_state`` for an entire execution?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Please refer to :ref:`randomness`.
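As a concrete illustration of the ``n_jobs`` point above, the native thread
pools of these numerical libraries can usually be capped through environment
variables, provided they are set before numpy (and the BLAS it links against)
is imported. This is a sketch, not part of the FAQ text itself; which of the
variables is actually honored depends on the BLAS build in use:

```python
import os

# Cap the native thread pools before numpy (and its BLAS) is imported.
# Which variable is honored depends on the BLAS implementation linked
# into your numpy build (OpenBLAS, MKL, BLIS, ...).
os.environ["OMP_NUM_THREADS"] = "1"        # OpenMP runtimes
os.environ["OPENBLAS_NUM_THREADS"] = "1"   # OpenBLAS
os.environ["MKL_NUM_THREADS"] = "1"        # Intel MKL

import numpy as np

# BLAS calls made through numpy from here on should stay single-threaded,
# so total core usage is driven by n_jobs alone.
a = np.random.rand(200, 200)
b = a @ a  # matrix multiply: a typical BLAS-backed operation
print(b.shape)
```

For changing the limits at runtime rather than at startup, the third-party
`threadpoolctl` package offers a context-manager based alternative.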
.. _common_pitfalls:

=========================================
Common pitfalls and recommended practices
=========================================

The purpose of this chapter is to illustrate some common pitfalls and
anti-patterns that occur when using scikit-learn. It provides examples of what
**not** to do, along with a corresponding correct example.

Inconsistent preprocessing
==========================

scikit-learn provides a library of :ref:`data-transforms`, which may clean
(see :ref:`preprocessing`), reduce (see :ref:`data_reduction`), expand (see
:ref:`kernel_approximation`) or generate (see :ref:`feature_extraction`)
feature representations. If these data transforms are used when training a
model, they also must be used on subsequent datasets, whether it's test data
or data in a production system. Otherwise, the feature space will change, and
the model will not be able to perform effectively.

For the following example, let's create a synthetic dataset with a single
feature::

    >>> from sklearn.datasets import make_regression
    >>> from sklearn.model_selection import train_test_split

    >>> random_state = 42
    >>> X, y = make_regression(random_state=random_state, n_features=1, noise=1)
    >>> X_train, X_test, y_train, y_test = train_test_split(
    ...     X, y, test_size=0.4, random_state=random_state)

**Wrong**

The train dataset is scaled, but not the test dataset, so model performance
on the test dataset is worse than expected::

    >>> from sklearn.metrics import mean_squared_error
    >>> from sklearn.linear_model import LinearRegression
    >>> from sklearn.preprocessing import StandardScaler

    >>> scaler = StandardScaler()
    >>> X_train_transformed = scaler.fit_transform(X_train)
    >>> model = LinearRegression().fit(X_train_transformed, y_train)
    >>> mean_squared_error(y_test, model.predict(X_test))
    62.80...
**Right**

Instead of passing the non-transformed `X_test` to `predict`, we should
transform the test data, the same way we transformed the training data::

    >>> X_test_transformed = scaler.transform(X_test)
    >>> mean_squared_error(y_test, model.predict(X_test_transformed))
    0.90...

Alternatively, we recommend using a :class:`Pipeline `, which makes it easier
to chain transformations with estimators, and reduces the possibility of
forgetting a transformation::

    >>> from sklearn.pipeline import make_pipeline

    >>> model = make_pipeline(StandardScaler(), LinearRegression())
    >>> model.fit(X_train, y_train)
    Pipeline(steps=[('standardscaler', StandardScaler()),
                    ('linearregression', LinearRegression())])
    >>> mean_squared_error(y_test, model.predict(X_test))
    0.90...

Pipelines also help avoid another common pitfall: leaking the test data into
the training data.

.. _data_leakage:

Data leakage
============

Data leakage occurs when information that would not be available at
prediction time is used when building the model. This results in overly
optimistic performance estimates, for example from :ref:`cross-validation `,
and thus poorer performance when the model is used on actually novel data,
for example during production.

A common cause is not keeping the test and train data subsets separate. Test
data should never be used to make choices about the model. **The general
rule is to never call** `fit` **on the test data**. While this may sound
obvious, this is easy to miss in some cases, for example when applying
certain pre-processing steps.

Although both train and test data subsets should receive the same
preprocessing transformation (as described in the previous section), it is
important that these transformations are only learnt from the training data.
For example, if you have a normalization step where you divide by the
average value, the average should be the average of the train subset,
**not** the average of all the data.
If the test subset is included in the average calculation, information from
the test subset is influencing the model.

How to avoid data leakage
-------------------------

Below are some tips on avoiding data leakage:

* Always split the data into train and test subsets first, particularly
  before any preprocessing steps.
* Never include test data when using the `fit` and `fit_transform` methods.
  Using all the data, e.g., `fit(X)`, can result in overly optimistic scores.

  Conversely, the `transform` method should be used on both train and test
  subsets as the same preprocessing should be applied to all the data. This
  can be achieved by using `fit_transform` on the train subset and
  `transform` on the test subset.
* The scikit-learn :ref:`pipeline ` is a great way to
  prevent data leakage as it ensures that the appropriate method is performed
  on the correct data subset. The pipeline is ideal for use in
  cross-validation and hyper-parameter tuning functions.

An example of data leakage during preprocessing is detailed below.

Data leakage during pre-processing
----------------------------------

.. note::

    We here choose to illustrate data leakage with a feature selection step.
    This risk of leakage is however relevant with almost all transformations
    in scikit-learn, including (but not limited to)
    :class:`~sklearn.preprocessing.StandardScaler`,
    :class:`~sklearn.impute.SimpleImputer`, and
    :class:`~sklearn.decomposition.PCA`.

A number of :ref:`feature_selection` functions are available in scikit-learn.
They can help remove irrelevant, redundant and noisy features as well as
improve your model build time and performance. As with any other type of
preprocessing, feature selection should **only** use the training data.
Including the test data in feature selection will optimistically bias your
model.

To demonstrate we will create this binary classification problem with 10,000
randomly generated features::

    >>> import numpy as np
    >>> n_samples, n_features, n_classes = 200, 10000, 2
    >>> rng = np.random.RandomState(42)
    >>> X = rng.standard_normal((n_samples, n_features))
    >>> y = rng.choice(n_classes, n_samples)

**Wrong**

Using all the data to perform feature selection results in an accuracy score
much higher than chance, even though our targets are completely random. This
randomness means that our `X` and `y` are independent and we thus expect the
accuracy to be around 0.5.
However, since the feature selection step 'sees' the test data, the model has
an unfair advantage. In the incorrect example below we first use all the data
for feature selection and then split the data into training and test subsets
for model fitting. The result is a much higher than expected accuracy score::

    >>> from sklearn.model_selection import train_test_split
    >>> from sklearn.feature_selection import SelectKBest
    >>> from sklearn.ensemble import HistGradientBoostingClassifier
    >>> from sklearn.metrics import accuracy_score

    >>> # Incorrect preprocessing: the entire data is transformed
    >>> X_selected = SelectKBest(k=25).fit_transform(X, y)

    >>> X_train, X_test, y_train, y_test = train_test_split(
    ...     X_selected, y, random_state=42)
    >>> gbc = HistGradientBoostingClassifier(random_state=1)
    >>> gbc.fit(X_train, y_train)
    HistGradientBoostingClassifier(random_state=1)

    >>> y_pred = gbc.predict(X_test)
    >>> accuracy_score(y_test, y_pred)
    0.76

**Right**

To prevent data leakage, it is good practice to split your data into train
and test subsets **first**. Feature selection can then be performed using
just the train dataset. Notice that whenever we use `fit` or `fit_transform`,
we only use the train dataset. The score is now what we would expect for the
data, close to chance::

    >>> X_train, X_test, y_train, y_test = train_test_split(
    ...     X, y, random_state=42)
    >>> select = SelectKBest(k=25)
    >>> X_train_selected = select.fit_transform(X_train, y_train)
    >>> gbc = HistGradientBoostingClassifier(random_state=1)
    >>> gbc.fit(X_train_selected, y_train)
    HistGradientBoostingClassifier(random_state=1)

    >>> X_test_selected = select.transform(X_test)
    >>> y_pred = gbc.predict(X_test_selected)
    >>> accuracy_score(y_test, y_pred)
    0.5

Here again, we recommend using a :class:`~sklearn.pipeline.Pipeline` to chain
together the feature selection and model estimators.
The pipeline ensures that only the training data is used when performing
`fit` and the test data is used only for calculating the accuracy score::

    >>> from sklearn.pipeline import make_pipeline

    >>> X_train, X_test, y_train, y_test = train_test_split(
    ...     X, y, random_state=42)
    >>> pipeline = make_pipeline(SelectKBest(k=25),
    ...                          HistGradientBoostingClassifier(random_state=1))
    >>> pipeline.fit(X_train, y_train)
    Pipeline(steps=[('selectkbest',
SelectKBest(k=25)),
                    ('histgradientboostingclassifier',
                     HistGradientBoostingClassifier(random_state=1))])
    >>> y_pred = pipeline.predict(X_test)
    >>> accuracy_score(y_test, y_pred)
    0.5

The pipeline can also be fed into a cross-validation function such as
:func:`~sklearn.model_selection.cross_val_score`. Again, the pipeline
ensures that the correct data subset and estimator method is used during
fitting and predicting::

    >>> from sklearn.model_selection import cross_val_score
    >>> scores = cross_val_score(pipeline, X, y)
    >>> print(f"Mean accuracy: {scores.mean():.2f}+/-{scores.std():.2f}")
    Mean accuracy: 0.43+/-0.05

.. _randomness:

Controlling randomness
======================

Some scikit-learn objects are inherently random. These are usually estimators
(e.g. :class:`~sklearn.ensemble.RandomForestClassifier`) and cross-validation
splitters (e.g. :class:`~sklearn.model_selection.KFold`). The randomness of
these objects is controlled via their `random_state` parameter, as described
in the :term:`Glossary `. This section expands on the glossary entry, and
describes good practices and common pitfalls w.r.t. this subtle parameter.

.. note:: Recommendation summary

    For optimal robustness of cross-validation (CV) results, pass
    `RandomState` instances when creating estimators, or leave `random_state`
    to `None`. Passing integers to CV splitters is usually the safest option
    and is preferable; passing `RandomState` instances to splitters may
    sometimes be useful to achieve very specific use-cases. For both
    estimators and splitters, passing an integer vs passing an instance (or
    `None`) leads to subtle but significant differences, especially for CV
    procedures. These differences are important to understand when reporting
    results.

    For reproducible results across executions, remove any use of
    `random_state=None`.
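The three accepted kinds of `random_state` value (integer, `RandomState`
instance, `None`) can be seen side by side through
:func:`sklearn.utils.check_random_state`, the helper scikit-learn uses
internally to resolve this parameter. A minimal sketch (not part of the
original text; behavior as documented for that helper):

```python
import numpy as np
from sklearn.utils import check_random_state

# An integer produces a freshly seeded RandomState on each resolution:
# two resolutions are distinct objects that emit the same number stream.
a = check_random_state(42)
b = check_random_state(42)
assert a is not b
assert a.randint(100) == b.randint(100)

# A RandomState instance is passed through unchanged, so every consumer
# shares (and mutates) the same underlying generator.
rng = np.random.RandomState(0)
assert check_random_state(rng) is rng

# None resolves to numpy's global RandomState singleton.
assert check_random_state(None) is np.random.mtrand._rand
```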
Using `None` or `RandomState` instances, and repeated calls to `fit` and `split`
--------------------------------------------------------------------------------

The `random_state` parameter determines whether multiple calls to :term:`fit`
(for estimators) or to :term:`split` (for CV splitters) will produce the same
results, according to these rules:

- If an integer is passed, calling `fit` or `split` multiple times always
  yields the same results.
- If `None` or a `RandomState` instance is passed: `fit` and `split` will
  yield different results each time they are called, and the succession of
  calls explores all sources of entropy. `None` is the default value for all
  `random_state` parameters.

We here illustrate these rules for both estimators and CV splitters.

.. note::

    Since passing `random_state=None` is equivalent to passing the global
    `RandomState` instance from `numpy`
    (`random_state=np.random.mtrand._rand`), we will not explicitly mention
    `None` here. Everything that applies to instances also applies to using
    `None`.

Estimators
..........

Passing instances means that calling `fit` multiple times will not yield the
same results, even if the estimator is fitted on the same data and with the
same hyper-parameters::

    >>> from sklearn.linear_model import SGDClassifier
    >>> from sklearn.datasets import make_classification
    >>> import numpy as np

    >>> rng = np.random.RandomState(0)
    >>> X, y = make_classification(n_features=5, random_state=rng)
    >>> sgd = SGDClassifier(random_state=rng)

    >>> sgd.fit(X, y).coef_
    array([[ 8.85418642,  4.79084103, -3.13077794,  8.11915045, -0.56479934]])

    >>> sgd.fit(X, y).coef_
    array([[ 6.70814003,  5.25291366, -7.55212743,  5.18197458,  1.37845099]])

We can see from the snippet above that repeatedly calling `sgd.fit` has
produced different models, even if the data was the same. This is because the
Random Number Generator (RNG) of the estimator is consumed (i.e.
mutated) when `fit` is called, and this mutated RNG will be used in the subsequent calls to `fit`. In addition, the `rng` object is shared across all objects that use it, and as a consequence, these objects become somewhat inter-dependent. For example, two estimators that share the same `RandomState` instance will influence each other, as we will see later when we discuss cloning. This point is important to keep in mind when debugging. If we had passed an integer to the `random\_state` parameter of the :class:`~sklearn.linear\_model.SGDClassifier`, we would have obtained the same models, and thus the same scores each time. When we pass an integer, the same RNG is used across all calls to `fit`. What internally happens is that even though the RNG is consumed when `fit` is called, it is always reset to its
original state at the beginning of `fit`.

CV splitters
............

Randomized CV splitters have a similar behavior when a `RandomState` instance
is passed; calling `split` multiple times yields different data splits::

    >>> from sklearn.model_selection import KFold
    >>> import numpy as np

    >>> X = y = np.arange(10)
    >>> rng = np.random.RandomState(0)
    >>> cv = KFold(n_splits=2, shuffle=True, random_state=rng)

    >>> for train, test in cv.split(X, y):
    ...     print(train, test)
    [0 3 5 6 7] [1 2 4 8 9]
    [1 2 4 8 9] [0 3 5 6 7]

    >>> for train, test in cv.split(X, y):
    ...     print(train, test)
    [0 4 6 7 8] [1 2 3 5 9]
    [1 2 3 5 9] [0 4 6 7 8]

We can see that the splits are different from the second time `split` is
called. This may lead to unexpected results if you compare the performance of
multiple estimators by calling `split` many times, as we will see in the next
section.

Common pitfalls and subtleties
------------------------------

While the rules that govern the `random_state` parameter are seemingly
simple, they do however have some subtle implications. In some cases, this
can even lead to wrong conclusions.

Estimators
..........

**Different** `random_state` **types lead to different cross-validation
procedures**

Depending on the type of the `random_state` parameter, estimators will behave
differently, especially in cross-validation procedures.
Consider the following snippet::

    >>> from sklearn.ensemble import RandomForestClassifier
    >>> from sklearn.datasets import make_classification
    >>> from sklearn.model_selection import cross_val_score
    >>> import numpy as np

    >>> X, y = make_classification(random_state=0)

    >>> rf_123 = RandomForestClassifier(random_state=123)
    >>> cross_val_score(rf_123, X, y)
    array([0.85, 0.95, 0.95, 0.9 , 0.9 ])

    >>> rf_inst = RandomForestClassifier(random_state=np.random.RandomState(0))
    >>> cross_val_score(rf_inst, X, y)
    array([0.9 , 0.95, 0.95, 0.9 , 0.9 ])

We see that the cross-validated scores of `rf_123` and `rf_inst` are
different, as should be expected since we didn't pass the same `random_state`
parameter. However, the difference between these scores is more subtle than
it looks, and **the cross-validation procedures that were performed by**
:func:`~sklearn.model_selection.cross_val_score` **significantly differ in
each case**:

- Since `rf_123` was passed an integer, every call to `fit` uses the same
  RNG: this means that all random characteristics of the random forest
  estimator will be the same for each of the 5 folds of the CV procedure. In
  particular, the (randomly chosen) subset of features of the estimator will
  be the same across all folds.
- Since `rf_inst` was passed a `RandomState` instance, each call to `fit`
  starts from a different RNG. As a result, the random subset of features
  will be different for each fold.

While having a constant estimator RNG across folds isn't inherently wrong, we
usually want CV results that are robust w.r.t. the estimator's randomness. As
a result, passing an instance instead of an integer may be preferable, since
it will allow the estimator RNG to vary for each fold.

.. note::

    Here, :func:`~sklearn.model_selection.cross_val_score` will use a
    non-randomized CV splitter (as is the default), so both estimators will
    be evaluated on the same splits. This section is not about variability in
    the splits.
    Also, whether
    we pass an integer or an instance to
    :func:`~sklearn.datasets.make_classification` isn't relevant for our
    illustration purpose: what matters is what we pass to the
    :class:`~sklearn.ensemble.RandomForestClassifier` estimator.

.. dropdown:: Cloning

    Another subtle side effect of passing `RandomState` instances is how
    :func:`~sklearn.base.clone` will work::

        >>> from sklearn import clone
        >>> from sklearn.ensemble import RandomForestClassifier
        >>> import numpy as np

        >>> rng = np.random.RandomState(0)
        >>> a = RandomForestClassifier(random_state=rng)
        >>> b = clone(a)

    Since a `RandomState` instance was passed to `a`, `a` and `b` are not
    clones in the strict sense, but rather clones in the statistical sense:
    `a` and `b` will still be different models, even when calling
    `fit(X, y)` on the same data. Moreover, `a` and `b` will influence each
    other since they share the same internal RNG: calling `a.fit` will
    consume `b`'s RNG, and calling `b.fit` will consume `a`'s RNG, since
    they are the same. This bit is true for any estimators that share a
    `random_state` parameter; it is not specific to clones.

    If an integer were passed, `a` and `b` would be exact clones and they
    would not influence each other.

    .. warning::

        Even though :func:`~sklearn.base.clone` is rarely used in user code,
        it is called pervasively throughout the scikit-learn codebase: in
        particular, most meta-estimators that accept non-fitted estimators
        call :func:`~sklearn.base.clone` internally
        (:class:`~sklearn.model_selection.GridSearchCV`,
        :class:`~sklearn.ensemble.StackingClassifier`,
        :class:`~sklearn.calibration.CalibratedClassifierCV`, etc.).

CV splitters
............

When passed a `RandomState` instance, CV splitters yield different splits
each time `split` is called.
When comparing different estimators, this can lead to overestimating the
variance of the difference in performance between the estimators::

    >>> from sklearn.naive_bayes import GaussianNB
    >>> from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    >>> from sklearn.datasets import make_classification
    >>> from sklearn.model_selection import KFold
    >>> from sklearn.model_selection import cross_val_score
    >>> import numpy as np

    >>> rng = np.random.RandomState(0)
    >>> X, y = make_classification(random_state=rng)
    >>> cv = KFold(shuffle=True, random_state=rng)
    >>> lda = LinearDiscriminantAnalysis()
    >>> nb = GaussianNB()

    >>> for est in (lda, nb):
    ...     print(cross_val_score(est, X, y, cv=cv))
    [0.8  0.75 0.75 0.7  0.85]
    [0.85 0.95 0.95 0.85 0.95]

Directly comparing the performance of the
:class:`~sklearn.discriminant_analysis.LinearDiscriminantAnalysis` estimator
vs the :class:`~sklearn.naive_bayes.GaussianNB` estimator **on each fold**
would be a mistake: **the splits on which the estimators are evaluated are
different**. Indeed, :func:`~sklearn.model_selection.cross_val_score` will
internally call `cv.split` on the same
:class:`~sklearn.model_selection.KFold` instance, but the splits will be
different each time. This is also true for any tool that performs model
selection via cross-validation, e.g.
:class:`~sklearn.model_selection.GridSearchCV` and
:class:`~sklearn.model_selection.RandomizedSearchCV`: scores are not
comparable fold-to-fold across different calls to `search.fit`, since
`cv.split` would have been called multiple times. Within a single call to
`search.fit`, however, fold-to-fold comparison is possible since the search
estimator only calls `cv.split` once.

For comparable fold-to-fold results in all scenarios, one should pass an
integer to the CV splitter: `cv = KFold(shuffle=True, random_state=0)`.

..
.. note::

    While fold-to-fold comparison is not advisable with `RandomState`
    instances, one can however expect that average scores allow one to
    conclude whether one estimator is better than another, as long as enough
    folds and data are used.

.. note::

    What matters in this example is what was passed to
    :class:`~sklearn.model_selection.KFold`. Whether we pass a `RandomState`
    instance or an integer to :func:`~sklearn.datasets.make_classification`
    is not relevant for our illustration purpose. Also, neither
    :class:`~sklearn.discriminant_analysis.LinearDiscriminantAnalysis` nor
    :class:`~sklearn.naive_bayes.GaussianNB` are randomized estimators.

General recommendations
-----------------------

Getting reproducible results across multiple executions
.......................................................

In order to obtain reproducible (i.e. constant) results across multiple
*program executions*, we need to remove all uses of
`random_state=None`, which is the default. The recommended way is to declare
a `rng` variable at the top of the program, and pass it down to any object
that accepts a `random_state` parameter::

    >>> from sklearn.ensemble import RandomForestClassifier
    >>> from sklearn.datasets import make_classification
    >>> from sklearn.model_selection import train_test_split
    >>> import numpy as np

    >>> rng = np.random.RandomState(0)
    >>> X, y = make_classification(random_state=rng)
    >>> rf = RandomForestClassifier(random_state=rng)
    >>> X_train, X_test, y_train, y_test = train_test_split(X, y,
    ...                                                     random_state=rng)
    >>> rf.fit(X_train, y_train).score(X_test, y_test)
    0.84

We are now guaranteed that the result of this script will always be 0.84, no
matter how many times we run it. Changing the global `rng` variable to a
different value should affect the results, as expected.

It is also possible to declare the `rng` variable as an integer. This may
however lead to less robust cross-validation results, as we will see in the
next section.

.. note::

    We do not recommend setting the global `numpy` seed by calling
    `np.random.seed(0)`. See `here `_ for a discussion.

Robustness of cross-validation results
......................................

When we evaluate a randomized estimator's performance by cross-validation, we
want to make sure that the estimator can yield accurate predictions for new
data, but we also want to make sure that the estimator is robust w.r.t. its
random initialization. For example, we would like the random weights
initialization of an :class:`~sklearn.linear_model.SGDClassifier` to be
consistently good across all folds: otherwise, when we train that estimator
on new data, we might get unlucky and the random initialization may lead to
bad performance. Similarly, we want a random forest to be robust w.r.t. the
set of randomly selected features that each tree will be using.
For these reasons, it is preferable to evaluate the cross-validation
performance by letting the estimator use a different RNG on each fold. This
is done by passing a `RandomState` instance (or `None`) to the estimator
initialization.

When we pass an integer, the estimator will use the same RNG on each fold:
if the estimator performs well (or badly), as evaluated by CV, it might just
be because we got lucky (or unlucky) with that specific seed. Passing
instances leads to more robust CV results, and makes the comparison between
various algorithms fairer. It also helps limit the temptation to treat the
estimator's RNG as a hyper-parameter that can be tuned.

Whether we pass `RandomState` instances or integers to CV splitters has no
impact on robustness, as long as `split` is only called once. When `split`
is called multiple times, fold-to-fold comparison isn't possible anymore. As
a result, passing an integer to CV splitters is usually safer and covers
most use-cases.
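Putting the two recommendations together, one reasonable default setup (a
sketch, not the only valid choice; `n_estimators=10` is reduced here purely
to keep the example fast) is a `RandomState` instance for the estimator and
an integer for the splitter:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold, cross_val_score

X, y = make_classification(random_state=0)

# RandomState instance for the estimator: its RNG varies across the CV
# folds, so the scores reflect robustness to the random initialization.
clf = RandomForestClassifier(n_estimators=10,
                             random_state=np.random.RandomState(0))

# Integer for the splitter: repeated calls to split() yield the same folds,
# keeping fold-to-fold comparisons meaningful across estimators.
cv = KFold(n_splits=5, shuffle=True, random_state=0)

scores = cross_val_score(clf, X, y, cv=cv)
print(scores.mean())
```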
https://github.com/scikit-learn/scikit-learn/blob/main//doc/common_pitfalls.rst
.. currentmodule:: sklearn

.. _glossary:

=========================================
Glossary of Common Terms and API Elements
=========================================

This glossary hopes to definitively represent the tacit and explicit conventions applied in Scikit-learn and its API, while providing a reference for users and contributors. It aims to describe the concepts and either detail their corresponding API or link to other relevant parts of the documentation which do so. By linking to glossary entries from the API Reference and User Guide, we may minimize redundancy and inconsistency.

We begin by listing general concepts (and any that didn't fit elsewhere), but more specific sets of related terms are listed below: :ref:`glossary_estimator_types`, :ref:`glossary_target_types`, :ref:`glossary_methods`, :ref:`glossary_parameters`, :ref:`glossary_attributes`, :ref:`glossary_sample_props`.

General Concepts
================

.. glossary::

    1d
    1d array
        One-dimensional array. A NumPy array whose ``.shape`` has length 1. A vector.

    2d
    2d array
        Two-dimensional array. A NumPy array whose ``.shape`` has length 2. Often represents a matrix.

    API
        Refers to both the *specific* interfaces for estimators implemented in Scikit-learn and the *generalized* conventions across types of estimators as described in this glossary and :ref:`overviewed in the contributor documentation `.

        The specific interfaces that constitute Scikit-learn's public API are largely documented in :ref:`api_ref`. However, we less formally consider anything as public API if none of the identifiers required to access it begins with ``_``. We generally try to maintain :term:`backwards compatibility` for all objects in the public API.

        Private API, including functions, modules and methods beginning ``_`` are not assured to be stable.
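The ``1d`` / ``2d`` distinction above is simply the length of an array's ``.shape``; a quick illustration (the arrays here are arbitrary examples, not from the original text):

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])            # 1d array: a vector
M = np.array([[1.0, 2.0], [3.0, 4.0]])   # 2d array: a matrix

print(len(v.shape))  # 1
print(len(M.shape))  # 2
```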
array-like
    The most common data format for *input* to Scikit-learn estimators and functions, array-like is any type object for which :func:`numpy.asarray` will produce an array of appropriate shape (usually 1 or 2-dimensional) of appropriate dtype (usually numeric).

    This includes:

    * a numpy array
    * a list of numbers
    * a list of length-k lists of numbers for some fixed length k
    * a :class:`pandas.DataFrame` with all columns numeric
    * a numeric :class:`pandas.Series`

    Other array API inputs, but see :ref:`array_api` for the preferred way of using these:

    * a `PyTorch `_ tensor on 'cpu' device
    * a `JAX `_ array

    It excludes:

    * a :term:`sparse matrix`
    * a sparse array
    * an iterator
    * a generator

    Note that *output* from scikit-learn estimators and functions (e.g. predictions) should generally be arrays or sparse matrices, or lists thereof (as in multi-output :class:`tree.DecisionTreeClassifier`'s ``predict_proba``). An estimator where ``predict()`` returns a list or a `pandas.Series` is not valid.

attribute
attributes
    We mostly use attribute to refer to how model information is stored on an estimator during fitting. Any public attribute stored on an estimator instance is required to begin with an alphabetic character and end in a single underscore if it is set in :term:`fit` or :term:`partial_fit`. These are what is documented under an estimator's *Attributes* documentation. The information stored in attributes is usually either: sufficient statistics used for prediction or transformation; :term:`transductive` outputs such as :term:`labels_` or :term:`embedding_`; or diagnostic data, such as :term:`feature_importances_`. Common attributes are listed :ref:`below `.

    A public attribute may have the same name as a constructor :term:`parameter`, with a ``_`` appended. This is used to store a validated or estimated version of the user's input. For example, :class:`decomposition.PCA` is constructed with an ``n_components`` parameter.
From this, together with other parameters and the data, PCA estimates the attribute ``n\_components\_``. Further private attributes used in prediction/transformation/etc. may also be set when fitting. These begin with a single underscore and are not assured to be stable for public access. A public attribute on an estimator instance that does not end in an underscore should be the stored, unmodified value of an ``\_\_init\_\_`` :term:`parameter` of the same name. Because of this equivalence,
https://github.com/scikit-learn/scikit-learn/blob/main//doc/glossary.rst
these are documented under an estimator's *Parameters* documentation. backwards compatibility We generally try to maintain backward compatibility (i.e. interfaces and behaviors may be extended but not changed or removed) from release to release but this comes with some exceptions: Public API only The behavior of objects accessed through private identifiers (those beginning ``_``) may be changed arbitrarily between versions. As documented We will generally assume that the users have adhered to the documented parameter types and ranges. If the documentation asks for a list and the user gives a tuple, we do not assure consistent behavior from version to version. Deprecation Behaviors may change following a :term:`deprecation` period (usually two releases long). Warnings are issued using Python's :mod:`warnings` module. Keyword arguments We may sometimes assume that all optional parameters (other than X and y to :term:`fit` and similar methods) are passed as keyword arguments only and may be positionally reordered. Bug fixes and enhancements Bug fixes and -- less often -- enhancements may change the behavior of estimators, including the predictions of an estimator trained on the same data and :term:`random_state`. When this happens, we attempt to note it clearly in the changelog. Serialization We make no assurances that pickling an estimator in one version will allow it to be unpickled to an equivalent model in the subsequent version. (For estimators in the sklearn package, we issue a warning when this unpickling is attempted, even if it may happen to work.) See :ref:`persistence_limitations`.
:func:`utils.estimator_checks.check_estimator` We provide limited backwards compatibility assurances for the estimator checks: we may add extra requirements on estimators tested with this function, usually when these were informally assumed but not formally tested. Despite this informal contract with our users, the software is provided as is, as stated in the license. When a release inadvertently introduces changes that are not backward compatible, these are known as software regressions. callable A function, class or an object which implements the ``__call__`` method; anything for which the built-in `callable() `_ returns True. categorical feature A categorical or nominal :term:`feature` is one that has a finite set of discrete values across the population of data. These are commonly represented as columns of integers or strings. Strings will be rejected by most scikit-learn estimators, and integers will be treated as ordinal or count-valued. For use with most estimators, categorical variables should be one-hot encoded. Notable exceptions include tree-based models such as random forests and gradient boosting models that often work better and faster with integer-coded categorical variables. :class:`~sklearn.preprocessing.OrdinalEncoder` helps encode string-valued categorical features as ordinal integers, and :class:`~sklearn.preprocessing.OneHotEncoder` can be used to one-hot encode categorical features. See also :ref:`preprocessing_categorical_features` and the `categorical-encoding `_ package for tools related to encoding categorical features. clone cloned To copy an :term:`estimator instance` and create a new one with identical :term:`parameters`, but without any fitted :term:`attributes`, using :func:`~sklearn.base.clone`. When ``fit`` is called, a :term:`meta-estimator` usually clones a wrapped estimator instance before fitting the cloned instance.
(Exceptions, for legacy reasons, include :class:`~pipeline.Pipeline` and :class:`~pipeline.FeatureUnion`.) If the estimator's `random\_state` parameter is an integer (or if the estimator doesn't have a `random\_state` parameter), an \*exact clone\* is returned: the clone and the original estimator will give the exact same results. Otherwise, \*statistical clone\* is returned: the clone might yield different results from the original estimator. More details can be
found in :ref:`randomness`. common tests This refers to the tests run on almost every estimator class in Scikit-learn to check they comply with basic API conventions. They are available for external use through :func:`utils.estimator_checks.check_estimator` or :func:`utils.estimator_checks.parametrize_with_checks`, with most of the implementation in ``sklearn/utils/estimator_checks.py``. Note: Some exceptions to the common testing regime are currently hard-coded into the library, but we hope to replace this by marking exceptional behaviours on the estimator using semantic :term:`estimator tags`. cross-fitting cross fitting A resampling method that iteratively partitions data into mutually exclusive subsets to fit two stages. During the first stage, the mutually exclusive subsets enable predictions or transformations to be computed on data not seen during training. The computed data is then used in the second stage. The objective is to avoid having any overfitting in the first stage introduce bias into the input data distribution of the second stage. For examples of its use, see: :class:`~preprocessing.TargetEncoder`, :class:`~ensemble.StackingClassifier`, :class:`~ensemble.StackingRegressor` and :class:`~calibration.CalibratedClassifierCV`. cross-validation cross validation A resampling method that iteratively partitions data into mutually exclusive 'train' and 'test' subsets so model performance can be evaluated on unseen data. This conserves data as it avoids the need to hold out a 'validation' dataset and accounts for variability as multiple rounds of cross validation are generally performed.
See :ref:`User Guide ` for more details. deprecation We use deprecation to slowly violate our :term:`backwards compatibility` assurances, usually to: \* change the default value of a parameter; or \* remove a parameter, attribute, method, class, etc. We will ordinarily issue a warning when a deprecated element is used, although there may be limitations to this. For instance, we will raise a warning when someone sets a parameter that has been deprecated, but may not when they access that parameter's attribute on the estimator instance. See the :ref:`Contributors' Guide `. dimensionality May be used to refer to the number of :term:`features` (i.e. :term:`n\_features`), or columns in a 2d feature matrix. Dimensions are, however, also used to refer to the length of a NumPy array's shape, distinguishing a 1d array from a 2d matrix. docstring The embedded documentation for a module, class, function, etc., usually in code as a string at the beginning of the object's definition, and accessible as the object's ``\_\_doc\_\_`` attribute. We try to adhere to `PEP257 `\_, and follow `NumpyDoc conventions `\_. double underscore double underscore notation When specifying parameter names for nested estimators, ``\_\_`` may be used to separate between parent and child in some contexts. The most common use is when setting parameters through a meta-estimator with :term:`set\_params` and hence in specifying a search grid in :ref:`parameter search `. See :term:`parameter`. It is also used in :meth:`pipeline.Pipeline.fit` for passing :term:`sample properties` to the ``fit`` methods of estimators in the pipeline. dtype data type NumPy arrays assume a homogeneous data type throughout, available in the ``.dtype`` attribute of an array (or sparse matrix). We generally assume simple data types for scikit-learn data: float or integer. We may support object or string data types for arrays before encoding or vectorizing. Our estimators do not work with struct arrays, for instance. 
Our documentation can sometimes give information about the dtype precision, e.g. `np.int32`, `np.int64`, etc. When the precision is provided, it refers to the NumPy dtype. If an arbitrary precision is used, the documentation will refer to dtype
`integer` or `floating`. Note that in this case, the precision can be platform dependent. The `numeric` dtype refers to accepting both `integer` and `floating`. When it comes to choosing between 64-bit dtype (i.e. `np.float64` and `np.int64`) and 32-bit dtype (i.e. `np.float32` and `np.int32`), it boils down to a trade-off between efficiency and precision. The 64-bit types offer more accurate results due to their lower floating-point error, but demand more computational resources, resulting in slower operations and increased memory usage. In contrast, 32-bit types promise enhanced operation speed and reduced memory consumption, but introduce a larger floating-point error. The efficiency improvements are dependent on lower level optimization such as vectorization, single instruction multiple dispatch (SIMD), or cache optimization but crucially on the compatibility of the algorithm in use. Specifically, the choice of precision should account for whether the employed algorithm can effectively leverage `np.float32`. Some algorithms, especially certain minimization methods, are exclusively coded for `np.float64`, meaning that even if `np.float32` is passed, it triggers an automatic conversion back to `np.float64`. This not only negates the intended computational savings but also introduces additional overhead, making operations with `np.float32` unexpectedly slower and more memory-intensive due to this extra conversion step. duck typing We try to apply `duck typing `_ to determine how to handle some input values (e.g. checking whether a given estimator is a classifier).
That is, we avoid using ``isinstance`` where possible, and rely on the presence or absence of attributes to determine an object's behaviour. Some nuance is required when following this approach:

* For some estimators, an attribute may only be available once it is :term:`fitted`. For instance, we cannot a priori determine if :term:`predict_proba` is available in a grid search where the grid includes alternating between a probabilistic and a non-probabilistic predictor in the final step of the pipeline. In the following, we can only determine if ``clf`` is probabilistic after fitting it on some data::

      >>> from sklearn.model_selection import GridSearchCV
      >>> from sklearn.linear_model import SGDClassifier
      >>> clf = GridSearchCV(SGDClassifier(),
      ...                    param_grid={'loss': ['log_loss', 'hinge']})

  This means that we can only check for duck-typed attributes after fitting, and that we must be careful to make :term:`meta-estimators` only present attributes according to the state of the underlying estimator after fitting.

* Checking if an attribute is present (using ``hasattr``) is in general just as expensive as getting the attribute (``getattr`` or dot notation). In some cases, getting the attribute may indeed be expensive (e.g. for some implementations of :term:`feature_importances_`, which may suggest this is an API design flaw). So code which does ``hasattr`` followed by ``getattr`` should be avoided; ``getattr`` within a try-except block is preferred.

* For determining some aspects of an estimator's expectations or support for some feature, we use :term:`estimator tags` instead of duck typing.

early stopping This consists in stopping an iterative optimization method before the convergence of the training loss, to avoid over-fitting. This is generally done by monitoring the generalization score on a validation set. When available, it is activated through the parameter ``early_stopping`` or by setting a positive :term:`n_iter_no_change`.
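The early stopping mechanism described above can be sketched with :class:`~sklearn.linear_model.SGDClassifier`; the dataset and the specific parameter values here are illustrative choices:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=500, random_state=0)

# Hold out 20% of the training data as a validation set, and stop once the
# validation score fails to improve for n_iter_no_change consecutive epochs.
clf = SGDClassifier(
    early_stopping=True,
    validation_fraction=0.2,
    n_iter_no_change=5,
    max_iter=1000,
    random_state=0,
)
clf.fit(X, y)
print(clf.n_iter_)  # epochs actually run; at most max_iter
```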
estimator instance We sometimes use this terminology to distinguish an :term:`estimator` class from a constructed instance. For example, in the following, ``cls`` is an estimator class, while ``est1`` and ``est2`` are instances::

    cls = RandomForestClassifier
    est1 = cls()
    est2 = RandomForestClassifier()

examples We try
to give examples of basic usage for most functions and classes in the API: * as doctests in their docstrings (i.e. within the ``sklearn/`` library code itself). * as examples in the :ref:`example gallery ` rendered (using `sphinx-gallery `_) from scripts in the ``examples/`` directory, exemplifying key features or parameters of the estimator/function. These should also be referenced from the User Guide. * sometimes in the :ref:`User Guide ` (built from ``doc/``) alongside a technical description of the estimator. experimental An experimental tool is already usable but its public API, such as default parameter values or fitted attributes, is still subject to change in future versions without the usual :term:`deprecation` warning policy. evaluation metric evaluation metrics Evaluation metrics give a measure of how well a model performs. We may use this term specifically to refer to the functions in :mod:`~sklearn.metrics` (disregarding :mod:`~sklearn.metrics.pairwise`), as distinct from the :term:`score` method and the :term:`scoring` API used in cross validation. See :ref:`model_evaluation`. These functions usually accept a ground truth (or the raw data where the metric evaluates clustering without a ground truth) and a prediction, be it the output of :term:`predict` (``y_pred``), of :term:`predict_proba` (``y_proba``), or of an arbitrary score function including :term:`decision_function` (``y_score``). Functions are usually named to end with ``_score`` if a greater score indicates a better model, and ``_loss`` if a lesser score indicates a better model.
This diversity of interface motivates the scoring API. Note that some estimators can calculate metrics that are not included in :mod:`~sklearn.metrics` and are estimator-specific, notably model likelihoods. estimator tags Estimator tags describe certain capabilities of an estimator. This would enable some runtime behaviors based on estimator inspection, but it also allows each estimator to be tested for appropriate invariances while being excepted from other :term:`common tests`. Some aspects of estimator tags are currently determined through the :term:`duck typing` of methods like ``predict\_proba`` and through some special attributes on estimator objects: For more detailed info, see :ref:`estimator\_tags`. feature features feature vector In the abstract, a feature is a function (in its mathematical sense) mapping a sampled object to a numeric or categorical quantity. "Feature" is also commonly used to refer to these quantities, being the individual elements of a vector representing a sample. In a data matrix, features are represented as columns: each column contains the result of applying a feature function to a set of samples. Elsewhere features are known as attributes, predictors, regressors, or independent variables. Nearly all estimators in scikit-learn assume that features are numeric, finite and not missing, even when they have semantically distinct domains and distributions (categorical, ordinal, count-valued, real-valued, interval). See also :term:`categorical feature` and :term:`missing values`. ``n\_features`` indicates the number of features in a dataset. fitting Calling :term:`fit` (or :term:`fit\_transform`, :term:`fit\_predict`, etc.) on an estimator. fitted The state of an estimator after :term:`fitting`. There is no conventional procedure for checking if an estimator is fitted. However, an estimator that is not fitted: \* should raise :class:`exceptions.NotFittedError` when a prediction method (:term:`predict`, :term:`transform`, etc.) is called. 
(:func:`utils.validation.check\_is\_fitted` is used internally for this purpose.) \* should not have any :term:`attributes` beginning with an alphabetic character and ending with an underscore. (Note that a descriptor for the attribute may still be present on the class, but hasattr should return False) function We provide ad hoc function interfaces for many algorithms, while
:term:`estimator` classes provide a more consistent interface. In particular, Scikit-learn may provide a function interface that fits a model to some data and returns the learnt model parameters, as in :func:`linear_model.enet_path`. For transductive models, this also returns the embedding or cluster labels, as in :func:`manifold.spectral_embedding` or :func:`cluster.dbscan`. Many preprocessing transformers also provide a function interface, akin to calling :term:`fit_transform`, as in :func:`preprocessing.maxabs_scale`. Users should be careful to avoid :term:`data leakage` when making use of these ``fit_transform``-equivalent functions. We do not have a strict policy about when to or when not to provide function forms of estimators, but maintainers should consider consistency with existing interfaces, and whether providing a function would lead users astray from best practices (as regards data leakage, etc.) gallery See :term:`examples`. hyperparameter hyper-parameter See :term:`parameter`. impute imputation Most machine learning algorithms require that their inputs have no :term:`missing values`, and will not work if this requirement is violated. Algorithms that attempt to fill in (or impute) missing values are referred to as imputation algorithms. indexable An :term:`array-like`, :term:`sparse matrix`, pandas DataFrame or sequence (usually a list). induction inductive Inductive (contrasted with :term:`transductive`) machine learning builds a model of some data that can then be applied to new instances. Most estimators in Scikit-learn are inductive, having :term:`predict` and/or :term:`transform` methods.
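The ``fitted`` convention mentioned above (:class:`~sklearn.exceptions.NotFittedError` before fitting, trailing-underscore attributes after) can be sketched as follows; the choice of :class:`~sklearn.linear_model.LogisticRegression` is arbitrary:

```python
from sklearn.datasets import make_classification
from sklearn.exceptions import NotFittedError
from sklearn.linear_model import LogisticRegression
from sklearn.utils.validation import check_is_fitted

clf = LogisticRegression()

# Before fitting: check_is_fitted raises NotFittedError, as would any
# prediction method called on the unfitted estimator.
try:
    check_is_fitted(clf)
    fitted_before = True
except NotFittedError:
    fitted_before = False

X, y = make_classification(random_state=0)
clf.fit(X, y)
check_is_fitted(clf)  # passes silently once fitted
print(fitted_before, clf.coef_.shape)  # coef_ ends with an underscore
```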
joblib A Python library (https://joblib.readthedocs.io) used in Scikit-learn to facilitate simple parallelism and caching. Joblib is oriented towards efficiently working with numpy arrays, such as through use of :term:`memory mapping`. See :ref:`parallelism` for more information. label indicator format label indicator matrix multilabel indicator matrix multilabel indicator matrices This format can be used to represent binary or multilabel data. Each row of a 2d array or sparse matrix corresponds to a sample, each column corresponds to a class, and each element is 1 if the sample is labeled with the class and 0 if not. :ref:`LabelBinarizer ` can be used to create a multilabel indicator matrix from :term:`multiclass` labels. leakage data leakage A problem in cross validation where generalization performance can be over-estimated since knowledge of the test data was inadvertently included in training a model. This is a risk, for instance, when applying a :term:`transformer` to the entirety of a dataset rather than each training portion in a cross validation split. We aim to provide interfaces (such as :mod:`~sklearn.pipeline` and :mod:`~sklearn.model\_selection`) that shield the user from data leakage. memmapping memory map memory mapping A memory efficiency strategy that keeps data on disk rather than copying it into main memory. Memory maps can be created for arrays that can be read, written, or both, using :obj:`numpy.memmap`. When using :term:`joblib` to parallelize operations in Scikit-learn, it may automatically memmap large arrays to reduce memory duplication overhead in multiprocessing. missing values Most Scikit-learn estimators do not work with missing values. When they do (e.g. in :class:`impute.SimpleImputer`), NaN is the preferred representation of missing values in float arrays. If the array has integer dtype, NaN cannot be represented. 
For this reason, we support specifying another ``missing\_values`` value when :term:`imputation` or learning can be performed in integer space. :term:`Unlabeled data ` is a special case of missing values in the :term:`target`. ``n\_features`` The number of :term:`features`. ``n\_outputs`` The number of :term:`outputs` in the :term:`target`. ``n\_samples`` The number of :term:`samples`. ``n\_targets`` Synonym for :term:`n\_outputs`. narrative docs narrative documentation An alias for :ref:`User Guide `,
i.e. documentation written in ``doc/modules/``. Unlike the :ref:`API reference ` provided through docstrings, the User Guide aims to: * group tools provided by Scikit-learn together thematically or in terms of usage; * motivate why someone would use each particular tool, often through comparison; * provide both intuitive and technical descriptions of tools; * provide or link to :term:`examples` of using key features of a tool. np A shorthand for Numpy due to the conventional import statement:: import numpy as np ovo One-vs-one one-vs-one Method of decomposing a :term:`multiclass` problem into `n_classes * (n_classes - 1) / 2` :term:`binary` problems, one for each pairwise combination of classes. A metric is computed or a classifier is fitted for each pair combination. :class:`~sklearn.multiclass.OneVsOneClassifier` implements this method for binary classifiers. ovr One-vs-Rest one-vs-rest Method for decomposing a :term:`multiclass` problem into `n_classes` :term:`binary` problems. For each class a metric is computed or classifier fitted, with that class being treated as the positive class while all other classes are negative. :class:`~sklearn.multiclass.OneVsRestClassifier` implements this method for binary classifiers. online learning Where a model is iteratively updated by receiving each batch of ground truth :term:`targets` soon after making predictions on corresponding batch of data. Intrinsically, the model must be usable for prediction after each batch. See :term:`partial_fit`.
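The online learning pattern above can be sketched with :term:`partial_fit`; the synthetic mini-batch stream here is an illustrative assumption:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.RandomState(0)
clf = SGDClassifier(random_state=0)
classes = np.array([0, 1])  # all classes must be declared on the first call

# Stream the data in mini-batches; the model is usable after every batch.
for _ in range(10):
    X_batch = rng.normal(size=(20, 5))
    y_batch = (X_batch.sum(axis=1) > 0).astype(int)
    clf.partial_fit(X_batch, y_batch, classes=classes)

print(clf.predict(np.zeros((3, 5))))
```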
out-of-core An efficiency strategy where not all the data is stored in main memory at once, usually by performing learning on batches of data. See :term:`partial\_fit`. outputs Individual scalar/categorical variables per sample in the :term:`target`. For example, in multilabel classification each possible label corresponds to a binary output. Also called \*responses\*, \*tasks\* or \*targets\*. See :term:`multiclass multioutput` and :term:`continuous multioutput`. pair A tuple of length two. parameter parameters param params We mostly use \*parameter\* to refer to the aspects of an estimator that can be specified in its construction. For example, ``max\_depth`` and ``random\_state`` are parameters of :class:`~ensemble.RandomForestClassifier`. Parameters to an estimator's constructor are stored unmodified as attributes on the estimator instance, and conventionally start with an alphabetic character and end with an alphanumeric character. Each estimator's constructor parameters are described in the estimator's docstring. We do not use parameters in the statistical sense, where parameters are values that specify a model and can be estimated from data. What we call parameters might be what statisticians call hyperparameters to the model: aspects for configuring model structure that are often not directly learnt from data. However, our parameters are also used to prescribe modeling operations that do not affect the learnt model, such as :term:`n\_jobs` for controlling parallelism. When talking about the parameters of a :term:`meta-estimator`, we may also be including the parameters of the estimators wrapped by the meta-estimator. Ordinarily, these nested parameters are denoted by using a :term:`double underscore` (``\_\_``) to separate between the estimator-as-parameter and its parameter. 
Thus ``clf = BaggingClassifier(estimator=DecisionTreeClassifier(max\_depth=3))`` has a deep parameter ``estimator\_\_max\_depth`` with value ``3``, which is accessible with ``clf.estimator.max\_depth`` or ``clf.get\_params()['estimator\_\_max\_depth']``. The list of parameters and their current values can be retrieved from an :term:`estimator instance` using its :term:`get\_params` method. Between construction and fitting, parameters may be modified using :term:`set\_params`. To enable this, parameters are not ordinarily validated or altered when the estimator is constructed, or when each parameter is set. Parameter validation is performed when :term:`fit` is called. Common parameters are listed :ref:`below `. pairwise metric pairwise metrics In its broad sense, a
https://github.com/scikit-learn/scikit-learn/blob/main//doc/glossary.rst
pairwise metric defines a function for measuring similarity or dissimilarity between two samples (with each ordinarily represented as a :term:`feature vector`). We particularly provide implementations of distance metrics (as well as improper metrics like Cosine Distance) through :func:`metrics.pairwise\_distances`, and of kernel functions (a constrained class of similarity functions) in :func:`metrics.pairwise.pairwise\_kernels`. These can compute pairwise distance matrices that are symmetric and hence store data redundantly. See also :term:`precomputed` and :term:`metric`. Note that for most distance metrics, we rely on implementations from :mod:`scipy.spatial.distance`, but may reimplement for efficiency in our context. The :class:`metrics.DistanceMetric` interface is used to implement distance metrics for integration with efficient neighbors search. pd A shorthand for `Pandas `\_ due to the conventional import statement:: import pandas as pd precomputed Where algorithms rely on :term:`pairwise metrics`, and can be computed from pairwise metrics alone, we often allow the user to specify that the :term:`X` provided is already in the pairwise (dis)similarity space, rather than in a feature space. That is, when passed to :term:`fit`, it is a square, symmetric matrix, with each vector indicating (dis)similarity to every sample, and when passed to prediction/transformation methods, each row corresponds to a testing sample and each column to a training sample. Use of precomputed X is usually indicated by setting a ``metric``, ``affinity`` or ``kernel`` parameter to the string 'precomputed'. 
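A minimal sketch of the precomputed convention; :class:`~sklearn.cluster.DBSCAN` is chosen purely for illustration, as one of the estimators accepting ``metric="precomputed"``:

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.metrics import pairwise_distances

X = np.random.RandomState(0).rand(20, 3)

# Precompute the square, symmetric distance matrix once...
D = pairwise_distances(X, metric="euclidean")

# ...then tell the estimator that its input is already in (dis)similarity space
db = DBSCAN(eps=0.5, metric="precomputed").fit(D)
print(db.labels_.shape)  # (20,)
```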
If this is the case, then the estimator should set the `pairwise` estimator tag to True. rectangular Data that can be represented as a matrix with :term:`samples` on the first axis and a fixed, finite set of :term:`features` on the second is called rectangular. This term excludes samples with non-vectorial structures, such as text, an image of arbitrary size, a time series of arbitrary length, a set of vectors, etc. The purpose of a :term:`vectorizer` is to produce rectangular forms of such data. sample samples We usually use this term as a noun to indicate a single feature vector. Elsewhere a sample is called an instance, data point, or observation. ``n\_samples`` indicates the number of samples in a dataset, being the number of rows in a data array :term:`X`. Note that this definition is standard in machine learning and deviates from statistics where it means \*a set of individuals or objects collected or selected\*. sample property sample properties A sample property is data for each sample (e.g. an array of length n\_samples) passed to an estimator method or a similar function, alongside but distinct from the :term:`features` (``X``) and :term:`target` (``y``). The most prominent example is :term:`sample\_weight`; see others at :ref:`glossary\_sample\_props`. As of version 0.19 we do not have a consistent approach to handling sample properties and their routing in :term:`meta-estimators`, though a ``fit\_params`` parameter is often used. scikit-learn-contrib A venue for publishing Scikit-learn-compatible libraries that are broadly authorized by the core developers and the contrib community, but not maintained by the core developer team. See https://scikit-learn-contrib.github.io. scikit-learn enhancement proposals SLEP SLEPs Changes to the API principles and changes to dependencies or supported versions happen via a :ref:`SLEP ` and follow the decision-making process outlined in :ref:`governance`. 
For all votes, a proposal must have been made public and discussed before the vote. Such a proposal must be a consolidated document, in the form of a "Scikit-Learn Enhancement Proposal" (SLEP), rather than a long discussion on an issue. A SLEP must be submitted
as a pull-request to `enhancement proposals `\_ using the `SLEP template `\_. semi-supervised semi-supervised learning semisupervised Learning where the expected prediction (label or ground truth) is only available for some samples provided as training data when :term:`fitting` the model. We conventionally apply the label ``-1`` to :term:`unlabeled` samples in semi-supervised classification. sparse matrix sparse graph A representation of two-dimensional numeric data that is more memory efficient than the corresponding dense numpy array where almost all elements are zero. We use the :mod:`scipy.sparse` framework, which provides several underlying sparse data representations, or \*formats\*. Some formats are more efficient than others for particular tasks, and when a particular format provides especial benefit, we try to document this fact in Scikit-learn parameter descriptions. Some sparse matrix formats (notably CSR, CSC, COO and LIL) distinguish between \*implicit\* and \*explicit\* zeros. Explicit zeros are stored (i.e. they consume memory in a ``data`` array) in the data structure, while implicit zeros correspond to every element not otherwise defined in explicit storage. Two semantics for sparse matrices are used in Scikit-learn: matrix semantics The sparse matrix is interpreted as an array with implicit and explicit zeros being interpreted as the number 0. This is the interpretation most often adopted, e.g. when sparse matrices are used for feature matrices or :term:`multilabel indicator matrices`. 
graph semantics As with :mod:`scipy.sparse.csgraph`, explicit zeros are interpreted as the number 0, but implicit zeros indicate a masked or absent value, such as the absence of an edge between two vertices of a graph, where an explicit value indicates an edge's weight. This interpretation is adopted to represent connectivity in clustering, in representations of nearest neighborhoods (e.g. :func:`neighbors.kneighbors\_graph`), and for precomputed distance representation where only distances in the neighborhood of each point are required. When working with sparse matrices, we assume that it is sparse for a good reason, and avoid writing code that densifies a user-provided sparse matrix, instead maintaining sparsity or raising an error if not possible (i.e. if an estimator does not / cannot support sparse matrices). stateless An estimator is stateless if it does not store any information that is obtained during :term:`fit`. This information can be either parameters learned during :term:`fit` or statistics computed from the training data. An estimator is stateless if it has no :term:`attributes` apart from ones set in `\_\_init\_\_`. Calling :term:`fit` for these estimators will only validate the public :term:`attributes` passed in `\_\_init\_\_`. supervised supervised learning Learning where the expected prediction (label or ground truth) is available for each sample when :term:`fitting` the model, provided as :term:`y`. This is the approach taken in a :term:`classifier` or :term:`regressor` among other estimators. target targets The \*dependent variable\* in :term:`supervised` (and :term:`semisupervised`) learning, passed as :term:`y` to an estimator's :term:`fit` method. Also known as \*dependent variable\*, \*outcome variable\*, \*response variable\*, \*ground truth\* or \*label\*. Scikit-learn works with targets that have minimal structure: a class from a finite set, a finite real-valued number, multiple classes, or multiple numbers. 
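The explicit/implicit zero distinction in the sparse matrix entry above can be illustrated with :mod:`scipy.sparse` (a minimal sketch):

```python
import numpy as np
from scipy import sparse

row = np.array([0, 1, 2])
col = np.array([0, 1, 2])
data = np.array([1.0, 0.0, 3.0])  # the 0.0 at (1, 1) is an *explicit* zero

M = sparse.coo_matrix((data, (row, col)), shape=(3, 3))

# All three entries are stored, explicit zero included;
# the remaining six elements are implicit zeros and consume no memory.
print(M.nnz)  # 3
```

Under matrix semantics both kinds of zero mean the number 0; under graph semantics an implicit zero would instead mean "no edge".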
See :ref:`glossary\_target\_types`. transduction transductive A transductive (contrasted with :term:`inductive`) machine learning method is designed to model a specific dataset, but not to apply that model to unseen data. Examples include :class:`manifold.TSNE`, :class:`cluster.AgglomerativeClustering` and :class:`neighbors.LocalOutlierFactor`. unlabeled unlabeled data Samples with an unknown ground truth when fitting; equivalently, :term:`missing values` in the :term:`target`. See also :term:`semisupervised` and :term:`unsupervised` learning. unsupervised unsupervised learning Learning
where the expected prediction (label or ground truth) is not available for each sample when :term:`fitting` the model, as in :term:`clusterers` and :term:`outlier detectors`. Unsupervised estimators ignore any :term:`y` passed to :term:`fit`. .. \_glossary\_estimator\_types: Class APIs and Estimator Types ============================== .. glossary:: classifier classifiers A :term:`supervised` (or :term:`semi-supervised`) :term:`predictor` with a finite set of discrete possible output values. A classifier supports modeling some of :term:`binary`, :term:`multiclass`, :term:`multilabel`, or :term:`multiclass multioutput` targets. Within scikit-learn, all classifiers support multi-class classification, defaulting to using a one-vs-rest strategy over the binary classification problem. Classifiers must store a :term:`classes\_` attribute after fitting, and inherit from :class:`base.ClassifierMixin`, which sets their corresponding :term:`estimator tags` correctly. A classifier can be distinguished from other estimators with :func:`~base.is\_classifier`. A classifier must implement: \* :term:`fit` \* :term:`predict` \* :term:`score` It may also be appropriate to implement :term:`decision\_function`, :term:`predict\_proba` and :term:`predict\_log\_proba`. clusterer clusterers An :term:`unsupervised` :term:`predictor` with a finite set of discrete output values. A clusterer usually stores :term:`labels\_` after fitting, and must do so if it is :term:`transductive`. 
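A minimal sketch of distinguishing estimator types with :func:`~base.is\_classifier`, as mentioned in the classifier entry above:

```python
from sklearn.base import is_classifier, is_regressor
from sklearn.linear_model import LinearRegression, LogisticRegression

print(is_classifier(LogisticRegression()))  # True
print(is_classifier(LinearRegression()))    # False
print(is_regressor(LinearRegression()))     # True
```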
A clusterer must implement: \* :term:`fit` \* :term:`fit\_predict` if :term:`transductive` \* :term:`predict` if :term:`inductive` density estimator An :term:`unsupervised` estimation of input probability density function. Commonly used techniques are: \* :ref:`kernel\_density` - uses a kernel function, controlled by the bandwidth parameter to represent density; \* :ref:`Gaussian mixture ` - uses mixture of Gaussian models to represent density. estimator estimators An object which manages the estimation and decoding of a model. The model is estimated as a deterministic function of: \* :term:`parameters` provided in object construction or with :term:`set\_params`; \* the global :mod:`numpy.random` random state if the estimator's :term:`random\_state` parameter is set to None; and \* any data or :term:`sample properties` passed to the most recent call to :term:`fit`, :term:`fit\_transform` or :term:`fit\_predict`, or data similarly passed in a sequence of calls to :term:`partial\_fit`. The estimated model is stored in public and private :term:`attributes` on the estimator instance, facilitating decoding through prediction and transformation methods. Estimators must provide a :term:`fit` method, and should provide :term:`set\_params` and :term:`get\_params`, although these are usually provided by inheritance from :class:`base.BaseEstimator`. The core functionality of some estimators may also be available as a :term:`function`. feature extractor feature extractors A :term:`transformer` which takes input where each sample is not represented as an :term:`array-like` object of fixed length, and produces an :term:`array-like` object of :term:`features` for each sample (and thus a 2-dimensional array-like for a set of samples). In other words, it (lossily) maps a non-rectangular data representation into :term:`rectangular` data. 
Feature extractors must implement at least: \* :term:`fit` \* :term:`transform` \* :term:`get\_feature\_names\_out` meta-estimator meta-estimators metaestimator metaestimators An :term:`estimator` which takes another estimator as a parameter. Examples include :class:`pipeline.Pipeline`, :class:`model\_selection.GridSearchCV`, :class:`feature\_selection.SelectFromModel` and :class:`ensemble.BaggingClassifier`. In a meta-estimator's :term:`fit` method, any contained estimators should be :term:`cloned` before they are fit. .. FIXME: Pipeline and FeatureUnion do not do this currently An exception to this is that an estimator may explicitly document that it accepts a pre-fitted estimator (e.g. using ``prefit=True`` in :class:`feature\_selection.SelectFromModel`). One known issue with this is that the pre-fitted estimator will lose its model if the meta-estimator is cloned. A meta-estimator should have ``fit`` called before prediction, even if all contained estimators are pre-fitted. In cases where a meta-estimator's primary behaviors (e.g. :term:`predict` or :term:`transform` implementation) are functions of prediction/transformation methods of the provided \*base estimator\* (or multiple base estimators), a meta-estimator should
provide at least the standard methods provided by the base estimator. It may not be possible to identify which methods are provided by the underlying estimator until the meta-estimator has been :term:`fitted` (see also :term:`duck typing`), for which :func:`utils.metaestimators.available\_if` may help. It should also provide (or modify) the :term:`estimator tags` and :term:`classes\_` attribute provided by the base estimator. Meta-estimators should be careful to validate data as minimally as possible before passing it to an underlying estimator. This saves computation time, and may, for instance, allow the underlying estimator to easily work with data that is not :term:`rectangular`. outlier detector outlier detectors An :term:`unsupervised` binary :term:`predictor` which models the distinction between core and outlying samples. Outlier detectors must implement: \* :term:`fit` \* :term:`fit\_predict` if :term:`transductive` \* :term:`predict` if :term:`inductive` Inductive outlier detectors may also implement :term:`decision\_function` to give a normalized inlier score where outliers have score below 0. :term:`score\_samples` may provide an unnormalized score per sample. predictor predictors An :term:`estimator` supporting :term:`predict` and/or :term:`fit\_predict`. This encompasses :term:`classifier`, :term:`regressor`, :term:`outlier detector` and :term:`clusterer`. In statistics, "predictors" refers to :term:`features`. regressor regressors A :term:`supervised` (or :term:`semi-supervised`) :term:`predictor` with :term:`continuous` output values. 
Regressors inherit from :class:`base.RegressorMixin`, which sets their :term:`estimator tags` correctly. A regressor can be distinguished from other estimators with :func:`~base.is\_regressor`. A regressor must implement: \* :term:`fit` \* :term:`predict` \* :term:`score` transformer transformers An estimator supporting :term:`transform` and/or :term:`fit\_transform`. A purely :term:`transductive` transformer, such as :class:`manifold.TSNE`, may not implement ``transform``. vectorizer vectorizers See :term:`feature extractor`. There are further APIs specifically related to a small family of estimators, such as: .. glossary:: cross-validation splitter CV splitter cross-validation generator A non-estimator family of classes used to split a dataset into a sequence of train and test portions (see :ref:`cross\_validation`), by providing :term:`split` and :term:`get\_n\_splits` methods. Note that unlike estimators, these do not have :term:`fit` methods and do not provide :term:`set\_params` or :term:`get\_params`. Parameter validation may be performed in ``\_\_init\_\_``. cross-validation estimator An estimator that has built-in cross-validation capabilities to automatically select the best hyper-parameters (see the :ref:`User Guide `). Some examples of cross-validation estimators are :class:`ElasticNetCV ` and :class:`LogisticRegressionCV `. Cross-validation estimators are named `EstimatorCV` and tend to be roughly equivalent to `GridSearchCV(Estimator(), ...)`. The advantage of using a cross-validation estimator over the canonical :term:`estimator` class along with :ref:`grid search ` is that they can take advantage of warm-starting by reusing precomputed results in the previous steps of the cross-validation process. This generally leads to speed improvements. An exception is the :class:`RidgeCV ` class, which can instead perform efficient Leave-One-Out (LOO) CV. 
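A minimal sketch of a cross-validation estimator; :class:`ElasticNetCV` selects its own regularization strength internally (the synthetic dataset is purely for illustration):

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNetCV

X, y = make_regression(n_samples=100, n_features=5, noise=1.0, random_state=0)

# The CV estimator searches its own regularization path internally,
# reusing warm-started results across the path
reg = ElasticNetCV(cv=3).fit(X, y)
print(reg.alpha_)  # the penalty strength chosen by cross-validation
```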
By default, all these estimators, apart from :class:`RidgeCV ` with an LOO-CV, will be refitted on the full training dataset after finding the best combination of hyper-parameters. scorer A non-estimator callable object which evaluates an estimator on given test data, returning a number. Unlike :term:`evaluation metrics`, a greater returned number must correspond with a \*better\* score. See :ref:`scoring\_parameter`. Further examples: \* :class:`metrics.DistanceMetric` \* :class:`gaussian\_process.kernels.Kernel` \* ``tree.Criterion`` .. \_glossary\_metadata\_routing: Metadata Routing ================ .. glossary:: consumer An object which consumes :term:`metadata`. This object is usually an :term:`estimator`, a :term:`scorer`, or a :term:`CV splitter`. Consuming metadata means using it in calculations, e.g. using :term:`sample\_weight` to calculate a certain type of score. Being a consumer doesn't mean that the object always receives a certain metadata, rather it means it can use it if it is provided. metadata
Data which is related to the given :term:`X` and :term:`y` data, but is not directly a part of the data, e.g. :term:`sample\_weight` or :term:`groups`, and is passed along to different objects and methods, e.g. to a :term:`scorer` or a :term:`CV splitter`. router An object which routes metadata to :term:`consumers `. This object is usually a :term:`meta-estimator`, e.g. :class:`~pipeline.Pipeline` or :class:`~model\_selection.GridSearchCV`. Some routers can also be a consumer. This happens for example when a meta-estimator uses the given :term:`groups`, and it also passes it along to some of its sub-objects, such as a :term:`CV splitter`. Please refer to :ref:`Metadata Routing User Guide ` for more information. .. \_glossary\_target\_types: Target Types ============ .. glossary:: binary A classification problem consisting of two classes. A binary target may be represented as for a :term:`multiclass` problem but with only two labels. A binary decision function is represented as a 1d array. Semantically, one class is often considered the "positive" class. Unless otherwise specified (e.g. using :term:`pos\_label` in :term:`evaluation metrics`), we consider the class label with the greater value (numerically or lexicographically) as the positive class: of labels [0, 1], 1 is the positive class; of [1, 2], 2 is the positive class; of ['no', 'yes'], 'yes' is the positive class; of ['no', 'YES'], 'no' is the positive class. This affects the output of :term:`decision\_function`, for instance. Note that a dataset sampled from a multiclass ``y`` or a continuous ``y`` may appear to be binary. 
:func:`~utils.multiclass.type\_of\_target` will return 'binary' for binary input, or a similar array with only a single class present. continuous A regression problem where each sample's target is a finite floating point number represented as a 1-dimensional array of floats (or sometimes ints). :func:`~utils.multiclass.type\_of\_target` will return 'continuous' for continuous input, but if the data is all integers, it will be identified as 'multiclass'. continuous multioutput continuous multi-output multioutput continuous multi-output continuous A regression problem where each sample's target consists of ``n\_outputs`` :term:`outputs`, each one a finite floating point number, for a fixed int ``n\_outputs > 1`` in a particular dataset. Continuous multioutput targets are represented as multiple :term:`continuous` targets, horizontally stacked into an array of shape ``(n\_samples, n\_outputs)``. :func:`~utils.multiclass.type\_of\_target` will return 'continuous-multioutput' for continuous multioutput input, but if the data is all integers, it will be identified as 'multiclass-multioutput'. multiclass multi-class A classification problem consisting of more than two classes. A multiclass target may be represented as a 1-dimensional array of strings or integers. A 2d column vector of integers (i.e. a single output in :term:`multioutput` terms) is also accepted. We do not officially support other orderable, hashable objects as class labels, even if estimators may happen to work when given classification targets of such type. For semi-supervised classification, :term:`unlabeled` samples should have the special label -1 in ``y``. Within scikit-learn, all estimators supporting binary classification also support multiclass classification, using One-vs-Rest by default. A :class:`preprocessing.LabelEncoder` helps to canonicalize multiclass targets as integers. :func:`~utils.multiclass.type\_of\_target` will return 'multiclass' for multiclass input. 
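The behaviour of :func:`~utils.multiclass.type\_of\_target` on the target types above can be sketched as:

```python
from sklearn.utils.multiclass import type_of_target

print(type_of_target([0, 1, 1, 0]))      # 'binary'
print(type_of_target(['a', 'b', 'c']))   # 'multiclass'
print(type_of_target([0.5, 1.2, 3.4]))   # 'continuous'
print(type_of_target([[0, 1], [1, 1]]))  # 'multilabel-indicator'
```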
The user may also want to handle 'binary' input identically to 'multiclass'. multiclass multioutput multi-class multi-output multioutput multiclass multi-output multi-class A classification problem where each sample's target consists of ``n\_outputs`` :term:`outputs`, each a class label, for a fixed int ``n\_outputs > 1`` in a particular dataset. Each output has a fixed set of available classes, and each sample is labeled with a class
for each output. An output may be binary or multiclass, and in the case where all outputs are binary, the target is :term:`multilabel`. Multiclass multioutput targets are represented as multiple :term:`multiclass` targets, horizontally stacked into an array of shape ``(n\_samples, n\_outputs)``. Note: For simplicity, we may not always support string class labels for multiclass multioutput, and integer class labels should be used. :mod:`~sklearn.multioutput` provides estimators which estimate multi-output problems using multiple single-output estimators. This may not fully account for dependencies among the different outputs, which methods natively handling the multioutput case (e.g. decision trees, nearest neighbors, neural networks) may do better. :func:`~utils.multiclass.type\_of\_target` will return 'multiclass-multioutput' for multiclass multioutput input. multilabel multi-label A :term:`multiclass multioutput` target where each output is :term:`binary`. This may be represented as a 2d (dense) array or sparse matrix of integers, such that each column is a separate binary target, where positive labels are indicated with 1 and negative labels are usually -1 or 0. Sparse multilabel targets are not supported everywhere that dense multilabel targets are supported. Semantically, a multilabel target can be thought of as a set of labels for each sample. While not used internally, :class:`preprocessing.MultiLabelBinarizer` is provided as a utility to convert from a list of sets representation to a 2d array or sparse matrix. 
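A minimal sketch of converting a list-of-sets representation with :class:`preprocessing.MultiLabelBinarizer`, as described above (the label names are purely illustrative):

```python
from sklearn.preprocessing import MultiLabelBinarizer

mlb = MultiLabelBinarizer()
# Each sample carries a *set* of labels; the result is an indicator matrix
Y = mlb.fit_transform([{"news", "sports"}, {"news"}, set()])

print(list(mlb.classes_))  # ['news', 'sports']
print(Y.tolist())          # [[1, 1], [1, 0], [0, 0]]
```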
One-hot encoding a multiclass target with :class:`preprocessing.LabelBinarizer` turns it into a multilabel problem. :func:`~utils.multiclass.type\_of\_target` will return 'multilabel-indicator' for multilabel input, whether sparse or dense. multioutput multi-output A target where each sample has multiple classification/regression labels. See :term:`multiclass multioutput` and :term:`continuous multioutput`. We do not currently support modelling mixed classification and regression targets. .. \_glossary\_methods: Methods ======= .. glossary:: ``decision\_function`` In a fitted :term:`classifier` or :term:`outlier detector`, predicts a "soft" score for each sample in relation to each class, rather than the "hard" categorical prediction produced by :term:`predict`. Its input is usually only some observed data, :term:`X`. If the estimator was not already :term:`fitted`, calling this method should raise a :class:`exceptions.NotFittedError`. Output conventions: binary classification A 1-dimensional array, where values strictly greater than zero indicate the positive class (i.e. the last class in :term:`classes\_`). multiclass classification A 2-dimensional array, where the row-wise arg-maximum is the predicted class. Columns are ordered according to :term:`classes\_`. multilabel classification Scikit-learn is inconsistent in its representation of :term:`multilabel` decision functions. It may be represented one of two ways: - List of 2d arrays, each array of shape: (`n\_samples`, 2), like in multiclass multioutput. List is of length `n\_labels`. - Single 2d array of shape (`n\_samples`, `n\_labels`), with each 'column' in the array corresponding to the individual binary classification decisions. This is identical to the multiclass classification format, though its semantics differ: it should be interpreted, like in the binary case, by thresholding at 0. multioutput classification A list of 2d arrays, corresponding to each multiclass decision function. 
outlier detection A 1-dimensional array, where a value greater than or equal to zero indicates an inlier. ``fit`` The ``fit`` method is provided on every estimator. It usually takes some :term:`samples` ``X``, :term:`targets` ``y`` if the model is supervised, and potentially other :term:`sample properties` such as :term:`sample\_weight`. It should: \* clear any prior :term:`attributes` stored on the estimator, unless :term:`warm\_start` is used; \* validate and interpret any :term:`parameters`, ideally raising an error if invalid; \* validate the input data; \* estimate and store model attributes from the
estimated parameters and provided data; and \* return the now :term:`fitted` estimator to facilitate method chaining. :ref:`glossary\_target\_types` describes possible formats for ``y``. ``fit\_predict`` Used especially for :term:`unsupervised`, :term:`transductive` estimators, this fits the model and returns the predictions (similar to :term:`predict`) on the training data. In clusterers, these predictions are also stored in the :term:`labels\_` attribute, and the output of ``.fit\_predict(X)`` is usually equivalent to ``.fit(X).predict(X)``. The parameters to ``fit\_predict`` are the same as those to ``fit``. ``fit\_transform`` A method on :term:`transformers` which fits the estimator and returns the transformed training data. It takes parameters as in :term:`fit` and its output should have the same shape as calling ``.fit(X, ...).transform(X)``. There are nonetheless rare cases where ``.fit\_transform(X, ...)`` and ``.fit(X, ...).transform(X)`` do not return the same value, wherein training data needs to be handled differently (due to model blending in stacked ensembles, for instance; such cases should be clearly documented). :term:`Transductive ` transformers may also provide ``fit\_transform`` but not :term:`transform`. One reason to implement ``fit\_transform`` is that performing ``fit`` and ``transform`` separately would be less efficient than together. :class:`base.TransformerMixin` provides a default implementation, providing a consistent interface across transformers where ``fit\_transform`` is or is not specialized. 
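A minimal sketch of the ``fit_transform`` equivalence described above; :class:`~sklearn.preprocessing.StandardScaler` is chosen purely for illustration:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[0.0], [2.0], [4.0]])

Xt = StandardScaler().fit_transform(X)

# For most transformers this matches fit followed by transform
same = np.allclose(Xt, StandardScaler().fit(X).transform(X))
print(same)  # True
```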
    In :term:`inductive` learning -- where the goal is to learn a generalized
    model that can be applied to new data -- users should be careful not to
    apply ``fit\_transform`` to the entirety of a dataset (i.e. training and
    test data together) before further modelling, as this results in
    :term:`data leakage`.

``get\_feature\_names\_out``
    Primarily for :term:`feature extractors`, but also used for other
    transformers to provide string names for each column in the output of the
    estimator's :term:`transform` method. It outputs an array of strings and
    may take an array-like of strings as input, corresponding to the names of
    input columns from which output column names can be generated. If
    `input\_features` is not passed in, then the `feature\_names\_in\_`
    attribute will be used. If the `feature\_names\_in\_` attribute is not
    defined, then the input names are named
    `[x0, x1, ..., x(n\_features\_in\_ - 1)]`.

``get\_n\_splits``
    On a :term:`CV splitter` (not an estimator), returns the number of
    elements one would get if iterating through the return value of
    :term:`split` given the same parameters. Takes the same parameters as
    ``split``.

``get\_params``
    Gets all :term:`parameters`, and their values, that can be set using
    :term:`set\_params`. A parameter ``deep`` can be used, when set to False,
    to only return those parameters not including ``\_\_``, i.e. not due to
    indirection via contained estimators. Most estimators adopt the definition
    from :class:`base.BaseEstimator`, which simply adopts the parameters
    defined for ``\_\_init\_\_``. :class:`pipeline.Pipeline`, among others,
    reimplements ``get\_params`` to declare the estimators named in its
    ``steps`` parameter as themselves being parameters.

``partial\_fit``
    Facilitates fitting an estimator in an online fashion. Unlike ``fit``,
    repeatedly calling ``partial\_fit`` does not clear the model, but updates
    it with the data provided. The portion of data provided to
    ``partial\_fit`` may be called a mini-batch.
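The ``get\_params``/``set\_params`` contract, including the ``\_\_``-separated keys used for nested estimators, can be sketched as follows (an illustrative snippet; the pipeline contents are arbitrary):

```python
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

pipe = Pipeline([("scale", StandardScaler()), ("clf", LogisticRegression(C=2.0))])

shallow = pipe.get_params(deep=False)  # only the Pipeline's own parameters
deep = pipe.get_params(deep=True)      # includes nested ones via "__"

assert "steps" in shallow
assert "clf__C" in deep and deep["clf__C"] == 2.0

# set_params mirrors get_params keys:
pipe.set_params(clf__C=0.5)
assert pipe.get_params()["clf__C"] == 0.5
```
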
    Each mini-batch must be of consistent shape, etc. In iterative estimators,
    ``partial\_fit`` often only performs a single iteration.

    ``partial\_fit`` may also be used for :term:`out-of-core` learning,
    although usually limited to the case where learning can be performed
    online, i.e. the model is usable after each ``partial\_fit`` and there is
    no separate processing needed to finalize the model.
    :class:`cluster.Birch` introduces the convention that calling
    ``partial\_fit(X)`` will produce a model that is not finalized, but
https://github.com/scikit-learn/scikit-learn/blob/main//doc/glossary.rst
    the model can be finalized by calling ``partial\_fit()``, i.e. without
    passing a further mini-batch.

    Generally, estimator parameters should not be modified between calls to
    ``partial\_fit``, although ``partial\_fit`` should validate them as well
    as the new mini-batch of data. In contrast, ``warm\_start`` is used to
    repeatedly fit the same estimator with the same data but varying
    parameters.

    Like ``fit``, ``partial\_fit`` should return the estimator object.

    To clear the model, a new estimator should be constructed, for instance
    with :func:`base.clone`.

    Note: Using ``partial\_fit`` after ``fit`` results in undefined behavior.

``predict``
    Makes a prediction for each sample, usually only taking :term:`X` as input
    (but see under regressor output conventions below). In a
    :term:`classifier` or :term:`regressor`, this prediction is in the same
    target space used in fitting (e.g. one of {'red', 'amber', 'green'} if the
    ``y`` in fitting consisted of these strings). Despite this, even when
    ``y`` passed to :term:`fit` is a list or other array-like, the output of
    ``predict`` should always be an array or sparse matrix. In a
    :term:`clusterer` or :term:`outlier detector` the prediction is an
    integer. If the estimator was not already :term:`fitted`, calling this
    method should raise a :class:`exceptions.NotFittedError`.

    Output conventions:

    classifier
        An array of shape ``(n\_samples,)`` or ``(n\_samples, n\_outputs)``.
        :term:`Multilabel ` data may be represented as a sparse matrix if a
        sparse matrix was used in fitting. Each element should be one of the
        values in the classifier's :term:`classes\_` attribute.
    clusterer
        An array of shape ``(n\_samples,)`` where each value is from 0 to
        ``n\_clusters - 1`` if the corresponding sample is clustered, and -1
        if the sample is not clustered, as in :func:`cluster.dbscan`.

    outlier detector
        An array of shape ``(n\_samples,)`` where each value is -1 for an
        outlier and 1 otherwise.

    regressor
        A numeric array of shape ``(n\_samples,)``, usually float64. Some
        regressors have extra options in their ``predict`` method, allowing
        them to return standard deviation (``return\_std=True``) or covariance
        (``return\_cov=True``) relative to the predicted value. In this case,
        the return value is a tuple of arrays corresponding to (prediction
        mean, std, cov) as required.

``predict\_log\_proba``
    The natural logarithm of the output of :term:`predict\_proba`, provided to
    facilitate numerical stability.

``predict\_proba``
    A method in :term:`classifiers` and :term:`clusterers` that can return
    probability estimates for each class/cluster. Its input is usually only
    some observed data, :term:`X`. If the estimator was not already
    :term:`fitted`, calling this method should raise a
    :class:`exceptions.NotFittedError`.

    Output conventions are like those for :term:`decision\_function` except in
    the :term:`binary` classification case, where one column is output for
    each class (while ``decision\_function`` outputs a 1d array). For binary
    and multiclass predictions, each row should add to 1.

    Like other methods, ``predict\_proba`` should only be present when the
    estimator can make probabilistic predictions (see :term:`duck typing`).
    This means that the presence of the method may depend on estimator
    parameters (e.g. in :class:`linear\_model.SGDClassifier`) or training data
    (e.g. in :class:`model\_selection.GridSearchCV`) and may only appear after
    fitting.

``score``
    A method on an estimator, usually a :term:`predictor`, which evaluates its
    predictions on a given dataset, and returns a single numerical score.
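The ``predict`` and ``predict\_proba`` output conventions above can be sketched as follows. This is an illustrative snippet, not part of the glossary; the estimator and toy data are arbitrary:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0.0], [1.0], [2.0], [3.0], [4.0], [5.0]])
y = np.array([0, 0, 1, 1, 2, 2])

clf = LogisticRegression().fit(X, y)
proba = clf.predict_proba(X)

assert proba.shape == (6, len(clf.classes_))    # one column per class
assert np.allclose(proba.sum(axis=1), 1.0)      # each row is a distribution
# For this model, predict is the argmax over predict_proba columns,
# mapped back through classes_:
assert (clf.classes_[proba.argmax(axis=1)] == clf.predict(X)).all()
```
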
    A greater return value should indicate better predictions; accuracy is
    used for classifiers and R^2 for regressors by default. If the estimator
    was not already :term:`fitted`, calling this method should raise a
    :class:`exceptions.NotFittedError`.

    Some estimators implement a custom, estimator-specific score function,
    often the likelihood of the data under the model.

``score\_samples``
    A method that returns a score for
    each given sample. The exact definition of \*score\* varies from one class
    to another. In the case of density estimation, it can be the log density
    model on the data, and in the case of outlier detection, it can be the
    opposite of the outlier factor of the data. If the estimator was not
    already :term:`fitted`, calling this method should raise a
    :class:`exceptions.NotFittedError`.

``set\_params``
    Available in any estimator, takes keyword arguments corresponding to keys
    in :term:`get\_params`. Each is provided a new value to assign such that
    calling ``get\_params`` after ``set\_params`` will reflect the changed
    :term:`parameters`. Most estimators use the implementation in
    :class:`base.BaseEstimator`, which handles nested parameters and otherwise
    sets the parameter as an attribute on the estimator. The method is
    overridden in :class:`pipeline.Pipeline` and related estimators.

``split``
    On a :term:`CV splitter` (not an estimator), this method accepts
    parameters (:term:`X`, :term:`y`, :term:`groups`), where all may be
    optional, and returns an iterator over ``(train\_idx, test\_idx)`` pairs.
    Each of {train,test}\_idx is a 1d integer array, with values from 0 to
    ``X.shape[0] - 1`` of any length, such that no values appear in both some
    ``train\_idx`` and its corresponding ``test\_idx``.

``transform``
    In a :term:`transformer`, transforms the input, usually only :term:`X`,
    into some transformed space (conventionally notated as :term:`Xt`). Output
    is an array or sparse matrix of length :term:`n\_samples` and with the
    number of columns fixed after :term:`fitting`.
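The ``split``/``get\_n\_splits`` contract described above can be sketched as follows (illustrative only; :class:`model\_selection.KFold` is one splitter among many):

```python
import numpy as np
from sklearn.model_selection import KFold

X = np.arange(10).reshape(-1, 1)
cv = KFold(n_splits=5)

splits = list(cv.split(X))
# get_n_splits reports how many (train_idx, test_idx) pairs split yields
assert cv.get_n_splits(X) == len(splits) == 5

for train_idx, test_idx in splits:
    # 1d integer arrays, disjoint within a given split
    assert train_idx.ndim == test_idx.ndim == 1
    assert np.intersect1d(train_idx, test_idx).size == 0
```
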
    If the estimator was not already :term:`fitted`, calling this method
    should raise a :class:`exceptions.NotFittedError`.

.. \_glossary\_parameters:

Parameters
==========

These common parameter names, specifically used in estimator construction
(see concept :term:`parameter`), sometimes also appear as parameters of
functions or non-estimator constructors.

.. glossary::

``class\_weight``
    Used to specify sample weights when fitting classifiers as a function of
    the :term:`target` class. Where :term:`sample\_weight` is also supported
    and given, it is multiplied by the ``class\_weight`` contribution.
    Similarly, where ``class\_weight`` is used in a :term:`multioutput`
    (including :term:`multilabel`) task, the weights are multiplied across
    outputs (i.e. columns of ``y``).

    By default, all samples have equal weight such that classes are
    effectively weighted by their prevalence in the training data. This could
    be achieved explicitly with ``class\_weight={label1: 1, label2: 1, ...}``
    for all class labels.

    More generally, ``class\_weight`` is specified as a dict mapping class
    labels to weights (``{class\_label: weight}``), such that each sample of
    the named class is given that weight.

    ``class\_weight='balanced'`` can be used to give all classes equal weight
    by giving each sample a weight inversely related to its class's prevalence
    in the training data: ``n\_samples / (n\_classes \* np.bincount(y))``.

    Class weights will be used differently depending on the algorithm: for
    linear models (such as linear SVM or logistic regression), the class
    weights will alter the loss function by weighting the loss of each sample
    by its class weight. For tree-based algorithms, the class weights will be
    used for reweighting the splitting criterion. \*\*Note\*\* however that
    this rebalancing does not take the weight of samples in each class into
    account.

    For multioutput classification, a list of dicts is used to specify weights
    for each output.
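The ``'balanced'`` weighting formula above can be sketched with the same helper scikit-learn uses internally (illustrative only; the toy labels are arbitrary):

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

y = np.array([0, 0, 0, 1])  # class 0 is three times as frequent as class 1

weights = compute_class_weight("balanced", classes=np.array([0, 1]), y=y)
# n_samples / (n_classes * np.bincount(y)) == 4 / (2 * [3, 1])
assert np.allclose(weights, [4 / 6, 4 / 2])
```
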
    For example, for four-class multilabel classification weights should be
    ``[{0: 1, 1: 1}, {0: 1, 1: 5}, {0: 1, 1: 1}, {0: 1, 1: 1}]`` instead of
    ``[{1:1}, {2:5}, {3:1}, {4:1}]``.

    The ``class\_weight`` parameter is validated and interpreted with
    :func:`utils.class\_weight.compute\_class\_weight`.

``cv``
    Determines a cross validation splitting strategy, as used in
    cross-validation based routines. ``cv`` is also
    available in estimators such as :class:`multioutput.ClassifierChain` or
    :class:`calibration.CalibratedClassifierCV` which use the predictions of
    one estimator as training data for another, to not overfit the training
    supervision.

    Possible inputs for ``cv`` are usually:

    - An integer, specifying the number of folds in K-fold cross validation.
      K-fold will be stratified over classes if the estimator is a classifier
      (determined by :func:`base.is\_classifier`) and the :term:`targets` may
      represent a binary or multiclass (but not multioutput) classification
      problem (determined by :func:`utils.multiclass.type\_of\_target`).
    - A :term:`cross-validation splitter` instance. Refer to the
      :ref:`User Guide ` for splitters available within scikit-learn.
    - An iterable yielding train/test splits.

    With some exceptions (especially where not using cross validation at all
    is an option), the default is 5-fold. ``cv`` values are validated and
    interpreted with :func:`model\_selection.check\_cv`.

``kernel``
    Specifies the kernel function to be used by kernel method algorithms. For
    example, the estimators :class:`svm.SVC` and
    :class:`gaussian\_process.GaussianProcessClassifier` both have a
    ``kernel`` parameter that takes the name of the kernel to use as a string
    or a callable kernel function used to compute the kernel matrix. For more
    reference, see the :ref:`kernel\_approximation` and the
    :ref:`gaussian\_process` user guides.

``max\_iter``
    For estimators involving iterative optimization, this determines the
    maximum number of iterations to be performed in :term:`fit`.
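The common ``cv`` input forms above can be sketched as follows (illustrative only; the estimator and data are arbitrary):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

X = np.arange(20, dtype=float).reshape(-1, 1)
y = np.tile([0, 1], 10)

clf = LogisticRegression()
scores_int = cross_val_score(clf, X, y, cv=5)           # integer: (stratified) K folds
scores_obj = cross_val_score(clf, X, y, cv=KFold(5))    # a splitter instance
scores_it = cross_val_score(clf, X, y, cv=KFold(5).split(X))  # iterable of splits

assert len(scores_int) == len(scores_obj) == len(scores_it) == 5
```
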
    If ``max\_iter`` iterations are run without convergence, a
    :class:`exceptions.ConvergenceWarning` should be raised. Note that the
    interpretation of "a single iteration" is inconsistent across estimators:
    some, but not all, use it to mean a single epoch (i.e. a pass over every
    sample in the data).

    .. FIXME: perhaps we should have some common tests about the relationship
       between ConvergenceWarning and max\_iter.

``memory``
    Some estimators make use of :class:`joblib.Memory` to store partial
    solutions during fitting. Thus when ``fit`` is called again, those partial
    solutions have been memoized and can be reused. A ``memory`` parameter can
    be specified as a string with a path to a directory, or a
    :class:`joblib.Memory` instance (or an object with a similar interface,
    i.e. a ``cache`` method) can be used. ``memory`` values are validated and
    interpreted with :func:`utils.validation.check\_memory`.

``metric``
    As a parameter, this is the scheme for determining the distance between
    two data points. See :func:`metrics.pairwise\_distances`. In practice, for
    some algorithms, an improper distance metric (one that does not obey the
    triangle inequality, such as cosine distance) may be used.

    Note: Hierarchical clustering uses ``affinity`` with this meaning.

    We also use \*metric\* to refer to :term:`evaluation metrics`, but avoid
    using this sense as a parameter name.

``n\_components``
    The number of features which a :term:`transformer` should transform the
    input into. See :term:`components\_` for the special case of affine
    projection.

``n\_iter\_no\_change``
    Number of iterations with no improvement to wait before stopping the
    iterative procedure. This is also known as a \*patience\* parameter. It is
    typically used with :term:`early stopping` to avoid stopping too early.

``n\_jobs``
    This parameter is used to specify how many concurrent processes or threads
    should be used for routines that are parallelized with :term:`joblib`.
    ``n\_jobs`` is an integer, specifying the maximum number of concurrently
    running workers. If 1 is given, no joblib parallelism is used at all,
    which is useful for debugging. If set to -1, all CPUs are used. For
    ``n\_jobs`` below -1, (n\_cpus + 1 + n\_jobs) workers are used. For
    example with ``n\_jobs=-2``, all CPUs but one are used.

    ``n\_jobs`` is ``None`` by default, which means \*unset\*; it will
    generally be interpreted as ``n\_jobs=1``, unless the
    current :class:`joblib.Parallel` backend context specifies otherwise.

    Note that even if ``n\_jobs=1``, low-level parallelism (via NumPy and
    OpenMP) might be used in some configurations.

    For more details on the use of ``joblib`` and its interactions with
    scikit-learn, please refer to our :ref:`parallelism notes `.

``pos\_label``
    The value with which positive labels must be encoded in binary
    classification problems in which the positive class is not assumed. This
    value is typically required to compute asymmetric evaluation metrics such
    as precision and recall.

``random\_state``
    Whenever randomization is part of a scikit-learn algorithm, a
    ``random\_state`` parameter may be provided to control the random number
    generator used. Note that the mere presence of ``random\_state`` doesn't
    mean that randomization is always used, as it may be dependent on another
    parameter, e.g. ``shuffle``, being set.

    The passed value will have an effect on the reproducibility of the results
    returned by the function (:term:`fit`, :term:`split`, or any other
    function like :func:`~sklearn.cluster.k\_means`). ``random\_state``'s
    value may be:

    None (default)
        Use the global random state instance from :mod:`numpy.random`. Calling
        the function multiple times will reuse the same instance, and will
        produce different results.

    An integer
        Use a new random number generator seeded by the given integer. Using
        an int will produce the same results across different calls. However,
        it may be worthwhile checking that your results are stable across a
        number of different distinct random seeds. Popular integer random
        seeds are 0 and `42 `\_. Integer values must be in the range
        `[0, 2\*\*32 - 1]`.
    A :class:`numpy.random.RandomState` instance
        Use the provided random state, only affecting other users of that same
        random state instance. Calling the function multiple times will reuse
        the same instance, and will produce different results.

    :func:`utils.check\_random\_state` is used internally to validate the
    input ``random\_state`` and return a :class:`~numpy.random.RandomState`
    instance.

    For more details on how to control the randomness of scikit-learn objects
    and avoid common pitfalls, you may refer to :ref:`randomness`.

``scoring``
    Depending on the object, can specify:

    \* the score function to be maximized (usually by
      :ref:`cross validation `),
    \* the multiple score functions to be reported,
    \* the score function to be used to check early stopping, or
    \* for visualization related objects, the score function to output or
      plot.

    The score function can be a string accepted by :func:`metrics.get\_scorer`
    or a callable :term:`scorer`, not to be confused with an
    :term:`evaluation metric`, as the latter have a more diverse API.
    ``scoring`` may also be set to None, in which case the estimator's
    :term:`score` method is used. See :ref:`scoring\_parameter` in the User
    Guide.

    Where multiple metrics can be evaluated, ``scoring`` may be given either
    as a list of unique strings, a dictionary with names as keys and callables
    as values, or a callable that returns a dictionary. Note that this does
    \*not\* specify which score function is to be maximized, and another
    parameter such as ``refit`` may be used for this purpose.

    The ``scoring`` parameter is validated and interpreted using
    :func:`metrics.check\_scoring`.

``verbose``
    Logging is not handled very consistently in scikit-learn at present, but
    when it is provided as an option, the ``verbose`` parameter is usually
    available to choose no logging (set to False). Any True value should
    enable some logging, but larger integers (e.g. above 10) may be needed for
    full verbosity. Verbose logs are usually printed to Standard Output.
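The ``random\_state`` and ``scoring`` behaviours described above can be sketched as follows (illustrative only; the estimator, metric names, and toy data are arbitrary):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, cross_validate, train_test_split

X = np.arange(20, dtype=float).reshape(-1, 1)
y = np.tile([0, 1], 10)

# random_state: the same integer seed yields an identical split on every call
X_a, _, _, _ = train_test_split(X, y, random_state=42)
X_b, _, _, _ = train_test_split(X, y, random_state=42)
assert np.array_equal(X_a, X_b)

# scoring: a single metric-name string, or several named metrics
acc = cross_val_score(LogisticRegression(), X, y, cv=5, scoring="accuracy")
multi = cross_validate(LogisticRegression(), X, y, cv=5,
                       scoring=["accuracy", "f1"])
assert acc.shape == (5,)
assert "test_accuracy" in multi and "test_f1" in multi
```
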
Estimators should not produce any output on Standard
    Output with the default ``verbose`` setting.

``warm\_start``
    When fitting an estimator repeatedly on the same dataset, but for multiple
    parameter values (such as to find the value maximizing performance as in
    :ref:`grid search `), it may be possible to reuse aspects of the model
    learned from the previous parameter value, saving time. When
    ``warm\_start`` is true, the existing :term:`fitted` model
    :term:`attributes` are used to initialize the new model in a subsequent
    call to :term:`fit`.

    Note that this is only applicable for some models and some parameters, and
    even some orders of parameter values. In general, there is an interaction
    between ``warm\_start`` and the parameter controlling the number of
    iterations of the estimator.

    For estimators imported from :mod:`~sklearn.ensemble`, ``warm\_start``
    will interact with ``n\_estimators`` or ``max\_iter``. For these models,
    the number of iterations, reported via ``len(estimators\_)`` or
    ``n\_iter\_``, corresponds to the total number of estimators/iterations
    learnt since the initialization of the model. Thus, if a model was already
    initialized with `N` estimators, and `fit` is called with
    ``n\_estimators`` or ``max\_iter`` set to `M`, the model will train
    `M - N` new estimators.

    Other models, usually using gradient-based solvers, have a different
    behavior. They all expose a ``max\_iter`` parameter. The reported
    ``n\_iter\_`` corresponds to the number of iterations done during the last
    call to ``fit`` and will be at most ``max\_iter``. Thus, we do not
    consider the state of the estimator since the initialization.
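The ensemble ``warm\_start`` interaction above can be sketched as follows (illustrative only; the choice of :class:`ensemble.RandomForestClassifier` and the toy data are arbitrary):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

X = np.arange(20, dtype=float).reshape(-1, 1)
y = np.tile([0, 1], 10)

clf = RandomForestClassifier(n_estimators=5, warm_start=True, random_state=0)
clf.fit(X, y)
assert len(clf.estimators_) == 5

clf.set_params(n_estimators=8)
clf.fit(X, y)  # reuses the 5 existing trees, trains only 3 new ones
assert len(clf.estimators_) == 8
```
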
    :term:`partial\_fit` also retains the model between calls, but differs:
    with ``warm\_start`` the parameters change and the data is (more-or-less)
    constant across calls to ``fit``; with ``partial\_fit``, the mini-batch of
    data changes and the model parameters stay fixed.

    There are cases where you want to use ``warm\_start`` to fit on different,
    but closely related data. For example, one may initially fit to a subset
    of the data, then fine-tune the parameter search on the full dataset. For
    classification, all data in a sequence of ``warm\_start`` calls to ``fit``
    must include samples from each class.

.. \_glossary\_attributes:

Attributes
==========

See concept :term:`attribute`.

.. glossary::

``classes\_``
    A list of class labels known to the :term:`classifier`, mapping each label
    to a numerical index used in the model's representation and output. For
    instance, the array output from :term:`predict\_proba` has columns aligned
    with ``classes\_``. For :term:`multi-output` classifiers, ``classes\_``
    should be a list of lists, with one class listing for each output. For
    each output, the classes should be sorted (numerically, or
    lexicographically for strings).

    ``classes\_`` and the mapping to indices is often managed with
    :class:`preprocessing.LabelEncoder`.

``components\_``
    An affine transformation matrix of shape ``(n\_components, n\_features)``
    used in many linear :term:`transformers` where :term:`n\_components` is
    the number of output features and :term:`n\_features` is the number of
    input features.

    See also :term:`coef\_` which is a similar attribute for linear
    predictors.

``coef\_``
    The weight/coefficient matrix of a generalized linear model
    :term:`predictor`, of shape ``(n\_features,)`` for binary classification
    and single-output regression, ``(n\_classes, n\_features)`` for multiclass
    classification and ``(n\_targets, n\_features)`` for multi-output
    regression. Note this does not include the intercept (or bias) term, which
    is stored in ``intercept\_``.
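The ``coef\_`` shape conventions above can be sketched for the regression case (illustrative only; :class:`linear\_model.LinearRegression` and the synthetic data are arbitrary choices):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.RandomState(0)
X = rng.normal(size=(30, 4))

# single-output regression: coef_ has shape (n_features,)
y = X @ np.array([1.0, 2.0, 0.0, 0.0])
reg = LinearRegression().fit(X, y)
assert reg.coef_.shape == (4,)

# multi-output regression: coef_ has shape (n_targets, n_features)
Y = np.c_[y, 2 * y]
reg_multi = LinearRegression().fit(X, Y)
assert reg_multi.coef_.shape == (2, 4)
```
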
    When available, ``feature\_importances\_`` is not usually provided as
    well, but can be calculated as the norm of each feature's entry in
    ``coef\_``.

    See also :term:`components\_` which is a similar attribute for linear
    transformers.

``embedding\_``
    An embedding of the training data in :ref:`manifold learning ` estimators,
    with shape ``(n\_samples, n\_components)``, identical to the output of
    :term:`fit\_transform`. See
    also :term:`labels\_`.

``n\_iter\_``
    The number of iterations actually performed when fitting an iterative
    estimator that may stop upon convergence. See also :term:`max\_iter`.

``feature\_importances\_``
    A vector of shape ``(n\_features,)`` available in some :term:`predictors`
    to provide a relative measure of the importance of each feature in the
    predictions of the model.

``labels\_``
    A vector containing a cluster label for each sample of the training data
    in :term:`clusterers`, identical to the output of :term:`fit\_predict`.
    See also :term:`embedding\_`.

.. \_glossary\_sample\_props:

Data and sample properties
==========================

See concept :term:`sample property`.

.. glossary::

``groups``
    Used in cross-validation routines to identify samples that are correlated.
    Each value is an identifier such that, in a supporting :term:`CV splitter`,
    samples from some ``groups`` value may not appear in both a training set
    and its corresponding test set. See :ref:`group\_cv`.

``sample\_weight``
    A weight for each data point. Intuitively, if all weights are integers,
    using them in an estimator or a :term:`scorer` is like duplicating each
    data point as many times as the weight value. Weights can also be
    specified as floats, and can have the same effect as above, as many
    estimators and scorers are scale invariant. For example, weights
    ``[1, 2, 3]`` would be equivalent to weights ``[0.1, 0.2, 0.3]`` as they
    differ by a constant factor of 10. Note however that several estimators
    are not invariant to the scale of weights.
    ``sample\_weight`` can be either an argument of the estimator's
    :term:`fit` method for model training or a parameter of a :term:`scorer`
    for model evaluation. These callables are said to \*consume\* the sample
    weights, while other components of scikit-learn can \*route\* the weights
    to the underlying estimators or scorers (see
    :ref:`glossary\_metadata\_routing`).

    Weighting samples can be useful in several contexts. For instance, if the
    training data is not uniformly sampled from the target population, it can
    be corrected by weighting the training data points based on the
    `inverse probability `\_ of their selection for training (e.g. inverse
    propensity weighting).

    Some model hyper-parameters are expressed in terms of a discrete number of
    data points in a region of the feature space. When fitting with sample
    weights, a count of data points is often automatically converted to a sum
    of their weights, but this is not always the case. Please refer to the
    model docstring for details.

    In classification, weights can also be specified for all samples belonging
    to a given target class with the :term:`class\_weight` estimator
    :term:`parameter`. If both ``sample\_weight`` and ``class\_weight`` are
    provided, the final weight assigned to a sample is the product of the two.

    At the time of writing (version 1.8), not all scikit-learn estimators
    correctly implement the weight-repetition equivalence property. The
    `#16298 meta issue `\_ tracks ongoing work to detect and fix remaining
    discrepancies.

    Furthermore, some estimators have a stochastic fit method. For instance,
    :class:`cluster.KMeans` depends on a random initialization, bagging models
    randomly resample from the training data, etc. In this case, the sample
    weight-repetition equivalence property described above does not hold
    exactly. However, it should hold at least in expectation over the
    randomness of the fitting procedure.
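The weight-repetition equivalence above can be sketched with an estimator that implements it exactly (illustrative only; ordinary least squares via :class:`linear\_model.LinearRegression` is one such case, and the toy data is arbitrary):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[0.0], [1.0], [2.0]])
y = np.array([0.0, 1.0, 4.0])
w = np.array([1, 2, 3])

fit_weighted = LinearRegression().fit(X, y, sample_weight=w)

# duplicate each row as many times as its integer weight
X_rep = np.repeat(X, w, axis=0)
y_rep = np.repeat(y, w)
fit_repeated = LinearRegression().fit(X_rep, y_rep)

assert np.allclose(fit_weighted.coef_, fit_repeated.coef_)
assert np.allclose(fit_weighted.intercept_, fit_repeated.intercept_)
```
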
``X``
    Denotes data that is observed at training and prediction time, used as
    independent variables in learning. The notation is uppercase to denote
    that it is ordinarily a matrix (see :term:`rectangular`). When a matrix,
    each sample may be represented by a :term:`feature` vector, or a vector of
    :term:`precomputed` (dis)similarity
    with each training sample. ``X`` may also not be a matrix, and may require
    a :term:`feature extractor` or a :term:`pairwise metric` to turn it into
    one before learning a model.

``Xt``
    Shorthand for "transformed :term:`X`".

``y``
``Y``
    Denotes data that may be observed at training time as the dependent
    variable in learning, but which is unavailable at prediction time, and is
    usually the :term:`target` of prediction. The notation may be uppercase to
    denote that it is a matrix, representing :term:`multi-output` targets, for
    instance; but usually we use ``y`` and sometimes do so even when multiple
    outputs are assumed.
.. source: https://github.com/scikit-learn/scikit-learn/blob/main//doc/about.rst

.. _about:

========
About us
========

History
=======

This project was started in 2007 as a Google Summer of Code project by David
Cournapeau. Later that year, Matthieu Brucher started working on this project
as part of his thesis.

In 2010 Fabian Pedregosa, Gael Varoquaux, Alexandre Gramfort and Vincent Michel
of INRIA took leadership of the project and made the first public release,
February the 1st 2010. Since then, several releases have appeared following an
approximately 3-month cycle, and a thriving international community has been
leading the development. As a result, INRIA holds the copyright over the work
done by people who were employed by INRIA at the time of the contribution.

Governance
==========

The decision making process and governance structure of scikit-learn, such as
roles and responsibilities, is laid out in the :ref:`governance document `.

.. The "author" anchors below are there to ensure that old html links (in the
   form of "about.html#author") still work.

.. _authors:

The people behind scikit-learn
==============================

scikit-learn is a community project, developed by a large group of people all
across the world. A few core contributor teams, listed below, have central
roles; however, a more complete list of contributors can be found
`on GitHub `__.

Active Core Contributors
------------------------

Maintainers Team
................

The following people are currently maintainers, in charge of consolidating
scikit-learn's development and maintenance:

.. include:: maintainers.rst

.. note:: Please do not email the authors directly to ask for assistance or
   report issues. Instead, please see `What's the best way to ask questions
   about scikit-learn `_ in the FAQ.

.. seealso:: How you can :ref:`contribute to the project `.

Documentation Team
..................

The following people help with documenting the project:

.. include:: documentation_team.rst

Contributor Experience Team
...........................

The following people are active contributors who also help with
:ref:`triaging issues `, PRs, and general maintenance:

.. include:: contributor_experience_team.rst

Communication Team
..................

The following people help with :ref:`communication around scikit-learn `.

.. include:: communication_team.rst

Emeritus Core Contributors
--------------------------

Emeritus Maintainers Team
.........................

The following people have been active contributors in the past, but are no
longer active in the project:

.. rst-class:: grid-list-three-columns

.. include:: maintainers_emeritus.rst

Emeritus Communication Team
...........................

The following people have been active in the communication team in the past,
but no longer have communication responsibilities:

.. include:: communication_team_emeritus.rst

Emeritus Contributor Experience Team
....................................

The following people have been active in the contributor experience team in
the past:

.. include:: contributor_experience_team_emeritus.rst

.. _citing-scikit-learn:

Citing scikit-learn
===================

If you use scikit-learn in a scientific publication, we would appreciate
citations to the following paper:

`Scikit-learn: Machine Learning in Python `_, Pedregosa *et al.*, JMLR 12,
pp. 2825-2830, 2011. Bibtex entry::

  @article{scikit-learn,
    title={Scikit-learn: Machine Learning in {P}ython},
    author={Pedregosa, F. and Varoquaux, G. and Gramfort, A. and Michel, V.
            and Thirion, B. and Grisel, O. and Blondel, M. and Prettenhofer, P.
            and Weiss, R. and Dubourg, V. and Vanderplas, J. and Passos, A.
            and Cournapeau, D. and Brucher, M. and Perrot, M. and Duchesnay, E.},
    journal={Journal of Machine Learning Research},
    volume={12},
    pages={2825--2830},
    year={2011}
  }

If you want to cite scikit-learn for its API or design, you may also want to
consider the following paper:

:arxiv:`API design for machine learning software: experiences from the
scikit-learn project <1309.0238>`, Buitinck *et al.*, 2013. Bibtex entry::

  @inproceedings{sklearn_api,
    author = {Lars Buitinck and Gilles Louppe and Mathieu Blondel and
              Fabian Pedregosa and Andreas Mueller and Olivier Grisel and
              Vlad Niculae and Peter Prettenhofer and Alexandre Gramfort and
              Jaques Grobler and Robert Layton and Jake VanderPlas and
              Arnaud Joly and Brian Holt and Ga{\"{e}}l Varoquaux},
    title = {{API} design for machine learning software: experiences from
             the scikit-learn project},
    booktitle = {ECML
                 PKDD Workshop: Languages for Data Mining and Machine Learning},
    year = {2013},
    pages = {108--122},
  }

.. _branding-and-logos:

Branding & Logos
================

The scikit-learn brand is subject to the following `terms of use and
guidelines `_. High quality PNG and SVG logos are available in the
`doc/logos `_ source directory. The color palette is available in the
`Branding Guide `_.

.. image:: images/scikit-learn-logo-notext.png
   :align: center

Funding
=======

Scikit-learn is a community driven project, however institutional and private
grants help to assure its sustainability. The project would like to thank the
following funders.

...................................

.. div:: sk-text-image-grid-small

   .. div:: text-box

      `:probabl. `_ manages the whole sponsorship program and employs the
      full-time core maintainers Adrin Jalali, Arturo Amor, François Goupil,
      Guillaume Lemaitre, Jérémie du Boisberranger, Loïc Estève, Olivier
      Grisel, and Stefanie Senger.

   .. div:: image-box

      .. image:: images/probabl.png
         :target: https://probabl.ai
         :width: 40%

..........

Active Sponsors
===============

Founding sponsors
-----------------

.. div:: sk-text-image-grid-small

   .. div:: text-box

      `Inria `_ supports scikit-learn through their sponsorship.

   .. div:: image-box

      .. image:: images/inria-logo.jpg
         :target: https://www.inria.fr

..........

Gold sponsors
-------------

.. div:: sk-text-image-grid-small

   .. div:: text-box

      `Chanel `_ supports scikit-learn through their sponsorship.

   .. div:: image-box

      .. image:: images/chanel.png
         :target: https://www.chanel.com

..........

Silver sponsors
---------------

.. div:: sk-text-image-grid-small

   .. div:: text-box

      `BNP Paribas Group `_ supports scikit-learn through their sponsorship.

   .. div:: image-box

      .. image:: images/bnp-paribas.jpg
         :target: https://group.bnpparibas/

..........

Bronze sponsors
---------------

.. div:: sk-text-image-grid-small

   .. div:: text-box

      `NVIDIA `_ supports scikit-learn through their sponsorship and employs
      full-time core maintainer Tim Head.

   .. div:: image-box

      .. image:: images/nvidia.png
         :target: https://nvidia.com

..........

Other contributions
-------------------

.. |chanel| image:: images/chanel.png
   :target: https://www.chanel.com
.. |axa| image:: images/axa.png
   :target: https://www.axa.fr/
.. |bnp| image:: images/bnp.png
   :target: https://www.bnpparibascardif.com/
.. |bnpparibasgroup| image:: images/bnp-paribas.jpg
   :target: https://group.bnpparibas/
.. |dataiku| image:: images/dataiku.png
   :target: https://www.dataiku.com/
.. |nvidia| image:: images/nvidia.png
   :target: https://www.nvidia.com
.. |inria| image:: images/inria-logo.jpg
   :target: https://www.inria.fr

.. raw:: html

   <style>
   table.image-subtable tr { border-color: transparent; }
   table.image-subtable td { width: 50%; vertical-align: middle; text-align: center; }
   table.image-subtable td img { max-height: 40px !important; max-width: 90% !important; }
   </style>

* `Microsoft `_ funds Andreas Müller since 2020.

* `Quansight Labs `_ funds Lucy Liu since 2022.

* `The Chan-Zuckerberg Initiative `_ and `Wellcome Trust `_ fund scikit-learn
  through the `Essential Open Source Software for Science (EOSS) `_ cycle 6.
  It supports Lucy Liu and diversity & inclusion initiatives that will be
  announced in the future.

* `Tidelift `_ supports the project via their service agreement.

Past Sponsors
=============

`Quansight Labs `_ funded Meekail Zain in 2022 and 2023, and funded Thomas J.
Fan from 2021 to 2023.

`Columbia University `_ funded Andreas Müller (2016-2020).

`The University of Sydney `_ funded Joel Nothman (2017-2021).

Andreas Müller received a grant to improve scikit-learn from the
`Alfred P. Sloan Foundation `_. This grant supported the position of Nicolas
Hug and Thomas J. Fan.

`INRIA `_ has provided funding for Fabian Pedregosa (2010-2012), Jaques
Grobler (2012-2013) and Olivier Grisel (2013-2017) to work on this project
full-time. It also hosts coding sprints and other events.

`Paris-Saclay Center for Data Science `_ funded one year for a developer to
work on the project full-time (2014-2015), 50% of the time of Guillaume
Lemaitre (2016-2017) and 50% of the time of Joris van den Bossche (2017-2018).

`NYU Moore-Sloan Data Science Environment `_ funded Andreas Mueller
(2014-2016) to work on this project. The Moore-Sloan Data Science Environment
also funds
several students to work on the project part-time.

`Télécom Paristech `_ funded Manoj Kumar (2014), Tom Dupré la Tour (2015),
Raghav RV (2015-2017), Thierry Guillemot (2016-2017) and Albert Thomas (2017)
to work on scikit-learn.

`The Labex DigiCosme `_ funded Nicolas Goix (2015-2016), Tom Dupré la Tour
(2015-2016 and 2017-2018) and Mathurin Massias (2018-2019) to work part time
on scikit-learn during their PhDs. It also funded a scikit-learn coding sprint
in 2015.

`The Chan-Zuckerberg Initiative `_ funded Nicolas Hug to work full-time on
scikit-learn in 2020.

The following students were sponsored by `Google `_ to work on scikit-learn
through the `Google Summer of Code `_ program.

- 2007 - David Cournapeau
- 2011 - `Vlad Niculae`_
- 2012 - `Vlad Niculae`_, Immanuel Bayer
- 2013 - Kemal Eren, Nicolas Trésegnie
- 2014 - Hamzeh Alsalhi, Issam Laradji, Maheshakya Wijewardena, Manoj Kumar
- 2015 - `Raghav RV `_, Wei Xue
- 2016 - `Nelson Liu `_, `YenChen Lin `_

.. _Vlad Niculae: https://vene.ro/

...................

The `NeuroDebian `_ project providing `Debian `_ packaging and contributions
is supported by `Dr. James V. Haxby `_ (`Dartmouth College `_).

...................

The following organizations funded the scikit-learn consortium at Inria in
the past:

.. |msn| image:: images/microsoft.png
   :target: https://www.microsoft.com/
.. |bcg| image:: images/bcg.png
   :target: https://www.bcg.com/beyond-consulting/bcg-gamma/default.aspx
.. |fujitsu| image:: images/fujitsu.png
   :target: https://www.fujitsu.com/global/
.. |aphp| image:: images/logo_APHP_text.png
   :target: https://aphp.fr/
.. |hf| image:: images/huggingface_logo-noborder.png
   :target: https://huggingface.co

.. raw:: html

   <style>
   div.image-subgrid img { max-height: 50px; max-width: 90%; }
   </style>

.. grid:: 2 2 4 4
   :class-row: image-subgrid
   :gutter: 1

   .. grid-item::
      :class: sd-text-center
      :child-align: center

      |msn|

   .. grid-item::
      :class: sd-text-center
      :child-align: center

      |bcg|

   .. grid-item::
      :class: sd-text-center
      :child-align: center

      |fujitsu|

   .. grid-item::
      :class: sd-text-center
      :child-align: center

      |aphp|

   .. grid-item::
      :class: sd-text-center
      :child-align: center

      |hf|

   .. grid-item::
      :class: sd-text-center
      :child-align: center

      |dataiku|

   .. grid-item::
      :class: sd-text-center
      :child-align: center

      |bnp|

   .. grid-item::
      :class: sd-text-center
      :child-align: center

      |axa|

Donations in Kind
-----------------

The following organizations provide non-financial contributions to the
scikit-learn project.

.. list-table::
   :header-rows: 1

   * - Company
     - Contribution
   * - `Anaconda Inc <https://www.anaconda.com>`_
     - Storage for our staging and nightly builds
   * - `CircleCI <https://circleci.com/>`_
     - CPU time on their Continuous Integration servers
   * - `GitHub <https://www.github.com>`_
     - Teams account
   * - `Microsoft Azure <https://azure.microsoft.com/en-us/>`_
     - CPU time on their Continuous Integration servers

Coding Sprints
--------------

The scikit-learn project has a long history of `open source coding sprints `_
with over 50 sprint events from 2010 to present day. There are scores of
sponsors who contributed to costs which include venue, food, travel, developer
time and more. See `scikit-learn sprints `_ for a full list of events.

Donating to the project
=======================

If you have found scikit-learn to be useful in your work, research, or
company, please consider making a donation to the project commensurate with
your resources. There are several options for making donations:

* **NumFOCUS**: Donate via the `NumFOCUS Donations Page
  <https://numfocus.org/donate-to-scikit-learn>`_, scikit-learn's fiscal
  sponsor.
* **GitHub Sponsors**: Support the project directly through `GitHub Sponsors
  <https://github.com/sponsors/scikit-learn>`_.
* **Benevity**: If your company uses scikit-learn, you can also support the
  project through `Benevity <https://causes.benevity.org/projects/433725>`_,
  a platform to manage employee donations. It is widely used by hundreds of
  Fortune 1000 companies to streamline and scale their social impact
  initiatives. If your company uses Benevity, you are able to make a
donation with a company match as high as 100%. Our project ID is `433725 `_.

All donations are managed by `NumFOCUS `_, a 501(c)(3) non-profit organization
based in Austin, Texas, USA. The NumFOCUS board consists of
`SciPy community members `_. Contributions are tax-deductible to the extent
allowed by law.

.. rubric:: Notes

Contributions support the maintenance of the project, including development,
documentation, infrastructure and coding sprints.

scikit-learn Swag
-----------------

Official scikit-learn swag is available for purchase at the
`NumFOCUS online store `_. A portion of the proceeds from each sale goes to
support the scikit-learn project.
.. source: https://github.com/scikit-learn/scikit-learn/blob/main//doc/whats_new/v1.5.rst

.. include:: _contributors.rst

.. currentmodule:: sklearn

.. _release_notes_1_5:

===========
Version 1.5
===========

For a short description of the main highlights of the release, please refer to
:ref:`sphx_glr_auto_examples_release_highlights_plot_release_highlights_1_5_0.py`.

.. include:: changelog_legend.inc

.. _changes_1_5_2:

Version 1.5.2
=============

**September 2024**

Changes impacting many modules
------------------------------

- |Fix| Fixed performance regression in a few Cython modules in
  `sklearn._loss`, `sklearn.manifold`, `sklearn.metrics` and `sklearn.utils`,
  which were built without OpenMP support. :pr:`29694` by
  :user:`Loïc Estève `.

Changelog
---------

:mod:`sklearn.calibration`
..........................

- |Fix| Raise error when :class:`~sklearn.model_selection.LeaveOneOut` is used
  in `cv`, matching what would happen if `KFold(n_splits=n_samples)` was used.
  :pr:`29545` by :user:`Lucy Liu `.

:mod:`sklearn.compose`
......................

- |Fix| Fixed :class:`compose.TransformedTargetRegressor` not to raise
  `UserWarning` if transform output is set to `pandas` or `polars`, since it
  isn't a transformer. :pr:`29401` by :user:`Stefanie Senger `.

:mod:`sklearn.decomposition`
............................

- |Fix| Increase rank deficiency threshold in the whitening step of
  :class:`decomposition.FastICA` with `whiten_solver="eigh"` to improve the
  platform-agnosticity of the estimator. :pr:`29612` by
  :user:`Olivier Grisel `.

:mod:`sklearn.metrics`
......................

- |Fix| Fix a regression in :func:`metrics.accuracy_score` and in
  :func:`metrics.zero_one_loss` causing an error for Array API dispatch with
  multilabel inputs. :pr:`29336` by :user:`Edoardo Abati `.

:mod:`sklearn.svm`
..................

- |Fix| Fixed a regression in :class:`svm.SVC` and :class:`svm.SVR` such that
  we accept `C=float("inf")`. :pr:`29780` by :user:`Guillaume Lemaitre `.

.. _changes_1_5_1:

Version 1.5.1
=============

**July 2024**

Changes impacting many modules
------------------------------

- |Fix| Fixed a regression in the validation of the input data of all
  estimators where an unexpected error was raised when passing a DataFrame
  backed by a read-only buffer. :pr:`29018` by
  :user:`Jérémie du Boisberranger `.

- |Fix| Fixed a regression causing a dead-lock at import time in some
  settings. :pr:`29235` by :user:`Jérémie du Boisberranger `.

Changelog
---------

:mod:`sklearn.compose`
......................

- |Efficiency| Fix a performance regression in
  :class:`compose.ColumnTransformer` where the full input data was copied for
  each transformer when `n_jobs > 1`. :pr:`29330` by
  :user:`Jérémie du Boisberranger `.

:mod:`sklearn.metrics`
......................

- |Fix| Fix a regression in :func:`metrics.r2_score`. Passing torch CPU
  tensors with array API dispatch disabled would complain about non-CPU
  devices instead of implicitly converting those inputs as regular NumPy
  arrays. :pr:`29119` by :user:`Olivier Grisel`.

- |Fix| Fix a regression in :func:`metrics.zero_one_loss` causing an error
  for Array API dispatch with multilabel inputs. :pr:`29269` by
  :user:`Yaroslav Korobko `.

:mod:`sklearn.model_selection`
..............................

- |Fix| Fix a regression in :class:`model_selection.GridSearchCV` for
  parameter grids that have heterogeneous parameter values. :pr:`29078` by
  :user:`Loïc Estève `.

- |Fix| Fix a regression in :class:`model_selection.GridSearchCV` for
  parameter grids that have estimators as parameter values. :pr:`29179` by
  :user:`Marco Gorelli`.

- |Fix| Fix a regression in :class:`model_selection.GridSearchCV` for
  parameter grids that have arrays of different sizes as parameter values.
  :pr:`29314` by :user:`Marco Gorelli`.

:mod:`sklearn.tree`
...................

- |Fix| Fix an issue in :func:`tree.export_graphviz` and
  :func:`tree.plot_tree` that could potentially result in exception or wrong
  results on 32bit OSes. :pr:`29327` by :user:`Loïc Estève`.

:mod:`sklearn.utils`
....................

- |API| :func:`utils.validation.check_array` has a new parameter,
  `force_writeable`, to control the writeability of the output array. If set
  to `True`, the output array will be guaranteed to be writeable and a copy
  will be made if the input array is read-only. If set to `False`, no
  guarantee is made about the writeability of the output array. :pr:`29018`
  by :user:`Jérémie du Boisberranger `.

.. _changes_1_5:

Version 1.5.0
=============

**May 2024**

Security
--------

- |Fix| :class:`feature_extraction.text.CountVectorizer` and
  :class:`feature_extraction.text.TfidfVectorizer` no longer store discarded
  tokens from the training set in their `stop_words_` attribute. This
  attribute would hold too frequent (above `max_df`) but also too rare tokens
  (below `min_df`). This fixes a potential security issue (data leak) if the
  discarded rare tokens hold sensitive information from the training set
  without the model developer's knowledge.

  Note: users of those classes are encouraged to either retrain their
  pipelines with the new scikit-learn version or
  to manually clear the `stop_words_` attribute from previously trained
  instances of those transformers. This attribute was designed only for model
  inspection purposes and has no impact on the behavior of the transformers.
  :pr:`28823` by :user:`Olivier Grisel `.

Changed models
--------------

- |Efficiency| The subsampling in :class:`preprocessing.QuantileTransformer`
  is now more efficient for dense arrays but the fitted quantiles and the
  results of `transform` may be slightly different than before (keeping the
  same statistical properties). :pr:`27344` by :user:`Xuefeng Xu `.

- |Enhancement| :class:`decomposition.PCA`, :class:`decomposition.SparsePCA`
  and :class:`decomposition.TruncatedSVD` now set the sign of the
  `components_` attribute based on the component values instead of using the
  transformed data as reference. This change is needed to be able to offer
  consistent component signs across all `PCA` solvers, including the new
  `svd_solver="covariance_eigh"` option introduced in this release.

Changes impacting many modules
------------------------------

- |Fix| Raise `ValueError` with an informative error message when passing 1D
  sparse arrays to methods that expect 2D sparse inputs. :pr:`28988` by
  :user:`Olivier Grisel `.

- |API| The name of the input of the `inverse_transform` method of estimators
  has been standardized to `X`. As a consequence, `Xt` is deprecated and will
  be removed in version 1.7 in the following estimators:
  :class:`cluster.FeatureAgglomeration`, :class:`decomposition.MiniBatchNMF`,
  :class:`decomposition.NMF`, :class:`model_selection.GridSearchCV`,
  :class:`model_selection.RandomizedSearchCV`, :class:`pipeline.Pipeline` and
  :class:`preprocessing.KBinsDiscretizer`. :pr:`28756` by :user:`Will Dean `.

Support for Array API
---------------------

Additional estimators and functions have been updated to include support for
all `Array API `_ compliant inputs. See :ref:`array_api` for more details.

**Functions:**

- :func:`sklearn.metrics.r2_score` now supports Array API compliant inputs.
  :pr:`27904` by :user:`Eric Lindgren `, :user:`Franck Charras `,
  :user:`Olivier Grisel ` and :user:`Tim Head `.

**Classes:**

- :class:`linear_model.Ridge` now supports the Array API for the `svd`
  solver. See :ref:`array_api` for more details. :pr:`27800` by
  :user:`Franck Charras `, :user:`Olivier Grisel ` and :user:`Tim Head `.

Support for building with Meson
-------------------------------

From scikit-learn 1.5 onwards, Meson is the main supported way to build
scikit-learn. Unless we discover a major blocker, setuptools support will be
dropped in scikit-learn 1.6. The 1.5.x releases will support building
scikit-learn with setuptools.

Meson support for building scikit-learn was added in :pr:`28040` by
:user:`Loïc Estève `.

Metadata Routing
----------------

The following models now support metadata routing in one or more of their
methods. Refer to the :ref:`Metadata Routing User Guide ` for more details.

- |Feature| :class:`impute.IterativeImputer` now supports metadata routing in
  its `fit` method. :pr:`28187` by :user:`Stefanie Senger `.

- |Feature| :class:`ensemble.BaggingClassifier` and
  :class:`ensemble.BaggingRegressor` now support metadata routing. The fit
  methods now accept ``**fit_params`` which are passed to the underlying
  estimators via their `fit` methods. :pr:`28432` by :user:`Adam Li ` and
  :user:`Benjamin Bossan `.

- |Feature| :class:`linear_model.RidgeCV` and
  :class:`linear_model.RidgeClassifierCV` now support metadata routing in
  their `fit` method and route metadata to the underlying
  :class:`model_selection.GridSearchCV` object or the underlying scorer.
  :pr:`27560` by :user:`Omar Salman `.

- |Feature| :class:`GraphicalLassoCV` now supports metadata routing in its
  `fit` method and routes metadata to the CV splitter. :pr:`27566` by
  :user:`Omar Salman `.

- |Feature| :class:`linear_model.RANSACRegressor` now supports metadata
  routing in its ``fit``, ``score`` and ``predict`` methods and routes
  metadata to its underlying estimator's ``fit``, ``score`` and ``predict``
  methods. :pr:`28261` by :user:`Stefanie Senger `.

- |Feature| :class:`ensemble.VotingClassifier` and
  :class:`ensemble.VotingRegressor` now support metadata routing and pass
  ``**fit_params`` to the underlying estimators via their `fit` methods.
  :pr:`27584` by :user:`Stefanie Senger `.

- |Feature| :class:`pipeline.FeatureUnion` now supports metadata routing in
  its ``fit`` and ``fit_transform`` methods and routes metadata to the
  underlying transformers' ``fit`` and ``fit_transform``. :pr:`28205` by
  :user:`Stefanie Senger `.

- |Fix| Fix an issue when resolving default routing requests set via class
  attributes. :pr:`28435` by `Adrin Jalali`_.

- |Fix| Fix an issue when `set_{method}_request` methods are used as unbound
  methods, which can happen if one tries to decorate them. :pr:`28651` by
  `Adrin Jalali`_.

- |Fix| Prevent a `RecursionError` when estimators with the default `scoring`
  param (`None`) route metadata. :pr:`28712` by :user:`Stefanie Senger `.

Changelog
---------

.. Entries should be grouped by module (in alphabetic order) and prefixed with
   one of the labels: |MajorFeature|, |Feature|, |Efficiency|, |Enhancement|,
   |Fix| or |API| (see whats_new.rst for descriptions). Entries should be
   ordered by those labels (e.g. |Fix| after |Efficiency|). Changes not
   specific to a module should be listed under *Multiple Modules* or
   *Miscellaneous*. Entries should end with: :pr:`123456` by
   :user:`Joe Bloggs `. where 123456 is the *pull request* number, not the
   issue number.

:mod:`sklearn.calibration`
..........................

- |Fix| Fixed a regression in :class:`calibration.CalibratedClassifierCV`
  where an error was wrongly raised with string targets. :pr:`28843` by
  :user:`Jérémie du Boisberranger `.

:mod:`sklearn.cluster`
......................

- |Fix| The :class:`cluster.MeanShift` class now properly converges for
  constant data. :pr:`28951` by :user:`Akihiro Kuno `.

- |Fix| Create copy of precomputed sparse matrix within the `fit` method of
  :class:`~cluster.OPTICS` to avoid in-place modification of the sparse
  matrix. :pr:`28491` by :user:`Thanh Lam Dang `.

- |Fix| :class:`cluster.HDBSCAN` now supports all metrics supported by
  :func:`sklearn.metrics.pairwise_distances` when `algorithm="brute"` or
  `"auto"`. :pr:`28664` by :user:`Manideep Yenugula `.

:mod:`sklearn.compose`
......................

- |Feature| A fitted :class:`compose.ColumnTransformer` now implements
  `__getitem__` which returns the fitted transformers by name. :pr:`27990` by
  `Thomas Fan`_.

- |Enhancement| :class:`compose.TransformedTargetRegressor` now raises an
  error in `fit` if only `inverse_func` is provided without `func` (that
  would default to identity) being explicitly set as well. :pr:`28483` by
  :user:`Stefanie Senger `.

- |Enhancement| :class:`compose.ColumnTransformer` can now expose the
  "remainder" columns in the fitted `transformers_` attribute as column names
  or boolean masks, rather than column indices. :pr:`27657` by
  :user:`Jérôme Dockès `.

- |Fix| Fixed a bug in :class:`compose.ColumnTransformer` with `n_jobs > 1`,
  where the intermediate selected columns were passed to the transformers as
  read-only arrays. :pr:`28822` by :user:`Jérémie du Boisberranger `.

:mod:`sklearn.cross_decomposition`
..................................

- |Fix| The `coef_` fitted attribute of
  :class:`cross_decomposition.PLSRegression` now takes into account both the
  scale of `X` and `Y` when `scale=True`. Note that the previous predicted
  values were not affected by this bug. :pr:`28612` by
  :user:`Guillaume Lemaitre `.

- |API| Deprecates `Y` in favor of `y` in the methods `fit`, `transform` and
  `inverse_transform` of :class:`cross_decomposition.PLSRegression`,
  :class:`cross_decomposition.PLSCanonical`, and
  :class:`cross_decomposition.CCA`, and methods `fit` and `transform` of
  :class:`cross_decomposition.PLSSVD`. `Y` will be removed in version 1.7.
  :pr:`28604` by :user:`David Leon `.

:mod:`sklearn.datasets`
.......................

- |Enhancement| Adds optional arguments `n_retries` and `delay` to functions
  :func:`datasets.fetch_20newsgroups`,
  :func:`datasets.fetch_20newsgroups_vectorized`,
  :func:`datasets.fetch_california_housing`,
  :func:`datasets.fetch_covtype`, :func:`datasets.fetch_kddcup99`,
  :func:`datasets.fetch_lfw_pairs`, :func:`datasets.fetch_lfw_people`,
  :func:`datasets.fetch_olivetti_faces`, :func:`datasets.fetch_rcv1`, and
  :func:`datasets.fetch_species_distributions`. By default, the functions
  will retry up to 3 times in case of network failures. :pr:`28160` by
  :user:`Zhehao Liu ` and :user:`Filip Karlo Došilović `.

:mod:`sklearn.decomposition`
............................

- |Efficiency| :class:`decomposition.PCA` with `svd_solver="full"` now
  assigns a contiguous `components_` attribute instead of a non-contiguous
  slice of the singular vectors. When `n_components << n_features`, this can
  save some memory and, more importantly, help speed-up subsequent calls to
  the `transform` method by more than an order of magnitude by leveraging
  cache locality of BLAS GEMM on contiguous arrays. :pr:`27491` by
  :user:`Olivier Grisel `.

- |Enhancement| :class:`~decomposition.PCA` now automatically selects the
  ARPACK solver for
sparse inputs when `svd\_solver="auto"` instead of raising an error. :pr:`28498` by :user:`Thanh Lam Dang `. - |Enhancement| :class:`decomposition.PCA` now supports a new solver option named `svd\_solver="covariance\_eigh"` which offers an order of magnitude speed-up and reduced memory usage for datasets with a large number of data points and a small number of features (say, `n\_samples >> 1000 > n\_features`). The `svd\_solver="auto"` option has been updated to use the new solver automatically for such datasets. This solver also accepts sparse input data. :pr:`27491` by :user:`Olivier Grisel `. - |Fix| :class:`decomposition.PCA` fit with `svd\_solver="arpack"`, `whiten=True` and a value for `n\_components` that is larger than the rank of the training set, no longer returns infinite values when transforming hold-out data. :pr:`27491` by :user:`Olivier Grisel `. :mod:`sklearn.dummy` .................... - |Enhancement| :class:`dummy.DummyClassifier` and :class:`dummy.DummyRegressor` now have the `n\_features\_in\_` and `feature\_names\_in\_` attributes after `fit`. :pr:`27937` by :user:`Marco vd Boom `. :mod:`sklearn.ensemble` ....................... - |Efficiency| Improves runtime of `predict` of :class:`ensemble.HistGradientBoostingClassifier` by avoiding a call to `predict\_proba`. :pr:`27844` by :user:`Christian Lorentzen `. - |Efficiency| :class:`ensemble.HistGradientBoostingClassifier` and :class:`ensemble.HistGradientBoostingRegressor` are now a tiny bit faster by pre-sorting the data before finding the thresholds for binning. :pr:`28102` by :user:`Christian Lorentzen `. 
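A minimal sketch of the PCA solver selection described above (assuming scikit-learn >= 1.5, where `svd\_solver="auto"` can dispatch to the new `"covariance\_eigh"` path for tall, narrow data; that solver name can also be passed explicitly)::

    import numpy as np
    from sklearn.decomposition import PCA

    # Tall-and-skinny data: many samples, few features.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((5000, 20))

    # "auto" picks an appropriate solver for this shape; on 1.5+ this is
    # the new "covariance_eigh" solver, which also accepts sparse input.
    pca = PCA(n_components=5, svd_solver="auto").fit(X)
    X_red = pca.transform(X)
    print(X_red.shape)  # (5000, 5)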
- |Fix| Fixes a bug in :class:`ensemble.HistGradientBoostingClassifier` and :class:`ensemble.HistGradientBoostingRegressor` when `monotonic\_cst` is specified for non-categorical features. :pr:`28925` by :user:`Xiao Yuan `. :mod:`sklearn.feature\_extraction` ................................. - |Efficiency| :class:`feature\_extraction.text.TfidfTransformer` is now faster and more memory-efficient by using a NumPy vector instead of a sparse matrix for storing the inverse document frequency. :pr:`18843` by :user:`Paolo Montesel `. - |Enhancement| :class:`feature\_extraction.text.TfidfTransformer` now preserves the data type of the input matrix if it is `np.float64` or `np.float32`. :pr:`28136` by :user:`Guillaume Lemaitre `. :mod:`sklearn.feature\_selection` ................................ - |Enhancement| :func:`feature\_selection.mutual\_info\_regression` and :func:`feature\_selection.mutual\_info\_classif` now support `n\_jobs` parameter. :pr:`28085` by :user:`Neto Menoci ` and :user:`Florin Andrei `. - |Enhancement| The `cv\_results\_` attribute of :class:`feature\_selection.RFECV` has a new key, `n\_features`, containing an array with the number of features selected at each step. :pr:`28670` by :user:`Miguel Silva `. :mod:`sklearn.impute` ..................... - |Enhancement| :class:`impute.SimpleImputer` now supports custom strategies by passing a function in place of a strategy name. :pr:`28053` by :user:`Mark Elliot `. :mod:`sklearn.inspection` ......................... - |Fix| :meth:`inspection.DecisionBoundaryDisplay.from\_estimator` no longer warns about missing feature names when provided a `polars.DataFrame`. :pr:`28718` by :user:`Patrick Wang `. :mod:`sklearn.linear\_model` ........................... - |Enhancement| Solver `"newton-cg"` in :class:`linear\_model.LogisticRegression` and :class:`linear\_model.LogisticRegressionCV` now emits information when `verbose` is set to positive values. :pr:`27526` by :user:`Christian Lorentzen `. 
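The :class:`feature\_extraction.text.TfidfTransformer` dtype preservation noted above can be checked with a small sketch (assuming scikit-learn >= 1.5; earlier versions upcast the output to `np.float64`)::

    import numpy as np
    import scipy.sparse as sp
    from sklearn.feature_extraction.text import TfidfTransformer

    # A small document-term count matrix stored in float32.
    counts = sp.csr_matrix(
        np.array([[3, 0, 1], [2, 0, 0], [3, 0, 2]], dtype=np.float32)
    )

    # The transformed matrix keeps the float32 dtype of the input.
    tfidf = TfidfTransformer().fit_transform(counts)
    print(tfidf.dtype)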
- |Fix| :class:`linear\_model.ElasticNet`, :class:`linear\_model.ElasticNetCV`, :class:`linear\_model.Lasso` and :class:`linear\_model.LassoCV` now explicitly don't accept large sparse data formats. :pr:`27576` by :user:`Stefanie Senger `. - |Fix| :class:`linear\_model.RidgeCV` and :class:`RidgeClassifierCV` correctly pass `sample\_weight` to the underlying scorer when `cv` is None. :pr:`27560` by :user:`Omar Salman `. - |Fix| `n\_nonzero\_coefs\_` attribute in :class:`linear\_model.OrthogonalMatchingPursuit` will now always be `None` when `tol` is set, as `n\_nonzero\_coefs` is ignored in this case. :pr:`28557` by :user:`Lucy Liu `. - |API| :class:`linear\_model.RidgeCV` and :class:`linear\_model.RidgeClassifierCV` will now allow `alpha=0` when `cv != None`, which is consistent with :class:`linear\_model.Ridge` and :class:`linear\_model.RidgeClassifier`. :pr:`28425` by :user:`Lucy Liu `. - |API| Passing `average=0` to disable averaging is deprecated in :class:`linear\_model.PassiveAggressiveClassifier`, :class:`linear\_model.PassiveAggressiveRegressor`, :class:`linear\_model.SGDClassifier`, :class:`linear\_model.SGDRegressor` and :class:`linear\_model.SGDOneClassSVM`. Pass `average=False` instead. :pr:`28582` by :user:`Jérémie du Boisberranger `. - |API| Parameter `multi\_class` was deprecated in :class:`linear\_model.LogisticRegression` and :class:`linear\_model.LogisticRegressionCV`. `multi\_class` will be removed in 1.8, and internally, for 3 and more classes, it will always use multinomial. If you still want to use the one-vs-rest scheme, you can use `OneVsRestClassifier(LogisticRegression(..))`. :pr:`28703` by :user:`Christian Lorentzen `. - |API|
Parameters `store\_cv\_values` and `cv\_values\_` are deprecated in favor of `store\_cv\_results` and `cv\_results\_` in :class:`~linear\_model.RidgeCV` and :class:`~linear\_model.RidgeClassifierCV`. :pr:`28915` by :user:`Lucy Liu `. :mod:`sklearn.manifold` ....................... - |API| Deprecates `n\_iter` in favor of `max\_iter` in :class:`manifold.TSNE`. `n\_iter` will be removed in version 1.7. This makes :class:`manifold.TSNE` consistent with the rest of the estimators. :pr:`28471` by :user:`Lucy Liu `. :mod:`sklearn.metrics` ...................... - |Feature| :func:`metrics.pairwise\_distances` can now compute pairwise distances for non-numeric arrays as well. This is supported through custom metrics only. :pr:`27456` by :user:`Venkatachalam N `, :user:`Kshitij Mathur ` and :user:`Julian Libiseller-Egger `. - |Feature| :func:`sklearn.metrics.check\_scoring` now returns a multi-metric scorer when `scoring` is a `dict`, `set`, `tuple`, or `list`. :pr:`28360` by `Thomas Fan`\_. - |Feature| :func:`metrics.d2\_log\_loss\_score` has been added which calculates the D^2 score for the log loss. :pr:`28351` by :user:`Omar Salman `. - |Efficiency| Improve efficiency of functions :func:`~metrics.brier\_score\_loss`, :func:`~calibration.calibration\_curve`, :func:`~metrics.det\_curve`, :func:`~metrics.precision\_recall\_curve`, :func:`~metrics.roc\_curve` when the `pos\_label` argument is specified. 
Also improve efficiency of methods `from\_estimator` and `from\_predictions` in :class:`~metrics.RocCurveDisplay`, :class:`~metrics.PrecisionRecallDisplay`, :class:`~metrics.DetCurveDisplay`, :class:`~calibration.CalibrationDisplay`. :pr:`28051` by :user:`Pierre de Fréminville `. - |Fix| :func:`metrics.classification\_report` now shows only accuracy and not micro-average when input is a subset of labels. :pr:`28399` by :user:`Vineet Joshi `. - |Fix| Fix OpenBLAS 0.3.26 dead-lock on Windows in pairwise distances computation. This is likely to affect neighbor-based algorithms. :pr:`28692` by :user:`Loïc Estève `. - |API| :func:`metrics.precision\_recall\_curve` deprecated the keyword argument `probas\_pred` in favor of `y\_score`. `probas\_pred` will be removed in version 1.7. :pr:`28092` by :user:`Adam Li `. - |API| :func:`metrics.brier\_score\_loss` deprecated the keyword argument `y\_prob` in favor of `y\_proba`. `y\_prob` will be removed in version 1.7. :pr:`28092` by :user:`Adam Li `. - |API| For classifiers and classification metrics, labels encoded as bytes is deprecated and will raise an error in v1.7. :pr:`18555` by :user:`Kaushik Amar Das `. :mod:`sklearn.mixture` ...................... - |Fix| The `converged\_` attribute of :class:`mixture.GaussianMixture` and :class:`mixture.BayesianGaussianMixture` now reflects the convergence status of the best fit whereas it was previously `True` if any of the fits converged. :pr:`26837` by :user:`Krsto Proroković `. :mod:`sklearn.model\_selection` .............................. - |MajorFeature| :class:`model\_selection.TunedThresholdClassifierCV` finds the decision threshold of a binary classifier that maximizes a classification metric through cross-validation. :class:`model\_selection.FixedThresholdClassifier` is an alternative when one wants to use a fixed decision threshold without any tuning scheme. :pr:`26120` by :user:`Guillaume Lemaitre `. 
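A minimal sketch of the new :class:`model\_selection.TunedThresholdClassifierCV` described above (the dataset below is synthetic and only for illustration)::

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import TunedThresholdClassifierCV

    # An imbalanced binary problem, where the default 0.5 cutoff is
    # rarely the best choice for metrics such as balanced accuracy.
    X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)

    # Tune the decision threshold by cross-validation.
    tuned = TunedThresholdClassifierCV(
        LogisticRegression(), scoring="balanced_accuracy"
    ).fit(X, y)
    print(tuned.best_threshold_)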
- |Enhancement| :term:`CV splitters ` that ignore the `groups` parameter now raise a warning when groups are passed in to :term:`split`. :pr:`28210` by `Thomas Fan`\_. - |Enhancement| The HTML diagram representation of :class:`~model\_selection.GridSearchCV`, :class:`~model\_selection.RandomizedSearchCV`, :class:`~model\_selection.HalvingGridSearchCV`, and :class:`~model\_selection.HalvingRandomSearchCV` will show the best estimator when `refit=True`. :pr:`28722` by :user:`Yao Xiao ` and `Thomas Fan`\_. - |Fix| The ``cv\_results\_`` attribute (of :class:`model\_selection.GridSearchCV`) now returns masked arrays of the appropriate NumPy dtype, as opposed to always returning dtype ``object``. :pr:`28352` by :user:`Marco Gorelli`. - |Fix| :func:`model\_selection.train\_test\_split` works with Array API inputs. Previously indexing was not handled correctly, leading to exceptions when using strict implementations of the Array API like CuPy. :pr:`28407` by :user:`Tim Head `. :mod:`sklearn.multioutput` .......................... - |Enhancement| `chain\_method` parameter added to :class:`multioutput.ClassifierChain`. :pr:`27700` by :user:`Lucy Liu `. :mod:`sklearn.neighbors` ........................ - |Fix| Fixes :class:`neighbors.NeighborhoodComponentsAnalysis` such that `get\_feature\_names\_out` returns the correct number of feature names. :pr:`28306` by :user:`Brendan Lu `. :mod:`sklearn.pipeline` ....................... - |Feature| :class:`pipeline.FeatureUnion` can now use the `verbose\_feature\_names\_out` attribute. If `True`, `get\_feature\_names\_out` will prefix all feature names with the name of the transformer that generated that feature. If `False`, `get\_feature\_names\_out` will not prefix any feature names and will error if feature
names are not unique. :pr:`25991` by :user:`Jiawei Zhang `. :mod:`sklearn.preprocessing` ............................ - |Enhancement| :class:`preprocessing.QuantileTransformer` and :func:`preprocessing.quantile\_transform` now support disabling subsampling explicitly. :pr:`27636` by :user:`Ralph Urlus `. :mod:`sklearn.tree` ................... - |Enhancement| Plotting trees in matplotlib via :func:`tree.plot\_tree` now shows a "True/False" label to indicate the directionality the samples traverse given the split condition. :pr:`28552` by :user:`Adam Li `. :mod:`sklearn.utils` .................... - |Fix| :func:`~utils.\_safe\_indexing` now works correctly for polars DataFrame when `axis=0` and supports indexing polars Series. :pr:`28521` by :user:`Yao Xiao `. - |API| :data:`utils.IS\_PYPY` is deprecated and will be removed in version 1.7. :pr:`28768` by :user:`Jérémie du Boisberranger `. - |API| :func:`utils.tosequence` is deprecated and will be removed in version 1.7. :pr:`28763` by :user:`Jérémie du Boisberranger `. - |API| `utils.parallel\_backend` and `utils.register\_parallel\_backend` are deprecated and will be removed in version 1.7. Use `joblib.parallel\_backend` and `joblib.register\_parallel\_backend` instead. :pr:`28847` by :user:`Jérémie du Boisberranger `. - |API| Raise informative warning message in :func:`~utils.multiclass.type\_of\_target` when represented as bytes. For classifiers and classification metrics, labels encoded as bytes is deprecated and will raise an error in v1.7. :pr:`18555` by :user:`Kaushik Amar Das `. 
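The :class:`pipeline.FeatureUnion` `verbose\_feature\_names\_out` option mentioned above can be sketched as follows (assuming scikit-learn >= 1.5)::

    import numpy as np
    from sklearn.pipeline import FeatureUnion
    from sklearn.preprocessing import MinMaxScaler, StandardScaler

    X = np.array([[0.0, 1.0], [2.0, 3.0], [4.0, 5.0]])

    # With verbose_feature_names_out=True (the default), output names are
    # prefixed by the transformer name; with False, raw names are kept but
    # an error is raised if they collide.
    union = FeatureUnion(
        [("scale", StandardScaler()), ("minmax", MinMaxScaler())],
        verbose_feature_names_out=True,
    ).fit(X)
    print(list(union.get_feature_names_out()))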
- |API| :func:`utils.estimator\_checks.check\_estimator\_sparse\_data` was split into two functions: :func:`utils.estimator\_checks.check\_estimator\_sparse\_matrix` and :func:`utils.estimator\_checks.check\_estimator\_sparse\_array`. :pr:`27576` by :user:`Stefanie Senger `. .. rubric:: Code and documentation contributors Thanks to everyone who has contributed to the maintenance and improvement of the project since version 1.4, including: 101AlexMartin, Abdulaziz Aloqeely, Adam J. Stewart, Adam Li, Adarsh Wase, Adeyemi Biola, Aditi Juneja, Adrin Jalali, Advik Sinha, Aisha, Akash Srivastava, Akihiro Kuno, Alan Guedes, Alberto Torres, Alexis IMBERT, alexqiao, Ana Paula Gomes, Anderson Nelson, Andrei Dzis, Arif Qodari, Arnaud Capitaine, Arturo Amor, Aswathavicky, Audrey Flanders, awwwyan, baggiponte, Bharat Raghunathan, bme-git, brdav, Brendan Lu, Brigitta Sipőcz, Bruno, Cailean Carter, Cemlyn, Christian Lorentzen, Christian Veenhuis, Cindy Liang, Claudio Salvatore Arcidiacono, Connor Boyle, Conrad Stevens, crispinlogan, David Matthew Cherney, Davide Chicco, davidleon123, dependabot[bot], DerWeh, dinga92, Dipan Banik, Drew Craeton, Duarte São José, DUONG, Eddie Bergman, Edoardo Abati, Egehan Gunduz, Emad Izadifar, EmilyXinyi, Erich Schubert, Evelyn, Filip Karlo Došilović, Franck Charras, Gael Varoquaux, Gönül Aycı, Guillaume Lemaitre, Gyeongjae Choi, Harmanan Kohli, Hong Xiang Yue, Ian Faust, Ilya Komarov, itsaphel, Ivan Wiryadi, Jack Bowyer, Javier Marin Tur, Jérémie du Boisberranger, Jérôme Dockès, Jiawei Zhang, João Morais, Joe Cainey, Joel Nothman, Johanna Bayer, John Cant, John Enblom, John Hopfensperger, jpcars, jpienaar-tuks, Julian Chan, Julian Libiseller-Egger, Julien Jerphanion, KanchiMoe, Kaushik Amar Das, keyber, Koustav Ghosh, kraktus, Krsto Proroković, Lars, ldwy4, LeoGrin, lihaitao, Linus Sommer, Loic Esteve, Lucy Liu, Lukas Geiger, m-maggi, manasimj, Manuel Labbé, Manuel Morales, Marco Edward Gorelli, Marco Wolsza, Maren Westermann, Marija Vlajic, Mark 
Elliot, Martin Helm, Mateusz Sokół, mathurinm, Mavs, Michael Dawson, Michael Higgins, Michael Mayer, miguelcsilva, Miki Watanabe, Mohammed Hamdy, myenugula, Nathan Goldbaum, Naziya Mahimkar, nbrown-ScottLogic, Neto, Nithish Bolleddula, notPlancha, Olivier Grisel, Omar Salman, ParsifalXu, Patrick Wang, Pierre de Fréminville, Piotr, Priyank Shroff, Priyansh Gupta, Priyash Shah, Puneeth K, Rahil Parikh, raisadz, Raj Pulapakura, Ralf Gommers, Ralph Urlus, Randolf Scholz, renaissance0ne, Reshama Shaikh, Richard Barnes, Robert Pollak, Roberto Rosati, Rodrigo Romero, rwelsch427, Saad Mahmood, Salim Dohri, Sandip Dutta, SarahRemus, scikit-learn-bot, Shaharyar Choudhry, Shubham, sperret6, Stefanie Senger, Steffen Schneider, Suha Siddiqui, Thanh Lam DANG, thebabush, Thomas, Thomas J. Fan, Thomas Lazarus, Tialo, Tim Head, Tuhin Sharma, Tushar Parimi, VarunChaduvula, Vineet Joshi, virchan, Waël Boukhobza, Weyb, Will Dean, Xavier Beltran, Xiao Yuan, Xuefeng Xu, Yao Xiao, yareyaredesuyo, Ziad Amerr, Štěpán Sršeň
.. include:: \_contributors.rst .. currentmodule:: sklearn .. \_release\_notes\_1\_4: =========== Version 1.4 =========== For a short description of the main highlights of the release, please refer to :ref:`sphx\_glr\_auto\_examples\_release\_highlights\_plot\_release\_highlights\_1\_4\_0.py`. .. include:: changelog\_legend.inc .. \_changes\_1\_4\_2: Version 1.4.2 ============= \*\*April 2024\*\* This release only includes support for numpy 2. .. \_changes\_1\_4\_1: Version 1.4.1 ============= \*\*February 2024\*\* Changed models -------------- - |API| The `tree\_.value` attribute in :class:`tree.DecisionTreeClassifier`, :class:`tree.DecisionTreeRegressor`, :class:`tree.ExtraTreeClassifier` and :class:`tree.ExtraTreeRegressor` changed from a weighted absolute count of number of samples to a weighted fraction of the total number of samples. :pr:`27639` by :user:`Samuel Ronsin `. Metadata Routing ---------------- - |Fix| Fix routing issue with :class:`~compose.ColumnTransformer` when used inside another meta-estimator. :pr:`28188` by `Adrin Jalali`\_. - |Fix| No error is raised when no metadata is passed to a meta-estimator that includes a sub-estimator which doesn't support metadata routing. :pr:`28256` by `Adrin Jalali`\_. - |Fix| Fix :class:`multioutput.MultiOutputRegressor` and :class:`multioutput.MultiOutputClassifier` to work with estimators that don't consume any metadata when metadata routing is enabled. :pr:`28240` by `Adrin Jalali`\_. DataFrame Support ----------------- - |Enhancement| |Fix| Pandas and Polars dataframes are validated directly without ducktyping checks. :pr:`28195` by `Thomas Fan`\_. Changes impacting many modules ------------------------------ - |Efficiency| |Fix| Partial revert of :pr:`28191` to avoid a performance regression for estimators relying on euclidean pairwise computation with sparse matrices. 
The impacted estimators are: - :func:`sklearn.metrics.pairwise\_distances\_argmin` - :func:`sklearn.metrics.pairwise\_distances\_argmin\_min` - :class:`sklearn.cluster.AffinityPropagation` - :class:`sklearn.cluster.Birch` - :class:`sklearn.cluster.SpectralClustering` - :class:`sklearn.neighbors.KNeighborsClassifier` - :class:`sklearn.neighbors.KNeighborsRegressor` - :class:`sklearn.neighbors.RadiusNeighborsClassifier` - :class:`sklearn.neighbors.RadiusNeighborsRegressor` - :class:`sklearn.neighbors.LocalOutlierFactor` - :class:`sklearn.neighbors.NearestNeighbors` - :class:`sklearn.manifold.Isomap` - :class:`sklearn.manifold.TSNE` - :func:`sklearn.manifold.trustworthiness` :pr:`28235` by :user:`Julien Jerphanion `. - |Fix| Fixes a bug for all scikit-learn transformers when using `set\_output` with `transform` set to `pandas` or `polars`. The bug could lead to wrong naming of the columns of the returned dataframe. :pr:`28262` by :user:`Guillaume Lemaitre `. - |Fix| When users try to use a method in :class:`~ensemble.StackingClassifier`, :class:`~ensemble.StackingRegressor`, :class:`~feature\_selection.SelectFromModel`, :class:`~feature\_selection.RFE`, :class:`~semi\_supervised.SelfTrainingClassifier`, :class:`~multiclass.OneVsOneClassifier`, :class:`~multiclass.OutputCodeClassifier` or :class:`~multiclass.OneVsRestClassifier` that their sub-estimators don't implement, the `AttributeError` is now reraised in the traceback. :pr:`28167` by :user:`Stefanie Senger `. Changelog --------- :mod:`sklearn.calibration` .......................... - |Fix| :class:`calibration.CalibratedClassifierCV` supports :term:`predict\_proba` with float32 output from the inner estimator. :pr:`28247` by `Thomas Fan`\_. :mod:`sklearn.cluster` ...................... - |Fix| :class:`cluster.AffinityPropagation` now avoids assigning multiple different clusters for equal points. :pr:`28121` by :user:`Pietro Peterlongo ` and :user:`Yao Xiao `. 
- |Fix| Avoid infinite loop in :class:`cluster.KMeans` when the number of clusters is larger than the number of non-duplicate samples. :pr:`28165` by :user:`Jérémie du Boisberranger `. :mod:`sklearn.compose` ...................... - |Fix| :class:`compose.ColumnTransformer` now transforms into a polars dataframe when `verbose\_feature\_names\_out=True` and the transformers internally use the same columns several times. Previously, it would raise an error due to duplicated column names. :pr:`28262` by :user:`Guillaume Lemaitre `. :mod:`sklearn.ensemble` ....................... - |Fix| Fixes :class:`HistGradientBoostingClassifier` and :class:`HistGradientBoostingRegressor` when fitted on a `pandas` `DataFrame` with extension dtypes, for example `pd.Int64Dtype`. :pr:`28385` by :user:`Loïc Estève `. - |Fix| Fixes error message raised by :class:`ensemble.VotingClassifier` when the target is multilabel or multiclass-multioutput in a DataFrame format. :pr:`27702` by :user:`Guillaume Lemaitre `. :mod:`sklearn.impute` ..................... - |Fix| :class:`impute.SimpleImputer` now raises an error in `.fit` and `.transform` if `fill\_value` cannot be cast to the input value dtype with `casting='same\_kind'`. :pr:`28365` by :user:`Leo Grinsztajn `. :mod:`sklearn.inspection` ......................... - |Fix| :func:`inspection.permutation\_importance` now properly handles `sample\_weight` together with subsampling (i.e., `max\_features` < 1.0). :pr:`28184` by :user:`Michael Mayer `. :mod:`sklearn.linear\_model` ........................... - |Fix| :class:`linear\_model.ARDRegression` now handles pandas input types for `predict(X, return\_std=True)`. :pr:`28377` by :user:`Eddie Bergman `. :mod:`sklearn.preprocessing` ............................ 
- |Fix| make :class:`preprocessing.FunctionTransformer` more lenient and overwrite output column names with the `get\_feature\_names\_out` in the following cases: (i) the input and output column names remain the same (happen when using NumPy `ufunc`); (ii) the input column names are numbers; (iii) the output will be set to Pandas or Polars dataframe. :pr:`28241` by :user:`Guillaume Lemaitre
`. - |Fix| :class:`preprocessing.FunctionTransformer` now also warns when `set\_output` is called with `transform="polars"` and `func` does not return a Polars dataframe or `feature\_names\_out` is not specified. :pr:`28263` by :user:`Guillaume Lemaitre `. - |Fix| :class:`preprocessing.TargetEncoder` no longer fails when `target\_type="continuous"` and the input is read-only. In particular, it now works with pandas copy-on-write mode enabled. :pr:`28233` by :user:`John Hopfensperger `. :mod:`sklearn.tree` ................... - |Fix| :class:`tree.DecisionTreeClassifier` and :class:`tree.DecisionTreeRegressor` now handle missing values properly. The internal criterion was not initialized when no missing values were present in the data, leading to potentially wrong criterion values. :pr:`28295` by :user:`Guillaume Lemaitre ` and :pr:`28327` by :user:`Adam Li `. :mod:`sklearn.utils` .................... - |Enhancement| |Fix| :func:`utils.metaestimators.available\_if` now reraises the error from the `check` function as the cause of the `AttributeError`. :pr:`28198` by `Thomas Fan`\_. - |Fix| :func:`utils.\_safe\_indexing` now raises a `ValueError` when `X` is a Python list and `axis=1`, as documented in the docstring. :pr:`28222` by :user:`Guillaume Lemaitre `. .. \_changes\_1\_4: Version 1.4.0 ============= \*\*January 2024\*\* Changed models -------------- The following estimators and functions, when fit with the same data and parameters, may produce different models from the previous version. This often occurs due to changes in the modelling logic (bug fixes or enhancements), or in random sampling procedures. 
- |Efficiency| :class:`linear\_model.LogisticRegression` and :class:`linear\_model.LogisticRegressionCV` now have much better convergence for solvers `"lbfgs"` and `"newton-cg"`. Both solvers can now reach much higher precision for the coefficients depending on the specified `tol`. Additionally, lbfgs can make better use of `tol`, i.e., stop sooner or reach higher precision. Note: lbfgs is the default solver, so this change might affect many models. This change also means that with this new version of scikit-learn, the resulting coefficients `coef\_` and `intercept\_` of your models will change for these two solvers (when fit on the same data again). The amount of change depends on the specified `tol`; for small values you will get more precise results. :pr:`26721` by :user:`Christian Lorentzen `. - |Fix| Fixes a memory leak seen in PyPy for estimators using the Cython loss functions. :pr:`27670` by :user:`Guillaume Lemaitre `. Changes impacting all modules ----------------------------- - |MajorFeature| Transformers now support polars output with `set\_output(transform="polars")`. :pr:`27315` by `Thomas Fan`\_. - |Enhancement| All estimators now recognize the column names from any dataframe that adopts the `DataFrame Interchange Protocol `\_\_. Dataframes that return a correct representation through `np.asarray(df)` are expected to work with our estimators and functions. :pr:`26464` by `Thomas Fan`\_. - |Enhancement| The HTML representation of estimators now includes a link to the documentation and is color-coded to denote whether the estimator is fitted or not (unfitted estimators are orange, fitted estimators are blue). :pr:`26616` by :user:`Riccardo Cappuzzo `, :user:`Ines Ibnukhsein `, :user:`Gael Varoquaux `, `Joel Nothman`\_ and :user:`Lilian Boulard `. - |Fix| Fixed a bug in most estimators and functions where setting a parameter to a large integer would cause a `TypeError`. :pr:`26648` by :user:`Naoise Holohan `. 
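A small sketch of the `set\_output` API behind the polars major feature above (shown with `transform="pandas"`, available since 1.2; `"polars"` works the same way on 1.4+ when the polars package is installed)::

    import numpy as np
    from sklearn.preprocessing import StandardScaler

    # Request dataframe output instead of a NumPy array;
    # pass "polars" here on 1.4+ for a polars.DataFrame.
    scaler = StandardScaler().set_output(transform="pandas")
    out = scaler.fit_transform(np.array([[1.0, 2.0], [3.0, 4.0]]))
    print(type(out).__name__)  # DataFrame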
Metadata Routing ---------------- The following models now support metadata routing in one or more of their methods. Refer to the :ref:`Metadata Routing User Guide ` for more details. - |Feature| :class:`LarsCV` and :class:`LassoLarsCV` now support metadata routing in their `fit` method and route metadata to the CV splitter. :pr:`27538` by :user:`Omar Salman `. - |Feature| :class:`multiclass.OneVsRestClassifier`, :class:`multiclass.OneVsOneClassifier` and :class:`multiclass.OutputCodeClassifier` now support metadata routing in their ``fit`` and ``partial\_fit``, and route metadata to the underlying estimator's
``fit`` and ``partial\_fit``. :pr:`27308` by :user:`Stefanie Senger `. - |Feature| :class:`pipeline.Pipeline` now supports metadata routing according to :ref:`metadata routing user guide `. :pr:`26789` by `Adrin Jalali`\_. - |Feature| :func:`~model\_selection.cross\_validate`, :func:`~model\_selection.cross\_val\_score`, and :func:`~model\_selection.cross\_val\_predict` now support metadata routing. The metadata are routed to the estimator's `fit`, the scorer, and the CV splitter's `split`. The metadata is accepted via the new `params` parameter. `fit\_params` is deprecated and will be removed in version 1.6. `groups` parameter is also not accepted as a separate argument when metadata routing is enabled and should be passed via the `params` parameter. :pr:`26896` by `Adrin Jalali`\_. - |Feature| :class:`~model\_selection.GridSearchCV`, :class:`~model\_selection.RandomizedSearchCV`, :class:`~model\_selection.HalvingGridSearchCV`, and :class:`~model\_selection.HalvingRandomSearchCV` now support metadata routing in their ``fit`` and ``score``, and route metadata to the underlying estimator's ``fit``, the CV splitter, and the scorer. :pr:`27058` by `Adrin Jalali`\_. - |Feature| :class:`~compose.ColumnTransformer` now supports metadata routing according to :ref:`metadata routing user guide `. :pr:`27005` by `Adrin Jalali`\_. - |Feature| :class:`linear\_model.LogisticRegressionCV` now supports metadata routing. 
:meth:`linear\_model.LogisticRegressionCV.fit` now accepts ``\*\*params`` which are passed to the underlying splitter and scorer. :meth:`linear\_model.LogisticRegressionCV.score` now accepts ``\*\*score\_params`` which are passed to the underlying scorer. :pr:`26525` by :user:`Omar Salman `. - |Feature| :class:`feature\_selection.SelectFromModel` now supports metadata routing in `fit` and `partial\_fit`. :pr:`27490` by :user:`Stefanie Senger `. - |Feature| :class:`linear\_model.OrthogonalMatchingPursuitCV` now supports metadata routing. Its `fit` now accepts ``\*\*fit\_params``, which are passed to the underlying splitter. :pr:`27500` by :user:`Stefanie Senger `. - |Feature| :class:`ElasticNetCV`, :class:`LassoCV`, :class:`MultiTaskElasticNetCV` and :class:`MultiTaskLassoCV` now support metadata routing and route metadata to the CV splitter. :pr:`27478` by :user:`Omar Salman `. - |Fix| All meta-estimators for which metadata routing is not yet implemented now raise a `NotImplementedError` on `get\_metadata\_routing` and on `fit` if metadata routing is enabled and any metadata is passed to them. :pr:`27389` by `Adrin Jalali`\_. Support for SciPy sparse arrays ------------------------------- Several estimators now support SciPy sparse arrays. 
The following functions and classes are impacted:

**Functions:**

- :func:`cluster.compute_optics_graph` in :pr:`27104` by :user:`Maren Westermann ` and in :pr:`27250` by :user:`Yao Xiao `;
- :func:`cluster.kmeans_plusplus` in :pr:`27179` by :user:`Nurseit Kamchyev `;
- :func:`decomposition.non_negative_factorization` in :pr:`27100` by :user:`Isaac Virshup `;
- :func:`feature_selection.f_regression` in :pr:`27239` by :user:`Yaroslav Korobko `;
- :func:`feature_selection.r_regression` in :pr:`27239` by :user:`Yaroslav Korobko `;
- :func:`manifold.trustworthiness` in :pr:`27250` by :user:`Yao Xiao `;
- :func:`manifold.spectral_embedding` in :pr:`27240` by :user:`Yao Xiao `;
- :func:`metrics.pairwise_distances` in :pr:`27250` by :user:`Yao Xiao `;
- :func:`metrics.pairwise_distances_chunked` in :pr:`27250` by :user:`Yao Xiao `;
- :func:`metrics.pairwise.pairwise_kernels` in :pr:`27250` by :user:`Yao Xiao `;
- :func:`utils.multiclass.type_of_target` in :pr:`27274` by :user:`Yao Xiao `.
**Classes:**

- :class:`cluster.HDBSCAN` in :pr:`27250` by :user:`Yao Xiao `;
- :class:`cluster.KMeans` in :pr:`27179` by :user:`Nurseit Kamchyev `;
- :class:`cluster.MiniBatchKMeans` in :pr:`27179` by :user:`Nurseit Kamchyev `;
- :class:`cluster.OPTICS` in :pr:`27104` by :user:`Maren Westermann ` and in :pr:`27250` by :user:`Yao Xiao `;
- :class:`cluster.SpectralClustering` in :pr:`27161` by :user:`Bharat Raghunathan `;
- :class:`decomposition.MiniBatchNMF` in :pr:`27100` by :user:`Isaac Virshup `;
- :class:`decomposition.NMF` in :pr:`27100` by :user:`Isaac Virshup `;
- :class:`feature_extraction.text.TfidfTransformer` in :pr:`27219` by :user:`Yao Xiao `;
- :class:`manifold.Isomap` in :pr:`27250` by :user:`Yao Xiao `;
- :class:`manifold.SpectralEmbedding` in :pr:`27240` by :user:`Yao Xiao `;
- :class:`manifold.TSNE` in :pr:`27250` by :user:`Yao Xiao `;
- :class:`impute.SimpleImputer` in :pr:`27277` by :user:`Yao Xiao `;
- :class:`impute.IterativeImputer` in :pr:`27277` by :user:`Yao Xiao `;
- :class:`impute.KNNImputer` in :pr:`27277` by :user:`Yao Xiao `;
- :class:`kernel_approximation.PolynomialCountSketch` in :pr:`27301` by :user:`Lohit SundaramahaLingam `;
- :class:`neural_network.BernoulliRBM` in :pr:`27252` by :user:`Yao Xiao `;
- :class:`preprocessing.PolynomialFeatures` in :pr:`27166` by :user:`Mohit Joshi `;
https://github.com/scikit-learn/scikit-learn/blob/main//doc/whats_new/v1.4.rst
- :class:`random_projection.GaussianRandomProjection` in :pr:`27314` by :user:`Stefanie Senger `;
- :class:`random_projection.SparseRandomProjection` in :pr:`27314` by :user:`Stefanie Senger `.

Support for Array API
---------------------

Several estimators and functions support the `Array API `_. Such changes allow for using the estimators and functions with other libraries such as JAX, CuPy, and PyTorch. This therefore enables some GPU-accelerated computations. See :ref:`array_api` for more details.

**Functions:**

- :func:`sklearn.metrics.accuracy_score` and :func:`sklearn.metrics.zero_one_loss` in :pr:`27137` by :user:`Edoardo Abati `;
- :func:`sklearn.model_selection.train_test_split` in :pr:`26855` by `Tim Head`_;
- :func:`~utils.multiclass.is_multilabel` in :pr:`27601` by :user:`Yaroslav Korobko `.

**Classes:**

- :class:`decomposition.PCA` for the `full` and `randomized` solvers (with QR power iterations) in :pr:`26315`, :pr:`27098` and :pr:`27431` by :user:`Mateusz Sokół `, :user:`Olivier Grisel ` and :user:`Edoardo Abati `;
- :class:`preprocessing.KernelCenterer` in :pr:`27556` by :user:`Edoardo Abati `;
- :class:`preprocessing.MaxAbsScaler` in :pr:`27110` by :user:`Edoardo Abati `;
- :class:`preprocessing.MinMaxScaler` in :pr:`26243` by `Tim Head`_;
- :class:`preprocessing.Normalizer` in :pr:`27558` by :user:`Edoardo Abati `.
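As a quick illustration of the idea behind Array API support (a minimal sketch, not from the changelog): the same estimator code runs on plain NumPy arrays today, and with `array_api_dispatch` enabled it can also accept inputs from other Array API libraries. The PyTorch path below is commented out and assumes the optional `array-api-compat` dependency is installed.

```python
# Sketch: MinMaxScaler on NumPy input; the commented-out variant shows how
# the same code would run on PyTorch tensors under array API dispatch.
import numpy as np
from sklearn.preprocessing import MinMaxScaler

X = np.array([[1.0, -1.0], [2.0, 0.0], [3.0, 1.0]])

scaler = MinMaxScaler()
X_scaled = scaler.fit_transform(X)  # plain NumPy path; columns mapped to [0, 1]

# Hypothetical PyTorch variant (requires torch and array-api-compat):
# import sklearn, torch
# with sklearn.config_context(array_api_dispatch=True):
#     X_scaled_t = MinMaxScaler().fit_transform(torch.asarray(X))
#     # the output stays a torch tensor instead of a NumPy array

print(X_scaled.min(axis=0), X_scaled.max(axis=0))
```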
Private Loss Function Module
----------------------------

- |Fix| The gradient computation of the binomial log loss is now numerically more stable for input (raw predictions) that is very large in absolute value. Before, it could result in `np.nan`. Among the models that profit from this change are :class:`ensemble.GradientBoostingClassifier`, :class:`ensemble.HistGradientBoostingClassifier` and :class:`linear_model.LogisticRegression`. :pr:`28048` by :user:`Christian Lorentzen `.

Changelog
---------

..
    Entries should be grouped by module (in alphabetic order) and prefixed with one of the labels: |MajorFeature|, |Feature|, |Efficiency|, |Enhancement|, |Fix| or |API| (see whats_new.rst for descriptions). Entries should be ordered by those labels (e.g. |Fix| after |Efficiency|). Changes not specific to a module should be listed under *Multiple Modules* or *Miscellaneous*. Entries should end with: :pr:`123456` by :user:`Joe Bloggs `. where 123456 is the *pull request* number, not the issue number.

:mod:`sklearn.base`
...................

- |Enhancement| :meth:`base.ClusterMixin.fit_predict` and :meth:`base.OutlierMixin.fit_predict` now accept ``**kwargs`` which are passed to the ``fit`` method of the estimator. :pr:`26506` by `Adrin Jalali`_.

- |Enhancement| :meth:`base.TransformerMixin.fit_transform` and :meth:`base.OutlierMixin.fit_predict` now raise a warning if ``transform`` / ``predict`` consume metadata, but no custom ``fit_transform`` / ``fit_predict`` is defined in the class inheriting from them correspondingly. :pr:`26831` by `Adrin Jalali`_.

- |Enhancement| :func:`base.clone` now supports `dict` as input and creates a copy. :pr:`26786` by `Adrin Jalali`_.

- |API| :func:`~utils.metadata_routing.process_routing` now has a different signature. The first two arguments (the object and the method) are positional only, and all metadata are passed as keyword arguments. :pr:`26909` by `Adrin Jalali`_.

:mod:`sklearn.calibration`
..........................
- |Enhancement| The internal objective and gradient of the `sigmoid` method of :class:`calibration.CalibratedClassifierCV` have been replaced by the private loss module. :pr:`27185` by :user:`Omar Salman `.

:mod:`sklearn.cluster`
......................

- |Fix| The `degree` parameter in the :class:`cluster.SpectralClustering` constructor now accepts real values instead of only integral values, in accordance with the `degree` parameter of :func:`sklearn.metrics.pairwise.polynomial_kernel`. :pr:`27668` by :user:`Nolan McMahon `.

- |Fix| Fixes a bug in :class:`cluster.OPTICS` where the cluster correction based on predecessor was not using the right indexing. It would lead to inconsistent results dependent on the order of the data. :pr:`26459` by :user:`Haoying Zhang ` and :user:`Guillaume Lemaitre `.

- |Fix| Improve the error message when checking the number of connected components in the `fit` method of :class:`cluster.HDBSCAN`. :pr:`27678` by :user:`Ganesh Tata `.
- |Fix| Create a copy of the precomputed sparse matrix within the `fit` method of :class:`cluster.DBSCAN` to avoid in-place modification of the sparse matrix. :pr:`27651` by :user:`Ganesh Tata `.

- |Fix| Raises a proper `ValueError` when `metric="precomputed"` and storing centers is requested via the parameter `store_centers`. :pr:`27898` by :user:`Guillaume Lemaitre `.

- |API| The `kdtree` and `balltree` values are now deprecated and are renamed as `kd_tree` and `ball_tree` respectively for the `algorithm` parameter of :class:`cluster.HDBSCAN`, ensuring consistency in the naming convention. The `kdtree` and `balltree` values will be removed in 1.6. :pr:`26744` by :user:`Shreesha Kumar Bhat `.

- |API| The option `metric=None` in :class:`cluster.AgglomerativeClustering` and :class:`cluster.FeatureAgglomeration` is deprecated in version 1.4 and will be removed in version 1.6. Use the default value instead. :pr:`27828` by :user:`Guillaume Lemaitre `.

:mod:`sklearn.compose`
......................

- |MajorFeature| Adds `polars `__ input support to :class:`compose.ColumnTransformer` through the `DataFrame Interchange Protocol `__. The minimum supported version for polars is `0.19.12`. :pr:`26683` by `Thomas Fan`_.

- |Fix| :func:`cluster.spectral_clustering` and :class:`cluster.SpectralClustering` now raise an explicit error message indicating that sparse matrices and arrays with `np.int64` indices are not supported. :pr:`27240` by :user:`Yao Xiao `.

- |API| Outputs that use pandas extension dtypes and contain `pd.NA` in :class:`~compose.ColumnTransformer` now result in a `FutureWarning` and will cause a `ValueError` in version 1.6, unless the output container has been configured as "pandas" with `set_output(transform="pandas")`. Before, such outputs resulted in numpy arrays of dtype `object` containing `pd.NA`, which could not be converted to numpy floats and caused errors when passed to other scikit-learn estimators. :pr:`27734` by :user:`Jérôme Dockès `.
:mod:`sklearn.covariance`
.........................

- |Enhancement| Allow :func:`covariance.shrunk_covariance` to process multiple covariance matrices at once by handling nd-arrays. :pr:`25275` by :user:`Quentin Barthélemy `.

- |API| |FIX| :class:`~compose.ColumnTransformer` now replaces `"passthrough"` with a corresponding :class:`~preprocessing.FunctionTransformer` in the fitted ``transformers_`` attribute. :pr:`27204` by `Adrin Jalali`_.

:mod:`sklearn.datasets`
.......................

- |Enhancement| :func:`datasets.make_sparse_spd_matrix` now uses a more memory-efficient sparse layout. It also accepts a new keyword `sparse_format` that allows specifying the output format of the sparse matrix. By default `sparse_format=None`, which returns a dense numpy ndarray, as before. :pr:`27438` by :user:`Yao Xiao `.

- |Fix| :func:`datasets.dump_svmlight_file` no longer raises a `ValueError` when `X` is read-only, e.g., a `numpy.memmap` instance. :pr:`28111` by :user:`Yao Xiao `.

- |API| :func:`datasets.make_sparse_spd_matrix` deprecated the keyword argument ``dim`` in favor of ``n_dim``. ``dim`` will be removed in version 1.6. :pr:`27718` by :user:`Adam Li `.

:mod:`sklearn.decomposition`
............................

- |Feature| :class:`decomposition.PCA` now supports :class:`scipy.sparse.sparray` and :class:`scipy.sparse.spmatrix` inputs when using the `arpack` solver. When used on sparse data such as :func:`datasets.fetch_20newsgroups_vectorized`, this can lead to speed-ups of 100x (single threaded) and 70x lower memory usage. Based on :user:`Alexander Tarashansky `'s implementation in `scanpy `_. :pr:`18689` by :user:`Isaac Virshup ` and :user:`Andrey Portnoy `.

- |Enhancement| An "auto" option was added to the `n_components` parameter of :func:`decomposition.non_negative_factorization`, :class:`decomposition.NMF` and :class:`decomposition.MiniBatchNMF` to automatically infer the number of components from W or H shapes when using a custom initialization.
  The default value of this parameter will change from `None` to `auto` in version 1.6. :pr:`26634` by :user:`Alexandre Landeau ` and :user:`Alexandre Vigny `.

- |Fix| :func:`decomposition.dict_learning_online` no longer ignores the parameter `max_iter`. :pr:`27834` by :user:`Guillaume Lemaitre `.

- |Fix| The `degree` parameter in the :class:`decomposition.KernelPCA` constructor now accepts real values instead of only integral values, in accordance with the `degree` parameter of :func:`sklearn.metrics.pairwise.polynomial_kernel`. :pr:`27668` by :user:`Nolan McMahon `.

- |API| The option `max_iter=None` in :class:`decomposition.MiniBatchDictionaryLearning`, :class:`decomposition.MiniBatchSparsePCA`, and :func:`decomposition.dict_learning_online` is deprecated and will be removed in version 1.6. Use the default value instead. :pr:`27834` by :user:`Guillaume Lemaitre `.

:mod:`sklearn.ensemble`
.......................

- |MajorFeature| :class:`ensemble.RandomForestClassifier` and :class:`ensemble.RandomForestRegressor` support missing values when the criterion is `gini`, `entropy`, or `log_loss` for classification, or `squared_error`, `friedman_mse`, or `poisson` for regression. :pr:`26391` by `Thomas Fan`_.
- |MajorFeature| :class:`ensemble.HistGradientBoostingClassifier` and :class:`ensemble.HistGradientBoostingRegressor` support `categorical_features="from_dtype"`, which treats columns with Pandas or Polars Categorical dtype as categories in the algorithm. `categorical_features="from_dtype"` will become the default in v1.6. Categorical features no longer need to be encoded with numbers. When categorical features are numbers, the maximum value no longer needs to be smaller than `max_bins`; only the number of (unique) categories must be smaller than `max_bins`. :pr:`26411` by `Thomas Fan`_ and :pr:`27835` by :user:`Jérôme Dockès `.

- |MajorFeature| :class:`ensemble.HistGradientBoostingClassifier` and :class:`ensemble.HistGradientBoostingRegressor` got the new parameter `max_features` to specify the proportion of randomly chosen features considered in each split. :pr:`27139` by :user:`Christian Lorentzen `.

- |Feature| :class:`ensemble.RandomForestClassifier`, :class:`ensemble.RandomForestRegressor`, :class:`ensemble.ExtraTreesClassifier` and :class:`ensemble.ExtraTreesRegressor` now support monotonic constraints, useful when features are supposed to have a positive/negative effect on the target. Missing values in the training data and multi-output targets are not supported. :pr:`13649` by :user:`Samuel Ronsin `, initiated by :user:`Patrick O'Reilly `.
- |Efficiency| :class:`ensemble.HistGradientBoostingClassifier` and :class:`ensemble.HistGradientBoostingRegressor` are now a bit faster by reusing the parent node's histogram as a child node's histogram in the subtraction trick. In effect, less memory has to be allocated and deallocated. :pr:`27865` by :user:`Christian Lorentzen `.

- |Efficiency| :class:`ensemble.GradientBoostingClassifier` is faster, for binary and in particular for multiclass problems, thanks to the private loss function module. :pr:`26278` and :pr:`28095` by :user:`Christian Lorentzen `.

- |Efficiency| Improves runtime and memory usage for :class:`ensemble.GradientBoostingClassifier` and :class:`ensemble.GradientBoostingRegressor` when trained on sparse data. :pr:`26957` by `Thomas Fan`_.

- |Efficiency| :class:`ensemble.HistGradientBoostingClassifier` and :class:`ensemble.HistGradientBoostingRegressor` are now faster when `scoring` is a predefined metric listed in :func:`metrics.get_scorer_names` and early stopping is enabled. :pr:`26163` by `Thomas Fan`_.

- |Enhancement| A fitted property, ``estimators_samples_``, was added to all Forest methods, including :class:`ensemble.RandomForestClassifier`, :class:`ensemble.RandomForestRegressor`, :class:`ensemble.ExtraTreesClassifier` and :class:`ensemble.ExtraTreesRegressor`, which allows retrieving the training sample indices used for each tree estimator. :pr:`26736` by :user:`Adam Li `.

- |Fix| Fixes :class:`ensemble.IsolationForest` when the input is a sparse matrix and `contamination` is set to a float value. :pr:`27645` by :user:`Guillaume Lemaitre `.

- |Fix| Raises a `ValueError` in :class:`ensemble.RandomForestRegressor` and :class:`ensemble.ExtraTreesRegressor` when requesting an OOB score with a multioutput model whose targets are all rounded to integers. It was previously recognized as a multiclass problem.
  :pr:`27817` by :user:`Daniele Ongari `.

- |Fix| Changes estimator tags to acknowledge that :class:`ensemble.VotingClassifier`, :class:`ensemble.VotingRegressor`, :class:`ensemble.StackingClassifier`, and :class:`ensemble.StackingRegressor` support missing values if all `estimators` support missing values. :pr:`27710` by :user:`Guillaume Lemaitre `.

- |Fix| Support loading pickles of :class:`ensemble.HistGradientBoostingClassifier` and :class:`ensemble.HistGradientBoostingRegressor` when the pickle has been generated on a platform with a different bitness. A typical example is to train and pickle the model on a 64-bit machine and load the model on a 32-bit machine for prediction. :pr:`28074` by :user:`Christian Lorentzen ` and :user:`Loïc Estève `.

- |API| In :class:`ensemble.AdaBoostClassifier`, the `algorithm` argument `SAMME.R` was deprecated and will be removed in 1.6. :pr:`26830` by :user:`Stefanie Senger `.

:mod:`sklearn.feature_extraction`
.................................

- |API| Changed the error type from :class:`AttributeError` to :class:`exceptions.NotFittedError` in unfitted instances of :class:`feature_extraction.DictVectorizer` for the following methods: :func:`feature_extraction.DictVectorizer.inverse_transform`, :func:`feature_extraction.DictVectorizer.restrict`, :func:`feature_extraction.DictVectorizer.transform`. :pr:`24838` by :user:`Lorenz Hertel `.

:mod:`sklearn.feature_selection`
................................

- |Enhancement| :class:`feature_selection.SelectKBest`, :class:`feature_selection.SelectPercentile`, and :class:`feature_selection.GenericUnivariateSelect` now support unsupervised feature selection by providing a `score_func` taking `X` and `y=None`. :pr:`27721` by :user:`Guillaume Lemaitre `.

- |Enhancement| :class:`feature_selection.SelectKBest` and :class:`feature_selection.GenericUnivariateSelect` with `mode='k_best'` now show a warning when `k` is greater than the number of features. :pr:`27841` by `Thomas Fan`_.
- |Fix| :class:`feature_selection.RFE` and :class:`feature_selection.RFECV` do not check for nans during input validation. :pr:`21807` by `Thomas Fan`_.

:mod:`sklearn.inspection`
.........................

- |Enhancement| :class:`inspection.DecisionBoundaryDisplay` now accepts a parameter `class_of_interest` to select the class of interest when plotting the response provided by `response_method="predict_proba"` or `response_method="decision_function"`. It allows plotting the decision boundary for both binary and multiclass classifiers. :pr:`27291` by :user:`Guillaume Lemaitre `.

- |Fix| :meth:`inspection.DecisionBoundaryDisplay.from_estimator` and :class:`inspection.PartialDependenceDisplay.from_estimator` now return the correct type for subclasses. :pr:`27675` by :user:`John Cant `.

- |API| :class:`inspection.DecisionBoundaryDisplay` raises an `AttributeError` instead of a `ValueError` when an estimator does not implement the requested response method. :pr:`27291` by :user:`Guillaume Lemaitre `.

:mod:`sklearn.kernel_ridge`
...........................

- |Fix| The `degree` parameter in the :class:`kernel_ridge.KernelRidge` constructor now accepts real values instead of only integral values, in accordance with the `degree` parameter of :func:`sklearn.metrics.pairwise.polynomial_kernel`. :pr:`27668` by :user:`Nolan McMahon `.

:mod:`sklearn.linear_model`
...........................

- |Efficiency| :class:`linear_model.LogisticRegression` and :class:`linear_model.LogisticRegressionCV` now have much better convergence for solvers `"lbfgs"` and `"newton-cg"`.
  Both solvers can now reach much higher precision for the coefficients depending on the specified `tol`. Additionally, lbfgs can make better use of `tol`, i.e., stop sooner or reach higher precision. This is accomplished by better scaling of the objective function, i.e., using average per-sample losses instead of the sum of per-sample losses. :pr:`26721` by :user:`Christian Lorentzen `.

- |Efficiency| :class:`linear_model.LogisticRegression` and :class:`linear_model.LogisticRegressionCV` with solver `"newton-cg"` can now be considerably faster for some data and parameter settings. This is accomplished by a better line-search convergence check for negligible loss improvements that takes into account gradient information. :pr:`26721` by :user:`Christian Lorentzen `.

- |Efficiency| Solver `"newton-cg"` in :class:`linear_model.LogisticRegression` and :class:`linear_model.LogisticRegressionCV` uses a little less memory. The effect is proportional to the number of coefficients (`n_features * n_classes`). :pr:`27417` by :user:`Christian Lorentzen `.

- |Fix| Ensure that the `sigma_` attribute of :class:`linear_model.ARDRegression` and :class:`linear_model.BayesianRidge` always has a `float32` dtype when fitted on `float32` data, even with the type promotion rules of NumPy 2. :pr:`27899` by :user:`Olivier Grisel `.

- |API| The attribute `loss_function_` of :class:`linear_model.SGDClassifier` and :class:`linear_model.SGDOneClassSVM` has been deprecated and will be removed in version 1.6. :pr:`27979` by :user:`Christian Lorentzen `.

:mod:`sklearn.metrics`
......................

- |Efficiency| Computing pairwise distances via :class:`metrics.DistanceMetric` for CSR x CSR, Dense x CSR, and CSR x Dense datasets is now 1.5x faster. :pr:`26765` by :user:`Meekail Zain `.

- |Efficiency| Computing distances via :class:`metrics.DistanceMetric` for CSR x CSR, Dense x CSR, and CSR x Dense now uses ~50% less memory, and outputs distances in the same dtype as the provided data.
  :pr:`27006` by :user:`Meekail Zain `.

- |Enhancement| Improve the rendering of the plot obtained with the :class:`metrics.PrecisionRecallDisplay` and :class:`metrics.RocCurveDisplay` classes. The x- and y-axis limits are set to [0, 1] and the aspect ratio between both axes is set to 1 to get a square plot. :pr:`26366` by :user:`Mojdeh Rastgoo `.

- |Enhancement| Added `neg_root_mean_squared_log_error_scorer` as a scorer. :pr:`26734` by :user:`Alejandro Martin Gil <101AlexMartin>`.

- |Enhancement| :func:`metrics.confusion_matrix` now warns when only one label is found in `y_true` and `y_pred`. :pr:`27650` by :user:`Lucy Liu `.

- |Fix| Computing pairwise distances with :func:`metrics.pairwise.euclidean_distances` no longer raises an exception when `X` is provided as a `float64` array and `X_norm_squared` as a `float32` array. :pr:`27624` by :user:`Jérôme Dockès `.

- |Fix| :func:`f1_score` now provides correct values when handling various cases in which division by zero occurs, by using a formulation that does not depend on the precision and recall values. :pr:`27577` by :user:`Omar Salman ` and :user:`Guillaume Lemaitre `.
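For example (an illustrative sketch, not from the changelog), a classifier that never predicts the positive class hits the division-by-zero case for precision, and the score is cleanly defined as 0.0:

```python
# Sketch: f1_score where precision's denominator is zero (no positive
# predictions); the default zero_division behaviour yields 0.0.
import warnings
from sklearn.metrics import f1_score

y_true = [0, 0, 1, 1]
y_pred = [0, 0, 0, 0]  # the positive class is never predicted

with warnings.catch_warnings():
    warnings.simplefilter("ignore")  # silence the UndefinedMetricWarning
    score = f1_score(y_true, y_pred)

print(score)  # 0.0
```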
- |Fix| :func:`metrics.make_scorer` now raises an error when using a regressor on a scorer requesting a non-thresholded decision function (from `decision_function` or `predict_proba`). Such scorers are specific to classification. :pr:`26840` by :user:`Guillaume Lemaitre `.

- |Fix| :meth:`metrics.DetCurveDisplay.from_predictions`, :class:`metrics.PrecisionRecallDisplay.from_predictions`, :class:`metrics.PredictionErrorDisplay.from_predictions`, and :class:`metrics.RocCurveDisplay.from_predictions` now return the correct type for subclasses. :pr:`27675` by :user:`John Cant `.

- |API| Deprecated `needs_threshold` and `needs_proba` from :func:`metrics.make_scorer`. These parameters will be removed in version 1.6. Instead, use `response_method`, which accepts `"predict"`, `"predict_proba"` or `"decision_function"`, or a list of such values. `needs_proba=True` is equivalent to `response_method="predict_proba"` and `needs_threshold=True` is equivalent to `response_method=("decision_function", "predict_proba")`. :pr:`26840` by :user:`Guillaume Lemaitre `.

- |API| The `squared` parameter of :func:`metrics.mean_squared_error` and :func:`metrics.mean_squared_log_error` is deprecated and will be removed in 1.6. Use the new functions :func:`metrics.root_mean_squared_error` and :func:`metrics.root_mean_squared_log_error` instead. :pr:`26734` by :user:`Alejandro Martin Gil <101AlexMartin>`.

:mod:`sklearn.model_selection`
..............................

- |Enhancement| :func:`model_selection.learning_curve` raises a warning when every cross-validation fold fails. :pr:`26299` by :user:`Rahil Parikh `.
- |Fix| :class:`model_selection.GridSearchCV`, :class:`model_selection.RandomizedSearchCV`, and :class:`model_selection.HalvingGridSearchCV` no longer change the given object in the parameter grid if it is an estimator. :pr:`26786` by `Adrin Jalali`_.

:mod:`sklearn.multioutput`
..........................

- |Enhancement| Add the method `predict_log_proba` to :class:`multioutput.ClassifierChain`. :pr:`27720` by :user:`Guillaume Lemaitre `.

:mod:`sklearn.neighbors`
........................

- |Efficiency| :meth:`sklearn.neighbors.KNeighborsRegressor.predict` and :meth:`sklearn.neighbors.KNeighborsClassifier.predict_proba` now efficiently support pairs of dense and sparse datasets. :pr:`27018` by :user:`Julien Jerphanion `.

- |Efficiency| The performance of :meth:`neighbors.RadiusNeighborsClassifier.predict` and of :meth:`neighbors.RadiusNeighborsClassifier.predict_proba` has been improved when `radius` is large and `algorithm="brute"` with non-Euclidean metrics. :pr:`26828` by :user:`Omar Salman `.

- |Fix| Improve the error message for :class:`neighbors.LocalOutlierFactor` when it is invoked with `n_samples=n_neighbors`. :pr:`23317` by :user:`Bharat Raghunathan `.

- |Fix| :meth:`neighbors.KNeighborsClassifier.predict` and :meth:`neighbors.KNeighborsClassifier.predict_proba` now raise an error when the weights of all neighbors of some sample are zero. This can happen when `weights` is a user-defined function. :pr:`26410` by :user:`Yao Xiao `.

- |API| :class:`neighbors.KNeighborsRegressor` now accepts :class:`metrics.DistanceMetric` objects directly via the `metric` keyword argument, allowing for the use of accelerated third-party :class:`metrics.DistanceMetric` objects. :pr:`26267` by :user:`Meekail Zain `.

:mod:`sklearn.preprocessing`
............................

- |Efficiency| :class:`preprocessing.OrdinalEncoder` avoids calculating missing indices twice to improve efficiency. :pr:`27017` by :user:`Xuefeng Xu `.
- |Efficiency| Improves efficiency in :class:`preprocessing.OneHotEncoder` and :class:`preprocessing.OrdinalEncoder` when checking `nan`. :pr:`27760` by :user:`Xuefeng Xu `.

- |Enhancement| Improves warnings in :class:`preprocessing.FunctionTransformer` when `func` returns a pandas dataframe and the output is configured to be pandas. :pr:`26944` by `Thomas Fan`_.

- |Enhancement| :class:`preprocessing.TargetEncoder` now supports `target_type` 'multiclass'. :pr:`26674` by :user:`Lucy Liu `.

- |Fix| :class:`preprocessing.OneHotEncoder` and :class:`preprocessing.OrdinalEncoder` raise an exception when `nan` is a category and is not the last in the user's provided categories. :pr:`27309` by :user:`Xuefeng Xu `.

- |Fix| :class:`preprocessing.OneHotEncoder` and :class:`preprocessing.OrdinalEncoder` raise an exception if the user-provided categories contain duplicates. :pr:`27328` by :user:`Xuefeng Xu `.

- |Fix| :class:`preprocessing.FunctionTransformer` raises an error at `transform` if the output of `get_feature_names_out` is not consistent with the column names of the output container, if those are defined. :pr:`27801` by :user:`Guillaume Lemaitre `.

- |Fix| Raise a `NotFittedError` in :class:`preprocessing.OrdinalEncoder` when calling `transform` without calling `fit`, since `categories` always needs to be checked. :pr:`27821` by :user:`Guillaume Lemaitre `.

:mod:`sklearn.tree`
...................

- |Feature| :class:`tree.DecisionTreeClassifier`, :class:`tree.DecisionTreeRegressor`, :class:`tree.ExtraTreeClassifier` and :class:`tree.ExtraTreeRegressor` now support monotonic constraints, useful when features are supposed to have a positive/negative effect on the target. Missing values in the training data and multi-output targets are not supported. :pr:`13649` by :user:`Samuel Ronsin `, initiated by :user:`Patrick O'Reilly `.
now support monotonic constraints, useful when features are supposed to have a positive/negative effect on the target. Missing values in the train data and multi-output targets are not supported. :pr:`13649` by :user:`Samuel Ronsin `, initiated by :user:`Patrick O'Reilly `. :mod:`sklearn.utils` .................... - |Enhancement| :func:`sklearn.utils.estimator\_html\_repr` dynamically adapts diagram colors based on the browser's `prefers-color-scheme`, providing improved adaptability to dark mode environments. :pr:`26862` by :user:`Andrew Goh Yisheng <9y5>`, `Thomas Fan`\_, `Adrin Jalali`\_. - |Enhancement| :class:`~utils.metadata\_routing.MetadataRequest` and :class:`~utils.metadata\_routing.MetadataRouter` now have a ``consumes`` method which can be used to check whether a given set of parameters would be consumed. :pr:`26831` by `Adrin Jalali`\_. - |Enhancement| Make :func:`sklearn.utils.check\_array` attempt to output `int32`-indexed CSR and COO arrays when converting from DIA arrays if the number of non-zero entries is small enough. This ensures that estimators implemented in Cython and that do not accept `int64`-indexed sparse datastucture, now consistently accept the same sparse input formats for SciPy sparse matrices and arrays. :pr:`27372` by :user:`Guillaume Lemaitre `. - |Fix| :func:`sklearn.utils.check\_array` should accept both matrix and array from the sparse SciPy module. The previous implementation would fail if `copy=True` by calling specific NumPy `np.may\_share\_memory` that does not work with SciPy sparse array and does not return the correct result for SciPy sparse matrix. :pr:`27336` by :user:`Guillaume Lemaitre `. - |Fix| :func:`~utils.estimator\_checks.check\_estimators\_pickle` with `readonly\_memmap=True` now relies on joblib's own capability to allocate aligned memory mapped arrays when loading a serialized estimator instead of calling a dedicated private function that would crash when OpenBLAS misdetects the CPU architecture. 
:pr:`27614` by :user:`Olivier Grisel `. - |Fix| Error message in :func:`~utils.check\_array` when a sparse matrix was passed but `accept\_sparse` is `False` now suggests to use `.toarray()` and not `X.toarray()`. :pr:`27757` by :user:`Lucy Liu `. - |Fix| Fix the function :func:`~utils.check\_array` to output the right error message when the input is a Series instead of a DataFrame. :pr:`28090` by :user:`Stan Furrer ` and :user:`Yao Xiao `. - |API| :func:`sklearn.utils.extmath.log\_logistic` is deprecated and will be removed in 1.6. Use `-np.logaddexp(0, -x)` instead. :pr:`27544` by :user:`Christian Lorentzen `. .. rubric:: Code and documentation contributors Thanks to everyone who has contributed to the maintenance and improvement of the project since version 1.3, including: 101AlexMartin, Abhishek Singh Kushwah, Adam Li, Adarsh Wase, Adrin Jalali, Advik Sinha, Alex, Alexander Al-Feghali, Alexis IMBERT, AlexL, Alex Molas, Anam Fatima, Andrew Goh, andyscanzio, Aniket Patil, Artem Kislovskiy, Arturo Amor, ashah002, avm19, Ben Holmes, Ben Mares, Benoit Chevallier-Mames, Bharat Raghunathan, Binesh Bannerjee, Brendan Lu, Brevin Kunde, Camille Troillard, Carlo Lemos, Chad Parmet, Christian Clauss, Christian Lorentzen, Christian Veenhuis, Christos Aridas, Cindy Liang, Claudio Salvatore Arcidiacono, Connor Boyle, cynthias13w, DaminK, Daniele Ongari, Daniel Schmitz, Daniel Tinoco, David Brochart, Deborah L. 
Haar, DevanshKyada27, Dimitri Papadopoulos Orfanos, Dmitry Nesterov, DUONG, Edoardo Abati, Eitan Hemed, Elabonga Atuo, Elisabeth Günther, Emma Carballal, Emmanuel Ferdman, epimorphic, Erwan Le Floch, Fabian Egli, Filip Karlo Došilović, Florian Idelberger, Franck Charras, Gael Varoquaux, Ganesh Tata, Hleb Levitski, Guillaume Lemaitre, Haoying Zhang, Harmanan Kohli, Ily, ioangatop, IsaacTrost, Isaac Virshup, Iwona Zdzieblo, Jakub Kaczmarzyk, James McDermott, Jarrod Millman, JB Mountford, Jérémie du Boisberranger, Jérôme Dockès, Jiawei Zhang, Joel Nothman, John Cant, John Hopfensperger, Jona Sassenhagen, Jon Nordby, Julien Jerphanion, Kennedy Waweru, kevin moore, Kian Eliasi, Kishan Ved, Konstantinos Pitas, Koustav Ghosh, Kushan Sharma, ldwy4, Linus, Lohit SundaramahaLingam, Loic Esteve, Lorenz, Louis Fouquet, Lucy Liu, Luis Silvestrin, Lukáš Folwarczný, Lukas Geiger, Malte Londschien, Marcus Fraaß, Marek Hanuš, Maren Westermann, Mark Elliot, Martin Larralde, Mateusz Sokół, mathurinm, mecopur, Meekail Zain, Michael Higgins, Miki Watanabe, Milton Gomez, MN193, Mohammed Hamdy, Mohit Joshi, mrastgoo, Naman Dhingra, Naoise Holohan, Narendra Singh dangi, Noa Malem-Shinitski, Nolan, Nurseit Kamchyev, Oleksii Kachaiev, Olivier Grisel, Omar Salman, partev, Peter Hull, Peter Steinbach, Pierre
de Fréminville, Pooja Subramaniam, Puneeth K, qmarcou, Quentin Barthélemy, Rahil Parikh, Rahul Mahajan, Raj Pulapakura, Raphael, Ricardo Peres, Riccardo Cappuzzo, Roman Lutz, Salim Dohri, Samuel O. Ronsin, Sandip Dutta, Sayed Qaiser Ali, scaja, scikit-learn-bot, Sebastian Berg, Shreesha Kumar Bhat, Shubhal Gupta, Søren Fuglede Jørgensen, Stefanie Senger, Tamara, Tanjina Afroj, THARAK HEGDE, thebabush, Thomas J. Fan, Thomas Roehr, Tialo, Tim Head, tongyu, Venkatachalam N, Vijeth Moudgalya, Vincent M, Vivek Reddy P, Vladimir Fokow, Xiao Yuan, Xuefeng Xu, Yang Tao, Yao Xiao, Yuchen Zhou, Yuusuke Hiramatsu
.. include:: \_contributors.rst .. currentmodule:: sklearn .. \_release\_notes\_1\_6: =========== Version 1.6 =========== For a short description of the main highlights of the release, please refer to :ref:`sphx\_glr\_auto\_examples\_release\_highlights\_plot\_release\_highlights\_1\_6\_0.py`. .. include:: changelog\_legend.inc .. towncrier release notes start .. \_changes\_1\_6\_1: Version 1.6.1 ============= \*\*January 2025\*\* Changed models -------------- - |Fix| The `tags.input\_tags.sparse` flag was corrected for a majority of estimators. By :user:`Antoine Baker ` :pr:`30187` Changes impacting many modules ------------------------------ - |Fix| `\_more\_tags`, `\_get\_tags`, and `\_safe\_tags` are now raising a :class:`DeprecationWarning` instead of a :class:`FutureWarning` to only notify developers instead of end-users. By :user:`Guillaume Lemaitre ` in :pr:`30573` :mod:`sklearn.metrics` ---------------------- - |Fix| Fix regression when scikit-learn metric called on PyTorch CPU tensors would raise an error (with array API dispatch disabled which is the default). By :user:`Loïc Estève ` :pr:`30454` :mod:`sklearn.model\_selection` ------------------------------ - |Fix| :func:`~model\_selection.cross\_validate`, :func:`~model\_selection.cross\_val\_predict`, and :func:`~model\_selection.cross\_val\_score` now accept `params=None` when metadata routing is enabled. By `Adrin Jalali`\_ :pr:`30451` :mod:`sklearn.tree` ------------------- - |Fix| Use `log2` instead of `ln` for building trees to maintain behavior of previous versions. By `Thomas Fan`\_ :pr:`30557` :mod:`sklearn.utils` -------------------- - |Enhancement| :func:`utils.estimator\_checks.check\_estimator\_sparse\_tag` ensures that the estimator tag `input\_tags.sparse` is consistent with its `fit` method (accepting sparse input `X` or raising the appropriate error). 
By :user:`Antoine Baker ` :pr:`30187` - |Fix| Raise a `DeprecationWarning` when there is no concrete implementation of `\_\_sklearn\_tags\_\_` in the MRO of the estimator. We request to inherit from `BaseEstimator` that implements `\_\_sklearn\_tags\_\_`. By :user:`Guillaume Lemaitre ` :pr:`30516` .. \_changes\_1\_6\_0: Version 1.6.0 ============= \*\*December 2024\*\* Changes impacting many modules ------------------------------ - |Enhancement| `\_\_sklearn\_tags\_\_` was introduced for setting tags in estimators. More details in :ref:`estimator\_tags`. By :user:`Thomas Fan ` and :user:`Adrin Jalali ` :pr:`29677` - |Enhancement| Scikit-learn classes and functions can be used while only having a `import sklearn` import line. For example, `import sklearn; sklearn.svm.SVC()` now works. By :user:`Thomas Fan ` :pr:`29793` - |Fix| Classes :class:`metrics.ConfusionMatrixDisplay`, :class:`metrics.RocCurveDisplay`, :class:`calibration.CalibrationDisplay`, :class:`metrics.PrecisionRecallDisplay`, :class:`metrics.PredictionErrorDisplay` and :class:`inspection.PartialDependenceDisplay` now properly handle Matplotlib aliases for style parameters (e.g., `c` and `color`, `ls` and `linestyle`, etc). By :user:`Joseph Barbier ` :pr:`30023` - |API| :func:`utils.validation.validate\_data` is introduced and replaces previously private `base.BaseEstimator.\_validate\_data` method. This is intended for third party estimator developers, who should use this function in most cases instead of :func:`utils.check\_array` and :func:`utils.check\_X\_y`. By :user:`Adrin Jalali ` :pr:`29696` Support for Array API --------------------- Additional estimators and functions have been updated to include support for all `Array API `\_ compliant inputs. See :ref:`array\_api` for more details. 
- |Feature| :class:`model\_selection.GridSearchCV`, :class:`model\_selection.RandomizedSearchCV`, :class:`model\_selection.HalvingGridSearchCV` and :class:`model\_selection.HalvingRandomSearchCV` now support Array API compatible inputs when their base estimators do. By :user:`Tim Head ` and :user:`Olivier Grisel ` :pr:`27096` - |Feature| :func:`sklearn.metrics.f1\_score` now supports Array API compatible inputs. By :user:`Omar Salman ` :pr:`27369` - |Feature| :class:`preprocessing.LabelEncoder` now supports Array API compatible inputs. By :user:`Omar Salman ` :pr:`27381` - |Feature| :func:`sklearn.metrics.mean\_absolute\_error` now supports Array API compatible inputs. By :user:`Edoardo Abati ` :pr:`27736` - |Feature| :func:`sklearn.metrics.mean\_tweedie\_deviance` now supports Array API compatible inputs. By :user:`Thomas Li ` :pr:`28106` - |Feature| :func:`sklearn.metrics.pairwise.cosine\_similarity` now supports Array API compatible inputs. By :user:`Edoardo Abati ` :pr:`29014` - |Feature| :func:`sklearn.metrics.pairwise.paired\_cosine\_distances` now supports Array API compatible inputs. By :user:`Edoardo Abati ` :pr:`29112` - |Feature| :func:`sklearn.metrics.cluster.entropy` now supports Array API compatible inputs. By :user:`Yaroslav Korobko ` :pr:`29141` - |Feature| :func:`sklearn.metrics.mean\_squared\_error` now supports Array API compatible inputs. By :user:`Yaroslav Korobko ` :pr:`29142` - |Feature| :func:`sklearn.metrics.pairwise.additive\_chi2\_kernel` now supports Array API compatible inputs. By :user:`Yaroslav Korobko ` :pr:`29144` - |Feature| :func:`sklearn.metrics.d2\_tweedie\_score` now supports Array API compatible inputs. By :user:`Emily Chen ` :pr:`29207` - |Feature| :func:`sklearn.metrics.max\_error` now supports Array API compatible inputs. By :user:`Edoardo Abati ` :pr:`29212` - |Feature| :func:`sklearn.metrics.mean\_poisson\_deviance` now supports Array API compatible inputs. 
By :user:`Emily Chen ` :pr:`29227` - |Feature| :func:`sklearn.metrics.mean\_gamma\_deviance` now supports Array API compatible inputs. By :user:`Emily Chen `
:pr:`29239` - |Feature| :func:`sklearn.metrics.pairwise.cosine\_distances` now supports Array API compatible inputs. By :user:`Emily Chen ` :pr:`29265` - |Feature| :func:`sklearn.metrics.pairwise.chi2\_kernel` now supports Array API compatible inputs. By :user:`Yaroslav Korobko ` :pr:`29267` - |Feature| :func:`sklearn.metrics.mean\_absolute\_percentage\_error` now supports Array API compatible inputs. By :user:`Emily Chen ` :pr:`29300` - |Feature| :func:`sklearn.metrics.pairwise.paired\_euclidean\_distances` now supports Array API compatible inputs. By :user:`Emily Chen ` :pr:`29389` - |Feature| :func:`sklearn.metrics.pairwise.euclidean\_distances` and :func:`sklearn.metrics.pairwise.rbf\_kernel` now support Array API compatible inputs. By :user:`Omar Salman ` :pr:`29433` - |Feature| :func:`sklearn.metrics.pairwise.linear\_kernel`, :func:`sklearn.metrics.pairwise.sigmoid\_kernel`, and :func:`sklearn.metrics.pairwise.polynomial\_kernel` now support Array API compatible inputs. By :user:`Omar Salman ` :pr:`29475` - |Feature| :func:`sklearn.metrics.mean\_squared\_log\_error` and :func:`sklearn.metrics.root\_mean\_squared\_log\_error` now support Array API compatible inputs. By :user:`Virgil Chan ` :pr:`29709` - |Feature| :class:`preprocessing.MinMaxScaler` with `clip=True` now supports Array API compatible inputs.
By :user:`Shreekant Nandiyawar ` :pr:`29751` - Support for the soon to be deprecated `cupy.array\_api` module has been removed in favor of directly supporting the top level `cupy` module, possibly via the `array\_api\_compat.cupy` compatibility wrapper. By :user:`Olivier Grisel ` :pr:`29639` Metadata routing ---------------- Refer to the :ref:`Metadata Routing User Guide ` for more details. - |Feature| :class:`semi\_supervised.SelfTrainingClassifier` now supports metadata routing. The fit method now accepts ``\*\*fit\_params`` which are passed to the underlying estimators via their `fit` methods. In addition, the :meth:`~semi\_supervised.SelfTrainingClassifier.predict`, :meth:`~semi\_supervised.SelfTrainingClassifier.predict\_proba`, :meth:`~semi\_supervised.SelfTrainingClassifier.predict\_log\_proba`, :meth:`~semi\_supervised.SelfTrainingClassifier.score` and :meth:`~semi\_supervised.SelfTrainingClassifier.decision\_function` methods also accept ``\*\*params`` which are passed to the underlying estimators via their respective methods. By :user:`Adam Li ` :pr:`28494` - |Feature| :class:`ensemble.StackingClassifier` and :class:`ensemble.StackingRegressor` now support metadata routing and pass ``\*\*fit\_params`` to the underlying estimators via their `fit` methods. By :user:`Stefanie Senger ` :pr:`28701` - |Feature| :func:`model\_selection.learning\_curve` now supports metadata routing for the `fit` method of its estimator and for its underlying CV splitter and scorer. By :user:`Stefanie Senger ` :pr:`28975` - |Feature| :class:`compose.TransformedTargetRegressor` now supports metadata routing in its :meth:`~compose.TransformedTargetRegressor.fit` and :meth:`~compose.TransformedTargetRegressor.predict` methods and routes the corresponding params to the underlying regressor. 
By :user:`Omar Salman ` :pr:`29136` - |Feature| :class:`feature\_selection.SequentialFeatureSelector` now supports metadata routing in its `fit` method and passes the corresponding params to the :func:`model\_selection.cross\_val\_score` function. By :user:`Omar Salman ` :pr:`29260` - |Feature| :func:`model\_selection.permutation\_test\_score` now supports metadata routing for the `fit` method of its estimator and for its underlying CV splitter and scorer. By :user:`Adam Li ` :pr:`29266` - |Feature| :class:`feature\_selection.RFE` and :class:`feature\_selection.RFECV` now support metadata routing. By :user:`Omar Salman ` :pr:`29312` - |Feature| :func:`model\_selection.validation\_curve` now supports metadata routing for the `fit` method of its estimator and for its underlying CV splitter and scorer. By :user:`Stefanie Senger ` :pr:`29329` - |Fix| Metadata is routed correctly to grouped CV splitters via :class:`linear\_model.RidgeCV` and :class:`linear\_model.RidgeClassifierCV` and `UnsetMetadataPassedError` is fixed for :class:`linear\_model.RidgeClassifierCV` with default scoring. By :user:`Stefanie Senger ` :pr:`29634` - |Fix| Many method arguments which shouldn't be included in the routing mechanism are now excluded and the `set\_{method}\_request` methods are not generated for them. By `Adrin Jalali`\_ :pr:`29920` Dropping official support for PyPy ---------------------------------- Due to limited maintainer resources and small number of users, official PyPy support has been dropped. Some parts of scikit-learn may still work but PyPy is not tested anymore in the scikit-learn Continuous Integration. By :user:`Loïc Estève ` :pr:`29128` Dropping support for building with setuptools --------------------------------------------- From scikit-learn 1.6 onwards, support for building with setuptools has been removed. Meson is the only supported way to build scikit-learn. 
By :user:`Loïc Estève ` :pr:`29400` Free-threaded CPython 3.13 support ---------------------------------- scikit-learn has preliminary support for free-threaded CPython, in particular free-threaded wheels are available for all
of our supported platforms. Free-threaded (also known as nogil) CPython 3.13 is an experimental version of CPython 3.13 which aims at enabling efficient multi-threaded use cases by removing the Global Interpreter Lock (GIL). For more details about free-threaded CPython see `py-free-threading doc `\_, in particular `how to install a free-threaded CPython `\_ and `Ecosystem compatibility tracking `\_. Feel free to try free-threaded on your use case and report any issues! By :user:`Loïc Estève ` and many other people in the wider Scientific Python and CPython ecosystem, for example :user:`Nathan Goldbaum `, :user:`Ralf Gommers `, :user:`Edgar Andrés Margffoy Tuay `. :pr:`30360` :mod:`sklearn.base` ------------------- - |Enhancement| Added a function :func:`base.is\_clusterer` which determines whether a given estimator is of category clusterer. By :user:`Christian Veenhuis ` :pr:`28936` - |API| Passing a class object to :func:`~sklearn.base.is\_classifier`, :func:`~sklearn.base.is\_regressor`, and :func:`~sklearn.base.is\_outlier\_detector` is now deprecated. Pass an instance instead. By `Adrin Jalali`\_ :pr:`30122` :mod:`sklearn.calibration` -------------------------- - |API| `cv="prefit"` is deprecated for :class:`~sklearn.calibration.CalibratedClassifierCV`. Use :class:`~sklearn.frozen.FrozenEstimator` instead, as `CalibratedClassifierCV(FrozenEstimator(estimator))`.
By `Adrin Jalali`\_ :pr:`30171` :mod:`sklearn.cluster` ---------------------- - |API| The `copy` parameter of :class:`cluster.Birch` was deprecated in 1.6 and will be removed in 1.8. It has no effect as the estimator does not perform in-place operations on the input data. By :user:`Yao Xiao ` :pr:`29124` :mod:`sklearn.compose` ---------------------- - |Enhancement| :func:`sklearn.compose.ColumnTransformer` `verbose\_feature\_names\_out` now accepts string format or callable to generate feature names. By :user:`Marc Bresson ` :pr:`28934` :mod:`sklearn.covariance` ------------------------- - |Efficiency| :class:`covariance.MinCovDet` fitting is now slightly faster. By :user:`Antony Lee ` :pr:`29835` :mod:`sklearn.cross\_decomposition` ---------------------------------- - |Fix| :class:`cross\_decomposition.PLSRegression` properly raises an error when `n\_components` is larger than `n\_samples`. By :user:`Thomas Fan ` :pr:`29710` :mod:`sklearn.datasets` ----------------------- - |Feature| :func:`datasets.fetch\_file` allows downloading arbitrary data-file from the web. It handles local caching, integrity checks with SHA256 digests and automatic retries in case of HTTP errors. By :user:`Olivier Grisel ` :pr:`29354` :mod:`sklearn.decomposition` ---------------------------- - |Enhancement| :class:`~sklearn.decomposition.LatentDirichletAllocation` now has a ``normalize`` parameter in :meth:`~sklearn.decomposition.LatentDirichletAllocation.transform` and :meth:`~sklearn.decomposition.LatentDirichletAllocation.fit\_transform` methods to control whether the document topic distribution is normalized. By `Adrin Jalali`\_ :pr:`30097` - |Fix| :class:`~sklearn.decomposition.IncrementalPCA` will now only raise a ``ValueError`` when the number of samples in the input data to ``partial\_fit`` is less than the number of components on the first call to ``partial\_fit``. Subsequent calls to ``partial\_fit`` no longer face this restriction. 
By :user:`Thomas Gessey-Jones ` :pr:`30224` :mod:`sklearn.discriminant\_analysis` ------------------------------------ - |Fix| :class:`discriminant\_analysis.QuadraticDiscriminantAnalysis` will now cause `LinAlgWarning` in case of collinear variables. These errors can be silenced using the `reg\_param` attribute. By :user:`Alihan Zihna ` :pr:`19731` :mod:`sklearn.ensemble` ----------------------- - |Feature| :class:`ensemble.ExtraTreesClassifier` and :class:`ensemble.ExtraTreesRegressor` now support missing-values in the data matrix `X`. Missing-values are handled by randomly moving all of the samples to the left, or right child node as the tree is traversed. By :user:`Adam Li ` :pr:`28268` - |Efficiency| Small runtime improvement of fitting :class:`ensemble.HistGradientBoostingClassifier` and :class:`ensemble.HistGradientBoostingRegressor` by parallelizing the initial search for bin thresholds. By :user:`Christian Lorentzen ` :pr:`28064` - |Efficiency| :class:`ensemble.IsolationForest` now runs parallel jobs during :term:`predict` offering a speedup of up to 2-4x on sample sizes larger than 2000 using `joblib`. By :user:`Adam Li ` and :user:`Sérgio Pereira ` :pr:`28622` - |Enhancement| The verbosity of :class:`ensemble.HistGradientBoostingClassifier` and :class:`ensemble.HistGradientBoostingRegressor` got a more granular control. Now, `verbose = 1` prints only summary messages, `verbose >= 2` prints the full information as before. By :user:`Christian Lorentzen ` :pr:`28179` - |API| The parameter `algorithm` of :class:`ensemble.AdaBoostClassifier` is deprecated and will be removed in 1.8. By :user:`Jérémie du Boisberranger ` :pr:`29997`
:mod:`sklearn.feature\_extraction` --------------------------------- - |Fix| :class:`feature\_extraction.text.TfidfVectorizer` now correctly preserves the `dtype` of `idf\_` based on the input data. By :user:`Guillaume Lemaitre ` :pr:`30022` :mod:`sklearn.frozen` --------------------- - |MajorFeature| :class:`~sklearn.frozen.FrozenEstimator` is now introduced which allows freezing an estimator. This means calling `.fit` on it has no effect, and doing a `clone(frozenestimator)` returns the same estimator instead of an unfitted clone. By `Adrin Jalali`\_ :pr:`29705` :mod:`sklearn.impute` --------------------- - |Fix| :class:`impute.KNNImputer` excludes samples with nan distances when computing the mean value for uniform weights. By :user:`Xuefeng Xu ` :pr:`29135` - |Fix| When `min\_value` and `max\_value` are array-like and some features are dropped due to `keep\_empty\_features=False`, :class:`impute.IterativeImputer` no longer raises an error and now indexes correctly. By :user:`Guntitat Sawadwuthikul ` :pr:`29451` - |Fix| Fixed :class:`impute.IterativeImputer` to make sure that it does not skip the iterative process when `keep\_empty\_features` is set to `True`. By :user:`Arif Qodari ` :pr:`29779` - |API| Add a warning in :class:`impute.SimpleImputer` when `keep\_empty\_feature=False` and `strategy="constant"`. In this case empty features are not dropped and this behaviour will change in 1.8.
By :user:`Arthur Courselle ` and :user:`Simon Riou ` :pr:`29950` :mod:`sklearn.linear\_model` --------------------------- - |Enhancement| The `solver="newton-cholesky"` in :class:`linear\_model.LogisticRegression` and :class:`linear\_model.LogisticRegressionCV` is extended to support the full multinomial loss in a multiclass setting. By :user:`Christian Lorentzen ` :pr:`28840` - |Fix| In :class:`linear\_model.Ridge` and :class:`linear\_model.RidgeCV`, after `fit`, the `coef\_` attribute is now of shape `(n\_samples,)` like other linear models. By :user:`Maxwell Liu`, `Guillaume Lemaitre`\_, and `Adrin Jalali`\_ :pr:`19746` - |Fix| :class:`linear\_model.LogisticRegressionCV` corrects sample weight handling for the calculation of test scores. By :user:`Shruti Nath ` :pr:`29419` - |Fix| :class:`linear\_model.LassoCV` and :class:`linear\_model.ElasticNetCV` now take sample weights into accounts to define the search grid for the internally tuned `alpha` hyper-parameter. By :user:`John Hopfensperger ` and :user:`Shruti Nath ` :pr:`29442` - |Fix| :class:`linear\_model.LogisticRegression`, :class:`linear\_model.PoissonRegressor`, :class:`linear\_model.GammaRegressor`, :class:`linear\_model.TweedieRegressor` now take sample weights into account to decide when to fall back to `solver='lbfgs'` whenever `solver='newton-cholesky'` becomes numerically unstable. By :user:`Antoine Baker ` :pr:`29818` - |Fix| :class:`linear\_model.RidgeCV` now properly uses predictions on the same scale as the target seen during `fit`. These predictions are stored in `cv\_results\_` when `scoring != None`. Previously, the predictions were rescaled by the square root of the sample weights and offset by the mean of the target, leading to an incorrect estimate of the score. 
By :user:`Guillaume Lemaitre `, :user:`Jérôme Dockes ` and :user:`Hanmin Qin ` :pr:`29842` - |Fix| :class:`linear\_model.RidgeCV` now properly supports custom multioutput scorers by letting the scorer manage the multioutput averaging. Previously, the predictions and true targets were both squeezed to a 1D array before computing the error. By :user:`Guillaume Lemaitre ` :pr:`29884` - |Fix| :class:`linear\_model.LinearRegression` now sets the `cond` parameter when calling the `scipy.linalg.lstsq` solver on dense input data. This ensures more numerically robust results on rank-deficient data. In particular, it empirically fixes the expected equivalence property between fitting with reweighted or with repeated data points. By :user:`Antoine Baker ` :pr:`30040` - |Fix| :class:`linear\_model.LogisticRegression` and other linear models that accept `solver="newton-cholesky"` now report the correct number of iterations when they fall back to the `"lbfgs"` solver because of a rank deficient Hessian matrix. By :user:`Olivier Grisel ` :pr:`30100` - |Fix| :class:`~sklearn.linear\_model.SGDOneClassSVM` now correctly inherits from :class:`~sklearn.base.OutlierMixin` and the tags are correctly set. By :user:`Guillaume Lemaitre ` :pr:`30227` - |API| Deprecates `copy\_X` in :class:`linear\_model.TheilSenRegressor` as the parameter has no effect. `copy\_X` will be removed in 1.8. By :user:`Adam Li ` :pr:`29105` :mod:`sklearn.manifold` ----------------------- -
|Efficiency| :func:`manifold.locally\_linear\_embedding` and :class:`manifold.LocallyLinearEmbedding` now allocate more efficiently the memory of sparse matrices in the Hessian, Modified and LTSA methods. By :user:`Giorgio Angelotti ` :pr:`28096` :mod:`sklearn.metrics` ---------------------- - |Efficiency| :func:`sklearn.metrics.classification\_report` is now faster by caching classification labels. By :user:`Adrin Jalali ` :pr:`29738` - |Enhancement| :meth:`metrics.RocCurveDisplay.from\_estimator`, :meth:`metrics.RocCurveDisplay.from\_predictions`, :meth:`metrics.PrecisionRecallDisplay.from\_estimator`, and :meth:`metrics.PrecisionRecallDisplay.from\_predictions` now accept a new keyword `despine` to remove the top and right spines of the plot in order to make it clearer. By :user:`Yao Xiao ` :pr:`26367` - |Enhancement| :func:`sklearn.metrics.check\_scoring` now accepts `raise\_exc` to specify whether to raise an exception if a subset of the scorers in multimetric scoring fails or to return an error code. By :user:`Stefanie Senger ` :pr:`28992` - |Fix| :func:`metrics.roc\_auc\_score` will now correctly return np.nan and warn user if only one class is present in the labels. By :user:`Hleb Levitski ` and :user:`Janez Demšar ` :pr:`27412`, :pr:`30013` - |Fix| The functions :func:`metrics.mean\_squared\_log\_error` and :func:`metrics.root\_mean\_squared\_log\_error` now check whether the inputs are within the correct domain for the function :math:`y=\log(1+x)`, rather than :math:`y=\log(x)`.
The functions :func:`metrics.mean\_absolute\_error`, :func:`metrics.mean\_absolute\_percentage\_error`, :func:`metrics.mean\_squared\_error` and :func:`metrics.root\_mean\_squared\_error` now explicitly check whether a scalar will be returned when `multioutput=uniform\_average`. By :user:`Virgil Chan ` :pr:`29709` - |API| The `assert\_all\_finite` parameter of functions :func:`metrics.pairwise.check\_pairwise\_arrays` and :func:`metrics.pairwise\_distances` is renamed into `ensure\_all\_finite`. `force\_all\_finite` will be removed in 1.8. By :user:`Jérémie du Boisberranger ` :pr:`29404` - |API| `scoring="neg\_max\_error"` should be used instead of `scoring="max\_error"` which is now deprecated. By :user:`Farid "Freddie" Taba ` :pr:`29462` - |API| The default value of the `response\_method` parameter of :func:`metrics.make\_scorer` will change from `None` to `"predict"` and `None` will be removed in 1.8. In the meantime, `None` is equivalent to `"predict"`. By :user:`Jérémie du Boisberranger ` :pr:`30001` :mod:`sklearn.model\_selection` ------------------------------ - |Enhancement| :class:`~model\_selection.GroupKFold` now has the ability to shuffle groups into different folds when `shuffle=True`. By :user:`Zachary Vealey ` :pr:`28519` - |Enhancement| There is no need to call `fit` on a :class:`~sklearn.model\_selection.FixedThresholdClassifier` if the underlying estimator is already fitted. 
By :user:`Adrin Jalali ` :pr:`30172` - |Fix| Improve error message when :func:`model\_selection.RepeatedStratifiedKFold.split` is called without a `y` argument By :user:`Anurag Varma ` :pr:`29402` :mod:`sklearn.neighbors` ------------------------ - |Enhancement| :class:`neighbors.NearestNeighbors`, :class:`neighbors.KNeighborsClassifier`, :class:`neighbors.KNeighborsRegressor`, :class:`neighbors.RadiusNeighborsClassifier`, :class:`neighbors.RadiusNeighborsRegressor`, :class:`neighbors.KNeighborsTransformer`, :class:`neighbors.RadiusNeighborsTransformer`, and :class:`neighbors.LocalOutlierFactor` now work with `metric="nan\_euclidean"`, supporting `nan` inputs. By :user:`Carlo Lemos `, `Guillaume Lemaitre`\_, and `Adrin Jalali`\_ :pr:`25330` - |Enhancement| Add :meth:`neighbors.NearestCentroid.decision\_function`, :meth:`neighbors.NearestCentroid.predict\_proba` and :meth:`neighbors.NearestCentroid.predict\_log\_proba` to the :class:`neighbors.NearestCentroid` estimator class. Support the case when `X` is sparse and `shrinking\_threshold` is not `None` in :class:`neighbors.NearestCentroid`. By :user:`Matthew Ning ` :pr:`26689` - |Enhancement| Make `predict`, `predict\_proba`, and `score` of :class:`neighbors.KNeighborsClassifier` and :class:`neighbors.RadiusNeighborsClassifier` accept `X=None` as input. In this case predictions for all training set points are returned, and points are not included into their own neighbors. By :user:`Dmitry Kobak ` :pr:`30047` - |Fix| :class:`neighbors.LocalOutlierFactor` raises a warning in the `fit` method when duplicate values in the training data lead to inaccurate outlier detection. By :user:`Henrique Caroço ` :pr:`28773` :mod:`sklearn.neural\_network` ----------------------------- - |Fix| :class:`neural\_network.MLPRegressor` does no longer crash when the model diverges and that `early\_stopping` is enabled. 
By :user:`Marc Bresson ` :pr:`29773` :mod:`sklearn.pipeline` ----------------------- - |MajorFeature| :class:`pipeline.Pipeline` can now transform metadata up to the step requiring the metadata, which can be set using the `transform\_input` parameter. By `Adrin Jalali`\_ :pr:`28901` - |Enhancement| :class:`pipeline.Pipeline` now warns about not being fitted before calling methods that require the pipeline to be fitted. This warning will become an error in 1.8. By `Adrin Jalali`\_ :pr:`29868` - |Fix| Fixed an issue with
https://github.com/scikit-learn/scikit-learn/blob/main//doc/whats_new/v1.6.rst
tags and estimator type of :class:`~sklearn.pipeline.Pipeline` when the pipeline is empty. This allows the HTML representation of an empty pipeline to be rendered correctly. By :user:`Gennaro Daniele Acciaro` :pr:`30203`

:mod:`sklearn.preprocessing`
----------------------------

- |Enhancement| Added a `warn` option to the `handle_unknown` parameter in :class:`preprocessing.OneHotEncoder`. By :user:`Hleb Levitski` :pr:`28637`

- |Enhancement| The HTML representation of :class:`preprocessing.FunctionTransformer` will show the function name in the label. By :user:`Yao Xiao` :pr:`29158`

- |Fix| :class:`preprocessing.PowerTransformer` now uses `scipy.special.inv_boxcox` to output `nan` if the input of the Box-Cox inverse is invalid. By :user:`Xuefeng Xu` :pr:`27875`

:mod:`sklearn.semi_supervised`
------------------------------

- |API| :class:`semi_supervised.SelfTrainingClassifier` deprecated the `base_estimator` parameter in favor of `estimator`. By :user:`Adam Li` :pr:`28494`

:mod:`sklearn.tree`
-------------------

- |Feature| :class:`tree.ExtraTreeClassifier` and :class:`tree.ExtraTreeRegressor` now support missing values in the data matrix ``X``. Missing values are handled by randomly moving all of the samples to the left, or right child node as the tree is traversed. By :user:`Adam Li` and :user:`Loïc Estève` :pr:`27966`, :pr:`30318`

- |Fix| Escape double quotes for labels and feature names when exporting trees to Graphviz format. By :user:`Santiago M. Mola`.
:pr:`17575`

:mod:`sklearn.utils`
--------------------

- |Enhancement| :func:`utils.check_array` now accepts `ensure_non_negative` to check for negative values in the passed array, until now only available through calling :func:`utils.check_non_negative`. By :user:`Tamara Atanasoska` :pr:`29540`

- |Enhancement| :func:`~sklearn.utils.estimator_checks.check_estimator` and :func:`~sklearn.utils.estimator_checks.parametrize_with_checks` now check and fail if the classifier has the `tags.classifier_tags.multi_class = False` tag but does not fail on multi-class data. By `Adrin Jalali`_ :pr:`29874`

- |Enhancement| :func:`utils.validation.check_is_fitted` now passes on stateless estimators. An estimator can indicate it is stateless by setting the `requires_fit` tag. See :ref:`estimator_tags` for more information. By :user:`Adrin Jalali` :pr:`29880`

- |Enhancement| Changes to :func:`~utils.estimator_checks.check_estimator` and :func:`~utils.estimator_checks.parametrize_with_checks`:

  - :func:`~utils.estimator_checks.check_estimator` introduces new arguments ``on_skip``, ``on_fail``, and ``callback`` to control the behavior of the check runner. Refer to the API documentation for more details.

  - ``generate_only=True`` is deprecated in :func:`~utils.estimator_checks.check_estimator`. Use :func:`~utils.estimator_checks.estimator_checks_generator` instead.

  - The ``_xfail_checks`` estimator tag is now removed; to indicate which tests are expected to fail, you can pass a dictionary to :func:`~utils.estimator_checks.check_estimator` as the ``expected_failed_checks`` parameter. Similarly, the ``expected_failed_checks`` parameter in :func:`~utils.estimator_checks.parametrize_with_checks` can be used, which is a callable returning a dictionary of the form::

        {
            "check_name": "reason to mark this check as xfail",
        }

  By `Adrin Jalali`_ :pr:`30149`

- |Fix| :func:`utils.estimator_checks.parametrize_with_checks` and :func:`utils.estimator_checks.check_estimator` now support estimators that have `set_output` called on them. By :user:`Adrin Jalali` :pr:`29869`

- |API| The `force_all_finite` parameter of functions :func:`utils.check_array`, :func:`utils.check_X_y`, :func:`utils.as_float_array` is renamed into `ensure_all_finite`. `force_all_finite` will be removed in 1.8. By :user:`Jérémie du Boisberranger` :pr:`29404`

- |API| `utils.estimator_checks.check_sample_weights_invariance` is replaced by `utils.estimator_checks.check_sample_weight_equivalence_on_dense_data`, which uses integer (including zero) weights, and `utils.estimator_checks.check_sample_weight_equivalence_on_sparse_data`, which does the same on sparse data. By :user:`Antoine Baker` :pr:`29818`, :pr:`30137`

- |API| Using `_estimator_type` to set the estimator type is deprecated. Inherit from :class:`~sklearn.base.ClassifierMixin`, :class:`~sklearn.base.RegressorMixin`, :class:`~sklearn.base.TransformerMixin`, or :class:`~sklearn.base.OutlierMixin` instead. Alternatively, you can set `estimator_type` in :class:`~sklearn.utils.Tags` in the `__sklearn_tags__` method. By `Adrin Jalali`_ :pr:`30122`

.. rubric:: Code and documentation contributors

Thanks to everyone who has contributed to the maintenance and improvement of the project since version 1.5, including:

Aaron Schumacher, Abdulaziz Aloqeely, abhi-jha, Acciaro Gennaro Daniele, Adam J. Stewart, Adam Li, Adeel Hassan, Adeyemi Biola, Aditi Juneja, Adrin Jalali, Aisha, Akanksha Mhadolkar, Akihiro Kuno, Alberto Torres, alexqiao, Alihan Zihna, Aniruddha Saha, antoinebaker, Antony Lee, Anurag Varma, Arif Qodari, Arthur Courselle, ArthurDbrn, Arturo Amor, Aswathavicky, Audrey Flanders, aurelienmorgan, Austin, awwwyan, AyGeeEm,
a.zy.lee, baggiponte, BlazeStorm001, bme-git, Boney Patel, brdav, Brigitta Sipőcz, Cailean Carter, Camille Troillard, Carlo Lemos, Christian Lorentzen, Christian Veenhuis, Christine P. Chai, claudio, Conrad Stevens, datarollhexasphericon, Davide Chicco, David Matthew Cherney, Dea María Léon, Deepak Saldanha, Deepyaman Datta, dependabot[bot], dinga92, Dmitry Kobak, Domenico, Drew Craeton, dymil, Edoardo Abati, EmilyXinyi, Eric Larson, Evelyn, fabianhenning, Farid "Freddie" Taba, Gael Varoquaux, Giorgio Angelotti, Hleb Levitski, Guillaume Lemaitre, Guntitat Sawadwuthikul, Haesun Park, Hanjun Kim, Henrique Caroço, hhchen1105, Hugo Boulenger, Ilya Komarov, Inessa Pawson, Ivan Pan, Ivan Wiryadi, Jaimin Chauhan, Jakob Bull, James Lamb, Janez Demšar, Jérémie du Boisberranger, Jérôme Dockès, Jirair Aroyan, João Morais, Joe Cainey, Joel Nothman, John Enblom, JorgeCardenas, Joseph Barbier, jpienaar-tuks, Julian Chan, K.Bharat Reddy, Kevin Doshi, Lars, Loic Esteve, Lucas Colley, Lucy Liu, lunovian, Marc Bresson, Marco Edward Gorelli, Marco Maggi, Marco Wolsza, Maren Westermann, MarieS-WiMLDS, Martin Helm, Mathew Shen, mathurinm, Matthew Feickert, Maxwell Liu, Meekail Zain, Michael Dawson, Miguel Cárdenas, m-maggi, mrastgoo, Natalia Mokeeva, Nathan Goldbaum, Nathan Orgera, nbrown-ScottLogic, Nikita Chistyakov, Nithish Bolleddula, Noam Keidar, NoPenguinsLand, Norbert Preining, notPlancha, Olivier Grisel, Omar Salman, ParsifalXu, Piotr, Priyank Shroff, Priyansh Gupta, Quentin Barthélemy, Rachit23110261, Rahil Parikh, raisadz, Rajath, renaissance0ne, Reshama Shaikh, Roberto Rosati,
Robert Pollak, rwelsch427, Santiago Castro, Santiago M. Mola, scikit-learn-bot, sean moiselle, SHREEKANT VITTHAL NANDIYAWAR, Shruti Nath, Søren Bredlund Caspersen, Stefanie Senger, Stefano Gaspari, Steffen Schneider, Štěpán Sršeň, Sylvain Combettes, Tamara, Thomas, Thomas Gessey-Jones, Thomas J. Fan, Thomas Li, ThorbenMaa, Tialo, Tim Head, Tuhin Sharma, Tushar Parimi, Umberto Fasci, UV, vedpawar2254, Velislav Babatchev, Victoria Shevchenko, viktor765, Vince Carey, Virgil Chan, Wang Jiayi, Xiao Yuan, Xuefeng Xu, Yao Xiao, yareyaredesuyo, Zachary Vealey, Ziad Amerr
.. include:: _contributors.rst

.. currentmodule:: sklearn

============
Version 0.19
============

.. _changes_0_19:

Version 0.19.2
==============

**July, 2018**

This release is exclusively in order to support Python 3.7.

Related changes
---------------

- ``n_iter_`` may vary from previous releases in :class:`linear_model.LogisticRegression` with ``solver='lbfgs'`` and :class:`linear_model.HuberRegressor`. For Scipy <= 1.0.0, the optimizer could perform more than the requested maximum number of iterations. Now both estimators will report at most ``max_iter`` iterations even if more were performed. :issue:`10723` by `Joel Nothman`_.

Version 0.19.1
==============

**October 23, 2017**

This is a bug-fix release with some minor documentation improvements and enhancements to features released in 0.19.0.

Note there may be minor differences in TSNE output in this release (due to :issue:`9623`), in the case where multiple samples have equal distance to some sample.

Changelog
---------

API changes
...........

- Reverted the addition of ``metrics.ndcg_score`` and ``metrics.dcg_score`` which had been merged into version 0.19.0 by error. The implementations were broken and undocumented.

- ``return_train_score`` which was added to :class:`model_selection.GridSearchCV`, :class:`model_selection.RandomizedSearchCV` and :func:`model_selection.cross_validate` in version 0.19.0 will be changing its default value from True to False in version 0.21. We found that calculating training score could have a great effect on cross validation runtime in some cases. Users should explicitly set ``return_train_score`` to False if prediction or scoring functions are slow, resulting in a deleterious effect on CV runtime, or to True if they wish to use the calculated scores. :issue:`9677` by :user:`Kumar Ashutosh` and `Joel Nothman`_.

- ``correlation_models`` and ``regression_models`` from the legacy gaussian processes implementation have been belatedly deprecated. :issue:`9717` by :user:`Kumar Ashutosh`.

Bug fixes
.........

- Avoid integer overflows in :func:`metrics.matthews_corrcoef`. :issue:`9693` by :user:`Sam Steingold`.

- Fixed a bug in the objective function for :class:`manifold.TSNE` (both exact and with the Barnes-Hut approximation) when ``n_components >= 3``. :issue:`9711` by :user:`goncalo-rodrigues`.

- Fix regression in :func:`model_selection.cross_val_predict` where it raised an error with ``method='predict_proba'`` for some probabilistic classifiers. :issue:`9641` by :user:`James Bourbeau`.

- Fixed a bug where :func:`datasets.make_classification` modified its input ``weights``. :issue:`9865` by :user:`Sachin Kelkar`.

- :class:`model_selection.StratifiedShuffleSplit` now works with multioutput multiclass or multilabel data with more than 1000 columns. :issue:`9922` by :user:`Charlie Brummitt`.

- Fixed a bug with nested and conditional parameter setting, e.g. setting a pipeline step and its parameter at the same time. :issue:`9945` by `Andreas Müller`_ and `Joel Nothman`_.

Regressions in 0.19.0 fixed in 0.19.1:

- Fixed a bug where parallelised prediction in random forests was not thread-safe and could (rarely) result in arbitrary errors. :issue:`9830` by `Joel Nothman`_.

- Fix regression in :func:`model_selection.cross_val_predict` where it no longer accepted ``X`` as a list. :issue:`9600` by :user:`Rasul Kerimov`.

- Fixed handling of :func:`model_selection.cross_val_predict` for binary classification with ``method='decision_function'``. :issue:`9593` by :user:`Reiichiro Nakano` and core devs.

- Fix regression in :class:`pipeline.Pipeline` where it no longer accepted ``steps`` as a tuple. :issue:`9604` by :user:`Joris Van den Bossche`.
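The `cross_val_predict` fix for ``method='predict_proba'`` noted above can be illustrated with a minimal sketch (written against the present-day scikit-learn API; the dataset and estimator are arbitrary choices): the returned array has one probability row per sample and one column per class.

```python
# Minimal sketch: cross-validated probability predictions.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000)

# With method="predict_proba", the output is shaped (n_samples, n_classes).
proba = cross_val_predict(clf, X, y, cv=5, method="predict_proba")
print(proba.shape)  # (150, 3)
```

Each row is a probability distribution over the three iris classes, so it sums to one.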
- Fix bug where ``n_iter`` was not properly deprecated, leaving ``n_iter`` unavailable for interim use in :class:`linear_model.SGDClassifier`, :class:`linear_model.SGDRegressor`, :class:`linear_model.PassiveAggressiveClassifier`, :class:`linear_model.PassiveAggressiveRegressor` and :class:`linear_model.Perceptron`. :issue:`9558` by `Andreas Müller`_.

- Dataset fetchers now make sure temporary files are closed before removing them, which previously caused errors on Windows. :issue:`9847` by :user:`Joan Massich`.

- Fixed a regression in :class:`manifold.TSNE` where it no longer supported metrics other than 'euclidean' and 'precomputed'. :issue:`9623` by :user:`Oli Blum`.

Enhancements
............

- Our test suite and :func:`utils.estimator_checks.check_estimator` can now be run without Nose installed. :issue:`9697` by :user:`Joan Massich`.

- To improve usability of version 0.19's :class:`pipeline.Pipeline` caching, ``memory`` now allows ``joblib.Memory`` instances. This makes use of the new :func:`utils.validation.check_memory` helper. :issue:`9584` by :user:`Kumar Ashutosh`

- Some fixes to examples: :issue:`9750`, :issue:`9788`, :issue:`9815`

- Made a FutureWarning in
https://github.com/scikit-learn/scikit-learn/blob/main//doc/whats_new/v0.19.rst
SGD-based estimators less verbose. :issue:`9802` by :user:`Vrishank Bhardwaj`.

Code and Documentation Contributors
-----------------------------------

With thanks to:

Joel Nothman, Loic Esteve, Andreas Mueller, Kumar Ashutosh, Vrishank Bhardwaj, Hanmin Qin, Rasul Kerimov, James Bourbeau, Nagarjuna Kumar, Nathaniel Saul, Olivier Grisel, Roman Yurchak, Reiichiro Nakano, Sachin Kelkar, Sam Steingold, Yaroslav Halchenko, diegodlh, felix, goncalo-rodrigues, jkleint, oliblum90, pasbi, Anthony Gitter, Ben Lawson, Charlie Brummitt, Didi Bar-Zev, Gael Varoquaux, Joan Massich, Joris Van den Bossche, nielsenmarkus11

Version 0.19
============

**August 12, 2017**

Highlights
----------

We are excited to release a number of great new features including :class:`neighbors.LocalOutlierFactor` for anomaly detection, :class:`preprocessing.QuantileTransformer` for robust feature transformation, and the :class:`multioutput.ClassifierChain` meta-estimator to simply account for dependencies between classes in multilabel problems. We have some new algorithms in existing estimators, such as multiplicative update in :class:`decomposition.NMF` and multinomial :class:`linear_model.LogisticRegression` with L1 loss (use ``solver='saga'``). Cross validation is now able to return the results from multiple metric evaluations. The new :func:`model_selection.cross_validate` can return many scores on the test data as well as training set performance and timings, and we have extended the ``scoring`` and ``refit`` parameters for grid/randomized search :ref:`to handle multiple metrics`.
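The multi-metric cross-validation highlighted above can be sketched as follows (a minimal example against the present-day API; the estimator and metrics are arbitrary choices):

```python
# Minimal sketch: evaluating several metrics in one cross-validation run.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_validate
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
results = cross_validate(
    DecisionTreeClassifier(random_state=0),
    X, y, cv=5,
    scoring=["accuracy", "f1_macro"],  # several metrics at once
    return_train_score=True,           # also report training-set scores
)
# results maps e.g. "test_accuracy" and "train_f1_macro" to per-fold score
# arrays, alongside "fit_time" and "score_time".
print(sorted(results))
```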
You can also learn faster. For instance, the :ref:`new option to cache transformations` in :class:`pipeline.Pipeline` makes grid search over pipelines including slow transformations much more efficient.

And you can predict faster: if you're sure you know what you're doing, you can turn off validating that the input is finite using :func:`config_context`.

We've made some important fixes too. We've fixed a longstanding implementation error in :func:`metrics.average_precision_score`, so please be cautious with prior results reported from that function. A number of errors in the :class:`manifold.TSNE` implementation have been fixed, particularly in the default Barnes-Hut approximation. :class:`semi_supervised.LabelSpreading` and :class:`semi_supervised.LabelPropagation` have had substantial fixes. LabelPropagation was previously broken. LabelSpreading should now correctly respect its alpha parameter.

Changed models
--------------

The following estimators and functions, when fit with the same data and parameters, may produce different models from the previous version. This often occurs due to changes in the modelling logic (bug fixes or enhancements), or in random sampling procedures.

- :class:`cluster.KMeans` with sparse X and initial centroids given (bug fix)
- :class:`cross_decomposition.PLSRegression` with ``scale=True`` (bug fix)
- :class:`ensemble.GradientBoostingClassifier` and :class:`ensemble.GradientBoostingRegressor` where ``min_impurity_split`` is used (bug fix)
- gradient boosting ``loss='quantile'`` (bug fix)
- :class:`ensemble.IsolationForest` (bug fix)
- :class:`feature_selection.SelectFdr` (bug fix)
- :class:`linear_model.RANSACRegressor` (bug fix)
- :class:`linear_model.LassoLars` (bug fix)
- :class:`linear_model.LassoLarsIC` (bug fix)
- :class:`manifold.TSNE` (bug fix)
- :class:`neighbors.NearestCentroid` (bug fix)
- :class:`semi_supervised.LabelSpreading` (bug fix)
- :class:`semi_supervised.LabelPropagation` (bug fix)
- tree based models where ``min_weight_fraction_leaf`` is used (enhancement)
- :class:`model_selection.StratifiedKFold` with ``shuffle=True`` (this change, due to :issue:`7823`, was not mentioned in the release notes at the time)

Details are listed in the changelog below. (While we are trying to better inform users by providing this information, we cannot assure that this list is complete.)

Changelog
---------

New features
............

Classifiers and regressors

- Added :class:`multioutput.ClassifierChain` for multi-label classification. By :user:`Adam Kleczewski`.

- Added solver ``'saga'`` that implements the improved version of Stochastic Average Gradient, in :class:`linear_model.LogisticRegression` and :class:`linear_model.Ridge`. It allows the use of L1 penalty with multinomial logistic loss, and behaves marginally better than 'sag' during the first epochs of ridge and logistic regression. :issue:`8446` by `Arthur Mensch`_.

Other estimators

- Added the :class:`neighbors.LocalOutlierFactor` class for anomaly detection based on nearest neighbors. :issue:`5279` by `Nicolas Goix`_ and
`Alexandre Gramfort`_.

- Added :class:`preprocessing.QuantileTransformer` class and :func:`preprocessing.quantile_transform` function for features normalization based on quantiles. :issue:`8363` by :user:`Denis Engemann`, :user:`Guillaume Lemaitre`, `Olivier Grisel`_, `Raghav RV`_, :user:`Thierry Guillemot`, and `Gael Varoquaux`_.

- The new solver ``'mu'`` implements a Multiplicative Update in :class:`decomposition.NMF`, allowing the optimization of all beta-divergences, including the Frobenius norm, the generalized Kullback-Leibler divergence and the Itakura-Saito divergence. :issue:`5295` by `Tom Dupre la Tour`_.

Model selection and evaluation

- :class:`model_selection.GridSearchCV` and :class:`model_selection.RandomizedSearchCV` now support simultaneous evaluation of multiple metrics. Refer to the :ref:`multimetric_grid_search` section of the user guide for more information. :issue:`7388` by `Raghav RV`_

- Added the :func:`model_selection.cross_validate` which allows evaluation of multiple metrics. This function returns a dict with more useful information from cross-validation such as the train scores, fit times and score times. Refer to the :ref:`multimetric_cross_validation` section of the user guide for more information. :issue:`7388` by `Raghav RV`_

- Added :func:`metrics.mean_squared_log_error`, which computes the mean square error of the logarithmic transformation of targets, particularly useful for targets with an exponential trend. :issue:`7655` by :user:`Karan Desai`.
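The `mean_squared_log_error` entry above reduces to a one-line formula: the mean squared difference of `log1p`-transformed targets and predictions. A small numeric check (the sample values are arbitrary):

```python
# mean_squared_log_error == mean((log1p(y_true) - log1p(y_pred)) ** 2)
import numpy as np
from sklearn.metrics import mean_squared_log_error

y_true = np.array([3.0, 5.0, 2.5, 7.0])
y_pred = np.array([2.5, 5.0, 4.0, 8.0])

# Compute the metric by hand and compare with the library function.
manual = np.mean((np.log1p(y_true) - np.log1p(y_pred)) ** 2)
assert np.isclose(mean_squared_log_error(y_true, y_pred), manual)
```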
- Added :func:`metrics.dcg_score` and :func:`metrics.ndcg_score`, which compute Discounted Cumulative Gain (DCG) and Normalized Discounted Cumulative Gain (NDCG). :issue:`7739` by :user:`David Gasquez`.

- Added the :class:`model_selection.RepeatedKFold` and :class:`model_selection.RepeatedStratifiedKFold`. :issue:`8120` by `Neeraj Gangwar`_.

Miscellaneous

- Validation that input data contains no NaN or inf can now be suppressed using :func:`config_context`, at your own risk. This will save on runtime, and may be particularly useful for prediction time. :issue:`7548` by `Joel Nothman`_.

- Added a test to ensure parameter listing in docstrings matches the function/class signature. :issue:`9206` by `Alexandre Gramfort`_ and `Raghav RV`_.

Enhancements
............

Trees and ensembles

- The ``min_weight_fraction_leaf`` constraint in tree construction is now more efficient, taking a fast path to declare a node a leaf if its weight is less than 2 * the minimum. Note that the constructed tree will be different from previous versions where ``min_weight_fraction_leaf`` is used. :issue:`7441` by :user:`Nelson Liu`.

- :class:`ensemble.GradientBoostingClassifier` and :class:`ensemble.GradientBoostingRegressor` now support sparse input for prediction. :issue:`6101` by :user:`Ibraim Ganiev`.

- :class:`ensemble.VotingClassifier` now allows changing estimators by using :meth:`ensemble.VotingClassifier.set_params`. An estimator can also be removed by setting it to ``None``. :issue:`7674` by :user:`Yichuan Liu`.

- :func:`tree.export_graphviz` now shows configurable number of decimal places. :issue:`8698` by :user:`Guillaume Lemaitre`.

- Added ``flatten_transform`` parameter to :class:`ensemble.VotingClassifier` to change output shape of `transform` method to 2 dimensional. :issue:`7794` by :user:`Ibraim Ganiev` and :user:`Herilalaina Rakotoarison`.
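The input-validation switch mentioned under "Miscellaneous" can be sketched as follows (a minimal example; the regression data is arbitrary):

```python
# Minimal sketch: suppressing NaN/inf input validation with config_context,
# at the caller's own risk.
import numpy as np
import sklearn
from sklearn.linear_model import LinearRegression

rng = np.random.RandomState(0)
X = rng.rand(20, 3)
y = X @ np.array([1.0, 2.0, 3.0])

with sklearn.config_context(assume_finite=True):
    # Inside this block, finiteness checks on input arrays are skipped,
    # saving validation time on large inputs.
    assert sklearn.get_config()["assume_finite"] is True
    model = LinearRegression().fit(X, y)

# Outside the block the default, validating behavior is restored.
assert sklearn.get_config()["assume_finite"] is False
```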
Linear, kernelized and related models

- :class:`linear_model.SGDClassifier`, :class:`linear_model.SGDRegressor`, :class:`linear_model.PassiveAggressiveClassifier`, :class:`linear_model.PassiveAggressiveRegressor` and :class:`linear_model.Perceptron` now expose ``max_iter`` and ``tol`` parameters, to handle convergence more precisely. The ``n_iter`` parameter is deprecated, and the fitted estimator exposes a ``n_iter_`` attribute, with the actual number of iterations before convergence. :issue:`5036` by `Tom Dupre la Tour`_.

- Added ``average`` parameter to perform weight averaging in :class:`linear_model.PassiveAggressiveClassifier`. :issue:`4939` by :user:`Andrea Esuli`.

- :class:`linear_model.RANSACRegressor` no longer throws an error when calling ``fit`` if no inliers are found in its first iteration. Furthermore, causes of skipped iterations are tracked in newly added attributes, ``n_skips_*``. :issue:`7914` by :user:`Michael Horrell`.

- In :class:`gaussian_process.GaussianProcessRegressor`, method ``predict`` is a lot faster with ``return_std=True``. :issue:`8591` by :user:`Hadrien Bertrand`.

- Added ``return_std`` to ``predict`` method of :class:`linear_model.ARDRegression` and :class:`linear_model.BayesianRidge`. :issue:`7838` by :user:`Sergey Feldman`.

- Memory usage enhancements: Prevent cast from float32 to float64 in: :class:`linear_model.MultiTaskElasticNet`; :class:`linear_model.LogisticRegression` when using newton-cg solver; and :class:`linear_model.Ridge` when using svd,
sparse_cg, cholesky or lsqr solvers. :issue:`8835`, :issue:`8061` by :user:`Joan Massich` and :user:`Nicolas Cordier` and :user:`Thierry Guillemot`.

Other predictors

- Custom metrics for the :mod:`sklearn.neighbors` binary trees now have fewer constraints: they must take two 1d-arrays and return a float. :issue:`6288` by `Jake Vanderplas`_.

- ``algorithm='auto'`` in :mod:`sklearn.neighbors` estimators now chooses the most appropriate algorithm for all input types and metrics. :issue:`9145` by :user:`Herilalaina Rakotoarison` and :user:`Reddy Chinthala`.

Decomposition, manifold learning and clustering

- :class:`cluster.MiniBatchKMeans` and :class:`cluster.KMeans` now use significantly less memory when assigning data points to their nearest cluster center. :issue:`7721` by :user:`Jon Crall`.

- :class:`decomposition.PCA`, :class:`decomposition.IncrementalPCA` and :class:`decomposition.TruncatedSVD` now expose the singular values from the underlying SVD. They are stored in the attribute ``singular_values_``, like in :class:`decomposition.IncrementalPCA`. :issue:`7685` by :user:`Tommy Löfstedt`

- :class:`decomposition.NMF` is now faster when ``beta_loss=0``. :issue:`9277` by :user:`hongkahjun`.

- Memory improvements for method ``barnes_hut`` in :class:`manifold.TSNE`. :issue:`7089` by :user:`Thomas Moreau` and `Olivier Grisel`_.
- Optimization schedule improvements for Barnes-Hut :class:`manifold.TSNE` so the results are closer to the one from the reference implementation `lvdmaaten/bhtsne`_ by :user:`Thomas Moreau` and `Olivier Grisel`_.

- Memory usage enhancements: Prevent cast from float32 to float64 in :class:`decomposition.PCA` and `decomposition.randomized_svd_low_rank`. :issue:`9067` by `Raghav RV`_.

Preprocessing and feature selection

- Added ``norm_order`` parameter to :class:`feature_selection.SelectFromModel` to enable selection of the norm order when ``coef_`` is more than 1D. :issue:`6181` by :user:`Antoine Wendlinger`.

- Added ability to use sparse matrices in :func:`feature_selection.f_regression` with ``center=True``. :issue:`8065` by :user:`Daniel LeJeune`.

- Small performance improvement to n-gram creation in :mod:`sklearn.feature_extraction.text` by binding methods for loops and special-casing unigrams. :issue:`7567` by :user:`Jaye Doepke`

- Relax assumption on the data for the :class:`kernel_approximation.SkewedChi2Sampler`. Since the Skewed-Chi2 kernel is defined on the open interval :math:`(-skewedness; +\infty)^d`, the transform function should not check whether ``X < 0`` but whether ``X < -self.skewedness``. :issue:`7573` by :user:`Romain Brault`.

- Made default kernel parameters kernel-dependent in :class:`kernel_approximation.Nystroem`. :issue:`5229` by :user:`Saurabh Bansod` and `Andreas Müller`_.

Model evaluation and meta-estimators

- :class:`pipeline.Pipeline` is now able to cache transformers within a pipeline by using the ``memory`` constructor parameter. :issue:`7990` by :user:`Guillaume Lemaitre`.

- :class:`pipeline.Pipeline` steps can now be accessed as attributes of its ``named_steps`` attribute. :issue:`8586` by :user:`Herilalaina Rakotoarison`.

- Added ``sample_weight`` parameter to :meth:`pipeline.Pipeline.score`. :issue:`7723` by :user:`Mikhail Korobov`.
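The pipeline caching described above (the ``memory`` constructor parameter) can be sketched as follows; a temporary directory stands in for a persistent cache, and the estimators are arbitrary choices:

```python
# Minimal sketch: caching fitted transformers of a Pipeline via ``memory``.
import tempfile

from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

X, y = load_iris(return_X_y=True)

with tempfile.TemporaryDirectory() as cachedir:
    pipe = Pipeline(
        [("reduce", PCA(n_components=2)),
         ("clf", LogisticRegression(max_iter=1000))],
        memory=cachedir,  # fitted transformers are cached on disk here
    )
    pipe.fit(X, y)
    pipe.fit(X, y)  # refitting with identical data/params reuses the cached PCA fit
    score = pipe.score(X, y)
```

Steps are also reachable as attributes of ``named_steps``, e.g. ``pipe.named_steps.reduce``, per the enhancement listed above.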
- Added ability to set ``n_jobs`` parameter to :func:`pipeline.make_union`.
  A ``TypeError`` will be raised for any other kwargs. :issue:`8028` by
  :user:`Alexander Booth`.

- :class:`model_selection.GridSearchCV`,
  :class:`model_selection.RandomizedSearchCV` and
  :func:`model_selection.cross_val_score` now allow estimators with callable
  kernels which were previously prohibited. :issue:`8005` by
  `Andreas Müller`_.

- :func:`model_selection.cross_val_predict` now returns output of the
  correct shape for all values of the argument ``method``. :issue:`7863` by
  :user:`Aman Dalmia`.

- Added ``shuffle`` and ``random_state`` parameters to shuffle training data
  before taking prefixes of it based on training sizes in
  :func:`model_selection.learning_curve`. :issue:`7506` by
  :user:`Narine Kokhlikyan`.

- :class:`model_selection.StratifiedShuffleSplit` now works with multioutput
  multiclass (or multilabel) data. :issue:`9044` by `Vlad Niculae`_.

- Speed improvements to :class:`model_selection.StratifiedShuffleSplit`.
  :issue:`5991` by :user:`Arthur Mensch` and `Joel Nothman`_.

- Add ``shuffle`` parameter to :func:`model_selection.train_test_split`.
  :issue:`8845` by :user:`themrmax`.

- :class:`multioutput.MultiOutputRegressor` and
  :class:`multioutput.MultiOutputClassifier` now support online learning
  using ``partial_fit``. :issue:`8053` by :user:`Peng Yu`.

- Add ``max_train_size`` parameter to
  :class:`model_selection.TimeSeriesSplit`. :issue:`8282` by
  :user:`Aman Dalmia`.

- More clustering metrics are now available through
  :func:`metrics.get_scorer` and ``scoring`` parameters. :issue:`8117` by
  `Raghav RV`_.
https://github.com/scikit-learn/scikit-learn/blob/main//doc/whats_new/v0.19.rst
- A scorer based on :func:`metrics.explained_variance_score` is also
  available. :issue:`9259` by :user:`Hanmin Qin`.

Metrics

- :func:`metrics.matthews_corrcoef` now supports multiclass classification.
  :issue:`8094` by :user:`Jon Crall`.

- Add ``sample_weight`` parameter to :func:`metrics.cohen_kappa_score`.
  :issue:`8335` by :user:`Victor Poughon`.

Miscellaneous

- :func:`utils.estimator_checks.check_estimator` now attempts to ensure that
  methods transform, predict, etc. do not set attributes on the estimator.
  :issue:`7533` by :user:`Ekaterina Krivich`.

- Added type checking to the ``accept_sparse`` parameter in
  :mod:`sklearn.utils.validation` methods. This parameter now accepts only
  boolean, string, or list/tuple of strings. ``accept_sparse=None`` is
  deprecated and should be replaced by ``accept_sparse=False``.
  :issue:`7880` by :user:`Josh Karnofsky`.

- Make it possible to load a chunk of an svmlight formatted file by passing
  a range of bytes to :func:`datasets.load_svmlight_file`. :issue:`935` by
  :user:`Olivier Grisel`.

- :class:`dummy.DummyClassifier` and :class:`dummy.DummyRegressor` now
  accept non-finite features. :issue:`8931` by :user:`Attractadore`.

Bug fixes
.........

Trees and ensembles

- Fixed a memory leak in trees when using trees with ``criterion='mae'``.
  :issue:`8002` by `Raghav RV`_.
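The multiclass extension of :func:`metrics.matthews_corrcoef` mentioned in the Metrics entry above uses the :math:`R_K` statistic (Gorodkin, 2004). A minimal pure-Python sketch of that definition, for illustration only (the function name and example data are my own, not scikit-learn API):

```python
from math import sqrt

def multiclass_mcc(y_true, y_pred):
    """Matthews correlation coefficient generalized to K classes
    (the R_K statistic), computed from the confusion matrix."""
    classes = sorted(set(y_true) | set(y_pred))
    idx = {c: i for i, c in enumerate(classes)}
    k = len(classes)
    # Confusion matrix: C[i][j] counts samples of true class i predicted as j.
    C = [[0] * k for _ in range(k)]
    for t_lab, p_lab in zip(y_true, y_pred):
        C[idx[t_lab]][idx[p_lab]] += 1

    s = len(y_true)                            # total samples
    c = sum(C[i][i] for i in range(k))         # correctly predicted samples
    t = [sum(C[i]) for i in range(k)]          # true count per class
    p = [sum(C[i][j] for i in range(k)) for j in range(k)]  # predicted count

    num = c * s - sum(ti * pi for ti, pi in zip(t, p))
    denom = sqrt(s * s - sum(pi * pi for pi in p)) * \
            sqrt(s * s - sum(ti * ti for ti in t))
    return num / denom if denom else 0.0

print(multiclass_mcc([0, 1, 2, 0, 1, 2], [0, 1, 2, 0, 1, 2]))  # 1.0
print(multiclass_mcc([0, 1, 2, 0, 1, 2], [0, 2, 2, 0, 1, 1]))  # 0.5
```

For binary labels this reduces to the familiar two-class MCC.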
- Fixed a bug where :class:`ensemble.IsolationForest` uses an incorrect
  formula for the average path length. :issue:`8549` by `Peter Wang`_.

- Fixed a bug where :class:`ensemble.AdaBoostClassifier` throws
  ``ZeroDivisionError`` while fitting data with single class labels.
  :issue:`7501` by :user:`Dominik Krzeminski`.

- Fixed a bug in :class:`ensemble.GradientBoostingClassifier` and
  :class:`ensemble.GradientBoostingRegressor` where a float being compared
  to ``0.0`` using ``==`` caused a divide by zero error. :issue:`7970` by
  :user:`He Chen`.

- Fix a bug where :class:`ensemble.GradientBoostingClassifier` and
  :class:`ensemble.GradientBoostingRegressor` ignored the
  ``min_impurity_split`` parameter. :issue:`8006` by
  :user:`Sebastian Pölsterl`.

- Fixed ``oob_score`` in :class:`ensemble.BaggingClassifier`. :issue:`8936`
  by :user:`Michael Lewis`.

- Fixed excessive memory usage in prediction for random forests estimators.
  :issue:`8672` by :user:`Mike Benfield`.

- Fixed a bug where ``sample_weight`` as a list broke random forests in
  Python 2. :issue:`8068` by :user:`xor`.

- Fixed a bug where :class:`ensemble.IsolationForest` fails when
  ``max_features`` is less than 1. :issue:`5732` by :user:`Ishank Gulati`.

- Fix a bug where gradient boosting with ``loss='quantile'`` computed
  negative errors for negative values of ``ytrue - ypred`` leading to wrong
  values when calling ``__call__``. :issue:`8087` by :user:`Alexis Mignon`.

- Fix a bug where :class:`ensemble.VotingClassifier` raises an error when a
  numpy array is passed in for weights. :issue:`7983` by
  :user:`Vincent Pham`.

- Fixed a bug where :func:`tree.export_graphviz` raised an error when the
  length of ``features_names`` does not match ``n_features`` in the decision
  tree. :issue:`8512` by :user:`Li Li`.

Linear, kernelized and related models

- Fixed a bug where :func:`linear_model.RANSACRegressor.fit` may run until
  ``max_iter`` if it finds a large inlier group early. :issue:`8251` by
  :user:`aivision2020`.
- Fixed a bug where :class:`naive_bayes.MultinomialNB` and
  :class:`naive_bayes.BernoulliNB` failed when ``alpha=0``. :issue:`5814` by
  :user:`Yichuan Liu` and :user:`Herilalaina Rakotoarison`.

- Fixed a bug where :class:`linear_model.LassoLars` does not give the same
  result as the LassoLars implementation available in R (lars library).
  :issue:`7849` by :user:`Jair Montoya Martinez`.

- Fixed a bug in `linear_model.RandomizedLasso`, :class:`linear_model.Lars`,
  :class:`linear_model.LassoLars`, :class:`linear_model.LarsCV` and
  :class:`linear_model.LassoLarsCV`, where the parameter ``precompute`` was
  not used consistently across classes, and some values proposed in the
  docstring could raise errors. :issue:`5359` by `Tom Dupre la Tour`_.

- Fix inconsistent results between :class:`linear_model.RidgeCV` and
  :class:`linear_model.Ridge` when using ``normalize=True``. :issue:`9302`
  by `Alexandre Gramfort`_.

- Fix a bug where :func:`linear_model.LassoLars.fit` sometimes left
  ``coef_`` as a list, rather than an ndarray. :issue:`8160` by
  :user:`CJ Carey`.

- Fix :func:`linear_model.BayesianRidge.fit` to return ridge parameter
  ``alpha_`` and ``lambda_`` consistent with calculated coefficients
  ``coef_`` and ``intercept_``. :issue:`8224` by :user:`Peter Gedeck`.
- Fixed a bug in :class:`svm.OneClassSVM` where it returned floats instead
  of integer classes. :issue:`8676` by :user:`Vathsala Achar`.

- Fix AIC/BIC criterion computation in :class:`linear_model.LassoLarsIC`.
  :issue:`9022` by `Alexandre Gramfort`_ and :user:`Mehmet Basbug`.

- Fixed a memory leak in our LibLinear implementation. :issue:`9024` by
  :user:`Sergei Lebedev`.

- Fix bug where stratified CV splitters did not work with
  :class:`linear_model.LassoCV`. :issue:`8973` by :user:`Paulo Haddad`.

- Fixed a bug in :class:`gaussian_process.GaussianProcessRegressor` when the
  standard deviation and covariance predicted without fit would fail with a
  meaningless error by default. :issue:`6573` by
  :user:`Quazi Marufur Rahman` and `Manoj Kumar`_.

Other predictors

- Fix `semi_supervised.BaseLabelPropagation` to correctly implement
  ``LabelPropagation`` and ``LabelSpreading`` as done in the referenced
  papers. :issue:`9239` by :user:`Andre Ambrosio Boechat`,
  :user:`Utkarsh Upadhyay`, and `Joel Nothman`_.

Decomposition, manifold learning and clustering

- Fixed the implementation of :class:`manifold.TSNE`:

  - ``early_exaggeration`` parameter had no effect and is now used for the
    first 250 optimization iterations.

  - Fixed the ``AssertionError: Tree consistency failed`` exception reported
    in :issue:`8992`.

  - Improve the learning schedule to match the one from the reference
    implementation `lvdmaaten/bhtsne`_.

  by :user:`Thomas Moreau` and `Olivier Grisel`_.
- Fix a bug in :class:`decomposition.LatentDirichletAllocation` where the
  ``perplexity`` method was returning incorrect results because the
  ``transform`` method returns normalized document topic distributions as of
  version 0.18. :issue:`7954` by :user:`Gary Foreman`.

- Fix output shape and bugs with ``n_jobs > 1`` in
  :class:`decomposition.SparseCoder` transform and
  :func:`decomposition.sparse_encode` for one-dimensional data and one
  component. This also impacts the output shape of
  :class:`decomposition.DictionaryLearning`. :issue:`8086` by
  `Andreas Müller`_.

- Fixed the implementation of ``explained_variance_`` in
  :class:`decomposition.PCA`, `decomposition.RandomizedPCA` and
  :class:`decomposition.IncrementalPCA`. :issue:`9105` by `Hanmin Qin`_.

- Fixed the implementation of ``noise_variance_`` in
  :class:`decomposition.PCA`. :issue:`9108` by `Hanmin Qin`_.

- Fixed a bug where :class:`cluster.DBSCAN` gives incorrect result when
  input is a precomputed sparse matrix with initial rows all zero.
  :issue:`8306` by :user:`Akshay Gupta`.

- Fix a bug regarding fitting :class:`cluster.KMeans` with a sparse array X
  and initial centroids, where X's means were unnecessarily being subtracted
  from the centroids. :issue:`7872` by :user:`Josh Karnofsky`.

- Fixes to the input validation in :class:`covariance.EllipticEnvelope`.
  :issue:`8086` by `Andreas Müller`_.

- Fixed a bug in :class:`covariance.MinCovDet` where inputting data that
  produced a singular covariance matrix would cause the helper method
  ``_c_step`` to throw an exception. :issue:`3367` by :user:`Jeremy Steward`.

- Fixed a bug in :class:`manifold.TSNE` affecting convergence of the
  gradient descent. :issue:`8768` by :user:`David DeTomaso`.

- Fixed a bug in :class:`manifold.TSNE` where it stored the incorrect
  ``kl_divergence_``. :issue:`6507` by :user:`Sebastian Saeger`.

- Fixed improper scaling in :class:`cross_decomposition.PLSRegression` with
  ``scale=True``. :issue:`7819` by :user:`jayzed82`.
- :class:`cluster.SpectralCoclustering` and
  :class:`cluster.SpectralBiclustering` ``fit`` method conforms with API by
  accepting ``y`` and returning the object. :issue:`6126`, :issue:`7814` by
  :user:`Laurent Direr` and :user:`Maniteja Nandana`.

- Fix bug where :mod:`sklearn.mixture` ``sample`` methods did not return as
  many samples as requested. :issue:`7702` by :user:`Levi John Wolf`.

- Fixed the shrinkage implementation in :class:`neighbors.NearestCentroid`.
  :issue:`9219` by `Hanmin Qin`_.

Preprocessing and feature selection

- For sparse matrices, :func:`preprocessing.normalize` with
  ``return_norm=True`` will now raise a ``NotImplementedError`` with 'l1' or
  'l2' norm and with norm 'max' the norms returned will be the same as for
  dense matrices. :issue:`7771` by `Ang Lu`_.

- Fix a bug where :class:`feature_selection.SelectFdr` did not exactly
  implement the Benjamini-Hochberg procedure. It formerly may have selected
  fewer features than it should. :issue:`7490` by :user:`Peng Meng`.
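The Benjamini-Hochberg step-up procedure that :class:`feature_selection.SelectFdr` implements can be sketched in a few lines of pure Python. This is an illustrative implementation of the procedure itself, not scikit-learn code; the function name and example p-values are mine:

```python
def benjamini_hochberg_select(p_values, alpha):
    """Return indices of hypotheses selected by the Benjamini-Hochberg
    step-up procedure at FDR level ``alpha``.

    Find the largest rank k such that the k-th smallest p-value satisfies
    p_(k) <= k / m * alpha, then select every hypothesis whose p-value is
    at most p_(k).
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    threshold = None
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * alpha:
            threshold = p_values[i]  # keep updating: we want the largest k
    if threshold is None:
        return []
    return [i for i in range(m) if p_values[i] <= threshold]

print(benjamini_hochberg_select([0.03, 0.5, 0.01, 0.02], alpha=0.1))  # [0, 2, 3]
```

The subtlety the fix addressed is the "step-up" part: every p-value up to the largest passing rank is selected, even if some intermediate p-value fails its own per-rank threshold.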
- Fixed a bug where `linear_model.RandomizedLasso` and
  `linear_model.RandomizedLogisticRegression` break for sparse input.
  :issue:`8259` by :user:`Aman Dalmia`.

- Fix a bug where :class:`feature_extraction.FeatureHasher` mandatorily
  applied a sparse random projection to the hashed features, preventing the
  use of :class:`feature_extraction.text.HashingVectorizer` in a pipeline
  with :class:`feature_extraction.text.TfidfTransformer`. :issue:`7565` by
  :user:`Roman Yurchak`.

- Fix a bug where :class:`feature_selection.mutual_info_regression` did not
  correctly use ``n_neighbors``. :issue:`8181` by
  :user:`Guillaume Lemaitre`.

Model evaluation and meta-estimators

- Fixed a bug where `model_selection.BaseSearchCV.inverse_transform` returns
  ``self.best_estimator_.transform()`` instead of
  ``self.best_estimator_.inverse_transform()``. :issue:`8344` by
  :user:`Akshay Gupta` and :user:`Rasmus Eriksson`.

- Added ``classes_`` attribute to :class:`model_selection.GridSearchCV`,
  :class:`model_selection.RandomizedSearchCV`, `grid_search.GridSearchCV`,
  and `grid_search.RandomizedSearchCV` that matches the ``classes_``
  attribute of ``best_estimator_``. :issue:`7661` and :issue:`8295` by
  :user:`Alyssa Batula`, :user:`Dylan Werner-Meier`, and
  :user:`Stephen Hoover`.

- Fixed a bug where :func:`model_selection.validation_curve` reused the same
  estimator for each parameter value. :issue:`7365` by
  :user:`Aleksandr Sandrovskii`.

- :func:`model_selection.permutation_test_score` now works with Pandas
  types. :issue:`5697` by :user:`Stijn Tonk`.
- Several fixes to input validation in
  :class:`multiclass.OutputCodeClassifier`. :issue:`8086` by
  `Andreas Müller`_.

- :class:`multiclass.OneVsOneClassifier`'s ``partial_fit`` now ensures all
  classes are provided up-front. :issue:`6250` by :user:`Asish Panda`.

- Fix :func:`multioutput.MultiOutputClassifier.predict_proba` to return a
  list of 2d arrays, rather than a 3d array. In the case where different
  target columns had different numbers of classes, a ``ValueError`` would be
  raised on trying to stack matrices with different dimensions.
  :issue:`8093` by :user:`Peter Bull`.

- Cross validation now works with Pandas datatypes that have a read-only
  index. :issue:`9507` by `Loic Esteve`_.

Metrics

- :func:`metrics.average_precision_score` no longer linearly interpolates
  between operating points, and instead weighs precisions by the change in
  recall since the last operating point, as per the `Wikipedia entry`_.
  (`#7356`_). By :user:`Nick Dingwall` and `Gael Varoquaux`_.

- Fix a bug in `metrics.classification._check_targets` which would return
  ``'binary'`` if ``y_true`` and ``y_pred`` were both ``'binary'`` but the
  union of ``y_true`` and ``y_pred`` was ``'multiclass'``. :issue:`8377` by
  `Loic Esteve`_.

- Fixed an integer overflow bug in :func:`metrics.confusion_matrix` and
  hence :func:`metrics.cohen_kappa_score`. :issue:`8354`, :issue:`7929` by
  `Joel Nothman`_ and :user:`Jon Crall`.

- Fixed passing of ``gamma`` parameter to the ``chi2`` kernel in
  :func:`metrics.pairwise.pairwise_kernels`. :issue:`5211` by
  :user:`Nick Rhinehart`, :user:`Saurabh Bansod` and `Andreas Müller`_.

Miscellaneous

- Fixed a bug where :func:`datasets.make_classification` fails when
  generating more than 30 features. :issue:`8159` by
  :user:`Herilalaina Rakotoarison`.

- Fixed a bug where :func:`datasets.make_moons` gives an incorrect result
  when ``n_samples`` is odd. :issue:`8198` by :user:`Josh Levy`.
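The non-interpolated definition of average precision adopted in the Metrics entry above, :math:`AP = \sum_n (R_n - R_{n-1}) P_n`, is short enough to sketch directly. An illustrative pure-Python version (names and example data are mine, not the scikit-learn implementation):

```python
def average_precision(y_true, scores):
    """Non-interpolated average precision: walk the ranking by decreasing
    score and accumulate precision weighted by the increase in recall."""
    n_pos = sum(y_true)
    ranked = sorted(zip(scores, y_true), reverse=True)
    tp = 0
    prev_recall = 0.0
    ap = 0.0
    for rank, (score, label) in enumerate(ranked, start=1):
        if label:  # recall only increases at relevant (positive) ranks
            tp += 1
            precision = tp / rank
            recall = tp / n_pos
            ap += (recall - prev_recall) * precision
            prev_recall = recall
    return ap

# Positives ranked 1st and 3rd: AP = 0.5 * 1 + 0.5 * (2/3) = 5/6
print(average_precision([1, 0, 1], [0.9, 0.8, 0.7]))
```

Linear interpolation between operating points, which the fix removed, can only raise this value, which is why interpolated scores were optimistically biased.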
- Some ``fetch_`` functions in :mod:`sklearn.datasets` were ignoring the
  ``download_if_missing`` keyword. :issue:`7944` by :user:`Ralf Gommers`.

- Fix estimators to accept a ``sample_weight`` parameter of type
  ``pandas.Series`` in their ``fit`` function. :issue:`7825` by
  `Kathleen Chen`_.

- Fix a bug in cases where ``numpy.cumsum`` may be numerically unstable,
  raising an exception if instability is identified. :issue:`7376` and
  :issue:`7331` by `Joel Nothman`_ and :user:`yangarbiter`.

- Fix a bug where `base.BaseEstimator.__getstate__` obstructed pickling
  customizations of child-classes, when used in a multiple inheritance
  context. :issue:`8316` by :user:`Holger Peters`.

- Update Sphinx-Gallery from 0.1.4 to 0.1.7 for resolving links in
  documentation build with Sphinx>1.5. :issue:`8010`, :issue:`7986` by
  :user:`Oscar Najera`.

- Add ``data_home`` parameter to :func:`sklearn.datasets.fetch_kddcup99`.
  :issue:`9289` by `Loic Esteve`_.

- Fix dataset loaders using Python 3 version of makedirs to also work in
  Python 2. :issue:`9284` by :user:`Sebastin Santy`.

- Several minor issues were fixed with thanks to the alerts of `lgtm.com`_.
  :issue:`9278` by :user:`Jean Helie`, among others.

API changes summary
-------------------
Trees and ensembles

- Gradient boosting base models are no longer estimators. By
  `Andreas Müller`_.

- All tree-based estimators now accept a ``min_impurity_decrease`` parameter
  in lieu of the ``min_impurity_split``, which is now deprecated. The
  ``min_impurity_decrease`` helps stop splitting the nodes in which the
  weighted impurity decrease from splitting is no longer at least
  ``min_impurity_decrease``. :issue:`8449` by `Raghav RV`_.

Linear, kernelized and related models

- ``n_iter`` parameter is deprecated in :class:`linear_model.SGDClassifier`,
  :class:`linear_model.SGDRegressor`,
  :class:`linear_model.PassiveAggressiveClassifier`,
  :class:`linear_model.PassiveAggressiveRegressor` and
  :class:`linear_model.Perceptron`. By `Tom Dupre la Tour`_.

Other predictors

- `neighbors.LSHForest` has been deprecated and will be removed in 0.21 due
  to poor performance. :issue:`9078` by :user:`Laurent Direr`.

- :class:`neighbors.NearestCentroid` no longer purports to support
  ``metric='precomputed'``, which now raises an error. :issue:`8515` by
  :user:`Sergul Aydore`.

- The ``alpha`` parameter of :class:`semi_supervised.LabelPropagation` now
  has no effect and is deprecated, to be removed in 0.21. :issue:`9239` by
  :user:`Andre Ambrosio Boechat`, :user:`Utkarsh Upadhyay`, and
  `Joel Nothman`_.

Decomposition, manifold learning and clustering

- Deprecate the ``doc_topic_distr`` argument of the ``perplexity`` method in
  :class:`decomposition.LatentDirichletAllocation` because the user no
  longer has access to the unnormalized document topic distribution needed
  for the perplexity calculation.
  :issue:`7954` by :user:`Gary Foreman`.

- The ``n_topics`` parameter of
  :class:`decomposition.LatentDirichletAllocation` has been renamed to
  ``n_components`` and will be removed in version 0.21. :issue:`8922` by
  :user:`Attractadore`.

- :meth:`decomposition.SparsePCA.transform`'s ``ridge_alpha`` parameter is
  deprecated in preference for class parameter. :issue:`8137` by
  :user:`Naoya Kanai`.

- :class:`cluster.DBSCAN` now has a ``metric_params`` parameter.
  :issue:`8139` by :user:`Naoya Kanai`.

Preprocessing and feature selection

- :class:`feature_selection.SelectFromModel` now has a ``partial_fit``
  method only if the underlying estimator does. By `Andreas Müller`_.

- :class:`feature_selection.SelectFromModel` now validates the ``threshold``
  parameter and sets the ``threshold_`` attribute during the call to
  ``fit``, and no longer during the call to ``transform``. By
  `Andreas Müller`_.

- The ``non_negative`` parameter in
  :class:`feature_extraction.FeatureHasher` has been deprecated, and
  replaced with a more principled alternative, ``alternate_sign``.
  :issue:`7565` by :user:`Roman Yurchak`.

- `linear_model.RandomizedLogisticRegression` and
  `linear_model.RandomizedLasso` have been deprecated and will be removed in
  version 0.21. :issue:`8995` by :user:`Ramana.S`.

Model evaluation and meta-estimators

- Deprecate the ``fit_params`` constructor input to the
  :class:`model_selection.GridSearchCV` and
  :class:`model_selection.RandomizedSearchCV` in favor of passing keyword
  parameters to the ``fit`` methods of those classes. Data-dependent
  parameters needed for model training should be passed as keyword arguments
  to ``fit``, and conforming to this convention will allow the
  hyperparameter selection classes to be used with tools such as
  :func:`model_selection.cross_val_predict`. :issue:`2879` by
  :user:`Stephen Hoover`.
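The weighted impurity decrease behind the ``min_impurity_decrease`` parameter introduced above is a simple arithmetic criterion. A sketch of the formula as documented for the tree-based estimators (the function name and the sample numbers are mine, for illustration):

```python
def weighted_impurity_decrease(N, N_t, N_t_L, N_t_R,
                               impurity, impurity_L, impurity_R):
    """Weighted impurity decrease of a candidate split:

        N_t / N * (impurity - N_t_R / N_t * impurity_R
                            - N_t_L / N_t * impurity_L)

    where N is the total number of samples, N_t the samples at the current
    node, and N_t_L / N_t_R the samples in the left / right child. A split
    is kept only if this value is at least ``min_impurity_decrease``.
    """
    return N_t / N * (impurity
                      - N_t_R / N_t * impurity_R
                      - N_t_L / N_t * impurity_L)

# A node holding half of 100 samples with impurity 0.5, split into children
# of 20 samples (impurity 0.2) and 30 samples (impurity 0.4):
print(weighted_impurity_decrease(100, 50, 20, 30, 0.5, 0.2, 0.4))  # 0.09
```

Unlike the deprecated ``min_impurity_split``, which thresholded a node's own impurity, this criterion accounts for both the improvement achieved by the split and the fraction of the dataset it affects.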
- In version 0.21, the default behavior of splitters that use the
  ``test_size`` and ``train_size`` parameter will change, such that
  specifying ``train_size`` alone will cause ``test_size`` to be the
  remainder. :issue:`7459` by :user:`Nelson Liu`.

- :class:`multiclass.OneVsRestClassifier` now has ``partial_fit``,
  ``decision_function`` and ``predict_proba`` methods only when the
  underlying estimator does. :issue:`7812` by `Andreas Müller`_ and
  :user:`Mikhail Korobov`.

- :class:`multiclass.OneVsRestClassifier` now has a ``partial_fit`` method
  only if the underlying estimator does. By `Andreas Müller`_.

- The ``decision_function`` output shape for binary classification in
  :class:`multiclass.OneVsRestClassifier` and
  :class:`multiclass.OneVsOneClassifier` is now ``(n_samples,)`` to conform
  to scikit-learn conventions. :issue:`9100` by `Andreas Müller`_.

- The :func:`multioutput.MultiOutputClassifier.predict_proba` function used
  to return a 3d array (``n_samples``, ``n_classes``, ``n_outputs``). In the
  case where different target columns had different numbers of classes, a
  ``ValueError`` would be raised on trying to stack matrices with different
  dimensions. This function now returns a list of arrays where the length of
  the list is ``n_outputs``, and each array is (``n_samples``,
  ``n_classes``) for that particular output. :issue:`8093` by
  :user:`Peter Bull`.
- Replace attribute ``named_steps`` ``dict`` to :class:`utils.Bunch` in
  :class:`pipeline.Pipeline` to enable tab completion in interactive
  environments. In the case of a conflict between a value in ``named_steps``
  and a ``dict`` attribute, ``dict`` behavior will be prioritized.
  :issue:`8481` by :user:`Herilalaina Rakotoarison`.

Miscellaneous

- Deprecate the ``y`` parameter in ``transform`` and ``inverse_transform``.
  The method should not accept the ``y`` parameter, as it's used at
  prediction time. :issue:`8174` by :user:`Tahar Zanouda`,
  `Alexandre Gramfort`_ and `Raghav RV`_.

- SciPy >= 0.13.3 and NumPy >= 1.8.2 are now the minimum supported versions
  for scikit-learn. The following backported functions in
  :mod:`sklearn.utils` have been removed or deprecated accordingly.
  :issue:`8854` and :issue:`8874` by :user:`Naoya Kanai`.

- The ``store_covariances`` and ``covariances_`` parameters of
  :class:`discriminant_analysis.QuadraticDiscriminantAnalysis` have been
  renamed to ``store_covariance`` and ``covariance_`` to be consistent with
  the corresponding parameter names of the
  :class:`discriminant_analysis.LinearDiscriminantAnalysis`. They will be
  removed in version 0.21.
  :issue:`7998` by :user:`Jiacheng`.

Removed in 0.19:

- ``utils.fixes.argpartition``
- ``utils.fixes.array_equal``
- ``utils.fixes.astype``
- ``utils.fixes.bincount``
- ``utils.fixes.expit``
- ``utils.fixes.frombuffer_empty``
- ``utils.fixes.in1d``
- ``utils.fixes.norm``
- ``utils.fixes.rankdata``
- ``utils.fixes.safe_copy``

Deprecated in 0.19, to be removed in 0.21:

- ``utils.arpack.eigs``
- ``utils.arpack.eigsh``
- ``utils.arpack.svds``
- ``utils.extmath.fast_dot``
- ``utils.extmath.logsumexp``
- ``utils.extmath.norm``
- ``utils.extmath.pinvh``
- ``utils.graph.graph_laplacian``
- ``utils.random.choice``
- ``utils.sparsetools.connected_components``
- ``utils.stats.rankdata``

- Estimators with both methods ``decision_function`` and ``predict_proba``
  are now required to have a monotonic relation between them. The method
  ``check_decision_proba_consistency`` has been added in
  **utils.estimator_checks** to check their consistency. :issue:`7578` by
  :user:`Shubham Bhardwaj`.

- All checks in ``utils.estimator_checks``, in particular
  :func:`utils.estimator_checks.check_estimator`, now accept estimator
  instances. Most other checks do not accept estimator classes any more.
  :issue:`9019` by `Andreas Müller`_.

- Ensure that estimators' attributes ending with ``_`` are not set in the
  constructor but only in the ``fit`` method. Most notably, ensemble
  estimators (deriving from `ensemble.BaseEnsemble`) now only have
  ``self.estimators_`` available after ``fit``. :issue:`7464` by
  `Lars Buitinck`_ and `Loic Esteve`_.
Code and Documentation Contributors
-----------------------------------

Thanks to everyone who has contributed to the maintenance and improvement of
the project since version 0.18, including:

Joel Nothman, Loic Esteve, Andreas Mueller, Guillaume Lemaitre, Olivier
Grisel, Hanmin Qin, Raghav RV, Alexandre Gramfort, themrmax, Aman Dalmia,
Gael Varoquaux, Naoya Kanai, Tom Dupré la Tour, Rishikesh, Nelson Liu,
Taehoon Lee, Nelle Varoquaux, Aashil, Mikhail Korobov, Sebastin Santy, Joan
Massich, Roman Yurchak, RAKOTOARISON Herilalaina, Thierry Guillemot,
Alexandre Abadie, Carol Willing, Balakumaran Manoharan, Josh Karnofsky, Vlad
Niculae, Utkarsh Upadhyay, Dmitry Petrov, Minghui Liu, Srivatsan, Vincent
Pham, Albert Thomas, Jake VanderPlas, Attractadore, JC Liu, alexandercbooth,
chkoar, Óscar Nájera, Aarshay Jain, Kyle Gilliam, Ramana Subramanyam, CJ
Carey, Clement Joudet, David Robles, He Chen, Joris Van den Bossche, Karan
Desai, Katie Luangkote, Leland McInnes, Maniteja Nandana, Michele Lacchia,
Sergei Lebedev, Shubham Bhardwaj, akshay0724, omtcyfz, rickiepark,
waterponey, Vathsala Achar, jbDelafosse, Ralf Gommers, Ekaterina Krivich,
Vivek Kumar, Ishank Gulati, Dave Elliott, ldirer, Reiichiro Nakano, Levi
John Wolf, Mathieu Blondel, Sid Kapur, Dougal J. Sutherland, midinas,
mikebenfield, Sourav Singh, Aseem Bansal, Ibraim Ganiev, Stephen Hoover,
AishwaryaRK, Steven C. Howell, Gary Foreman, Neeraj Gangwar, Tahar, Jon
Crall, dokato, Kathy Chen, ferria, Thomas Moreau, Charlie Brummitt, Nicolas
Goix, Adam Kleczewski, Sam Shleifer, Nikita Singh, Basil Beirouti, Giorgio
Patrini, Manoj Kumar, Rafael Possas, James Bourbeau, James A. Bednar, Janine
Harper, Jaye, Jean Helie, Jeremy Steward, Artsiom, John Wei, Jonathan LIgo,
Jonathan Rahn, seanpwilliams, Arthur Mensch, Josh Levy, Julian Kuhlmann,
Julien Aubert, Jörn Hees, Kai, shivamgargsya, Kat Hempstalk, Kaushik
Lakshmikanth,
Kennedy, Kenneth Lyons, Kenneth Myers, Kevin Yap, Kirill Bobyrev, Konstantin Podshumok, Arthur Imbert, Lee Murray, toastedcornflakes, Lera, Li Li, Arthur Douillard, Mainak Jas, tobycheese, Manraj Singh, Manvendra Singh, Marc Meketon, MarcoFalke, Matthew Brett, Matthias Gilch, Mehul Ahuja, Melanie Goetz, Meng, Peng, Michael Dezube, Michal Baumgartner, vibrantabhi19, Artem Golubin, Milen Paskov, Antonin Carette, Morikko, MrMjauh, NALEPA Emmanuel, Namiya, Antoine Wendlinger, Narine Kokhlikyan, NarineK, Nate Guerin, Angus Williams, Ang Lu, Nicole Vavrova, Nitish Pandey, Okhlopkov Daniil Olegovich, Andy Craze, Om Prakash, Parminder Singh, Patrick Carlson, Patrick Pei, Paul Ganssle, Paulo Haddad, Paweł Lorek, Peng Yu, Pete Bachant, Peter Bull, Peter Csizsek, Peter Wang, Pieter Arthur de Jong, Ping-Yao, Chang, Preston Parry, Puneet Mathur, Quentin Hibon, Andrew Smith, Andrew Jackson, 1kastner, Rameshwar Bhaskaran, Rebecca Bilbro, Remi Rampin, Andrea Esuli, Rob Hall, Robert Bradshaw, Romain Brault, Aman Pratik, Ruifeng Zheng, Russell Smith, Sachin Agarwal, Sailesh Choyal, Samson Tan, Samuël Weber, Sarah Brown, Sebastian Pölsterl, Sebastian Raschka, Sebastian Saeger, Alyssa Batula, Abhyuday Pratap Singh, Sergey Feldman, Sergul Aydore, Sharan Yalburgi, willduan, Siddharth Gupta, Sri Krishna, Almer, Stijn Tonk, Allen Riddell, Theofilos Papapanagiotou, Alison, Alexis Mignon, Tommy Boucher, Tommy Löfstedt, Toshihiro Kamishima, Tyler Folkman, Tyler Lanigan, Alexander Junge, Varun Shenoy, Victor Poughon, Vilhelm von Ehrenheim, Aleksandr Sandrovskii, Alan Yee, Vlasios Vasileiou, Warut Vijitbenjaronk,
Yang Zhang, Yaroslav Halchenko, Yichuan Liu, Yuichi Fujikawa, affanv14, aivision2020, xor, andreh7, brady salz, campustrampus, Agamemnon Krasoulis, ditenberg, elena-sharova, filipj8, fukatani, gedeck, guiniol, guoci, hakaa1, hongkahjun, i-am-xhy, jakirkham, jaroslaw-weber, jayzed82, jeroko, jmontoyam, jonathan.striebel, josephsalmon, jschendel, leereeves, martin-hahn, mathurinm, mehak-sachdeva, mlewis1729, mlliou112, mthorrell, ndingwall, nuffe, yangarbiter, plagree, pldtc325, Breno Freitas, Brett Olsen, Brian A. Alfano, Brian Burns, polmauri, Brandon Carter, Charlton Austin, Chayant T15h, Chinmaya Pancholi, Christian Danielsen, Chung Yen, Chyi-Kwei Yau, pravarmahajan, DOHMATOB Elvis, Daniel LeJeune, Daniel Hnyk, Darius Morawiec, David DeTomaso, David Gasquez, David Haberthür, David Heryanto, David Kirkby, David Nicholson, rashchedrin, Deborah Gertrude Digges, Denis Engemann, Devansh D, Dickson, Bob Baxley, Don86, E. Lynch-Klarup, Ed Rogers, Elizabeth Ferriss, Ellen-Co2, Fabian Egli, Fang-Chieh Chou, Bing Tian Dai, Greg Stupp, Grzegorz Szpak, Bertrand Thirion, Hadrien Bertrand, Harizo Rajaona, zxcvbnius, Henry Lin, Holger Peters, Icyblade Dai, Igor Andriushchenko, Ilya, Isaac Laughlin, Iván Vallés, Aurélien Bellet, JPFrancoia, Jacob Schreiber, Asish Mahapatra
https://github.com/scikit-learn/scikit-learn/blob/main//doc/whats_new/v0.19.rst
.. include:: _contributors.rst

.. currentmodule:: sklearn

============
Version 0.18
============

.. warning::

    Scikit-learn 0.18 is the last major release of scikit-learn to support
    Python 2.6. Later versions of scikit-learn will require Python 2.7 or
    above.

.. _changes_0_18_2:

Version 0.18.2
==============

**June 20, 2017**

Changelog
---------

- Fixes for compatibility with NumPy 1.13.0: :issue:`7946` :issue:`8355` by
  `Loic Esteve`_.

- Minor compatibility changes in the examples :issue:`9010` :issue:`8040`
  :issue:`9149`.

Code Contributors
-----------------
Aman Dalmia, Loic Esteve, Nate Guerin, Sergei Lebedev

.. _changes_0_18_1:

Version 0.18.1
==============

**November 11, 2016**

Changelog
---------

Enhancements
............

- Improved ``sample_without_replacement`` speed by utilizing
  numpy.random.permutation for most cases. As a result, samples may differ in
  this release for a fixed random state. Affected estimators:

  - :class:`ensemble.BaggingClassifier`
  - :class:`ensemble.BaggingRegressor`
  - :class:`linear_model.RANSACRegressor`
  - :class:`model_selection.RandomizedSearchCV`
  - :class:`random_projection.SparseRandomProjection`

  This also affects the :meth:`datasets.make_classification` method.

Bug fixes
.........

- Fix issue where ``min_grad_norm`` and ``n_iter_without_progress`` parameters
  were not being utilised by :class:`manifold.TSNE`. :issue:`6497` by
  :user:`Sebastian Säger`

- Fix bug for svm's decision values when ``decision_function_shape`` is
  ``ovr`` in :class:`svm.SVC`. :class:`svm.SVC`'s decision_function was
  incorrect from versions 0.17.0 through 0.18.0. :issue:`7724` by
  `Bing Tian Dai`_

- Attribute ``explained_variance_ratio`` of
  :class:`discriminant_analysis.LinearDiscriminantAnalysis` calculated with
  the SVD and Eigen solvers is now of the same length. :issue:`7632` by
  :user:`JPFrancoia`

- Fixes issue in :ref:`univariate_feature_selection` where score functions
  were not accepting multi-label targets.
  :issue:`7676` by :user:`Mohammed Affan`

- Fixed setting parameters when calling ``fit`` multiple times on
  :class:`feature_selection.SelectFromModel`. :issue:`7756` by
  `Andreas Müller`_

- Fixes issue in ``partial_fit`` method of
  :class:`multiclass.OneVsRestClassifier` when the number of classes used in
  ``partial_fit`` was less than the total number of classes in the data.
  :issue:`7786` by `Srivatsan Ramesh`_

- Fixes issue in :class:`calibration.CalibratedClassifierCV` where the sum of
  probabilities of each class for a sample was not 1, and
  ``CalibratedClassifierCV`` now handles the case where the training set has
  fewer classes than the total data. :issue:`7799` by `Srivatsan Ramesh`_

- Fix a bug where :class:`sklearn.feature_selection.SelectFdr` did not exactly
  implement the Benjamini-Hochberg procedure. It formerly may have selected
  fewer features than it should. :issue:`7490` by :user:`Peng Meng`.

- :class:`sklearn.manifold.LocallyLinearEmbedding` now correctly handles
  integer inputs. :issue:`6282` by `Jake Vanderplas`_.

- The ``min_weight_fraction_leaf`` parameter of tree-based classifiers and
  regressors now assumes uniform sample weights by default if the
  ``sample_weight`` argument is not passed to the ``fit`` function.
  Previously, the parameter was silently ignored. :issue:`7301` by
  :user:`Nelson Liu`.

- Numerical issue with :class:`linear_model.RidgeCV` on centered data when
  `n_features > n_samples`. :issue:`6178` by `Bertrand Thirion`_

- Tree splitting criterion classes' cloning/pickling is now memory safe.
  :issue:`7680` by :user:`Ibraim Ganiev`.

- Fixed a bug where :class:`decomposition.NMF` sets its ``n_iters_`` attribute
  in `transform()`. :issue:`7553` by :user:`Ekaterina Krivich`.

- :class:`sklearn.linear_model.LogisticRegressionCV` now correctly handles
  string labels. :issue:`5874` by `Raghav RV`_.

- Fixed a bug where :func:`sklearn.model_selection.train_test_split` raised an
  error when ``stratify`` is a list of string labels.
  :issue:`7593` by `Raghav RV`_.

- Fixed a bug where :class:`sklearn.model_selection.GridSearchCV` and
  :class:`sklearn.model_selection.RandomizedSearchCV` were not pickleable
  because of a pickling bug in ``np.ma.MaskedArray``. :issue:`7594` by
  `Raghav RV`_.

- All cross-validation utilities in :mod:`sklearn.model_selection` now permit
  one-time cross-validation splitters for the ``cv`` parameter. Also
  non-deterministic cross-validation splitters (where multiple calls to
  ``split`` produce dissimilar splits) can be used as the ``cv`` parameter.
  :class:`sklearn.model_selection.GridSearchCV` will cross-validate each
  parameter setting on the split produced by the first ``split`` call to the
  cross-validation splitter. :issue:`7660` by `Raghav RV`_.

- Fix bug where :meth:`preprocessing.MultiLabelBinarizer.fit_transform`
  returned an invalid CSR matrix. :issue:`7750` by :user:`CJ Carey`.

- Fixed a bug where :func:`metrics.pairwise.cosine_distances` could return a
  small negative distance. :issue:`7732` by :user:`Artsion`.
https://github.com/scikit-learn/scikit-learn/blob/main//doc/whats_new/v0.18.rst
API changes summary
-------------------

Trees and forests

- The ``min_weight_fraction_leaf`` parameter of tree-based classifiers and
  regressors now assumes uniform sample weights by default if the
  ``sample_weight`` argument is not passed to the ``fit`` function.
  Previously, the parameter was silently ignored. :issue:`7301` by
  :user:`Nelson Liu`.

- Tree splitting criterion classes' cloning/pickling is now memory safe.
  :issue:`7680` by :user:`Ibraim Ganiev`.

Linear, kernelized and related models

- Length of ``explained_variance_ratio`` of
  :class:`discriminant_analysis.LinearDiscriminantAnalysis` changed for both
  Eigen and SVD solvers. The attribute now has a length of
  min(n_components, n_classes - 1). :issue:`7632` by :user:`JPFrancoia`

- Numerical issue with :class:`linear_model.RidgeCV` on centered data when
  ``n_features > n_samples``. :issue:`6178` by `Bertrand Thirion`_

.. _changes_0_18:

Version 0.18
============

**September 28, 2016**

.. _model_selection_changes:

Model Selection Enhancements and API Changes
--------------------------------------------

- **The model_selection module**

  The new module :mod:`sklearn.model_selection`, which groups together the
  functionalities of the former `sklearn.cross_validation`,
  `sklearn.grid_search` and `sklearn.learning_curve`, introduces new
  possibilities such as nested cross-validation and better manipulation of
  parameter searches with Pandas. Many things will stay the same but there are
  some key differences. Read below to know more about the changes.

- **Data-independent CV splitters enabling nested cross-validation**

  The new cross-validation splitters, defined in
  :mod:`sklearn.model_selection`, are no longer initialized with any
  data-dependent parameters such as ``y``.
  Instead they expose a `split` method that takes in the data and yields a
  generator for the different splits. This change makes it possible to use the
  cross-validation splitters to perform nested cross-validation, facilitated
  by the :class:`model_selection.GridSearchCV` and
  :class:`model_selection.RandomizedSearchCV` utilities.

- **The enhanced cv_results_ attribute**

  The new ``cv_results_`` attribute (of :class:`model_selection.GridSearchCV`
  and :class:`model_selection.RandomizedSearchCV`) introduced in lieu of the
  ``grid_scores_`` attribute is a dict of 1D arrays with elements in each
  array corresponding to the parameter settings (i.e. search candidates).

  The ``cv_results_`` dict can be easily imported into ``pandas`` as a
  ``DataFrame`` for exploring the search results.

  The ``cv_results_`` arrays include scores for each cross-validation split
  (with keys such as ``'split0_test_score'``), as well as their mean
  (``'mean_test_score'``) and standard deviation (``'std_test_score'``).

  The ranks for the search candidates (based on their mean cross-validation
  score) are available at ``cv_results_['rank_test_score']``.

  The values for each parameter are stored separately as numpy masked object
  arrays. The value for a given search candidate is masked if the
  corresponding parameter is not applicable. Additionally a list of all the
  parameter dicts is stored at ``cv_results_['params']``.

- **Parameters n_folds and n_iter renamed to n_splits**

  Some parameter names have changed: The ``n_folds`` parameter in the new
  :class:`model_selection.KFold`, :class:`model_selection.GroupKFold` (see
  below for the name change), and :class:`model_selection.StratifiedKFold` is
  now renamed to ``n_splits``. The ``n_iter`` parameter in
  :class:`model_selection.ShuffleSplit`, the new class
  :class:`model_selection.GroupShuffleSplit` and
  :class:`model_selection.StratifiedShuffleSplit` is now renamed to
  ``n_splits``.
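The ``cv_results_`` layout described above can be explored directly in pandas. A minimal sketch (the estimator and parameter grid are chosen only for illustration):

```python
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# cv_results_ is a dict of 1D arrays, one entry per search candidate
search = GridSearchCV(DecisionTreeClassifier(random_state=0),
                      param_grid={"max_depth": [2, 4, 8]}, cv=3)
search.fit(X, y)

results = pd.DataFrame(search.cv_results_)
print(results[["param_max_depth", "mean_test_score", "rank_test_score"]])
```

Each row of the resulting ``DataFrame`` corresponds to one parameter setting, making it easy to sort by ``rank_test_score`` or plot mean scores against parameter values.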
- **Rename of splitter classes which accept group labels along with data**

  The cross-validation splitters ``LabelKFold``, ``LabelShuffleSplit``,
  ``LeaveOneLabelOut`` and ``LeavePLabelOut`` have been renamed to
  :class:`model_selection.GroupKFold`,
  :class:`model_selection.GroupShuffleSplit`,
  :class:`model_selection.LeaveOneGroupOut` and
  :class:`model_selection.LeavePGroupsOut` respectively. Note the change from
  singular to plural form in :class:`model_selection.LeavePGroupsOut`.

- **Fit parameter labels renamed to groups**

  The ``labels`` parameter in the `split` method of the newly renamed
  splitters :class:`model_selection.GroupKFold`,
  :class:`model_selection.LeaveOneGroupOut`,
  :class:`model_selection.LeavePGroupsOut` and
  :class:`model_selection.GroupShuffleSplit` is renamed to ``groups``
  following the new nomenclature of their class names.

- **Parameter n_labels renamed to n_groups**

  The parameter ``n_labels`` in the newly renamed
  :class:`model_selection.LeavePGroupsOut` is changed to ``n_groups``.
- **Training scores and Timing information**

  ``cv_results_`` also includes the training scores for each cross-validation
  split (with keys such as ``'split0_train_score'``), as well as their mean
  (``'mean_train_score'``) and standard deviation (``'std_train_score'``). To
  avoid the cost of evaluating training score, set
  ``return_train_score=False``.

  Additionally the mean and standard deviation of the times taken to split,
  train and score the model across all the cross-validation splits are
  available at the keys ``'mean_time'`` and ``'std_time'`` respectively.

Changelog
---------

New features
............

Classifiers and Regressors

- The Gaussian Process module has been reimplemented and now offers
  classification and regression estimators through
  :class:`gaussian_process.GaussianProcessClassifier` and
  :class:`gaussian_process.GaussianProcessRegressor`. Among other things, the
  new implementation supports kernel engineering, gradient-based
  hyperparameter optimization and sampling of functions from the GP prior and
  GP posterior. Extensive documentation and examples are provided. By
  `Jan Hendrik Metzen`_.

- Added new supervised learning algorithm: :ref:`Multi-layer Perceptron `
  :issue:`3204` by :user:`Issam H. Laradji`

- Added :class:`linear_model.HuberRegressor`, a linear model robust to
  outliers. :issue:`5291` by `Manoj Kumar`_.

- Added the :class:`multioutput.MultiOutputRegressor` meta-estimator. It
  converts single output regressors to multi-output regressors by fitting one
  regressor per output. By :user:`Tim Head`.

Other estimators

- New :class:`mixture.GaussianMixture` and
  :class:`mixture.BayesianGaussianMixture` replace former mixture models,
  employing faster inference for sounder results. :issue:`7295` by
  :user:`Wei Xue` and :user:`Thierry Guillemot`.

- Class `decomposition.RandomizedPCA` is now factored into
  :class:`decomposition.PCA` and is available by calling with the parameter
  ``svd_solver='randomized'``.
  The default number of ``n_iter`` for ``'randomized'`` has changed to 4. The
  old behavior of PCA is recovered by ``svd_solver='full'``. An additional
  solver calls ``arpack`` and performs truncated (non-randomized) SVD. By
  default, the best solver is selected depending on the size of the input and
  the number of components requested. :issue:`5299` by
  :user:`Giorgio Patrini`.

- Added two functions for mutual information estimation:
  :func:`feature_selection.mutual_info_classif` and
  :func:`feature_selection.mutual_info_regression`. These functions can be
  used in :class:`feature_selection.SelectKBest` and
  :class:`feature_selection.SelectPercentile` as score functions. By
  :user:`Andrea Bravi` and :user:`Nikolay Mayorov`.

- Added the :class:`ensemble.IsolationForest` class for anomaly detection
  based on random forests. By `Nicolas Goix`_.

- Added ``algorithm="elkan"`` to :class:`cluster.KMeans` implementing Elkan's
  fast K-Means algorithm. By `Andreas Müller`_.

Model selection and evaluation

- Added :func:`metrics.fowlkes_mallows_score`, the Fowlkes Mallows Index which
  measures the similarity of two clusterings of a set of points. By
  :user:`Arnaud Fouchet` and :user:`Thierry Guillemot`.

- Added `metrics.calinski_harabaz_score`, which computes the Calinski and
  Harabaz score to evaluate the resulting clustering of a set of points. By
  :user:`Arnaud Fouchet` and :user:`Thierry Guillemot`.

- Added new cross-validation splitter
  :class:`model_selection.TimeSeriesSplit` to handle time series data.
  :issue:`6586` by :user:`YenChen Lin`

- The cross-validation iterators are replaced by cross-validation splitters
  available from :mod:`sklearn.model_selection`, allowing for nested
  cross-validation. See :ref:`model_selection_changes` for more information.
  :issue:`4294` by `Raghav RV`_.

Enhancements
............

Trees and ensembles

- Added a new splitting criterion for :class:`tree.DecisionTreeRegressor`, the
  mean absolute error.
  This criterion can also be used in :class:`ensemble.ExtraTreesRegressor`,
  :class:`ensemble.RandomForestRegressor`, and the gradient boosting
  estimators. :issue:`6667` by :user:`Nelson Liu`.

- Added weighted impurity-based early stopping criterion for decision tree
  growth. :issue:`6954` by :user:`Nelson Liu`

- The random forest, extra tree and decision tree estimators now have a method
  ``decision_path`` which returns the decision path of samples in the tree. By
  `Arnaud Joly`_.

- A new example has been added unveiling the decision tree structure. By
  `Arnaud Joly`_.

- Random forest, extra trees, decision trees and gradient boosting estimators
  accept the parameters ``min_samples_split`` and ``min_samples_leaf``
  provided as a percentage of the training samples. By :user:`yelite` and
  `Arnaud Joly`_.

- Gradient boosting estimators accept the parameter ``criterion`` to specify
  the splitting criterion used in built decision trees. :issue:`6667` by
  :user:`Nelson Liu`.
- The memory footprint is reduced (sometimes greatly) for
  `ensemble.bagging.BaseBagging` and classes that inherit from it, i.e.,
  :class:`ensemble.BaggingClassifier`, :class:`ensemble.BaggingRegressor`, and
  :class:`ensemble.IsolationForest`, by dynamically generating the attribute
  ``estimators_samples_`` only when it is needed. By :user:`David Staub`.

- Added ``n_jobs`` and ``sample_weight`` parameters for
  :class:`ensemble.VotingClassifier` to fit underlying estimators in parallel.
  :issue:`5805` by :user:`Ibraim Ganiev`.

Linear, kernelized and related models

- In :class:`linear_model.LogisticRegression`, the SAG solver is now available
  in the multinomial case. :issue:`5251` by `Tom Dupre la Tour`_.

- :class:`linear_model.RANSACRegressor`, :class:`svm.LinearSVC` and
  :class:`svm.LinearSVR` now support ``sample_weight``. By :user:`Imaculate`.

- Add parameter ``loss`` to :class:`linear_model.RANSACRegressor` to measure
  the error on the samples for every trial. By `Manoj Kumar`_.

- Prediction of out-of-sample events with Isotonic Regression
  (:class:`isotonic.IsotonicRegression`) is now much faster (over 1000x in
  tests with synthetic data). By :user:`Jonathan Arfa`.

- Isotonic regression (:class:`isotonic.IsotonicRegression`) now uses a better
  algorithm to avoid `O(n^2)` behavior in pathological cases, and is also
  generally faster (:issue:`6691`). By `Antony Lee`_.

- :class:`naive_bayes.GaussianNB` now accepts data-independent class-priors
  through the parameter ``priors``. By :user:`Guillaume Lemaitre`.

- :class:`linear_model.ElasticNet` and :class:`linear_model.Lasso` now work
  with ``np.float32`` input data without converting it into ``np.float64``.
  This allows reducing the memory consumption.
  :issue:`6913` by :user:`YenChen Lin`.

- :class:`semi_supervised.LabelPropagation` and
  :class:`semi_supervised.LabelSpreading` now accept arbitrary kernel
  functions in addition to the strings ``knn`` and ``rbf``. :issue:`5762` by
  :user:`Utkarsh Upadhyay`.

Decomposition, manifold learning and clustering

- Added ``inverse_transform`` function to :class:`decomposition.NMF` to
  compute the data matrix of original shape. By :user:`Anish Shah`.

- :class:`cluster.KMeans` and :class:`cluster.MiniBatchKMeans` now work with
  ``np.float32`` and ``np.float64`` input data without converting it. This
  allows reducing the memory consumption by using ``np.float32``.
  :issue:`6846` by :user:`Sebastian Säger` and :user:`YenChen Lin`.

Preprocessing and feature selection

- :class:`preprocessing.RobustScaler` now accepts the ``quantile_range``
  parameter. :issue:`5929` by :user:`Konstantin Podshumok`.

- :class:`feature_extraction.FeatureHasher` now accepts string values.
  :issue:`6173` by :user:`Ryad Zenine` and :user:`Devashish Deshpande`.

- Keyword arguments can now be supplied to ``func`` in
  :class:`preprocessing.FunctionTransformer` by means of the ``kw_args``
  parameter. By `Brian McFee`_.

- :class:`feature_selection.SelectKBest` and
  :class:`feature_selection.SelectPercentile` now accept score functions that
  take X, y as input and return only the scores. By :user:`Nikolay Mayorov`.

Model evaluation and meta-estimators

- :class:`multiclass.OneVsOneClassifier` and
  :class:`multiclass.OneVsRestClassifier` now support ``partial_fit``. By
  :user:`Asish Panda` and :user:`Philipp Dowling`.

- Added support for substituting or disabling :class:`pipeline.Pipeline` and
  :class:`pipeline.FeatureUnion` components using the ``set_params`` interface
  that powers `sklearn.grid_search`. See
  :ref:`sphx_glr_auto_examples_compose_plot_compare_reduction.py`
  By `Joel Nothman`_ and :user:`Robert McGibbon`.
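The ``kw_args`` parameter of :class:`preprocessing.FunctionTransformer` mentioned above can be sketched as follows; the ``shift`` function and its ``offset`` argument are illustrative, not part of scikit-learn:

```python
import numpy as np
from sklearn.preprocessing import FunctionTransformer

def shift(X, offset=0.0):
    # a plain function; extra arguments arrive via kw_args
    return X + offset

ft = FunctionTransformer(func=shift, kw_args={"offset": 10.0})
out = ft.fit_transform(np.array([[1.0], [2.0]]))
print(out)  # each value shifted by 10.0
```

Because ``kw_args`` is an estimator parameter, it can also be tuned with ``set_params`` or a grid search like any other hyperparameter.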
- The new ``cv_results_`` attribute of :class:`model_selection.GridSearchCV`
  (and :class:`model_selection.RandomizedSearchCV`) can be easily imported
  into pandas as a ``DataFrame``. Ref :ref:`model_selection_changes` for more
  information. :issue:`6697` by `Raghav RV`_.

- Generalization of :func:`model_selection.cross_val_predict`. One can pass
  method names such as `predict_proba` to be used in the cross validation
  framework instead of the default `predict`. By :user:`Ori Ziv` and
  :user:`Sears Merritt`.

- The training scores and time taken for training followed by scoring for each
  search candidate are now available in the ``cv_results_`` dict. See
  :ref:`model_selection_changes` for more information. :issue:`7325` by
  :user:`Eugene Chen` and `Raghav RV`_.

Metrics

- Added ``labels`` flag to :class:`metrics.log_loss` to explicitly provide the
  labels when the number of classes in ``y_true`` and ``y_pred`` differ.
  :issue:`7239` by :user:`Hong Guangguo` with help from :user:`Mads Jensen`
  and :user:`Nelson Liu`.

- Support sparse contingency matrices in cluster evaluation
  (`metrics.cluster.supervised`) to scale to a large number of clusters.
  :issue:`7419` by :user:`Gregory Stupp` and `Joel Nothman`_.

- Add ``sample_weight`` parameter to :func:`metrics.matthews_corrcoef`. By
  :user:`Jatin Shah` and `Raghav RV`_.
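The generalized :func:`model_selection.cross_val_predict` described above can be sketched as follows (the estimator choice is only for illustration):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

X, y = load_iris(return_X_y=True)

# request out-of-fold class probabilities instead of hard predictions
proba = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                          cv=5, method="predict_proba")
print(proba.shape)  # one row per sample, one column per class
```

Each row is produced by a model that never saw that sample during training, so the probabilities are suitable for unbiased downstream evaluation such as log loss or calibration curves.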
- Speed up :func:`metrics.silhouette_score` by using vectorized operations.
  By `Manoj Kumar`_.

- Add ``sample_weight`` parameter to :func:`metrics.confusion_matrix`. By
  :user:`Bernardo Stein`.

Miscellaneous

- Added ``n_jobs`` parameter to :class:`feature_selection.RFECV` to compute
  the score on the test folds in parallel. By `Manoj Kumar`_

- Codebase does not contain C/C++ cython generated files: they are generated
  during build. Distribution packages will still contain generated C/C++
  files. By :user:`Arthur Mensch`.

- Reduce the memory usage for 32-bit float input arrays of
  `utils.sparse_func.mean_variance_axis` and
  `utils.sparse_func.incr_mean_variance_axis` by supporting cython fused
  types. By :user:`YenChen Lin`.

- The `ignore_warnings` now accepts a category argument to ignore only the
  warnings of a specified type. By :user:`Thierry Guillemot`.

- Added parameter ``return_X_y`` and return type ``(data, target) : tuple``
  option to :func:`datasets.load_iris` dataset :issue:`7049`,
  :func:`datasets.load_breast_cancer` dataset :issue:`7152`,
  :func:`datasets.load_digits` dataset, :func:`datasets.load_diabetes`
  dataset, :func:`datasets.load_linnerud` dataset, `datasets.load_boston`
  dataset :issue:`7154` by :user:`Manvendra Singh`.

- Simplification of the ``clone`` function, deprecate support for estimators
  that modify parameters in ``__init__``. :issue:`5540` by `Andreas Müller`_.
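The ``return_X_y`` option added to the dataset loaders above can be sketched as:

```python
from sklearn.datasets import load_iris

# classic interface: a Bunch object with .data and .target attributes
bunch = load_iris()
X, y = bunch.data, bunch.target

# new in 0.18: unpack directly as a (data, target) tuple
data, target = load_iris(return_X_y=True)
print(data.shape, target.shape)
```

This avoids the intermediate ``Bunch`` object when only the arrays are needed, e.g. when feeding a loader's output straight into ``fit``.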
- When unpickling a scikit-learn estimator in a different version than the one
  the estimator was trained with, a ``UserWarning`` is raised, see
  :ref:`the documentation on model persistence ` for more details.
  (:issue:`7248`) By `Andreas Müller`_.

Bug fixes
.........

Trees and ensembles

- Random forest, extra trees, decision trees and gradient boosting no longer
  accept ``min_samples_split=1`` as at least 2 samples are required to split a
  decision tree node. By `Arnaud Joly`_

- :class:`ensemble.VotingClassifier` now raises ``NotFittedError`` if
  ``predict``, ``transform`` or ``predict_proba`` are called on the non-fitted
  estimator. By `Sebastian Raschka`_.

- Fix bug where :class:`ensemble.AdaBoostClassifier` and
  :class:`ensemble.AdaBoostRegressor` would perform poorly if the
  ``random_state`` was fixed (:issue:`7411`). By `Joel Nothman`_.

- Fix bug in ensembles with randomization where the ensemble would not set
  ``random_state`` on base estimators in a pipeline or similar nesting
  (:issue:`7411`). Note, results for :class:`ensemble.BaggingClassifier`,
  :class:`ensemble.BaggingRegressor`, :class:`ensemble.AdaBoostClassifier` and
  :class:`ensemble.AdaBoostRegressor` will now differ from previous versions.
  By `Joel Nothman`_.

Linear, kernelized and related models

- Fixed incorrect gradient computation for
  ``loss='squared_epsilon_insensitive'`` in
  :class:`linear_model.SGDClassifier` and :class:`linear_model.SGDRegressor`
  (:issue:`6764`). By :user:`Wenhua Yang`.

- Fix bug in :class:`linear_model.LogisticRegressionCV` where
  ``solver='liblinear'`` did not accept ``class_weights='balanced'``
  (:issue:`6817`). By `Tom Dupre la Tour`_.

- Fix bug in :class:`neighbors.RadiusNeighborsClassifier` where an error
  occurred when there were outliers being labelled and a weight function
  specified (:issue:`6902`). By `LeonieBorne`_.

- Fix :class:`linear_model.ElasticNet` sparse decision function to match
  output with dense in the multioutput case.
Decomposition, manifold learning and clustering

- `decomposition.RandomizedPCA` default number of `iterated_power` is 4
  instead of 3. :issue:`5141` by :user:`Giorgio Patrini`.

- :func:`utils.extmath.randomized_svd` performs 4 power iterations by default,
  instead of 0. In practice this is enough for obtaining a good approximation
  of the true eigenvalues/vectors in the presence of noise. When
  `n_components` is small (``< .1 * min(X.shape)``) `n_iter` is set to 7,
  unless the user specifies a higher number. This improves precision with few
  components. :issue:`5299` by :user:`Giorgio Patrini`.

- Whiten/non-whiten inconsistency between components of
  :class:`decomposition.PCA` and `decomposition.RandomizedPCA` (now factored
  into PCA, see the New features) is fixed. `components_` are stored with no
  whitening. :issue:`5299` by :user:`Giorgio Patrini`.

- Fixed bug in :func:`manifold.spectral_embedding` where diagonal of
  unnormalized Laplacian matrix was incorrectly set to 1. :issue:`4995` by
  :user:`Peter Fischer`.

- Fixed incorrect initialization of `utils.arpack.eigsh` on all occurrences.
  Affects `cluster.bicluster.SpectralBiclustering`,
  :class:`decomposition.KernelPCA`, :class:`manifold.LocallyLinearEmbedding`,
  and :class:`manifold.SpectralEmbedding` (:issue:`5012`). By
  :user:`Peter Fischer`.
- Attribute ``explained_variance_ratio_`` calculated with the SVD solver of
  :class:`discriminant_analysis.LinearDiscriminantAnalysis` now returns
  correct results. By :user:`JPFrancoia`

Preprocessing and feature selection

- `preprocessing.data._transform_selected` now always passes a copy of ``X``
  to the transform function when ``copy=True`` (:issue:`7194`). By
  `Caio Oliveira`_.

Model evaluation and meta-estimators

- :class:`model_selection.StratifiedKFold` now raises an error if all
  n_labels for individual classes are less than n_folds. :issue:`6182` by
  :user:`Devashish Deshpande`.

- Fixed bug in :class:`model_selection.StratifiedShuffleSplit` where train and
  test samples could overlap in some edge cases, see :issue:`6121` for more
  details. By `Loic Esteve`_.

- Fix in :class:`sklearn.model_selection.StratifiedShuffleSplit` to return
  splits of size ``train_size`` and ``test_size`` in all cases
  (:issue:`6472`). By `Andreas Müller`_.

- Cross-validation of :class:`multiclass.OneVsOneClassifier` and
  :class:`multiclass.OneVsRestClassifier` now works with precomputed kernels.
  :issue:`7350` by :user:`Russell Smith`.

- Fix incomplete ``predict_proba`` method delegation from
  :class:`model_selection.GridSearchCV` to :class:`linear_model.SGDClassifier`
  (:issue:`7159`) by `Yichuan Liu`_.

Metrics

- Fix bug in :func:`metrics.silhouette_score` in which clusters of size 1 were
  incorrectly scored. They should get a score of 0. By `Joel Nothman`_.
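The singleton-cluster behavior in the silhouette fix above can be sketched with per-sample scores (the toy data is illustrative only):

```python
import numpy as np
from sklearn.metrics import silhouette_samples

X = np.array([[0.0], [0.2], [5.0], [5.2], [90.0]])
labels = np.array([0, 0, 1, 1, 2])  # label 2 is a singleton cluster

scores = silhouette_samples(X, labels)
print(scores)  # the singleton sample is scored 0, not an arbitrary value
```

Assigning 0 to singletons is the neutral choice: a cluster of one point has no intra-cluster distance, so neither cohesion nor separation is defined for it.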
- Fix bug in :func:`metrics.silhouette_samples` so that it now works with
  arbitrary labels, not just those ranging from 0 to n_clusters - 1.

- Fix bug where expected and adjusted mutual information were incorrect if
  cluster contingency cells exceeded ``2**16``. By `Joel Nothman`_.

- :func:`metrics.pairwise_distances` now converts arrays to boolean arrays
  when required in ``scipy.spatial.distance``. :issue:`5460` by
  `Tom Dupre la Tour`_.

- Fix sparse input support in :func:`metrics.silhouette_score` as well as the
  example examples/text/document_clustering.py. By :user:`YenChen Lin`.

- :func:`metrics.roc_curve` and :func:`metrics.precision_recall_curve` no
  longer round ``y_score`` values when creating ROC curves; this was causing
  problems for users with very small differences in scores (:issue:`7353`).

Miscellaneous

- `model_selection.tests._search._check_param_grid` now works correctly with
  all types that extend/implement `Sequence` (except string), including range
  (Python 3.x) and xrange (Python 2.x). :issue:`7323` by Viacheslav
  Kovalevskyi.

- :func:`utils.extmath.randomized_range_finder` is more numerically stable
  when many power iterations are requested, since it applies LU normalization
  by default. If ``n_iter<2`` numerical issues are unlikely, thus no
  normalization is applied. Other normalization options are available:
  ``'none'``, ``'LU'`` and ``'QR'``. :issue:`5141` by
  :user:`Giorgio Patrini`.

- Fix a bug where some formats of ``scipy.sparse`` matrix, and estimators with
  them as parameters, could not be passed to :func:`base.clone`. By
  `Loic Esteve`_.

- :func:`datasets.load_svmlight_file` is now able to read long int QID values.
  :issue:`7101` by :user:`Ibraim Ganiev`.

API changes summary
-------------------

Linear, kernelized and related models

- ``residual_metric`` has been deprecated in
  :class:`linear_model.RANSACRegressor`. Use ``loss`` instead. By
  `Manoj Kumar`_.
- Access to public attributes ``.X_`` and ``.y_`` has been deprecated in
  :class:`isotonic.IsotonicRegression`. By :user:`Jonathan Arfa`.

Decomposition, manifold learning and clustering

- The old `mixture.DPGMM` is deprecated in favor of the new
  :class:`mixture.BayesianGaussianMixture` (with the parameter
  ``weight_concentration_prior_type='dirichlet_process'``). The new class
  solves the computational problems of the old class and computes the Gaussian
  mixture with a Dirichlet process prior faster than before. :issue:`7295` by
  :user:`Wei Xue` and :user:`Thierry Guillemot`.

- The old `mixture.VBGMM` is deprecated in favor of the new
  :class:`mixture.BayesianGaussianMixture` (with the parameter
  ``weight_concentration_prior_type='dirichlet_distribution'``). The new class
  solves the computational problems of the old class and computes the
  variational Bayesian Gaussian mixture faster than before. :issue:`6651` by
  :user:`Wei Xue` and :user:`Thierry Guillemot`.

- The old `mixture.GMM` is deprecated in favor of the new
  :class:`mixture.GaussianMixture`. The new class computes the Gaussian
  mixture faster than before, and some of the computational problems of the
  old class have been solved. :issue:`6666` by :user:`Wei Xue` and
  :user:`Thierry Guillemot`.

Model evaluation and meta-estimators
- The `sklearn.cross_validation`, `sklearn.grid_search` and
  `sklearn.learning_curve` modules have been deprecated and their classes and
  functions have been reorganized into the :mod:`sklearn.model_selection`
  module. Ref :ref:`model_selection_changes` for more information.
  :issue:`4294` by `Raghav RV`_.

- The ``grid_scores_`` attribute of :class:`model_selection.GridSearchCV` and
  :class:`model_selection.RandomizedSearchCV` is deprecated in favor of the
  attribute ``cv_results_``. Ref :ref:`model_selection_changes` for more
  information. :issue:`6697` by `Raghav RV`_.

- The parameters ``n_iter`` or ``n_folds`` in old CV splitters are replaced by
  the new parameter ``n_splits`` since it can provide a consistent and
  unambiguous interface to represent the number of train-test splits.
  :issue:`7187` by :user:`YenChen Lin`.

- The ``classes`` parameter was renamed to ``labels`` in
  :func:`metrics.hamming_loss`. :issue:`7260` by :user:`Sebastián Vanrell`.

- The splitter classes ``LabelKFold``, ``LabelShuffleSplit``,
  ``LeaveOneLabelOut`` and ``LeavePLabelsOut`` are renamed to
  :class:`model_selection.GroupKFold`,
  :class:`model_selection.GroupShuffleSplit`,
  :class:`model_selection.LeaveOneGroupOut` and
  :class:`model_selection.LeavePGroupsOut` respectively. Also the parameter
  ``labels`` in the `split` method of the newly renamed splitters
  :class:`model_selection.LeaveOneGroupOut` and
  :class:`model_selection.LeavePGroupsOut` is renamed to ``groups``.
  Additionally in :class:`model_selection.LeavePGroupsOut`, the parameter
  ``n_labels`` is renamed to ``n_groups``. :issue:`6660` by `Raghav RV`_.
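The renamed group-based splitters keep the old label-splitter semantics: samples sharing a group never appear on both the train and test side of a split. A minimal sketch of the renamed API (the array values are made up for illustration):

```python
import numpy as np
from sklearn.model_selection import GroupKFold  # formerly LabelKFold

X = np.arange(12, dtype=float).reshape(6, 2)
y = np.array([0, 1, 0, 1, 0, 1])
groups = np.array([0, 0, 1, 1, 2, 2])  # `labels` was renamed to `groups`

gkf = GroupKFold(n_splits=3)
for train_idx, test_idx in gkf.split(X, y, groups=groups):
    # No group straddles the train/test boundary.
    assert set(groups[train_idx]).isdisjoint(groups[test_idx])
    print(train_idx, test_idx)
```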
- Error and loss names for ``scoring`` parameters are now prefixed by
  ``'neg_'``, such as ``neg_mean_squared_error``. The unprefixed versions are
  deprecated and will be removed in version 0.20. :issue:`7261` by
  :user:`Tim Head`.

Code Contributors
-----------------

Aditya Joshi, Alejandro, Alexander Fabisch, Alexander Loginov, Alexander
Minyushkin, Alexander Rudy, Alexandre Abadie, Alexandre Abraham, Alexandre
Gramfort, Alexandre Saint, alexfields, Alvaro Ulloa, alyssaq, Amlan Kar,
Andreas Mueller, andrew giessel, Andrew Jackson, Andrew McCulloh, Andrew
Murray, Anish Shah, Arafat, Archit Sharma, Ariel Rokem, Arnaud Joly, Arnaud
Rachez, Arthur Mensch, Ash Hoover, asnt, b0noI, Behzad Tabibian, Bernardo,
Bernhard Kratzwald, Bhargav Mangipudi, blakeflei, Boyuan Deng, Brandon Carter,
Brett Naul, Brian McFee, Caio Oliveira, Camilo Lamus, Carol Willing, Cass,
CeShine Lee, Charles Truong, Chyi-Kwei Yau, CJ Carey, codevig, Colin Ni, Dan
Shiebler, Daniel, Daniel Hnyk, David Ellis, David Nicholson, David Staub,
David Thaler, David Warshaw, Davide Lasagna, Deborah, definitelyuncertain,
Didi Bar-Zev, djipey, dsquareindia, edwinENSAE, Elias Kuthe, Elvis DOHMATOB,
Ethan White, Fabian Pedregosa, Fabio Ticconi, fisache, Florian Wilhelm,
Francis, Francis O'Donovan, Gael Varoquaux, Ganiev Ibraim, ghg, Gilles Louppe,
Giorgio Patrini, Giovanni Cherubin, Giovanni Lanzani, Glenn Qian, Gordon Mohr,
govin-vatsan, Graham Clenaghan, Greg Reda, Greg Stupp, Guillaume Lemaitre,
Gustav Mörtberg, halwai, Harizo Rajaona, Harry Mavroforakis, hashcode55,
hdmetor, Henry Lin, Hobson Lane, Hugo Bowne-Anderson, Igor Andriushchenko,
Imaculate, Inki Hwang, Isaac Sijaranamual, Ishank Gulati, Issam Laradji, Iver
Jordal, jackmartin, Jacob Schreiber, Jake Vanderplas, James Fiedler, James
Routley, Jan Zikes, Janna Brettingen, jarfa, Jason Laska, jblackburne, jeff
levesque, Jeffrey Blackburne, Jeffrey04, Jeremy Hintz, jeremynixon, Jeroen,
Jessica Yung, Jill-Jênn Vie, Jimmy Jia, Jiyuan Qian, Joel Nothman,
johannah, John, John Boersma, John Kirkham, John Moeller, jonathan.striebel, joncrall, Jordi, Joseph Munoz, Joshua Cook, JPFrancoia, jrfiedler, JulianKahnert, juliathebrave, kaichogami, KamalakerDadi, Kenneth Lyons, Kevin Wang, kingjr, kjell, Konstantin Podshumok, Kornel Kielczewski, Krishna Kalyan, krishnakalyan3, Kvle Putnam, Kyle Jackson, Lars Buitinck, ldavid, LeiG, LeightonZhang, Leland McInnes, Liang-Chi Hsieh, Lilian Besson, lizsz, Loic Esteve, Louis Tiao, Léonie Borne, Mads Jensen, Maniteja Nandana, Manoj Kumar, Manvendra Singh, Marco, Mario Krell, Mark Bao, Mark Szepieniec, Martin Madsen, MartinBpr, MaryanMorel, Massil, Matheus, Mathieu Blondel, Mathieu Dubois, Matteo, Matthias Ekman, Max Moroz, Michael Scherer, michiaki ariga, Mikhail Korobov, Moussa Taifi, mrandrewandrade, Mridul Seth, nadya-p, Naoya Kanai, Nate George, Nelle Varoquaux, Nelson Liu, Nick James, NickleDave, Nico, Nicolas Goix, Nikolay Mayorov, ningchi, nlathia, okbalefthanded, Okhlopkov, Olivier Grisel, Panos Louridas, Paul Strickland, Perrine Letellier, pestrickland, Peter Fischer, Pieter, Ping-Yao, Chang, practicalswift, Preston Parry, Qimu Zheng, Rachit Kansal,
Raghav RV, Ralf Gommers, Ramana.S, Rammig, Randy Olson, Rob Alexander, Robert Lutz, Robin Schucker, Rohan Jain, Ruifeng Zheng, Ryan Yu, Rémy Léone, saihttam, Saiwing Yeung, Sam Shleifer, Samuel St-Jean, Sartaj Singh, Sasank Chilamkurthy, saurabh.bansod, Scott Andrews, Scott Lowe, seales, Sebastian Raschka, Sebastian Saeger, Sebastián Vanrell, Sergei Lebedev, shagun Sodhani, shanmuga cv, Shashank Shekhar, shawpan, shengxiduan, Shota, shuckle16, Skipper Seabold, sklearn-ci, SmedbergM, srvanrell, Sébastien Lerique, Taranjeet, themrmax, Thierry, Thierry Guillemot, Thomas, Thomas Hallock, Thomas Moreau, Tim Head, tKammy, toastedcornflakes, Tom, TomDLT, Toshihiro Kamishima, tracer0tong, Trent Hauck, trevorstephens, Tue Vo, Varun, Varun Jewalikar, Viacheslav, Vighnesh Birodkar, Vikram, Villu Ruusmann, Vinayak Mehta, walter, waterponey, Wenhua Yang, Wenjian Huang, Will Welch, wyseguy7, xyguo, yanlend, Yaroslav Halchenko, yelite, Yen, YenChenLin, Yichuan Liu, Yoav Ram, Yoshiki, Zheng RuiFeng, zivori, Óscar Nájera
.. include:: _contributors.rst

.. currentmodule:: sklearn

.. _release_notes_1_7:

===========
Version 1.7
===========

For a short description of the main highlights of the release, please refer to
:ref:`sphx_glr_auto_examples_release_highlights_plot_release_highlights_1_7_0.py`.

.. include:: changelog_legend.inc

.. towncrier release notes start

.. _changes_1_7_2:

Version 1.7.2
=============

**September 2025**

:mod:`sklearn.compose`
----------------------

- |Fix| :class:`compose.TransformedTargetRegressor` now passes the transformed
  target to the regressor with the same number of dimensions as the original
  target. By :user:`kryggird`. :pr:`31563`

:mod:`sklearn.feature_extraction`
---------------------------------

- |Fix| Set the tag `requires_fit=False` for the classes
  :class:`feature_extraction.FeatureHasher` and
  :class:`feature_extraction.text.HashingVectorizer`. By
  :user:`hakan çanakcı`. :pr:`31851`

:mod:`sklearn.impute`
---------------------

- |Fix| Fixed a bug in :class:`impute.SimpleImputer` with
  `strategy="most_frequent"` when there is a tie in the most frequent value
  and the input data has mixed types. By :user:`Alexandre Abraham`.
  :pr:`31820`

:mod:`sklearn.linear_model`
---------------------------

- |Fix| Fixed a bug with `solver="newton-cholesky"` on multi-class problems in
  :class:`linear_model.LogisticRegressionCV` and in
  :class:`linear_model.LogisticRegression` when used with `warm_start=True`.
  The bug appeared either with `fit_intercept=True` or with `penalty=None`
  (both resulting in unpenalized parameters for the solver). The coefficients
  and intercepts of the last class as provided by warm start were partially
  wrongly overwritten by zero. By :user:`Christian Lorentzen`.
  :pr:`31866`

:mod:`sklearn.pipeline`
-----------------------

- |Fix| :class:`pipeline.FeatureUnion` now validates that all transformers
  return 2D outputs and raises an informative error when transformers return
  1D outputs, preventing silent failures that previously produced meaningless
  concatenated results. By :user:`gguiomar`. :pr:`31559`

.. _changes_1_7_1:

Version 1.7.1
=============

**July 2025**

:mod:`sklearn.base`
-------------------

- |Fix| Fix regression in the HTML representation when detecting non-default
  parameters that were of array-like types. By :user:`Dea María Léon`.
  :pr:`31528`

:mod:`sklearn.compose`
----------------------

- |Fix| :class:`compose.ColumnTransformer` now correctly preserves a
  non-default index when mixing pandas Series and DataFrames. By
  :user:`Nicolas Bolle`. :pr:`31079`

:mod:`sklearn.datasets`
-----------------------

- |Fix| Fixed a regression preventing extraction of the downloaded dataset in
  :func:`datasets.fetch_20newsgroups`,
  :func:`datasets.fetch_20newsgroups_vectorized`,
  :func:`datasets.fetch_lfw_people` and :func:`datasets.fetch_lfw_pairs`.
  This only affects Python versions `>=3.10.0,<=3.10.11` and
  `>=3.11.0,<=3.11.3`. By :user:`Jérémie du Boisberranger`. :pr:`31685`

:mod:`sklearn.inspection`
-------------------------

- |Fix| Fix multiple issues in the multiclass setting of
  :class:`inspection.DecisionBoundaryDisplay`:

  - `contour` plotting now correctly shows the decision boundary.
  - `cmap` and `colors` are now properly ignored in favor of
    `multiclass_colors`.
  - Linear segmented colormaps are now fully supported.

  By :user:`Yunjie Lin`. :pr:`31553`

:mod:`sklearn.naive_bayes`
--------------------------

- |Fix| :class:`naive_bayes.CategoricalNB` now correctly declares that it
  accepts categorical features in the tags returned by its
  `__sklearn_tags__` method.
  By :user:`Olivier Grisel`. :pr:`31556`

:mod:`sklearn.utils`
--------------------

- |Fix| Fixed a spurious warning (about the number of unique classes being
  greater than 50% of the number of samples) that could occur when passing
  `classes` to :func:`utils.multiclass.type_of_target`. By
  :user:`Sascha D. Krauss`. :pr:`31584`

.. _changes_1_7_0:

Version 1.7.0
=============

**June 2025**

Changed models
--------------

- |Fix| Change the `ConvergenceWarning` message of estimators that rely on the
  `"lbfgs"` optimizer internally to be more informative and to avoid
  suggesting to increase the maximum number of iterations when it is not
  user-settable or when the convergence problem happens before reaching it.
  By :user:`Olivier Grisel`. :pr:`31316`

Changes impacting many modules
------------------------------

- Sparse update: As part of the SciPy change from spmatrix to sparray, all
  internal use of sparse now supports both sparray and spmatrix. All
  manipulations of sparse objects should work for either spmatrix or sparray.
  This is pass 1 of a migration toward sparray (see `SciPy migration to
  sparray `_). By :user:`Dan Schult`. :pr:`30858`

Support for Array API
---------------------

Additional estimators and functions have been updated to include support for
all `Array API `_ compliant inputs. See :ref:`array_api` for more details.

- |Feature| :func:`sklearn.utils.check_consistent_length` now supports Array
  API compatible inputs. By :user:`Stefanie Senger`. :pr:`29519`

- |Feature| :func:`sklearn.metrics.explained_variance_score` and
  :func:`sklearn.metrics.mean_pinball_loss` now support Array API compatible
  inputs. By :user:`Virgil Chan`. :pr:`29978`

- |Feature| :func:`sklearn.metrics.fbeta_score`,
  :func:`sklearn.metrics.precision_score` and
  :func:`sklearn.metrics.recall_score` now support Array