nielsgl committed
Commit 920c7b5 · Parent: c100c37

update project

Files changed (1): app.py (+3, -2)
app.py CHANGED
@@ -8,8 +8,9 @@ from sklearn.pipeline import make_pipeline
 from sklearn.svm import OneClassSVM
 
 md_description = """
-This example shows how to approximate the solution of [sklearn.svm.OneClassSVM](https://scikit-learn.org/stable/modules/generated/sklearn.svm.OneClassSVM.html#sklearn.svm.OneClassSVM) in the case of an RBF kernel with [sklearn.linear_model.SGDOneClassSVM](https://scikit-learn.org/stable/modules/generated/sklearn.svm.OneClassSVM.html#sklearn.svm.OneClassSVM), a Stochastic Gradient Descent (SGD) version of the One-Class SVM. A kernel approximation is first used in order to apply [sklearn.linear_model.SGDOneClassSVM](https://scikit-learn.org/stable/modules/generated/sklearn.svm.OneClassSVM.html#sklearn.svm.OneClassSVM) which implements a linear One-Class SVM using SGD.
-Note that [sklearn.linear_model.SGDOneClassSVM](https://scikit-learn.org/stable/modules/generated/sklearn.svm.OneClassSVM.html#sklearn.svm.OneClassSVM) scales linearly with the number of samples whereas the complexity of a kernelized [sklearn.svm.OneClassSVM](https://scikit-learn.org/stable/modules/generated/sklearn.svm.OneClassSVM.html#sklearn.svm.OneClassSVM) is at best quadratic with respect to the number of samples. It is not the purpose of this example to illustrate the benefits of such an approximation in terms of computation time but rather to show that we obtain similar results on a toy dataset.
+This example shows how to approximate the solution of [sklearn.svm.OneClassSVM](https://scikit-learn.org/stable/modules/generated/sklearn.svm.OneClassSVM.html#sklearn.svm.OneClassSVM) in the case of an RBF kernel with [sklearn.linear_model.SGDOneClassSVM](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.SGDOneClassSVM.html#sklearn.linear_model.SGDOneClassSVM), a Stochastic Gradient Descent (SGD) version of the One-Class SVM. A kernel approximation is first used in order to apply [sklearn.linear_model.SGDOneClassSVM](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.SGDOneClassSVM.html#sklearn.linear_model.SGDOneClassSVM) which implements a linear One-Class SVM using SGD.
+
+Note that [sklearn.linear_model.SGDOneClassSVM](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.SGDOneClassSVM.html#sklearn.linear_model.SGDOneClassSVM) scales linearly with the number of samples whereas the complexity of a kernelized [sklearn.svm.OneClassSVM](https://scikit-learn.org/stable/modules/generated/sklearn.svm.OneClassSVM.html#sklearn.svm.OneClassSVM) is at best quadratic with respect to the number of samples. It is not the purpose of this example to illustrate the benefits of such an approximation in terms of computation time but rather to show that we obtain similar results on a toy dataset.
 """
 
 font = {"weight": "normal", "size": 15}