y1450 committed on
Commit
8384124
1 Parent(s): 5eea6f9

pushing files to the repo from the example!

Files changed (4)
  1. README.md +219 -0
  2. config.json +196 -0
  3. plot_hf_hub.py +172 -0
  4. skops-bo_9fb88.pkl +3 -0
README.md ADDED
@@ -0,0 +1,219 @@
+ ---
+ library_name: sklearn
+ tags:
+ - sklearn
+ - skops
+ - tabular-classification
+ model_file: skops-bo_9fb88.pkl
+ widget:
+   structuredData:
+     area error:
+     - 30.29
+     - 96.05
+     - 48.31
+     compactness error:
+     - 0.01911
+     - 0.01652
+     - 0.01484
+     concave points error:
+     - 0.01037
+     - 0.0137
+     - 0.01093
+     concavity error:
+     - 0.02701
+     - 0.02269
+     - 0.02813
+     fractal dimension error:
+     - 0.003586
+     - 0.001698
+     - 0.002461
+     mean area:
+     - 481.9
+     - 1130.0
+     - 748.9
+     mean compactness:
+     - 0.1058
+     - 0.1029
+     - 0.1223
+     mean concave points:
+     - 0.03821
+     - 0.07951
+     - 0.08087
+     mean concavity:
+     - 0.08005
+     - 0.108
+     - 0.1466
+     mean fractal dimension:
+     - 0.06373
+     - 0.05461
+     - 0.05796
+     mean perimeter:
+     - 81.09
+     - 123.6
+     - 101.7
+     mean radius:
+     - 12.47
+     - 18.94
+     - 15.46
+     mean smoothness:
+     - 0.09965
+     - 0.09009
+     - 0.1092
+     mean symmetry:
+     - 0.1925
+     - 0.1582
+     - 0.1931
+     mean texture:
+     - 18.6
+     - 21.31
+     - 19.48
+     perimeter error:
+     - 2.497
+     - 5.486
+     - 3.094
+     radius error:
+     - 0.3961
+     - 0.7888
+     - 0.4743
+     smoothness error:
+     - 0.006953
+     - 0.004444
+     - 0.00624
+     symmetry error:
+     - 0.01782
+     - 0.01386
+     - 0.01397
+     texture error:
+     - 1.044
+     - 0.7975
+     - 0.7859
+     worst area:
+     - 677.9
+     - 1866.0
+     - 1156.0
+     worst compactness:
+     - 0.2378
+     - 0.2336
+     - 0.2394
+     worst concave points:
+     - 0.1015
+     - 0.1789
+     - 0.1514
+     worst concavity:
+     - 0.2671
+     - 0.2687
+     - 0.3791
+     worst fractal dimension:
+     - 0.0875
+     - 0.06589
+     - 0.08019
+     worst perimeter:
+     - 96.05
+     - 165.9
+     - 124.9
+     worst radius:
+     - 14.97
+     - 24.86
+     - 19.26
+     worst smoothness:
+     - 0.1426
+     - 0.1193
+     - 0.1546
+     worst symmetry:
+     - 0.3014
+     - 0.2551
+     - 0.2837
+     worst texture:
+     - 24.64
+     - 26.58
+     - 26.0
+ ---
+
+ # Model description
+
+ [More Information Needed]
+
+ ## Intended uses & limitations
+
+ [More Information Needed]
+
+ ## Training Procedure
+
+ ### Hyperparameters
+
+ The model is trained with the hyperparameters below.
+
+ <details>
+ <summary> Click to expand </summary>
+
+ | Hyperparameter                   | Value                                                     |
+ |----------------------------------|-----------------------------------------------------------|
+ | aggressive_elimination           | False                                                     |
+ | cv                               | 5                                                         |
+ | error_score                      | nan                                                       |
+ | estimator__categorical_features  |                                                           |
+ | estimator__early_stopping        | auto                                                      |
+ | estimator__l2_regularization     | 0.0                                                       |
+ | estimator__learning_rate         | 0.1                                                       |
+ | estimator__loss                  | log_loss                                                  |
+ | estimator__max_bins              | 255                                                       |
+ | estimator__max_depth             |                                                           |
+ | estimator__max_iter              | 100                                                       |
+ | estimator__max_leaf_nodes        | 31                                                        |
+ | estimator__min_samples_leaf      | 20                                                        |
+ | estimator__monotonic_cst         |                                                           |
+ | estimator__n_iter_no_change      | 10                                                        |
+ | estimator__random_state          |                                                           |
+ | estimator__scoring               | loss                                                      |
+ | estimator__tol                   | 1e-07                                                     |
+ | estimator__validation_fraction   | 0.1                                                       |
+ | estimator__verbose               | 0                                                         |
+ | estimator__warm_start            | False                                                     |
+ | estimator                        | HistGradientBoostingClassifier()                          |
+ | factor                           | 3                                                         |
+ | max_resources                    | auto                                                      |
+ | min_resources                    | exhaust                                                   |
+ | n_jobs                           | -1                                                        |
+ | param_grid                       | {'max_leaf_nodes': [5, 10, 15], 'max_depth': [2, 5, 10]}  |
+ | random_state                     | 42                                                        |
+ | refit                            | True                                                      |
+ | resource                         | n_samples                                                 |
+ | return_train_score               | True                                                      |
+ | scoring                          |                                                           |
+ | verbose                          | 0                                                         |
+
+ </details>
+
+ ### Model Plot
+
+ The model plot is below.
+
+ <style>#sk-container-id-1 {color: black;background-color: white;}#sk-container-id-1 pre{padding: 0;}#sk-container-id-1 div.sk-toggleable {background-color: white;}#sk-container-id-1 label.sk-toggleable__label {cursor: pointer;display: block;width: 100%;margin-bottom: 0;padding: 0.3em;box-sizing: border-box;text-align: center;}#sk-container-id-1 label.sk-toggleable__label-arrow:before {content: "▸";float: left;margin-right: 0.25em;color: #696969;}#sk-container-id-1 label.sk-toggleable__label-arrow:hover:before {color: black;}#sk-container-id-1 div.sk-estimator:hover label.sk-toggleable__label-arrow:before {color: black;}#sk-container-id-1 div.sk-toggleable__content {max-height: 0;max-width: 0;overflow: hidden;text-align: left;background-color: #f0f8ff;}#sk-container-id-1 div.sk-toggleable__content pre {margin: 0.2em;color: black;border-radius: 0.25em;background-color: #f0f8ff;}#sk-container-id-1 input.sk-toggleable__control:checked~div.sk-toggleable__content {max-height: 200px;max-width: 100%;overflow: auto;}#sk-container-id-1 input.sk-toggleable__control:checked~label.sk-toggleable__label-arrow:before {content: "▾";}#sk-container-id-1 div.sk-estimator input.sk-toggleable__control:checked~label.sk-toggleable__label {background-color: #d4ebff;}#sk-container-id-1 div.sk-label input.sk-toggleable__control:checked~label.sk-toggleable__label {background-color: #d4ebff;}#sk-container-id-1 input.sk-hidden--visually {border: 0;clip: rect(1px 1px 1px 1px);clip: rect(1px, 1px, 1px, 1px);height: 1px;margin: -1px;overflow: hidden;padding: 0;position: absolute;width: 1px;}#sk-container-id-1 div.sk-estimator {font-family: monospace;background-color: #f0f8ff;border: 1px dotted black;border-radius: 0.25em;box-sizing: border-box;margin-bottom: 0.5em;}#sk-container-id-1 div.sk-estimator:hover {background-color: #d4ebff;}#sk-container-id-1 div.sk-parallel-item::after {content: "";width: 100%;border-bottom: 1px solid gray;flex-grow: 1;}#sk-container-id-1 div.sk-label:hover label.sk-toggleable__label {background-color: #d4ebff;}#sk-container-id-1 div.sk-serial::before {content: "";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 0;bottom: 0;left: 50%;z-index: 0;}#sk-container-id-1 div.sk-serial {display: flex;flex-direction: column;align-items: center;background-color: white;padding-right: 0.2em;padding-left: 0.2em;position: relative;}#sk-container-id-1 div.sk-item {position: relative;z-index: 1;}#sk-container-id-1 div.sk-parallel {display: flex;align-items: stretch;justify-content: center;background-color: white;position: relative;}#sk-container-id-1 div.sk-item::before, #sk-container-id-1 div.sk-parallel-item::before {content: "";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 0;bottom: 0;left: 50%;z-index: -1;}#sk-container-id-1 div.sk-parallel-item {display: flex;flex-direction: column;z-index: 1;position: relative;background-color: white;}#sk-container-id-1 div.sk-parallel-item:first-child::after {align-self: flex-end;width: 50%;}#sk-container-id-1 div.sk-parallel-item:last-child::after {align-self: flex-start;width: 50%;}#sk-container-id-1 div.sk-parallel-item:only-child::after {width: 0;}#sk-container-id-1 div.sk-dashed-wrapped {border: 1px dashed gray;margin: 0 0.4em 0.5em 0.4em;box-sizing: border-box;padding-bottom: 0.4em;background-color: white;}#sk-container-id-1 div.sk-label label {font-family: monospace;font-weight: bold;display: inline-block;line-height: 1.2em;}#sk-container-id-1 div.sk-label-container {text-align: center;}#sk-container-id-1 
div.sk-container {/* jupyter's `normalize.less` sets `[hidden] { display: none; }` but bootstrap.min.css set `[hidden] { display: none !important; }` so we also need the `!important` here to be able to override the default hidden behavior on the sphinx rendered scikit-learn.org. See: https://github.com/scikit-learn/scikit-learn/issues/21755 */display: inline-block !important;position: relative;}#sk-container-id-1 div.sk-text-repr-fallback {display: none;}</style><div id="sk-container-id-1" class="sk-top-container" style="overflow: auto;"><div class="sk-text-repr-fallback"><pre>HalvingGridSearchCV(estimator=HistGradientBoostingClassifier(), n_jobs=-1,param_grid={&#x27;max_depth&#x27;: [2, 5, 10],&#x27;max_leaf_nodes&#x27;: [5, 10, 15]},random_state=42)</pre><b>In a Jupyter environment, please rerun this cell to show the HTML representation or trust the notebook. <br />On GitHub, the HTML representation is unable to render, please try loading this page with nbviewer.org.</b></div><div class="sk-container" hidden><div class="sk-item sk-dashed-wrapped"><div class="sk-label-container"><div class="sk-label sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-1" type="checkbox" ><label for="sk-estimator-id-1" class="sk-toggleable__label sk-toggleable__label-arrow">HalvingGridSearchCV</label><div class="sk-toggleable__content"><pre>HalvingGridSearchCV(estimator=HistGradientBoostingClassifier(), n_jobs=-1,param_grid={&#x27;max_depth&#x27;: [2, 5, 10],&#x27;max_leaf_nodes&#x27;: [5, 10, 15]},random_state=42)</pre></div></div></div><div class="sk-parallel"><div class="sk-parallel-item"><div class="sk-item"><div class="sk-label-container"><div class="sk-label sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-2" type="checkbox" ><label for="sk-estimator-id-2" class="sk-toggleable__label sk-toggleable__label-arrow">estimator: HistGradientBoostingClassifier</label><div class="sk-toggleable__content"><pre>HistGradientBoostingClassifier()</pre></div></div></div><div class="sk-serial"><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-3" type="checkbox" ><label for="sk-estimator-id-3" class="sk-toggleable__label sk-toggleable__label-arrow">HistGradientBoostingClassifier</label><div class="sk-toggleable__content"><pre>HistGradientBoostingClassifier()</pre></div></div></div></div></div></div></div></div></div></div>
+
+ ## Evaluation Results
+
+ [More Information Needed]
+
+ # How to Get Started with the Model
+
+ [More Information Needed]
+
+ # Model Card Authors
+
+ This model card is written by the following authors:
+
+ [More Information Needed]
+
+ # Model Card Contact
+
+ You can contact the model card authors through the following channels:
+ [More Information Needed]
+
+ # Citation
+
+ Below you can find information related to citation.
+
+ **BibTeX:**
+ ```
+ [More Information Needed]
+ ```
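
The hyperparameter table in the card above corresponds to a halving grid search over a `HistGradientBoostingClassifier`. As a sketch, only the non-default values need to be passed when reconstructing it; every other row in the table is a scikit-learn default:

```python
# Sketch reconstructing the estimator described by the hyperparameter table.
# Only non-default values are passed; the remaining table rows (factor=3,
# cv=5, min_resources="exhaust", return_train_score=True, ...) are defaults.
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.experimental import enable_halving_search_cv  # noqa: F401
from sklearn.model_selection import HalvingGridSearchCV

search = HalvingGridSearchCV(
    estimator=HistGradientBoostingClassifier(),
    param_grid={"max_leaf_nodes": [5, 10, 15], "max_depth": [2, 5, 10]},
    random_state=42,
    n_jobs=-1,
)
```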
config.json ADDED
@@ -0,0 +1,196 @@
+ {
+   "sklearn": {
+     "columns": [
+       "mean radius",
+       "mean texture",
+       "mean perimeter",
+       "mean area",
+       "mean smoothness",
+       "mean compactness",
+       "mean concavity",
+       "mean concave points",
+       "mean symmetry",
+       "mean fractal dimension",
+       "radius error",
+       "texture error",
+       "perimeter error",
+       "area error",
+       "smoothness error",
+       "compactness error",
+       "concavity error",
+       "concave points error",
+       "symmetry error",
+       "fractal dimension error",
+       "worst radius",
+       "worst texture",
+       "worst perimeter",
+       "worst area",
+       "worst smoothness",
+       "worst compactness",
+       "worst concavity",
+       "worst concave points",
+       "worst symmetry",
+       "worst fractal dimension"
+     ],
+     "environment": [
+       "scikit-learn=1.1.1"
+     ],
+     "example_input": {
+       "area error": [
+         30.29,
+         96.05,
+         48.31
+       ],
+       "compactness error": [
+         0.01911,
+         0.01652,
+         0.01484
+       ],
+       "concave points error": [
+         0.01037,
+         0.0137,
+         0.01093
+       ],
+       "concavity error": [
+         0.02701,
+         0.02269,
+         0.02813
+       ],
+       "fractal dimension error": [
+         0.003586,
+         0.001698,
+         0.002461
+       ],
+       "mean area": [
+         481.9,
+         1130.0,
+         748.9
+       ],
+       "mean compactness": [
+         0.1058,
+         0.1029,
+         0.1223
+       ],
+       "mean concave points": [
+         0.03821,
+         0.07951,
+         0.08087
+       ],
+       "mean concavity": [
+         0.08005,
+         0.108,
+         0.1466
+       ],
+       "mean fractal dimension": [
+         0.06373,
+         0.05461,
+         0.05796
+       ],
+       "mean perimeter": [
+         81.09,
+         123.6,
+         101.7
+       ],
+       "mean radius": [
+         12.47,
+         18.94,
+         15.46
+       ],
+       "mean smoothness": [
+         0.09965,
+         0.09009,
+         0.1092
+       ],
+       "mean symmetry": [
+         0.1925,
+         0.1582,
+         0.1931
+       ],
+       "mean texture": [
+         18.6,
+         21.31,
+         19.48
+       ],
+       "perimeter error": [
+         2.497,
+         5.486,
+         3.094
+       ],
+       "radius error": [
+         0.3961,
+         0.7888,
+         0.4743
+       ],
+       "smoothness error": [
+         0.006953,
+         0.004444,
+         0.00624
+       ],
+       "symmetry error": [
+         0.01782,
+         0.01386,
+         0.01397
+       ],
+       "texture error": [
+         1.044,
+         0.7975,
+         0.7859
+       ],
+       "worst area": [
+         677.9,
+         1866.0,
+         1156.0
+       ],
+       "worst compactness": [
+         0.2378,
+         0.2336,
+         0.2394
+       ],
+       "worst concave points": [
+         0.1015,
+         0.1789,
+         0.1514
+       ],
+       "worst concavity": [
+         0.2671,
+         0.2687,
+         0.3791
+       ],
+       "worst fractal dimension": [
+         0.0875,
+         0.06589,
+         0.08019
+       ],
+       "worst perimeter": [
+         96.05,
+         165.9,
+         124.9
+       ],
+       "worst radius": [
+         14.97,
+         24.86,
+         19.26
+       ],
+       "worst smoothness": [
+         0.1426,
+         0.1193,
+         0.1546
+       ],
+       "worst symmetry": [
+         0.3014,
+         0.2551,
+         0.2837
+       ],
+       "worst texture": [
+         24.64,
+         26.58,
+         26.0
+       ]
+     },
+     "model": {
+       "file": "skops-bo_9fb88.pkl"
+     },
+     "model_format": "pickle",
+     "task": "tabular-classification"
+   }
+ }
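
config.json records the training column order, an example batch, and the pickle file name; the skops tooling reads it via `hub_utils.get_config`. A small sketch of using it directly for a local prediction, assuming the committed files sit in the current directory, might look like this:

```python
# Sketch: run the committed model on the example batch stored in config.json.
# Assumes config.json and skops-bo_9fb88.pkl are in the current directory.
import json
import pickle

import pandas as pd

with open("config.json") as f:
    sk_config = json.load(f)["sklearn"]

with open(sk_config["model"]["file"], "rb") as f:  # "skops-bo_9fb88.pkl"
    model = pickle.load(f)

# Reorder the example rows to match the column order used at training time.
X_example = pd.DataFrame(sk_config["example_input"])[sk_config["columns"]]
print(model.predict(X_example))
```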
plot_hf_hub.py ADDED
@@ -0,0 +1,172 @@
+ """
+ scikit-learn models on Hugging Face Hub
+ ---------------------------------------
+
+ This guide demonstrates how you can use this package to create a Hugging Face
+ Hub model repository based on a scikit-learn compatible model, and how to
+ fetch scikit-learn compatible models from the Hub and run them locally.
+ """
+
+ # %%
+ # Imports
+ # =======
+ # First we import everything required for the rest of this document.
+
+ import json
+ import os
+ import pickle
+ from pathlib import Path
+ from tempfile import mkdtemp, mkstemp
+ from uuid import uuid4
+
+ import sklearn
+ from huggingface_hub import HfApi
+ from sklearn.datasets import load_breast_cancer
+ from sklearn.ensemble import HistGradientBoostingClassifier
+ from sklearn.experimental import enable_halving_search_cv  # noqa
+ from sklearn.model_selection import HalvingGridSearchCV, train_test_split
+
+ from skops import card, hub_utils
+
+ # %%
+ # Data
+ # ====
+ # Then we load the breast cancer dataset to train and evaluate our model.
+
+ X, y = load_breast_cancer(as_frame=True, return_X_y=True)
+ X_train, X_test, y_train, y_test = train_test_split(
+     X, y, test_size=0.3, random_state=42
+ )
+ print("X's summary: ", X.describe())
+ print("y's summary: ", y.describe())
+
+
+ # %%
+ # Train a Model
+ # =============
+ # Using the above data, we train a model. To select the model, we use
+ # :class:`~sklearn.model_selection.HalvingGridSearchCV` with a parameter grid
+ # over :class:`~sklearn.ensemble.HistGradientBoostingClassifier`.
+
+ param_grid = {
+     "max_leaf_nodes": [5, 10, 15],
+     "max_depth": [2, 5, 10],
+ }
+
+ model = HalvingGridSearchCV(
+     estimator=HistGradientBoostingClassifier(),
+     param_grid=param_grid,
+     random_state=42,
+     n_jobs=-1,
+ ).fit(X_train, y_train)
+ model.score(X_test, y_test)
+
+ # %%
+ # Initialize a Model Repo
+ # =======================
+ # We now initialize a model repository locally and push it to the Hub. For
+ # that, we first need to store the model as a pickle file and pass it to the
+ # hub tools.
+
+ # The file name is not significant; here we choose to save it with a `pkl`
+ # extension.
+ _, pkl_name = mkstemp(prefix="skops-", suffix=".pkl")
+ with open(pkl_name, mode="bw") as f:
+     pickle.dump(model, file=f)
+
+ local_repo = mkdtemp(prefix="skops-")
+ hub_utils.init(
+     model=pkl_name,
+     requirements=[f"scikit-learn={sklearn.__version__}"],
+     dst=local_repo,
+     task="tabular-classification",
+     data=X_test,
+ )
+ if "__file__" in locals():  # __file__ is not defined during the docs build
+     # Add this script itself to the files to be uploaded for reproducibility.
+     hub_utils.add_files(__file__, dst=local_repo)
+
+ # %%
+ # We can now see the contents of the created local repo:
+ print(os.listdir(local_repo))
+
+ # %%
+ # Model Card
+ # ==========
+ # We will now create a model card and save it. For more information about how
+ # to create a good model card, refer to the :ref:`model card example
+ # <sphx_glr_auto_examples_plot_model_card.py>`. The following code uses
+ # :func:`~skops.card.metadata_from_config`, which creates a minimal metadata
+ # object to be included in the metadata section of the model card. The
+ # configuration used by this method is stored in the ``config.json`` file,
+ # which is created by the call to :func:`~skops.hub_utils.init`.
+ model_card = card.Card(model, metadata=card.metadata_from_config(Path(local_repo)))
+ model_card.save(Path(local_repo) / "README.md")
+
+ # %%
+ # Push to Hub
+ # ===========
+ # And finally, we can push the model to the Hub. This requires a user access
+ # token, which you can get at https://huggingface.co/settings/tokens
+
+ # You can put your own token here, or set it as an environment variable before
+ # running this script.
+ token = os.environ["HF_HUB_TOKEN"]
+
+ repo_name = f"hf_hub_example-{uuid4()}"
+ user_name = HfApi().whoami(token=token)["name"]
+ repo_id = f"{user_name}/{repo_name}"
+ print(f"Creating and pushing to repo: {repo_id}")
+
+ # %%
+ # Now we can push our files to the repo. The following function creates the
+ # remote repository if it doesn't exist; this is controlled via the
+ # ``create_remote`` argument. Note that here we're setting ``private=True``,
+ # which means only people with the right permissions can see the model. Set
+ # ``private=False`` to make it visible to the public.
+
+ hub_utils.push(
+     repo_id=repo_id,
+     source=local_repo,
+     token=token,
+     commit_message="pushing files to the repo from the example!",
+     create_remote=True,
+     private=True,
+ )
+
+ # %%
+ # Once uploaded, other users can download and use it, unless the repo is kept
+ # private. Given a repository's name, here's how one can download it:
+ repo_copy = mkdtemp(prefix="skops")
+ hub_utils.download(repo_id=repo_id, dst=repo_copy, token=token)
+ print(os.listdir(repo_copy))
+
+
+ # %%
+ # You can also get the requirements of this repository:
+ print(hub_utils.get_requirements(path=repo_copy))
+
+ # %%
+ # As well as the complete configuration of the project:
+ print(json.dumps(hub_utils.get_config(path=repo_copy), indent=2))
+
+ # %%
+ # Now you can check the contents of the repository under your user.
+ #
+ # Update Requirements
+ # ===================
+ # If you update your environment and the versions of your requirements
+ # change, you can update the requirements in your repo by calling
+ # ``update_env``, which automatically detects the existing installation of the
+ # current environment and updates the requirements accordingly.
+
+ hub_utils.update_env(path=local_repo, requirements=["scikit-learn"])
+
+ # %%
+ # Delete Repository
+ # =================
+ # At the end, you can also delete the repository you created using
+ # ``HfApi().delete_repo``. For more information, please refer to the
+ # documentation of the ``huggingface_hub`` library.
+
+ # HfApi().delete_repo(repo_id=repo_id, token=token)
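
The script above pushes and then re-downloads the repository but never reloads the pickle itself. A consumer-side sketch, assuming a matching scikit-learn environment and a valid `repo_id`, could look like this:

```python
# Sketch: fetch the pushed repository and run the stored model locally.
# `repo_id` is a placeholder; use the value printed by the script above.
import os
import pickle
from tempfile import mkdtemp

from skops import hub_utils

repo_id = "<user>/hf_hub_example-<uuid>"
dst = mkdtemp(prefix="skops-download-")
hub_utils.download(repo_id=repo_id, dst=dst)  # pass token=... for private repos

config = hub_utils.get_config(path=dst)            # contents of config.json
model_file = config["sklearn"]["model"]["file"]    # "skops-bo_9fb88.pkl"
with open(os.path.join(dst, model_file), "rb") as f:
    model = pickle.load(f)

print(model.best_params_)  # best parameters found by the halving grid search
```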
skops-bo_9fb88.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f7315c566432a5873ff9d7939376a74dc78b81f3df99ff7e72b504dc6f558e84
+ size 233388
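
The .pkl file is stored through Git LFS, so the commit only records the pointer above (the object's SHA-256 and size). A quick integrity check of a downloaded copy against that pointer:

```python
# Sketch: verify a local copy of the pickle against the LFS pointer's object id.
import hashlib

EXPECTED_OID = "f7315c566432a5873ff9d7939376a74dc78b81f3df99ff7e72b504dc6f558e84"

with open("skops-bo_9fb88.pkl", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

assert digest == EXPECTED_OID, "pickle does not match the LFS object id"
```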