Model description

[More Information Needed]

Intended uses & limitations

[More Information Needed]

Training Procedure

Hyperparameters

The model is trained with the hyperparameters below.

| Hyperparameter | Value |
|---|---|
| memory | |
| steps | [('onehotencoder', OneHotEncoder(handle_unknown='ignore', sparse=False)), ('xgbregressor', XGBRegressor(base_score=0.5, booster='gbtree', colsample_bylevel=1, colsample_bynode=1, colsample_bytree=1, enable_categorical=False, gamma=0, gpu_id=-1, importance_type=None, interaction_constraints='', learning_rate=0.300000012, max_delta_step=0, max_depth=5, min_child_weight=1, missing=nan, monotone_constraints='()', n_estimators=100, n_jobs=8, num_parallel_tree=1, predictor='auto', random_state=0, reg_alpha=0, reg_lambda=1, scale_pos_weight=1, subsample=1, tree_method='exact', validate_parameters=1, verbosity=None))] |
| verbose | False |
| onehotencoder | OneHotEncoder(handle_unknown='ignore', sparse=False) |
| xgbregressor | XGBRegressor(base_score=0.5, booster='gbtree', colsample_bylevel=1, colsample_bynode=1, colsample_bytree=1, enable_categorical=False, gamma=0, gpu_id=-1, importance_type=None, interaction_constraints='', learning_rate=0.300000012, max_delta_step=0, max_depth=5, min_child_weight=1, missing=nan, monotone_constraints='()', n_estimators=100, n_jobs=8, num_parallel_tree=1, predictor='auto', random_state=0, reg_alpha=0, reg_lambda=1, scale_pos_weight=1, subsample=1, tree_method='exact', validate_parameters=1, verbosity=None) |
| onehotencoder__categories | auto |
| onehotencoder__drop | |
| onehotencoder__dtype | <class 'numpy.float64'> |
| onehotencoder__handle_unknown | ignore |
| onehotencoder__max_categories | |
| onehotencoder__min_frequency | |
| onehotencoder__sparse | False |
| xgbregressor__objective | reg:squarederror |
| xgbregressor__base_score | 0.5 |
| xgbregressor__booster | gbtree |
| xgbregressor__colsample_bylevel | 1 |
| xgbregressor__colsample_bynode | 1 |
| xgbregressor__colsample_bytree | 1 |
| xgbregressor__enable_categorical | False |
| xgbregressor__gamma | 0 |
| xgbregressor__gpu_id | -1 |
| xgbregressor__importance_type | |
| xgbregressor__interaction_constraints | |
| xgbregressor__learning_rate | 0.300000012 |
| xgbregressor__max_delta_step | 0 |
| xgbregressor__max_depth | 5 |
| xgbregressor__min_child_weight | 1 |
| xgbregressor__missing | nan |
| xgbregressor__monotone_constraints | () |
| xgbregressor__n_estimators | 100 |
| xgbregressor__n_jobs | 8 |
| xgbregressor__num_parallel_tree | 1 |
| xgbregressor__predictor | auto |
| xgbregressor__random_state | 0 |
| xgbregressor__reg_alpha | 0 |
| xgbregressor__reg_lambda | 1 |
| xgbregressor__scale_pos_weight | 1 |
| xgbregressor__subsample | 1 |
| xgbregressor__tree_method | exact |
| xgbregressor__validate_parameters | 1 |
| xgbregressor__verbosity | |
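
For illustration, the sketch below reconstructs an equivalent pipeline from the non-default hyperparameters in the table. It is not the original training script, and the training data and feature columns are not documented in this card; `X_train` and `y_train` are placeholders.

```python
# Minimal sketch: a pipeline matching the hyperparameter table above.
# Note: sparse=False is the pre-1.2 scikit-learn parameter name
# (renamed to sparse_output in later releases).
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder
from xgboost import XGBRegressor

pipeline = Pipeline(steps=[
    ("onehotencoder", OneHotEncoder(handle_unknown="ignore", sparse=False)),
    ("xgbregressor", XGBRegressor(
        objective="reg:squarederror",
        base_score=0.5,
        booster="gbtree",
        learning_rate=0.300000012,
        max_depth=5,
        n_estimators=100,
        min_child_weight=1,
        subsample=1,
        reg_alpha=0,
        reg_lambda=1,
        tree_method="exact",
        random_state=0,
        n_jobs=8,
    )),
])

# pipeline.fit(X_train, y_train)  # placeholders: the actual training data
#                                 # is not documented in this model card
```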

Model Plot

The model plot is below.

```
Pipeline(steps=[('onehotencoder',
                 OneHotEncoder(handle_unknown='ignore', sparse=False)),
                ('xgbregressor',
                 XGBRegressor(base_score=0.5, booster='gbtree',
                              colsample_bylevel=1, colsample_bynode=1,
                              colsample_bytree=1, enable_categorical=False,
                              gamma=0, gpu_id=-1, importance_type=None,
                              interaction_constraints='',
                              learning_rate=0.300000012, max_delta_step=0,
                              max_depth=5, min_child_weight=1, missing=nan,
                              monotone_constraints='()', n_estimators=100,
                              n_jobs=8, num_parallel_tree=1, predictor='auto',
                              random_state=0, reg_alpha=0, reg_lambda=1,
                              scale_pos_weight=1, subsample=1,
                              tree_method='exact', validate_parameters=1,
                              verbosity=None))])
```

Evaluation Results

[More Information Needed]

How to Get Started with the Model

[More Information Needed]
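
Since this section is left blank, here is a hypothetical sketch of loading a pickled scikit-learn pipeline from the Hugging Face Hub. The repository id and the file name `model.pkl` are assumptions (typical for skops-generated cards), not details documented here.

```python
# Hypothetical sketch only: repo_id and filename are assumptions,
# not documented in this model card.
import joblib
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="<user>/<model-repo>",  # placeholder: replace with the actual repo id
    filename="model.pkl",           # assumption: common skops/sklearn layout
)
pipeline = joblib.load(model_path)

# predictions = pipeline.predict(X)  # X must match the training feature layout
```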

Model Card Authors

This model card is written by the following authors:

[More Information Needed]

Model Card Contact

You can contact the model card authors through the following channels: [More Information Needed]

Citation

You can find citation information below.

BibTeX:

[More Information Needed]