Spaces: Running on CPU Upgrade
Add details to description
app.py CHANGED
@@ -70,11 +70,21 @@ def train_model(num_samples, num_info):
 
 
 title = "Feature importances with a forest of trees 🌳"
-description = """
-
-
-
-
+description = """
+This example shows the use of a random forest model to evaluate the importances \
+of the features in an artificial classification task. The model is trained on simulated data \
+generated with a user-selected number of informative features. \
+
+
+The plots show the feature importances calculated with two different methods. In the first method (left), \
+the importances are provided by the model itself: they are computed as the mean and standard deviation \
+of the accumulated impurity decrease within each tree. The second method (right) uses permutation \
+feature importance, which is the decrease in the model score when a single feature's values are randomly shuffled. \
+
+
+The blue bars are the feature importances of the random forest model, with their inter-tree variability \
+represented by the error bars.
+"""
 
 with gr.Blocks() as demo:
     gr.Markdown(f"## {title}")
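The two importance methods named in the new description can be sketched as follows. This is a minimal, self-contained sketch, not the app's actual `train_model` code: it assumes scikit-learn's `make_classification` as a stand-in for the app's simulated data, with `n_samples` and `n_informative` playing the role of the app's user-selected inputs.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Simulated classification data with a chosen number of informative features
# (stand-in for the app's user-selected num_samples / num_info).
X, y = make_classification(
    n_samples=500, n_features=10, n_informative=3, random_state=0
)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)

# Method 1 (left plot): impurity-based importances provided by the model,
# plus the inter-tree standard deviation used for the error bars.
mdi = forest.feature_importances_
mdi_std = np.std(
    [tree.feature_importances_ for tree in forest.estimators_], axis=0
)

# Method 2 (right plot): permutation importance -- the drop in model score
# when a single feature's values are randomly shuffled.
perm = permutation_importance(
    forest, X_test, y_test, n_repeats=10, random_state=0
)

print(mdi.shape, perm.importances_mean.shape)
```

Both arrays have one entry per feature; the impurity-based importances are normalized to sum to one, while permutation importances are score decreases and need no such normalization.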