Update README.md

README.md
This dataset is curated to benchmark the performance of tree-based models against neural networks. The criteria used to pick the datasets are described in the paper as follows:
- **Heterogeneous columns**. Columns should correspond to features of different nature. This excludes images or signal datasets where each column corresponds to the same signal on different sensors.
- **Not high dimensional**. We only keep datasets with a d/n ratio (number of features over number of samples) below 1/10; a minimal check is sketched after this list.
- **Undocumented datasets**. We remove datasets where too little information is available. We did keep datasets with hidden column names if it was clear that the features were heterogeneous.
- **I.I.D. data**. We remove stream-like datasets or time series.
- **Real-world data**. We remove artificial datasets but keep some simulated datasets. The difference is subtle, but we try to keep simulated datasets if learning them is of practical importance (like the Higgs dataset), and not just a toy example to test specific model capabilities.
- **Not too small**. We remove datasets with too few features (< 4) and too few samples (< 3 000). For benchmarks on numerical features only, we remove categorical features before checking whether enough features and samples remain (see the filter sketch after this list).
- **Not too easy**. We remove datasets which are too easy. Specifically, we remove a dataset if a default Logistic Regression (or Linear Regression for regression) reaches a score whose relative difference with the score of both a default Resnet (from Gorishniy et al. [2021]) and a default HistGradientBoosting model (from scikit-learn) is below 5% (a simplified version of this check is sketched after this list). Other benchmarks use different metrics to remove too easy datasets, like removing datasets which can be learnt perfectly by a single decision
classifier [Bischl et al., 2021], but this does not account for different Bayes rate across datasets. As tree-based methods have been shown to be superior to Logistic Regression [Fernández-Delgado et al., 2014] in our setting, a close score for these two types of models indicates that we might already be close to the best achievable score.
- **Not deterministic**. We remove datasets where the target is a deterministic function of the data. This mostly means removing datasets on games like poker and chess. Indeed, we believe that these datasets are very different from most real-world tabular datasets, and should be studied separately.
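For context, the card's usage snippet loads one benchmark table at a time with `load_dataset`. A minimal sketch, assuming the `datasets` library is installed; the csv path below is an illustrative placeholder, not a file name taken from this card:

```python
from datasets import load_dataset

# Load a single benchmark table. The path below is a hypothetical
# example -- check the repository's file listing for the actual
# csv names under folders such as reg_cat/.
dataset = load_dataset(
    "inria_soda/tabular-benchmark",
    data_files="reg_cat/house_sales.csv",
)
print(dataset["train"])
```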
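The size and dimensionality criteria are mechanical enough to state in code. A minimal sketch of such a filter (the function name is ours, not from the authors' curation scripts), assuming the table is a pandas DataFrame containing features only:

```python
import pandas as pd

def passes_size_criteria(features: pd.DataFrame) -> bool:
    """Check the 'not too small' and 'not high dimensional' rules:
    at least 4 features, at least 3 000 samples, and d/n below 1/10."""
    n_samples, n_features = features.shape
    if n_features < 4 or n_samples < 3_000:
        return False  # too small
    return n_features / n_samples < 1 / 10  # not high dimensional
```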
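Similarly, the "not too easy" rule can be sketched with scikit-learn defaults. This is a simplified illustration, not the authors' protocol: it uses a plain train/test split, checks only the HistGradientBoosting baseline (the paper also compares against a default Resnet, omitted here), and the function name is ours:

```python
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def is_too_easy(X, y, threshold=0.05):
    """Flag a classification dataset as 'too easy' when a default
    logistic regression lands within `threshold` relative accuracy
    of a default HistGradientBoosting model."""
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    linear = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    boosted = HistGradientBoostingClassifier(random_state=0).fit(X_train, y_train)
    lin_acc = linear.score(X_test, y_test)
    boost_acc = boosted.score(X_test, y_test)
    return abs(boost_acc - lin_acc) / boost_acc < threshold
```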