taesiri committed 9670e47 (parent: da13c0b)

Upload papers/2401/2401.08867v1.tex with huggingface_hub

Files changed (1): papers/2401/2401.08867v1.tex ADDED (+490 lines)
\typeout{IJCAI--24 Instructions for Authors}

\documentclass{article}
\pdfpagewidth=8.5in
\pdfpageheight=11in

\usepackage{ijcai24}

\usepackage{times}
\usepackage{soul}
\usepackage{url}
\usepackage[hidelinks]{hyperref}
\usepackage[utf8]{inputenc}
\usepackage[small]{caption}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{amsthm}
\usepackage{booktabs}
\usepackage{algorithm}
\usepackage{algorithmic}
\usepackage{multicol}
\usepackage{multirow}
\usepackage{microtype}
\usepackage{balance}
\usepackage{float}
\usepackage[switch]{lineno}
\usepackage{xcolor}
\usepackage{subfig}
\definecolor{amber}{rgb}{1.0, 0.6, 0.0}

\urlstyle{same}

\newtheorem{example}{Example}
\newtheorem{theorem}{Theorem}

\pdfinfo{
/TemplateVersion (IJCAI.2024.0)
}

\title{MambaTab: A Simple Yet Effective Approach for Handling Tabular Data}

\author{
Md Atik Ahamed$^1$
\and
Qiang Cheng$^{1, 2}$\footnote{Correspondence should be addressed to: qiang.cheng@uky.edu}
\affiliations
$^1$Department of Computer Science, University of Kentucky, Lexington, KY, USA\\
$^2$Institute for Biomedical Informatics, University of Kentucky, Lexington, KY, USA\\
\emails
\{atikahamed, qiang.cheng\}@uky.edu
}

\begin{document}

\maketitle

\begin{abstract}
Tabular data remains ubiquitous across domains despite the growing use of images and text for machine learning. While deep learning models like convolutional neural networks and Transformers achieve strong performance on tabular data, they require extensive data preprocessing, tuning, and resources, limiting accessibility and scalability. This work develops an innovative approach based on a structured state-space model (SSM), MambaTab, for tabular data. SSMs have strong capabilities for efficiently extracting effective representations from data with long-range dependencies. MambaTab leverages Mamba, an emerging SSM variant, for end-to-end supervised learning on tables. Compared to state-of-the-art baselines, MambaTab delivers superior performance while requiring significantly fewer parameters and minimal preprocessing, as empirically validated on diverse benchmark datasets. MambaTab's efficiency, scalability, generalizability, and predictive gains make it a lightweight, ``out-of-the-box'' solution for diverse tabular data with promise for enabling wider practical applications.
\end{abstract}

\section{Introduction}
Tabular data remains the predominant data type across industrial, healthcare, academic, and various other domains due to its structured format, despite the recent trend of using images and natural language in machine learning. To handle tabular data, numerous strategies have been developed, including machine learning (ML) techniques that use traditional shallow models as well as newer deep learning (DL) architectures. Foundational models such as convolutional neural networks (CNNs) and Transformers~\cite{vaswani2017attention} have been actively explored and tailored to tabular data modeling in DL, enabling impactful insights and analytics.

While powerful, state-of-the-art deep tabular models typically require a large number of learning parameters, extensive data preprocessing, and hyperparameter tuning.
This demands significant computational and human resources, which can impose barriers to developing and deploying these complex models for tabular data, thereby impeding their wider application.
Moreover, almost all existing tabular learning methods, except TransTab~\cite{wang2022transtab}, operate under vanilla supervised learning, requiring identical train and test table structures. They are not well suited for feature incremental learning, where features are sequentially added; under such a setting, they have to either drop new features or old data, leading to insufficient use of training data. It is desirable to have the ability to continuously learn from new features.

To address these challenges, we introduce a new approach for tabular data based on structured state-space models (SSMs)~\cite{gu2021combining,gu2021efficiently,fu2022hungry}. These models can be interpreted as a combination of CNNs and recurrent neural networks, having advantages of both types of models. They offer parameter efficiency, scalability, and strong capabilities for learning representations from varied data, particularly sequential data with long-range dependencies. To tap into these potential advantages, we leverage SSMs as an alternative to CNNs or Transformers for modeling tabular data.

Specifically, we leverage Mamba~\cite{gu2023mamba}, an emerging SSM variant, as a critical building block to build a new supervised model for tabular data called \emph{MambaTab}. This proposed model has several key advantages over existing models. Thanks to Mamba's innovative approach as an SSM, MambaTab not only requires significantly fewer model weights and exhibits linear parameter growth, but also inherently aligns well with feature incremental learning. Additionally, MambaTab has a simple architecture needing minimal data preprocessing or manual tuning. Finally, MambaTab outperforms state-of-the-art baselines, including Transformer- and CNN-based models as well as classic learning models.

We extensively benchmark MambaTab against state-of-the-art tabular data approaches. Experiments under two settings, vanilla supervised learning and feature incremental learning, on 8 public datasets demonstrate MambaTab's superior performance. It consistently and significantly outperforms the state-of-the-art baselines, including Transformer-based models, while using a small fraction, typically $\mathbf{<1\%}$, of their parameters.

In summary, the key innovations and contributions of MambaTab are:
\begin{itemize}
\item Extremely small model size and number of learning parameters
\item Linear scalability of model parameters in the number of Mamba blocks, number of features, or sequence length
\item Effective end-to-end training and inference with minimal data wrangling needed, in particular, naturally suitable for feature incremental learning
\item Superior performance over state-of-the-art tabular learning approaches
\end{itemize}
As the first Mamba-based architecture for tabular data, MambaTab's advantages suggest that it can serve as an \emph{out-of-the-box}, plug-and-play model for tabular data on systems with varying computational resources. This holds promise to enable wide applicability across diverse practical settings.


\section{Related Work}
\label{sec:literature}
In this section, we briefly review existing approaches for learning from tabular data. We roughly categorize them into three groups based on whether they utilize classic shallow models, deep learning architectures such as CNN- or contemporary Transformer-based models, or self-supervised learning strategies.

\paragraph{{\bf{Classic Learning-based Approaches}}} A variety of models exist based on classic ML techniques such as logistic regression (LR), XGBoost~\cite{chen2016xgboost,zhang2020customer}, and multilayer perceptrons (MLPs). For example, an MLP variant called the self-normalizing neural network (SNN)~\cite{klambauer2017self} uses the scaled exponential linear unit (SELU) specifically for tabular data. SNN neuron activations automatically converge towards zero mean and unit variance, enabling high-level abstract representations.

\paragraph{{\bf{Deep Learning-based Supervised Models}}} TabNet~\cite{arik2021tabnet} is a DL model for tabular data based on an attention mechanism. It uses sequential attention to choose which features to attend to at each decision step, enabling interpretability and more efficient learning, as the learning capacity is used for the most salient features. TabNet is shown to have high performance on a wide range of non-performance-saturated tabular datasets and to yield interpretable feature attributions and insights into global model behavior. The deep cross network (DCN)~\cite{wang2017deep} constructs a new network structure consisting of two parts: a deep network and a cross network. The deep network is a standard feed-forward network that can learn high-order feature interactions. The cross network is a new component that explicitly applies automatic feature crossing with a special operation called vector-wise cross. DCN is shown to be efficient in learning certain bounded-degree feature interactions.

A variety of models have been developed with Transformers as building blocks.
AutoInt~\cite{song2019autoint} uses Transformers to learn the importance of different input features. By relying on self-attention networks, this model can automatically learn high-order feature interactions in a data-driven way. TabTransformer~\cite{huang2020tabtransformer} is also built upon self-attention-based Transformers, which transform the embeddings of categorical features into robust contextual embeddings to achieve higher prediction accuracy. The contextual embeddings are shown to be highly robust against both missing and noisy data features and provide better interpretability. Moreover, FT-Transformer~\cite{fttrans} converts categorical features into continuous embeddings using a tokenizer and models the interactions between the continuous embeddings with Transformers.

\paragraph{{\bf{Self-Supervised Learning-based Models}}} Recently, several approaches have been developed to pre-train deep learning models using self-supervised strategies.
VIME~\cite{yoon2020vime} introduces a tabular data augmentation method for self- and semi-supervised learning frameworks. It builds a pretext task of estimating mask vectors from corrupted tabular data in addition to the reconstruction pretext task. A Transformer is used as a component of the pre-trained model.
SCARF~\cite{bahri2021scarf} is a self-supervised contrastive learning technique for pre-training on real-world tabular datasets. It forms views for contrastive learning by corrupting a random subset of features.
TransTab~\cite{wang2022transtab} proposes a novel framework for learning from tabular data across tables with different learning strategies such as self-supervised and feature incremental learning. A Transformer is used as an integral component of the framework.


\section{Method}
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{figures/mambatab.pdf}
\caption{Schematic diagram of our proposed method (MambaTab). \textbf{Left:} Data preprocessing and representation learning. The embedding learner module is critical to ensure the embedded feature dimension is the same before and after new features are added under incremental learning. \textbf{Right:} Conversion of input data to prediction values via Mamba and a fully connected layer.}
\label{fig:method}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/incremental.pdf}
\caption{Illustration of the feature incremental learning setting. While most existing methods are capable of learning only from a fixed set of features, our method MambaTab and the existing TransTab can learn under an incremental feature setting. Here, Feature Set$_i$, $i = 1,2,3$, have incrementally added features. Feature Set$_X$ represents the set of features for the test data.}
\label{fig:method_incremental}
\end{figure}
In this section, we present our approach for robust tabular data classification, aiming to improve performance through a simple, efficient, yet effective method. Below we describe each component of our method and its working procedure.
\paragraph{Data preprocessing}
\label{sec:method_preprocess}
We consider a tabular dataset, $\{F_i,y_i\}_{i=1}^m$, where the features of the $i$-th sample are represented by $F_i=\{v_{i,j}\}_{j=1}^n$, its corresponding label is $y_i\in \{ 0,1 \}$, and $v_{i,j}$ can be categorical, binary, or numerical. We treat both binary and categorical features as categorical and utilize an ordinal encoder for encoding them, as shown in Figure~\ref{fig:method}. Unlike TransTab~\cite{wang2022transtab}, our method does not require manual identification of feature types such as categorical, numerical, or binary.
We keep numerical features unchanged in the dataset and handle missing values by imputing the mode. This preprocessing preserves the feature set cardinality, i.e., $n(F_i)=n({F_i}')$, where $n(F_i)$
and $n({F_i}')$ are the numbers of features before and after processing. Before feeding data into our model, we normalize
values to $v'_{i,j}\in [0,1]$ using min-max scaling:
\begin{equation}
\label{eq:min_max}
v'_{i,j}=\frac{v_{i,j}-\min_{i=1,j=1}^{i=m,j=n}(v_{i,j})}{\max_{i=1,j=1}^{i=m,j=n}(v_{i,j})-\min_{i=1,j=1}^{i=m,j=n}(v_{i,j})}.
\end{equation}
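
For concreteness, the following minimal sketch illustrates this preprocessing pipeline with standard scikit-learn components (mode imputation, ordinal encoding of categorical and binary features, and min-max scaling). The assumed CSV layout (header in the first row, target in the last column) follows the conventions in the supplementary materials; the function name and other details are illustrative assumptions, not the released implementation.
{\small
\begin{verbatim}
# Illustrative preprocessing sketch; not the released code.
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import OrdinalEncoder

def preprocess(csv_path):
    df = pd.read_csv(csv_path)
    X = df.iloc[:, :-1].copy()
    y = df.iloc[:, -1].to_numpy()
    # Columns treated as categorical (binary included).
    cat_cols = X.select_dtypes(
        include=["object", "bool", "category"]).columns
    # Mode imputation for missing values.
    X = pd.DataFrame(
        SimpleImputer(strategy="most_frequent").fit_transform(X),
        columns=X.columns)
    # Ordinal-encode categorical/binary features;
    # numerical features stay unchanged.
    if len(cat_cols):
        X[cat_cols] = OrdinalEncoder().fit_transform(
            X[cat_cols].astype(str))
    X = X.to_numpy(dtype=float)
    # Min-max scaling into [0, 1] as in the equation above
    # (written with a global min/max; per-feature scaling
    # is an equally common choice).
    return (X - X.min()) / (X.max() - X.min()), y
\end{verbatim}
}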

\paragraph{Embedding representation learning}
After obtaining the preprocessed data, we utilize a fully connected layer to learn an embedded representation from the processed features. This is necessary to provide more meaningful representations as input to the proposed architecture. Moreover, while the ordinal encoder enforces ordered representations for categorical features, some of these features may not have an inherent ordering. The embedding representation learner enables our method to learn multi-dimensional representations directly from the features without relying on the imposed ordering. In addition, this embedding representation learning ensures that the downstream Mamba blocks have the same input feature dimensions during training and testing under incremental feature learning. We illustrate the feature incremental learning setting in Figure~\ref{fig:method_incremental}. While most existing methods, except TransTab, are only capable of learning from a fixed set of features, our method MambaTab can learn and transfer weights from Feature Set$_1$ to Feature Set$_2$, and so on. We also utilize layer normalization~\cite{ba2016layer} instead of batch normalization~\cite{ioffe2015batch} on the learned embedded representations due to its independence from batch size.
\paragraph{Cascading Mamba Blocks}
After obtaining the normalized embedded representations from layer normalization, we apply ReLU activation~\cite{agarap2018relu} and pass the resulting values $\{u_k^i\}$, with $u_k^i$ being the $k$-th token for example $i$, to a Mamba block~\cite{gu2023mamba}. This maps features $Batch \times Length \times Dimension \rightarrow Batch \times Length \times Dimension$. Here, $Batch$ is the minibatch size, $Length$ refers to the token sequence length, and $Dimension$ is the number of channels for each input token. For simplicity, we use $Dimension=1$ by default, and $Length$ matches the output dimension from the embedding learning layer (Figure~\ref{fig:method}). Although Mamba blocks can repeat ${\mathcal{M}}$ times, we set ${\mathcal{M}}=1$ as our default value. However, we perform a sensitivity study for ${\mathcal{M}}=2, \cdots, 100$ with stacked Mamba blocks, which are connected with residual connections~\cite{he2016deep}, to evaluate their information retention and propagation capacity.

Inside a Mamba block, two fully connected layers in two branches calculate linear projections ($LP_1,LP_2)$. The first branch $LP_1$'s output passes through a 1D causal convolution and SiLU activation ${\mathcal{S}}(\cdot)$~\cite{silu}, then a structured state space model (SSM). The continuous-time SSM is a system of first-order ordinary differential equations, which maps an input function or sequence $u(t)$ to an output $x(t)$ through a latent state $h(t)$:
\begin{equation}
\label{eq:CT-SSM}
\frac{dh(t)}{dt} = A \, h(t) + B \, u(t), \quad x(t) = C \, h(t),
\end{equation}
where $h(t)$ is $N$-dimensional, with $N$ also known as the state expansion factor, $u(t)$ is $D$-dimensional, with $D$ being the $Dimension$ factor or the number of channels, $x(t)$ is usually taken as one-dimensional, and $A$, $B$, and $C$ are coefficient matrices of appropriate sizes. This dynamic system induces a discrete version governing state evolution and SSM outputs given the input token sequence via time sampling at $\{ k \Delta \}$ with a time interval $\Delta$. This discrete SSM version is a difference equation:
\begin{equation}
\label{eq:DT-SSM}
h_k = \bar{A} \, h_{k-1} + \bar{B} \, u_{k}, \quad x_k = C \, h_{k},
\end{equation}
where $h_k$, $u_k$, and $x_k$ are respectively samples of $h(t)$, $u(t)$, and $x(t)$ at time $k \Delta$, $\bar{A} = \exp(\Delta A)$, and $\bar{B} = (\Delta A)^{-1} (\exp(\Delta A) - I) \Delta B$. For SSMs, a diagonal $A$ is often used, and Mamba additionally makes $B$, $C$, and $\Delta$ linear time-varying functions dependent on the input. In particular, for an input token $u$, $B$ and $C$ are both $Linear_N(u)$, and $\Delta$ is $softplus(parameter + Linear_D(Linear_1 (u)))$, with
$Linear_p (u)$ being a linear projection to a $p$-dimensional space and $softplus$ the activation function. With such time-varying coefficient matrices, the resulting SSM possesses context- and input-selectivity properties~\cite{gu2023mamba}, enabling Mamba blocks to selectively propagate or forget information along the potentially long input token sequence based on the current token. Subsequently, the SSM output is multiplicatively modulated with $\mathcal{S}(LP_2)$ before another fully connected projection. As a result, these integrated blocks empower MambaTab with content-dependent feature extraction and reasoning over long-range dependencies and feature interactions.
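
For intuition, the discrete recurrence in Equation~\ref{eq:DT-SSM} can be unrolled sequentially as in the sketch below. This is a naive reference loop for a single channel with fixed $\bar{A}$, $\bar{B}$, and $C$; the actual Mamba implementation uses input-dependent parameters and a hardware-efficient parallel scan, and the tensor names here are illustrative.
{\small
\begin{verbatim}
# Naive unrolling of the discrete SSM recurrence; intuition only.
import torch

def ssm_scan(u, A_bar, B_bar, C):
    # u: (L,) token sequence; A_bar, B_bar, C: (N,), diagonal A.
    h = torch.zeros_like(A_bar)
    outputs = []
    for u_k in u:
        # h_k = A_bar h_{k-1} + B_bar u_k (elementwise, diagonal A)
        h = A_bar * h + B_bar * u_k
        # x_k = C h_k (in Mamba, B and C also depend on u_k)
        outputs.append(torch.dot(C, h))
    return torch.stack(outputs)
\end{verbatim}
}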
\paragraph{Output Prediction}
In this final stage, our method learns representations from the concatenated Mamba blocks' output $\{x_k^i\}$ of shape $Batch \times Length \times Dimension$, where $x_k^i$ is the $k$-th output token for example $i$ in a minibatch. These are projected via a fully connected layer from $Batch \times Length \times Dimension \rightarrow Batch \times 1$, resulting in a prediction logit $y'_i$ for example $i$. With sigmoid activation,
\begin{equation}
\label{eq:sigmoid}
\mathrm{sigmoid}(y_i') = \frac{1}{1+\exp(-y_i')},
\end{equation}
we obtain the predicted probability score for calculating AUROC and the binary cross-entropy loss.
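
For readers who prefer code, the following is a minimal end-to-end sketch of the MambaTab pipeline (embedding learner, layer normalization, ReLU, one Mamba block with $Dimension=1$, and the prediction head), written with the open-source \texttt{mamba-ssm} package listed in the supplementary installation instructions. Hyperparameter values follow the defaults reported in the Experiments section; the class and argument names are illustrative, and the released implementation may differ.
{\small
\begin{verbatim}
# Minimal MambaTab-style model sketch; not the released code.
import torch
import torch.nn as nn
from mamba_ssm import Mamba

class MambaTabSketch(nn.Module):
    def __init__(self, n_features, length=32,
                 d_state=32, d_conv=4, expand=2):
        super().__init__()
        # Embedding representation learner + layer normalization.
        self.embed = nn.Linear(n_features, length)
        self.norm = nn.LayerNorm(length)
        # One Mamba block on (Batch, Length, Dimension=1).
        self.mamba = Mamba(d_model=1, d_state=d_state,
                           d_conv=d_conv, expand=expand)
        # Fully connected prediction head.
        self.head = nn.Linear(length, 1)

    def forward(self, x):                  # x: (batch, n_features)
        z = torch.relu(self.norm(self.embed(x)))  # (batch, length)
        z = self.mamba(z.unsqueeze(-1))           # (batch, length, 1)
        logit = self.head(z.squeeze(-1))          # (batch, 1)
        return torch.sigmoid(logit).squeeze(-1)   # probabilities
\end{verbatim}
}
Training such a sketch then reduces to standard binary cross-entropy on these probabilities, with the optimizer and learning-rate schedule described in the Implementation Details below.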

\section{Experiments}
\label{sec:experiments}
\subsection{Datasets, Implementation Details, and Baselines}

\paragraph{{\bf{Datasets}}} To systematically evaluate the effectiveness of our method, we utilize 8 diverse public datasets. We provide the datasets' details and abbreviations in Table~\ref{tab:dataset_details}. Links to the datasets can be found in Supplementary Table~\ref{tab:dataset_details_with_links}. Our default experimental settings follow those of~\cite{wang2022transtab}. We split all datasets into train (70\%), validation (10\%), and test (20\%) sets.

\paragraph{{\bf{Implementation Details}}} To keep the preprocessing simple, we follow the approach described in Section~\ref{sec:method_preprocess}, generalizing to all datasets without manual intervention or tuning. After training and validation, we take the best validation model and use it on the test set for prediction. We set up MambaTab with default hyperparameters and also tuned hyperparameters for each dataset under vanilla supervised learning. For our default hyperparameters, we train for 1000 epochs with early stopping patience = 5. We adopt the Adam optimizer~\cite{kingma2014adam} and a cosine-annealing learning rate scheduler with initial learning rate = $1\times10^{-4}$. In addition to training hyperparameters, MambaTab also involves model-related hyperparameters, whose default values are: embedded representation size ($Length$) = 32, SSM state expansion factor ($N$) = 32, local convolution width (d\_conv) = 4, and number of stacked Mamba blocks (${\mathcal{M}}$) = 1.
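
These defaults can be collected in a single configuration object, for example as below; the field names are illustrative assumptions and not necessarily those used in the released \texttt{config.py}.
{\small
\begin{verbatim}
# Illustrative default configuration; names are assumptions.
DEFAULTS = dict(
    epochs=1000,
    early_stopping_patience=5,
    optimizer="Adam",
    initial_lr=1e-4,
    lr_scheduler="cosine_annealing",
    batch_size=100,        # default batch size (see sensitivity study)
    length=32,             # embedded representation size
    d_state=32,            # SSM state expansion factor N
    d_conv=4,              # local convolution width
    n_mamba_blocks=1,      # stacked Mamba blocks, M
)
\end{verbatim}
}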

\paragraph{{\bf{Baselines}}} We extensively benchmark our model by comparing against standard and current state-of-the-art methods. These include LR, XGBoost, MLP, SNN (an MLP variant with SELU), TabNet, DCN, AutoInt, TabTransformer, FT-Transformer, VIME, SCARF, and TransTab. More information about them can be found in Section~\ref{sec:literature}. For fair comparison, we follow their architectures and implementations detailed in TransTab~\cite{wang2022transtab}.

\paragraph{{\bf{Performance Benchmark}}} With default hyperparameters under vanilla supervised learning, our method, denoted MambaTab-D, achieves better performance than state-of-the-art baselines on many datasets and comparable performance on others with far fewer parameters (Table~\ref{tab:pub_datasets_params_sizes}). After tuning hyperparameters, we denote our tuned model by MambaTab-T, whose performance further improves. Moreover, under feature incremental learning, our method substantially outperforms existing methods with only default hyperparameters. We implement MambaTab in PyTorch and will release code upon acceptance. For evaluation, we use the Area Under the Receiver Operating Characteristic curve (AUROC) following~\cite{fttrans,wang2022transtab}. We obtain probability scores via Equation~\ref{eq:sigmoid} on model output logits and use them together with ground truth labels to calculate AUROC.
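
Concretely, the evaluation step amounts to a few lines, shown here with scikit-learn's AUROC implementation; variable names such as \texttt{model}, \texttt{X\_test}, and \texttt{y\_test} are illustrative.
{\small
\begin{verbatim}
# AUROC from predicted probabilities; illustrative only.
import torch
from sklearn.metrics import roc_auc_score

model.eval()
with torch.no_grad():
    probs = model(torch.as_tensor(X_test, dtype=torch.float32))
auroc = roc_auc_score(y_test, probs.cpu().numpy())
\end{verbatim}
}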

\begin{table}[t]
\centering
\caption{Publicly available datasets with statistics (positive sample ratio, train, validation (val), test data points) and abbreviations used in this paper.}
\Huge
\label{tab:dataset_details}
\resizebox{\linewidth}{!}{
\begin{tabular}{lcccccc}
\toprule
Dataset Name & Abbreviation & Datapoints & Train & Val & Test & Positive\\
\midrule
Credit-g & CG & 1000 & 700 & 100 & 200 & 0.70\\
Credit-approval & CA & 690 & 483 & 69 & 138 & 0.56\\
Dresses-sales & DS & 500 & 350 & 50 & 100 & 0.42\\
Adult & AD & 48842 & 34189 & 4884 & 9769 & 0.24\\
Cylinder-bands & CB & 540 & 378 & 54 & 108 & 0.58\\
Blastchar & BL & 7043 & 4930 & 704 & 1409 & 0.27\\
Insurance-co & IO & 5822 & 4075 & 582 & 1165 & 0.06\\
Income-1995 & IC & 32561 & 22792 & 3256 & 6513 & 0.24\\
\bottomrule
\end{tabular}
}
\end{table}

\subsection{Vanilla Supervised Learning Performance}
For this setting, we follow the protocols of~\cite{wang2022transtab}, directly using the training and validation sets for model learning and the test set for evaluation. To overcome potential sampling bias, we report average results over 10 runs with different random seeds on each of the 8 datasets. With default hyperparameters, MambaTab-D outperforms all baselines on 3 public datasets (CG, CA, BL) and has comparable performance to Transformer-based baselines on the others. For example, MambaTab-D outperforms TransTab~\cite{wang2022transtab} on 5 out of 8 datasets (CG, CA, DS, CB, BL). After tuning hyperparameters, MambaTab-T achieves even better performance, outperforming all baselines on 6 datasets and achieving the second best on the other 2.
\begin{table}[t]
\centering
\caption{Test AUROC results on 8 public datasets for vanilla supervised learning. Results reported here are averaged over 10 runs with random splits for our method. The best results are shown in \textbf{bold} and the second best are \underline{underlined}.}
\label{tab:pub_dataset_results}
\setlength{\tabcolsep}{2pt}
\medskip
\resizebox{\linewidth}{!}{
\begin{tabular}{lccccccccc}
\toprule
\multirow{2}{*}{Methods} & \phantom{a} & \multicolumn{8}{c}{Datasets}\\
\cmidrule{3-10}
&\phantom{a}& CG & CA & DS & AD & CB & BL & IO & IC\\
\cmidrule{3-10}
LR & \phantom{a} & 0.720 & 0.836 & 0.557 & 0.851 & 0.748 & 0.801 & 0.769 & 0.860\\
XGBoost & \phantom{a} & 0.726 & 0.895 & 0.587 & 0.912 & \underline{0.892} & 0.821 & 0.758 & \bf0.925\\
MLP & \phantom{a} & 0.643 & 0.832 & 0.568 & 0.904 & 0.613 & 0.832 & 0.779 & 0.893\\
SNN & \phantom{a} & 0.641 & 0.880 & 0.540 & 0.902 & 0.621 & 0.834 & 0.794 & 0.892\\
TabNet & \phantom{a} & 0.585 & 0.800 & 0.478 & 0.904 & 0.680 & 0.819 & 0.742 & 0.896\\
\midrule
DCN & \phantom{a} & 0.739 & 0.870 & \underline{0.674} & \underline{0.913} & 0.848 & 0.840 & 0.768 & 0.915\\
AutoInt & \phantom{a} & 0.744 & 0.866 & 0.672 & \underline{0.913} & 0.808 & 0.844 & 0.762 & 0.916\\
\midrule
TabTrans & \phantom{a} & 0.718 & 0.860 & 0.648 & \bf0.914 & 0.855 & 0.820 & 0.794 & 0.882\\
FT-Trans & \phantom{a} & 0.739 & 0.859 & 0.657 & \underline{0.913} & 0.862 & 0.841 & 0.793 & 0.915\\
\midrule
VIME & \phantom{a} & 0.735 & 0.852 & 0.485 & 0.912 & 0.769 & 0.837 & 0.786 & 0.908\\
SCARF & \phantom{a} & 0.733 & 0.861 & 0.663 & 0.911 & 0.719 & 0.833 & 0.758 & 0.905\\
TransTab & \phantom{a} & 0.768 & 0.881 & 0.643 & 0.907 & 0.851 & 0.845 & \bf0.822 & 0.919\\
\midrule
MambaTab-D & \phantom{a} & \underline{0.771} & \underline{0.954} & 0.643 & 0.906 & 0.862 & \underline{0.852} & 0.785 & 0.906\\
MambaTab-T & \phantom{a} & \bf0.801 & \bf0.963 & \bf0.681 & \bf0.914 & \bf0.896 & \bf0.854 & \underline{0.812} & \underline{0.920}\\
\bottomrule
\end{tabular}
}
\end{table}

\subsection{Feature Incremental Learning Performance}
For this setting, we divide the feature set $F$ of each dataset into three non-overlapping subsets $s_1,s_2,s_3$. $set_1$ contains the features in $s_1$, $set_2$ contains those in $s_1$ and $s_2$, and $set_3$ contains all features in $s_1,s_2,s_3$. While other baselines can only learn from either $set_1$ by dropping all incrementally added features (with respect to $s_1$) or $set_3$ by dropping old data, TransTab~\cite{wang2022transtab} and MambaTab can incrementally learn from $set_1$ to $set_2$ to $set_3$. In our method, we simply change the input feature cardinality $n(set_i)$ between settings, with the architecture otherwise fixed, as sketched below. Our method works because Mamba has strong content and context selectivity for extrapolation and because we keep the representation space dimension fixed, that is, independent of the feature set cardinality $n(F)$. This demonstrates the adaptability and simplicity of our method for incremental environments. Even with default hyperparameters, MambaTab-D outperforms all baselines, as shown in Table~\ref{tab:feature_incremental_performance}. Here, we report results averaged over 10 runs with different random seeds. Since it already achieves strong performance, we do not tune the hyperparameters further, although doing so could potentially improve performance.
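
A hedged sketch of this weight transfer, building on the illustrative \texttt{MambaTabSketch} class above, is given below: only the first linear layer is re-instantiated when features are added, while the layer normalization, Mamba block, and prediction head keep their trained weights. The helper name and the column-aligned copy are assumptions for illustration, not necessarily how the released code implements the transfer.
{\small
\begin{verbatim}
# Illustrative weight transfer for feature incremental learning.
import torch
import torch.nn as nn

def grow_feature_set(model, n_new_features):
    old = model.embed                    # Linear(n_old -> length)
    new = nn.Linear(n_new_features, old.out_features)
    with torch.no_grad():
        n_old = old.in_features
        # Reuse weights of previously seen features; columns for
        # newly added features keep their fresh initialization.
        new.weight[:, :n_old] = old.weight
        new.bias.copy_(old.bias)
    model.embed = new   # norm, mamba, and head are unchanged
    return model
\end{verbatim}
}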
\begin{table}[t]
\centering
\caption{Test AUROC results on 8 public datasets for feature incremental learning. Results reported here are averaged over 10 runs with random splits for our method MambaTab-D. The best results are shown in \textbf{bold}.}
\label{tab:feature_incremental_performance}
\setlength{\tabcolsep}{2pt}
\medskip
\resizebox{\linewidth}{!}{
\begin{tabular}{lccccccccc}
\toprule
\multirow{2}{*}{Methods} & \phantom{a} & \multicolumn{8}{c}{Datasets} \\
\cmidrule{3-10}
&\phantom{a}& CG & CA & DS & AD & CB & BL & IO & IC\\
\cmidrule{3-10}
LR & \phantom{a} & 0.670 & 0.773 & 0.475 & 0.832 & 0.727 & 0.806 & 0.655 & 0.825 \\
XGBoost & \phantom{a} & 0.608 & 0.817 & 0.527 & 0.891 & 0.778 & 0.816 & 0.692 & 0.898\\
MLP & \phantom{a} & 0.586 & 0.676 & 0.516 & 0.890 & 0.631 & 0.825 & 0.626 & 0.885\\
SNN & \phantom{a} & 0.583 & 0.738 & 0.442 & 0.888 & 0.644 & 0.818 & 0.643 & 0.881\\
TabNet & \phantom{a} & 0.573 & 0.689 & 0.419 & 0.886 & 0.571 & 0.837 & 0.680 & 0.882\\
\midrule
DCN & \phantom{a} & 0.674 & 0.835 & 0.578 & 0.893 & 0.778 & 0.840 & 0.660 & 0.891\\
AutoInt & \phantom{a} & 0.671 & 0.825 & 0.563 & 0.893 & 0.769 & 0.836 & 0.676 & 0.887\\
\midrule
TabTrans & \phantom{a} & 0.653 & 0.732 & 0.584 & 0.856 & 0.784 & 0.792 & 0.674 & 0.828\\
FT-Trans & \phantom{a} & 0.662 & 0.824 & 0.626 & 0.892 & 0.768 & 0.840 & 0.645 & 0.889\\
\midrule
VIME & \phantom{a} & 0.621 & 0.697 & 0.571 & 0.892 & 0.769 & 0.803 & 0.683 & 0.881\\
SCARF & \phantom{a} & 0.651 & 0.753 & 0.556 & 0.891 & 0.703 & 0.829 & 0.680 & 0.887\\
TransTab & \phantom{a} & 0.741 & 0.879 & 0.665 & 0.894 & 0.791 & 0.841 & 0.739 & 0.897\\
\midrule
MambaTab-D & \phantom{a} & \bf0.787 & \bf0.961 & \bf0.669 & \bf0.904 & \bf0.860 & \bf0.853 & \bf0.783 & \bf0.908\\
\bottomrule
\end{tabular}
}
\end{table}
\subsection{Learnable Parameter Comparison}
Our method not only achieves superior performance compared to existing state-of-the-art methods, but is also memory- and space-efficient. We demonstrate our method's advantage in terms of learnable parameter count compared with Transformer-based approaches in Table~\ref{tab:pub_datasets_params_sizes}. Both MambaTab-D and MambaTab-T achieve comparable or better performance than TransTab, typically with $<1\%$ of its learnable parameters. To evaluate learnable parameter sizes, we use the default settings specified in FT-Trans, TransTab, and TabTrans\footnote{\url{https://github.com/lucidrains/tab-transformer-pytorch}}\footnote{\url{https://github.com/RyanWangZf/transtab}}. We also notice that, despite varying feature counts, TransTab's model size remains unchanged. The most important tunable hyperparameters for MambaTab include the block expansion factor (the local kernel size), the state expansion factor ($N$), and the embedded representation space dimension. We perform sensitivity analyses on them in Section~\ref{sec:sensitivity-ana} and also fine-tune them for each dataset. In addition, we conduct an ablation study for the normalization layer of our model.
\begin{table}[t]
\centering
\caption{Comparison of total learnable parameters between our method MambaTab and Transformer-based methods (M = million, K = thousand).}
\label{tab:pub_datasets_params_sizes}
\setlength{\tabcolsep}{2pt}
\medskip
\resizebox{\linewidth}{!}{
\begin{tabular}{lccccccccc}
\toprule
\multirow{2}{*}{Methods} & \phantom{a} & \multicolumn{8}{c}{Datasets} \\
\cmidrule{3-10}
&\phantom{a}& CG & CA & DS & AD & CB & BL & IO & IC\\
\cmidrule{3-10}
TabTrans & \phantom{a} & 2.7M & 1.2M & 2.0M & 1.2M & 6.5M & 3.4M & 87.0M & 1.0M\\
FT-Trans & \phantom{a} & 176K & 176K & 179K & 178K & 203K & 176K & 193K & 177K\\
TransTab & \phantom{a} & 4.2M & 4.2M & 4.2M & 4.2M & 4.2M & 4.2M & 4.2M & 4.2M\\
\cmidrule{1-10}
MambaTab-D & \phantom{a} & \bf13K & \bf13K & 13K & \bf13K & \bf14K & 13K & 15K & 13K \\
MambaTab-T & \phantom{a} & 50K & 38K & \bf5K & 255K & 30K & \bf11K & \bf13K & \bf10K\\
\bottomrule
\end{tabular}
}
\end{table}
\subsection{Hyperparameter Tuning}
As mentioned above, we tune the important hyperparameters using the validation loss; the test set is never used for tuning. The hyperparameters achieving the best validation loss are used for testing. We report test results averaged over 10 runs with different random seeds for the tuned MambaTab (Table~\ref{tab:pub_dataset_results}). Learnable parameter sizes of MambaTab-T are reported in Table~\ref{tab:pub_datasets_params_sizes}. Interestingly, MambaTab-T sometimes uses even fewer parameters than MambaTab-D, e.g., on DS, BL, IO, and IC. We list the key components of our tuned model MambaTab-T in Table~\ref{tab:hyper_params}, which shows the tuned values for these components, with other training-related hyperparameters, such as batch size and learning rate, kept at default values; see Implementation Details in Section~\ref{sec:experiments}.
\begin{table}[t]
\centering
\caption{Hyperparameters of our tuned model, MambaTab-T. The performance of MambaTab-T for vanilla supervised learning has been shown in Table~\ref{tab:pub_dataset_results}.}
\label{tab:hyper_params}
\setlength{\tabcolsep}{2pt}
\medskip
\resizebox{\linewidth}{!}{
\begin{tabular}{lccccccccc}
\toprule
Hyperparameters & \phantom{a} & \multicolumn{8}{c}{Datasets} \\
\cmidrule{3-10}
&\phantom{a}& CG & CA & DS & AD & CB & BL & IO & IC\\
\cmidrule{3-10}
Embedding Representation Space & \phantom{a} & 64 & 32 & 16 & 64 & 32 & 16 & 16 & 32\\
\cmidrule{3-10}
State Expansion Factor & \phantom{a} & 16 & 64 & 32 & 64 & 8 & 4 & 8 & 64\\
\cmidrule{3-10}
Block Expansion Factor & \phantom{a} & 3 & 4 & 2 & 10 & 7 & 10 & 9 & 1\\
\bottomrule
\end{tabular}
}
\end{table}
\section{Hyperparameter Sensitivity Analysis and Ablation Study}
\label{sec:sensitivity-ana}
In this section, we present extensive sensitivity analyses and ablation experiments on MambaTab's most important hyperparameters using two randomly selected datasets: Cylinder-bands (CB) and Credit-g (CG). We measure performance while changing one factor at a time, including the block expansion factor, state expansion factor, and embedding representation space dimension, keeping $\mathcal{M}=1$ and the other hyperparameters at the default values of MambaTab-D. We report results averaged over 10 runs with different random splits to overcome potential bias due to randomness.
\subsection{Block Expansion Factor}
We experiment with block expansion factors (kernel sizes) in $\{1,2,\dots,10\}$, keeping the other hyperparameters at the default values of MambaTab-D. As seen in Figure~\ref{fig:ablation_block_expansion}, MambaTab's performance changes only slightly with different block expansion factors, with no clear or monotonic trends. Thus we set the default to 2, inspired by~\cite{gu2023mamba}, though tuning this parameter further could improve performance on some datasets.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/expand_factor_ablation.pdf}
\caption{Sensitivity of MambaTab AUROC to the block expansion factor on two randomly chosen datasets, CB and CG.}
\label{fig:ablation_block_expansion}
\end{figure}

\subsection{State Expansion Factor}
We demonstrate the effect of the state expansion factor ($N$) using values in $\{4,8,16,32,64,128\}$. As seen in Figure~\ref{fig:ablation_state_expansion}, MambaTab's AUROC improves with an increasing state expansion factor on both CG and CB. Thus tuning this hyperparameter could further improve performance. However, a larger state expansion factor consumes more memory. To balance performance against memory consumption, we select 32 as the default value.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/state_factor_ablation.pdf}
\caption{Sensitivity of MambaTab AUROC to the state expansion factor on two randomly chosen datasets, CB and CG.}
\label{fig:ablation_state_expansion}
\end{figure}
\subsection{Size of Embedded Representations}
As mentioned in the Method section, we allow the model the flexibility to learn the embedding via a fully connected layer. We also perform a sensitivity analysis for the length of the embedded representations, with values in $\{4, 8, 16, 32, 64, 128\}$. As seen in Figure~\ref{fig:ablation_representation}, MambaTab's performance generally increases on both CG and CB with larger embedding sizes, though at the cost of more parameters and thus larger CPU/GPU memory. To balance performance against model size, we keep the default embedding length at 32.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/representation.pdf}
\caption{Sensitivity of MambaTab performance in AUROC to the size of embedded representations on two datasets, CB and CG.}
\label{fig:ablation_representation}
\end{figure}

\subsection{Ablation of Layer Normalization}
We examine the effect of layer normalization, which is applied to the embedded representations in our model architecture shown in Figure~\ref{fig:method}. We contrast the performance obtained by keeping or dropping this layer in vanilla supervised learning experiments on the CG and CB datasets. The results in the AUROC metric are shown in Table~\ref{tab:layer_norm}.
\begin{table}[t]
\centering
\caption{Ablation analysis of layer normalization. Experiments are conducted with and without layer normalization under supervised learning using the model architecture shown in Figure~\ref{fig:method}. Results shown are test set AUROC.}
\label{tab:layer_norm}
\setlength{\tabcolsep}{2pt}
\medskip

\begin{tabular}{lcccc}
\toprule
Ablation & \phantom{a} & \multicolumn{3}{c}{Datasets} \\
\midrule
&& CG &\phantom{a}& CB\\
Without Layer Normalization & \phantom{a} & 0.759 & \phantom{a} & 0.847\\
With Layer Normalization & \phantom{a} & \bf0.771 & \phantom{a} & \bf0.862 \\
\bottomrule
\end{tabular}

\end{table}
We can see the effectiveness of the normalization layer. Without layer normalization, the embeddings would pass directly through the ReLU activation, as shown in the overall scheme (Figure~\ref{fig:method}). On both the CG and CB datasets, MambaTab's performance improves with layer normalization versus without. This ablation thus justifies the incorporation of layer normalization in our model.
\subsection{Effect of Batch Size}
In addition to the above model-related hyperparameters, we also perform a sensitivity analysis on the batch size. Due to its small model size, MambaTab can easily handle a large number of samples per batch. We report AUROC results in Figure~\ref{fig:batch_ablation} using batch sizes in $\{60, 80, 100, 120, 140\}$. We see only small variations in performance on both CG and CB across minibatch sizes, demonstrating our method's robustness to batch size. Considering this insensitivity, we set 100 as the default batch size.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/batch_ablation.pdf}
\caption{Sensitivity of MambaTab performance in AUROC to batch size. Other parameters are kept at default values while the batch size is varied from 60 to 140 in steps of 20.}
\label{fig:batch_ablation}
\end{figure}
\subsection{Scaling Mamba}
\begin{figure}[t]
\centering
\subfloat[AUROC results (CB)]{{\includegraphics[trim={0.25cm 0.25cm 0.3cm 0.25cm},clip,width=0.448\linewidth]{figures/cb_auc.pdf} }}\qquad
\subfloat[Learnable parameters (CB)]{{\includegraphics[trim={0.25cm 0.25cm 0.3cm 0.25cm},clip,width=0.448\linewidth]{figures/cb_params.pdf} }}\qquad
\subfloat[AUROC results (CG)]{{\includegraphics[trim={0.25cm 0.25cm 0.3cm 0.25cm},clip,width=0.448\linewidth]{figures/cg_auc.pdf} }}\qquad
\subfloat[Learnable parameters (CG)]{{\includegraphics[trim={0.25cm 0.25cm 0.3cm 0.25cm},clip,width=0.448\linewidth]{figures/cg_params.pdf} }}
\caption{AUROC results and learnable parameter sizes versus the number of stacked residual Mamba blocks ($\mathcal{M}$) on the CB and CG datasets. Other parameters are kept at the default values of MambaTab-D.}
\label{fig:scaling_mamba}
\end{figure}
Although we achieve comparable or superior performance to current state-of-the-art methods with the default ${\mathcal{M}} = 1$ under regular supervised learning (see Table~\ref{tab:pub_dataset_results}), we also study the effect of scaling Mamba blocks via residual connections following~\cite{he2016deep}. We stack Mamba blocks as in Figure~\ref{fig:method}, with $\mathcal{M}=2$ up to 100 blocks, connected as shown in Equation~\ref{eq:res_connection}:
\begin{equation}
\label{eq:res_connection}
h^{(i)}=Mamba_i(h^{(i-1)})+h^{(i-1)}.
\end{equation}
Here, $h^{(i)}$ is the hidden state from the $i$-th Mamba block, $Mamba_i$, which takes the prior block's hidden state $h^{(i-1)}$ as input. As seen in Figure~\ref{fig:scaling_mamba}, with an increasing number of Mamba blocks, MambaTab retains comparable performance while the learnable parameters grow linearly on both CG and CB. This demonstrates the Mamba blocks' information retention capacity. We observe that a few Mamba blocks suffice for strong performance; hence, we use $\mathcal{M}=1$ by default in MambaTab.
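
For illustration, the residual stacking in Equation~\ref{eq:res_connection} can be written as the following sketch on top of the \texttt{mamba-ssm} package; as elsewhere, the class name and default arguments are assumptions rather than the released implementation.
{\small
\begin{verbatim}
# Residual stack of M Mamba blocks; illustrative only.
import torch.nn as nn
from mamba_ssm import Mamba

class ResidualMambaStack(nn.Module):
    def __init__(self, n_blocks, d_model=1,
                 d_state=32, d_conv=4, expand=2):
        super().__init__()
        self.blocks = nn.ModuleList(
            Mamba(d_model=d_model, d_state=d_state,
                  d_conv=d_conv, expand=expand)
            for _ in range(n_blocks))

    def forward(self, h):       # h: (batch, length, d_model)
        for block in self.blocks:
            h = block(h) + h    # h^(i) = Mamba_i(h^(i-1)) + h^(i-1)
        return h
\end{verbatim}
}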

\section{Future Scope}
Although we have evaluated our method on tabular datasets for classification, in the future we would like to extend it to regression tasks on tabular data as well. Our method is flexible enough to accommodate regression, since the output layer is kept open to predict real values. Therefore, our future research scope includes, but is not limited to, evaluating performance on different learning tasks.
\section{Conclusion}
This paper presents MambaTab, a simple yet effective method for handling tabular data. It uses Mamba, a state-space model variant, as a building block to classify tabular data. MambaTab can effectively learn and predict in both vanilla supervised learning and feature incremental learning settings. Moreover, it requires no manual data preprocessing. MambaTab demonstrates superior performance over current state-of-the-art deep learning and traditional machine learning baselines under both supervised and incremental learning on 8 public datasets. Remarkably, MambaTab occupies only a small fraction of the memory, in terms of learnable parameter size, of Transformer-based baselines for tabular data. Extensive results demonstrate MambaTab's efficacy, efficiency, and generalizability for diverse tabular learning applications.
\section*{Acknowledgments}
We thank the creators of the public datasets used for making these valuable resources openly available for research purposes. We also thank the authors of the baseline models and Mamba~\cite{gu2023mamba} for publicly releasing their valuable code.
This research is supported in part by the NSF under Grant IIS 2327113 and the NIH under Grants R21AG070909, P30AG072946, and R01HD101508-01.
\clearpage
\bibliographystyle{named}
\bibliography{ijcai24}
\clearpage
\nolinenumbers
\setcounter{table}{0}
\section*{Supplementary Materials}
\captionsetup[table]{name={Supplementary Table}}
\section*{Dataset details with accessible links}
In this section, we provide accessible links to the datasets along with their abbreviations.
\begin{table}[H]
\Large
\centering
\caption{Publicly available datasets: abbreviations and external access links.}
\label{tab:dataset_details_with_links}
\resizebox{\linewidth}{!}{
\begin{tabular}{lcc}
\toprule
Dataset Name & Abbreviation & External link\\
\midrule
Credit-g & CG & \color{blue}\href{https://www.openml.org/search?type=data\&status=active\&id=31}{openml.org/search?type=data\&status=active\&id=31}\\
Credit-approval & CA & \color{blue}\href{https://archive.ics.uci.edu/ml/datasets/credit+approval}{archive.ics.uci.edu/ml/datasets/credit+approval}\\
Dresses-sales & DS & \color{blue}\href{https://www.openml.org/search?type=data\&status=active\&id=23381}{openml.org/search?type=data\&status=active\&id=23381}\\
Adult & AD & \color{blue}\href{https://www.openml.org/search?type=data\&status=active\&id=1590}{openml.org/search?type=data\&status=active\&id=1590}\\
Cylinder-bands & CB & \color{blue}\href{https://www.openml.org/search?type=data\&status=active\&id=6332}{openml.org/search?type=data\&status=active\&id=6332}\\
Blastchar & BL & \color{blue}\href{https://www.kaggle.com/datasets/blastchar/telco-customer-churn}{kaggle.com/datasets/blastchar/telco-customer-churn}\\
Insurance-co & IO & \color{blue}\href{https://archive.ics.uci.edu/ml/datasets/Insurance+Company+Benchmark+\%28COIL+2000\%29}{archive.ics.uci.edu/ml/datasets/Insurance+Company+Benchmark+\%28COIL+2000\%29}\\
Income-1995 & IC & \color{blue}\href{https://www.kaggle.com/datasets/lodetomasi1995/income-classification}{kaggle.com/datasets/lodetomasi1995/income-classification}\\
\bottomrule
\end{tabular}
}
\end{table}

\section*{Instructions on how to run the code}
The following instructions describe how to run our code; they are meant to be followed in order.
\subsection*{Installation}
Please install the following required libraries:
\begin{itemize}
\item pip install torch==2.1.1 torchvision==0.16.1
\item pip install causal-conv1d==1.1.1
\item pip install mamba-ssm
\end{itemize}
\subsection*{File instructions and brief descriptions}
\begin{itemize}
\item $\bf config.py$: contains training-related configuration settings.
\item $\bf MambaTab.py$: contains the model code for our method MambaTab.
\item $\bf supervised\_mambatab.py$: contains code for the vanilla supervised learning setting.
\item $\bf feature\_incremental.py$: contains code for the feature incremental learning setting.
\item $\bf train\_val.py$: contains the code for training and validating the model over the epochs.
\item $\bf utility.py$: contains code for data reading and preprocessing.
\end{itemize}
\subsection*{Data download and processing}
Please download the data using the accessible links provided in Supplementary Table~\ref{tab:dataset_details_with_links}.\\
\textcolor{amber}{\underline{Cautions:}}
\begin{itemize}
\item The dataset must be in $.csv$ format.
\item The header row must be the first row of the $.csv$.
\item The target column must be the last column of the $.csv$.
\item Rename $X.csv$ to $data\_processed.csv$ and place it into the $datasets/X$ folder, where 'X' can be 'dress', 'cylinder', etc. A minimal loading example is shown after this list.
\end{itemize}
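
For example, a dataset prepared according to these cautions can be read as follows; this is illustrative only, the released $\bf utility.py$ may handle loading differently, and 'dress' is just one example folder name.
{\small
\begin{verbatim}
# Illustrative loading of a prepared dataset.
import pandas as pd

df = pd.read_csv("datasets/dress/data_processed.csv")  # header row first
X = df.iloc[:, :-1]   # all feature columns
y = df.iloc[:, -1]    # target is the last column
\end{verbatim}
}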

\subsection*{Configurations}
Please use our $\bf config.py$ file for the configurations necessary to run the code. Additional model-related configurations can be modified in our $\bf MambaTab.py$ file.
\subsection*{Running specific files}
\begin{itemize}
\item To run vanilla supervised learning, please execute $\bf supervised\_mambatab.py$. Please make sure to change the necessary configurations in $\bf config.py$ before running this file. We have included the necessary comments for an easier understanding of the code flow.
\item To run the feature incremental learning setting, please execute $\bf feature\_incremental.py$. Here, as mentioned in our paper, we divide the features into 3 non-overlapping sets and perform the training incrementally.
\item As mentioned above, our code is flexible with respect to changing parameters and obtaining results with tuned settings.
\end{itemize}
\end{document}