Columns: id (string, 14-17 chars), paper_id (string, 33-34 chars), text (string, 1-500 chars), metadata (dict)
2312.16600v1_38
http://arxiv.org/abs/2312.16600v1
pairs, and the remaining pairs as negative pairs. For cell i, its cluster-aware contrastive loss in terms of z_i is as follows: l_clu(z_i) = -log [ ( ∑_j^B E_{z_i, z'_j ∈ l_i} · exp(sim(z_i, z'_j)/T) + ∑_j^B E_{z_i, z_j ∈ l_i} · 𝕀_{i ≠ j} · exp(sim(z_i, z_j)/T) ) / ( ∑_j^B E_{z_i, z'_j ∉ l_i} · exp(sim(z_i, z'_j)/T) + ∑_j^B E_{z_i, z_j ∉ l_i} · exp(sim(z_i, z_j)/T) ) ], where l_i is the pseudo-label of z_i. E_{z_i, z'_j ∈ l_i} is an indicator function whose value is 1 if the label of z'_j is l_i, and 0 otherwise. E_{z_i,
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_39
http://arxiv.org/abs/2312.16600v1
label of z'_j is l_i, and 0 otherwise. E_{z_i, z'_j ∉ l_i} is also an indicator function, whose value is 0 if the label of z'_j is l_i, and 1 otherwise. The overall cluster-aware loss is as follows: ℒ_clu = (1/2B) ∑_{i=1}^{B} [l_clu(z_i) + l_clu(z'_i)], where l_clu(z'_i) is cell i's cluster-aware contrastive loss in terms of z'_i. This loss is particularly effective because it tries to minimize the distance between cells of the same cluster and maximize the distance between cells of different
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_40
http://arxiv.org/abs/2312.16600v1
maximize the distance between cells of different clusters. Finally, by combining ℒ_ins and ℒ_clu with the hyperparameter λ, we have the whole loss function: ℒ = ℒ_ins + λ·ℒ_clu. We set λ = 0.1 in our experiments; a small λ limits the negative effect of K-means clustering errors. With this loss, CICL exploits the cluster structure underlying the data to optimize data representation and cluster label assignment simultaneously. Compared with traditional contrastive
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_41
http://arxiv.org/abs/2312.16600v1
assignment. Compared with traditional contrastive learning, ours is iterative contrastive learning, which iteratively learns cell representations in a direction favorable for clustering. §.§ Algorithm Here, we present the algorithm of our method in Alg. <ref>, which consists of two phases: the training phase and the clustering phase. In the training phase, in each epoch we first randomly split the training data X^train into n_B = [|X^train|/S_B] mini-batches. For each mini-batch X^j,
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_42
http://arxiv.org/abs/2312.16600v1
mini-batches. For each mini-batch X^j, we generate two augmented views X^j_aug1 and X^j_aug2. Next, we obtain the representations H^j_aug1, H^j_aug2 and H^j of X^j_aug1, X^j_aug2 and X^j with a transformer encoder. We perform K-means on H^j to get the centroid matrix C^j, and then the pseudo-labels L^j are obtained from H^j and C^j. Meanwhile, H^j_aug1 and H^j_aug2 are input into the projection head to obtain projections Z^j_aug1 and Z^j_aug2. The whole loss ℒ consists of ℒ_ins and ℒ_clu, which are
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_43
http://arxiv.org/abs/2312.16600v1
loss ℒ consists of ℒ_ins and ℒ_clu, which are computed with Z^j_aug1, Z^j_aug2 and L^j by Equ. (<ref>) and Equ. (<ref>), respectively. In the clustering phase, we use K-means to cluster the representations H^test of the testing data X^test, encoded by the trained transformer encoder, to generate the clustering result R^test. § EXPERIMENTS AND RESULTS §.§ Implementation Details and Experimental Setup Here, we present the implementation details of our method and the experimental setup in
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_44
http://arxiv.org/abs/2312.16600v1
of our method and the experimental setup in Table <ref>. CICL uses similar parameters on all of the datasets in Table <ref>, and all compared methods use the default parameters provided in their original papers. All the experiments in this paper were conducted on 4 NVIDIA RTX3090 GPUs. §.§ Compared Existing Methods We compare CICL with 8 existing scRNA-seq data clustering methods, including a graph-based method Seurat <cit.>, a multi-kernel learning method SIMLR <cit.>, a transfer learning method
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_45
http://arxiv.org/abs/2312.16600v1
method SIMLR <cit.>, a transfer learning method ItClust <cit.>, a contrastive learning method CLEAR <cit.>, a deep graph embedding based method GraphSCC <cit.>, and three deep embedding based methods scDeepCluster <cit.>, scDHA <cit.> and scVI <cit.>. More information about these methods is as follows: * Seurat <cit.> is a widely used pipeline for single-cell gene expression data analysis. It performs dimensionality reduction first, then employs the Louvain method on the shared-nearest-neighbor graph. *
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_46
http://arxiv.org/abs/2312.16600v1
method on the shared-nearest-neighbor graph. * SIMLR <cit.> combines multiple kernels to learn the similarity between samples and performs spectral clustering. * ItClust <cit.> trains a neural network to extract information from a well-labeled source dataset, then initializes the target network with parameters estimated from the source network. * CLEAR <cit.> is a self-supervised contrastive learning-based integrative scRNA-seq data analysis tool. It introduces a novel data augmentation method
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_47
http://arxiv.org/abs/2312.16600v1
It introduces a novel data augmentation method and performs contrastive learning with the InfoNCE loss. * GraphSCC <cit.> extracts the structural relationships between cells using a graph convolutional network and optimizes the representations with a dual self-supervised module. * scDeepCluster <cit.> adds a ZINB distribution model, which simulates the distribution of scRNA-seq data, to a denoising autoencoder, and learns feature representations and clusters by explicitly modeling scRNA-seq data. *
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_48
http://arxiv.org/abs/2312.16600v1
by explicitly modeling scRNA-seq data. * scDHA <cit.> first exploits a non-negative kernel autoencoder for dimensionality reduction and then projects the data onto a low-dimensional space with a self-learning network based on a variational autoencoder (VAE). * scVI <cit.> is a comprehensive tool for the analysis of scRNA-seq data. It models scRNA-seq data in a deep generative manner with the ZINB model and a variational autoencoder. §.§ Performance Comparison Table <ref> summarizes the clustering
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_49
http://arxiv.org/abs/2312.16600v1
Comparison Table <ref> summarizes the clustering performance of CICL and 8 existing methods on 25 scRNA-seq datasets. CICL achieves the best ARI and NMI on 10 and 9 datasets, respectively, and the 2nd-best ARI and NMI on 7 and 10 datasets. On average, our method obtains the best ARI (0.7757) and NMI (0.8057) on the 25 datasets. In particular, CICL surpasses scDHA by 13.87% and 4.96% in terms of average ARI and NMI, which shows the outstanding clustering performance of our method. We can also
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_50
http://arxiv.org/abs/2312.16600v1
clustering performance of our method. We can also see that CICL performs excellently on large datasets such as Bach (23184 cells), hrvatin (48266 cells), QX_Trachea (11269 cells), QX_Spleen (9552 cells) and Wang_Lung (9519 cells). Furthermore, our method also achieves good clustering scores on datasets with 10 or more subtypes of cells, such as muraro (10 subtypes), pollen (11 subtypes), QS_Lung (11 subtypes) and Young (11 subtypes). In summary, CICL surpasses the existing methods by 14%
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_51
http://arxiv.org/abs/2312.16600v1
CICL surpasses the existing methods by 14% to 280% and by 5% to 133% on average in terms of the performance metrics ARI and NMI, respectively. Note that the latest contrastive learning-based method, CLEAR, does not show advantages over the other methods on the 25 datasets. However, our method achieves excellent results, thanks to the proposed cluster-aware iterative contrastive learning mechanism. §.§ Visualization with Low-dimensional Representations In cellular heterogeneity analysis,
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_52
http://arxiv.org/abs/2312.16600v1
cellular heterogeneity analysis, visualization is an intuitive and effective way to display different cell types. We use t-SNE <cit.> to project the representations of cells into a two-dimensional space and visualize them in Fig. <ref>. As we can see, CICL learns to embed cells of the same type within the same cluster while separating cells of different types well into different clusters, producing clustering results similar to the ground-truth cell annotations. The clustering result of CICL is
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_53
http://arxiv.org/abs/2312.16600v1
annotations. The clustering result of CICL is superior to that of the other methods on the hrvatin dataset. Although the performance of scDHA and scVI is also good, they divide the oligodendrocyte cells into multiple clusters. Furthermore, CICL performs well on QS_Lung: cells of different types are effectively separated in the embedding space, much better than with the other methods. As for the Wang_Lung dataset with two subtypes, CICL not only achieves the best ARI (see Table
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_54
http://arxiv.org/abs/2312.16600v1
CICL not only achieves the best ARI (see Table <ref>) but also exhibits the best clustering visualization effect: the data is grouped into two distinct clusters. Fig. <ref> illustrates the clustering process of our iterative contrastive learning on the muraro dataset. The upper and lower figures represent the clustering results of our method and the ground truth, respectively. We can see that various types of cells are distributed chaotically in the early epochs (e.g., epoch = 0, 3, 6). However,
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_55
http://arxiv.org/abs/2312.16600v1
the early epochs (e.g., epoch = 0, 3, 6). However, as the iterative learning proceeds, CICL splits different types of cells with growing accuracy. At epoch 50, our method correctly clusters the data. In summary, CICL is able to gradually refine the clustering outcome and eventually makes the clustering result match the ground truth. This demonstrates that our model iteratively learns increasingly accurate cell representations. Furthermore, we show how the clustering performance metrics ARI
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_56
http://arxiv.org/abs/2312.16600v1
how the clustering performance metrics ARI and NMI change during the iterative contrastive learning process on the muraro and pollen datasets in Fig. <ref>. We can see that in the early epochs (epoch < 60 on muraro and epoch < 50 on pollen), both metrics undergo a period of rapid increase and acute fluctuation. After that, the metrics enter a relatively stable period. Certainly, excessive training can also lead to overfitting and result in slight degradation of model performance, as we can see on
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_57
http://arxiv.org/abs/2312.16600v1
of model performance, as we can see on the pollen dataset. §.§ Ablation Study Here, we conduct ablation studies on the effect of cluster-aware contrastive learning. Cluster-aware contrastive loss. One of the major innovations of CICL is the cluster-aware contrastive learning mechanism, which incorporates cluster-structure information into the contrastive loss, thereby enhancing the representations of cells. To validate the effectiveness of this mechanism, we conduct an ablation study. For comparison, we
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_58
http://arxiv.org/abs/2312.16600v1
we conduct an ablation study. For comparison, we consider a variant without the cluster-aware loss (i.e., the 2nd term in Equ. (<ref>)). The results are presented in Fig. <ref>. Here, the vertical axis shows the results of our method, and the horizontal axis presents the results of the variant without the cluster-aware contrastive loss. Notably, in terms of both ARI and NMI, the majority of points lie above the line y = x, indicating that CICL outperforms the variant model, which affirms the
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_59
http://arxiv.org/abs/2312.16600v1
outperforms the variant model, which affirms the efficacy of our new contrastive learning loss. Nevertheless, we also see that on some datasets (e.g., kolodziejczyk, Mammary_Gland and Tosches_turtle), CICL exhibits similar or even inferior performance, possibly caused by pseudo-label errors. Effect of hyperparameter λ. Here, we investigate the effect of the hyperparameter λ on the performance of the model. We increase λ from 0 to 1.0 and report the performance in terms of ARI and
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_60
http://arxiv.org/abs/2312.16600v1
and report the performance in terms of ARI and NMI. The results are illustrated in Fig. <ref>. Our method performs worst at λ = 0 (i.e., without the cluster-aware contrastive loss). As λ increases, performance improves rapidly. At λ = 0.1, both ARI and NMI reach their highest points. After that, ARI and NMI decrease slightly and gradually stabilize as λ increases further. The result indicates that the model considerably benefits from the cluster-aware
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_61
http://arxiv.org/abs/2312.16600v1
considerably benefits from the cluster-aware contrastive loss. In our experiments, we set λ = 0.1 to limit the potential negative impact of pseudo-label errors. § CONCLUSION In this paper, to boost the performance of scRNA-seq data clustering analysis, we propose a novel approach called CICL. CICL adopts an iterative representation-learning and clustering framework with an innovative cluster-aware contrastive loss. By comprehensively exploiting the underlying cluster structure of the training data, CICL
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_62
http://arxiv.org/abs/2312.16600v1
cluster structure of the training data, CICL can progressively learn better scRNA-seq data representations and thus achieve better clustering performance. Extensive experiments on 25 real scRNA-seq datasets show that CICL outperforms the state-of-the-art methods in most cases and achieves a dominant advantage over the existing methods on average. Future work will focus on replacing K-means with advanced clustering methods to generate more accurate pseudo-labels, and on extending our idea to other
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.16600v1_63
http://arxiv.org/abs/2312.16600v1
pseudo-labels, and on extending our idea to other downstream scRNA-seq data analysis tasks.
{ "authors": [ "Weikang Jiang", "Jinxian Wang", "Jihong Guan", "Shuigeng Zhou" ], "categories": [ "q-bio.GN", "cs.AI", "cs.LG" ], "primary_category": "q-bio.GN", "published": "20231227145059", "title": "scRNA-seq Data Clustering by Cluster-aware Iterative Contrastive Learning" }
2312.15994v1_0
http://arxiv.org/abs/2312.15994v1
Addressing bias in a trained machine learning system often requires access to sensitive attributes. In practice, these attributes are not available either due to legal and policy
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_1
http://arxiv.org/abs/2312.15994v1
are not available either due to legal and policy regulations or data unavailability for a given demographic. Existing bias mitigation algorithms are limited in their applicability to real-world scenarios as they require access to sensitive attributes to achieve fairness. In this research work, we aim to address this bottleneck through our proposed unsupervised proxy-sensitive attribute label generation technique. Towards this end, we propose a two-stage approach of unsupervised embedding
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_2
http://arxiv.org/abs/2312.15994v1
a two-stage approach of unsupervised embedding generation followed by clustering to obtain proxy-sensitive labels. The efficacy of our work relies on the assumption that bias propagates through non-sensitive attributes that are correlated with the sensitive attributes and, when mapped to a high-dimensional latent space, produce clusters of the different demographic groups that exist in the data. Experimental results demonstrate that bias mitigation using existing algorithms such as Fair Mixup and
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_3
http://arxiv.org/abs/2312.15994v1
using existing algorithms such as Fair Mixup and Adversarial Debiasing yields comparable results with derived proxy labels as with true sensitive attributes. § INTRODUCTION Machine Learning has attained high success rates in practically every field, including healthcare, finance, and education, owing to the accuracy and efficiency of model outcomes <cit.>. However, these models are often biased and exhibit a propensity to favor one demographic group over another in various
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_4
http://arxiv.org/abs/2312.15994v1
one demographic group over another in various applications, including credit and loan approval, criminal justice, and resume-based candidate shortlisting <cit.>. The idea of fairness has recently received a lot of attention as a way to combat discrimination in the outcomes of ML models <cit.>. Existing bias mitigation techniques <cit.> can be classified into three categories: pre-processing <cit.>, post-processing <cit.> and in-processing <cit.>. While pre-processing bias mitigation techniques
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_5
http://arxiv.org/abs/2312.15994v1
While pre-processing bias mitigation techniques attempt to transform the input before feeding it to the model for training, post-processing strategies filter the output through certain transformations. In order to produce fair output, in-processing strategies strive to learn bias-invariant models by imposing certain constraints during training. Nevertheless, most state-of-the-art algorithms require information about sensitive attributes to produce an unbiased model. However, in practice,
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_6
http://arxiv.org/abs/2312.15994v1
produce an unbiased model. However, in practice, these sensitive attributes are often inaccessible due to difficulties in data collection, privacy concerns, and legal constraints imposed by governments, like the General Data Protection Regulation (GDPR) introduced by the European Union in May 2018 and the Equal Credit Opportunity Act <cit.>. Fairness is challenging to achieve in the absence of sensitive attributes due to a lack of supervision. While sensitive attributes are inaccessible in the real-world setting,
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_7
http://arxiv.org/abs/2312.15994v1
are inaccessible in the real-world setting, it has been found that some non-sensitive attributes have strong correlations with the sensitive features, which leads to bias propagating through AI models <cit.>. For instance, Hispanic and black populations have a higher proportion of younger people, resulting in a correlation between age and race <cit.>. Similarly, zip codes can be correlated with race. Hence, the bias gets embedded in the non-sensitive attributes that are used in the model
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_8
http://arxiv.org/abs/2312.15994v1
attributes that are used in the model training. Based on this hypothesis, a few initial efforts have been made to mitigate bias in the absence of protected attributes <cit.>. The most recent approach <cit.> identifies related features that are correlated with the sensitive attributes and then minimizes the correlation between these related features and the model's prediction to learn a classifier that is fair with respect to the sensitive attribute. However, identification of related features
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_9
http://arxiv.org/abs/2312.15994v1
However, identification of related features requires domain knowledge and access to sensitive attributes to determine the correlation. This research aims to provide proxy labels for sensitive attributes to make present bias mitigation approaches suitable for real-world applications where access to protected attributes during model training is constrained. Ideally, the likelihood of a positive outcome should be the same regardless of a person's protected group. However, in real life this does not
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_10
http://arxiv.org/abs/2312.15994v1
group. However, in real life this does not hold true. In this paper, the group that is more likely to receive a positive outcome solely because of its protected attribute is referred to as the favourable group, and the group that is more likely to receive a negative outcome solely because of its protected attribute is referred to as the unfavourable group. We determine proxies for the favourable and unfavourable groups by leveraging the bias information embedded in the non-sensitive features available in the given
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_11
http://arxiv.org/abs/2312.15994v1
the non-sensitive features available in the given dataset. These proxy-sensitive labels can then be passed as input to existing bias mitigation techniques. Thus, we address the bottleneck in the applicability of existing bias mitigation methods to real-world applications. We propose a novel pipeline that involves two stages: (1) Stage-1: Learn embeddings using self-supervised learning that capture inter-feature relationships and, consequently, latent bias information. (2) Stage-2:
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_12
http://arxiv.org/abs/2312.15994v1
latent bias information. (2) Stage-2: Generate proxies for demographic groups by clustering the samples based on the embeddings obtained from Stage-1. Further, experimental analysis reveals that comparable results can be obtained by using the proxy labels in current bias mitigation techniques as opposed to the true labels of sensitive attributes. § RELATED WORK A substantial amount of work has been done to address and mitigate bias in data sets and models <cit.>. Based on the point of
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_13
http://arxiv.org/abs/2312.15994v1
data sets and models <cit.>. Based on the point of intervention in the modeling pipeline, bias mitigation techniques broadly fall into three categories: pre-processing, in-processing, and post-processing. Pre-processing techniques operate on the first stage of modeling and transform the training data so that the underlying discrimination is removed <cit.>. These techniques reduce or eliminate the correlation between sensitive attributes and other features, including the target labels.
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_14
http://arxiv.org/abs/2312.15994v1
and other features, including the target labels. Unfortunately, because these techniques are blind to how the model infers from the data, some level of bias can still creep into the model predictions. In-processing techniques modify learning algorithms to remove bias during the model training process. Most of the algorithms in this category solve a constrained optimization problem for different fairness objectives. To ensure independence between predictions and sensitive attributes, <cit.>
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_15
http://arxiv.org/abs/2312.15994v1
predictions and sensitive attributes, <cit.> regularizes the covariance between them. <cit.> minimizes the disparity between the sensitive groups by regularizing the decision boundary of the classifier. <cit.> proposed a data augmentation strategy for optimizing group fairness constraints such as equalized odds and demographic parity. Another efficient algorithm <cit.> tries to maximize the predictor's ability to predict the ground truth while minimizing the adversary's ability to predict the
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_16
http://arxiv.org/abs/2312.15994v1
minimizing the adversary's ability to predict the sensitive attribute. Post-processing techniques treat the learned model as a black box and try to mitigate bias in its predictions <cit.>. Typically, post-processing algorithms select a subset of samples and adjust the predicted labels accordingly. An intriguing finding is that any sample can be altered to meet the requirements of group fairness because the metrics are expectations. The papers <cit.> choose samples at random, whereas
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_17
http://arxiv.org/abs/2312.15994v1
papers <cit.> choose samples at random, whereas <cit.> choose the samples with the greatest degree of uncertainty, reflecting the human tendency to give unprivileged groups the benefit of the doubt. Most of the current algorithms have restrictions on their use in real-world scenarios since they need access to protected attributes for bias mitigation. Very recently, efforts have been made toward bias mitigation in the absence of sensitive attributes <cit.>. <cit.> introduced a framework based on
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_18
http://arxiv.org/abs/2312.15994v1
<cit.>. <cit.> introduced a framework based on Bayesian variational autoencoders that relies on knowledge of a causal graph to derive proxies. The algorithm estimates proxies in a multi-dimensional space and then uses these generated proxies to remove bias from the model. However, since the proxies are generated in a multi-dimensional space, they cannot be generalized to other bias mitigation algorithms. The paper <cit.> introduced a framework that only performs debiasing on the classification head. The
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_19
http://arxiv.org/abs/2312.15994v1
debiasing on the classification head. The algorithm neutralizes training samples that have the same ground-truth label but different sensitive-attribute annotations. Proxy generation for the sensitive attributes is done by training a bias-intensified model and then annotating samples based on its confidence level. However, the algorithm makes a strong assumption that a bias-amplified model tends to assign the privileged group more desired outcomes while assigning the under-privileged
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_20
http://arxiv.org/abs/2312.15994v1
outcomes while assigning the under-privileged group less desired outcomes based on the obtained prediction scores. The most recent approach <cit.> identifies related features that are correlated with the sensitive attributes and then minimizes the correlation between these related features and the model's prediction to learn a classifier that is fair with respect to the sensitive attribute. To identify the related features, however, this method needs access to sensitive attributes to determine
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_21
http://arxiv.org/abs/2312.15994v1
needs access to sensitive attributes to determine the correlation. § METHODOLOGY It is widely established that bias propagates to models even when protected attributes are not used during training <cit.>. This is attributed to the frequent incorporation of protected-attribute information into other, correlated non-protected attributes. Zip codes, for instance, can be associated with the race attribute. Based on this hypothesis, we utilize the non-protected attributes to obtain proxy-sensitive labels.
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_22
http://arxiv.org/abs/2312.15994v1
attributes to obtain proxy-sensitive labels. Assuming the availability of all variables except the protected attribute, our goal is to recover all the latent information associated with the protected attribute embedded in the available non-protected features. This section outlines our suggested method for generating a proxy for a sensitive protected attribute. We break the objective down into two stages. In the first stage, we utilize self-supervised learning to produce the contextual
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_23
http://arxiv.org/abs/2312.15994v1
learning to produce the contextual embedding of the input samples. Our goal is to learn an embedding with maximum information about the protected attribute. In the second stage, we obtain proxy labels for favorable and unfavorable groups using an unsupervised clustering approach on the embedding obtained from the first stage. Finally, we pass the generated proxies to existing state-of-the-art bias mitigation algorithms to mitigate bias from any model. Figure <ref> outlines the
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_24
http://arxiv.org/abs/2312.15994v1
bias from any model. Figure <ref> outlines the proxy-generation pipeline. §.§ Proxy Generation for Sensitive Attribute Stage-1: In the first stage, as shown in figure <ref>, we obtain contextual embeddings of the input samples. Towards this goal, we train neural network architectures in a self-supervised fashion to efficiently encode inter-feature relationships. In this paper, we have experimented with two neural network architectures: (1) auto-encoders and (2) transformers. We train an
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_25
http://arxiv.org/abs/2312.15994v1
auto-encoders and (2) transformers. We train an auto-encoder on the reconstruction task to obtain embeddings containing crucial input-data details. An auto-encoder consists of encoder and decoder modules. In the encoding operation, the input feature vector is mapped to a lower-dimensional latent representation. In the decoding operation, the original input data is reconstructed from the latent representation. We trained the network on a reconstruction loss that minimizes
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_26
http://arxiv.org/abs/2312.15994v1
network on a reconstruction loss that minimizes the error between the input and its reconstruction. Input data X is passed through the encoder to get the latent representation h and then reconstructed as X̂ by the decoder, as shown in equations <ref> and <ref>. We train the network on the reconstruction loss Loss_AE shown in equation <ref>, where n is the number of data points in a batch. Here, f_1 and f_2 are activation functions, W is a weight matrix and b is a bias. The latent
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_27
http://arxiv.org/abs/2312.15994v1
W is a weight matrix and b is a bias. The latent embeddings obtained from the encoder module contain information about the protected attribute, as they are generated from features that are correlated with the protected attribute. h = f_1(W_i X + b_i); X̂ = f_2(W_j h + b_j); Loss_AE = (1/n) ∑_{i=1}^{n} |X_i - X̂_i|. We experimented with another neural network architecture, the Transformer, with a similar goal. Transformers utilize a self-attention <cit.> mechanism to learn the embeddings. To compute self-attention,
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_28
http://arxiv.org/abs/2312.15994v1
learn the embeddings. To compute self-attention, three vectors, Query (Q), Key (K), and Value (V), are first learned for each feature in the input, and then the attention is computed as shown in equation <ref>. Finally, self-attended embeddings h are obtained as shown in equation <ref>. We train the Transformer on a self-supervised learning task called Masked Language Modelling (MLM). Towards this, 15% of the input data fields are chosen randomly and replaced with a masked token.
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_29
http://arxiv.org/abs/2312.15994v1
chosen randomly and replaced with a masked token. The Transformer then processes samples to produce contextual row embeddings. The MLM head, made up of MLP layers, reconstructs the original fields from these row embeddings. The model is trained end-to-end by minimizing a cross-entropy loss as shown in equation <ref>. The loss is calculated only on masked fields. The latent embeddings (h) obtained from the transformer contain information about the protected attribute due to the Transformer's inherent
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_30
http://arxiv.org/abs/2312.15994v1
about the protected attribute due to the Transformer's inherent ability to learn inter-feature relationships. Attention(Q, K, V) = softmax(QK^T/√(d_k))·V; head_i = Attention(Q·W^Q_i, K·W^K_i, V·W^V_i); h = Concat(head_1, ..., head_h)·W^O; p = Softmax(MLP(h)); Loss_T = -∑_{c=1}^{M} y_c·log(p_c). Further, to ensure that the generated embeddings do not correspond to the true labels of the downstream classification task, we have trained the above-described neural network models with a KL-divergence loss. KL divergence loss
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_31
http://arxiv.org/abs/2312.15994v1
models with a KL-divergence loss. The KL divergence loss has historically been used in classification tasks to ensure class separation between two different labels. It is based on the information-theoretic Kullback-Leibler (KL) divergence, which measures the difference between two probability distributions. By introducing the KL divergence loss, the model learns the distinction between the two labels better, leading to improved embedding
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_32
http://arxiv.org/abs/2312.15994v1
labels better, leading to improved embedding generation that contains information related to the protected attribute and not the downstream task labels. To implement the Kullback-Leibler (KL) divergence in the proposed neural network architecture, a multi-layer perceptron (MLP) layer is applied to the generated embedding vectors. In the autoencoder, the MLP is applied on top of the latent vectors, while in the transformer, the MLP is fed the contextual vector (h). The
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_33
http://arxiv.org/abs/2312.15994v1
MLP is fed the contextual vector (h). The calculation of the KL loss on top of the MLP depends on the input embedding vector. Specifically, the input embedding vector is fed into the MLP, which generates a probability distribution. Then, the KL divergence between this probability distribution and the target distribution is calculated. This KL loss is used to optimize the MLP weights and biases. Stage-2: In the second stage, as shown in figure <ref>, we use an unsupervised
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_34
http://arxiv.org/abs/2312.15994v1
as shown in figure <ref>, we use an unsupervised clustering algorithm to identify groups in the embeddings obtained from the previous stage. Clustering is a subjective statistical analysis, and many algorithms are suitable depending on the data set and problem type. In this paper, we have experimented with centroid-based and hierarchical clustering algorithms. In particular, we have experimented with K-means, hierarchical clustering and BIRCH to obtain two clusters that serve as
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_35
http://arxiv.org/abs/2312.15994v1
and BIRCH to obtain two clusters that serve as proxies for the favourable and unfavourable groups. We further evaluate the bias-mitigation performance of the proxies generated by each clustering algorithm. §.§ Bias Mitigation Through Generated Proxy Sensitive Attribute Once the proxy labels corresponding to the favourable and unfavourable groups are obtained, we pass them as input to the existing bias mitigation algorithms. In this paper, we have experimented with two widely used benchmarks for bias
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_36
http://arxiv.org/abs/2312.15994v1
with two widely used benchmarks for bias mitigation: Adversarial Debiasing and Fair Mixup. Both algorithms require labels corresponding to the protected attribute as input. We pass the proxy for the protected attribute obtained from the proposed pipeline as input to de-bias the model. We compare bias-mitigation performance with the true labels and the proxy labels for the protected attribute in the results section. However, for fairness evaluation, we use true sensitive labels.
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_37
http://arxiv.org/abs/2312.15994v1
evaluation, we use true sensitive labels. Figure <ref> shows the pipeline for bias mitigation and fairness evaluation. § EXPERIMENTAL DETAILS §.§ Dataset Description We have evaluated the proposed pipeline on the Adult Income Dataset, derived from the 1994 US Census. The objective is to predict a person's income level based on individual attributes. The target variable Y takes a binary value indicating salary ≤ 50K or salary > 50K. The dataset consists of 14 independent attributes
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_38
http://arxiv.org/abs/2312.15994v1
dataset consists of 14 independent attributes, and the field 'Gender' is considered the sensitive attribute in our case. It takes two values, namely 'Male' and 'Female'. The dataset is imbalanced: only 24% of the samples belong to class 1, of which only 15.13% are female. The dataset consists of 48,842 independent rows. During the training of our model, we do not take into account the information provided by the 'Gender' attribute. §.§ Implementation Details We have implemented the proposed
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_39
http://arxiv.org/abs/2312.15994v1
Details We have implemented the proposed pipeline in the PyTorch framework. All the experiments were performed on Ubuntu 16.04.7 with an Nvidia GeForce GTX 1080Ti GPU; 16 GB of RAM was utilized while experimenting on the Adult Income Dataset. In Stage-1, we experimented with two embedding-generator networks, autoencoders and transformers. The autoencoders used in the algorithm consist of one hidden layer. The hidden layer's output receives ReLU activation, while its input receives Tanh activation.
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_40
http://arxiv.org/abs/2312.15994v1
while its input receives Tanh activation. The model was trained for 200 epochs with a batch size of 32 and a learning rate of 0.001 using Adam as the optimizer. The Transformer architecture contains only the encoder module. Three encoder blocks are used with six attention heads. Each encoder module is a feed-forward network with 128 hidden units. We used the implementation of the Transformer provided in the Hugging Face library. In Stage-2, we experimented with K-Means, BIRCH, and hierarchical
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_41
http://arxiv.org/abs/2312.15994v1
with K-Means, BIRCH, and hierarchical clustering algorithms to generate proxy labels for protected attributes. We utilize the implementations of these clustering algorithms given in Python's scikit-learn library. We use all the data samples to train our proposed pipeline and obtain the proxy labels for sensitive attributes. Next, we randomly split the dataset into an 80-20 train/test split and train the classification model using bias mitigation algorithms on the train set. We employ existing
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_42
http://arxiv.org/abs/2312.15994v1
algorithms on the train set. We employ existing bias mitigation algorithms like Adversarial Debiasing <cit.>, provided in the IBM AIF360 toolkit, and Fair Mixup <cit.>, an open-source solution accessible on GitHub. During training, we use the generated proxies instead of the actual labels of the protected attribute and assess performance against the protected attribute's actual labels. §.§ Fairness Metrics Fairness in machine learning measures the degree of disparate treatment of different groups
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_43
http://arxiv.org/abs/2312.15994v1
of disparate treatment of different groups (e.g., female vs. male) or individual fairness, which emphasizes that similar individuals should be treated similarly. There exist various metrics in the literature to quantify fairness, each focusing on a different aspect of fairness. We use two popular metrics: Statistical Parity Difference (SPD) and Equalized Odds Difference (EOD). Statistical Parity Difference (SPD): A classifier is considered fair if the prediction Ŷ on input features X is
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_44
http://arxiv.org/abs/2312.15994v1
Statistical Parity Difference (SPD): A classifier is considered fair if the prediction Ŷ on input features X is independent of the protected attribute S. The underlying idea is that each demographic group has the same chance of a positive outcome <cit.>:

SPD = |P(Ŷ = 1 | S = 0) - P(Ŷ = 1 | S = 1)|
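A direct NumPy translation of this definition (an illustrative sketch) is:

```python
import numpy as np

def statistical_parity_difference(y_pred: np.ndarray, s: np.ndarray) -> float:
    """SPD = |P(Y_hat = 1 | S = 0) - P(Y_hat = 1 | S = 1)| for binary
    predictions y_pred and binary group membership s."""
    return abs(y_pred[s == 0].mean() - y_pred[s == 1].mean())
```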
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_45
http://arxiv.org/abs/2312.15994v1
Equalized Odds Difference (EOD): An algorithm is considered fair if, across both the privileged and unprivileged groups, the predictor Ŷ has equal false positive rates (FPR) and false negative rates (FNR). This constraint enforces that accuracy is equally high in all demographics, since the rates of positive and negative classification are equal across the groups. The notion of fairness here is that the chance of being correctly or incorrectly classified as positive should be equal for every group. Writing ΔFPR and ΔFNR for the absolute between-group differences in these rates:

ΔFPR = |P(Ŷ = 1 | S = 1, Y = 0) - P(Ŷ = 1 | S = 0, Y = 0)|
ΔFNR = |P(Ŷ = 0 | S = 1, Y = 1) - P(Ŷ = 0 | S = 0, Y = 1)|
EOD = (ΔFPR + ΔFNR) / 2
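EOD can be computed analogously, averaging the between-group gaps in false positive and false negative rates (an illustrative sketch):

```python
import numpy as np

def equalized_odds_difference(y_true: np.ndarray, y_pred: np.ndarray,
                              s: np.ndarray) -> float:
    """EOD = (dFPR + dFNR) / 2 over binary groups s."""
    def rate(group, label, pred_value):
        mask = (s == group) & (y_true == label)
        return (y_pred[mask] == pred_value).mean()

    d_fpr = abs(rate(1, 0, 1) - rate(0, 0, 1))  # gap in P(Y_hat=1 | Y=0)
    d_fnr = abs(rate(1, 1, 0) - rate(0, 1, 0))  # gap in P(Y_hat=0 | Y=1)
    return (d_fpr + d_fnr) / 2
```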
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_46
http://arxiv.org/abs/2312.15994v1
§ RESULTS

In this section, we empirically assess the effectiveness of the proxy-sensitive labels obtained through the proposed pipeline. To this end, we pass the proxy-sensitive labels to state-of-the-art bias mitigation methods, namely adversarial debiasing and fair mixup, and evaluate fairness and classification performance on the public UCI Adult Income dataset. We report classification performance as Average Precision, and fairness as Statistical Parity Difference (SPD) and Equalized Odds Difference (EOD).
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_47
http://arxiv.org/abs/2312.15994v1
The fair mixup and adversarial debiasing algorithms require protected attribute information to de-bias the models. To form the baseline, we pass the true labels of the protected attribute gender through these bias mitigation algorithms. Fair mixup has a trade-off parameter between fairness and accuracy, called lambda; we set it to 0.5 for SPD and 2.5 for EOD.
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_48
http://arxiv.org/abs/2312.15994v1
Table <ref> compares the classification and fairness performance of models trained using the bias mitigation algorithms (fair mixup and adversarial debiasing) against a classifier trained without any bias mitigation. From Table <ref>, we observe that the model trained without any bias mitigation produces an average precision of 0.8, with SPD and EOD values of 0.2 and 0.11, respectively. With model debiasing, however, the SPD and EOD values improve, demonstrating the efficacy of the bias mitigation algorithms in achieving fairness.
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_49
http://arxiv.org/abs/2312.15994v1
In this paper, we concentrate on a more practical experimental setup, in which we assume that the protected attributes are unavailable during model training. Here, we use a proxy generated by our pipeline as input to the existing bias mitigation techniques discussed above, rather than the true labels of the protected attribute, to test the efficacy of the generated proxy in model debiasing.
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_50
http://arxiv.org/abs/2312.15994v1
With proxy-sensitive labels, we aim to achieve performance similar to that of the baselines shown in Table <ref>. We experimented with several algorithms in both stages of proxy-sensitive label generation: in Stage-1, with the Autoencoder and Transformer architectures for generating the embeddings, and in Stage-2, with the K-means, Hierarchical, and BIRCH clustering algorithms.
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_51
http://arxiv.org/abs/2312.15994v1
Figure <ref> shows the performance of all configurations when the Autoencoder is used for embedding generation, and Figure <ref> shows the performance when the Transformer is used. From Figure <ref>, we observe that the proxy generated by hierarchical clustering produces the best results with the adversarial debiasing algorithm. In this configuration, we observe an absolute improvement of 0.14% in SPD, with comparable average precision and EOD, when proxy labels are used instead of the true labels for the sensitive attribute gender.
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_52
http://arxiv.org/abs/2312.15994v1
Figure <ref> shows that, with the fair mixup algorithm, the best-performing configuration with proxy-sensitive labels achieves an average precision of 0.77, with EOD and SPD values of 0.05 and 0.07, respectively. This performance is comparable to the model's performance with the true protected attribute. On the other hand, with the adversarial debiasing algorithm, the embeddings obtained from the Transformer lead to a 1% absolute lift in average precision while also improving the fairness metrics relative to the baseline model trained on the true sensitive labels.
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_53
http://arxiv.org/abs/2312.15994v1
Using the Transformer architecture to learn embeddings in the proxy generation phase thus produces a significant lift in fairness. The Transformer's inherent capacity to learn inter-feature relationships enables it to generate informative embeddings for tabular data, as supported by the experimental results shown in Figures <ref> and <ref> on the Adult Income dataset.
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_54
http://arxiv.org/abs/2312.15994v1
However, the choice of the modeling architecture used to obtain the embeddings, and of the clustering algorithm, is dataset-dependent.

§.§ Learned Embedding Analysis

The performance evaluation discussed above indicates that the proxy-sensitive labels can be used as a substitute for the true labels of protected attributes in existing bias-mitigation algorithms.
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_55
http://arxiv.org/abs/2312.15994v1
In this section, we analyze the quality of the embeddings learned in Stage-1 of the proposed pipeline through an auxiliary prediction task similar to <cit.>. To this end, we train three linear classifiers, C_Proxy, C_True, and C_Downstream, which take the embeddings as input and predict the proxy attribute, the true protected attribute, and the target class labels, respectively.
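A hedged sketch of this probing setup is shown below, assuming logistic-regression probes over the Stage-1 embeddings; `proxy_labels`, `true_gender`, and `targets` are illustrative names for quantities from the earlier stages.

```python
from sklearn.linear_model import LogisticRegression

# Three linear probes trained on the same embeddings.
c_proxy = LogisticRegression(max_iter=1000).fit(embeddings, proxy_labels)
c_true = LogisticRegression(max_iter=1000).fit(embeddings, true_gender)
c_downstream = LogisticRegression(max_iter=1000).fit(embeddings, targets)
```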
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_56
http://arxiv.org/abs/2312.15994v1
Next, we compare the learned weight matrix of C_Proxy with those of C_True and C_Downstream separately, using cosine similarity, as sketched below. The cosine similarity between the weight vectors of C_Proxy and C_True is 0.25, while that between C_Proxy and C_Downstream is 0.02. The relatively high cosine similarity between the weight parameters of C_Proxy and C_True indicates that the embedding contains a substantial amount of information about the true protected attribute. In contrast, the low cosine similarity between the weights of C_Proxy and C_Downstream indicates that the clusters formed over the embeddings are not aligned with the downstream prediction task.
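The comparison itself reduces to a cosine similarity between the probes' weight vectors (a sketch continuing the probe example above):

```python
import numpy as np

def weight_cosine(clf_a, clf_b):
    """Cosine similarity between the weight vectors of two linear probes."""
    u, v = clf_a.coef_.ravel(), clf_b.coef_.ravel()
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

print(weight_cosine(c_proxy, c_true))        # ~0.25, as reported above
print(weight_cosine(c_proxy, c_downstream))  # ~0.02, as reported above
```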
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_57
http://arxiv.org/abs/2312.15994v1
§ CONCLUSION

Bias mitigation without access to sensitive attributes is a challenging problem that has received little attention in the literature. Numerous research studies exist on fairness in AI, but most assume that the protected attributes are accessible at training time. This assumption limits their use in modeling scenarios where the protected labels are unavailable.
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_58
http://arxiv.org/abs/2312.15994v1
In an effort to reduce this dependency, we propose a novel pipeline that leverages the inherent bias information in the non-protected attributes to obtain proxy labels for the protected attributes. These proxies, rather than the true labels of the sensitive attribute, are passed as input to current state-of-the-art bias mitigation algorithms. Experimental results demonstrate that models trained using the generated proxy labels achieve satisfactory bias metrics, such as SPD and EOD, with little or no reduction in detection rate.
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }
2312.15994v1_59
http://arxiv.org/abs/2312.15994v1
In the future, we will continue to advance this research by investigating more effective methods of incorporating additional bias information into the embeddings to improve the proxy labels. Additionally, we will validate the compatibility of the proposed approach with bias mitigation algorithms beyond those studied in this work.
{ "authors": [ "Bhushan Chaudhary", "Anubha Pandey", "Deepak Bhatt", "Darshika Tiwari" ], "categories": [ "cs.LG", "cs.CY" ], "primary_category": "cs.LG", "published": "20231226105415", "title": "Practical Bias Mitigation through Proxy Sensitive Attribute Label Generation" }