taesiri committed on
Commit
7716034
1 Parent(s): 37a4868

Upload papers/2401/2401.12824.tex with huggingface_hub

papers/2401/2401.12824.tex ADDED
\documentclass[sigconf]{acmart}

\usepackage{placeins}
\usepackage{natbib}
\usepackage{subfig}
\usepackage{multirow, multicol, tabularx, longtable}
\usepackage{nicefrac}
\usepackage{siunitx}
\usepackage{array,framed}
\usepackage{booktabs}
\usepackage{color, float, epsfig, wrapfig, graphicx}
\usepackage{verbatim}
\usepackage{textcomp}
\usepackage{setspace}
\listfiles
\usepackage{bm}
\usepackage{latexsym,fancyhdr,url}
\usepackage{enumerate}
\usepackage[ruled]{algorithm2e}
\usepackage{algpseudocode}
\usepackage{xparse}
\usepackage{xspace}
\usepackage{csvsimple}
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\newcommand{\setParDis}{\setlength{\parskip}{0.3cm}}
\newcommand{\setParDef}{\setlength{\parskip}{0pt}}

\AtBeginDocument{\providecommand\BibTeX{{\normalfont B\kern-0.5em{\scshape i\kern-0.25em b}\kern-0.8em\TeX}}}

\setcopyright{acmcopyright}
\copyrightyear{2018}
\acmYear{2018}
\acmDOI{XXXXXXX.XXXXXXX}

\acmConference[Conference acronym 'XX]{Make sure to enter the correct conference title from your rights confirmation email}{June 03--05, 2018}{Woodstock, NY}
\acmPrice{15.00}
\acmISBN{978-1-4503-XXXX-X/18/06}
\begin{document}
\title{MAPPING: Debiasing Graph Neural Networks for Fair Node Classification with Limited Sensitive Information Leakage}

\author{Ying Song, Balaji Palanisamy}
\email{{yis121, bpalan}@pitt.edu}
\affiliation{%
  \institution{University of Pittsburgh}
  \streetaddress{135 North Bellefield Avenue}
  \city{Pittsburgh}
  \state{PA}
  \country{USA}
  \postcode{15260}
}
\begin{abstract}
Despite remarkable success in diverse web-based applications, Graph Neural Networks (GNNs) inherit and further exacerbate historical discrimination and social stereotypes, which critically hinders their deployment in high-stakes domains such as online clinical diagnosis and financial crediting. However, current fairness research, primarily crafted for i.i.d.\ data, cannot be trivially transferred to non-i.i.d.\ graph structures with topological dependence among samples. Existing fair graph learning typically favors pairwise constraints to achieve fairness, but these fail to cast off dimensional limitations and do not generalize to multiple sensitive attributes. Besides, most studies focus on in-processing techniques to enforce and calibrate fairness; constructing a model-agnostic debiasing GNN framework at the pre-processing stage to prevent downstream misuse and improve training reliability remains largely under-explored. Furthermore, previous work on GNNs tends to enhance either fairness or privacy individually, and few studies probe their interplay. In this paper, we propose a novel model-agnostic debiasing framework named MAPPING (\underline{M}asking \underline{A}nd \underline{P}runing and Message-\underline{P}assing train\underline{ING}) for fair node classification, in which we adopt distance covariance ($dCov$)-based fairness constraints to simultaneously reduce feature and topology biases in arbitrary dimensions, and combine them with adversarial debiasing to confine the risks of attribute inference attacks. Experiments on real-world datasets with different GNN variants demonstrate the effectiveness and flexibility of MAPPING. Our results show that MAPPING can achieve better trade-offs between utility and fairness, and mitigate the privacy risks of sensitive information leakage.
\end{abstract}
\begin{CCSXML}
<ccs2012>
<concept>
<concept_id>10010147.10010257</concept_id>
<concept_desc>Computing methodologies~Machine learning</concept_desc>
<concept_significance>500</concept_significance>
</concept>
<concept>
<concept_id>10010405.10010455</concept_id>
<concept_desc>Applied computing~Law, social and behavioral sciences</concept_desc>
<concept_significance>500</concept_significance>
</concept>
</ccs2012>
\end{CCSXML}

\ccsdesc[500]{Computing methodologies~Machine learning}
\ccsdesc[500]{Applied computing~Law, social and behavioral sciences}

\keywords{Graph Neural Networks, Group Fairness, Privacy Risks, Distance Covariance, Adversarial Training}

\received{20 February 2007}
\received[revised]{12 March 2009}
\received[accepted]{5 June 2009}

\maketitle
\section{Introduction}

Graph Neural Networks (GNNs) have shown superior performance in various web applications, including recommendation systems~\cite{social_recom} and online advertisement~\cite{web_adv}. Message-passing (MP) schemes~\cite{wu2019comprehensive_gnn} empower GNNs by aggregating node information from local neighborhoods, thereby rendering a clearer boundary between similar and dissimilar nodes~\cite{fairwalk} to facilitate downstream graph tasks. However, disparities among different demographic groups can be perpetuated and amplified, causing severe social consequences in high-stakes scenarios~\cite{fairconsequences}. For instance, in clinical diagnosis~\cite{gender_bias}, men are treated more extensively than women presenting the same severity of symptoms across a plethora of diseases, and older men aged 50 years or above receive more healthcare and life-saving interventions than older women. With more GNNs adopted in medical analysis, gender discrimination may further deteriorate and directly cause misdiagnosis for women or even endanger their lives.

Unfortunately, effective bias mitigation on non-i.i.d.\ graphs is still largely under-explored and faces the following two problems. First, existing methods tend to employ in-processing techniques~\cite{dai2021sayno_fair,nifty,learn_fairGNN,fmp_topology_bias}, while few works jointly alleviate feature and topology biases at the pre-processing stage and then feed the debiased data into arbitrary GNN variants. For instance, FairDrop~\cite{fairdrop} pre-modifies graph topologies to minimize distances among sensitive subgroups, but it ignores the significant role of node features in encoding biases. Kamiran et al.~\cite{pre_process_class} and Wang et al.~\cite{counterfactual_pre_process} pre-debias features by ruffling, reweighting, or counterfactual perturbation, whereas their methods cannot be trivially applied to GNNs, since unbiased features combined with biased topologies can still result in biased distributions among different groups~\cite{dong_fairgraph_survey}.
Second, although recent studies~\cite{dong2022edits} fill the above gap, they introduce pairwise constraints, e.g., covariance ($Cov$), mutual information ($MI$), and Wasserstein distance ($Was$)~\cite{optimal_transport}, to promote fairness. These constraints are computationally inefficient in high dimensions and cannot be easily extended to multiple sensitive attributes. Besides, $Cov$ cannot reveal mutual independence between target variables and sensitive attributes; $MI$ cannot break dimensional limitations and is intractable to compute, and some popular estimators, e.g., MINE~\cite{MINE}, have been shown to be heavily biased~\cite{biasMI}; and $Was$ is sensitive to outliers~\cite{robust_wass_dis}, which hinders its use on heavy-tailed data samples. To tackle these issues, we adopt a distribution-free, scale-invariant~\cite{dcor_free} and outlier-resistant~\cite{dcor_outliers} metric, $dCov$, as the fairness constraint; most importantly, it allows computation in arbitrary dimensions and can guarantee independence. We combine it with adversarial training to develop a feature and topology debiasing framework for GNNs.

Sensitive attributes not only aggravate bias but also raise data privacy concerns. Prior work elucidates that GNNs are vulnerable to attribute inference attacks~\cite{GNNprivacy20}. Even when such identifiable information is masked before data owners release it publicly for specific purposes, e.g., research institutions publishing pre-processed real-world datasets, malicious third parties can combine the masked features with prior knowledge to recover it. Meanwhile, links in real graphs are preferentially connected w.r.t.\ sensitive attributes, so topological structures can also contribute to sensitive membership identification. More practically, some data samples, e.g., users or customers, may unintentionally disclose their sensitive information to the public. Attackers can exploit these open resources to launch attribute inference attacks, which further amplify group inequalities and bring immeasurable social impacts. Thus, it is imperative to confine such privacy risks derived from features and topologies at the pre-processing stage.

Prior privacy-preserving studies mainly deploy in-processing techniques, such as attack-and-defense games~\cite{Adversarial_privacy_preserving_against_infer,info_obfuscation_privacy_protection}, and/or resort to privacy constraints~\cite{privacy_protection_partial_sens_attr,GNN_mutual_privacy,info_obfuscation_privacy_protection} to fight against potential privacy risks on GNNs. However, these methods are not designed for multiple sensitive attributes, and their constraints, e.g., $Cov$ or $MI$, have the deficiencies described above. Most importantly, they fail to consider fairness issues. PPFR~\cite{interaction_priv_fair} is the first work to explore such interactions on GNNs; it empirically proves that the risk of link-stealing attacks increases as individual fairness is boosted with limited utility costs, but it centers on post-processing techniques to achieve individual fairness. This work motivates us to explore whether fairness enhancements can simultaneously reduce privacy risks. To address the above issues, we propose MAPPING with $dCov$-based constraints and adversarial training to decorrelate sensitive information from features and topologies, which aligns with the goal of fair learning. We evaluate the privacy risks via attribute inference attacks, and the empirical results show that MAPPING successfully guarantees fairness while ameliorating sensitive information leakage. To the best of our knowledge, this is the first work to highlight the alignment between group-level fairness and multiple-sensitive-attribute privacy on GNNs at the pre-processing stage.
In summary, our main contributions are threefold:

\textit{\textbf{MAPPING}} We propose a novel debiasing framework called MAPPING for fair node classification, which confines sensitive attribute inference from pre-debiased features and topologies. Our empirical results demonstrate that MAPPING offers flexibility and generalizes to any GNN variant.

\textit{\textbf{Effectiveness and Efficiency}} We evaluate MAPPING on three real-world datasets and compare it with vanilla GNNs and state-of-the-art debiasing models. The experimental results confirm the effectiveness and efficiency of MAPPING.

\textit{\textbf{Alignments and Trade-offs}} We discuss the alignments between fairness and privacy on GNNs, and illustrate that MAPPING can achieve better trade-offs between utility and fairness while mitigating the privacy risks of attribute inference.

\vspace{-2mm}
\section{Preliminaries}
In this section, we present the notations and introduce preliminaries of GNNs, $dCov$, the two fairness metrics $\Delta SP$ and $\Delta EO$, and the attribute inference attacks used to measure sensitive information leakage.
\vspace{-2mm}
\subsection{Notations}
In our work, we focus on node classification tasks. Given an undirected attributed graph $\mathcal{G}=\left(\mathcal{V}, \mathcal{E}, \mathcal{X}\right)$, where $\mathcal{V}$ denotes a set of nodes, $\mathcal{E}$ denotes a set of edges, and the node feature set $\mathcal{X}=\left(\mathcal{X}_N, S\right)$ concatenates non-sensitive features $\mathcal{X}_N\in \mathbb{R}^{n\times d}$ and a sensitive attribute $S$. The goal of node classification is to predict the ground-truth labels $\mathcal{Y}$ as $\hat{Y}$ by optimizing the objective function $f_{\theta}(\mathcal{Y}, \hat{Y})$. Beyond the above notations, $A \in \mathbb{R}^{n \times n}$ is the adjacency matrix, where $n=|\mathcal{V}|$ and $A_{ij}=1$ if $\left(v_i, v_j\right) \in \mathcal{E}$; otherwise, $A_{ij}=0$.
\vspace{-7mm}
\subsection{Graph Neural Networks}
GNNs utilize MP to aggregate information for each node $v\in\mathcal{V}$ from its local neighborhood $\mathcal{N}(v)$ and thereby update its representation $H_{v}^{l}$ at layer $l$, which can be expressed as:
\begin{equation}
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
H_{v}^{l}=UPD^{l}(H_{v}^{l-1},AGG^{l-1}(\{H_{u}^{l-1}:u\in\mathcal{N}(v)\}))
\end{equation}
where $H_{v}^0=X_{v}$, and $UPD$ and $AGG$ are arbitrary differentiable functions that distinguish the different GNN variants. For an $l$-layer GNN, $H_{v}^{l}$ is typically fed into, e.g., a linear classifier with a softmax function to predict node $v$'s label.
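As a concrete illustration, the following is a minimal Python sketch of one such layer, assuming mean aggregation and a linear update; the function and variable names are ours, not part of any specific GNN library.
\begin{verbatim}
import numpy as np

def mp_layer(H, A, W_self, W_neigh):
    # H: (n, d) node representations; A: (n, n) adjacency matrix
    deg = A.sum(axis=1, keepdims=True).clip(min=1)
    agg = (A @ H) / deg                 # AGG: mean over neighbors
    return H @ W_self + agg @ W_neigh   # UPD: linear combination
\end{verbatim}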
\vspace{-3mm}
\subsection{Distance Covariance}

$dCov$ reveals independence between two random variables $X \in \mathbb{R}^p$ and $Y \in \mathbb{R}^q$ that follow any distribution, where $p$ and $q$ are arbitrary dimensions. As defined in \cite{dcororiginal}, given a sample $(X,Y)=\{(X_k,Y_k):k=1,\dots,n\}$ from a joint distribution, the empirical $dCov$, $\mathcal{V}_n^2(X,Y)$, and its corresponding distance correlation ($dCor$), $\mathcal{R}_n^2(X,Y)$, are defined as:
\begin{equation}
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
\begin{aligned}
& \mathcal{V}_n^2(X,Y)=\frac{1}{n^2}\sum_{k,l=1}^{n}A_{kl}B_{kl} \\
& \mathcal{R}_n^2(X,Y)=\begin{cases}
\frac{\mathcal{V}_n^2(X,Y)}{\sqrt{\mathcal{V}_n^2(X)\mathcal{V}_n^2(Y)}}, & \mathcal{V}_n^2(X)\mathcal{V}_n^2(Y)> 0 \\
0, & \mathcal{V}_n^2(X)\mathcal{V}_n^2(Y)=0
\end{cases}
\end{aligned}
\end{equation}
where $A_{kl}=a_{kl}-\Bar{a}_{k\cdot}-\Bar{a}_{\cdot l}+\Bar{a}_{\cdot \cdot}$, $a_{kl}=|X_k - X_l|_{p}$, $\Bar{a}_{k\cdot}=\frac{1}{n}\sum_{l=1}^{n}a_{kl}$, $\Bar{a}_{\cdot l}=\frac{1}{n}\sum_{k=1}^{n}a_{kl}$, and $\Bar{a}_{\cdot \cdot}=\frac{1}{n^2}\sum_{k,l=1}^{n}a_{kl}$; $B_{kl}$ is defined analogously for $Y$. Furthermore, $\mathcal{V}_n^2(X)=\mathcal{V}_n^2(X,X)$ and $\mathcal{V}_n^2(Y)=\mathcal{V}_n^2(Y,Y)$. It holds that $\mathcal{V}_n^2(X,Y)\ge0$ and $0\le \mathcal{R}_n^2(X,Y)\le 1$, and $\mathcal{V}_n^2(X,Y)=\mathcal{R}_n^2(X,Y)=0$ if and only if $X$ and $Y$ are independent.
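These quantities are straightforward to compute numerically. Below is a short sketch, assuming Euclidean distances and NumPy/SciPy; it is for exposition only, not the implementation used in our experiments.
\begin{verbatim}
import numpy as np
from scipy.spatial.distance import cdist

def _center(D):
    # Double-center a pairwise distance matrix (A_kl in Eq. 2)
    return D - D.mean(0) - D.mean(1)[:, None] + D.mean()

def dcov2(X, Y):
    # X: (n, p), Y: (n, q); returns the empirical V_n^2(X, Y)
    A, B = _center(cdist(X, X)), _center(cdist(Y, Y))
    return (A * B).mean()

def dcor2(X, Y):
    # Empirical R_n^2(X, Y), with the zero-denominator convention
    denom = np.sqrt(dcov2(X, X) * dcov2(Y, Y))
    return dcov2(X, Y) / denom if denom > 0 else 0.0
\end{verbatim}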
\vspace{-3mm}
\subsection{Fairness Metrics}
Given a binary label $\mathcal{Y} \in \left\{0,1\right\}$ with predicted label $\hat{Y}$, and a binary sensitive attribute $S \in \left\{0,1\right\}$, statistical parity (SP)~\cite{fairthroughaware} and equal opportunity (EO)~\cite{eo} are defined as follows:

\textit{\textbf{SP}} SP requires that $\hat{Y}$ and $S$ are independent, written as $P(\hat{Y} | S=0)=P(\hat{Y} | S=1)$, which indicates that the positive prediction rates of the two subgroups should be equal.

\textit{\textbf{EO}} EO adds an extra requirement on $\mathcal{Y}$: the true positive rates of the two subgroups should be equal, mathematically, $P(\hat{Y} | \mathcal{Y}=1, S=0)=P(\hat{Y} | \mathcal{Y}=1, S=1)$.

Following \cite{louizos2017variational_fair}, we use the differences in SP and EO between the two subgroups as fairness measures, expressed as:
\begin{equation}
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
\begin{aligned}
& \Delta SP = |P(\hat{Y} | S=0)-P(\hat{Y} | S=1)| \\
& \Delta EO=|P(\hat{Y} | \mathcal{Y}=1, S=0)-P(\hat{Y} | \mathcal{Y}=1, S=1)|
\end{aligned}
\end{equation}
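Both gaps can be estimated directly from model outputs. A minimal sketch, assuming 0/1-encoded NumPy arrays; the helper name is illustrative:
\begin{verbatim}
import numpy as np

def fairness_gaps(y_true, y_pred, s):
    # Delta SP: gap in positive prediction rates across subgroups
    dsp = abs(y_pred[s == 0].mean() - y_pred[s == 1].mean())
    # Delta EO: gap in true positive rates across subgroups
    pos = y_true == 1
    deo = abs(y_pred[(s == 0) & pos].mean()
              - y_pred[(s == 1) & pos].mean())
    return dsp, deo
\end{verbatim}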
\vspace{-5mm}
\subsection{Attribute Inference Attacks}
We assume adversaries can access the pre-debiased features $\hat{X}$, topologies $\hat{A}$, and labels $\mathcal{Y}$, and gain partial sensitive attributes $S_p$ through legal or illegal channels as prior knowledge. They combine all the above information to infer $\hat{S}$. We assume they cannot tamper with internal parameters or architectures under the black-box setting. The attacker's goal is to train a supervised attack classifier $f_{\theta_{att}}(\hat{X},\hat{A},\mathcal{Y})=\hat{S}$ with any GNN variant. Our attack assumption is practical in reality. To name a few scenarios: business companies pack their models into APIs and open access to users; in view of security and privacy concerns, adversaries are blocked from querying the models, since they cannot pass authentication or would be flagged by detectors, but they can download graph data from the corresponding business-sponsored competitions on public platforms, e.g., Kaggle. Additionally, due to strict legal privacy and compliance policies, research or business institutions only allow some employees to handle sensitive data and then transfer pre-processed information to other departments; attackers cannot impersonate formal employees or access strongly confidential databases, whereas they can steal masked contents during normal communications.
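A minimal sketch of such an attack classifier, assuming PyTorch, a row-normalized adjacency matrix, and the known labels concatenated to the released features; the class and variable names are ours:
\begin{verbatim}
import torch

class AttackGCN(torch.nn.Module):
    # One graph convolution followed by a linear classifier
    def __init__(self, d_in, d_hid=16):
        super().__init__()
        self.conv = torch.nn.Linear(d_in, d_hid)
        self.clf = torch.nn.Linear(d_hid, 1)

    def forward(self, X, A_norm):
        H = torch.relu(self.conv(A_norm @ X))  # propagate, then transform
        return torch.sigmoid(self.clf(H))      # per-node P(S = 1)
\end{verbatim}
The attacker fits this model on the nodes whose sensitive attributes $S_p$ are already known and then predicts the remaining nodes.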
\vspace{-2mm}
\section{Empirical Analysis}
\label{analysis}

In this section, we first investigate biases from node features, graph topology, and MP with $dCor$. We empirically show that biased node features and/or topological structures can be fully exploited by malicious third parties to launch attribute inference attacks, which in turn can perpetuate and amplify existing social discrimination and stereotypes. We use synthetic datasets with multiple sensitive attributes to conduct these experiments. As suggested by Bose et al.~\cite{compositional_fair}, we do not add any activation function, to avoid nonlinear effects. We note that prior work~\cite{dai2021sayno_fair} has demonstrated that topologies and MP can both exacerbate biases hidden behind node features; for instance, FairVGNN~\cite{fairview} has illustrated that even after masking sensitive attributes, sensitive correlations still exist after feature propagation. However, those studies are all evaluated with pairwise metrics on a single sensitive attribute.
\vspace{-2mm}
\subsection{Data Synthesis} First, we identify the main sensitive attribute $S_m$ and the minor one $S_n$. For example, to investigate racism in risk assessments~\cite{recidivism}, `race' is the key focus while `age' follows behind. An overview of the synthetic data is shown in Figure \ref{data synthesis}, wherein we visualize the non-sensitive features with t-SNE~\cite{tsne}. We detail the data synthesis in Appendix \ref{synthesis}.

\begin{figure}[htbp]
\vspace{-4mm}
\setlength{\abovecaptionskip}{1mm}
\setlength{\belowcaptionskip}{-4mm}
\centering
\subfloat[Biased non-sensitive features and graph topology (Major)]
{
\includegraphics[width=0.245\textwidth]{tsne_biased_x_mul_minor-2.png}
\includegraphics[width=0.245\textwidth]{biased_topology_minor.png}
\label{Biased features and topology}}
\hfill
\vspace{-4mm}
\subfloat[Unbiased non-sensitive features and graph topology (Major)]
{
\includegraphics[width=0.245\textwidth]{tsne_unbiased_x_mul_minor-2.png}
\includegraphics[width=0.245\textwidth]{unbiased_topology_minor.png}
\label{Unbiased features and topology}}
\caption{Biased and Unbiased Graph Data Synthesis}
\label{data synthesis}
\end{figure}
\vspace{-2mm}
\subsection{Case Analysis}

\subsubsection{\textbf{Sensitive Correlation}} To unify the standard pipeline, we first measure $\mathcal{R}^2_{n}(\mathcal{X}, S)$, $\mathcal{R}^2_{n}(\mathcal{X}_{N}, S)$ and $h_{sens}$ for $S_n$ and $S_m$, which denote the sensitive distance correlations of the original features and the non-sensitive features, and the sensitive homophily of $S_n$ and $S_m$, respectively. We next obtain the prediction $\hat{Y}$, compute $\mathcal{R}^2_{n}(\hat{Y}, S)$, record $\Delta$SP and $\Delta$EO for $S_n$ and $S_m$, and compare them to evaluate biases. The split ratio is 1:1:8 for the training, validation and test sets, and we repeat the experiments 10 times to report the average results. The experimental setting is detailed in Appendix \ref{prelim}.
\vspace{-2mm}
\begin{table}[htbp]
\caption{Sensitive Correlation Before/After GNN Training. The results are shown in percentage ($\%$). Subscript 1 represents the minor sensitive attribute while 2 denotes the major.}
\label{sens_cor_four_case}
\vspace{-4mm}
\setlength{\abovecaptionskip}{-2mm}
\setlength{\belowcaptionskip}{-4mm}
\resizebox{\linewidth}{!}{
\begin{tabular}{cccccccccc}
\hline
\multirow{2}{*}{\textbf{Cases}} & \multicolumn{4}{c}{\textbf{Before Training}} & \multicolumn{5}{c}{\textbf{After Training}} \\
\cmidrule(lr){2-5}
\cmidrule(lr){6-10}
&\textbf{$\boldsymbol{\mathcal{R}^2_{n}(\mathcal{X}, S)}$} & \textbf{$\boldsymbol{\mathcal{R}^2_{n}(\mathcal{X}_{N}, S)}$} &
\textbf{$\boldsymbol{h_{sens1}}$} &
\textbf{$\boldsymbol{h_{sens2}}$} &
\textbf{\bm{$\Delta SP_1$}}&
\textbf{\bm{$\Delta EO_1$}} &
\textbf{\bm{$\Delta SP_2$}} &
\textbf{\bm{$\Delta EO_2$}} &
\textbf{$\boldsymbol{\mathcal{R}^2_{n}(\hat{Y}, S)}$}\\ \hline
\textbf{BFDT}
& 72.76 & 72.68 & 56.52 & 65.22
& 22.83$\pm 6.0$ & 33.18$\pm 5.6$ & 21.95$\pm 6.3$ & 21.65$\pm 4.7$ & 17.70$\pm 0.0$\\ \hline
\textbf{DFBT}
& 29.50 & 4.46 & 66.89 & 79.70
& 5.17$\pm 1.7$ & 5.22$\pm 1.7$ & 12.80$\pm 2.3$ & 14.25$\pm 2.9$ & 12.70$\pm 0.0$\\ \hline
\textbf{BFBT}
& 72.76 & 72.68 & 66.89 & 79.70
& 4.30$\pm 3.0$ & 4.37$\pm 2.5$ & 47.94$\pm 5.7$ & 29.90$\pm 3.9$ & 22.57$\pm 0.0$\\ \hline
\textbf{DFDT}
& 29.50 & 4.46 & 56.53 & 65.22
& 3.35$\pm 1.6$ & 5.33$\pm 3.0$ & 12.07$\pm 2.9$ & 14.13$\pm 4.1$ & 12.57$\pm 0.0$\\ \hline
\end{tabular}}
\end{table}
\vspace{-3mm}
\textbf{Case 1: Biased Features and Debiased Topology (BFDT)} In Case 1, we feed biased non-sensitive features and a debiased topology into GNNs. As shown in Table \ref{sens_cor_four_case}, there is only a small difference before and after masking the sensitive attributes. After MP, biases from both sensitive attributes are projected into the predicted results.

\textbf{Case 2: Debiased Features and Biased Topology (DFBT)} In Case 2, we focus on debiased features and a biased topology. From Table \ref{sens_cor_four_case}, there is a relatively large difference before and after masking; the sensitive correlation is much lower, and the results are fairer on both sensitive attributes and more consistent.

\textbf{Case 3: Biased Features and Topology (BFBT)} In Case 3, we shift to biased non-sensitive features and topology. In Table \ref{sens_cor_four_case}, a higher sensitive correlation is introduced. We notice that the fairness gaps of the major sensitive attribute are much larger than those of the minor, whose gaps are even slightly lower than in Case 2. On average, the sensitive attributes are more biased.

\textbf{Case 4: Debiased Features and Topology (DFDT)} In Case 4, we turn to debiased features and topology. In Table \ref{sens_cor_four_case}, the fairness performance is close to Case 2 and the sensitive correlation reflected in the final prediction is slightly lower, which indicates that the majority of biases stem from biased node features; graph topologies and MP mainly play complementary roles in amplifying biases.

\textbf{Discussion} The case analyses elucidate that node features, graph topologies and MP are all crucial sources of bias, which motivates us to simultaneously debias features and topologies under MP at the pre-processing stage, instead of debiasing them separately without taking MP into consideration.
\subsubsection{\textbf{Sensitive Information Leakage}}
Our work considers a universal situation where, out of fairness awareness or legal compliance, data owners (e.g., research institutions or companies) release masked non-sensitive features and incomplete graph topologies to the public for specific purposes. We argue that even though the usual procedures pre-handle the features and topologies before release, the sensitive information leakage problem still exists: sensitive attributes can be inferred from the aforementioned resources, which further amplifies existing inequalities and causes severe social consequences.
We assume that the adversaries can access the pre-processed $\tilde{\mathcal{X}},\tilde{A}$ and labels $\mathcal{Y}$, and then obtain partial sensitive attributes $S_p$ of specific individuals as prior knowledge, where $p\in\{0.3125\%,0.625\%,1.25\%,2.5\%,5\%,10\%,20\%\}$. Finally, they simply use a 1-layer GCN with a linear classifier as the attack model to identify the sensitive memberships, i.e., $S_m$ or $S_n$, of the remaining nodes. We adopt the synthetic data again to explore sensitive information leakage with or without fairness intervention.

As Figure \ref{fig:att_syn} shows, even though the adversaries only acquire very few $S_m$ or $S_n$ labels, they can successfully infer the rest from all input pairs, since highly associated features are retained and become more sensitively correlated after MP. We note that $S_m$ is more biased than $S_n$. While there are turning points in the fewer-label cases due to performance instability, as more sensitive labels are collected, attack accuracy and sensitive correlation increase. Overall, the BFBT pair always introduces the most bias and sensitive information leakage,
the BFDT pair leads to lower sensitive correlation and attack accuracy, and the performances of the DFBT and DFDT pairs are close, which indicates that, compared to biased features, a biased topology contributes less to inference attacks on $S_m$ and $S_n$. Once fairness interventions for features and topology are added, the overall attack accuracy consistently decreases by about 10$\%$, and the sensitive correlation decreases by almost 50$\%$ and 40$\%$ for $S_m$ and $S_n$, respectively.

\textbf{Discussion} The above analysis illustrates that even simple fairness interventions can mitigate attribute inference attacks under black-box settings. Generally, more advanced debiasing methods result in less sensitive information leakage, which fits our intuition and motivates us to design more effective debiasing techniques with limited sensitive information leakage.
\begin{figure}[htbp]
\vspace{-4mm}
\setlength{\abovecaptionskip}{1mm}
\setlength{\belowcaptionskip}{-4mm}
\centering
\subfloat[Attack Accuracy]{
\includegraphics[width=0.245\textwidth]{acc_syn_re_m2-2.png}
\includegraphics[width=0.245\textwidth]{acc_syn_re_m1-2.png}
\label{acc_major}}
\hfill
\vspace{-4mm}
\subfloat[Sensitive Correlation]{
\includegraphics[width=0.245\textwidth]{sensitive_dcor_syn_re_m2-2.png}
\includegraphics[width=0.245\textwidth]{sensitive_dcor_syn_re_m1-2.png}
\label{dcor_major}}
\caption{Attribute Inference Attack Under Different Cases}
\label{fig:att_syn}
\end{figure}
\vspace{-2mm}
\subsection{Problem Statement}
Based on the two empirical studies, we define the formal problem as follows:
\textit{{Given an undirected attributed graph $\mathcal{G}=\left(\mathcal{V}, \mathcal{E}, \mathcal{X}\right)$ with sensitive attributes $S$, non-sensitive features $\mathcal{X}_N$, graph topology $A$ and node labels $\mathcal{Y}$, we aim to learn pre-debiasing functions $\Phi_f(\mathcal{X})=\hat{X}$ and $\Phi_t(A)=\hat{A}$ and thereby construct a fair and model-agnostic classifier $f_{\theta}(\hat{X}, \hat{A})=\hat{Y}$ with limited sensitive information leakage.}}
\section{Framework Design}
In this section, we provide an overview of MAPPING, which consecutively debiases node features and graph topologies, and then detail each module to tackle the formulated problem.

\subsection{Framework Overview}

\begin{figure*}[!htbp]
\centering
\vspace{-4mm}
\setlength{\abovecaptionskip}{1mm}
\setlength{\belowcaptionskip}{-4mm}
\includegraphics[width=16cm]{MAPPING.png}
\caption{\textbf{The Framework Overview of MAPPING with Feature and Topology Debiasing}}
\label{MAPPING_overview}
\end{figure*}

MAPPING consists of two modules to debias node features and graph topologies. The feature debiasing module contains $1)$ pre-masking: masking sensitive attributes and their highly associated features based on hard rules to trade off utility and fairness, and $2)$ reweighting: reweighting the pre-masked features $\Tilde{\mathcal{X}}$ to restrict attribute inference attacks via adversarial training and $dCov$-based fairness constraints. With the debiased features $\hat{X}$ in hand, the topology debiasing module includes $1)$ fair MP: initializing equalized weights $W_0$ for the existing edges $\mathcal{E}$ and then employing a $dCov$-based fairness constraint to mitigate sensitive privacy leakage, and
$2)$ post-pruning: pruning edges whose weights $\hat{W}_{\mathcal{E}}$ fall below the pruning threshold $r_p$ to obtain $\Tilde{W}_{\mathcal{E}}$, and then returning the debiased adjacency matrix $\hat{A}$. The overview of MAPPING is shown in Figure \ref{MAPPING_overview}, and the algorithm is detailed in Algorithm \ref{alg2} in Appendix \ref{code}.
\vspace{-2mm}
\subsection{Feature Debiasing Module}

\subsubsection{\textbf{Pre-masking}}

Simply removing sensitive attributes, i.e., fairness through blindness~\cite{fairthroughaware}, cannot sufficiently protect specific demographic groups from discrimination and attacks: features $\mathcal{X}_{N}$ that are highly associated with $S$ can reveal sensitive membership or be used to infer $S$ even without access to it~\cite{fairthroughaware, main_attr_infer_attack}. Furthermore,
without fairness intervention, $\mathcal{Y}$ is often reflective of societal inequalities and stereotypes~\cite{societal_bias_label}, and $\hat{Y}$ ineluctably inherits such biases, since it is predicted by minimizing the difference between $\mathcal{Y}$ and $\hat{Y}$. Inspired by these points, we first design a pre-masking method to mask $S$ and the highly associated $\mathcal{X}_N$ with the power of $dCor$. However, we cannot simply discard all highly associated $\mathcal{X}_N$, since some of them may carry useful information and contribute to node classification. Hence, we must carefully trade off fairness against utility.

Considering the above factors, we first compute $\mathcal{R}_{n}^{2}(\mathcal{X}_i, S)$ and $\mathcal{R}_{n}^{2}(\mathcal{X}_i, \mathcal{Y})$, $i\in[1,\dots,d]$. We set a distribution ratio $r$ (e.g., $20\%$) to pick the top $x$ features most related to $S$ based on $\mathcal{R}_{n}^{2}(\mathcal{X}, S)$ and the $x$ features least related to $\mathcal{Y}$ based on $\mathcal{R}_{n}^{2}(\mathcal{X}, \mathcal{Y})$. We then take the intersection of these two sets to acquire the features that are highly associated with $S$ yet contribute little to accurate node classification. Next, we use a sensitive threshold $r_s$ (e.g., $70\%$) to filter out features whose $\mathcal{R}_{n}^{2}(\mathcal{X}_i, S)<r_s$, retaining the very highly associated ones. Finally, we take the union of these two sets as the features to mask, which guarantees:
\begin{itemize}
\item \textit{We cut off very highly associated $\mathcal{X}_{N}$ to pursue fairness;}
\item \textit{Besides this hard rule, to preserve accuracy, we ensure that only features that are highly associated with $S$ and contribute less to the final prediction are masked;}
\item \textit{The remaining features form the pre-masked features $\Tilde{\mathcal{X}}$.}
\end{itemize}
More details are given in Algorithm \ref{alg1} in Appendix \ref{code}; a condensed sketch of the selection logic follows below.
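The following is a minimal sketch of the selection rules above, assuming the \texttt{dcor2} helper sketched in the preliminaries, column-wise features, and $(n,1)$-shaped $S$ and $\mathcal{Y}$; the thresholds and names are illustrative.
\begin{verbatim}
import numpy as np

def premask(X, S, Y, r=0.2, r_s=0.7):
    d = X.shape[1]
    dcor_s = np.array([dcor2(X[:, [i]], S) for i in range(d)])
    dcor_y = np.array([dcor2(X[:, [i]], Y) for i in range(d)])
    x = int(r * d)
    top_s = set(np.argsort(-dcor_s)[:x])    # most related to S
    low_y = set(np.argsort(dcor_y)[:x])     # least related to Y
    very_high = set(np.where(dcor_s >= r_s)[0])
    masked = (top_s & low_y) | very_high    # union of the two rules
    keep = [i for i in range(d) if i not in masked]
    return X[:, keep]                       # pre-masked features
\end{verbatim}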
Pre-masking benefits bias mitigation and privacy preservation. Besides, it reduces the dimension of the node features, thereby saving training costs, which is particularly effective on large-scale datasets with higher dimensions. However, partially masking $S$ and its highly associated $\mathcal{X}_{N}$ may not be adequate: prior studies~\cite{fairclasswithoutsens,without_sens_learning} have demonstrated the feasibility of estimating $S$ without access to it. To tackle this issue, we further debias the node features after pre-masking.
\vspace{-3mm}
\subsubsection{\textbf{Reweighting}}
We first assign initially equal weights $W_{f_{0}}$ to $\Tilde{\mathcal{X}}$. For feature $\Tilde{\mathcal{X}_i},i\in[1,\dots, d_m]$, where $d_m$ is the dimension of the masked features, if the corresponding $\hat{W}_{fi}$ decreases, $\Tilde{\mathcal{X}_i}$ plays a less important role in debiasing features, and vice versa. Therefore, the first objective is to minimize:
\begin{equation}
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
\min_{\theta_{r}}\mathcal{L}_r=\|\Tilde{\mathcal{X}}-\hat{X}\|_{2}
\end{equation}
where $\hat{X}=f_{\theta_{\hat{w}}}(\Tilde{\mathcal{X}})=\Tilde{\mathcal{X}}\hat{W}_f$.

In addition, we restrict the weight changes and control the sparsity of the weights with an $L1$ regularizer:
\begin{equation}
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
\min_{\theta_{\hat{w}}}\mathcal{L}_{\hat{w}}=\|\hat{W}_f\|_{1}
\end{equation}

Next, we introduce the $dCov$-based fairness constraints. Since the same technique is utilized in fair MP, we unify them here to avoid repetition. The ideal cases are $\hat{X} \perp S$ and $\hat{Y} \perp S$, which indicate that sensitive attribute inference derived from $\hat{X}$ and $\hat{Y}$ is close to random guessing. Zafar et al.~\cite{fairtreat17} and Dai et al.~\cite{dai2021sayno_fair} employ $Cov$-based constraints to learn fair classifiers. However, $Cov$ needs pairwise computation and ranges from $-\infty$ to $\infty$, which requires adding an extra absolute value; moreover, $Cov=0$ cannot ensure independence but only reflects uncorrelatedness. Cho et al.~\cite{fairmutual} and Roh et al.~\cite{fairmutualtrain} use $MI$-based constraints for fair classification; though $MI$ uncovers mutual independence between random variables, it cannot get rid of dimensional restrictions. $dCov$ overcomes these deficiencies: it reveals independence, is larger than $0$ in dependent cases, and, above all, breaks dimensional limits and thereby saves computation costs. Hence, we leverage a $dCov$-based fairness constraint in our optimization, written as:
\begin{equation}
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
\mathcal{L}_{s}=\mathcal{V}_{n}^2(\hat{X}, S)
\end{equation}

Finally, we use adversarial training~\cite{fair_ad_learn} to mitigate sensitive privacy leakage by maximizing the classification loss:
\begin{equation}
\label{sens}
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
\max_{\theta_{a}}\mathcal{L}_{a}=-\frac{1}{n}\sum_{i=1}^{n}\left[S_i\log(\hat{S}_i)+(1-S_i)\log(1-\hat{S}_i)\right]
\end{equation}
where $\hat{S} = f_{\theta_{s}}(\hat{X})$.

Cho et al.~\cite{fairmutual} empirically show that adversarial training suffers from significant stability issues, and $Cov$-based constraints are commonly adopted to alleviate such instability. In this paper, we use $dCov$ to achieve the same goal.
\vspace{-1mm}
\subsubsection{\textbf{Final Objective Function of Feature Debiasing}}
We now have $f_{\theta_{r}}$ to minimize the feature reconstruction loss, $f_{\theta_{\hat{w}}}$ to restrict the weights, $f_{\theta_{a}}$ to debias adversarially, and $\mathcal{L}_{s}$ to diminish sensitive information disclosure. In summary, the final objective function of the feature debiasing module is:
\begin{equation}
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
\min_{\theta_{r},\theta_{\hat{w}},\theta_{a}}\mathcal{L}_r+\lambda_1 \mathcal{L}_{\hat{w}} + \lambda_2 \mathcal{L}_s - \lambda_3 \mathcal{L}_a
\end{equation}
where $\theta_r$, $\theta_{\hat{w}}$ and $\theta_a$ denote the parameters of the corresponding objective functions, and the coefficients $\lambda_1$, $\lambda_2$ and $\lambda_3$ control weight sparsity, sensitive correlation and adversarial debiasing, respectively.
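To make the optimization concrete, below is a condensed sketch of one gradient step on this objective, assuming PyTorch, per-feature weights squashed into $[0,1]$, a differentiable $dCov$ term (\texttt{dcov2\_torch}, assumed to be implemented along the lines of Eq. (2)), and an adversary network ending in a sigmoid; all names and hyperparameter values are illustrative, not our exact implementation.
\begin{verbatim}
import torch
import torch.nn.functional as F

def feature_debias_step(X_m, S, w, adversary, opt,
                        lam1=0.1, lam2=1.0, lam3=0.5):
    X_hat = X_m * torch.sigmoid(w)        # reweighted features
    L_r = torch.norm(X_m - X_hat, p=2)    # reconstruction loss
    L_w = w.abs().sum()                   # L1 sparsity on weights
    L_s = dcov2_torch(X_hat, S)           # dCov fairness constraint
    L_a = F.binary_cross_entropy(adversary(X_hat).squeeze(),
                                 S.float().squeeze())
    loss = L_r + lam1 * L_w + lam2 * L_s - lam3 * L_a
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
\end{verbatim}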
\vspace{-2mm}
\subsection{Topology Debiasing Module}

\subsubsection{\textbf{Fair MP}}

Prior work~\cite{dai2021sayno_fair} empirically demonstrates that biases can be magnified by graph topologies and MP, and FMP~\cite{fmp_topology_bias} investigates how topologies enhance biases during MP. Our empirical analysis likewise reveals that sensitive correlations increase after MP. We therefore propose a novel debiasing method that debiases topologies jointly under MP, which also provides post-pruning with explanations of edge importance.
First, we initialize equal weights $W_{{\mathcal{E}_0}}$ for $\mathcal{E}$, then feed $W_{{\mathcal{E}}_0}$, $A$ and $\hat{X}$ into GNN training. The main goal of node classification is accuracy, which equates to minimizing:
\begin{equation}
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
\min_{\theta_{\mathcal{C}}}\mathcal{L}_{\mathcal{C}}=-\frac{1}{n}\sum_{i=1}^{n}\left[\mathcal{Y}_i\log(\hat{Y}_i)+(1-\mathcal{Y}_i)\log(1-\hat{Y}_i)\right]
\end{equation}

As mentioned before, we add a $dCov$-based fairness constraint to the objective function to mitigate sensitive attribute inference attacks derived from $\hat{Y}$, defined as:
\begin{equation}
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
\mathcal{L}_{\mathcal{F}}=\mathcal{V}^2_{n}(\hat{Y},S)
\end{equation}
\vspace{-1mm}
\subsubsection{\textbf{Final Objective Function of Topology Debiasing}}
We now have $f_{\theta_{\mathcal{C}}}$ to minimize the node classification loss and $\mathcal{L}_{\mathcal{F}}$ to restrain sensitive information leakage. The final objective function of the topology debiasing module can be written as:
\begin{equation}
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
\min_{\theta_{\mathcal{C}}}\mathcal{L}_e = \mathcal{L}_{\mathcal{C}} + \lambda_4\mathcal{L}_{\mathcal{F}}
\end{equation}
where $\theta_{\mathcal{C}}$ denotes the parameters of the node classifier $\mathcal{C}$, and the coefficient $\lambda_4$ controls the balance between utility and fairness.
\subsubsection{\textbf{Post-pruning}}
After the learnable weights $\hat{W}_{\mathcal{E}}$ have been updated to construct a fair node classifier with limited sensitive information leakage, we apply a hard rule that prunes the edges $e_{i\cdot}\in\mathcal{E}$ whose weights $\hat{W}_{e_{i\cdot}}$ fall below the pruning threshold $r_p$. Note that since we target undirected graphs, if any two nodes $i$ and $j$ are connected, $\hat{W}_{e_{ij}}$ should equal $\hat{W}_{e_{ji}}$, which assures $A_{ij}=A_{ji}$ after pruning. Mathematically,
\begin{equation}
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
\begin{aligned}
& \Tilde{W}_{ei}=
\begin{cases}
1, & \mbox{if }\hat{W}_{e_{i\cdot}} \ge r_p\\
0, & \mbox{otherwise}
\end{cases} \\
& s.t.,\hat{W}_{e_{ij}}=\hat{W}_{e_{ji}}, \mbox{ if } e_{ij}\in\mathcal{E}
\end{aligned}
\end{equation}

Removing edges is practical, since real-world edges can be noisy and can exacerbate biases and leak privacy under MP. $\hat{W}_{e_i}$ explains which edges contribute less or more to fair node classification; we simply discard the identified uninformative and biased edges, and the rest form the new $\hat{A}$.
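A small sketch of this thresholding step, assuming a dense adjacency matrix and a matrix of learned edge weights; purely illustrative.
\begin{verbatim}
import numpy as np

def post_prune(A, W_edge, r_p=0.5):
    # Symmetrize so that W_ij == W_ji on the undirected graph
    W_sym = (W_edge + W_edge.T) / 2
    keep = (W_sym >= r_p).astype(A.dtype)  # 1 keeps an edge, 0 prunes
    return A * keep                        # debiased adjacency matrix
\end{verbatim}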
\vspace{-2mm}
\subsection{Extension to Multiple Sensitive Attributes} Since $dCor$ and $dCov$ are not limited by dimensionality, our work extends easily to multiple sensitive attributes: the pre-masking strategy and the $dCov$-based fairness constraints $\mathcal{L}_s$ and $\mathcal{L}_{\mathcal{F}}$ can be calculated directly as in the previous sections. As for adversarial training, the binary classification loss cannot be trivially extended to multiple sensitive labels. In the reweighting strategy above, we simply adopted $f_{\theta_s}$ to map the estimated masked features of dimension $d_m$ to a single predicted sensitive attribute; in the multiple case, we instead map the features to predicted multiple sensitive attributes with the desired dimension $d_s$. Next, we leverage Eq. (\ref{sens}) to compute the classification loss for each sensitive attribute and then take the average (or another aggregate) of all the losses, as sketched below. The remaining steps are exactly the same as described before.
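A minimal sketch of this multi-attribute adversarial loss, assuming one binary prediction head per sensitive attribute and simple averaging; names are illustrative.
\begin{verbatim}
import torch
import torch.nn.functional as F

def multi_sens_adv_loss(S_hat, S):
    # S_hat, S: (n, d_s) predicted probabilities and 0/1 labels
    per_attr = F.binary_cross_entropy(S_hat, S.float(),
                                      reduction="none").mean(dim=0)
    return per_attr.mean()  # average Eq. (7) over all d_s attributes
\end{verbatim}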
\vspace{-1mm}
\section{Experiments}
In this section, we conduct a series of experiments to demonstrate the effectiveness and flexibility of MAPPING with different GNN variants. In particular, we address the following questions:
\vspace{-1mm}
\begin{itemize}
\item \textbf{Q1:} Does MAPPING effectively and efficiently mitigate the feature and topology biases hidden behind graphs?
\item \textbf{Q2:} Does MAPPING flexibly adapt to different GNNs?
\item \textbf{Q3:} Does MAPPING outperform existing pre-processing and in-processing algorithms for fair node classification?
\item \textbf{Q4:} Can MAPPING achieve better trade-offs between utility and fairness while mitigating sensitive information leakage?
\item \textbf{Q5:} How does debiasing contribute to fair node classification?
\end{itemize}
\vspace{-2mm}
\subsection{Experimental Setup}
In this subsection, we first describe the datasets, metrics and baselines, and then summarize the implementation details.

\subsubsection{\textbf{Datasets}}
We validate MAPPING on three real-world datasets, namely German, Recidivism and Credit~\cite{nifty}\footnote{https://github.com/chirag126/nifty}.
The detailed statistics are summarized in Table \ref{data_summary}, with further details in Appendix \ref{data}.
\begin{table}[htbp]
\caption{\textbf{Statistics Summary of Datasets}}
\vspace{-4mm}
\setlength{\abovecaptionskip}{-2mm}
\setlength{\belowcaptionskip}{-4mm}
\resizebox{\linewidth}{!}{
\centering
\begin{tabular}{llll}
\toprule
\textbf{Dataset} & \textbf{German} & \textbf{Recidivism} & \textbf{Credit} \\
\midrule
\textbf{\# Nodes} & 1000 & 18,876 & 30,000 \\
\textbf{\# Edges} & 22,242 & 321,308 & 1,436,858 \\
\textbf{\# Features} & 27 & 18 & 13 \\
\textbf{Sensitive Attr.} & Gender (male/female) & Race (Black/white) & Age ($\le 25$/$>25$) \\
\textbf{Label} & Good/bad credit & Bail/no bail & Default/no default \\
\bottomrule
\end{tabular}}
\label{data_summary}
\vspace{-4mm}
\end{table}
\subsubsection{\textbf{Evaluation Metrics}}
We adopt accuracy (ACC), F1 and AUROC to evaluate utility, and $\Delta SP$ and $\Delta EO$ to measure fairness.

\subsubsection{\textbf{Baselines}}
We investigate the effectiveness and flexibility of MAPPING on three representative GNNs, namely GCN, GraphSAGE~\cite{GraphSAGE} and GIN~\cite{GIN}, and compare MAPPING with three state-of-the-art debiasing models.

\textbf{Vanilla} GCN leverages a convolutional aggregator to sum propagated features from local neighborhoods. GraphSAGE aggregates node features from sampled local neighbors, which makes it more scalable and able to handle unseen nodes. GIN emphasizes the expressive power of graph-level representations so as to satisfy the Weisfeiler-Lehman graph isomorphism test~\cite{WL_test}.

\textbf{State-of-the-art (SODA) Debiasing Models} We choose one pre-processing model, namely EDITS~\cite{dong2022edits}, and two in-processing models, namely FairGNN~\cite{dai2021sayno_fair} and NIFTY~\cite{nifty}.
Since NIFTY~\cite{nifty} targets fair node representations, we evaluate the quality of its node representations on the downstream node classification task.
\vspace{-2mm}
\subsubsection{\textbf{Implementation Details}}
We keep the same experimental setting as before. The GNNs follow the same architectures as in NIFTY~\cite{nifty}. The fine-tuning processes are handled with Optuna~\cite{optuna} via grid search. For feature debiasing, we deploy a $1$-layer multilayer perceptron (MLP) for adversarial debiasing and leverage the proximal gradient descent method to optimize $W_f$. Since PyTorch Geometric only allows positive weights, we use a simple sigmoid function to map the weights into $[0,1]$. We set the learning rate to $0.001$ and the weight decay to 1e-5 for all three datasets, and set the number of training epochs to $500$. For topology debiasing, we adopt a $1$-layer GCN to mitigate biases under MP and set the number of training epochs to $1000$ for all datasets; for GIN on Credit, since fewer epochs achieve comparable performance, we use early stopping to avoid overfitting. The other settings are the same as in feature debiasing. For GNN training, we utilize the split setting of NIFTY~\cite{nifty}; we fix the hidden dimension to $16$, the dropout to $0.2$, the number of training epochs to $1000$, the weight decay to 1e-5, and the learning rate to a value from $\{0.01,0.03\}$ for all GNNs. For fair comparison, we rigorously follow the settings of the SODA models~\cite{dai2021sayno_fair,nifty,dong2022edits}. The remaining hyperparameter settings are detailed in Appendix \ref{setting}.

The attack setting is the same as before for evaluating sensitive information leakage. As before, we repeat each experiment $10$ times with $10$ different seeds and report the average results.
\vspace{-2mm}

\subsection{Performance Evaluation}
\label{perf}
In this subsection, we evaluate the performance by addressing the first four questions raised at the beginning of this section.

\subsubsection{\textbf{Debiasing Effectiveness and Efficiency}}
To answer \textbf{Q1}, we first compute $\Delta$SP and $\Delta$EO before and after debiasing to evaluate the debiasing effectiveness, and then provide a time complexity analysis to illustrate the efficiency of MAPPING.

\textbf{Effectiveness} The results in Table \ref{performance} demonstrate the impressive debiasing power of MAPPING in node classification tasks. Compared to vanilla GNNs, $\Delta SP$ and $\Delta EO$ in Table \ref{performance} all decrease; the GNNs shed especially much bias on German compared to Recidivism and Credit, where GCN introduces more biases than the other GNN variants but also removes more biases in most cases.

\textbf{Efficiency} Once pre-debiasing is completed, the remaining training time purely reflects the efficiency of the vanilla GNNs. Generally, the combined running time of pre-debiasing and GNN training is lower than that of in-processing methods, which introduce extra computational costs, e.g., complex objective functions and/or iterative operations. Even on the small-scale German dataset, the whole running time of MAPPING is always below $150$ seconds for $10$ trials, while FairGNN~\cite{dai2021sayno_fair} and NIFTY~\cite{nifty} (excluding counterfactual fairness computation) are $1.17$-$9.28$ times slower. Since we directly use the debiased datasets of EDITS~\cite{dong2022edits}, there is no pre-debiasing comparison, but EDITS is $1.00$-$6.43$ times slower in running GNNs. We note that MAPPING modifies more features and edges yet still achieves competitive debiasing and classification performance. Concerning time complexity, the key lies in the $dCov$ and $dCor$ calculations, which cost $\mathcal{O}(|\mathcal{V}|^{2})$~\cite{dcor_complexity}. For feature debiasing, the time complexity of pre-masking is $\mathcal{O}(d|\mathcal{V}|^{2})$, where $d$ is the dimension of the original features; since $d$ is small, this is comparable to $\mathcal{O}(|\mathcal{V}|^{2})$. The time complexity of reweighting per training epoch is $2\mathcal{O}(|\mathcal{V}|^{2})+\mathcal{O}(d)$, which is likewise comparable to $\mathcal{O}(|\mathcal{V}|^{2})$. As for topology debiasing, fair MP costs $\mathcal{O}(|\mathcal{V}|^{2})+\mathcal{O}(|\mathcal{V}|)$ per training epoch and post-pruning costs $\mathcal{O}(|\mathcal{E}|)$. As suggested in \cite{fast_dcov}, the time complexity of $dCov$ can be further reduced to $\mathcal{O}(|\mathcal{V}|\log(|\mathcal{V}|))$ in univariate cases.
\subsubsection{\textbf{Framework Flexibility and Model Performance}}

To answer \textbf{Q2} to \textbf{Q4}, we compare MAPPING against the other baselines and launch attribute inference attacks to investigate how well MAPPING mitigates sensitive information leakage.

\textbf{Flexibility} To answer \textbf{Q2}: from Table \ref{performance}, we observe that compared to vanilla GNNs, MAPPING improves utility in most cases, especially when training German with GIN, and Recidivism with GCN and GraphSAGE. We argue that MAPPING removes features that contribute little to node classification and meanwhile deletes redundant and noisy edges. Together with the debiasing analysis, we conclude that MAPPING can flexibly adapt to diverse types of GNN variants.

\textbf{Model Comparison and Trade-offs} To answer \textbf{Q3}, we observe that MAPPING achieves more competitive performance than the other SODA models. In terms of utility, MAPPING outperforms the baselines on one or more utility metrics in all cases; it even outperforms all other baselines on all metrics when training German with GIN and Recidivism with GCN. With respect to fairness, on most occasions all debiasing models can effectively alleviate biases, and MAPPING outperforms the others. Moreover, MAPPING is more stable than the rest. Overall, we conclude that MAPPING achieves better utility-fairness trade-offs than the baselines.
\begin{figure}[htbp]
\vspace{-6mm}
\centering
\subfloat[]{
\includegraphics[width=0.48\linewidth]{acc_g_re_sin-2.png}
\label{acc_german}}
\subfloat[]{
\includegraphics[width=0.48\linewidth]{sensitive_dcor_g_re_sin-2.png}
\label{sens_dcor_german}}
\vspace{-3mm}
\caption{Attribute Inference Attack Under Different Inputs}
\vspace{-3mm}
\label{fig:att_german}
\end{figure}
\textbf{Sensitive Information Leakage} To answer \textbf{Q4}, since German shows the largest bias reduction, we use it to explore sensitive information leakage under different input pairs. As shown in Figure \ref{fig:att_german}, since all sensitive attributes are masked and MAPPING only prunes a small portion of the edges in German, the attack accuracy and sensitive correlation of the BFBT pair are quite close to those of the BFDT pair, while DFDT's performance is slightly lower than the DFBT pair's. This further verifies the aforementioned empirical findings and elucidates that MAPPING can effectively confine attribute inference attacks even when adversaries can collect large numbers of sensitive labels. Note that there are some performance drops; we attribute them to the combined effects of data imbalance and more stable performance once more sensitive labels are collected.
\vspace{-2mm}
\subsection{\textbf{Extension to Multiple Sensitive Attributes}} We set the main sensitive attribute to gender and the minor one to age $\left(\le 25/>25\right)$. We again adopt German to perform the evaluation and explore sensitive information leakage. The details follow the same pattern as in Subsection \ref{perf}, and more results are shown in Appendix \ref{multi}.
796
+
797
+ \begin{table*}[!htbp]
798
+ \caption{\textbf{Node classification performance comparison on German, Recidivism and Credit.}}
799
+ \resizebox{\linewidth}{!}{
800
+ \begin{tabular}{ccccccccccccccccc}
801
+ \hline
802
+
803
+ \multirow{2}{*}{\textbf{GNN}} & \multirow{2}{*}{\textbf{Framework}} & \multicolumn{5}{c}{\textbf{German}} & \multicolumn{5}{c}{\textbf{Recidivism}} & \multicolumn{5}{c}{\textbf{Credit}} \\
804
+ \cmidrule(lr){3-7}
805
+ \cmidrule(lr){8-12}
806
+ \cmidrule(lr){13-17}
807
+ & & \textbf{ACC} & \textbf{F1} & \textbf{AUC} & \textbf{\bm{$\Delta$}SP} & \textbf{\bm{$\Delta$}EO}
808
+ & \textbf{ACC} & \textbf{F1} & \textbf{AUC} & \textbf{\bm{$\Delta$}SP} & \textbf{\bm{$\Delta$}EO}
809
+ & \textbf{ACC} & \textbf{F1} & \textbf{AUC} & \textbf{\bm{$\Delta$}SP} & \textbf{\bm{$\Delta$}EO}
810
+ \\ \hline
811
+ \multirow{5}{*}{\textbf{GCN}}
812
+ & Vanilla
813
+ & \textbf{72.00}$\pm 2.8$ & 80.27$\pm 2.5$ & \textbf{74.34}$\pm 2.4$ & 31.42$\pm 9.5$ & 22.56$\pm 6.2$
814
+ & 87.54$\pm 0.1$ & 82.51$\pm 0.1$ & 91.11$\pm 0.1$ & 9.28$\pm 0.1$ & 8.19$\pm 0.3$ & 76.19$\pm 0.4$ & 84.22$\pm 0.4$ & \textbf{73.34}$\pm 0.0$ & 9.00$\pm 1.2$ & 6.12$\pm 0.9$ \\
815
+
816
+ & FairGNN
817
+ & 67.80$\pm 11.0$ & 74.10$\pm 17.6$ & 73.08$\pm 2.0$ & 24.37$\pm 8.7$ & 16.99$\pm 6.8$
818
+ & 87.50$\pm 0.2$ & 83.40$\pm 0.2$ & 91.53$\pm 0.1$ & 9.17$\pm 0.2$ & 7.93$\pm 0.4$
819
+ & 73.78$\pm 0.1$ & 82.01$\pm 0.0$ & 73.28$\pm 0.0$ & 12.29$\pm 0.6$ & 10.04$\pm 0.7$ \\
820
+
821
+ & NIFTY
822
+ & 66.68$\pm 8.6$ & 73.59$\pm 13.4$ & 70.59$\pm 4.8$ & 15.65$\pm 9.2$ & 10.58$\pm 7.3$ & 76.67$\pm 1.8$ & 69.09$\pm 0.9$ & 81.27$\pm 0.4$ & 3.11$\pm 0.4$ & 2.78$\pm 0.5$ & 73.33$\pm 0.1$ & 81.62$\pm 0.1$ & 72.08$\pm 0.1$ & 11.63$\pm 0.2$ & 9.32$\pm 0.2$ \\
823
+
824
+ & EDITS
825
+ & 69.80$\pm 3.2$ & 80.18$\pm 2.2$ & 67.57$\pm 6.0$ & 4.85$\pm 2.8$ & 4.93$\pm 2.2$ & 84.82$\pm 0.8$ & 78.56$\pm 1.1$ & 87.42$\pm 0.7$ & 7.23$\pm 0.3$ & 4.43$\pm 0.7$ & 75.20$\pm 1.7$ & 84.11$\pm 2.2$ & 68.63$\pm 5.8$ & 5.33$\pm 3.8$ & 3.64$\pm 2.7$ \\
826
+
827
+ & MAPPING
828
+ & 70.84$\pm 1.8$ & \textbf{81.33}$\pm 1.3$ & 70.86$\pm 1.9$ & \textbf{4.54}$\pm 2.2$ & \textbf{4.00}$\pm 1.7$ & \textbf{88.91}$\pm 0.2$ & \textbf{84.17}$\pm 0.4$ & \textbf{93.31}$\pm 0.1$ & \textbf{2.81}$\pm 0.2$ & \textbf{0.73}$\pm 0.3$ & \textbf{76.73}$\pm 0.2$ & \textbf{84.81}$\pm 0.2$ & 73.26$\pm 0.0$ & \textbf{1.39}$\pm 0.4$ & \textbf{0.21}$\pm 0.2$ \\ \hline
829
+
830
+ \multirow{5}{*}{\textbf{GraphSAGE}}
831
+ & Vanilla
832
+ & 71.76$\pm 1.4$ & \textbf{81.86}$\pm 0.8$ & 71.10$\pm 3.1$ & 14.00$\pm 8.4$ & 7.10$\pm 4.9$
833
+ & 85.91$\pm 3.2$ & 81.20$\pm 2.9$ & 90.42$\pm 1.3$ & 4.42$\pm 3.1$ & 3.34$\pm 2.2$ & 78.68$\pm 0.8$ & 86.57$\pm 0.7$ & 74.22$\pm 0.4$ & 19.49$\pm 5.8$ & 15.92$\pm 5.5$ \\
834
+
835
+ & FairGNN
836
+ & \textbf{73.80}$\pm 1.4$ & 81.13$\pm 1.1$ & \textbf{74.37}$\pm 1.0$ & 20.94$\pm 4.0$ & 12.05$\pm 3.8$ & \textbf{87.83}$\pm 1.0$ & \textbf{83.06}$\pm 1.1$ & 91.72$\pm 0.5$ & 3.73$\pm 1.9$ & 4.97$\pm 2.7$ & 72.99$\pm 1.9$ & 81.28$\pm 1.7$ & \textbf{75.60}$\pm 0.2$ & 11.63$\pm 4.9$ & 9.59$\pm 5.0$ \\
837
+
838
+ & NIFTY
839
+ & 70.04$\pm 2.2$ & 78.77$\pm 2.5$ & 73.02$\pm 2.2$ & 16.93$\pm 8.0$ & 11.08$\pm 6.5$ & 84.53$\pm 6.5$ & 80.30$\pm 4.7$ & \textbf{91.99}$\pm 0.8$ & 5.92$\pm 1.2$ & 4.54$\pm 1.5$ & 73.64$\pm 1.5$ & 81.89$\pm 1.3$ & 73.33$\pm 0.2$ & 11.51$\pm 1.0$ & 9.05$\pm 0.9$ \\
840
+
841
+ & EDITS
842
+ & 69.76$\pm 1.5$ & 80.23$\pm 1.8$ & 69.35$\pm 1.5$ & 4.52$\pm 3.3$ & 6.03$\pm 5.5$
843
+ & 85.13$\pm 1.1$ & 78.64$\pm 1.3$ & 89.36$\pm 0.9$ & 6.75$\pm 1.0$ & 5.14$\pm 1.2$ & 74.06$\pm 2.0$ & 82.34$\pm 1.8$ & 74.12$\pm 1.3$ & 13.00$\pm 8.6$ & 11.42$\pm 9.1$ \\
844
+
845
+ & MAPPING
846
+ & 70.76$\pm 1.2$ & 81.51$\pm 0.7$ & 69.89$\pm 1.9$ & \textbf{3.73}$\pm 3.1$ & \textbf{2.39}$\pm 1.5$
847
+ & 87.30$\pm 0.8$ & 82.07$\pm 1.2$ & 91.41$\pm 0.9$ & \textbf{3.54}$\pm 1.9$ & \textbf{3.27}$\pm 1.8$ & \textbf{80.19}$\pm 0.3$ &\textbf{88.24}$\pm 0.2$ & 74.07$\pm 0.6$ & \textbf{4.93}$\pm 0.8$ & \textbf{2.57}$\pm 0.6$ \\ \hline
848
+
849
+ \multirow{5}{*}{\textbf{GIN}}
850
+ & Vanilla
851
+ & 71.88$\pm 1.5$ & 81.93$\pm 0.7$ & 67.21$\pm 10.3$ & 14.07$\pm 10.6$ & 9.78$\pm 8.2$
852
+ & \textbf{87.62}$\pm 3.7$ & \textbf{83.44}$\pm 4.5$ & \textbf{91.05}$\pm 3.1$ & 9.92$\pm 2.6$ & 7.75$\pm 2.1$ & 74.82$\pm 1.9$ & 83.31$\pm 2.1$ & 73.84$\pm 1.2$ & 9.40$\pm 4.5$ & 7.37$\pm 3.8$ \\
853
+
854
+ & FairGNN
855
+ & 65.32$\pm 10.4$ & 72.31$\pm 17.8$ & 66.07$\pm 8.7$ & 13.67$\pm 11.8$ & 10.91$\pm 11.2$ & 84.32$\pm 1.8$ & 80.30$\pm 2.1$ & 90.19$\pm 1.3$ & 8.15$\pm 2.4$ & 6.28$\pm 1.4$ & 72.22$\pm 0.5$ & 80.67$\pm 0.4$ & \textbf{74.87}$\pm 0.2$ & 12.52$\pm 3.2$ & 10.56$\pm 3.6$ \\
856
+
857
+ & NIFTY
858
+ & 64.96$\pm 5.9$ & 72.62$\pm 7.8$ & 67.70$\pm 4.1$ & 11.36$\pm 6.3$ & 10.07$\pm 6.9$ & 83.52$\pm 1.6$ & 77.18$\pm 3.1$ & 87.56$\pm 0.9$ & 6.09$\pm 0.9$ & 5.65$\pm 1.1$ & 75.88$\pm 0.7$ & 83.96$\pm 0.8$ & 72.01$\pm 0.5$ & 11.36$\pm 1.8$ & 8.95$\pm 1.5$ \\
859
+
860
+ & EDITS
861
+ & 71.12$\pm 1.5$ & 81.63$\pm 1.3$ & 69.91$\pm 1.8$ & 3.04$\pm 2.6$ & 3.47$\pm 3.3$ & 75.73$\pm 7.8$ & 65.56$\pm 10.3$ & 77.57$\pm 9.1$ & 4.22$\pm 1.4$ & 3.35$\pm 1.2$ & 76.68$\pm 0.8$ & 85.15$\pm 0.7$ & 70.91$\pm 2.0$ & 5.52$\pm 3.9$ & 4.76$\pm 2.8$ \\
862
+
863
+ & MAPPING
864
+ & \textbf{73.40}$\pm 1.2$ & \textbf{83.18}$\pm 0.5$ & \textbf{71.48}$\pm 0.6$ & \textbf{2.20}$\pm 1.4$ & \textbf{2.44}$\pm 1.7$
865
+ & 82.69$\pm 3.0$ & 78.43$\pm 2.6$ & 90.12$\pm 1.4$ & \textbf{2.54}$\pm 1.4$ & \textbf{1.63}$\pm 1.2$ & \textbf{78.28}$\pm 1.0$ & \textbf{86.67}$\pm 1.0$ & 72.00$\pm 1.6$ & \textbf{5.08}$\pm 3.6$ & \textbf{3.92}$\pm 2.9$ \\ \hline
866
+ \end{tabular}}
867
+ \label{performance}
868
+ \vspace{-4mm}
869
+ \end{table*}
870
+
887
+ \vspace{-2mm}
888
+ \subsection{\textbf{Impact Studies}} To answer \textbf{Q5}, we conduct ablation and parameter studies to explore how each debiasing process in MAPPING contributes to fair node classification. As before, we solely adopt German to illustrate the impact of each process.
889
+
890
+ \vspace{-1mm}
891
+ \subsubsection{\textbf{Ablation Studies}}
892
+ MAPPING is composed of two modules and four corresponding processes, namely pre-masking and reweighting for the feature debiasing module, and fair MP and post-pruning for the topology debiasing module. We test different MAPPING variants from a bottom-up perspective, i.e., we first train without pre-masking (w/o-msk), then without reweighting (w/o-re), and finally without the feature debiasing module (w/o-fe). The pipeline differs for topology debiasing: since post-pruning is tightly coupled with fair MP, the updated edge weights may not all equal $0$, which would require considering all possible edges. To avoid undesired computational costs, we treat training without post-pruning and training without the topology debiasing module (w/o-to) as the same case. Conversely, if we directly remove fair MP, there are no edge weights left to modify; thus we only consider training without the topology debiasing module.
893
+
894
+ As shown in Table \ref{ablation}, w/o-msk or w/o-re leads to more biases and w/o-fe results in the most biases, while w/o-to introduces less bias than the other variants. In most cases, the different variants sacrifice fairness for utility. The results verify the necessity of each module and its corresponding processes for alleviating biases, while maintaining comparable utility.
895
+ \begin{table}[!htbp]
896
+ \caption{\textbf{Ablation Studies on German}}
897
+ \resizebox{\linewidth}{!}{
898
+ \begin{tabular}{clccccc}
899
+ \hline
900
+
901
+ \textbf{GNN} & \textbf{Variants} & \textbf{ACC} & \textbf{F1} & \textbf{AUC} & \textbf{\bm{$\Delta$}SP} & \textbf{\bm{$\Delta$}EO} \\ \hline
902
+ \multirow{6}{*}{\textbf{GCN}}
903
+ & Vanilla
904
+ & 72.00$\pm 2.8$ & 80.27$\pm 2.5$ & \textbf{74.34}$\pm 2.4$ & 31.42$\pm 9.5$ & 22.56$\pm 6.2$ \\
905
+
906
+ & w/o-msk
907
+ & 71.56$\pm 0.7$ & 80.93$\pm 1.0$ & 72.60$\pm 1.6$ & 17.37$\pm 7.7$ & 13.26$\pm 6.6$ \\
908
+
909
+
910
+
911
+
912
+
913
+ & w/o-re
914
+ & 68.56$\pm 12.4$ & 73.37$\pm 22.7$ & 72.74$\pm 3.5$ & 16.82$\pm 6.5$ & 10.67$\pm 4.8$ \\
915
+
916
+ & w/o-fe
917
+ & \textbf{72.40}$\pm 2.0$ & \textbf{81.37}$\pm 1.3$ & 72.86$\pm 4.3$ & 25.70$\pm 17.0$ & 18.07$\pm 11.8$ \\
918
+
919
+
920
+
921
+ & w/o-to
922
+ & 70.44$\pm 1.6$ & 80.03$\pm 1.8$ & 72.05$\pm 1.0$ & 5.90$\pm 2.3$ & 4.20$\pm 2.3$ \\
923
+
924
+ & MAPPING
925
+ & 70.84$\pm 1.8$ & 81.33$\pm 1.3$ & 70.86$\pm 1.9$ & \textbf{4.54}$\pm 2.2$ & \textbf{4.00}$\pm 1.7$ \\ \hline
926
+
927
+ \multirow{6}{*}{\textbf{GraphSAGE}}
928
+ & Vanilla
929
+ & 71.76$\pm 1.4$ & 81.86$\pm 0.8$ & 71.10$\pm 3.1$ & 14.00$\pm 8.4$ & 7.10$\pm 4.9$ \\
930
+
931
+ & w/o-msk
932
+ & 70.52$\pm 1.9$ & 81.19$\pm 0.9$ & 69.46$\pm 3.9$ & 8.79$\pm 7.7$ & 5.16$\pm 5.0$ \\
933
+
934
+
935
+
936
+
937
+
938
+ & w/o-re
939
+ & 72.36$\pm 2.0$ & 82.34$\pm 1.1$ & 71.08$\pm 2.2$ & 8.88$\pm 6.2$ & 4.20$\pm 2.6$ \\
940
+
941
+ & w/o-fe
942
+ & \textbf{72.84}$\pm 1.1$ & \textbf{82.39}$\pm 0.7$ & \textbf{71.87}$\pm 3.0$ & 16.75$\pm 7.8$ & 9.59$\pm 5.4$ \\
943
+
944
+
945
+
946
+ & w/o-to
947
+ & 70.76$\pm 1.2$ & 81.50$\pm 0.6$ & 68.75$\pm 4.1$ & 4.94$\pm 4.8$ & 2.80$\pm 2.7$ \\
948
+
949
+ & MAPPING
950
+ & 70.76$\pm 1.2$ & 81.51$\pm 0.7$ & 69.89$\pm 1.9$ & \textbf{3.73}$\pm 3.1$ & \textbf{2.39}$\pm 1.5$ \\ \hline
951
+
952
+ \multirow{6}{*}{\textbf{GIN}}
953
+ & Vanilla
954
+ & 71.88$\pm 1.5$ & 81.93$\pm 0.7$ & 67.21$\pm 10.3$ & 14.07$\pm 10.6$ & 9.78$\pm 8.2$ \\
955
+
956
+ & w/o-msk
957
+ & 73.24$\pm 1.1$ & 83.25$\pm 0.5$ & 71.49$\pm 2.0$ & 7.22$\pm 4.7$ & 2.68$\pm 4.1$ \\
958
+
959
+
960
+
961
+
962
+
963
+ & w/o-re
964
+ & 73.96$\pm 2.1$ & 83.46$\pm 0.8$ & 70.79$\pm 4.4$ & 3.46$\pm 2.5$ & \textbf{1.68}$\pm 1.6$ \\
965
+
966
+ & w/o-fe
967
+ & \textbf{74.08}$\pm 0.8$ & \textbf{83.53}$\pm 0.2$ & \textbf{73.02}$\pm 0.6$ & 18.10$\pm 8.1$ & 9.82$\pm 7.2$ \\
968
+
969
+
970
+
971
+ & w/o-to
972
+ & 72.92$\pm 1.9$ & 82.88$\pm 1.0$ & 70.49$\pm 1.1$ & \textbf{2.19}$\pm 1.5$ & 3.21$\pm 2.2$ \\
973
+
974
+ & MAPPING
975
+ & 73.40$\pm 1.2$ & 83.18$\pm 0.5$ & 71.48$\pm 0.6$ & 2.20$\pm 1.4$ & 2.44$\pm 1.7$ \\ \hline
976
+ \end{tabular}}
977
+ \vspace{-3mm}
978
+ \label{ablation}
979
+ \end{table}
980
+
981
+ \vspace{-1mm}
982
+ \subsubsection{\textbf{Parameter Studies}}
983
+ We mainly focus on the impacts of the fairness-relevant coefficients $\lambda_2$, $\lambda_3$ for feature debiasing and $\lambda_4$ and $r_p$ for topology debiasing. The original choices are 3.50e4, 0.02, 1.29e4 and 0.65 for German; 5e4, 100, 515 and 0.72 for Recidivism; and 8e4, 100, 1.34e5 and 0.724 for Credit, respectively. Again, we employ German for illustration. Note that we fix the pruning threshold $r_p$ and only investigate the coefficients in the objective functions. We vary $\lambda_2 \in$ \{0, 1e-5, 1e-3, 1, 1e3, 1e5, 1e7\} with $\lambda_3$ and $\lambda_4$ fixed, alter $\lambda_3 \in$ \{0, 1e-7, 1e-5, 1e-3, 1, 1e3, 1e5\} with $\lambda_2$ and $\lambda_4$ fixed, and finally change $\lambda_4 \in$ \{0, 1e-5, 1e-3, 1, 1e3, 1e5, 1e7\} with $\lambda_2$ and $\lambda_3$ fixed. As shown in Appendix \ref{paras}, the different choices over wide ranges all mitigate biases, and sometimes they reach win-win situations for utility and fairness, e.g., $\lambda_2$=1e5. Moreover, they achieve better trade-offs between utility and fairness when $\lambda_2\in$ [1e3,1e5], $\lambda_3\in$ [1e-3,10] and $\lambda_4\in$ [1e3,1e5] for all GNN variants.
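+
+ The sweep itself is straightforward; the sketch below shows the $\lambda_2$ loop under German's default $\lambda_3$ and $\lambda_4$, where \texttt{train\_mapping} stands in as a hypothetical wrapper around the MAPPING pipeline and the metric keys are placeholders.
+
+ \begin{verbatim}
+ lambda2_grid = [0, 1e-5, 1e-3, 1, 1e3, 1e5, 1e7]
+ fixed = {"lambda3": 0.02, "lambda4": 1.29e4}  # German defaults
+
+ for lam2 in lambda2_grid:
+     # train_mapping is a hypothetical trainer returning the metrics.
+     metrics = train_mapping(lambda2=lam2, **fixed)
+     print(lam2, metrics["acc"], metrics["auc"],
+           metrics["dsp"], metrics["deo"])
+ \end{verbatim}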
984
+
1025
+ \section{Related Work}
1026
+ Due to page limits, we briefly summarize representative work closest to MAPPING and refer interested readers to \cite{dong_fairgraph_survey, li2022private} for extensive surveys on fair and private graph learning.
1027
+
1028
+ \textbf{Fair or Private Graph Learning}
1029
+ For fair graph learning, at the pre-processing stage, EDITS \cite{dong2022edits} is the first work to construct a model-agnostic debiasing framework based on $Was$, which reduces feature and structural biases by feature retuning and edge clipping. At the in-processing stage, FairGNN \cite{dai2021sayno_fair} first combines adversarial debiasing with $Cov$ constraints to learn a fair GNN classifier under missing sensitive attributes. NIFTY \cite{nifty} augments graphs via node, edge and sensitive attribute perturbations, and then optimizes by maximizing the similarity between the augmented graph and the original one to ensure counterfactual fairness in node representations. At the post-processing stage, FLIP \cite{burst_filter_bubble} achieves fairness by reducing graph modularity with a greedy algorithm, which takes predicted links as inputs and calculates the change in modularity after link flipping; however, this method only applies to link prediction tasks. Setting aside strict privacy protocols, existing privacy-preserving studies only partially align with fairness, e.g., \cite{Adversarial_privacy_preserving_against_infer, info_obfuscation_privacy_protection} employ attackers to launch attribute inference attacks and utilize game theory to decorrelate biases from node representations. Another line of research \cite{privacy_protection_partial_sens_attr,info_obfuscation_privacy_protection,GNN_mutual_privacy} introduces privacy constraints such as orthogonal subspaces, $Was$ or $MI$ to remove the linear or mutual dependence between sensitive attributes and node representations, defending against attribute inference or link-stealing attacks.
1030
+
1031
+ \textbf{Interplays between Fairness and Privacy on Graphs}
1032
+ Since little prior research addresses the interactions between fairness and privacy on graphs, i.e., whether they act as rivals, friends, or both, we first enumerate representative research on i.i.d. data.
1033
+ One research direction explores the privacy risks and protection of fair models. Chang et al. \cite{privacy_risk_fair} empirically verify that fairness is promoted at the cost of privacy, and that more biased data results in higher membership inference attack risks when achieving group fairness; FAIRSP \cite{fairnessmeetprivacy} shows that stronger privacy protection without debiasing models leads to better fairness performance, while stronger privacy protection in debiasing models worsens fairness performance. Another research direction investigates fairness effects under privacy guarantees, e.g., differential privacy \cite{dp_original}, which typically exacerbates disparities among different demographic groups \cite{fairwithprivacyprotection,empiricalfairprivacy} without fairness interventions. While some existing studies \cite{dpfair, sayno_privacy_extension} propose unified frameworks to simultaneously enforce fairness and privacy, they do not probe into the detailed interactions.
1034
+ PPFR \cite{interaction_priv_fair} is the first work to empirically show that the privacy risks of link-stealing attacks can increase as the individual fairness of each node is enhanced. Moreover, it models such interplays via influence functions and $Cor$, and devises a post-processing retraining method to reinforce fairness while mitigating edge privacy leakage. To the best of our knowledge, no prior GNN research thoroughly addresses these interactions at the pre-processing stage.
1035
+
1055
+ \vspace{-4mm}
1056
+ \section{Conclusion and Future Work}
1057
+ In this paper, we take the first major step towards exploring the alignment between group fairness and attribute privacy in GNNs at the pre-processing stage. We empirically show that GNNs preserve and amplify biases and further worsen multiple sensitive information leakage through the lens of attribute inference attacks, which motivates us to propose a novel model-agnostic debiasing framework named MAPPING. Specifically, MAPPING leverages $dCov$-based fairness constraints and adversarial training to jointly debias features and topologies with limited inference risks of multiple sensitive attributes. The empirical experiments demonstrate the effectiveness and flexibility of MAPPING, which achieves better trade-offs between utility and fairness while maintaining sensitive attribute privacy. As we primarily adopt empirical analysis in this work, one future direction is to provide theoretical support for the interesting patterns observed under multiple sensitive attribute cases. Another direction is to develop a unified and model-agnostic fair framework under stronger privacy guarantees and to investigate the fairness effects of privacy-protection techniques on GNNs.
1058
+
1059
+
1060
+
1061
+
1062
+
1063
+
1064
+ \bibliographystyle{ACM-Reference-Format}
1065
+ \bibliography{references}
1066
+
1067
+ \appendix
1068
+ \section{Empirical Analysis}
1069
+
1070
+ \subsection{Data Synthesis}
1071
+ \label{synthesis}
1072
+
1073
+ Please note that $S_m$ can be highly related to $S_n$ or not at all. For $S_n$, we generate a $2500 \times 3$ biased non-sensitive feature matrix from the multivariate normal distributions $\mathcal{N}\left(\mu_0, \Sigma_0\right)$ and $\mathcal{N}\left(\mu_1, \Sigma_1\right)$, where subgroup $0$ represents the real-world minority, $\mu_0=\left(-10,-2,-5\right)^{T}$, $\mu_1=\left(10,2,5\right)^{T}$, $\Sigma_0=\Sigma_1$ are both identity matrices, and $|S_0|=500$ and $|S_1|=2000$. We combine the top $100$ samples from $\left(\mu_0,\Sigma_0\right)$ and the top $200$ from $\left(\mu_1,\Sigma_1\right)$, then the next $200$ from $\left(\mu_0,\Sigma_0\right)$ and $600$ from $\left(\mu_1,\Sigma_1\right)$, then the next $100$ from $\left(\mu_0,\Sigma_0\right)$ and $700$ from $\left(\mu_1,\Sigma_1\right)$, and finally the last $100$ from $\left(\mu_0,\Sigma_0\right)$ and $500$ from $\left(\mu_1,\Sigma_1\right)$, and generate $S_n$ based on this combination. We then create another $2500 \times 3$ non-sensitive matrix attached to $S_n$ from the multivariate normal distributions $\mathcal{N}\left(\mu_2, \Sigma_2\right)$ and $\mathcal{N}\left(\mu_3, \Sigma_3\right)$, where subgroup $2$ represents the real-world minority, $\mu_2=\left(-12,-8,-4\right)^{T}$, $\mu_3=\left(12,8,4\right)^{T}$, $\Sigma_2=\Sigma_3$ are both identity matrices, and $|S_2|=700$ and $|S_3|=1800$. We combine the top $300$ from $\left(\mu_2,\Sigma_2\right)$ and $1200$ from $\left(\mu_3,\Sigma_3\right)$, and then $400$ from $\left(\mu_2,\Sigma_2\right)$ and $600$ from $\left(\mu_3,\Sigma_3\right)$, and generate $S_m$ based on this combination. Debiased features are sampled from a multivariate normal distribution with $\mu_4=\left(0,1,0,1,0,1\right)$ and the identity matrix as covariance. For the topology, the biased graph is formed via the stochastic block model, where the first block contains $500$ nodes and the second contains $2000$ nodes, with link probability 5e-3 within blocks and 1e-7 between blocks. The debiased topology is built as a random geometric graph with radius $0.033$.
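+
+ The recipe above can be reproduced with standard tooling; the following abridged sketch uses NumPy and NetworkX (the library choices and seeds are ours) and covers the biased features, the biased SBM topology, and the unbiased geometric topology.
+
+ \begin{verbatim}
+ import numpy as np
+ import networkx as nx
+
+ rng = np.random.default_rng(0)
+
+ # Biased non-sensitive features: two Gaussian subgroups, unequal sizes.
+ x_minor = rng.multivariate_normal([-10, -2, -5], np.eye(3), size=500)
+ x_major = rng.multivariate_normal([10, 2, 5], np.eye(3), size=2000)
+
+ # Biased topology: SBM, dense within blocks, sparse across blocks.
+ g_biased = nx.stochastic_block_model(
+     [500, 2000], [[5e-3, 1e-7], [1e-7, 5e-3]], seed=0)
+
+ # Unbiased topology: random geometric graph, independent of groups.
+ g_unbiased = nx.random_geometric_graph(2500, radius=0.033, seed=0)
+ \end{verbatim}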
1074
+
1075
+ \begin{figure}[htbp]
1076
+ \vspace{-4mm}
1077
+ \setlength{\abovecaptionskip}{1mm}
1078
+ \setlength{\belowcaptionskip}{-4mm}
1079
+ \centering
1080
+ \subfloat[Biased non-sensitive features and graph topology (Minor)]
1081
+ {
1082
+ \includegraphics[width=0.245\textwidth]{tsne_biased_x_mul_main-2.png}
1083
+ \includegraphics[width=0.245\textwidth]{biased_topology_major.png}
1084
+ \label{Biased features and topology}}
1085
+ \hfill
1086
+ \vspace{-4mm}
1087
+ \subfloat[Unbiased non-sensitive features and graph topology (Minor)]
1088
+ {
1089
+ \includegraphics[width=0.245\textwidth]{tsne_unbiased_x_mul_main-2.png}
1090
+ \includegraphics[width=0.245\textwidth]{unbiased_topology_major.png}
1091
+ \label{Unbiased features and topology}}
1092
+ \caption{Biased and Unbiased Graph Data Synthesis}
1093
+ \label{data synthesis}
1094
+ \end{figure}
1095
+ \vspace{-2mm}
1096
+ \subsection{Case Analysis}
1097
+
1098
+ \subsection{Implementation Details}
1099
+ \label{prelim}
1100
+ We build $1$-layer GCNs with PyTorch Geometric \cite{torch_geometric} on top of PyTorch \cite{pytorch}, using the Adam optimizer \cite{adam} with learning rate 1e-3, dropout 0.2, weight decay 1e-5, and hidden layer size 16. All experiments are conducted on a 64-bit machine with 4 Nvidia A100 GPUs, trained for $1{,}000$ epochs, and repeated $10$ times with different seeds to report average results.
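+
+ For reproducibility, a minimal sketch of this backbone configuration follows; the input dimension is a dataset-dependent placeholder, while the remaining hyperparameters mirror the stated values.
+
+ \begin{verbatim}
+ import torch
+ import torch.nn.functional as F
+ from torch_geometric.nn import GCNConv
+
+ class GCN(torch.nn.Module):
+     def __init__(self, in_dim, hid_dim=16, out_dim=2, p_drop=0.2):
+         super().__init__()
+         self.conv = GCNConv(in_dim, hid_dim)  # single GCN layer
+         self.lin = torch.nn.Linear(hid_dim, out_dim)
+         self.p_drop = p_drop
+
+     def forward(self, x, edge_index, edge_weight=None):
+         h = F.relu(self.conv(x, edge_index, edge_weight))
+         h = F.dropout(h, p=self.p_drop, training=self.training)
+         return self.lin(h)
+
+ model = GCN(in_dim=27)  # placeholder input dimension
+ optimizer = torch.optim.Adam(model.parameters(),
+                              lr=1e-3, weight_decay=1e-5)
+ \end{verbatim}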
1101
+
1102
+
1103
+ \section{Pseudo Codes}
1104
+ \label{code}
1105
+ \subsection{Algorithm 1 - Pre-masking Strategies}
1106
+ \begin{algorithm}
1107
+ \caption{Pre-masking Strategies}\label{alg1}
1108
+ \begin{algorithmic}[1]
1109
+ \Require Original feature matrix $\mathcal{X}$ with Normal feature matrix $\mathcal{X}_N$ and Sensitive attributes $S$, Ground-truth label $\mathcal{Y}$, distributed ratio $r$, sensitive threshold $r_s$
1110
+ \Ensure Pre-masked feature matrix $\tilde{\mathcal{X}}$
1111
+ \State \hspace{0.5cm} Compute $\mathcal{R}_{n}^{2}(\mathcal{X}_i, S)$ and $\mathcal{R}_{n}^{2}(\mathcal{X}_i, \mathcal{Y})$ based on Equation (2) \textbf{for} $i\in[1,\dots,d]$;
1112
+ \State \hspace{0.5cm} Choose the top $x=\lfloor rd\rfloor$ features from descending $\mathcal{R}_{n}^{2}(\mathcal{X}, S)$ to obtain the top related feature set $Set_{top}$; Choose the top $x$ features from ascending $\mathcal{R}_{n}^{2}(\mathcal{X}, \mathcal{Y})$ to obtain the less related feature set $Set_{les}$;
1113
+ \State \hspace{0.5cm} Obtain the intersection set $Set_{int}=Set_{top}\cap Set_{les}$;
1114
+ \State \hspace{0.5cm} Filter out features whose $\mathcal{R}_{n}^{2}(\mathcal{X}_i, S)<r_s$; the remaining features form the extremely highly sensitive feature set $Set_{sen}$;
1115
+ \State \hspace{0.5cm} Obtain the union set $Set_{uni}=Set_{int}\cup Set_{sen}$;
1116
+ \State \hspace{0.5cm} Obtain the pre-masked feature matrix $\tilde{\mathcal{X}}=\mathcal{X}\backslash Set_{uni}$.
1117
+ \State \hspace{0.5cm}\textbf{return} $\tilde{\mathcal{X}}$
1118
+ \end{algorithmic}
1119
+ \label{AlgoMAPPING}
1120
+ \end{algorithm}
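+
+ A compact NumPy rendering of Algorithm \ref{alg1} is sketched below; \texttt{dcor} is the distance-correlation routine sketched earlier, the default ratios are placeholders, and we read the threshold step as selecting features whose sensitive correlation exceeds $r_s$.
+
+ \begin{verbatim}
+ import numpy as np
+
+ def pre_mask(X, s, y, dcor, r=0.3, r_s=0.5):
+     d = X.shape[1]
+     corr_s = np.array([dcor(X[:, i], s) for i in range(d)])
+     corr_y = np.array([dcor(X[:, i], y) for i in range(d)])
+     k = int(r * d)
+     set_top = set(np.argsort(-corr_s)[:k])    # most sensitive-related
+     set_les = set(np.argsort(corr_y)[:k])     # least label-related
+     set_sen = set(np.where(corr_s > r_s)[0])  # extremely sensitive
+     drop = (set_top & set_les) | set_sen
+     keep = [i for i in range(d) if i not in drop]
+     return X[:, keep], keep
+ \end{verbatim}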
1121
+
1122
+ \subsection{Algorithm 2 - MAPPING}
1123
+ \begin{algorithm}
1124
+ \caption{MAPPING}\label{alg2}
1125
+ \begin{algorithmic}[1]
1126
+ \Require Adjacency matrix $A$, Original feature matrix $\mathcal{X}$ with Normal feature matrix $\mathcal{X}_N$ and Sensitive attributes $S$, Ground-truth label $\mathcal{Y}$, MLP for feature debiasing $f_{mlp}$, GNN for topology debiasing $f_{gnn}$
1127
+ \Ensure Debiased adjacency matrix $\hat{A}$, Debiased feature matrix $\hat{X}$
1128
+ \State {\textsc{\textbf{Pre-masking}}$(\mathcal{X}):$}
1129
+ \State \hspace{0.5cm} Implement Algorithm 1;
1130
+ \State \hspace{0.5cm}\textbf{return} $\mathcal{\tilde{X}}$
1131
+ \State
1132
+ \State {\textsc{\textbf{Reweighting}}$(\mathcal{\tilde{X}}):$}
1133
+ \State \hspace{0.5cm} Initialize equal weights $W_{{f0}_{i}}=1$ \textbf{for} $\mathcal{\tilde{X}}_{i}, i\in[1,\dots, d_m]$;
1134
+ \State \hspace{0.5cm} Train $f_{mlp}(\mathcal{\tilde{X}})$; Update the feature weight matrix $\hat{W}_{f(i)} \gets \hat{W}_{f(i-1)}$ and the debiased feature matrix $\hat{X}(i) \gets \mathcal{\tilde{X}}\hat{W}_{f(i)}$ \textbf{for} $\hat{W}_{f(0)}=W_{f0}$ based on Equation (8) until convergence;
1135
+ \State \hspace{0.5cm}\textbf{return} $\hat{X}$
1136
+ \State
1137
+ \State {\textsc{\textbf{Fair MP}}$(A,\hat{X}):$}
1138
+ \State \hspace{0.5cm} Initialize equal weights $W_{{\mathcal{E}_{0}}_{ij}}=1$ \textbf{for} $A_{ij}, i,j\in[1,\dots, n]$;
1139
+ \State \hspace{0.5cm} Train $f_{gnn}(W_{\mathcal{E}_{0}},A,\hat{X})$; Update the edge weight matrix $\hat{W}_{\mathcal{E}(i)} \gets \hat{W}_{\mathcal{E}(i-1)}$ \textbf{for} $\hat{W}_{\mathcal{E}(0)}=W_{\mathcal{E}_{0}}$ based on Equation (11) until convergence;
1140
+ \State \hspace{0.5cm}\textbf{return} $\hat{W}_{\mathcal{E}}$
1141
+ \State
1142
+ \State {\textsc{\textbf{Post-pruning}}$(\hat{W}_{\mathcal{E}}):$}
1143
+ \State \hspace{0.5cm} Prune $\hat{W}_{\mathcal{E}}$ based on Equation (12); Obtain $\tilde{W}_{\mathcal{E}}=\hat{A}$;
1144
+ \State \hspace{0.5cm}\textbf{return} $\hat{A}$
1145
+ \end{algorithmic}
1146
1147
+ \end{algorithm}
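+
+ The final post-pruning step of Algorithm \ref{alg2} reduces to thresholding the learned edge weights; a one-function sketch in PyTorch (with $r_p$ as the pruning threshold and the exact rule of Equation (12) abstracted away) might look as follows.
+
+ \begin{verbatim}
+ import torch
+
+ def post_prune(edge_index, edge_weight, r_p=0.65):
+     # Keep only edges whose learned weight exceeds the threshold r_p.
+     mask = edge_weight > r_p
+     return edge_index[:, mask], edge_weight[mask]
+ \end{verbatim}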
1148
+
1149
+ \section{Experiments}
1150
+ \subsection{Dataset Description}
1151
+ In German, nodes represent bank clients and edges connect clients with similar credit accounts. To control credit risk, the bank needs to differentiate applicants with good/bad credit.
1152
+ In Recidivism, nodes denote defendants released on bail by U.S. state courts during 1990-2009, and edges link defendants with similar basic demographics and past criminal histories. The task is to predict whether a defendant will be bailed. In Credit, nodes are credit card applicants and edges are formed based on the similarity of applicants' spending and payment patterns. The goal is to predict whether an applicant will default.
1153
+ \label{data}
1154
+
1155
+ \subsection{Hyperparameter Setting of SOTA Baselines}
1156
+ In this subsection, we detail the hyperparameters of the different fair models. To obtain relatively better performance, we leverage Optuna to facilitate grid search, as sketched below.
1157
+ \label{setting}
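+
+ The Optuna-driven search follows the usual study/objective pattern; in this sketch, \texttt{train\_and\_eval} is a hypothetical helper that trains one configuration and returns the validation accuracy, and the grids mirror the values listed below.
+
+ \begin{verbatim}
+ import optuna
+
+ def objective(trial):
+     lr = trial.suggest_categorical(
+         "lr", [1e-3, 5e-3, 1e-2, 5e-2, 1e-1])
+     dropout = trial.suggest_categorical(
+         "dropout", [0.1, 0.2, 0.3, 0.4, 0.5])
+     hidden = trial.suggest_categorical("hidden", [16, 32, 64, 128])
+     # train_and_eval is a hypothetical training/validation helper.
+     return train_and_eval(lr=lr, dropout=dropout, hidden=hidden)
+
+ study = optuna.create_study(direction="maximize")
+ study.optimize(objective, n_trials=50)
+ \end{verbatim}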
1158
+
1159
+ \textbf{FairGNN}: dropout from $\{0.1,0.2,0.3,0.4,0.5\}$, weight decay 1e-5, learning rate $\{0.001,0.005,0.01,0.05,0.1\}$, regularization coefficients $\alpha=4$ and $\beta=0.01$, sensitive number $200$ and label number $500$, hidden layer size $\{16,32,64,128\}$, .
1160
+
1161
+ \textbf{NIFTY}: project hidden layer size 16, drop edge and feature rates are $0.001$ and $0.1$, dropout $\{0.1,0.3,0.5\}$, weight decay 1e-5, learning rate $\{0.0001,0.001,0.01\}$, regularization coefficient $\{0.4,0.5,0.6,0.7,\\0.8\}$, hidden layer size 16.
1162
+
1163
+ \textbf{EDITS}: we directly use the debiased datasets in \cite{dong2022edits}, dropout $\{0.05,0.1,0.3,0.5\}$, weight decay \{1e-4,1e-5,1e-6,1e-7\}, learning rate $\{0.001,0.005,0.01,0.05\}$, hidden layer size 16.
1164
+
1165
+ \subsection{Extension to Multiple Sensitive Attributes}
1166
+ Since the SOTA baselines cannot be trivially extended to multiple sensitive attribute cases, we only compare the performance of the vanilla models and MAPPING. After careful checking, we find that only `gender' and `age' can be treated as sensitive attributes; the other candidate, `foreigner', is too vague and cannot indicate exact nationality, so we solely use the above two sensitive attributes in our experiments. The hyperparameters are exactly the same as in the experimental section.
1167
+ \begin{table}[!htbp]
1168
+ \caption{\textbf{Node Classification of Multiple Sensitive Attributes}}
1169
+ \resizebox{\linewidth}{!}{
1170
+ \begin{tabular}{clccccccc}
1171
+ \hline
1172
+
1173
+ \textbf{GNN} & \textbf{Variants} & \textbf{ACC} & \textbf{F1} & \textbf{AUC} & \textbf{\bm{$\Delta SP_{minor}$}} & \textbf{\bm{$\Delta EO_{minor}$}} & \textbf{\bm{$\Delta SP_{major}$}} & \textbf{\bm{$\Delta EO_{major}$}} \\ \hline
1174
+ \multirow{2}{*}{\textbf{GCN}}
1175
+ & Vanilla
1176
+ & 65.68$\pm 8.7$ & 70.24$\pm 12.0$ & \textbf{74.39}$\pm 0.5$ & 28.60$\pm 5.2$ & 28.65$\pm 3.9$ &
1177
+ 36.19$\pm 5.0$ & 28.57$\pm 1.6$\\
1178
+
1179
+ & MAPPING
1180
+ & \textbf{69.52}$\pm 5.7$ & \textbf{78.82}$\pm 9.1$ & 73.23$\pm 0.9$ & \textbf{4.92}$\pm 5.1$ & \textbf{5.05}$\pm 7.0$ &
1181
+ \textbf{9.30}$\pm 5.0$ & \textbf{6.46}$\pm 4.3$\\ \hline
1182
+
1183
+ \multirow{2}{*}{\textbf{GraphSAGE}}
1184
+ & Vanilla
1185
+ & \textbf{71.64}$\pm 1.1$ & \textbf{81.46}$\pm 1.1$ & \textbf{71.07}$\pm 2.4$ & 15.88$\pm 9.0$ & 10.47$\pm 8.0$ &
1186
+ 19.47$\pm 9.1$ & 11.78$\pm 7.0$ \\
1187
+
1188
+ & MAPPING
1189
+ & 67.40$\pm 10.4$ & 74.28$\pm 19.2$ & 70.69$\pm 1.6$ & \textbf{7.07}$\pm 5.0$ & \textbf{6.35}$\pm 4.5$ &
1190
+ \textbf{11.67}$\pm 8.5$ & \textbf{6.42}$\pm 4.8$ \\ \hline
1191
+
1192
+ \multirow{2}{*}{\textbf{GIN}}
1193
+ & Vanilla
1194
+ & 70.04$\pm 0.1$ & 81.98$\pm 1.2$ & 68.37$\pm 4.5$ & 2.17$\pm 6.0$ & 2.13$\pm 6.2$ &
1195
+ 2.99$\pm 8.9$ & 2.14$\pm 6.2$ \\
1196
+
1197
+ & MAPPING
1198
+ & \textbf{70.56}$\pm 1.1$ & \textbf{82.24}$\pm 0.5$ & \textbf{72.17}$\pm 1.7$ & \textbf{0.55}$\pm 1.3$ & \textbf{1.07}$\pm 2.7$ &
1199
+ \textbf{1.90}$\pm 4.2$ & \textbf{1.12}$\pm 2.2$ \\ \hline
1200
+ \end{tabular}}
1201
+ \vspace{-3mm}
1202
+ \label{multi_eva}
1203
+ \end{table}
1204
+
1205
+ From Table \ref{multi_eva}, we observe that MAPPING achieves strong debiasing effects even when training with GIN, where the vanilla model already contains very small biases. MAPPING likewise achieves better utility when training with GCN and GIN. With or without debiasing, GraphSAGE is less consistent and stable, yet MAPPING still reduces its biases by up to almost $50\%$. Besides, the major sensitive attribute is always more biased than the minor one. Please note that we adopt exactly the same hyperparameters hierarchically and do not fine-tune, hence we cannot promise near-optimal results; nevertheless, this setting already yields sufficiently good trade-offs between utility and fairness.
1206
+
1207
+ \label{multi}
1208
+ \begin{figure}[H]
1209
+ \centering
1210
+ \subfloat[Attack Accuracy of Multiple Sensitive Attributes]
1211
+ {
1212
+ \includegraphics[width=0.245\textwidth]{acc_g_re_minor-2.png}
1213
+ \includegraphics[width=0.245\textwidth]{acc_g_re_major-2.png}
1214
+ \label{attmul}}
1215
+ \hfill
1216
+ \subfloat[Sensitive Correlation of Multiple Sensitive Attributes]
1217
+ {
1218
+ \includegraphics[width=0.245\textwidth]{sensitive_dcor_g_re_minor-2.png} \includegraphics[width=0.245\textwidth]{sensitive_dcor_g_re_major-2.png}
1219
+ \label{attdcor}}
1220
+ \caption{Attribute Inference Attacks Under Multiple Sensitive Attributes}
1221
+ \label{multiple}
1222
+ \end{figure}
1223
+
1224
+
1225
+ From Figure \ref{multiple}, we observe patterns similar to the previous experiments, e.g., the performances of the BFBT and BFDT pairs are close, as are those of the DFBT and DFDT pairs; the unstable performance in the fewer-label cases may be the main reason for the turning points. A more interesting phenomenon is that the performances of all pairs for the minor sensitive attribute are quite close to each other, and their attack accuracy and sensitive correlation increase rapidly once more sensitive labels are collected, whereas the performances of all pairs for the major attribute increase dramatically in the fewer-label regime and then appear to stabilize. The major attribute's sensitive correlation is higher than the minor's, but its attack accuracy is much lower. We leave a deeper investigation of this inverse behavior to future work.
1226
+
1227
+ \subsection{Parameter Studies}
1228
+ The parameter studies have been discussed in the experimental section. Please note that different colors represent different coefficients, i.e., the green line denotes $\lambda_2$ while the red and blue lines denote $\lambda_3$ and $\lambda_4$, respectively. Different line types represent different GNN variants, i.e., the solid line denotes GCN, while the dashed line (larger intervals) and the dotted line (smaller intervals) denote GraphSAGE and GIN, respectively.
1229
+ \label{paras}
1230
+ \begin{figure}[H]
1231
+ \centering
1232
+ \subfloat[Utility Performance of Node Classification]
1233
+ {
1234
+ \includegraphics[width=0.245\textwidth]{accuracy_impact_study_re_new-2.png}
1235
+ \includegraphics[width=0.245\textwidth]{auroc_impact_study_re_new-2.png}
1236
+ \label{subfig:impact_utility}}
1237
+ \hfill
1238
+ \subfloat[Fairness Performance of Node Classification]
1239
+ {
1240
+ \includegraphics[width=0.245\textwidth]{sp_impact_study_re_new-2.png} \includegraphics[width=0.245\textwidth]{eo_impact_study_re_new-2.png}
1241
+ \label{subfig:impact_fair}}
1242
+ \caption{Coefficient Impact of Utility and Fairness}
1243
+ \label{impact_mapping}
1244
+ \end{figure}
1245
+
1246
+
1247
+
1248
+ \end{document}