\section{Introduction}
\label{sec:intro}


Federated learning has emerged as a privacy-aware learning paradigm 
that allows multiple participants (clients) to collaboratively train 
a global model without sharing their private data~\cite{mcmahan2017communication, tan2022fedproto}.  
In FL, each client follows the standard machine learning training procedure 
to train a local model on its own data 
and periodically shares its model parameters with a central server for aggregation.  
However, recent studies have shown that, like conventional machine learning, 
FL is also vulnerable to well-crafted adversarial examples~\cite{zizzo2020fat,lyu2022privacy,zhou2020adversarially}.   
In cross-silo FL, such vulnerability is especially consequential and 
may cause heavy losses~\cite{yang2019federated}. For example, in medical cross-silo FL, 
a vulnerable global model may lead to incorrect diagnoses, 
potentially even costing human lives. 
It is thus imperative to develop a robust FL method that 
can train adversarially robust models resistant 
to different types of adversarial attacks.  

Adversarial training (AT) is one of the most effective strategies 
for enhancing model robustness against adversarial attacks~\cite{madry2017towards}.  
As a classical and widely used defense strategy, AT is typically formulated as a min-max optimization problem, 
with the inner maximization generating adversarial examples~\cite{madry2017towards}.
Some recent studies~\cite{zizzo2020fat,kairouz2021advances,chen2022gear} seek to integrate standard AT into 
federated learning (so-called FAT) to boost adversarial robustness of federated models.
However, most of the existing FAT works~\cite{zizzo2020fat,kairouz2021advances,chen2022gear} suffer from 
either low natural accuracy or low robust accuracy, especially 
in non-independent and identically distributed (non-IID) data settings.  
Fig.~\ref{fig1} shows the test accuracy of adversarial training on the CIFAR10 dataset under both IID and
non-IID FL settings. We summarize three observations from the figure:
(1) The performance of AT-trained models under the non-IID data distribution decreases significantly compared with the IID distribution.
(2) Under the non-IID data distribution, the test accuracy fluctuates much more sharply than under the IID distribution.
(3) The robust accuracy of the model is low under both the non-IID and IID data distributions.  
One reason for these phenomena may be that simply transplanting standard adversarial training into federated learning (even when the data distribution is IID) causes performance degradation.
Another reason is that the generated adversarial data can exacerbate the data heterogeneity among local clients, making adversarial training perform worse on minority classes~\cite{Wang_Xu_Liu_Li_Thuraisingham_Tang_2022} 
and thereby degrading the global model.

To address these problems, we start from the origin of adversarial examples, that is, the root cause of a neural network classifying such examples correctly or incorrectly. 
To understand this origin, seminal studies in 
centralized learning have extensively investigated adversarial vulnerability 
from multiple perspectives, such as excessive linearity of the hyperplane~\cite{goodfellow2014explaining} and
aberrations of statistical fluctuations~\cite{shafahi2018adversarial}.
Recently, Kim et al.~\cite{kim2023demystifying} investigated the adversarial 
robustness of adversarially trained neural networks through the lens of causal inference.
They argue that robust and non-robust features are widespread 
in adversarially trained neural networks, 
and that the non-robust features of adversarial examples 
may cause unexpected misclassification.   
Based on this idea, they proposed causal instrumental variable regression, which achieved great success in centralized adversarial training and demonstrated the effectiveness of
causal inference for adversarial training. 
Meanwhile, studies of federated adversarial learning have mainly focused on overcoming the model heterogeneity caused by data heterogeneity~\cite{chen2022gear,chen2022calfat,zhu2023combating,zhang2023delving}, 
approaching it from the perspectives of decision boundaries, 
data preprocessing, slack optimization, and so on.

\begin{figure}[t]
  \centering
  \includegraphics[width=0.9\linewidth,height=2in]{output/compare.png}
   \caption{Test accuracy of adversarially trained models under IID and non-IID data; the non-IID data distribution further hurts performance.
   Under the non-IID distribution, the transparent lines show the raw test results, while the opaque lines show a sliding average that smooths out the fluctuations.
   }
   \label{fig1}
\end{figure}



In this paper, we study the problem of FAT on non-IID data distributions, with a particular focus on the challenging skewed-label-distribution setting. 
Motivated by Kim et al.~\cite{kim2023demystifying}, we seek a more appropriate adversarial training paradigm for federated learning based on causal inference. 
However, directly using the causal adversarial training proposed by Kim et al.~\cite{kim2023demystifying} is not sufficient to solve the challenges in FL.
This is because, in non-IID federated learning, the label distributions of individual clients are diverse, 
leading to heterogeneity issues for the causal feature extractors under such data distribution settings.  
We also measured the differences between the models of different clients and found that directly deploying causal adversarial training deepens these differences.
This motivates us to propose a novel method called Federated Causal Adversarial Training (FCAT) for effective causal adversarial training on non-IID data.
In this method, client data distribution information is incorporated into local causal adversarial training so that the causal model can adapt to non-IID data distribution settings. 
At the same time, we propose an effective aggregation mechanism that alleviates the heterogeneity of the client models and reduces the additional communication overhead.
It is worth noting that FCAT is model-agnostic and can be applied to any AT-based model in FL under different degrees of data heterogeneity.


Experimental results indicate that our approach effectively alleviates the heterogeneity issue of the causal model and improves accuracy.  
Moreover, the proposed causal adversarial training method can be widely applied to various federated adversarial training frameworks, 
and the experiments demonstrate significant performance improvements.
As shown in Tab.~\ref{experimental results}, we compare FCAT with current competitive approaches under non-IID data settings; our method achieves significantly superior performance on both clean and adversarial examples.
Our code is available in the supplementary material.  

In summary, our main contributions are:  
\begin{itemize}
  \item We introduce causal adversarial training into federated learning to extract causal features from samples, 
  and incorporate these features into defense networks for collaborative training 
  to improve the adversarial robustness of client models in non-IID data settings. 
  To the best of our knowledge, this is the first time causal inference has been introduced into federated adversarial training.  
  \item We introduce a dedicated aggregation algorithm for causal adversarial training, enabling the application of 
  causal adversarial training in federated learning with little additional communication overhead while alleviating the heterogeneity of the client models.  
  \item Extensive experiments verify the effectiveness of our proposed method.
  
\end{itemize}  


\section{Related Work and Preliminaries}  

\subsection{Federated Learning} The most well-known work in federated learning is FedAvg~\cite{mcmahan2017communication},
a seminal algorithm that has proven effective at maintaining data privacy during distributed training.
In the following years, with the success of federated learning in various tasks~\cite{liang2021fedrec++,liu2021fedct},
the paradigm has attracted increasing attention.
In 2021, Kairouz et al.~\cite{kairouz2021advances} pointed out that existing FL systems are vulnerable to adversarial attacks. 
In FL, the architectural design, distributed nature, 
and data constraints bring new challenges in defending against these attacks.  

\subsection{Federated Adversarial Training} 

Several works have explored integrating adversarial 
training (AT) into federated learning to enhance the adversarial robustness of federated models.
Zizzo et al.~\cite{zizzo2020fat} made the first attempt to integrate standard AT into FL settings under different degrees of data heterogeneity, revealing that AT fails to achieve the remarkable improvements observed in centralized training,
especially on non-IID data. 

Several works have been proposed to address this problem.  
Zhang et al.~\cite{zhang2023delving} propose a decision-boundary-based algorithm 
that combines a local re-weighting strategy with global regularization to 
improve the accuracy and robustness of federated adversarial training.  
Chen et al.~\cite{chen2022calfat} study the problem of skewed labels in federated learning and propose the CalFAT framework to calibrate the logits of each class. However, none of these methods considers exactly how the neural network correctly classifies adversarial examples.
In this paper, we start by looking for the features of adversarial examples that cause the neural network to classify them correctly, 
which we call causal features. By regularizing the neural network model loss with this feature recognition mechanism, 
we can improve the robustness of client models.

In typical federated learning, training data are distributed across $K$ clients, 
and a central server manages model aggregation and communication with the clients.  
Let $D_k$ denote a finite set of samples from the $k$-th client, $x$ denote an original
image, $x^{adv}$ denote the corresponding adversarial example, and $\delta$ denote the perturbation
added to the original image, so that $x^{adv} = x + \delta$. In federated adversarial learning, to generate
powerful adversarial examples, we attempt to maximize the loss $L(w, x+\delta, y)$, where $L$ is the loss
function for the local update. Combined with adversarial training, the local objective becomes solving
the following min-max optimization problem:
\begin{equation}
  F_k(w) = \min_{w} \mathbb{E}_{(x,y) \sim D_k} \left[\max_{x^{adv} \in B(x,\delta)} L(w,x^{adv},y)\right]
  \label{local adversarial training}
\end{equation}
The inner maximization problem aims to find effective adversarial examples that achieve a high loss, 
while the outer optimization updates local models to minimize training loss.
After adversarial training in each client, a central server would manage model aggregations
and communications with clients. In general, federated adversarial learning attempts to minimize
the following optimization:
\begin{equation}
  \begin{aligned}
    \min_w f(w) &= \sum_{k=1}^K \frac{n_k}{n}F_k(w) \\
        &= \sum_{k=1}^K \frac{n_k}{n}\min_w \mathbb{E}_{(x,y) \sim D_k} \left[\max_{x^{adv} \in B(x,\delta)} L(w,x^{adv},y)\right]
  \end{aligned}
  \label{FAT}
\end{equation} 
where $n_k = |D_k|$ is the number of samples on the $k$-th client and $n = \sum_{k=1}^K n_k$.
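To make this min-max procedure concrete, the following toy sketch (illustrative NumPy code, not the paper's implementation; the logistic client model, the PGD hyperparameters, and the two-client split are all assumptions) runs the inner PGD maximization of Eq.~\ref{local adversarial training} inside each client's local update and then aggregates the local models with the $n_k/n$ weights of Eq.~\ref{FAT}:

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_and_grads(w, X, y):
    """Logistic loss; returns loss, gradient wrt w, and gradient wrt inputs X."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    g_w = X.T @ (p - y) / len(y)
    g_x = np.outer(p - y, w) / len(y)
    return loss, g_w, g_x

def pgd_attack(w, X, y, eps=0.1, alpha=0.03, steps=5):
    """Inner maximization: search for adversarial examples in the ball B(x, eps)."""
    X_adv = X.copy()
    for _ in range(steps):
        _, _, g_x = loss_and_grads(w, X_adv, y)
        X_adv = X_adv + alpha * np.sign(g_x)      # ascend the loss
        X_adv = np.clip(X_adv, X - eps, X + eps)  # project back into B(x, eps)
    return X_adv

def local_adv_train(w, X, y, lr=0.5, epochs=20):
    """Outer minimization: train on freshly generated adversarial examples."""
    w = w.copy()
    for _ in range(epochs):
        X_adv = pgd_attack(w, X, y)
        _, g_w, _ = loss_and_grads(w, X_adv, y)
        w = w - lr * g_w
    return w

# Two toy clients with a mildly skewed label split.
Xp = rng.normal(size=(60, 2)) + 1.5   # class-1 cluster
Xn = rng.normal(size=(40, 2)) - 1.5   # class-0 cluster
clients = [
    (np.vstack([Xp[:50], Xn[:10]]), np.r_[np.ones(50), np.zeros(10)]),
    (np.vstack([Xp[50:], Xn[10:]]), np.r_[np.ones(10), np.zeros(30)]),
]

n_total = sum(len(y) for _, y in clients)
w_global = np.zeros(2)
for _ in range(5):  # communication rounds
    local_ws = [local_adv_train(w_global, X, y) for X, y in clients]
    # FedAvg-style aggregation with weights n_k / n.
    w_global = sum(len(y) / n_total * w_k
                   for w_k, (_, y) in zip(local_ws, clients))
```

Real FAT systems replace the logistic model with a deep network and use stronger attacks and more clients, but the control flow (attack, local update, weighted aggregation) is the same.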

\subsection{Causal Inference}
The application of causal inference to economics has been a great success~\cite{runde1998assessing,bloom2001cumulative,sebri2014causal,omri2015modeling}. 
As a statistical relationship stronger than correlation, 
causal inference has encountered many challenges in deep learning 
practice, because deep learning involves more complex confounding.  
To address this, 
instrumental variable (IV) regression provides a way of identifying the causal relation between 
the treatment and the outcome of interest 
despite the existence of unknown confounders~\cite{reiersol1945confluence}. 
However, earlier IV regression methods cannot handle non-parametric hypothesis models 
such as neural networks. 
More recently, the generalized method of moments (GMM)~\cite{bennett2019deep,dikkala2020minimax} 
has offered a solution for non-parametric hypothesis models 
with high-dimensional treatments through 
a zero-sum optimization, 
thereby achieving nonparametric IV regression. 
GMM assumes that data are generated by $Y=f_0(X) + \epsilon$, where the residual $\epsilon$ has zero mean and finite variance. Different
from standard supervised learning, GMM allows the residual $\epsilon$ and $X$ to be correlated, and assumes an instrument
$Z$ satisfying $E[\epsilon|Z]=0$ and $P(X|Z) \ne P(X)$. GMM's goal is to identify the causal response function $f_0(\cdot)$ from a parametrized
family of functions $F=\{f(\cdot;\theta): \theta \in \Theta \}$ by leveraging moment conditions: given functions $g_1,\dots,g_m$, $E[\epsilon|Z]=0$
implies $E[g_j(Z)\epsilon]=0$, so that:  
\begin{equation}
  \begin{aligned}
  &\phi(g_1;\theta_0) = \dots = \phi(g_m;\theta_0) = 0, \\ &\text{where} \quad \phi(g;\theta) = E[g(Z)(Y-f(X;\theta))]
  \end{aligned}
  \label{GMM}
\end{equation}  
A usual assumption when using GMM is that the $m$ moment conditions in Eq.~\ref{GMM} are sufficient to uniquely pin down $\theta_0$.  
To estimate $\theta_0$, GMM considers the empirical counterparts of these moments, $\phi_n(g;\theta) = \frac{1}{n} \sum_{i=1}^n g(Z_i) (Y_i-f(X_i;\theta))$,
and seeks to make all of them small simultaneously, as measured by their Euclidean norm $\|v\|^2 = v^Tv$:  
\begin{equation}
  \hat{\theta}^{GMM} \in \arg\min_{\theta \in \Theta} \left\| \left(\phi_n(g_1;\theta) , \dots , \phi_n(g_m;\theta)\right) \right\|^2
  \label{EuTheta}
\end{equation}  
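To make the moment conditions concrete, the following toy sketch (illustrative NumPy code, not from the paper; the linear response $f(X;\theta)=\theta X$ and the moment functions $g_1(Z)=1$, $g_2(Z)=Z$ are assumptions) shows how minimizing the Euclidean norm of the empirical moments in Eq.~\ref{EuTheta} recovers the causal parameter even though ordinary least squares is biased by the confounded residual:

```python
import numpy as np

rng = np.random.default_rng(1)
n, theta0 = 20_000, 2.0

# Confounded data: eps drives both X and Y, so E[eps | X] != 0,
# while the instrument Z is independent of eps yet shifts X (relevance).
Z = rng.normal(size=n)
eps = rng.normal(size=n)
X = Z + eps                  # treatment, correlated with the residual
Y = theta0 * X + eps         # outcome; causal response f0(X) = theta0 * X

# Ordinary least squares is biased upward, since Cov(X, eps) > 0.
theta_ols = (X @ Y) / (X @ X)

# GMM with moment functions g_1(Z) = 1 and g_2(Z) = Z:
# phi_n(g_j; theta) = mean(g_j(Z) * (Y - theta * X)) = b_j - theta * a_j.
g = np.stack([np.ones(n), Z])     # shape (m, n)
a = g @ X / n
b = g @ Y / n
# argmin_theta ||b - theta * a||^2 has a closed form because f is linear.
theta_gmm = (a @ b) / (a @ a)
```

Here `theta_ols` lands near $2.5$ while `theta_gmm` recovers $\theta_0 = 2$ up to sampling noise; with a neural $f$ there is no closed form, which is what motivates the zero-sum formulation discussed later in this section.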
However, when there are many moments, any unweighted vector norm can be significantly inefficient, 
since modeling resources may be wasted on making less relevant or duplicate moment conditions small. 
The optimal combination of moment conditions, yielding minimum-variance estimates, weights 
them by their inverse covariance~\cite{hansen1982large}, and it is sufficient to estimate this covariance consistently:  
\begin{equation}
  \begin{aligned}
    &\|v\|^2_{\tilde{\theta}} = v^T C_{\tilde{\theta}}^{-1} v, \\ 
    &\text{where} \quad [C_{\theta}]_{jk} = \frac{1}{n} \sum_{i=1}^n g_j(Z_i)g_k(Z_i)\left(Y_i-f(X_i;\theta)\right)^2
    \label{OWGMM}
  \end{aligned}
\end{equation}
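As a sketch of how this weighting is computed in practice (illustrative NumPy code, not the paper's code; the linear model $f(X;\theta)=\theta X$, the moments $g_1(Z)=1$, $g_2(Z)=Z$, and the toy data are assumptions), one takes a preliminary estimate $\tilde{\theta}$, forms $C_{\tilde{\theta}}$ from the squared residuals as in Eq.~\ref{OWGMM}, and then minimizes the weighted norm:

```python
import numpy as np

rng = np.random.default_rng(2)
n, theta0 = 20_000, 2.0
Z = rng.normal(size=n)
eps = rng.normal(size=n)
X = Z + eps                       # confounded treatment
Y = theta0 * X + eps              # outcome with causal slope theta0

g = np.stack([np.ones(n), Z])     # moment functions g_1(Z)=1, g_2(Z)=Z
a, b = g @ X / n, g @ Y / n       # phi_n(g_j; theta) = b_j - theta * a_j

# Step 1: a preliminary (unweighted) estimate theta_tilde.
theta_tilde = (a @ b) / (a @ a)

# Step 2: estimate the moment covariance [C]_jk at theta_tilde.
resid2 = (Y - theta_tilde * X) ** 2
C = (g * resid2) @ g.T / n

# Step 3: minimize ||b - theta * a||^2 in the C^{-1}-weighted norm; for a
# linear f this is a one-dimensional problem with a closed-form solution.
Cinv = np.linalg.inv(C)
theta_w = (a @ Cinv @ b) / (a @ Cinv @ a)
```

In this near-homoskedastic toy setting the weighting changes little, but with many heterogeneous moments it is what keeps the estimator efficient.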
To introduce GMM to complex networks, let $G=\{g(z;t):t \in T\}$ be the class of all neural networks of a given architecture with
varying weights $t$. The GMM estimator with neural networks is then defined as: 
\begin{equation}
  \begin{aligned}
    &\hat{\theta} \in \arg\min_{\theta \in \Theta} \sup_{t \in T} U_{\tilde{\theta}}(\theta,t), \\ 
    & \text{where} \quad U_{\tilde{\theta}}(\theta,t) = \frac{1}{n}\sum_{i=1}^n g(Z_i;t)\left(Y_i-f(X_i;\theta)\right) \\ 
    & \qquad + \text{Regular\_Term}
    \label{DeepGMM}
  \end{aligned}
\end{equation}  
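A minimal sketch of this zero-sum estimator (illustrative NumPy code, not the paper's implementation): we use toy IV data $Y=\theta_0 X+\epsilon$ with instrument $Z$, parametrize the critic $g(z;t)=t_0+t_1 z$ as the simplest possible ``network,'' take an $\ell_2$ penalty $-\lambda\|t\|^2$ as the regular term, and run simultaneous gradient descent on $\theta$ and ascent on $t$:

```python
import numpy as np

rng = np.random.default_rng(3)
n, theta0 = 20_000, 2.0
Z = rng.normal(size=n)
eps = rng.normal(size=n)
X = Z + eps                        # confounded treatment
Y = theta0 * X + eps               # outcome with causal slope theta0

lam, lr = 0.5, 0.1                 # critic penalty and shared step size
theta, t = 0.0, np.zeros(2)        # learner theta, critic weights t

for _ in range(3000):
    resid = Y - theta * X
    g = t[0] + t[1] * Z            # critic g(Z; t): the simplest "network"
    # U(theta, t) = mean(g * resid) - lam * ||t||^2  (penalty as regular term)
    grad_theta = -np.mean(g * X)   # descend U in theta
    grad_t = np.array([np.mean(resid), np.mean(Z * resid)]) - 2.0 * lam * t
    theta = theta - lr * grad_theta
    t = t + lr * grad_t            # ascend U in t
```

The strongly concave penalty on the critic keeps this descent-ascent loop from cycling; with a neural $f$ and $g$, the same alternating updates are carried out with stochastic gradients.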


