\section{Methodology}   
\subsection{Label Distribution}
In a classic federated learning setup~\cite{zhao2018federated,luo2019real}, 
the datasets on different clients usually differ.
In realistic settings these datasets are typically non-iid; 
iid data corresponds to a more idealized environment. Following previous works~\cite{hsieh2020non,chen2022calfat},
we assume that the label distribution across the clients is skewed while the class conditionals are identical.  
Under this assumption, the class probabilities $\{p_i(y|x) \mid i \in [K]\}$ are non-identical. The proof is given in the Appendix.
It means that for all $i \ne u$ and $i,u \in [K]$:  
\begin{itemize}
    \item $\exists y \in [C]$ such that $p_i(y) \ne p_u(y)$.  
    \item $p_i(x|y) = p_u(x|y)$ for all $x,y$.
\end{itemize}  
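As a toy illustration of this assumption, one can check numerically that identical class-conditionals $p(x|y)$ combined with skewed priors $p_i(y)$ already force the posteriors $p_i(y|x)$ apart. The numbers below are made up for illustration and not tied to any dataset:

```python
# Toy check: with identical class-conditionals p(x|y) but skewed
# priors p_i(y), the posteriors p_i(y|x) differ across clients.

def posterior(prior, likelihood_x):
    """Bayes rule: p(y|x) proportional to p(x|y) * p(y) for one observed x."""
    joint = [p * l for p, l in zip(prior, likelihood_x)]
    z = sum(joint)
    return [j / z for j in joint]

# Shared class-conditional likelihoods p(x|y) for a fixed x, two classes.
p_x_given_y = [0.6, 0.3]

# Two clients with skewed label priors.
prior_client1 = [0.9, 0.1]   # client 1: class 0 dominates
prior_client2 = [0.2, 0.8]   # client 2: class 1 dominates

post1 = posterior(prior_client1, p_x_given_y)
post2 = posterior(prior_client2, p_x_given_y)
print(post1[0], post2[0])  # posteriors for class 0 differ across clients
```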
As shown in Fig.~\ref{fig1}, simply introducing instrumental variable regression~\cite{kim2023demystifying,bennett2019deep}
into federated adversarial training under the above assumptions cannot effectively improve the robustness of the global model.
This is mainly because instrumental variable regression is built on the local client datasets,
which results in highly heterogeneous causal models across clients during training. 
We use the sample variance of the ground-truth parameters to measure this heterogeneity:  
\begin{equation}
(s^*)^2 = V(\theta_1^*,\dots,\theta_K^*) = \frac{1}{K-1}\sum_{i=1}^K \Big\|\theta_i^* - \frac{1}{K} \sum_{j=1}^K \theta_j^*\Big\|^2
\end{equation}
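The heterogeneity measure can be sketched in a few lines. This is a hypothetical example with two-dimensional parameter vectors; `sample_variance` is our own illustrative helper, not part of any library:

```python
# Sketch of the heterogeneity measure (s*)^2: the sample variance of
# the clients' ground-truth parameter vectors.

def sample_variance(thetas):
    """(s*)^2 = 1/(K-1) * sum_i ||theta_i - mean(theta)||^2"""
    K = len(thetas)
    d = len(thetas[0])
    mean = [sum(t[j] for t in thetas) / K for j in range(d)]
    return sum(
        sum((t[j] - mean[j]) ** 2 for j in range(d)) for t in thetas
    ) / (K - 1)

# Identical parameters -> zero heterogeneity; diverging ones -> positive.
same = sample_variance([[1.0, 2.0], [1.0, 2.0]])
skewed = sample_variance([[1.0, 0.0], [0.0, 1.0]])
print(same, skewed)  # larger value = higher model heterogeneity
```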
Larger sample variance implies higher model heterogeneity. The following proposition suggests that the heterogeneity of local models originates 
from the heterogeneity of the local class probabilities.  

\textbf{Proposition 1} Assume that the label distribution across the clients is skewed but the class conditionals are identical.
Let $\theta_i$ be the maximum likelihood estimate of the causal model on client $i$. Then we have: 
\begin{equation}
  s^2 = V(\theta_1,\dots,\theta_K) \xrightarrow{\text{convergence}} (s^*)^2 \ne 0 
  \label{prop1} 
\end{equation}

Proposition 1 shows that when the label distribution among clients is skewed, 
the causal models obtained by directly applying centralized GMM learning approaches are heterogeneous.  

\subsection{Federated Adversarial GMM}
Since the causal models are heterogeneous, using them to enhance the client models 
will further deepen the heterogeneity of the client models, resulting in serious degradation.
Previous work~\cite{glymour2003learning,gopnik2004theory,lattimore2019causal} has indicated how Bayesian inference can be used in causal inference; in particular, \cite{lattimore2019causal} shows how to transform causal models into probabilistic models and then conduct Bayesian inference.
Similarly, we define the causal model $y_{cau} = f(x;\theta)$ and revisit the Bayes formula:
\begin{equation}
    p_i(y_{cau}|x) = \frac{p_i(x|y_{cau})p_i(y_{cau})}{\sum_{l=1}^C p_i(x|l)p_i(l)} 
    \label{bayes}
\end{equation}  

According to our previous assumption, 
the class conditionals $\{p_i(x|y) \mid i \in [K] \}$ in the above Bayes formula are identical across clients, and the class priors can easily be
computed from the relative frequencies. If the causal feature has a strong positive causal relationship with the real label $y$ 
(in a sense, this is our training goal), then we can approximate $p(y_{cau}|x) = p(y|x)$~\cite{pearl2000models}.
This means that we can recompute the causal relationship $p_i(y_{cau}|x)$ from arbitrary relative probabilities $p_i(x|l)$.  
Based on this conclusion, we introduce label skewness information into instrumental variable regression 
by setting a relative probability function related to the skewness:  
\begin{equation}
    f_i(y_{cau}|x) = \hat{q}_i(y_{cau}|x;\theta^*) = \frac{\hat{q}(x|y_{cau};\theta^*)\pi_i^y}{\sum_{l=1}^C\hat{q}(x|l;\theta^*)\pi_i^l}
    \label{calfat}
\end{equation}
where  
\begin{equation*}
    \pi_i^y = \frac{n_i^y}{n_i} + \Delta, \quad y \in [C]
\end{equation*}  
In the above formula, $\pi_i^y$ is an approximation of the class prior $p_i(y)$, $n_i^y$ is the number of samples of class $y$ on client $i$, $n_i$ is the total sample size on client $i$, and $\Delta > 0$ is a constant added for numerical stability. We introduce this reparameterization into the causal models to eliminate the heterogeneity between them.
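A minimal sketch of this reparameterization, assuming toy class counts and a shared class-conditional; `calibrated_prior` and `calibrated_posterior` are illustrative names, and the likelihoods stand in for the shared $\hat{q}(x|y;\theta^*)$:

```python
# Calibrated prior pi_i^y = n_i^y / n_i + Delta, plugged into the
# reparameterized posterior of Eq. (calfat).
DELTA = 1e-3  # small constant for numerical stability

def calibrated_prior(class_counts):
    n_i = sum(class_counts)
    return [n / n_i + DELTA for n in class_counts]

def calibrated_posterior(q_x_given_y, class_counts):
    pi = calibrated_prior(class_counts)
    joint = [q * p for q, p in zip(q_x_given_y, pi)]
    z = sum(joint)
    return [j / z for j in joint]

# Client with heavy skew toward class 0: 90 vs. 10 samples.
post = calibrated_posterior([0.5, 0.5], [90, 10])
print(post)  # class 0 is weighted up by its calibrated prior
```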
    
Next, to perform instrumental variable regression, we first need to set the instrument variable:   
$$Z= F_l(X^{adv}) - F_l(X) = F_{adv} - F_{natural}$$
Here $F_l$ outputs the feature map of the $l^{th}$ intermediate layer of the adversarially trained network. Similar to~\cite{kim2023demystifying}, we set the
treatment to $T = F_{adv}$ and the counterfactual treatment, with a test function $g$, to $T_{CF} = F_{natural} + g(Z)$.  
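Under these definitions, the instrument and counterfactual treatment can be sketched as follows. The feature maps are stand-in vectors and the test function $g$ is a hypothetical linear map, not the learned one:

```python
def make_instrument(f_adv, f_nat):
    """Z = F_l(X_adv) - F_l(X): the feature shift induced by the attack."""
    return [a - n for a, n in zip(f_adv, f_nat)]

def counterfactual_treatment(f_nat, z, g):
    """T_CF = F_natural + g(Z)."""
    gz = g(z)
    return [n + v for n, v in zip(f_nat, gz)]

f_natural = [0.2, 0.5, 0.1]   # stand-in natural features F_l(X)
f_adv = [0.4, 0.3, 0.2]       # stand-in adversarial features F_l(X_adv)

Z = make_instrument(f_adv, f_natural)
# Hypothetical test function: scale the feature shift by 0.5.
T_cf = counterfactual_treatment(f_natural, Z, lambda z: [0.5 * v for v in z])
print(Z, T_cf)
```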
Next, we use GMM~\cite{bennett2019deep,kim2023demystifying} to solve the instrumental variable regression. 
Referring to Eq.~\ref{GMM}, suppose that the causal network we train is $f$ and the causal feature we want to find is $Y_{cau}$; we then get:  
\begin{equation}
    \phi(g; \theta) = E[g(Z)(Y_{cau}-f(T;\theta))] = 0
    \label{GMM-1}
\end{equation}  


A natural question then arises: how do we compute $Y_{cau}-f(T;\theta)$? To solve this, 
we move the computation of the GMM from the feature space to the log-likelihood space of the model prediction by using the log-softmax,
the label skew information, and the one-hot vector-valued target label $G$:  
\begin{equation}
    Y_{cau} - f(T;\theta) = \big(G + \log(\pi_i)\big) - f(T;\theta) 
    \label{yy}
\end{equation}

Then the objective of our local causal training with calibrated labels is:  
\begin{equation}
    \begin{aligned}
        \hat{\theta} &\in \mathop{\arg\min}_{\theta \in \Theta} \sup_{t\in T} U(\theta,t) \\
        U(\theta,t) &= E[g(Z,t)(Y_{cau}-f(T;\theta))]
    \end{aligned}
\end{equation}
Next, we explain how to efficiently implant the causal models into adversarial networks.  
Like previous works, we use an inversion of the causal features (i.e., causal inversion) 
to reflect those features in the input domain. 
We use a causal regularizer to keep adversarial features 
from stretching beyond the feasible bound of the causal features, 
thereby helping the network eliminate backdoor-path features 
that stem from unknown confounders. 
It exploits well-represented causal features within the allowable feature bounds, 
with respect to the network parameters of the preceding sub-network $F_l$, 
for the given adversarial examples. 
Since the causal features are manipulated at an intermediate layer 
by the hypothesis models, they are not guaranteed to lie within the feasible feature bound. 
Causal inversion serves as a key to resolving this without harming the causal 
prediction much. Let $F_{AC}$ denote the adversarial causal features distilled by a hypothesis model $h$, 
and let $\delta_{causal}$ denote the causal perturbation representing the causal inversion $X_{causal} = X + \delta_{causal}$. 
Note that $F_{AC}$ here still uses the reparameterization technique to reduce heterogeneity.
Then $F_{AC}$ can be computed as $\hat{F}_{AC} = f_l(X_{causal})$,
embedding the causal features into the defense network as a form of empirical risk minimization (ERM), as follows:  
\begin{equation}
    \min E_{D_k} \Big[\max L_{AT} + D_{KL}\big(f_{l+}(\hat{F}_{AC}) \,\big\|\, f_{l+}(F_{adv})\big)\Big]
    \label{Local Ob}
\end{equation}  
Here $L_{AT}$ denotes the adversarial loss in adversarial training. The second term is a causal regularizer serving as causal inoculation, making the adversarial features assimilate the causal features. 
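A minimal sketch of such a KL-based causal regularizer, with hypothetical logits standing in for $f_{l+}(\hat{F}_{AC})$ and $f_{l+}(F_{adv})$:

```python
import math

def softmax(logits):
    m = max(logits)  # shift by the max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    z = sum(exps)
    return [e / z for e in exps]

def kl_div(p, q):
    """D_KL(p || q) for discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

logits_causal = [2.0, 0.5, -1.0]   # stand-in for f_{l+}(F_AC)
logits_adv = [1.5, 1.0, -0.5]      # stand-in for f_{l+}(F_adv)

# The regularizer penalizes adversarial predictions that drift away
# from the causal-inversion predictions.
reg = kl_div(softmax(logits_causal), softmax(logits_adv))
print(reg)
```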

\begin{table*}[t]
    \setlength{\tabcolsep}{2pt}
    \caption{\textbf{Experimental Results on CIFAR-10 and CIFAR-100}}
    \label{BN table}
    \scriptsize
    \centering
    \begin{tabular}{|c|ccccc|ccccc|}
    \toprule
    \textbf{Dataset} & \multicolumn{5}{c|}{\textbf{\underline{CIFAR10}}} & \multicolumn{5}{c|}{\textbf{\underline{CIFAR100}}} \\
    \midrule
    \textbf{Methods} & \textbf{Natural} & \textbf{FGSM} & \textbf{PGD-20} & \textbf{CW} & \textbf{AA} & \textbf{Natural} & \textbf{FGSM} & \textbf{PGD-20} & \textbf{CW} & \textbf{AA} \\
    \midrule
    FAT & 53.35 & 29.14 & 26.27 & 22.79 & 21.89 & 34.43 & 15.69 & 14.36 & 11.31 & 9.06\\
    FedPGD & 46.96 & 28.70 & 26.74 & 24.38 & 22.47 & 33.96 & 16.07 & 14.67 & 11.67 & 10.87\\  
    FedTRADES & 46.06 & 27.75 & 26.31& 22.86 & 21.70 & 29.55 & 15.01 & 14.30 & 10.58 & 9.53 \\ 
    FedMART & 25.67 & 18.50 & 18.10 & 15.22 & 14.41 & 19.96 & 13.00 & 12.83 & 9.92 & 8.57 \\  
    CalFAT & 64.69 & 35.03 & 31.12 & 24.69 & 22.91 & 44.57 & 17.63 & 15.21 & 12.01 & 11.49 \\  
    \textbf{FCAT} & \textbf{75.27} & \textbf{40.12} & \textbf{35.94} & \textbf{27.81} & \textbf{25.29} & \textbf{54.18} & \textbf{18.52} & \textbf{15.30} & \textbf{13.15} & \textbf{11.87} \\


    \bottomrule
    \end{tabular}
\end{table*}

\begin{table*}[t]
    \setlength{\tabcolsep}{2pt}
    \caption{\textbf{Experimental Results}}
    \label{experimental results}
    \scriptsize
    \centering
    \begin{tabular}{cc|ccccc|ccccc|ccccc|ccccc|c}
    \toprule
    & & \multicolumn{5}{c|}{\textbf{\underline{CIFAR10}}} & \multicolumn{5}{c|}{\textbf{\underline{CIFAR100}}}
    & \multicolumn{5}{c|}{\textbf{\underline{SVHN}}} & \multicolumn{5}{c|}{\textbf{\underline{Tiny}}} &
    \\
    Model & Methods & Natural & FGSM & PGD & CW & AA & Natural & FGSM & PGD & CW & AA & Natural & FGSM & PGD & CW & AA & Natural & FGSM & PGD & CW & AA &
    \\
    \midrule
    \multirow{8}{*}{VGG-16} 
    & FAT & 42.62 & 28.59 & 26.96 & 20.35 & 23.36 & 44.85 & 26.55 & 24.63 & 18.48 & 19.26 & 73.45 & 53.81 & 44.52 & 38.25 & 36.71 & 44.33 & 27.87 & 26.76 & 19.61 & 19.52 &\\
    & \textbf{FAT-Promote} & \textbf{51.40} & \textbf{37.58} & \textbf{36.44} & \textbf{26.71} & \textbf{29.95} & \textbf{43.46} & \textbf{27.37} & \textbf{25.68} & \textbf{19.43} 
    & \textbf{19.93} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \\
    & DBFAT & 25.71 & 21.34 & 21.13 & 18.29 & 20.05 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 20.90 & 13.36 & 12.92 & 10.96 & 10.73 & \\
    & \textbf{DBFAT-Promote} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} 
    & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \\
    & CalFAT & 63.09 & 33.56 & 31.35 & 21.88 & 21.39 & 49.70 & 23.11 & 19.78 & 14.31 & 13.46 & 0 & 0 & 0 & 0 & 0 & 47.13 & 23.65 & 21.35 & 14.82 & 14.45 & \\
    & \textbf{CalFAT-Promote} & \textbf{66.53} & \textbf{38.47} & \textbf{34.73} & \textbf{26.75} & \textbf{25.51} & \textbf{51.59} & \textbf{23.07} & \textbf{19.59} & \textbf{14.61} 
    & \textbf{13.72} & \textbf{77.93} & \textbf{51.37} & \textbf{45.51} & \textbf{39.55} & \textbf{37.73} & \textbf{49.88} & \textbf{27.42} & \textbf{24.96} & \textbf{16.81} & \textbf{16.44} & \\
    & SFAT & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \\
    & \textbf{SFAT-Promote} & \textbf{56.12} & \textbf{38.83} & \textbf{36.36} & \textbf{29.15} & \textbf{32.05} & \textbf{46.10} & \textbf{26.50} & \textbf{24.34} & \textbf{18.51} 
    & \textbf{18.14} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \\
    \midrule
    \multirow{8}{*}{Resnet-18} 
    & FAT & 70.02 & 51.61 & 48.39 & 39.36 & 39.23 & 50.53 & 28.27 & 26.57 & 17.44 & 17.01 & 83.43 & 63.51 & 54.05 & 48.32 & 46.64 & 49.22 & 31.16 & 29.98 & 21.05 & 20.99 &\\
    & \textbf{FAT-Promote} & \textbf{73.24} & \textbf{54.69} & \textbf{50.51} & \textbf{41.71} & \textbf{42.24} & \textbf{52.43} & \textbf{30.35} & \textbf{28.18} & \textbf{18.65} 
    & \textbf{18.23} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \\
    & DBFAT & 32.8 & 25.84 & 25.44 & 23.00 & 23.41 & 21.83 & 15.09 & 14.82 & 12.91 & 12.48 & 0 & 0 & 0 & 0 & 0 & 22.53 & 14.83 & 14.38 & 12.21 & 12.05 & \\
    & \textbf{DBFAT-Promote} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} 
    & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \\
    & CalFAT & 76.73 & 49.13 & 44.95 & 33.41 & 32.54 & 54.43 & 23.99 & 20.57 & 13.08 & 12.47& 0 & 0 & 0 & 0 & 0 & 52.34 & 25.01 & 22.81 & 13.96 & 13.77 & \\
    & \textbf{CalFAT-Promote} & \textbf{79.74} & \textbf{49.35} & \textbf{46.71} & \textbf{35.49} & \textbf{34.67} & \textbf{55.29} & \textbf{25.94} & \textbf{22.63} & \textbf{14.14} 
    & \textbf{13.74} & \textbf{89.01} & \textbf{62.64} & \textbf{54.63} & \textbf{46.14} & \textbf{44.66} & \textbf{54.31} & \textbf{27.02} & \textbf{24.81} & \textbf{14.89} & \textbf{14.775} & \\
    & SFAT & 70.08 & 46.27 & 45.87 & 37.52 & 37.21 & 47.95 & 25.67 & 25.07 & 15.34 & 15.67 & 79.70 & 58.87 & 49.17 & 44.72 & 42.71 & 48.32 & 30.91 & 29.86 & 20.55 & 20.50 & \\
    & \textbf{SFAT-Promote} & \textbf{73.87} & \textbf{52.62} & \textbf{49.67} & \textbf{41.03} & \textbf{40.74} & \textbf{51.47} & \textbf{28.83} & \textbf{26.62} & \textbf{17.70} 
    & \textbf{17.27} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \\
    \midrule
    \multirow{8}{*}{WRN-28} 
    & FAT & 72.08 & 50.08 & 47.16 & 38.04 & 37.96 & 54.58 & 30.78 & 28.92 & 18.05 & 17.74 & 0 & 0 & 0 & 0 & 0 & 51.69 & 34.22 & 32.97 & 22.89 & 22.86 &\\
    & \textbf{FAT-Promote} & \textbf{75.48} & \textbf{53.71} & \textbf{50.93} & \textbf{41.35} & \textbf{41.68} & \textbf{56.31} & \textbf{32.17} & \textbf{29.81} & \textbf{19.84} 
    & \textbf{19.35} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \\
    & DBFAT & 27.94 & 21.60 & 21.16 & 20.18 & 20.35 & 19.67 & 12.71 & 12.50 & 10.94 & 10.57 & 0 & 0 & 0 & 0 & 0 & 15.77 & 10.11 & 10.79 & 8.83 & 8.44 & \\
    & \textbf{DBFAT-Promote} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} 
    & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \\
    & CalFAT & 77.42 & 48.14 & 44.85 & 35.70 & 35.79 & 58.75 & 26.55 & 23.56 & 14.69 & 14.24 & 80.85 & 52.03 & 47.59 & 39.36 & 39.84 & 57.25 & 28.37 & 26.26 & 15.71 & 15.43 & \\
    & \textbf{CalFAT-Promote} & \textbf{78.52} & \textbf{49.21} & \textbf{45.93} & \textbf{36.67} & \textbf{36.13} & \textbf{59.79} & \textbf{27.23} & \textbf{24.24} & \textbf{14.78} 
    & \textbf{14.51} & \textbf{83.15} & \textbf{58.50} & \textbf{52.37} & \textbf{46.14} & \textbf{44.56} & \textbf{59.53} & \textbf{31.64} & \textbf{29.55} & \textbf{18.17} & \textbf{17.89} & \\
    & SFAT & 43.03 & 29.58 & 29.29 & 24.34 & 28.46 & 34.37 & 19.73 & 19.01 & 14.14 & 15.92 & 0 & 0 & 0 & 0 & 0 & 34.34 & 23.05 & 22.56 & 15.45 & 15.71 & \\
    & \textbf{SFAT-Promote} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} 
    & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \textbf{0} & \\

    \bottomrule
    \end{tabular}
    \end{table*}   


\subsection{Model Aggregation}  
We found that GMM with calibrated skewness still did not further solve the problem of drift between models. Motivated by \cite{zhu2023combating}, we introduce 
slack aggregation into causal adversarial training, and find that this aggregation can effectively reduce the drift between models.
Another problem with model aggregation is the cost of communication, and an intuitive idea is to have causal models also participate in the model aggregation process of federated learning.
If all parameters of the causal models are added to the model aggregation, 
there is no doubt that the communication cost will be further increased.
Of course, we can also choose not to aggregate the causal model, but without aggregation, the 
data on each client is not complete, so it will cause insufficient information obtained by the causal model, 
and then cause the degradation of the global model. Then it is necessary to develop causal aggregation methods that can efficiently aggregate causal models with as little communication overhead as possible. 
Inspired by~\cite{hong2021federated,li2021fedbn}, we note that the batch normalization (BN) layer can 
reduce the feature deviation caused by different data distributions across models, 
and that there is a strong connection between model robustness and the statistical parameters of the BN layer~\cite{Schneider_Rusak_Eck_Bringmann_Brendel_Bethge_2020}.
We therefore weight the causal aggregation by the inverse of the divergence between the current client BN layers and the result of the previous aggregation round.
In detail, let $\mu$ and $\sigma^2$ denote the mean and variance of a BN layer; then our aggregation is:  
\begin{equation}
    \hat{\mu} = \sum_{j \in S} \alpha_j \mu_j, \qquad \hat{\sigma}^2 = \sum_{j \in S} \alpha_j \sigma_j^2
\end{equation}
where $\alpha_j$ indicates the weight of client $j$ in FL:
\begin{equation}
    \begin{aligned}
        &\alpha_j = \mathrm{Softmax}_T\Big[\frac{1}{L}\sum_{l=1}^L \mathrm{Sim}^l(BN_{*,t-1},BN_{t,j})\Big] \\
        &\mathrm{Softmax}_T(q_j) = \frac{\exp(q_j/T)}{\sum_{j \in S} \exp(q_j /T)}, \quad T=0.01 \\
        &\mathrm{Sim}^l(BN_{*,t-1},BN_{t,j}) = \cos(\mu_{t-1}^l,\mu_j^l) + \cos(\sigma_{t-1}^{2,l},\sigma_{j}^{2,l}) \\ 
        &\cos(x,y) = \frac{x^\top y}{\|x\|\,\|y\|} \\
    \end{aligned}
\end{equation}
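A sketch of this similarity-weighted BN aggregation for a single BN layer (i.e., $L=1$); the statistics are stand-in vectors and `bn_aggregate` is an illustrative helper:

```python
import math

def cos(x, y):
    """Cosine similarity between two vectors."""
    num = sum(a * b for a, b in zip(x, y))
    den = math.sqrt(sum(a * a for a in x)) * math.sqrt(sum(b * b for b in y))
    return num / den

def bn_aggregate(prev_mu, prev_var, client_mus, client_vars, T=0.01):
    # Similarity of each client's BN stats to the previous round's aggregate.
    sims = [cos(prev_mu, mu) + cos(prev_var, var)
            for mu, var in zip(client_mus, client_vars)]
    # Temperature softmax (shifted by the max for numerical stability).
    m = max(sims)
    exps = [math.exp((s - m) / T) for s in sims]
    z = sum(exps)
    alphas = [e / z for e in exps]
    d = len(prev_mu)
    mu_hat = [sum(a * mu[j] for a, mu in zip(alphas, client_mus)) for j in range(d)]
    var_hat = [sum(a * var[j] for a, var in zip(alphas, client_vars)) for j in range(d)]
    return alphas, mu_hat, var_hat

alphas, mu_hat, var_hat = bn_aggregate(
    [0.0, 1.0], [1.0, 1.0],       # previous round's aggregate stats
    [[0.1, 0.9], [5.0, -3.0]],    # client BN means
    [[1.0, 1.1], [0.2, 4.0]],     # client BN variances
)
print(alphas)  # the client closest to the previous aggregate dominates
```

With the low temperature $T=0.01$, the softmax is sharply peaked, so clients whose BN statistics drift far from the previous aggregate receive near-zero weight.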
Such aggregation requires very little communication overhead, yet it is highly effective.
Our method can be widely applied to various federated adversarial training frameworks 
and significantly improves the robustness of the original methods, as shown in detail in the experiments.


\section{Experiments}  
\begin{figure}[t]
    \centering
  
    \includegraphics[width=0.9\linewidth,height=2in]{output/data_distributed.eps}
     \caption{The distribution of client dataset simulated by Dirichlet partition function. }
     \label{fig2}
  \end{figure}

  \begin{figure}[t]
    \centering
    \includegraphics[width=\linewidth,height=0.5in]{output/Y.eps}
    \caption{The effect of causal adversarial training before and after label skewed information is introduced.}
     \label{skewed ablation}
  \end{figure}
In this section, we integrate our proposed FCAT into three baseline networks and conduct extensive experiments to validate its efficacy on four datasets: 
CIFAR-10~\cite{krizhevsky2009learning}, CIFAR-100~\cite{krizhevsky2009learning}, SVHN~\cite{netzer2011reading}, 
and Tiny-ImageNet~\cite{le2015tiny}. To simulate label distribution skew, 
we sample $p_i^l\sim Dir(\beta)$ and allocate a $p_i^l$ proportion of the data of label $l$ to client $i$, where $Dir(\beta)$ is the
Dirichlet distribution with concentration parameter $\beta$~\cite{yurochkin2019bayesian}. By default, we set $\beta=0.1$ to simulate the highly skewed label distributions
that widely exist in reality. Figure~\ref{fig2} shows the data distribution of each client after sampling.
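The Dirichlet partition can be sketched as follows. This is a self-contained toy version; `partition_by_label` is an illustrative helper, not the actual implementation:

```python
import random

def dirichlet(beta, k, rng):
    """Draw one sample from Dir(beta, ..., beta) via gamma draws."""
    draws = [rng.gammavariate(beta, 1.0) for _ in range(k)]
    z = sum(draws)
    return [d / z for d in draws]

def partition_by_label(labels, num_clients, beta, seed=0):
    """For each class l, give client i a p_i^l ~ Dir(beta) share of class l."""
    rng = random.Random(seed)
    clients = [[] for _ in range(num_clients)]
    for c in sorted(set(labels)):
        idx = [i for i, y in enumerate(labels) if y == c]
        rng.shuffle(idx)
        props = dirichlet(beta, num_clients, rng)
        n, start, cum = len(idx), 0, 0.0
        for j, p in enumerate(props):
            cum += p
            end = n if j == num_clients - 1 else int(round(cum * n))
            end = max(start, min(end, n))
            clients[j].extend(idx[start:end])
            start = end
    return clients

labels = [0] * 50 + [1] * 50
parts = partition_by_label(labels, num_clients=4, beta=0.1)
print([len(p) for p in parts])  # highly uneven split at beta = 0.1
```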
To show that our approach is model-agnostic, we apply FCAT to three widely used networks: VGG-16~\cite{Simonyan_Zisserman_2015}, 
ResNet-18~\cite{He_Zhang_Ren_Sun_2016}, and a larger network, WideResNet-28-2~\cite{Zagoruyko_Komodakis_2016}.   
We also implement a variety of competitive FL solutions, including MixFAT~\cite{zizzo2020fat}, 
CalFAT~\cite{chen2022calfat}, DBFAT~\cite{zhang2023delving}, GEAR~\cite{chen2022gear}, 
and SFAT~\cite{zhu2023combating}. 
For attacks, we use a perturbation budget of 8/255 for CIFAR-10, CIFAR-100, and SVHN, and 4/255 for Tiny-ImageNet, with two standard attacks, 
FGSM~\cite{goodfellow2014explaining} and PGD~\cite{madry2017towards}, 
and two strong attacks, CW~\cite{Carlini_Wagner_2017} and AA (Auto-PGD: step-size free). 
We generate adversarial examples using PGD~\cite{madry2017towards} with a perturbation budget of 8/255 
and 10 steps during training. 
Since adversarial training on Tiny-ImageNet is a computational burden, 
we employ fast adversarial training with FGSM on a budget of 4/255 and 
a step size of 1.25 times the budget. For all training, we use SGD~\cite{Robbins_Monro} with momentum 0.9 and a learning rate of 0.1 scheduled by a cyclic schedule~\cite{Smith_2017} over 150 epochs. 

As shown in Table~\ref{experimental results}, our approach achieves significantly superior performance, improving the natural accuracy of multiple current mainstream frameworks.  

In particular, when applied to the VGG-16 model, our FCAT achieves a $10\%$ improvement in accuracy and a $5\%$ improvement in robustness.
All methods perform worst on the CIFAR-100 and Tiny-ImageNet datasets. 
We conjecture that this is because these two datasets contain more classes, making federated training substantially harder.


\section{Ablation Studies}   


\textbf{Contribution of label skew information} As shown in Eq.~\ref{yy}, we introduce label skew information into causal training.
This naturally raises a question: how does this skew information contribute to FCAT? To answer it, 
we compare the performance of simply transferring causal adversarial training to FL against our method, within FAT~\cite{zizzo2020fat} and
CalFAT~\cite{chen2022calfat}, on the CIFAR-10 dataset. As shown in Figure~\ref{skewed ablation}, this skew information plays a very important role in FCAT.
As proved in the Appendix, causal adversarial training further deepens the differences between the client models, 
so it is necessary to introduce this skew information as a correction. This also explains 
why a naive transfer of causal adversarial training to federated learning causes catastrophic degradation of the model. 


\begin{table}[t]
    \setlength{\tabcolsep}{2pt}
    \caption{\textbf{Ablation of the Aggregation Strategy}}
    \label{aggregation table}
    \scriptsize
    \centering
    \begin{tabular}{|c|c|c|c|c|}
    \toprule
    Methods & Time Cost & Communication File Size & Robustness & Accuracy
    \\
    \midrule
    Ours & 10ms & $\le$ 2 MB & 34.63 & 61.55 \\
    Fed-AVG & 532ms & $\approx$ 79.8 MB & 35.12 & 61.75 \\
    Non-Aggregation & - & - & 32.21 & 57.87 \\
    \bottomrule
\end{tabular}
\end{table} 

\begin{figure}[t]
    \centering
    \includegraphics[width=0.9\linewidth,height=2in]{output/BN.eps}
    \caption{ The causal model with aggregation is better than the causal model without aggregation. However, our BN layer aggregation algorithm, at a very small extra communication cost, can achieve results similar to all parameters of the aggregate causal model.}
     \label{aggregation compration}
  \end{figure}


\textbf{Contribution of causal aggregation} Since we only upload the BN layers of the causal model to the server 
for aggregation, our communication overhead is clearly lower than uploading the entire causal model. 
The remaining question is whether aggregating the causal models is necessary at all. Figure~\ref{aggregation compration} shows the 
influence of aggregating the full causal model, aggregating only the BN layers, and not aggregating the causal model on the final result of FCAT. 
The experiment shows that aggregating the causal model is still very necessary in federated causal adversarial 
training. When the communication overhead of aggregating the full model is high, aggregating only the BN layers of the causal 
model significantly reduces the communication overhead without degrading the model too much. 

  \begin{figure}[t]
    \centering
    \includegraphics[width=0.9\linewidth,height=2in]{output/tnse.eps}
    \caption{t-SNE visualization.}
     \label{T-nse visualization}
  \end{figure}

    \begin{figure}[t]
    \centering
    \includegraphics[width=0.9\linewidth,height=2in]{output/causalab.eps}
    \caption{Feature visualization, the darker the area, the lower the impact on classification.}
     \label{Feature visualization}
  \end{figure}
  
\textbf{Feature Visualization}  
We use feature map visualization to show the effectiveness of the feature extractor. As shown in Fig.~\ref{Feature visualization}, the dark areas indicate pixels whose activation values after the stacked convolutional layers are low, and which therefore have little impact on the classification result.
To better understand the efficacy of our method, we also use t-SNE~\cite{van2008visualizing} visualization.  
As shown in Fig.~\ref{T-nse visualization}, we visualize the features extracted from the last convolutional layer of VGG-16 on CIFAR-10.
The results show that the model with causal adversarial training classifies samples significantly better.
Specifically, compared with the original CalFAT method, the data distributions of labels 1, 6, and 7 are significantly tighter after causal adversarial training.

\section{Conclusion}
In this paper, we introduced causal adversarial training into federated adversarial training for the first time. 
Directly migrating causal adversarial training, however, exacerbates the heterogeneity among models. 
We therefore incorporated data information from the client nodes into the training process 
of the causal model. Moreover, due to the communication constraints of federated learning, 
we proposed a causal model aggregation algorithm that aggregates only the models' batch 
normalization (BN) layers yet achieves results close to aggregating the full causal model. 
Experiments demonstrate that, with minimal additional communication cost, our method applies to various federated adversarial training frameworks and significantly enhances model accuracy and robustness.

\newpage