\begin{abstract}
Beyond conventional centralized training, an increasing number of works have revealed that federated learning (FL) is also vulnerable to adversarial attacks, especially under non-IID settings with severe label distribution skew.
To mitigate this issue, federated adversarial training (FAT) methods, which apply adversarial training (AT) to client models before global aggregation, have received extensive attention.
However, existing approaches



\end{abstract}