arXiv:2010.05057

Fairness-aware Agnostic Federated Learning

Published on Oct 10, 2020
Abstract

Federated learning is an emerging framework that builds centralized machine learning models from training data distributed across multiple devices. Most previous work on federated learning focuses on privacy protection and communication cost reduction. However, how to achieve fairness in federated learning is under-explored and challenging, especially when the testing data distribution differs from the training distribution or is even unknown. Simply imposing fairness constraints on the centralized model cannot guarantee fairness on unknown testing data. In this paper, we develop a fairness-aware agnostic federated learning framework (AgnosticFair) to deal with the challenge of unknown testing distributions. We use kernel reweighing functions to assign a reweighing value to each training sample in both the loss function and the fairness constraint. As a result, the centralized model built by AgnosticFair achieves both high accuracy and a fairness guarantee on unknown testing data. Moreover, the built model can be directly applied at local sites, as it guarantees fairness on local data distributions. To the best of our knowledge, this is the first work to achieve fairness in federated learning. Experimental results on two real datasets demonstrate its effectiveness in terms of both utility and fairness under data-shift scenarios.
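To make the kernel-reweighing idea concrete, the sketch below illustrates how per-sample weights built from kernel basis functions can enter both a weighted loss and a weighted fairness measure. This is a minimal illustration under assumptions, not the authors' implementation: it assumes Gaussian kernel basis functions, a logistic-regression model, and a demographic-parity-style fairness gap, and every name here (gaussian_kernel, sample_weights, alpha, bandwidth) is hypothetical.

```python
import numpy as np

def gaussian_kernel(X, centers, bandwidth=1.0):
    # K(x, c) = exp(-||x - c||^2 / (2 * bandwidth^2)) for each basis center c.
    # X: (n, d) samples; centers: (m, d) kernel centers -> returns (n, m).
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def sample_weights(X, centers, alpha, bandwidth=1.0):
    # Reweighing value per sample: w(x) = sum_k alpha_k * K(x, c_k).
    # alpha parameterizes the (unknown, possibly shifted) test distribution.
    return gaussian_kernel(X, centers, bandwidth) @ alpha

def reweighed_logistic_loss(theta, X, y, w):
    # Weighted logistic loss: each sample's loss is scaled by its weight.
    # y is assumed to take values in {-1, +1}.
    z = X @ theta
    per_sample = np.log1p(np.exp(-y * z))
    return np.sum(w * per_sample) / np.sum(w)

def reweighed_fairness_gap(theta, X, s, w):
    # Weighted demographic-parity-style gap: difference of weighted mean
    # predicted scores between protected groups s == 1 and s == 0.
    scores = 1.0 / (1.0 + np.exp(-(X @ theta)))
    g1, g0 = (s == 1), (s == 0)
    mean1 = np.sum(w[g1] * scores[g1]) / np.sum(w[g1])
    mean0 = np.sum(w[g0] * scores[g0]) / np.sum(w[g0])
    return abs(mean1 - mean0)
```

In the agnostic setting the abstract describes, training would plausibly take a minimax form: the learner minimizes the reweighed loss over the model parameters theta, while an adversary chooses the kernel coefficients alpha (i.e., a worst-case test distribution) with the reweighed fairness gap held within a tolerance, so that accuracy and fairness hold for whatever distribution the adversary can represent.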
