
Fair Ensembles

This repo contains the benchmark models, datasets, and results for the following paper, submitted to ICSE 2023.

Title: Towards Understanding Fairness and its Composition in Ensemble Machine Learning

Abstract: Machine Learning (ML) software has been widely adopted in modern society, with reported fairness implications for minority groups based on race, sex, age, etc. Many recent works have proposed methods to measure and mitigate algorithmic bias in ML models. The existing approaches focus on single classifier-based ML models. However, real-world ML models are often composed of multiple independent or dependent learners in an ensemble (e.g., Random Forest), where fairness composes in a non-trivial way. How does fairness compose in ensembles? What are the fairness impacts of the learners on the ultimate fairness of the ensemble? Furthermore, studies have shown that hyperparameters influence the fairness of ML models. Ensemble hyperparameters are more complex since they affect how learners are combined in different categories of ensembles. Understanding the impact of ensemble hyperparameters on fairness will help programmers design fair ensembles. Today, we do not fully understand these effects for different ensemble algorithms. In this paper, we comprehensively study popular real-world ensembles: bagging, boosting, stacking, and voting. We have developed a benchmark of 168 ensemble models collected from Kaggle on four popular fairness datasets. We use existing fairness metrics to understand the composition of fairness. Our results show that ensembles can be designed to be fairer without using mitigation techniques. We also identify the interplay between fairness composition and data characteristics to guide fair ensemble design. Finally, our benchmark can be leveraged for further research on fair ensembles. To the best of our knowledge, this is one of the first and largest studies on fairness composition in ensembles yet presented in the literature.
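To make the question of "how fairness composes" concrete, here is a minimal, self-contained sketch (not the paper's code; the synthetic data, the sensitive attribute `s`, and the choice of statistical parity difference as the metric are all illustrative assumptions) that compares a fairness measure on the individual learners of a bagging ensemble against the composed ensemble itself:

```python
# Illustrative sketch only: measure statistical parity difference (SPD)
# for each base learner of a bagging ensemble and for the ensemble as a
# whole. The synthetic data below is an assumption for illustration.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

def spd(y_pred, sensitive):
    """Statistical parity difference: P(pred=1 | s=1) - P(pred=1 | s=0)."""
    return y_pred[sensitive == 1].mean() - y_pred[sensitive == 0].mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
s = (rng.random(1000) < 0.5).astype(int)  # synthetic sensitive attribute
# Label correlated with both a feature and the sensitive attribute
y = ((X[:, 0] + 0.5 * s + rng.normal(scale=0.5, size=1000)) > 0).astype(int)

ensemble = BaggingClassifier(DecisionTreeClassifier(max_depth=3),
                             n_estimators=10, random_state=0).fit(X, y)

# Fairness of each base learner vs. the composed ensemble
learner_spds = [spd(est.predict(X), s) for est in ensemble.estimators_]
ensemble_spd = spd(ensemble.predict(X), s)
print(f"mean learner SPD: {np.mean(learner_spds):.3f}")
print(f"ensemble SPD:     {ensemble_spd:.3f}")
```

The ensemble's SPD need not equal the mean of its learners' SPDs, which is exactly the non-trivial composition the paper studies.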

Index

  1. Datasets
  2. Benchmark
  3. Results
    • For reproduction and validation of results, we provide the exact train/test splits, saved models (.pkl), and measures in a .csv. Note: due to randomization in some models, outputs may vary slightly; we take the mean of 10 runs to reduce variance.
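The reproduction workflow above can be sketched as follows. This is a hedged illustration, not the repo's actual scripts: the `"model.pkl"` path and the `SPD` column name are placeholders, and the in-memory CSV stands in for a real measures file.

```python
# Hedged sketch of the reproduction workflow: load a saved model and
# average a fairness measure over repeated runs to reduce variance.
# File paths and column names are placeholders, not the repo's layout.
import csv
import io
import pickle
import statistics

def load_model(path):
    """Load one of the saved benchmark models (placeholder path)."""
    with open(path, "rb") as f:
        return pickle.load(f)

def mean_metric(rows, column):
    """Mean of a fairness measure across runs (the repo averages 10 runs)."""
    return statistics.mean(float(row[column]) for row in rows)

# Tiny in-memory CSV standing in for a per-run measures file
sample = io.StringIO("run,SPD\n1,0.10\n2,0.14\n3,0.12\n")
print(f"mean SPD over runs: {mean_metric(csv.DictReader(sample), 'SPD'):.3f}")
```

With the real files, `load_model("path/to/model.pkl")` and `mean_metric` over the provided .csv should reproduce the reported measures up to the noted run-to-run variance.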