Abstract
When machine learning is used to automate judgments, e.g., in areas like lending or crime prediction, incorrect decisions can have adverse effects on the individuals concerned. This occurs, for example, when the data used to train these models reflect prior decisions that are unfairly skewed against specific subpopulations. If models are to automate decision-making, they must account for these biases to avoid perpetuating or creating discriminatory practices. Counterfactual fairness audits models with respect to a notion of fairness that requires equal outcomes between a decision made in the real world and one made in a counterfactual world where the individual subject to the decision belongs to a different protected demographic group. In this work, we propose a method to conduct such audits without access to the underlying causal structure of the data-generating process, by framing the audit as a multi-objective optimization task that can be solved efficiently with a genetic algorithm.
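To make the idea concrete, below is a minimal, illustrative sketch (not the authors' implementation) of what such an audit can look like: for a fixed individual, a simple genetic algorithm searches for counterfactual inputs with a flipped protected attribute, trading off two objectives, namely keeping the non-protected features close to the factual point while maximizing the shift in the model's prediction. All names (`audit_counterfactual_fairness`, `objectives`, the toy data, the mutation scale) are assumptions chosen for illustration; the paper's actual objectives, constraints, and evolutionary operators may differ.

```python
# Illustrative sketch only: a hand-rolled multi-objective genetic search
# for counterfactuals under a protected-attribute flip. Objectives, data,
# and operators are assumptions, not the paper's method.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy setup: synthetic data with a binary protected attribute in column 0.
X = rng.normal(size=(500, 4))
X[:, 0] = rng.integers(0, 2, size=500)          # protected attribute (0/1)
y = (X[:, 1] + 0.8 * X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)
model = LogisticRegression().fit(X, y)

def objectives(x_factual, x_cf):
    """Two objectives to minimise: perturbation of the non-protected features,
    and the negated prediction shift caused by the counterfactual."""
    feature_change = np.linalg.norm(x_cf[1:] - x_factual[1:])
    pred_shift = abs(model.predict_proba(x_cf[None])[0, 1]
                     - model.predict_proba(x_factual[None])[0, 1])
    return np.array([feature_change, -pred_shift])

def dominates(a, b):
    """Pareto dominance for minimisation."""
    return np.all(a <= b) and np.any(a < b)

def audit_counterfactual_fairness(x_factual, pop_size=40, n_generations=50):
    """Evolve counterfactuals with a flipped protected attribute that stay
    close to the factual point yet change the model's prediction strongly."""
    # Initial population: flip the protected attribute, jitter the rest.
    pop = np.tile(x_factual, (pop_size, 1))
    pop[:, 0] = 1 - x_factual[0]
    pop[:, 1:] += rng.normal(scale=0.1, size=(pop_size, pop.shape[1] - 1))

    for _ in range(n_generations):
        # Variation: Gaussian mutation of the non-protected features.
        children = pop.copy()
        children[:, 1:] += rng.normal(scale=0.1, size=children[:, 1:].shape)
        union = np.vstack([pop, children])
        scores = np.array([objectives(x_factual, x) for x in union])
        # Environmental selection: prefer non-dominated candidates.
        front = [i for i in range(len(union))
                 if not any(dominates(scores[j], scores[i]) for j in range(len(union)))]
        rest = [i for i in range(len(union)) if i not in front]
        pop = union[(front + rest)[:pop_size]]
    return pop

pareto_candidates = audit_counterfactual_fairness(X[0])
shift = abs(model.predict_proba(pareto_candidates)[:, 1]
            - model.predict_proba(X[0:1])[0, 1])
print("largest prediction shift found under a protected-attribute flip:", shift.max())
```

In this toy reading, a large prediction shift achieved by a counterfactual that barely changes the non-protected features would flag a potential counterfactual-fairness violation; how such trade-offs are scored and aggregated in the actual audit is specified in the paper, not here.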
| Item Type: | Conference or Workshop Item (Paper) |
|---|---|
| Faculties: | Mathematics, Computer Science and Statistics > Statistics |
| Subjects: | 300 Social sciences > 310 Statistics |
| Place of Publication: | New York |
| Annotation: | ISBN 978-1-4503-9268-6 |
| Language: | English |
| Item ID: | 109991 |
| Date Deposited: | 22 Mar 2024, 07:56 |
| Last Modified: | 22 Mar 2024, 07:56 |