Abstract
When machine learning is used to automate judgments, e.g., in areas like lending or crime prediction, incorrect decisions can have adverse effects on the individuals concerned. This occurs, for example, when the data used to train these models is based on prior decisions that are unfairly skewed against specific subpopulations. If models are to automate decision-making, they must account for these biases to avoid perpetuating or creating discriminatory practices. Counterfactual fairness audits a model with respect to a notion of fairness that asks for equal outcomes between a decision made in the real world and one made in a counterfactual world in which the individual subject to the decision comes from a different protected demographic group. In this work, we propose a method to conduct such audits without access to the underlying causal structure of the data-generating process by framing the audit as a multi-objective optimization task that can be solved efficiently with a genetic algorithm.
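The following is a minimal, illustrative sketch of this idea, not the paper's implementation: a trained classifier is audited for one individual by searching, with a simple mutation-only genetic algorithm and Pareto-based selection, for counterfactual inputs whose protected attribute is flipped, trading off the change in prediction against the distance on the non-protected features. The dataset, the objectives, and all function and parameter names (`objectives`, `audit`, `sigma`, etc.) are assumptions made for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy data: column 0 is a binary protected attribute, columns 1-3 are features
# whose distribution is skewed by group membership (a stand-in for biased data).
n = 2000
protected = rng.integers(0, 2, n).astype(float)
features = rng.normal(size=(n, 3)) + 0.8 * protected[:, None]
y = (features.sum(axis=1) + rng.normal(scale=0.5, size=n) > 1.0).astype(int)
X = np.column_stack([protected, features])

model = RandomForestClassifier(random_state=0).fit(X, y)


def objectives(x_orig, candidates):
    """Two audit objectives, both minimised by the genetic algorithm:
    o1 = negated |change in predicted probability| (large changes expose
         potential counterfactual unfairness, hence the negation),
    o2 = distance to the original individual on the non-protected features."""
    p_orig = model.predict_proba(x_orig[None, :])[0, 1]
    p_cand = model.predict_proba(candidates)[:, 1]
    o1 = -np.abs(p_cand - p_orig)
    o2 = np.linalg.norm(candidates[:, 1:] - x_orig[1:], axis=1)
    return np.column_stack([o1, o2])


def domination_counts(F):
    """For each row of the objective matrix F, count how many rows dominate it."""
    return np.array([
        np.sum(np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1))
        for i in range(len(F))
    ])


def audit(x_orig, pop_size=40, generations=50, sigma=0.3):
    """Genetic search for counterfactuals of x_orig with the protected attribute
    flipped; returns the non-dominated (Pareto-optimal) counterfactuals found."""
    pop = np.tile(x_orig, (pop_size, 1))
    pop[:, 0] = 1.0 - x_orig[0]                      # counterfactual group
    pop[:, 1:] += rng.normal(scale=sigma, size=(pop_size, 3))
    for _ in range(generations):
        children = pop.copy()                        # mutation-only offspring
        children[:, 1:] += rng.normal(scale=sigma, size=(pop_size, 3))
        union = np.vstack([pop, children])
        counts = domination_counts(objectives(x_orig, union))
        pop = union[np.argsort(counts)[:pop_size]]   # Pareto-based selection
    F = objectives(x_orig, pop)
    nd = domination_counts(F) == 0
    return pop[nd], F[nd]


x0 = X[0]
counterfactuals, F = audit(x0)
# If large prediction changes are reachable at small feature distances, the
# decision for x0 depends on the protected attribute and the audit flags it.
print(f"Largest prediction change on the Pareto front: {-F[:, 0].min():.3f}")
```

The resulting Pareto front makes the audit interpretable: each point answers "how much can the prediction change if the individual had belonged to the other group, given a feature change of this size", without requiring the causal graph of the data-generating process.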
Document type: | Conference contribution (paper)
---|---
Faculty: | Mathematics, Informatics and Statistics > Statistics
Subject areas: | 300 Social sciences > 310 Statistics
Place: | New York
Note: | ISBN 978-1-4503-9268-6
Language: | English
Document ID: | 109991
Date of publication on Open Access LMU: | 22 Mar 2024, 07:56
Last modified: | 22 Mar 2024, 07:56