Abstract
AI-driven decision-making systems are becoming instrumental in the public sector, with applications spanning criminal justice, social welfare, financial fraud detection, and public health. While these systems offer substantial potential benefits to institutional decision-making, such as improved efficiency and reliability, they face the challenge of aligning machine learning (ML) models with the complex realities of public sector decision-making. In this paper, we examine five key challenges where misalignment can occur: distribution shifts, label bias, and the influence of past decision-making on the data side, as well as competing objectives and human-in-the-loop interaction on the model output side. Our findings suggest that standard ML methods often rely on assumptions that do not fully account for these complexities, potentially leading to unreliable and harmful predictions. To address this, we propose a shift in modeling efforts from focusing solely on predictive accuracy to improving decision-making outcomes. We offer guidance for selecting appropriate modeling frameworks, including counterfactual prediction and policy learning, by considering how the model estimand connects to the decision-maker's utility. Additionally, we outline technical methods that address specific challenges within each modeling framework. Finally, we argue for the importance of external input from domain experts and stakeholders to ensure that model assumptions and design choices align with real-world policy objectives, taking a step towards harmonizing AI and public sector objectives.
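To make the estimand-versus-utility distinction concrete, the minimal sketch below (not taken from the paper; the toy data-generating process, variable names, decision rule, and utility function are all illustrative assumptions) contrasts fitting a standard predictive outcome model with estimating the value of a candidate decision policy via inverse-propensity weighting, one simple instance of the policy learning framework mentioned in the abstract.

```python
# Illustrative sketch only: predictive accuracy vs. policy value under an assumed utility.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
X = rng.normal(size=(n, 3))                       # case features
A = rng.binomial(1, 0.5, size=n)                  # historical decisions (randomized here by assumption)
# Hypothetical outcome process: intervening (A = 1) helps cases with large X[:, 0]
Y = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 0] * (2 * A - 1)))))

# Standard ML view: predict the observed outcome and report accuracy
outcome_model = LogisticRegression().fit(np.c_[X, A], Y)
acc = outcome_model.score(np.c_[X, A], Y)
print(f"Predictive accuracy of outcome model: {acc:.3f}")

# Decision-focused view: evaluate a candidate policy by its estimated utility,
# here via inverse-propensity weighting (propensity = 0.5 by construction above)
def policy(x):
    return (x[:, 0] > 0).astype(int)              # illustrative decision rule

pi = policy(X)
utility = Y - 0.1 * A                             # assumed utility: outcome minus a decision cost
ipw_value = np.mean((pi == A) / 0.5 * utility)    # estimated value of the candidate policy
print(f"Estimated policy value (utility scale): {ipw_value:.3f}")
```

The point of the contrast is that the second quantity, not the first, is the estimand a decision-maker ultimately cares about; which estimator is appropriate depends on assumptions about how the historical decisions were made.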
Document type: | Journal article |
---|---|
Keywords: | Automated decision-making; Reliable artificial intelligence; Public policy; Causal machine learning |
Faculty: | Mathematics, Informatics and Statistics > Statistics |
Subject areas: | 300 Social sciences > 310 Statistics |
Place: | New York, NY; Amsterdam |
Language: | English |
Document ID: | 122531 |
Date published on Open Access LMU: | 18 Nov. 2024, 15:56 |
Last modified: | 18 Nov. 2024, 15:56 |