Mitigating Nonlinear Algorithmic Bias in Binary Classification

Research output: Chapter in Book / Conference paper › Conference paper › peer-review

Abstract

This paper proposes the use of causal modeling to detect and mitigate algorithmic bias that is nonlinear in the protected attribute. We provide a general overview of our approach. Using the German Credit data set, available for download from the UC Irvine Machine Learning Repository, we develop (1) a prediction model, which is treated as a black box, and (2) a causal model for bias mitigation. In this paper, we focus on age bias and the problem of binary classification. We show that the probability of being correctly classified as "low risk" is lowest among young people and increases nonlinearly with age. To incorporate this nonlinearity into the causal model, we introduce a higher-order polynomial term. Based on the fitted causal model, de-biased probability estimates are computed, showing improved fairness with little impact on overall classification accuracy. Causal modeling is intuitive; its use can therefore enhance explicability and promote trust among different stakeholders of AI.
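The paper's exact causal model is not reproduced here, but the idea described in the abstract can be sketched as follows: fit a model of the outcome that includes a higher-order polynomial term in the protected attribute (age), then compute de-biased estimates by intervening on age, i.e., fixing it at a reference value so predictions no longer vary with the applicant's actual age. Everything below (the simulated data, coefficients, and the choice of a quadratic term and mean-age reference) is an illustrative assumption, not the authors' implementation.

```python
# Hypothetical sketch of nonlinear bias mitigation via a causal-style model.
# Simulated data stands in for the German Credit data set.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
age = rng.uniform(19, 75, n)        # protected attribute
income = rng.normal(50.0, 15.0, n)  # ordinary covariate, independent of age

# Simulated ground truth: probability of "low risk" rises nonlinearly with age.
logit = -4.0 + 0.12 * age - 0.0008 * age**2 + 0.02 * income
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# Causal-style model with a quadratic (higher-order) age term.
X = np.column_stack([age, age**2, income])
model = LogisticRegression(max_iter=5000).fit(X, y)

# De-biased estimates: intervene by fixing age at a reference value,
# so the prediction depends only on the non-protected covariates.
ref_age = age.mean()
X_ref = np.column_stack([np.full(n, ref_age), np.full(n, ref_age**2), income])
p_debiased = model.predict_proba(X_ref)[:, 1]
```

Because income is simulated independently of age, the de-biased probabilities are (up to sampling noise) uncorrelated with age, while the raw model's predictions still track it.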

Original language: English
Title of host publication: Proceedings - 2024 IEEE Conference on Artificial Intelligence
Publisher: IEEE, Institute of Electrical and Electronics Engineers
Pages: 913-917
Number of pages: 5
ISBN (Electronic): 9798350354096
DOIs
Publication status: Published - 30 Jul 2024
Event: 2nd IEEE Conference on Artificial Intelligence - Singapore, Singapore
Duration: 25 Jun 2024 - 27 Jun 2024

Conference

Conference: 2nd IEEE Conference on Artificial Intelligence
Abbreviated title: CAI 2024
Country/Territory: Singapore
City: Singapore
Period: 25/06/24 - 27/06/24
