2020-04-16
On Algorithmic Fairness and Bias Mitigation in Recidivism Prediction
Publication
An econometric view on observed tradeoffs between conflicting definitions of fairness, and the applicability of post-processing methods for bias correction of criminal sentencing algorithms
Ensuring fair treatment of historically disadvantaged groups by machine learning (ML) guided decision-making systems is a rapidly growing point of discussion in both academia and industry. This thesis investigates whether a popular recidivism prediction instrument (RPI), known as COMPAS, is unfairly biased against African-Americans and/or women. Furthermore, the applicability of certain bias mitigation post-processing algorithms is studied for debiasing an arbitrary probabilistic recidivism predictor. Statistically conclusive results suggest that COMPAS scores do in fact unfairly put African-Americans at a disadvantage, whereas the results with respect to a bias against women are inconclusive. Finally, reject option based classification (RObC) proves highly effective for achieving group-based fairness optima while preserving balanced accuracy. However, these group-based fairness measures are optimised at the expense of an arguably important fairness notion known as calibration.
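To illustrate the post-processing approach the abstract refers to, the following is a minimal sketch of the general idea behind reject option based classification, not the thesis's exact implementation. The function name, the group encoding, and the width of the critical region (`theta`) are illustrative assumptions: predictions whose score falls close to the decision boundary are reassigned so that the unprivileged group receives the favourable label and the privileged group the unfavourable one.

```python
import numpy as np

def reject_option_classify(scores, unprivileged, theta=0.6):
    """Hypothetical sketch of reject option based classification.

    scores       : predicted probabilities of recidivism in [0, 1]
    unprivileged : boolean array, True for the unprivileged group
    theta        : boundary of the critical region [1 - theta, theta]
                   around the 0.5 decision threshold (0.5 < theta < 1)
    """
    scores = np.asarray(scores, dtype=float)
    unprivileged = np.asarray(unprivileged, dtype=bool)

    # Base classifier decision: 1 = predicted to recidivate (unfavourable).
    pred = (scores > 0.5).astype(int)

    # Instances the classifier is least certain about fall in the
    # critical region; only these predictions are reassigned.
    critical = (scores >= 1 - theta) & (scores <= theta)

    pred[critical & unprivileged] = 0   # favourable outcome
    pred[critical & ~unprivileged] = 1  # unfavourable outcome
    return pred
```

Because only low-confidence predictions are flipped, this kind of rule can reduce group-based disparity at little cost to balanced accuracy, but the reassigned scores no longer match observed outcome frequencies, which is why calibration suffers, as the abstract notes.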
| Additional Metadata | |
| --- | --- |
| | Bouman, P.C. |
| | hdl.handle.net/2105/51867 |
| | Econometrie |
| Organisation | Erasmus School of Economics |
Jansen, L.J. (2020, April 16). On Algorithmic Fairness and Bias Mitigation in Recidivism Prediction. Econometrie. Retrieved from http://hdl.handle.net/2105/51867