This thesis demonstrates how machine learning techniques can be made more interpretable using several methods that enhance their transparency. First, different aspects of interpretability are outlined and various measures that allow for a comparison across methods are presented. After that, different explanation methods are applied in an empirical study on loan default prediction. It is illustrated that global interpretation can be attained by constructing variable importance measures or partial dependence plots, which give insight into the relationship between the explanatory variables and the outcome. On a local level, insights into the model can be achieved by constructing feature contributions or a local linear approximation of the model, giving a detailed explanation of the prediction outcome for single instances. Some global understanding of how the model reaches its classification results can be gained by inspecting the variable interactions that the classifier exploits. Finally, single tree approximation can be performed, providing an approximate graphical representation of the model that shows, on both a global and a local scale, how decisions in the model are reached.
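
To make the global methods named above concrete, the sketch below shows permutation-based variable importance, a partial dependence calculation, and a single-tree surrogate using scikit-learn. The random forest classifier, the synthetic data, and the feature names are assumptions introduced purely for illustration; they do not come from the thesis's actual loan default application.

```python
# Minimal illustrative sketch (assumed setup, not the thesis's data or model):
# variable importance, partial dependence, and a single-tree approximation.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance, partial_dependence
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for a loan default data set (hypothetical features).
X, y = make_classification(n_samples=2000, n_features=6, n_informative=4,
                           random_state=0)

# "Black box" classifier whose behaviour we want to interpret.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Global view 1: permutation-based variable importance.
imp = permutation_importance(black_box, X, y, n_repeats=10, random_state=0)
print("Permutation importances:", imp.importances_mean.round(3))

# Global view 2: partial dependence of the prediction on the first feature,
# averaged over the remaining features.
pd_result = partial_dependence(black_box, X, features=[0])
print("Partial dependence for feature 0:", pd_result["average"][0].round(3))

# Single tree approximation: fit a shallow decision tree to the black box's
# predictions, yielding an approximate, easily visualised surrogate model.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(6)]))
```

A local explanation in the same spirit could be obtained by fitting a weighted linear model on perturbed samples around a single instance, which is the idea behind LIME-style local linear approximations.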

Velden, M. van de
hdl.handle.net/2105/45040
Econometrie
Erasmus School of Economics

Osinga, J.S. (2019, January 3). Uncovering the “black box”: A study on how to make machine learning techniques more interpretable in an application to loan default prediction. Econometrie. Retrieved from http://hdl.handle.net/2105/45040