In the field of Machine Learning, Stochastic Gradient Descent (SGD) is one of the most popular methods in use, because it is computationally cheap and scalable. However, standard SGD does not take differential privacy into account: a formal notion of the privacy loss an individual incurs when their private information is used to create a data product. In this paper we examine what happens when differential privacy is incorporated into the SGD algorithm, following Song et al. (2013). We compare the differentially private algorithms with their non-private counterparts and study the effect of the batch size on the obtained objective values. Moreover, we examine the differentially private algorithm under a different loss function. It turns out that the differentially private algorithms are very volatile compared to the non-private ones, but as the batch size grows, the difference in their performance largely disappears.
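The abstract above describes adding noise to mini-batch gradient updates. A minimal sketch of this idea, in the spirit of Song et al. (2013), is shown below; the logistic loss, regularization constant, and Laplace noise scale 2/(batch size × ε) are illustrative assumptions (the thesis itself may use different losses and parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_sgd(X, y, epsilon=1.0, batch_size=10, eta=0.1, n_epochs=20, lam=1e-3):
    """Hedged sketch of differentially private mini-batch SGD:
    the gradient of a regularized logistic loss is perturbed with
    per-coordinate Laplace noise before each update. Features are
    assumed to lie in the unit ball so gradients stay bounded."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_epochs):
        idx = rng.permutation(n)
        for start in range(0, n, batch_size):
            b = idx[start:start + batch_size]
            margin = y[b] * (X[b] @ w)
            # mini-batch gradient of logistic loss + L2 regularization
            grad = -(y[b] / (1.0 + np.exp(margin))) @ X[b] / len(b) + lam * w
            # Laplace noise; scale shrinks as the batch size grows,
            # which is why larger batches behave more like plain SGD
            noise = rng.laplace(scale=2.0 / (len(b) * epsilon), size=d)
            w -= eta * (grad + noise)
    return w
```

The noise scale is inversely proportional to the batch size, which illustrates the abstract's finding: with large batches the injected noise per update is small, so the private and non-private algorithms perform similarly.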

Birbil, S.I.
hdl.handle.net/2105/43905
Econometrie
Erasmus School of Economics

Wessels, W.P.C. (2018, November 7). Stochastic Gradient Descent with Differential Private Updates. Econometrie. Retrieved from http://hdl.handle.net/2105/43905