Long Short-Term Memory (LSTM) is a modern artificial neural network architecture for sequence learning that can handle long-term dependencies and detect hidden patterns in the data. Although it has seen little use in finance beyond some very recent attempts, it is well suited to financial time series prediction. In this paper, LSTM is compared to a set of traditional econometric models in forecasting out-of-sample realized variances of the S&P 500 from 2010 to 2019. The results show that the performance of the LSTM models is very sensitive to the choice of hyperparameters. Overall, ARMA and GJR-GARCH produced the best forecasts in terms of mean squared error (MSE) in the two separate forecasting intervals within 2010-2019, while LSTM underperformed all models in both periods. Different hyperparameter combinations should be explored for LSTM to compete with the econometric models.
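
The comparison described above can be illustrated with a minimal sketch of this kind of out-of-sample exercise: an LSTM trained on lagged realized variances and evaluated by MSE. The data file, column names, window length, and hyperparameters below are illustrative assumptions, not taken from the thesis.

```python
# Hypothetical sketch: one-step-ahead realized variance forecasting with an LSTM,
# evaluated out of sample by mean squared error (MSE).
import numpy as np
import pandas as pd
import tensorflow as tf

WINDOW = 22  # assumed lookback: roughly one trading month of lagged realized variances

def make_windows(series, window):
    """Turn a 1-D series into (samples, timesteps, 1) inputs and next-step targets."""
    x, y = [], []
    for t in range(window, len(series)):
        x.append(series[t - window:t])
        y.append(series[t])
    x = np.array(x, dtype="float32")[..., np.newaxis]
    return x, np.array(y, dtype="float32")

# rv.csv is assumed to hold daily realized variances of the S&P 500 (columns: date, rv).
rv = pd.read_csv("rv.csv", parse_dates=["date"], index_col="date")["rv"].to_numpy()
split = int(0.8 * len(rv))                      # simple train / out-of-sample split
x_train, y_train = make_windows(rv[:split], WINDOW)
x_test, y_test = make_windows(rv[split - WINDOW:], WINDOW)

# A small single-layer LSTM; the thesis finds results are sensitive to such choices.
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(WINDOW, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x_train, y_train, epochs=50, batch_size=32, verbose=0)

lstm_mse = np.mean((model.predict(x_test, verbose=0).ravel() - y_test) ** 2)
print(f"LSTM out-of-sample MSE: {lstm_mse:.6e}")
```

An econometric baseline such as GJR-GARCH could, for example, be fit on daily returns with the arch package (order p=1, o=1, q=1) and its conditional variance forecasts scored by MSE over the same out-of-sample window.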

Quaedvlieg, R.
hdl.handle.net/2105/49832
Business Economics
Erasmus School of Economics

Ogus, D. (2019, August). Long Short-Term Memory: Can Artificial Neural Networks beat Econometric Models? Business Economics. Retrieved from http://hdl.handle.net/2105/49832