Research Article

LSTM Hyperparameters optimization with Hparam parameters for Bitcoin Price Prediction

Year 2023, Volume: 6, Issue: 1, 1-9, 30.04.2023
https://doi.org/10.35377/saucis...1172027

Abstract

Machine learning and deep learning algorithms produce very different results with different choices of hyperparameter values, and hyperparameters require optimization because no single setting works well for every problem. In this paper, eight hyperparameters of the Long Short-Term Memory (LSTM) network (go_backwards, epochs, batch size, dropout, activation function, optimizer, learning rate, and number of layers) were examined on daily and hourly Bitcoin datasets. The effect of each parameter on the results for the daily dataset was evaluated and explained. These parameters were examined with the HParams feature of TensorBoard. As a result, examining all combinations of parameters with HParams produced the best test Mean Square Error (MSE) values: 0.000043633 for the hourly dataset and 0.00073843 for the daily dataset. Both datasets produced better results with the tanh activation function. Finally, when the results are interpreted, the daily dataset produces better results with a small learning rate and small dropout values, whereas the hourly dataset produces better results with a large learning rate and large dropout values.
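
For readers who want to reproduce this kind of sweep, the sketch below shows how an LSTM hyperparameter grid can be logged with TensorBoard's HParams plugin and scored by test MSE. It is a minimal illustration, not the authors' code: the value grids, the fixed settings, the windowed train/test arrays (x_train, y_train, x_test, y_test), the run directory logs/hparam_tuning, and the helper names train_and_score and run_sweep are assumptions made for the example.

import itertools
import tensorflow as tf
from tensorboard.plugins.hparams import api as hp

# Hyperparameter domains (illustrative values, not the paper's exact grid).
HP_DROPOUT = hp.HParam("dropout", hp.Discrete([0.0, 0.2, 0.4]))
HP_LR = hp.HParam("learning_rate", hp.Discrete([1e-4, 1e-3, 1e-2]))
HP_ACTIVATION = hp.HParam("activation", hp.Discrete(["tanh", "relu"]))
HP_GO_BACKWARDS = hp.HParam("go_backwards", hp.Discrete([False, True]))
METRIC_MSE = "test_mse"

def train_and_score(hparams, x_train, y_train, x_test, y_test):
    # One trial: build, train, and return the test MSE for this combination.
    # Epochs, batch size, optimizer, and layer count were also swept in the
    # paper; they are fixed here to keep the example short.
    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(64,
                             activation=hparams[HP_ACTIVATION],
                             dropout=hparams[HP_DROPOUT],
                             go_backwards=hparams[HP_GO_BACKWARDS],
                             input_shape=x_train.shape[1:]),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(hparams[HP_LR]), loss="mse")
    model.fit(x_train, y_train, epochs=50, batch_size=32, verbose=0)
    return model.evaluate(x_test, y_test, verbose=0)

def run_sweep(data):
    # Try every combination and log it so TensorBoard's HPARAMS tab can rank them.
    domains = [HP_DROPOUT, HP_LR, HP_ACTIVATION, HP_GO_BACKWARDS]
    grids = [d.domain.values for d in domains]
    for i, values in enumerate(itertools.product(*grids)):
        hparams = dict(zip(domains, values))
        with tf.summary.create_file_writer(f"logs/hparam_tuning/run-{i}").as_default():
            hp.hparams(hparams)  # record this trial's hyperparameter settings
            mse = train_and_score(hparams, *data)
            tf.summary.scalar(METRIC_MSE, mse, step=1)

After the sweep, running tensorboard --logdir logs/hparam_tuning shows every combination side by side, which is how per-parameter effects like those reported above can be read off.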

References

  • [1] D. Choi, C. J. Shallue, Z. Nado, J. Lee, C. J. Maddison and G. E. Dahl, "On Empirical Comparisons of Optimizers for Deep Learning," Computing Research Repository (CoRR), vol. abs/1910.05446, Oct. 2019.
  • [2] B. Nakisa, M. N. Rastgoo, A. Rakotonirainy, F. Maire and V. Chandran, "Long Short Term Memory Hyperparameter Optimization for a Neural Network Based Emotion Recognition Framework," IEEE Access, vol. 6, pp. 49325–49338, Sept. 2018.
  • [3] J. Bergstra, R. Bardenet, Y. Bengio and B. Kégl, "Algorithms for Hyper-Parameter Optimization," Advances in Neural Information Processing Systems 24 (NIPS 2011), 2011.
  • [4] R. K. Rathore, D. Mishra, P. S. Mehra, O. Pal, A. S. Hashim, A. Shapi'i, T. Ciano and M. Shutaywi, "Real-world model for bitcoin price prediction," Information Processing and Management, vol. 59, no. 4, 2022.
  • [5] T. Shintate and L. Pichl, "Trend Prediction Classification for High Frequency Bitcoin Time Series with Deep Learning," Journal of Risk and Financial Management, vol. 12, no. 1, 2019.
  • [6] I. S. Kervancı and M. F. Akay, "Review on Bitcoin Price Prediction Using Machine Learning and Statistical Methods," Sakarya University Journal of Computer and Information Sciences, vol. 3, no. 3, pp. 272–282, 2020.
  • [7] J. Michańków, P. Sakowski and R. Ślepaczuk, "LSTM in Algorithmic Investment Strategies on BTC and S&P500 Index," Sensors, vol. 22, no. 3, 2022.
  • [8] F. Hutter, H. Hoos and K. Leyton-Brown, "An Efficient Approach for Assessing Hyperparameter Importance," Proceedings of the 31st International Conference on Machine Learning, 2014.
  • [9] N. Reimers and I. Gurevych, "Optimal Hyperparameters for Deep LSTM-Networks for Sequence Labeling Tasks," EMNLP 2017, 2017.
  • [10] A. U. Rehman, A. K. Malik, B. Raza and W. Ali, "A Hybrid CNN-LSTM Model for Improving Accuracy of Movie Reviews Sentiment Analysis," Multimedia Tools and Applications, vol. 78, pp. 26597–26613, June 2019.
  • [11] S. Ioffe and C. Szegedy, "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift," Proceedings of the 32nd International Conference on Machine Learning, Lille, France, 2015.
  • [12] A. Farzad, H. Mashayekhi and H. Hassanpour, "A comparative performance analysis of different activation functions in LSTM networks for classification," Neural Computing and Applications, vol. 31, pp. 2507–2521, 2019.
  • [13] Q. V. Le, J. Ngiam, A. Coates, A. Lahiri, B. Prochnow and A. Y. Ng, "On Optimization Methods for Deep Learning," ICML, 2011.
  • [14] D. P. Kingma and J. Ba, "Adam: A Method for Stochastic Optimization," 3rd International Conference on Learning Representations (ICLR), San Diego, 2015.
  • [15] J. Duchi, E. Hazan and Y. Singer, "Adaptive Subgradient Methods for Online Learning and Stochastic Optimization," Journal of Machine Learning Research, vol. 12, pp. 2121–2159, 2011.
  • [16] G. Hinton, N. Srivastava and K. Swersky, "Lecture 6.5 RMSProp: Divide the Gradient by a Running Average of Its Recent Magnitude," University of Toronto, 2012.
  • [17] S. Merity, N. S. Keskar and R. Socher, "Regularizing and Optimizing LSTM Language Models," ICLR 2018, Vancouver, BC, Canada, 2018.
  • [18] I. Sutskever, J. Martens, G. Dahl and G. Hinton, "On the Importance of Initialization and Momentum in Deep Learning," Proceedings of the 30th International Conference on Machine Learning, PMLR, 2013.
  • [19] Keras, "Recurrent Layers," Feb. 2020. [Online]. Available: https://keras.io/layers/recurrent/.
  • [20] L. N. Smith, "Cyclical Learning Rates for Training Neural Networks," 2017 IEEE Winter Conference on Applications of Computer Vision (WACV), Santa Rosa, CA, USA, 2017.
  • [21] N. Golmant, N. Vemuri, Z. Yao, V. Feinberg, A. Gholami, K. Rothauge, M. W. Mahoney and J. Gonzalez, "On the Computational Inefficiency of Large Batch Sizes for Stochastic Gradient Descent," arXiv preprint arXiv:1811.12941, Nov. 2018.
  • [22] S. Siami-Namini and A. Siami Namin, "Forecasting Economic and Financial Time Series: ARIMA vs. LSTM," arXiv preprint arXiv:1803.06386, 2018.
  • [23] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever and R. Salakhutdinov, "Dropout: A Simple Way to Prevent Neural Networks from Overfitting," Journal of Machine Learning Research, vol. 15, pp. 1929–1958, 2014.
  • [24] "A Comparison Between Shallow and Deep Architecture Classifiers on Small Dataset," 8th ICITEE, Yogyakarta, Indonesia, 2016.
  • [25] S. L. Smith, P. J. Kindermans, C. Ying and Q. V. Le, "Don't decay the learning rate, increase the batch size," ICLR (International Conference on Learning Representations), Vancouver, 2018.
  • [26] N. Leibowitz, B. Baum, G. Enden and A. Karniel, "The exponential learning equation as a function of successful trials results in sigmoid performance," Journal of Mathematical Psychology, vol. 54, no. 3, pp. 338–340, 2010.
  • [27] K. Struga and O. Qirici, "Bitcoin Price Prediction with Neural Networks," RTA-CSIT 2018, 2018.

Details

Primary Language English
Subjects Artificial Intelligence
Section Articles
Authors

I. Sibel Kervancı 0000-0001-5547-1860

Fatih Akay 0000-0003-0780-0679

Early View Date April 28, 2023
Publication Date April 30, 2023
Submission Date September 7, 2022
Acceptance Date January 2, 2023
Published Issue Year 2023, Volume: 6, Issue: 1

How to Cite

IEEE I. Kervancı and F. Akay, "LSTM Hyperparameters optimization with Hparam parameters for Bitcoin Price Prediction," SAUCIS, vol. 6, no. 1, pp. 1–9, 2023, doi: 10.35377/saucis...1172027.

The papers in this journal are licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.