Machine learning and deep learning algorithms produce very different results with different values of their hyperparameters. These hyperparameters require optimization because no single setting is suitable for all problems. In this paper, eight hyperparameters of the Long Short-Term Memory (LSTM) network (go-backward, epoch, batch size, dropout, activation function, optimizer, learning rate, and number of layers) were examined on daily and hourly Bitcoin datasets. The effect of each parameter on the daily dataset's results was evaluated and explained. These parameters were examined with the HParams dashboard of TensorBoard. As a result, examining all combinations of parameters with HParams produced the best test Mean Square Error (MSE) values: 0.000043633 on the hourly dataset and 0.00073843 on the daily dataset. Both datasets produced better results with the tanh activation function. Finally, when the results are interpreted, the daily dataset produces better results with a small learning rate and small dropout values, whereas the hourly dataset produces better results with a large learning rate and large dropout values.
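The exhaustive search over hyperparameter combinations described above can be sketched as a simple grid search. This is a minimal illustration only: the grid values and the `evaluate` function below are hypothetical placeholders (the paper trains an LSTM and reports test MSE; here a toy scoring formula stands in so the sketch runs).

```python
import itertools

# Hypothetical grid over a few of the paper's eight hyperparameters;
# the values are illustrative, not the paper's actual search space.
grid = {
    "activation": ["tanh", "relu"],
    "dropout": [0.1, 0.5],
    "learning_rate": [0.001, 0.01],
}

def evaluate(config):
    """Stand-in for training an LSTM and returning its test MSE.
    A real experiment would fit the model on the Bitcoin dataset here."""
    return config["dropout"] * config["learning_rate"]  # toy score

def grid_search(grid):
    """Try every combination of hyperparameter values and keep the
    configuration with the lowest (toy) test MSE."""
    keys = list(grid)
    best_config, best_mse = None, float("inf")
    for values in itertools.product(*(grid[k] for k in keys)):
        config = dict(zip(keys, values))
        mse = evaluate(config)
        if mse < best_mse:
            best_config, best_mse = config, mse
    return best_config, best_mse

best_config, best_mse = grid_search(grid)
print(best_config, best_mse)
```

In the paper this enumeration is driven through TensorBoard's HParams dashboard, which additionally logs each run's hyperparameters and metrics so the combinations can be compared visually.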
| Primary Language | English |
| --- | --- |
| Subjects | Artificial Intelligence |
| Journal Section | Articles |
| Authors | |
| Early Pub Date | April 28, 2023 |
| Publication Date | April 30, 2023 |
| Submission Date | September 7, 2022 |
| Acceptance Date | January 2, 2023 |
| Published in Issue | Year 2023 |
The papers in this journal are licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.