Research Article

Evaluating Feature Selection Algorithms for Machine Learning-Based Musical Instrument Identification in Monophonic Recordings

Year 2024, Volume 7, Issue 2, 289–301, 31.08.2024
https://doi.org/10.35377/saucis...1516717

Abstract

Musical instrument identification (MII) has been studied as a subfield of Music Information Retrieval (MIR). Conventional MII models are built on hierarchical taxonomies of musical instrument families. However, for MII models to be useful in music production, they should instead be organized around the arrangement-based functions that instruments serve in a musical style. This study investigates how the performance of machine learning-based classification algorithms for the guitar, bass guitar, and drum classes changes under different feature selection algorithms, considering a popular music production scenario. To determine the effect of feature statistics on model performance, the Minimum Redundancy Maximum Relevance (mRMR), Chi-square (Chi2), ReliefF, Analysis of Variance (ANOVA), and Kruskal-Wallis feature selection algorithms were used. Ultimately, a wide neural network (WNN) achieved the best classification accuracy (91.4%) when using the top 20 statistics suggested by the mRMR and ReliefF feature selection algorithms.
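The pipeline the abstract describes — score each feature statistic with a filter-type selection algorithm, keep the top 20, and train a classifier on the reduced set — can be sketched as below. This is a minimal illustration, not the authors' implementation (the "wide neural network" terminology suggests a MATLAB Classification Learner preset); it substitutes scikit-learn's ANOVA F-test (f_classif) for the ranking step, and the feature matrix X, labels y, and network width are placeholder assumptions. mRMR and ReliefF are not bundled with scikit-learn but are available in third-party packages such as skrebate.

```python
# Sketch: filter-based feature selection followed by a "wide" classifier.
# X would hold per-recording feature statistics (e.g., MFCC means/variances);
# y holds the three arrangement classes: guitar, bass guitar, drums.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 60))      # placeholder: 300 clips x 60 statistics
y = rng.integers(0, 3, size=300)    # placeholder: 3 instrument classes

pipe = make_pipeline(
    SelectKBest(f_classif, k=20),             # keep the 20 best-ranked stats
    MLPClassifier(hidden_layer_sizes=(512,),  # one wide hidden layer
                  max_iter=500, random_state=0),
)
print("mean CV accuracy:", cross_val_score(pipe, X, y, cv=5).mean())
```

To compare selectors as the study does, one would swap f_classif for chi2 (on non-negative features), a Kruskal-Wallis scorer, or an mRMR/ReliefF ranking, and re-run the same cross-validation.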

References

  • [1] A. Ghosh, A. Pal, D. Sil, and S. Palit, “Music Instrument Identification Based on a 2-D Representation,” in 3rd International Conference on Electrical, Electronics, Communication, Computer Technologies and Optimization Techniques, ICEECCOT 2018, Institute of Electrical and Electronics Engineers Inc., Dec. 2018, pp. 509–513. doi: 10.1109/ICEECCOT43722.2018.9001486.
  • [2] U. Shukla, U. Tiwari, V. Chawla, and S. Tiwari, “Instrument classification using image based transfer learning,” in Proceedings of the 2020 International Conference on Computing, Communication and Security, ICCCS 2020, Institute of Electrical and Electronics Engineers Inc., Oct. 2020. doi: 10.1109/ICCCS49678.2020.9277366.
  • [3] I. Kaminskyj and A. Materka, “Automatic source identification of monophonic musical instrument sounds,” in Proceedings of the Australian and New Zealand Conference on Intelligent Information Systems, 1995.
  • [4] I. Kaminskyj and P. Voumard, “Enhanced automatic source identification of monophonic musical instrument sounds,” in Proceedings of the Australian and New Zealand Conference on Intelligent Information Systems, Nov. 1996, pp. 76–79.
  • [5] K. D. Martin and Y. E. Kim, “Musical instrument identification: A pattern-recognition approach,” presented at the 136th Meeting of the Acoustical Society of America, New York, 1998.
  • [6] P. Herrera-Boyer, G. Peeters, and S. Dubnov, “Automatic classification of musical instrument sounds,” Journal of New Music Research, vol. 32, no. 1, pp. 3–21, 2003.
  • [7] S. K. Banchhor and A. Khan, “Musical Instrument Recognition using Spectrogram and Autocorrelation,” Soft comput, no. 1, pp. 1–4, 2012.
  • [8] H. Mukherjee, S. M. Obaidullah, S. Phadikar, and K. Roy, “SMIL - A Musical Instrument Identification System,” Springer, Singapore, 2017, pp. 129–140. doi: 10.1007/978-981-10-6427-2_11.
  • [9] Y. Han, J. Kim, and K. Lee, “Deep Convolutional Neural Networks for Predominant Instrument Recognition in Polyphonic Music,” IEEE/ACM Trans Audio Speech Lang Process, vol. 25, no. 1, pp. 208–221, Jan. 2017, doi: 10.1109/TASLP.2016.2632307.
  • [10] T. Kitahara, M. Goto, K. Komatani, T. Ogata, and H. G. Okuno, “Instrument identification in polyphonic music: Feature weighting with mixed sounds, pitch-dependent timbre modeling, and use of musical context,” ISMIR 2005 - 6th International Conference on Music Information Retrieval, no. January, pp. 558–563, 2005.
  • [11] S.-F. Chang, T. Sikora, and A. Puri, “Overview of the MPEG-7 standard,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 11, no. 6, pp. 688–695, 2001.
  • [12] M. R. Bai and C. Chen, “Intelligent Preprocessing and Classification of Audio Signals.”
  • [13] P. Wei, F. He, L. Li, and J. Li, “Research on sound classification based on SVM,” Neural Comput Appl, vol. 32, no. 6, pp. 1593–1607, Mar. 2020, doi: 10.1007/s00521-019-04182-0.
  • [14] F. Alías, J. C. Socoró, and X. Sevillano, “A review of physical and perceptual feature extraction techniques for speech, music and environmental sounds,” Applied Sciences, vol. 6, no. 5, 2016, doi: 10.3390/app6050143.
  • [15] J. D. Deng, C. Simmermacher, and S. Cranefield, “A study on feature analysis for musical instrument classification,” IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 38, no. 2, pp. 429–438, 2008, doi: 10.1109/TSMCB.2007.913394.
  • [16] A. Aljanaki and M. Soleymani, “A data-driven approach to mid-level perceptual musical feature modeling,” Jun. 2018, [Online]. Available: http://arxiv.org/abs/1806.04903
  • [17] J. L. Fernández-Martínez and Z. Fernández-Muñiz, “The curse of dimensionality in inverse problems,” J Comput Appl Math, vol. 369, 2020, doi: 10.1016/j.cam.2019.112571.
  • [18] J. Osmalskyj, M. Van Droogenbroeck, and J. J. Embrechts, “Performances of low-level audio classifiers for large-scale music similarity,” in International Conference on Systems, Signals, and Image Processing, 2014, pp. 91–94.
  • [19] Z. Fu, G. Lu, K. M. Ting, and D. Zhang, “On feature combination for music classification,” in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2010, pp. 453–462. doi: 10.1007/978-3-642-14980-1_44.
  • [20] M. Chmulik, R. Jarina, M. Kuba, and E. Lieskovska, “Continuous music emotion recognition using selected audio features,” in 2019 42nd International Conference on Telecommunications and Signal Processing, TSP 2019, 2019. doi: 10.1109/TSP.2019.8768806.
  • [21] J. Grekow, “Audio features dedicated to the detection of arousal and valence in music recordings,” in Proceedings - 2017 IEEE International Conference on INnovations in Intelligent SysTems and Applications, INISTA 2017, 2017, pp. 40–44. doi: 10.1109/INISTA.2017.8001129.
  • [22] J. Mitra and D. Saha, “An Efficient Feature Selection in Classification of Audio Files,” pp. 29–38, 2014, doi: 10.5121/csit.2014.4303.
  • [23] M. Liu and C. Wan, “Feature selection for automatic classification of musical instrument sounds,” Proceedings of the ACM International Conference on Digital Libraries, pp. 247–248, 2001, doi: 10.1145/379437.379663.
  • [24] S. R. Gulhane, S. S. Badhe, and S. D. Shirbahadurkar, “Cepstral (MFCC) Feature and Spectral (Timbral) Features Analysis for Musical Instrument Sounds,” Proceedings - 2018 IEEE Global Conference on Wireless Computing and Networking, GCWCN 2018, pp. 109–113, 2018, doi: 10.1109/GCWCN.2018.8668628.
  • [25] P. S. Jadhav, “Classification of Musical Instruments Sounds by Using MFCC and Timbral Audio Descriptors,” 2015.
  • [26] J. Lee, T. Kim, J. Park, and J. Nam, “Raw Waveform-based Audio Classification Using Sample-level CNN Architectures,” in NIPS 2017 Workshop on Machine Learning for Audio Signal Processing (ML4Audio), 2017.
  • [27] K. Avramidis, A. Kratimenos, C. Garoufis, A. Zlatintsi, and P. Maragos, “Deep convolutional and recurrent networks for polyphonic instrument classification from monophonic raw audio waveforms,” ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings, vol. 2021-June, pp. 3010–3014, 2021, doi: 10.1109/ICASSP39728.2021.9413479.
  • [28] T. M. Hehn, J. F. P. Kooij, and F. A. Hamprecht, “End-to-End Learning of Decision Trees and Forests,” Int J Comput Vis, vol. 128, no. 4, 2020, doi: 10.1007/s11263-019-01237-6.
  • [29] Z. Çetinkaya and F. Horasan, “Decision Trees in Large Data Sets,” Uluslararası Mühendislik Araştırma ve Geliştirme Dergisi, vol. 13, no. 1, 2021, doi: 10.29137/umagd.763490.
  • [30] A. Araveeporn, “Comparison of Logistic Regression and Discriminant Analysis for Classification of Multicollinearity Data,” WSEAS Trans Math, vol. 22, 2023, doi: 10.37394/23206.2023.22.15.
  • [31] A. Saini, “Guide on Support Vector Machine (SVM) Algorithm,” Analytics Vidhya, 2024.
  • [32] S. Uddin, I. Haque, H. Lu, M. A. Moni, and E. Gide, “Comparative performance analysis of K-nearest neighbour (KNN) algorithm and its different variants for disease prediction,” Sci Rep, vol. 12, no. 1, Dec. 2022, doi: 10.1038/S41598-022-10358-X.
  • [33] R. A. Rizal, N. O. Purba, L. A. Siregar, K. P. Sinaga, and N. Azizah, “Analysis of Tuberculosis (TB) on X-ray Image Using SURF Feature Extraction and the K-Nearest Neighbor (KNN) Classification Method,” Jaict, vol. 5, no. 2, p. 9, Oct. 2020, doi: 10.32497/JAICT.V5I2.1979.
  • [34] B. Akalin, Ü. Veranyurt, and O. Veranyurt, “Classification of Individuals at Risk of Heart Disease Using Machine Learning,” Cumhuriyet Medical Journal, 2020, doi: 10.7197/cmj.vi.742161.
  • [35] X. Peng, R. Chen, K. Yu, F. Ye, and W. Xue, “An improved weighted k-nearest neighbor algorithm for indoor localization,” Electronics (Switzerland), vol. 9, no. 12, 2020, doi: 10.3390/electronics9122117.
  • [36] H. K. Karthikeya, K. Sudarshan, and D. S. Shetty, “Prediction of Agricultural Crops using KNN Algorithm,” Int J Innov Sci Res Technol, vol. 5, no. 5, 2020.
  • [37] R. Thiruvengatanadhan, “Speech/Music Classification using MFCC and KNN,” 2017.
  • [38] X. Mu, “Implementation of Music Genre Classifier Using KNN Algorithm,” Highlights in Science, Engineering and Technology, vol. 34, 2023, doi: 10.54097/hset.v34i.5439.
  • [39] I. D. Mienye and Y. Sun, “A Survey of Ensemble Learning: Concepts, Algorithms, Applications, and Prospects,” IEEE Access, vol. 10. 2022. doi: 10.1109/ACCESS.2022.3207287.
  • [40] A. Verikas, A. Gelzinis, and M. Bacauskiene, “Mining data with random forests: A survey and results of new tests,” Pattern Recognit, vol. 44, no. 2, 2011, doi: 10.1016/j.patcog.2010.08.011.
  • [41] Y. Freund and R. E. Schapire, “A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting,” J Comput Syst Sci, vol. 55, no. 1, 1997, doi: 10.1006/jcss.1997.1504.
  • [42] R. E. Schapire and Y. Singer, “Improved boosting algorithms using confidence-rated predictions,” Mach Learn, vol. 37, no. 3, 1999, doi: 10.1023/A:1007614523901.
  • [43] J. H. Friedman, “Greedy function approximation: A gradient boosting machine,” Ann Stat, vol. 29, no. 5, 2001, doi: 10.1214/aos/1013203451.
  • [44] S. Joshi, A. Gera, and S. Bhadra, “Neural Networks and Their Applications,” in Evolving Networking Technologies: Developments and Future Directions, 2023. doi: 10.1002/9781119836667.ch13.
  • [45] G. Alfonso and D. R. Ramirez, “Neural networks in narrow stock markets,” Symmetry (Basel), vol. 12, no. 8, 2020, doi: 10.3390/SYM12081272.
  • [46] M. Saglam, C. Spataru, and O. A. Karaman, “Forecasting Electricity Demand in Turkey Using Optimization and Machine Learning Algorithms,” Energies (Basel), vol. 16, no. 11, 2023, doi: 10.3390/en16114499.
  • [47] J. Chen, K. Li, K. Bilal, X. Zhou, K. Li, and P. S. Yu, “A Bi-layered parallel training architecture for large-scale convolutional neural networks,” IEEE Transactions on Parallel and Distributed Systems, vol. 30, no. 5, 2019, doi: 10.1109/TPDS.2018.2877359.
  • [48] J. Xi, O. K. Ersoy, J. Fang, T. Wu, X. Wei, and C. Zhao, “Parallel Multistage Wide Neural Network,” IEEE Trans Neural Netw Learn Syst, vol. 34, no. 8, 2023, doi: 10.1109/TNNLS.2021.3120331.
  • [49] A. Radhakrishnan, M. Belkin, and C. Uhler, “Wide and deep neural networks achieve consistency for classification,” Proc Natl Acad Sci U S A, vol. 120, no. 14, 2023, doi: 10.1073/pnas.2208779120.
  • [50] X. Tang, Q. He, X. Gu, C. Li, H. Zhang, and J. Lu, “A Novel Bearing Fault Diagnosis Method Based on GL-mRMR-SVM,” Processes, vol. 8, no. 7, Jul. 2020, doi: 10.3390/PR8070784.
  • [51] H. Alshamlan, G. Badr, and Y. Alohali, “mRMR-ABC: A Hybrid Gene Selection Algorithm for Cancer Classification Using Microarray Gene Expression Profiling,” Biomed Res Int, vol. 2015, 2015, doi: 10.1155/2015/604910.
  • [52] H. Liu and R. Setiono, “Chi2: feature selection and discretization of numeric attributes,” Proceedings of the International Conference on Tools with Artificial Intelligence, pp. 388–391, 1995, doi: 10.1109/tai.1995.479783.
  • [53] T. D. Diwan et al., “Feature Entropy Estimation (FEE) for Malicious IoT Traffic and Detection Using Machine Learning,” Mobile Information Systems, vol. 2021, 2021, doi: 10.1155/2021/8091363.
  • [54] U. I. Larasati, M. A. Muslim, R. Arifudin, and A. Alamsyah, “Improve the Accuracy of Support Vector Machine Using Chi Square Statistic and Term Frequency Inverse Document Frequency on Movie Review Sentiment Analysis,” Scientific Journal of Informatics, vol. 6, no. 1, pp. 138–149, May 2019, doi: 10.15294/SJI.V6I1.14244.
  • [55] N. Yusliani, S. A. Q. Aruda, M. D. Marieska, D. M. Saputra, and A. Abdiansah, “The effect of Chi-Square Feature Selection on Question Classification using Multinomial Naïve Bayes,” Sinkron, vol. 7, no. 4, pp. 2430–2436, Oct. 2022, doi: 10.33395/SINKRON.V7I4.11788.
  • [56] X. Gong, R. Yuan, H. Qian, Y. Chen, and A. G. Cohn, “Emotion Regulation Music Recommendation Based on Feature Selection,” Frontiers in Artificial Intelligence and Applications, vol. 337, pp. 486–495, Sep. 2021, doi: 10.3233/FAIA210047.
  • [57] A. Tripathi, N. Bhoj, M. Khari, and B. Pandey, “Feature Selection and Scaling for Random Forest Powered Malware Detection System,” Research Square preprint, doi: 10.21203/RS.3.RS-778333/V1.
  • [58] K. Kira and L. A. Rendell, “The Feature Selection Problem: Traditional Methods and a New Algorithm,” in AAAI’92: Proceedings of the tenth national conference on Artificial intelligence, 1992, pp. 129–134.
  • [59] C. Zhang, M. Ye, L. Lei, and Y. Qian, “Feature Selection for Cross-Scene Hyperspectral Image Classification Using Cross-Domain I-ReliefF,” IEEE J Sel Top Appl Earth Obs Remote Sens, vol. 14, pp. 5932–5949, 2021, doi: 10.1109/JSTARS.2021.3086151.
  • [60] Y. Zhou, R. Zhang, S. Wang, and F. Wang, “Feature Selection Method Based on High-Resolution Remote Sensing Images and the Effect of Sensitive Features on Classification Accuracy,” Sensors, vol. 18, no. 7, Jul. 2018, doi: 10.3390/S18072013.
  • [61] L. Sun, X. Kong, J. Xu, Z. Xue, R. Zhai, and S. Zhang, “A Hybrid Gene Selection Method Based on ReliefF and Ant Colony Optimization Algorithm for Tumor Classification,” Sci Rep, vol. 9, no. 1, Dec. 2019, doi: 10.1038/S41598-019-45223-X.
  • [62] H. Ding and L. Huang, “Extraction of soybean planting areas based on multi-temporal Sentinel-1/2 data,” Third International Conference on Computer Vision and Pattern Analysis (ICCPA 2023), p. 8, Aug. 2023, doi: 10.1117/12.2684169.
  • [63] R. Togo et al., “Cardiac sarcoidosis classification with deep convolutional neural network-based features using polar maps,” Comput Biol Med, vol. 104, pp. 81–86, Jan. 2019, doi: 10.1016/J.COMPBIOMED.2018.11.008.
  • [64] C. S. Greene, N. M. Penrod, J. Kiralis, and J. H. Moore, “Spatially Uniform ReliefF (SURF) for computationally-efficient filtering of gene-gene interactions,” BioData Min, vol. 2, no. 1, 2009, doi: 10.1186/1756-0381-2-5.
  • [65] H. Nasiri and S. A. Alavi, “A Novel Framework Based on Deep Learning and ANOVA Feature Selection Method for Diagnosis of COVID-19 Cases from Chest X-Ray Images,” Comput Intell Neurosci, vol. 2022, 2022, doi: 10.1155/2022/4694567.
  • [66] M. O. Arowolo, S. O. Abdulsalam, R. M. Isiaka, and K. A. Gbolagade, “A Hybrid Dimensionality Reduction Model for Classification of Microarray Dataset,” International Journal of Information Technology and Computer Science, vol. 9, no. 11, pp. 57–63, Nov. 2017, doi: 10.5815/IJITCS.2017.11.06.
  • [67] G. F. Dong, L. Zheng, S. H. Huang, J. Gao, and Y. C. Zuo, “Amino Acid Reduction Can Help to Improve the Identification of Antimicrobial Peptides and Their Functional Activities,” Front Genet, vol. 12, Apr. 2021, doi: 10.3389/FGENE.2021.669328.
  • [68] B. Thakur, N. Kumar, and G. Gupta, “Machine learning techniques with ANOVA for the prediction of breast cancer,” International Journal of Advanced Technology and Engineering Exploration, vol. 9, no. 87, pp. 232–245, Feb. 2022, doi: 10.19101/IJATEE.2021.874555.
  • [69] F. A. Putra, S. Mandala, and M. Pramudyo, “A Study of Feature Selection Method to Detect Coronary Heart Disease (CHD) on Photoplethysmography (PPG) Signals,” Building of Informatics, Technology and Science (BITS), vol. 4, no. 2, Sep. 2022, doi: 10.47065/BITS.V4I2.2259.
  • [70] S. Suresh and V. P. S. Naidu, “Mahalanobis-ANOVA criterion for optimum feature subset selection in multi-class planetary gear fault diagnosis,” Journal of Vibration and Control, vol. 28, no. 21–22, pp. 3257–3268, Nov. 2022, doi: 10.1177/10775463211029153.
  • [71] M. J. Siraj, T. Ahmad, and R. M. Ijtihadie, “Analyzing ANOVA F-test and Sequential Feature Selection for Intrusion Detection Systems,” International Journal of Advances in Soft Computing and Its Applications, vol. 14, no. 2, pp. 185–194, 2022, doi: 10.15849/IJASCA.220720.13.
  • [72] P. E. McKight and J. Najab, “Kruskal-Wallis Test,” in The Corsini Encyclopedia of Psychology, 2010, doi: 10.1002/9780470479216.CORPSY0491.

Details

Primary Language English
Subjects Software Engineering (Other)
Journal Section Articles
Authors

İsmet Emre Yücel (ORCID: 0000-0001-7018-3349)

Ulaş Yurtsever (ORCID: 0000-0003-3438-6872)

Early Pub Date August 27, 2024
Publication Date August 31, 2024
Submission Date July 15, 2024
Acceptance Date August 21, 2024
Published in Issue Year 2024

Cite

IEEE: İ. E. Yücel and U. Yurtsever, “Evaluating Feature Selection Algorithms for Machine Learning-Based Musical Instrument Identification in Monophonic Recordings,” SAUCIS, vol. 7, no. 2, pp. 289–301, 2024, doi: 10.35377/saucis...1516717.
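The same citation in BibTeX form, built from the metadata above (the entry key is arbitrary, and the journal name is expanded from the SAUCIS abbreviation):

```bibtex
@article{yucel2024evaluating,
  author  = {Y{\"u}cel, {\.I}smet Emre and Yurtsever, Ula{\c s}},
  title   = {Evaluating Feature Selection Algorithms for Machine
             Learning-Based Musical Instrument Identification in
             Monophonic Recordings},
  journal = {Sakarya University Journal of Computer and Information Sciences},
  volume  = {7},
  number  = {2},
  pages   = {289--301},
  year    = {2024},
  doi     = {10.35377/saucis...1516717}
}
```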

The papers in this journal are licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.