Research Article

Ontology-Based Generalized Zero-Shot Learning with Generative Networks

Year 2024, Volume: 10 Issue: 1, 183 - 192, 30.04.2024

Abstract

Zero-Shot Learning (ZSL) aims to classify images from categories that have no labeled examples during training, relying on examples from labeled categories together with auxiliary information. This auxiliary information, such as semantic attributes, textual descriptions, and word embeddings produced with Natural Language Processing (NLP) techniques, describes both the labeled and the unlabeled classes. Word embeddings derived from such attributes and descriptions are limited, however, when they capture the semantics of the categories insufficiently. In this paper, we present a study on Generalized Zero-Shot Learning (GZSL), a variant of ZSL, that integrates the rich semantics offered by an ontology. We enrich the semantic representation by coupling semantic attributes with ontology-based information. To synthesize visual features, we employ Variational Autoencoder (VAE) and Generative Adversarial Network (GAN) architectures together. We evaluate our approach on the AWA2 dataset and achieve improvements in GZSL performance.
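The approach outlined above conditions a hybrid VAE-GAN feature generator on ontology-enriched class semantics so that visual features can be synthesized for unseen classes. The following is a minimal sketch of such a conditional VAE-GAN, not the authors' implementation: the 2048-dimensional ResNet feature size, the combined attribute-plus-ontology embedding size, the layer widths, and all names are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): a conditional VAE-GAN feature
# generator for GZSL. Visual features and class embeddings are assumed to be
# 2048-d ResNet features and attribute vectors concatenated with an
# ontology-derived embedding; all sizes are illustrative.
import torch
import torch.nn as nn

FEAT_DIM = 2048    # visual feature size (assumption: ResNet-101 features)
SEM_DIM = 185      # assumption: 85 AWA2 attributes + 100-d ontology embedding
LATENT_DIM = 64

class Encoder(nn.Module):
    """Maps a (visual feature, class embedding) pair to a latent Gaussian."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(FEAT_DIM + SEM_DIM, 1024), nn.ReLU())
        self.mu = nn.Linear(1024, LATENT_DIM)
        self.logvar = nn.Linear(1024, LATENT_DIM)

    def forward(self, x, s):
        h = self.net(torch.cat([x, s], dim=1))
        return self.mu(h), self.logvar(h)

class Generator(nn.Module):
    """Decodes latent noise plus a class embedding into a synthetic feature."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + SEM_DIM, 1024), nn.ReLU(),
            nn.Linear(1024, FEAT_DIM), nn.ReLU())

    def forward(self, z, s):
        return self.net(torch.cat([z, s], dim=1))

class Discriminator(nn.Module):
    """Scores whether a visual feature is real for the given class embedding."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEAT_DIM + SEM_DIM, 1024), nn.ReLU(),
            nn.Linear(1024, 1))

    def forward(self, x, s):
        return self.net(torch.cat([x, s], dim=1))

def vae_loss(x, x_rec, mu, logvar):
    # Reconstruction error plus KL divergence, as in a standard VAE.
    rec = nn.functional.mse_loss(x_rec, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld

# Illustrative forward pass on random tensors standing in for real data.
enc, gen, dis = Encoder(), Generator(), Discriminator()
x = torch.randn(8, FEAT_DIM)   # visual features of seen-class images
s = torch.randn(8, SEM_DIM)    # ontology-enriched class embeddings
mu, logvar = enc(x, s)
z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization
x_rec = gen(z, s)
loss = vae_loss(x, x_rec, mu, logvar) - dis(x_rec, s).mean()  # plus adversarial term
```

In such a pipeline, the trained generator would synthesize features for unseen classes from their ontology-enriched embeddings at test time, and a classifier trained on real seen-class features plus the synthetic unseen-class features would perform the GZSL classification.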

References

  • [1] F. Lv, J. Zhang, G. Yang, L. Feng, Y. Yu, and L. Duan, “Learning cross-domain semantic-visual relationships for transductive zero-shot learning,” Pattern Recognit., vol. 141, p. 109591, Sep. 2023. doi:10.1016/j.patcog.2023.109591
  • [2] W. Alhoshan, A. Ferrari, and L. Zhao, “Zero-shot learning for requirements classification: An exploratory study,” Inf. Softw. Technol., vol. 159, p. 107202, Jul. 2023. doi:10.1016/j.infsof.2023.107202
  • [3] E. Çelik and T. Dalyan, “Unified benchmark for zero-shot Turkish text classification,” Inf. Process. Manag., vol. 60, no. 3, p. 103298, May 2023. doi:10.1016/j.ipm.2023.103298
  • [4] “Zero-shot stance detection via multi-perspective contrastive learning with unlabeled data,” [Online]. Available: https://www.sciencedirect.com/science/article/abs/pii/S0306457323000985. [Accessed: 07 Dec. 2023].
  • [5] X. Li et al., “A structure-enhanced generative adversarial network for knowledge graph zero-shot relational learning,” Inf. Sci., vol. 629, pp. 169–183, Jun. 2023. doi:10.1016/j.ins.2023.01.113
  • [6] J. Eronen, M. Ptaszynski, and F. Masui, “Zero-shot cross-lingual transfer language selection using linguistic similarity,” Inf. Process. Manag., vol. 60, no. 3, p. 103250, May 2023. doi:10.1016/j.ipm.2022.103250
  • [7] X. Liu, J. Gao, X. He, L. Deng, K. Duh, and Y.-Y. Wang, “Representation Learning Using Multi-Task Deep Neural Networks for Semantic Classification and Information Retrieval,” in Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Denver, Colorado: Association for Computational Linguistics, 2015, pp. 912–921. doi:10.3115/v1/N15-1092
  • [8] F. Al Kassar and F. Armetta, “Extracting Tags from Large Raw Texts Using End-to-End Memory Networks,” in Proceedings of the 2nd Workshop on Semantic Deep Learning (SemDeep-2), D. Gromann, T. Declerck, and G. Heigl, Eds., Montpellier, France: Association for Computational Linguistics, Sep. 2017, pp. 33–40. [Online]. Available: https://aclanthology.org/W17-7305. [Accessed: 04 Dec. 2023].
  • [9] G. Petrucci, C. Ghidini, and M. Rospocher, “Ontology Learning in the Deep,” in Knowledge Engineering and Knowledge Management (Lecture Notes in Computer Science, vol. 10024), E. Blomqvist, P. Ciancarini, F. Poggi, and F. Vitali, Eds., Cham: Springer International Publishing, 2016, pp. 480–495. doi:10.1007/978-3-319-49004-5_31
  • [10] D. Jurafsky and J. H. Martin, “Speech and Language Processing,” [Online]. Available: https://web.stanford.edu/~jurafsky/slp3/. [Accessed: 04 Dec. 2023].
  • [11] S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach, 3rd ed. Upper Saddle River, NJ: Pearson, 2009.
  • [12] “Semantic Web - W3C,” [Online]. Available: https://www.w3.org/standards/semanticweb/. [Accessed: 04 Dec. 2023].
  • [13] J. Hendler, T. Berners-Lee, and E. Miller, “Integrating applications on the semantic web,” J. Inst. Electr. Eng. Jpn., vol. 122, pp. 676–680, Jan. 2002. doi:10.1541/ieejjournal.122.676
  • [14] F. Pourpanah et al., “A Review of Generalized Zero-Shot Learning Methods,” IEEE Trans. Pattern Anal. Mach. Intell., pp. 1–20, 2022. doi:10.1109/TPAMI.2022.3191696
  • [15] “LearnOpenCV,” [Online]. Available: https://learnopencv.com/zero-shot-learning-an-introduction/. [Accessed: 04 Dec. 2023].
  • [16] C. Patrício and J. C. Neves, “Zero-shot face recognition: Improving the discriminability of visual face features using a Semantic-Guided Attention Model,” Expert Syst. Appl., vol. 211, p. 118635, Jan. 2023. doi:10.1016/j.eswa.2022.118635
  • [17] J. Wu, Y. Zhang, X. Zhao, and W. Gao, “A Generalized Zero-Shot Framework for Emotion Recognition from Body Gestures,” arXiv, Oct. 20, 2020. doi:10.48550/arXiv.2010.06362
  • [18] R. Gao et al., “Zero-VAE-GAN: Generating Unseen Features for Generalized and Transductive Zero-Shot Learning,” IEEE Trans. Image Process., vol. 29, pp. 3665–3680, Jan. 2020. doi:10.1109/TIP.2020.2964429
  • [19] S. Narayan, A. Gupta, F. S. Khan, C. G. M. Snoek, and L. Shao, “Latent Embedding Feedback and Discriminative Features for Zero-Shot Classification.” arXiv, Jul. 18, 2020. doi:10.48550/arXiv.2003.07833
  • [20] J. Bao, D. Chen, F. Wen, H. Li, and G. Hua, “CVAE-GAN: Fine-Grained Image Generation through Asymmetric Training.” arXiv, Oct. 12, 2017. doi:10.48550/arXiv.1703.10155
  • [21] Z. Han, Z. Fu, G. Li, and J. Yang, “Inference guided feature generation for generalized zero-shot learning,” Neurocomputing, vol. 430, pp. 150–158, Mar. 2021. doi:10.1016/j.neucom.2020.10.080
  • [22] “Semantic Deep Learning,” [Online]. Available: https://www.dfki.de/~declerck/semdeep-4/index.html. [Accessed: 04 Dec. 2023].
  • [23] H. Wang, “Semantic Deep Learning,” 2015.
  • [24] Y. Geng et al., “Generative Adversarial Zero-shot Learning via Knowledge Graphs,” arXiv, Apr. 06, 2020. doi:10.48550/arXiv.2004.03109
  • [25] Y. Geng et al., “OntoZSL: Ontology-enhanced Zero-shot Learning,” arXiv.org, [Online]. Available: https://arxiv.org/abs/2102.07339v1. [Accessed: 08 Dec. 2023].
  • [26] Y. Geng et al., “Benchmarking knowledge-driven zero-shot learning,” J. Web Semant., vol. 75, p. 100757, Jan. 2023. doi:10.1016/j.websem.2022.100757
  • [27] K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition.” arXiv, Dec. 10, 2015. doi:10.48550/arXiv.1512.03385
  • [28] H. Zhang, H. Que, J. Ren, and Z. Wu, “Transductive semantic knowledge graph propagation for zero-shot learning,” J. Frankl. Inst., vol. 360, no. 17, pp. 13108–13125, Nov. 2023. doi:10.1016/j.jfranklin.2023.07.009
  • [29] L. Nieto-Piña and R. Johansson, “Automatically Linking Lexical Resources with Word Sense Embedding Models,” in Proceedings of the Third Workshop on Semantic Deep Learning, L. E. Anke, D. Gromann, and T. Declerck, Eds., Santa Fe, New Mexico: Association for Computational Linguistics, Aug. 2018, pp. 23–29. [Online]. Available: https://aclanthology.org/W18-4003. [Accessed: 04 Dec. 2023].
  • [30] Y. Zhou, J. Shah, and S. Schockaert, “Learning Household Task Knowledge from WikiHow Descriptions,” in Proceedings of the 5th Workshop on Semantic Deep Learning (SemDeep-5), L. Espinosa-Anke, T. Declerck, D. Gromann, J. Camacho-Collados, and M. T. Pilehvar, Eds., Macau, China: Association for Computational Linguistics, Aug. 2019, pp. 50–56. [Online]. Available: https://aclanthology.org/W19-5808.
  • [31] D. Loureiro and A. Jorge, “LIAAD at SemDeep-5 Challenge: Word-in-Context (WiC),” in Proceedings of the 5th Workshop on Semantic Deep Learning (SemDeep-5), L. Espinosa-Anke, T. Declerck, D. Gromann, J. Camacho-Collados, and M. T. Pilehvar, Eds., Macau, China: Association for Computational Linguistics, Aug. 2019, pp. 1–5. [Online]. Available: https://aclanthology.org/W19-5801. [Accessed: 04 Dec. 2023].
  • [32] J. Park, K. Kim, W. Hwang, and D. Lee, “Concept embedding to measure semantic relatedness for biomedical information ontologies,” J. Biomed. Inform., vol. 94, p. 103182, Jun. 2019. doi:10.1016/j.jbi.2019.103182
  • [33] T. Murata et al., “Predicting Relations Between RDF Entities by Deep Neural Network,” Nov. 2017, pp. 343–354. doi:10.1007/978-3-319-70407-4_43
  • [34] G. Petrucci, M. Rospocher, and C. Ghidini, “Expressive ontology learning as neural machine translation,” J. Web Semant., vol. 52–53, pp. 66–82, Oct. 2018. doi:10.1016/j.websem.2018.10.002
  • [35] J.-R. Ruiz-Sarmiento, C. Galindo, J. Monroy, F.-A. Moreno, and J. Gonzalez-Jimenez, “Ontology-based conditional random fields for object recognition,” Knowl.-Based Syst., vol. 168, pp. 100–108, Mar. 2019. doi:10.1016/j.knosys.2019.01.005
  • [36] L. Tian et al., “Multi-scale visual attention for attribute disambiguation in zero-shot learning,” Signal Process. Image Commun., vol. 103, p. 116614, Apr. 2022. doi:10.1016/j.image.2021.116614
  • [37] J. Liu, L. Fu, H. Zhang, Q. Ye, W. Yang, and L. Liu, “Learning discriminative and representative feature with cascade GAN for generalized zero-shot learning,” Knowl.-Based Syst., vol. 236, p. 107780, Jan. 2022. doi:10.1016/j.knosys.2021.107780
  • [38] C. Gautam, S. Parameswaran, A. Mishra, and S. Sundaram, “Tf-GCZSL: Task-free generalized continual zero-shot learning,” Neural Netw., vol. 155, pp. 487–497, Nov. 2022. doi:10.1016/j.neunet.2022.08.034
  • [39] J. Zhang, Y. Geng, W. Wang, W. Sun, Z. Yang, and Q. Li, “Distribution and gradient constrained embedding model for zero-shot learning with fewer seen samples,” Knowl.-Based Syst., vol. 251, p. 109218, Sep. 2022. doi:10.1016/j.knosys.2022.109218
  • [40] Y. Liu, X. Gao, J. Han, L. Liu, and L. Shao, “Zero-shot learning via a specific rank-controlled semantic autoencoder,” Pattern Recognit., vol. 122, p. 108237, Feb. 2022. doi:10.1016/j.patcog.2021.108237
  • [41] C. Niu et al., “Unbiased feature generating for generalized zero-shot learning,” J. Vis. Commun. Image Represent., vol. 89, p. 103657, Nov. 2022. doi:10.1016/j.jvcir.2022.103657
  • [42] X. Xu, X. Bao, X. Lu, R. Zhang, X. Chen, and G. Lu, “An end-to-end deep generative approach with meta-learning optimization for zero-shot object classification,” Inf. Process. Manag., vol. 60, no. 2, p. 103233, Mar. 2023. doi:10.1016/j.ipm.2022.103233
  • [43] J. Zhang, S. Liao, H. Zhang, Y. Long, Z. Zhang, and L. Liu, “Data driven recurrent generative adversarial network for generalized zero shot image classification,” Inf. Sci., vol. 625, pp. 536–552, May 2023. doi:10.1016/j.ins.2023.01.039

Details

Primary Language English
Subjects Computer Software
Journal Section Research Articles
Authors

Emre Akdemir 0000-0003-2507-9264

Necaattin Barışçı 0000-0002-8762-5091

Early Pub Date April 30, 2024
Publication Date April 30, 2024
Submission Date December 15, 2023
Acceptance Date April 25, 2024
Published in Issue Year 2024 Volume: 10 Issue: 1

Cite

IEEE E. Akdemir and N. Barışçı, “Ontology-Based Generalized Zero-Shot Learning with Generative Networks”, GJES, vol. 10, no. 1, pp. 183–192, 2024.

Gazi Journal of Engineering Sciences (GJES) publishes open access articles under a Creative Commons Attribution 4.0 International License (CC BY).