Research Article
Face Super Resolution Based on Identity Preserving V-Network

Year 2025, Volume: 8 Issue: 1, 27 - 37, 28.03.2025
https://doi.org/10.35377/saucis.8.91064.1525350

Abstract

Numerous super-resolution methods have been developed to restore and upsample low-resolution, low-detail images to higher resolutions. Face super-resolution studies in particular aim to repair various degradations in facial images while enhancing resolution and preserving detail. This study proposes the VNet architecture, which combines a deep learning-based convolutional network that converts low-resolution, degraded facial images into high-quality, detailed images with a pre-trained FaceNet model that preserves identity information. The architecture exploits the Encoder-Decoder structure in both directions to retain detail and recover lost information. In the first stage, the Encoder module compresses the image representation, filtering out unnecessary information. The Decoder module then reconstructs the high-resolution, restored image from this compressed representation. Residual connections in this process help minimize information loss while preserving detail. In the final stage, identity feedback from the FaceNet model guides enhancement so that the output does not deviate from the original identity. Tests conducted on various facial datasets demonstrate that VNet achieves high metric performance in both super-resolution and restoration tasks. The results indicate that the proposed architecture is effective in producing realistic, high-quality versions of low-resolution and degraded facial images.

References

  • N. Singh, S. S. Rathore, and S. Kumar, “Towards a super-resolution based approach for improved face recognition in low resolution environment,” Multimed Tools Appl, vol. 81, no. 27, pp. 38887–38919, Nov. 2022, doi: 10.1007/s11042-022-13160-z.
  • J. Jiang, C. Wang, X. Liu, and J. Ma, “Deep Learning-based Face Super-Resolution: A Survey,” ACM Comput Surv, vol. 55, no. 1, Jan. 2021, doi: 10.1145/3485132.
  • C. Dong, C. C. Loy, K. He, and X. Tang, “Image Super-Resolution Using Deep Convolutional Networks,” IEEE Trans Pattern Anal Mach Intell, vol. 38, no. 2, pp. 295–307, Feb. 2016, doi: 10.1109/TPAMI.2015.2439281.
  • J. Kim, J. K. Lee, and K. M. Lee, “Accurate Image Super-Resolution Using Very Deep Convolutional Networks,” Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2016-December, pp. 1646–1654, Nov. 2015, doi: 10.1109/CVPR.2016.182.
  • K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2016-December, pp. 770–778, Dec. 2015, doi: 10.1109/CVPR.2016.90.
  • C. Ledig et al., “Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network,” Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, vol. 2017-January, pp. 105–114, Sep. 2016, doi: 10.1109/CVPR.2017.19.
  • B. Lim, S. Son, H. Kim, S. Nah, and K. M. Lee, “Enhanced Deep Residual Networks for Single Image Super-Resolution,” IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, vol. 2017-July, pp. 1132–1140, Jul. 2017, doi: 10.1109/CVPRW.2017.151.
  • G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely Connected Convolutional Networks,” Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, vol. 2017-January, pp. 2261–2269, Aug. 2016, doi: 10.1109/CVPR.2017.243.
  • T. Tong, G. Li, X. Liu, and Q. Gao, “Image Super-Resolution Using Dense Skip Connections,” in Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017.
  • I. J. Goodfellow et al., “Generative Adversarial Nets,” in Advances in Neural Information Processing Systems, vol. 27, 2014, Accessed: May 07, 2024. [Online]. Available: http://www.github.com/goodfeli/adversarial
  • X. Wang et al., “ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks,” Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 11133 LNCS, pp. 63–79, Sep. 2018, doi: 10.1007/978-3-030-11021-5_5.
  • E. Zhou, H. Fan, Z. Cao, Y. Jiang, and Q. Yin, “Learning face hallucination in the wild,” in Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, in AAAI’15. AAAI Press, 2015, pp. 3871–3877.
  • X. Yu and F. Porikli, “Ultra-resolving face images by discriminative generative networks,” Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 9909 LNCS, pp. 318–333, 2016, doi: 10.1007/978-3-319-46454-1_20.
  • Y. Zhang, K. Li, K. Li, L. Wang, B. Zhong, and Y. Fu, “Image Super-Resolution Using Very Deep Residual Channel Attention Networks,” in Proceedings of the European Conference on Computer Vision (ECCV), 2018.
  • T. Zhao and C. Zhang, “SAAN: Semantic Attention Adaptation Network for Face Super-Resolution,” in 2020 IEEE International Conference on Multimedia and Expo (ICME), 2020, pp. 1–6. doi: 10.1109/ICME46284.2020.9102926.
  • T. Lu et al., “Face Hallucination via Split-Attention in Split-Attention Network,” in Proceedings of the 29th ACM International Conference on Multimedia, in MM ’21. New York, NY, USA: Association for Computing Machinery, 2021, pp. 5501–5509. doi: 10.1145/3474085.3475682.
  • A. Dosovitskiy et al., “An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale,” ICLR 2021 - 9th International Conference on Learning Representations, Oct. 2020, Accessed: Jul. 10, 2024. [Online]. Available: https://arxiv.org/abs/2010.11929v2
  • Y. Wang et al., “TANet: A new Paradigm for Global Face Super-resolution via Transformer-CNN Aggregation Network,” Sep. 2021, Accessed: Jul. 10, 2024. [Online]. Available: https://arxiv.org/abs/2109.08174v1
  • G. Gao, Z. Xu, J. Li, J. Yang, T. Zeng, and G.-J. Qi, “CTCNet: A CNN-Transformer Cooperation Network for Face Image Super-Resolution,” IEEE Transactions on Image Processing, vol. 32, pp. 1978–1991, 2023, doi: 10.1109/TIP.2023.3261747.
  • V. R. Khazaie, N. Bayat, and Y. Mohsenzadeh, “Multi Scale Identity-Preserving Image-to-Image Translation Network for Low-Resolution Face Recognition,” Proceedings of the Canadian Conference on Artificial Intelligence, Oct. 2020, doi: 10.21428/594757db.66367c17.
  • “davidsandberg/facenet: Face recognition using Tensorflow.” Accessed: Jul. 15, 2024. [Online]. Available: https://github.com/davidsandberg/facenet?tab=MIT-1-ov-file#readme
  • F. Schroff, D. Kalenichenko, and J. Philbin, “FaceNet: A Unified Embedding for Face Recognition and Clustering,” Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 07-12-June-2015, pp. 815–823, Mar. 2015, doi: 10.1109/CVPR.2015.7298682.
  • Q. Cao, L. Shen, W. Xie, O. M. Parkhi, and A. Zisserman, “VGGFace2: A dataset for recognising faces across pose and age,” in International Conference on Automatic Face and Gesture Recognition, 2018.
  • T. Wang et al., “A Survey of Deep Face Restoration: Denoise, Super-Resolution, Deblur, Artifact Removal,” Nov. 2022, Accessed: May 08, 2024. [Online]. Available: https://arxiv.org/abs/2211.02831v1
  • R. Zhang, P. Isola, A. A. Efros, E. Shechtman, and O. Wang, “The Unreasonable Effectiveness of Deep Features as a Perceptual Metric,” Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 586–595, Jan. 2018, doi: 10.1109/CVPR.2018.00068.
  • Z. Liu, P. Luo, X. Wang, and X. Tang, “Deep Learning Face Attributes in the Wild,” CoRR, vol. abs/1411.7766, 2014, [Online]. Available: http://arxiv.org/abs/1411.7766
  • Z. Zhang, Y. Song, and H. Qi, “Age Progression/Regression by Conditional Adversarial Autoencoder,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • C. E. Thomaz and G. A. Giraldi, “A new ranking method for principal components analysis and its application to face image analysis,” Image Vis Comput, vol. 28, no. 6, pp. 902–913, Jun. 2010, doi: 10.1016/j.imavis.2009.11.005.
  • R. Gross, I. Matthews, J. Cohn, T. Kanade, and S. Baker, “Multi-PIE,” Image Vis Comput, vol. 28, no. 5, pp. 807–813, May 2010, doi: 10.1016/j.imavis.2009.08.002.
There are 29 citations in total.

Details

Primary Language English
Subjects Software Engineering (Other)
Journal Section Research Article
Authors

Ali Hüsameddin Ateş 0000-0001-7690-7301

Hüseyin Eski 0000-0002-6006-3228

Early Pub Date March 27, 2025
Publication Date March 28, 2025
Submission Date July 31, 2024
Acceptance Date March 20, 2025
Published in Issue Year 2025, Volume: 8 Issue: 1

Cite

APA Ateş, A. H., & Eski, H. (2025). Face Super Resolution Based on Identity Preserving V-Network. Sakarya University Journal of Computer and Information Sciences, 8(1), 27-37. https://doi.org/10.35377/saucis.8.91064.1525350
AMA Ateş AH, Eski H. Face Super Resolution Based on Identity Preserving V-Network. SAUCIS. March 2025;8(1):27-37. doi:10.35377/saucis.8.91064.1525350
Chicago Ateş, Ali Hüsameddin, and Hüseyin Eski. “Face Super Resolution Based on Identity Preserving V-Network”. Sakarya University Journal of Computer and Information Sciences 8, no. 1 (March 2025): 27-37. https://doi.org/10.35377/saucis.8.91064.1525350.
EndNote Ateş AH, Eski H (March 1, 2025) Face Super Resolution Based on Identity Preserving V-Network. Sakarya University Journal of Computer and Information Sciences 8 1 27–37.
IEEE A. H. Ateş and H. Eski, “Face Super Resolution Based on Identity Preserving V-Network”, SAUCIS, vol. 8, no. 1, pp. 27–37, 2025, doi: 10.35377/saucis.8.91064.1525350.
ISNAD Ateş, Ali Hüsameddin - Eski, Hüseyin. “Face Super Resolution Based on Identity Preserving V-Network”. Sakarya University Journal of Computer and Information Sciences 8/1 (March 2025), 27-37. https://doi.org/10.35377/saucis.8.91064.1525350.
JAMA Ateş AH, Eski H. Face Super Resolution Based on Identity Preserving V-Network. SAUCIS. 2025;8:27–37.
MLA Ateş, Ali Hüsameddin and Hüseyin Eski. “Face Super Resolution Based on Identity Preserving V-Network”. Sakarya University Journal of Computer and Information Sciences, vol. 8, no. 1, 2025, pp. 27-37, doi:10.35377/saucis.8.91064.1525350.
Vancouver Ateş AH, Eski H. Face Super Resolution Based on Identity Preserving V-Network. SAUCIS. 2025;8(1):27-37.




The papers in this journal are licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.