Expressing the visual content of an image in natural language has gained relevance thanks to technological and algorithmic advances together with improved computational processing capacity. Many smartphone applications for image captioning have been developed recently, as built-in cameras offer ease of operation and portability, allowing an image to be captured whenever and wherever needed. Here, a new image captioning approach based on an encoder-decoder framework with a multi-layer gated recurrent unit (GRU) is proposed. The Inception-v3 convolutional neural network is employed in the encoder owing to its ability to extract more features from small image regions. The proposed recurrent neural network-based decoder feeds these features into the multi-layer GRU to produce a natural language description word by word. Experimental evaluations on the MSCOCO dataset demonstrate that the proposed approach consistently outperforms existing approaches across different evaluation metrics. Integrated into our custom-designed Android application, named “VirtualEye+”, the proposed approach has great potential to bring image captioning into daily use.
Artificial intelligence, natural language processing, image captioning, Android
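For a concrete picture of the encoder-decoder design described in the abstract, the sketch below shows one way such an architecture can be assembled in TensorFlow/Keras: an Inception-v3 feature extractor followed by a stacked (multi-layer) GRU decoder that emits the caption word by word. All layer sizes, the vocabulary size, and the class/variable names are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of an Inception-v3 + multi-layer GRU captioning model,
# assuming TensorFlow/Keras; hyperparameters are placeholder assumptions.
import tensorflow as tf

VOCAB_SIZE = 10000   # assumed vocabulary size
EMBED_DIM = 256      # assumed embedding dimension
UNITS = 512          # assumed GRU hidden size

# Encoder side: Inception-v3 without its classification head provides
# convolutional features from small image regions.
inception = tf.keras.applications.InceptionV3(include_top=False, weights="imagenet")
feature_extractor = tf.keras.Model(inception.input, inception.output)

class Encoder(tf.keras.Model):
    """Projects flattened Inception-v3 feature maps into the embedding space."""
    def __init__(self):
        super().__init__()
        self.fc = tf.keras.layers.Dense(EMBED_DIM, activation="relu")

    def call(self, features):
        # features: (batch, num_regions, 2048) flattened Inception-v3 output
        return self.fc(features)

class Decoder(tf.keras.Model):
    """Stacked GRU decoder that predicts the next word given the image context."""
    def __init__(self):
        super().__init__()
        self.embedding = tf.keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.gru1 = tf.keras.layers.GRU(UNITS, return_sequences=True, return_state=True)
        self.gru2 = tf.keras.layers.GRU(UNITS, return_sequences=True, return_state=True)
        self.fc = tf.keras.layers.Dense(VOCAB_SIZE)

    def call(self, word_ids, image_embedding, states=(None, None)):
        x = self.embedding(word_ids)                              # (batch, 1, EMBED_DIM)
        # Condition on the image by concatenating its pooled embedding.
        context = tf.reduce_mean(image_embedding, axis=1, keepdims=True)
        x = tf.concat([context, x], axis=-1)
        x, s1 = self.gru1(x, initial_state=states[0])
        x, s2 = self.gru2(x, initial_state=states[1])
        return self.fc(x), (s1, s2)                               # logits over the vocabulary
```

At inference time, such a decoder would be called repeatedly, feeding back the previously predicted word until an end-of-sentence token is produced; this word-by-word generation mirrors the decoding procedure outlined in the abstract.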
| Primary Language | English |
|---|---|
| Subjects | Artificial Intelligence |
| Section | Articles |
| Authors | |
| Publication Date | August 31, 2021 |
| Submission Date | January 22, 2021 |
| Acceptance Date | May 13, 2021 |
| Published in Issue | Year 2021, Volume: 4, Issue: 2 |
The papers in this journal are licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.