An Implementation of Traffic Signs and Road Objects Detection Using Faster R-CNN
Year 2022, pp. 216–224, 31.08.2022
Emin Güney, Cüneyt Bayılmış
Abstract
The detection of traffic signs and road objects is a significant issue for driver safety, and it has gained popularity with the development of autonomous vehicles and driver-assistance systems. This study presents a real-time system that detects traffic signs and various objects in the driving environment with a camera. The Faster R-CNN architecture, a well-known two-stage approach to object detection, was used as the detection method. A dataset was created by collecting various images for training and testing the model. It consists of 1880 images of traffic signs and road objects collected in Turkey, combined with the GTSRB dataset. The combined images were divided into training and testing sets at a ratio of 80/20. Training of the model was carried out on a computer for 8.5 hours and approximately 10,000 iterations. Experimental results show the real-time performance of Faster R-CNN for robust detection of traffic signs and road objects.
Supporting Institution
Sakarya University Scientific Research Projects Coordination Unit
Project Number
2021-7-24-20
Thanks
We want to thank our hardworking teammates Neslihan ÇAKIRBAŞ, Havva Selin ÇAKMAK, Ali Göktuğ YALÇIN, and Dilara KOCA for their help in preparing the datasets and developing the system.
References
- [1] A. Ruta, Y. Li, and X. Liu, “Real-time traffic sign recognition from video by class-spec. discriminative features,” Pattern Recognition, vol. 43, no. 1, pp. 416–430, 2010.
- [2] H. Li, F. Sun, L. Liu, and L. Wang, “A novel traffic sign detection method via color segmentation and robust shape matching,” Neurocomputing, vol. 169, pp. 77–88, 2015.
- [3] S. Yin, P. Ouyang, L. Liu, Y. Guo, and S. Wei, “Fast Traffic Sign Recognition with a Rotation Invariant Binary Pattern Based Feature,” pp. 2161–2180, 2015.
- [4] R. Qian, B. Zhang, Z. Wang, and F. Coenen, “Robust Chinese Traffic Sign Detection and Recognition with Deep Convolutional Neural Network,” pp. 791–796, 2015.
- [5] X. Changzhen, W. Cong, M. Weixin, and S. Yanmei, “A Traffic Sign Detection Algorithm Based on Deep Convolutional Neural Network,” pp. 6–9, 2016.
- [6] J. Stallkamp, M. Schlipsing, J. Salmen, and C. Igel, “The German Traffic Sign Recognition Benchmark: A multi-class classification competition,” Proceedings of the International Joint Conference on Neural Networks, pp. 1453–1460, 2011.
- [7] J. Zhang, M. Huang, X. Jin, and X. Li, “A Real-Time Chinese Traffic Sign Detection Algorithm Based on Modified YOLOv2,” pp. 1–13, 2017.
- [8] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-Based Learning Applied to Document Recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
- [9] C. Liu, F. Yin, D. Wang, and Q. Wang, “Chinese Handwriting Recognition Contest 2010,” pp. 3–7, 2010.
- [10] M. Mathias, R. Timofte, R. Benenson, and L. van Gool, “Traffic sign recognition - How far are we from the solution?,” Proceedings of the International Joint Conference on Neural Networks, 2013.
- [11] T.-Y. Lin et al., “LNCS 8693 - Microsoft COCO: Common Objects in Context,” 2014.
- [12] “INRIA Annotations for Graz-02 (IG02).” https://lear.inrialpes.fr/people/marszalek/data/ig02/ (accessed Nov. 20, 2021).
- [13] X. Xu, J. Jin, S. Zhang, L. Zhang, S. Pu, and Z. Chen, “Smart data driven traffic sign detection method based on adaptive color threshold and shape symmetry,” Future Generation Computer Systems, vol. 94, pp. 381–391, 2019.
- [14] G. Ozturk, R. Koker, O. Eldogan, and D. Karayel, “Recognition of Vehicles, Pedestrians and Traffic Signs Using Convolutional Neural Networks,” Oct. 2020.
- [15] C. Han, G. Gao, and Y. Zhang, “Real-time small traffic sign detection with revised faster-RCNN,” Multimedia Tools and Applications, vol. 78, no. 10, pp. 13263–13278, May 2019.
- [16] K. Zhou, Y. Zhan, and D. Fu, “Learning region-based attention network for traffic sign recognition,” Sensors (Switzerland), vol. 21, no. 3, pp. 1–21, 2021.
- [17] F. Shao, X. Wang, F. Meng, J. Zhu, D. Wang, and J. Dai, “Improved faster R-CNN traffic sign detection based on a second region of interest and highly possible regions proposal network,” Sensors (Switzerland), vol. 19, no. 10, May 2019.
- [18] X. Dai et al., “Multi-task faster R-CNN for nighttime pedestrian detection and distance estimation,” Infrared Physics and Technology, vol. 115, Jun. 2021, doi: 10.1016/j.infrared.2021.103694.
- [19] “Make Sense,” Victorian Studies, vol. 48, no. 3. pp. 395–438, 2006.
- [20] K. He, G. Gkioxari, P. Dollár, and R. Girshick, “Mask R-CNN,” Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 2961–2969, 2017.
- [21] R. Girshick, “Fast R-CNN,” Proceedings of the IEEE International Conference on Computer Vision, vol. 2015 Inter, pp. 1440–1448, 2015.
- [22] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 779–788, 2016.
- [23] W. Liu et al., “SSD: Single shot multibox detector,” Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 9905 LNCS, pp. 21–37, 2016.
- [24] S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks”.
- [25] R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation.” pp. 580–587, 2014.
- [26] J. R. R. Uijlings, K. E. A. van de Sande, T. Gevers, and A. W. M. Smeulders, “Selective search for object recognition,” International Journal of Computer Vision, vol. 104, no. 2, pp. 154–171, Sep. 2013.
- [27] “Flagly.” https://www.flagly.org/project/projects/4/sections/42/ (accessed Dec. 15, 2021).