Research Article

Deep Learning-based Road Segmentation & Pedestrian Detection System for Intelligent Vehicles

Year 2023, Volume: 6 Issue: 1, 22 - 31, 30.04.2023
https://doi.org/10.35377/saucis...1170902

Abstract

Correctly determining the drivable area and detecting pedestrians are crucial for intelligent vehicles to reduce the risk of fatal road accidents, yet both remain challenging computer vision tasks because of varying weather, road conditions, and similar factors. This paper presents a vision-based road segmentation and pedestrian detection system. First, roads are segmented using a deep learning-based consecutive triple filter size (CTFS) approach. Then, pedestrians on the segmented roads are detected using transfer learning. The CTFS approach can create feature maps for both small and large features. The proposed system offers a reliable, low-cost road segmentation and pedestrian detection solution for intelligent vehicles.
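As a rough illustration of the idea named in the abstract, the sketch below shows what a "consecutive triple filter size" style block could look like: three convolutions with different kernel sizes applied in sequence so that both small and large road features can be captured. This is only a minimal, hypothetical PyTorch sketch; the kernel sizes (3, 5, 7), channel widths, and class name are assumptions, not the authors' published architecture.

```python
# Hypothetical sketch of a CTFS-style block (assumed layout, not the paper's exact model).
import torch
import torch.nn as nn


class CTFSBlock(nn.Module):
    """Applies three convolutions with increasing kernel sizes in sequence,
    so the block can respond to both small and large road features."""

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, kernel_size=7, padding=3),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.block(x)


if __name__ == "__main__":
    # Hypothetical usage on a single RGB road image (1 x 3 x 224 x 224).
    features = CTFSBlock(in_channels=3, out_channels=32)(torch.randn(1, 3, 224, 224))
    print(features.shape)  # torch.Size([1, 32, 224, 224])
```

In such a block, the padding keeps the spatial resolution fixed while each successive kernel size widens the receptive field; how the full segmentation network combines these feature maps is described in the paper itself.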

References

  • K.-W. Chiang and Y.-W. Huang, “An intelligent navigator for seamless INS/GPS integrated land vehicle navigation applications,” Appl Soft Comput, vol. 8, no. 1, pp. 722–733, Jan. 2008, doi: 10.1016/j.asoc.2007.05.010.
  • X. Zhang and M. M. Khan, “Intelligent Vehicle Navigation and Traffic System,” in Principles of Intelligent Automobiles, Singapore: Springer Singapore, 2019, pp. 175–209. doi: 10.1007/978-981-13-2484-0_5.
  • J. Jin and X. Ma, “A group-based traffic signal control with adaptive learning ability,” Eng Appl Artif Intell, vol. 65, pp. 282–293, Oct. 2017, doi: 10.1016/j.engappai.2017.07.022.
  • J.-Z. Yuan, H. Chen, B. Zhao, and Y. Xu, “Estimation of Vehicle Pose and Position with Monocular Camera at Urban Road Intersections,” J Comput Sci Technol, vol. 32, no. 6, pp. 1150–1161, Nov. 2017, doi: 10.1007/s11390-017-1790-3.
  • C. Ma, W. Hao, A. Wang, and H. Zhao, “Developing a Coordinated Signal Control System for Urban Ring Road Under the Vehicle-Infrastructure Connected Environment,” IEEE Access, vol. 6, pp. 52471–52478, 2018, doi: 10.1109/ACCESS.2018.2869890.
  • S. Zhang, R. Benenson, M. Omran, J. Hosang, and B. Schiele, “Towards Reaching Human Performance in Pedestrian Detection,” IEEE Trans Pattern Anal Mach Intell, vol. 40, no. 4, pp. 973–986, Apr. 2018, doi: 10.1109/TPAMI.2017.2700460.
  • J. Li, X. Liang, S. Shen, T. Xu, J. Feng, and S. Yan, “Scale-aware Fast R-CNN for Pedestrian Detection,” IEEE Trans Multimedia, pp. 1–1, 2017, doi: 10.1109/TMM.2017.2759508.
  • B. Ma, S. Lakshmanan, and A. O. Hero, “Simultaneous detection of lane and pavement boundaries using model-based multisensor fusion,” IEEE Transactions on Intelligent Transportation Systems, vol. 1, no. 3, pp. 135–147, 2000, doi: 10.1109/6979.892150.
  • J. Sparbert, K. Dietmayer, and D. Streller, “Lane detection and street type classification using laser range images,” in ITSC 2001. 2001 IEEE Intelligent Transportation Systems. Proceedings (Cat. No.01TH8585), pp. 454–459. doi: 10.1109/ITSC.2001.948700.
  • M. Bertozzi and A. Broggi, “GOLD: a parallel real-time stereo vision system for generic obstacle and lane detection,” IEEE Transactions on Image Processing, vol. 7, no. 1, pp. 62–81, 1998, doi: 10.1109/83.650851.
  • Y. Wang, E. K. Teoh, and D. Shen, “Lane detection and tracking using B-Snake,” Image Vis Comput, vol. 22, no. 4, pp. 269–280, Apr. 2004, doi: 10.1016/j.imavis.2003.10.003.
  • Luo-Wei Tsai, Jun-Wei Hsieh, Chi-Hung Chuang, and Kuo-Chin Fan, “Lane detection using directional random walks,” in 2008 IEEE Intelligent Vehicles Symposium, Jun. 2008, pp. 303–306. doi: 10.1109/IVS.2008.4621271.
  • Q. Li, N. Zheng, and H. Cheng, “Springrobot: A Prototype Autonomous Vehicle and Its Algorithms for Lane Detection,” IEEE Transactions on Intelligent Transportation Systems, vol. 5, no. 4, pp. 300–308, Dec. 2004, doi: 10.1109/TITS.2004.838220.
  • Y. Wang, E. K. Teoh, and D. Shen, “Lane detection and tracking using B-Snake,” Image Vis Comput, vol. 22, no. 4, pp. 269–280, Apr. 2004, doi: 10.1016/j.imavis.2003.10.003.
  • J. M. Alvarez, T. Gevers, Y. LeCun, and A. M. Lopez, “Road Scene Segmentation from a Single Image,” 2012, pp. 376–389. doi: 10.1007/978-3-642-33786-4_28.
  • G. L. Oliveira, W. Burgard, and T. Brox, “Efficient deep models for monocular road segmentation,” in 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Oct. 2016, pp. 4885–4891. doi: 10.1109/IROS.2016.7759717.
  • H. Liu, X. Han, X. Li, Y. Yao, P. Huang, and Z. Tang, “Deep representation learning for road detection using Siamese network,” Multimed Tools Appl, vol. 78, no. 17, pp. 24269–24283, Sep. 2019, doi: 10.1007/s11042-018-6986-1.
  • C.-A. Brust, S. Sickert, M. Simon, E. Rodner, and J. Denzler, “Convolutional Patch Networks with Spatial Prior for Road Detection and Urban Scene Understanding,” Feb. 2015.
  • D. T. Nguyen, W. Li, and P. O. Ogunbona, “Human detection from images and videos: A survey,” Pattern Recognit, vol. 51, pp. 148–175, Mar. 2016, doi: 10.1016/j.patcog.2015.08.027.
  • Y. Kim and T. Moon, “Human Detection and Activity Classification Based on Micro-Doppler Signatures Using Deep Convolutional Neural Networks,” IEEE Geoscience and Remote Sensing Letters, vol. 13, no. 1, pp. 8–12, Jan. 2016, doi: 10.1109/LGRS.2015.2491329.
  • W. Ouyang and X. Wang, “Joint deep learning for pedestrian detection,” Proceedings of the IEEE International Conference on Computer Vision, pp. 2056–2063, 2013, doi: 10.1109/ICCV.2013.257.
  • I. Goodfellow, Y. Bengio, and A. Courville, Deep learning. MIT Press, 2016.
  • S. Srinivas, R. K. Sarvadevabhatla, K. R. Mopuri, N. Prabhu, S. S. S. Kruthiventi, and R. V. Babu, “A Taxonomy of Deep Convolutional Neural Nets for Computer Vision,” Front Robot AI, Jan. 2016, doi: 10.3389/frobt.2015.00036.
  • V. Nair and G. E. Hinton, “Rectified Linear Units Improve Restricted Boltzmann Machines,” in Proceedings of the 27th International Conference on International Conference on Machine Learning, 2010, pp. 807–814.
  • N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: A Simple Way to Prevent Neural Networks from Overfitting,” Journal of Machine Learning Research, vol. 15, pp. 1929–1958, 2014.
  • R. Padilla, S. L. Netto, and E. A. B. da Silva, “A Survey on Performance Metrics for Object-Detection Algorithms,” in 2020 International Conference on Systems, Signals and Image Processing (IWSSIP), Jul. 2020, pp. 237–242. doi: 10.1109/IWSSIP48289.2020.9145130.
  • J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You Only Look Once: Unified, Real-Time Object Detection,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 2016, pp. 779–788. doi: 10.1109/CVPR.2016.91.
  • Y. Tian, G. Yang, Z. Wang, H. Wang, E. Li, and Z. Liang, “Apple detection during different growth stages in orchards using the improved YOLO-V3 model,” Comput Electron Agric, vol. 157, pp. 417–426, Feb. 2019, doi: 10.1016/j.compag.2019.01.012.
  • V. Badrinarayanan, A. Kendall, and R. Cipolla, “SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation,” IEEE Trans Pattern Anal Mach Intell, vol. 39, no. 12, pp. 2481–2495, Dec. 2017, doi: 10.1109/TPAMI.2016.2644615.
  • M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman, “The Pascal Visual Object Classes Challenge 2012 (VOC2012) Results.”
  • K. Simonyan and A. Zisserman, “Very Deep Convolutional Networks for Large-Scale Image Recognition,” Sep. 2015.
  • X. Song, T. Rui, S. Zhang, J. Fei, and X. Wang, “The Road Segmentation Method Based on the Deep Auto-Encoder with Supervised Learning,” Computers & Electrical Engineering, vol. 68, pp. 381–388, 2018, doi: 10.1007/978-3-319-69877-9_28.
  • J. Liu, B. Liu, and H. Lu, “Detection guided deconvolutional network for hierarchical feature learning,” Pattern Recognit, vol. 48, no. 8, pp. 2645–2655, Aug. 2015, doi: 10.1016/j.patcog.2015.02.002.
  • J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 2015, pp. 3431–3440. doi: 10.1109/CVPR.2015.7298965.
There are 34 citations in total.

Details

Primary Language English
Subjects Computer Software
Journal Section Articles
Authors

Gozde Yolcu Öztel 0000-0002-7841-2131

İsmail Öztel 0000-0001-5157-7035

Early Pub Date April 28, 2023
Publication Date April 30, 2023
Submission Date September 4, 2022
Acceptance Date February 24, 2023
Published in Issue Year 2023, Volume: 6 Issue: 1

Cite

IEEE G. Yolcu Öztel and İ. Öztel, “Deep Learning-based Road Segmentation & Pedestrian Detection System for Intelligent Vehicles”, SAUCIS, vol. 6, no. 1, pp. 22–31, 2023, doi: 10.35377/saucis...1170902.

The papers in this journal are licensed under a Creative Commons Attribution-NonCommercial 4.0 International License