Research Article

U-Net-Based Detection of Road and Lane Markings from High-Resolution Images

Year 2023, Volume: 6 Issue: 2, 284 - 299, 23.10.2023
https://doi.org/10.51513/jitsa.1172992

Abstract

With technological advances in hardware, many autonomous systems are now part of daily life. Autonomous vehicles, designed for safe travel in the transportation sector, perform dynamic environment monitoring with the help of sensors and cameras. These vehicles must process the image data received from their cameras and transform it into meaningful information, and artificial intelligence-based approaches are highly effective for this task. In this study, a U-Net-based system is proposed that can automatically detect and classify road and lane marking areas in high-resolution images. A publicly available dataset was customized for the model's training, validation, and testing phases. The pre-processing steps designed to make high-resolution images usable in training the U-Net model are explained. The dataset samples were split into 70% training, 20% validation, and 10% testing. Training was performed with an early stopping function and was limited to a maximum of 100 epochs. The numerical results of the training and validation phases, carried out as multi-class semantic segmentation, are reported. In the test phase, the proposed model achieved Intersection over Union (IoU) scores ranging from a minimum of 37.14% to a maximum of 93.65%, with an average of 79.48%. With this model, the classification and detection of road and lane marking areas can assist the dynamic environment monitoring of autonomous vehicles.
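The abstract reports per-class Intersection over Union (IoU) scores for the multi-class segmentation results. As an illustrative sketch only (not the authors' implementation; the function names and the flat per-pixel label representation are assumptions), per-class and mean IoU can be computed from predicted and ground-truth label maps as follows:

```python
def per_class_iou(pred, target, num_classes):
    """Compute IoU for each class over flat per-pixel label sequences.

    pred, target: sequences of integer class labels (one entry per pixel).
    Returns a list of IoU values; a class absent from both masks gets None,
    since its IoU is undefined (empty union).
    """
    ious = []
    for c in range(num_classes):
        # Intersection: pixels labeled c in both prediction and ground truth.
        inter = sum(1 for p, t in zip(pred, target) if p == c and t == c)
        # Union: pixels labeled c in either prediction or ground truth.
        union = sum(1 for p, t in zip(pred, target) if p == c or t == c)
        ious.append(inter / union if union else None)
    return ious


def mean_iou(ious):
    """Average the defined (non-None) per-class IoU values."""
    vals = [v for v in ious if v is not None]
    return sum(vals) / len(vals) if vals else 0.0
```

For example, with three classes and the six-pixel masks `pred = [0, 0, 1, 1, 2, 2]` and `target = [0, 1, 1, 1, 2, 0]`, the per-class IoU values are 1/3, 2/3, and 1/2, giving a mean IoU of 0.5.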

References

  • Kaushal, P., Vatsa, D., Gupta, S., and Raj, R. (2022). Historical analysis of wheel and diving into future of wheel made with additive manufacturing. Recent Trends in Industrial and Production Engineering, 95-106. doi:10.1007/978-981-16-3330-0_8
  • Winner, H., and Wachenfeld, W. (2016). Effects of autonomous driving on the vehicle concept. Autonomous Driving, 255-275. doi:10.1007/978-3-662-48847-8_13
  • Milakis, D. (2019). Long-term implications of automated vehicles: An introduction. Transport Reviews, 39(1), 1-8. doi:10.1080/01441647.2019.1545286
  • Zhang, C., and Lu, Y. (2021). Study on artificial intelligence: The state of the art and future prospects. Journal of Industrial Information Integration, 23, 100224. doi:10.1016/j.jii.2021.100224
  • Stanton, N. A., and Salmon, P. M. (2009). Human error taxonomies applied to driving: A generic driver error taxonomy and its implications for intelligent transport systems. Safety Science, 47(2), 227-237. doi:10.1016/j.ssci.2008.03.006
  • Henschke, A. (2020). Trust and resilient autonomous driving systems. Ethics and Information Technology, 22(1), 81-92. doi:10.1007/s10676-019-09517-y
  • Deng, G., and Wu, Y. (2018). Double lane line edge detection method based on constraint conditions Hough transform. 17th International Symposium on Distributed Computing and Applications for Business Engineering and Science (DCABES), 107-110.
  • He, Y., Wang, H., and Zhang, B. (2004). Color-based road detection in urban traffic scenes. IEEE Transactions on Intelligent Transportation Systems, 5(4), 309-318.
  • Yadav, S., Patra, S., Arora, C., and Banerjee, S. (2017). Deep CNN with color lines model for unmarked road segmentation. 2017 IEEE International Conference on Image Processing (ICIP), 585-589.
  • Dewangan, D. K., and Sahu, S. P. (2021). Road detection using semantic segmentation-based convolutional neural network for intelligent vehicle system. Data Engineering and Communication Technology, 629-637. doi:10.1007/978-981-16-0081-4_63
  • Li, J., Jiang, F., Yang, J., Kong, B., Gogate, M., Dashtipour, K., and Hussain, A. (2021). Lane-DeepLab: Lane semantic segmentation in automatic driving scenarios for high-definition maps. Neurocomputing, 465, 15-25. doi:10.1016/j.neucom.2021.08.105
  • Cordts, M., Omran, M., Ramos, S., Rehfeld, T., Enzweiler, M., Benenson, R., ... Schiele, B. (2016). The Cityscapes dataset for semantic urban scene understanding. IEEE Conference on Computer Vision and Pattern Recognition, 3213-3223.
  • Geiger, A., Lenz, P., Stiller, C., and Urtasun, R. (2013). Vision meets robotics: The KITTI dataset. The International Journal of Robotics Research, 32(11), 1231-1237. doi:10.1177/0278364913491297
  • Brostow, G. J., Fauqueur, J., and Cipolla, R. (2009). Semantic object classes in video: A high-definition ground truth database. Pattern Recognition Letters, 30(2), 88-97. doi:10.1016/j.patrec.2008.04.005
  • Neuhold, G., Ollmann, T., Rota Bulo, S., and Kontschieder, P. (2017). The Mapillary Vistas dataset for semantic understanding of street scenes. IEEE International Conference on Computer Vision, 4990-4999.
  • Ronneberger, O., Fischer, P., and Brox, T. (2015). U-Net: Convolutional networks for biomedical image segmentation. International Conference on Medical Image Computing and Computer-Assisted Intervention, 234-241.
There are 16 citations in total.

Details

Primary Language English
Subjects Engineering
Journal Section Articles
Authors

Oğuzhan Katar 0000-0002-5628-3543

Early Pub Date October 20, 2023
Publication Date October 23, 2023
Submission Date September 9, 2022
Acceptance Date April 24, 2023
Published in Issue Year 2023 Volume: 6 Issue: 2

Cite

APA Katar, O. (2023). U-Net-Based Detection of Road and Lane Markings from High-Resolution Images. Akıllı Ulaşım Sistemleri Ve Uygulamaları Dergisi, 6(2), 284-299. https://doi.org/10.51513/jitsa.1172992