Estimation of Road Boundary for Intelligent Vehicles Based on DeepLabV3+ Architecture

Sunanda Das, Awal Ahmed Fime, Nazmul Siddique, M. M. A. Hashem

Research output: Contribution to journal › Article › peer-review

27 Citations (Scopus)

Abstract

Road boundary estimation is an essential task for autonomous vehicles and intelligent driving assistants. The task is relatively straightforward when roads are properly marked with indicators; however, estimating road boundaries reliably without prior knowledge of the road, such as road markings, is extremely difficult. This paper proposes a method for estimating road boundaries in different environments using deep learning-based semantic segmentation, without any predefined road markings. The proposed method employs the encoder-decoder-based DeepLabV3+ architecture for segmentation with different backbone networks, namely VGG16, VGG19, ResNet-50, and ResNet-101, while handling the class imbalance problem by weighting the loss contributions of the model's different outputs. The performance of the proposed method is verified on the 'ICCV09DATA' dataset. The method outperformed existing methods, achieving accuracy, precision, recall, and F-measure of 0.9596±0.0097, 0.9453±0.0118, 0.9369±0.0149, and 0.9408±0.0135, respectively, with ResNet-101 as the backbone network and the Dice coefficient as the loss function. A detailed experimental analysis confirms the feasibility of the proposed method for road boundary estimation in challenging environments.
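
The abstract pairs a DeepLabV3+/ResNet-101 segmenter with a Dice-coefficient loss and loss weighting for class imbalance. The sketch below is a minimal, hedged illustration in PyTorch (torchvision ≥ 0.13 API), not the authors' implementation: torchvision ships DeepLabV3 rather than the paper's DeepLabV3+ decoder, the soft Dice loss's smoothing term and unweighted class average are assumptions, and num_classes=8 assumes the eight semantic classes of the Stanford Background (iccv09Data) dataset.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from torchvision.models.segmentation import deeplabv3_resnet101

    class DiceLoss(nn.Module):
        # Soft multi-class Dice loss; the smoothing constant and the
        # unweighted mean over classes are assumptions, not from the paper.
        def __init__(self, smooth=1.0):
            super().__init__()
            self.smooth = smooth

        def forward(self, logits, targets):
            # logits: (N, C, H, W) raw scores; targets: (N, H, W) class ids
            num_classes = logits.shape[1]
            probs = torch.softmax(logits, dim=1)
            one_hot = F.one_hot(targets, num_classes).permute(0, 3, 1, 2).float()
            dims = (0, 2, 3)  # reduce over batch and spatial dims, keep classes
            intersection = (probs * one_hot).sum(dims)
            cardinality = probs.sum(dims) + one_hot.sum(dims)
            dice = (2.0 * intersection + self.smooth) / (cardinality + self.smooth)
            return 1.0 - dice.mean()

    # DeepLabV3 with a ResNet-101 backbone; num_classes=8 is an assumption
    # matching the iccv09Data label set.
    model = deeplabv3_resnet101(weights=None, num_classes=8)

    # One illustrative training step on random stand-in tensors.
    images = torch.randn(2, 3, 240, 320)
    labels = torch.randint(0, 8, (2, 240, 320))
    loss = DiceLoss()(model(images)["out"], labels)
    loss.backward()

The abstract's class-imbalance weighting could likewise be sketched as a per-class weight vector passed to torch.nn.CrossEntropyLoss(weight=...); the paper's exact weighting scheme is not reproduced here.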
Original language: English
Pages (from-to): 121060-121075
Number of pages: 16
Journal: IEEE Access
Volume: 9
DOIs
Publication status: Published (in print/issue) - 24 Aug 2021

Keywords

  • augmentation
  • class imbalance
  • deep learning
  • DeepLabV3+ architecture
  • road boundary
  • semantic segmentation
  • transfer learning
