Vehicle-Related Scene Segmentation Using CapsNets

X. Liu, W.Q. Yan, N. Kasabov

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Understanding traffic scenes is a significant research problem in computer vision. In this paper, we present and implement a robust scene segmentation model using a capsule network (CapsNet) as its basic framework. We collected a large number of image samples of Auckland motorway traffic scenes and labelled the data for multiclass classification. The contribution of this paper is that our model facilitates better scene understanding through a matrix representation of pose and spatial relationships, taking a step toward effectively solving the Picasso problem. Our methods are based on deep learning and reduce manual manipulation of data by completing the training process with only a small amount of training data. Our model achieves a preliminary accuracy of 74.61% on our own dataset.
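The abstract's pose-aware capsule mechanism follows the routing-by-agreement idea underlying CapsNets: low-level capsules vote for high-level capsules, and coupling coefficients are iteratively sharpened toward the votes that agree. The paper does not publish its implementation, so the sketch below is an illustrative NumPy version of standard dynamic routing (after Sabour et al.), not the authors' code; all function and variable names here are assumptions.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Squashing non-linearity: preserves direction, maps norm into [0, 1)."""
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def dynamic_routing(u_hat, num_iters=3):
    """Routing-by-agreement between capsule layers (illustrative sketch).

    u_hat: prediction vectors of shape (num_in, num_out, dim_out),
           i.e. each input capsule's vote for each output capsule.
    Returns output capsule vectors of shape (num_out, dim_out).
    """
    num_in, num_out, _ = u_hat.shape
    b = np.zeros((num_in, num_out))  # routing logits
    for _ in range(num_iters):
        # Coupling coefficients: softmax over output capsules
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)
        # Weighted sum of votes, then squash to get output capsules
        s = (c[..., None] * u_hat).sum(axis=0)
        v = squash(s)
        # Update logits by agreement (dot product of votes with outputs)
        b = b + np.einsum('iod,od->io', u_hat, v)
    return v
```

In a segmentation setting, the output capsule with the largest vector norm would indicate the predicted class for a region, and its vector encodes pose information that plain scalar activations discard.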
Original language: English
Title of host publication: International Conference Image and Vision Computing New Zealand
DOIs
Publication status: Published - 1 Jan 2020

Keywords

  • CapsNets
  • Visual scene recognition
  • Visual scene understanding
  • Moving object recognition

