Model-Based Segmentation of Multimodal Images

Xin Hong, Sally McClean, Bryan Scotney, Philip Morrow

Research output: Chapter in Book/Report/Conference proceeding

1 Citation (Scopus)

Abstract

This paper proposes a model-based method for intensity-based segmentation of images acquired from multiple modalities. Pixel intensity within each modality image is modelled by a mixture of univariate Gaussian distributions whose components correspond to the different segments. The proposed Multi-Modality Expectation-Maximization (MMEM) algorithm estimates the probability of each segment, along with the parameters of the Gaussian distributions for each modality, by maximum likelihood using the Expectation-Maximization (EM) algorithm. All of the multimodal images are involved simultaneously in the iterative parameter estimation step. Each pixel is then assigned to the class that maximises the posterior probability combined across all multimodal images. Experimental results show that the method exploits and fuses the complementary information in multimodal images, so segmentation can be more precise than with single-modality images.
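The abstract describes per-modality Gaussian mixtures fitted by EM, with pixel classes chosen by maximising the posterior combined across modalities. The sketch below is a simplified illustration of that idea, not the authors' MMEM implementation: it fits an independent one-dimensional mixture per modality and fuses the per-modality log-posteriors, whereas the paper estimates the segment probabilities jointly across modalities in a single EM. The function names `em_gmm_1d` and `mmem_segment`, the quantile initialisation, and the align-components-by-mean heuristic are all assumptions made for this sketch.

```python
import numpy as np

def em_gmm_1d(x, k, n_iter=100):
    """Fit a k-component univariate Gaussian mixture to intensities x
    by EM; returns weights, means, variances and per-pixel posteriors."""
    n = x.size
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)    # spread initial means
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: posterior responsibility of each segment for each pixel.
        dens = (w / np.sqrt(2 * np.pi * var)
                * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var))
        resp = dens / (dens.sum(axis=1, keepdims=True) + 1e-300)
        # M-step: maximum-likelihood update of the mixture parameters.
        nk = resp.sum(axis=0)
        w = nk / n
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        var = np.maximum(var, 1e-6)                   # guard against collapse
    order = np.argsort(mu)       # heuristic: align segment labels by mean
    return w[order], mu[order], var[order], resp[:, order]

def mmem_segment(images, k):
    """Assign each pixel to the segment maximising the summed
    log-posterior over all modality images (equal shapes, one per modality)."""
    log_post = np.zeros((images[0].size, k))
    for img in images:
        *_, resp = em_gmm_1d(img.ravel().astype(float), k)
        log_post += np.log(resp + 1e-12)
    return log_post.argmax(axis=1).reshape(images[0].shape)
```

Sorting components by their means keeps segment labels consistent across the independently fitted modalities; the joint estimation in the actual MMEM algorithm avoids the need for any such alignment step.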
Original language: English
Title of host publication: Computer Analysis of Images and Patterns
Publisher: Springer
Pages: 604-611
Volume: 4673
ISBN (Print): 978-3-540-74271-5
DOI: 10.1007/978-3-540-74272-2_75
Publication status: Published - 18 Aug 2007


Cite this

Hong, X., McClean, S., Scotney, B., & Morrow, P. (2007). Model-Based Segmentation of Multimodal Images. In Computer Analysis of Images and Patterns (Vol. 4673, pp. 604-611). Springer. https://doi.org/10.1007/978-3-540-74272-2_75