Abstract
The use of omni-directional cameras has become increasingly popular in vision systems for video surveillance and autonomous robot navigation. However, to date most research on omni-directional cameras has focussed on the design of the camera or on the projection of the omni-directional image to a panoramic view, rather than on the processing of such images after capture. Typically, images obtained from omni-directional cameras are transformed into sparse panoramic images, which are then interpolated to obtain a complete panoramic view prior to low level image processing. This interpolation presents a significant computational overhead with respect to real-time vision. We present an efficient design procedure for space variant feature extraction operators that can be applied directly to the sparse panoramic image, without prior interpolation. This paper shows how the computational overhead of directly processing images from omni-directional cameras can be reduced through efficient coding and storage, whilst retaining accuracy sufficient for real-time robot vision.
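
The following is a minimal NumPy sketch of the general idea described in the abstract, not the authors' implementation: the function names, the nearest-neighbour unwrapping and the simple difference operator are illustrative assumptions. It shows an omni-directional image being unwrapped to a sparse panorama together with a validity mask, and a toy space variant operator that is evaluated only at valid sample positions instead of interpolating the panorama first.

```python
import numpy as np

def unwrap_sparse(omni, centre, r_min, r_max, width, height):
    """Map an omni-directional image onto a sparse panorama (hypothetical helper).

    Each panorama pixel is taken from the nearest source pixel on its ray;
    positions with no corresponding source sample are left unfilled, so the
    result is a sparse image plus a boolean validity mask (no interpolation).
    """
    cy, cx = centre
    pano = np.zeros((height, width), dtype=omni.dtype)
    mask = np.zeros((height, width), dtype=bool)
    for row in range(height):
        # radius shrinks towards the mirror centre as we move down the panorama
        r = r_max - (r_max - r_min) * row / max(height - 1, 1)
        for col in range(width):
            theta = 2.0 * np.pi * col / width
            y = int(round(cy + r * np.sin(theta)))
            x = int(round(cx + r * np.cos(theta)))
            if 0 <= y < omni.shape[0] and 0 <= x < omni.shape[1]:
                pano[row, col] = omni[y, x]
                mask[row, col] = True
    return pano, mask

def sparse_horizontal_diff(pano, mask):
    """Toy space variant operator: a horizontal difference evaluated only
    where both neighbouring samples exist, skipping the interpolation stage."""
    both = mask[:, 1:] & mask[:, :-1]
    grad = np.zeros(pano.shape, dtype=float)
    grad[:, 1:][both] = pano[:, 1:][both].astype(float) - pano[:, :-1][both].astype(float)
    return grad
```

With this layout, later feature extraction only visits pixels flagged in the validity mask, which is where the saving over an interpolate-then-process pipeline comes from; the paper's contribution is an efficient design procedure for such operators together with the coding and storage of the sparse image.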
| Original language | English |
| --- | --- |
| Pages (from-to) | 349-361 |
| Journal | Journal of Mathematical Imaging and Vision |
| Volume | 32 |
| Issue number | 2 |
| DOIs | |
| Publication status | Published (in print/issue) - Nov 2008 |
Keywords
- Sparse images
- Space variant operators
- Omni-directional images