Abstract
The goal of multi-exposure image fusion is to generate synthetic results with abundant details and balanced exposure from low dynamic range (LDR) images. Existing multi-exposure fusion (MEF) methods often rely on convolution operations to extract features; however, these operations consider only the pixel values within a local receptive field and ignore long-range dependencies between pixels. To address this problem, we propose GALFusion, a global-local aggregation network that fuses extreme-exposure images in an unsupervised way. First, we design a collaborative aggregation module (CAM), composed of two submodules, a nonlocal attention inference (NLAI) module and a local adaptive learning module, to mine relevant features from the source images, yielding a feature extraction mechanism that aggregates global and local information. Second, we provide a dedicated fusion module (FM) to reconstruct the fused images, which effectively avoids artifacts and suppresses information decay. Moreover, we fine-tune the fusion results with a recursive refinement module (RRM) to capture more textural details from the source images. Comparative and ablation analyses on two datasets demonstrate that GALFusion achieves the best scores in terms of the MEF structural similarity index measure (MEF-SSIM) and peak signal-to-noise ratio (PSNR), outperforming 12 existing state-of-the-art fusion methods.
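The abstract does not include code, but the global-local aggregation idea can be illustrated with a minimal PyTorch sketch: an embedded-Gaussian nonlocal attention block (in the spirit of NLAI, where every spatial position attends to all others) paired with a plain convolutional branch standing in for the local adaptive learning module. All class names, layer choices, and parameters below are hypothetical assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn


class NonLocalAttention(nn.Module):
    """Embedded-Gaussian nonlocal block: each pixel attends to all positions,
    capturing the long-range dependencies that plain convolution misses."""

    def __init__(self, channels, reduction=2):
        super().__init__()
        inter = channels // reduction
        self.theta = nn.Conv2d(channels, inter, kernel_size=1)  # query embedding
        self.phi = nn.Conv2d(channels, inter, kernel_size=1)    # key embedding
        self.g = nn.Conv2d(channels, inter, kernel_size=1)      # value embedding
        self.out = nn.Conv2d(inter, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)  # (b, hw, c')
        k = self.phi(x).flatten(2)                    # (b, c', hw)
        v = self.g(x).flatten(2).transpose(1, 2)      # (b, hw, c')
        attn = torch.softmax(q @ k, dim=-1)           # (b, hw, hw) affinity map
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(y)                        # residual connection


class GlobalLocalAggregation(nn.Module):
    """Toy two-branch extractor: nonlocal (global) features concatenated
    with convolutional (local) features, then merged by a 1x1 conv."""

    def __init__(self, channels=32):
        super().__init__()
        self.global_branch = NonLocalAttention(channels)
        self.local_branch = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.merge = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x):
        g = self.global_branch(x)
        l = self.local_branch(x)
        return self.merge(torch.cat([g, l], dim=1))
```

As a quick shape check, `GlobalLocalAggregation(32)(torch.randn(1, 32, 64, 64))` returns a tensor of the same shape, so the block can be stacked inside a larger encoder.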
| Original language | English |
|---|---|
| Article number | 5011915 |
| Pages (from-to) | 16 |
| Number of pages | 16 |
| Journal | IEEE Transactions on Instrumentation and Measurement |
| Volume | 72 |
| Publication status | Published (in print/issue) - 21 Apr 2023 |
Bibliographical note
Publisher Copyright: © 1963-2012 IEEE.
Keywords
- Feature extraction
- Image color analysis
- Transforms
- Task analysis
- Image fusion
- Convolutional neural networks
- Collaboration
- recursive refinement network
- multi-exposure image
- nonlocal attention (NLA)
- Collaborative extraction