Abstract
Deep learning has recently made rapid progress in the field of multi-exposure image fusion. However, extracting useful features while retaining texture details and color remains challenging. To address this issue, we propose an end-to-end coordinated learning network for detail refinement. Firstly, a collaborative extraction module obtains shallow feature maps from extremely over- and under-exposed source images. Secondly, smooth attention weight maps are generated under the guidance of a self-attention module, which draws global connections to correlate patches at different locations. With the cooperation of these two modules, the proposed network produces a coarse fused image. Moreover, an edge revision module refines the edge details of the fused result and effectively suppresses noise. We conduct subjective qualitative and objective quantitative comparisons between the proposed method and twelve state-of-the-art methods on two public datasets. The results show that our fused images significantly outperform the others in both visual quality and evaluation metrics. In addition, we perform ablation experiments to verify the function and effectiveness of each module in the proposed method. The source code is available at https://github.com/lok-18/LCNDR.
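The paper's exact architecture is not reproduced in this record; as a rough illustration of the idea the abstract describes, the sketch below shows how a self-attention step over flattened feature tokens can yield a smooth per-pixel weight map that blends over- and under-exposed inputs. All shapes, projections, and names here are hypothetical stand-ins (random matrices in place of learned weights), not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def global_self_attention(tokens, wq, wk, wv):
    # Every token attends to every other spatial location,
    # the "global connection" role the abstract assigns to self-attention.
    q, k, v = tokens @ wq, tokens @ wk, tokens @ wv
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v

h, w, c = 8, 8, 16                                 # toy spatial size / channels
feat_over = rng.standard_normal((h * w, c))        # stand-ins for shallow features
feat_under = rng.standard_normal((h * w, c))       # from the two exposure branches

tokens = np.concatenate([feat_over, feat_under], axis=1)   # (h*w, 2c)
wq = rng.standard_normal((2 * c, c)) * 0.1         # hypothetical learned projections
wk = rng.standard_normal((2 * c, c)) * 0.1
wv = rng.standard_normal((2 * c, c)) * 0.1
attended = global_self_attention(tokens, wq, wk, wv)       # (h*w, c)

head = rng.standard_normal((c, 1)) * 0.1           # 1-channel weight head
weight_map = sigmoid(attended @ head).reshape(h, w)        # smooth weights in (0, 1)

img_over = rng.random((h, w))                      # toy single-channel sources
img_under = rng.random((h, w))
fused = weight_map * img_over + (1.0 - weight_map) * img_under
```

Because each fused pixel is a convex combination of the two sources, the result stays within the range of the inputs; in the actual network, edge refinement would then be applied on top of such a coarse fusion.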
Original language | English |
---|---|
Article number | TCSVT-08872-2022.R2 |
Pages (from-to) | 1-16 |
Number of pages | 16 |
Journal | IEEE Transactions on Circuits and Systems for Video Technology |
Volume | 14 |
Issue number | 8 |
DOIs | |
Publication status | Published (in print/issue) - 29 Aug 2022 |
Bibliographical note
Publisher Copyright: IEEE
Keywords
- Image fusion
- Multi-exposure image
- Collaborative extraction
- Attention mechanism
- Edge revision
- Deep learning
- Fuses
- Image color analysis
- Image edge detection
- Feature extraction
- Task analysis