Abstract
Deep learning has made great progress in the field of image fusion. Compared with traditional methods, image fusion approaches based on deep learning require no cumbersome matrix operations. In this paper, an end-to-end model for infrared and visible image fusion is proposed. This unsupervised learning architecture does not employ a hand-crafted fusion strategy. In the feature extraction stage, residual dense blocks are used to generate the fused image, preserving the information of the source images to the greatest extent. In the feature reconstruction stage, shallow feature maps, residual dense information, and deep feature maps are merged to build the fused result. The gradient loss we propose for the network cooperates well with special weight blocks extracted from the input images to express texture details more clearly in the fused images. In the training phase, we select 20 source image pairs with distinct characteristics from the TNO dataset and expand them by random cropping to serve as the training dataset of the network. Subjective qualitative and objective quantitative results show that the proposed model has advantages over state-of-the-art methods in the task of infrared and visible image fusion. We also conduct ablation experiments on the RoadScene dataset to verify the effectiveness of the proposed network for infrared and visible image fusion.
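To make the components named in the abstract more concrete, the sketch below gives a minimal PyTorch rendering of a standard residual dense block and one common form of gradient loss. It is only an illustration: the layer counts, growth rate, Sobel-based gradient operator, and element-wise-maximum target are assumptions, and the paper's actual loss formulation and its weight blocks are not reproduced here.

```python
# Minimal sketch of a residual dense block and an illustrative gradient loss.
# All sizes and the loss formulation are assumptions, not the paper's exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualDenseBlock(nn.Module):
    """Standard residual dense block: densely connected 3x3 convolutions,
    a 1x1 fusion layer, and a local residual connection."""

    def __init__(self, channels: int = 64, growth: int = 32, num_layers: int = 4):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Conv2d(channels + i * growth, growth, kernel_size=3, padding=1)
            for i in range(num_layers)
        )
        self.fuse = nn.Conv2d(channels + num_layers * growth, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for conv in self.layers:
            # Each layer sees the concatenation of all previous feature maps.
            features.append(F.relu(conv(torch.cat(features, dim=1))))
        return x + self.fuse(torch.cat(features, dim=1))  # local residual learning


def sobel_gradient(img: torch.Tensor) -> torch.Tensor:
    """Per-pixel gradient magnitude via Sobel filters (single-channel input)."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)


def gradient_loss(fused: torch.Tensor, ir: torch.Tensor, vis: torch.Tensor) -> torch.Tensor:
    """Illustrative gradient loss: pull the fused image's gradients toward the
    element-wise stronger gradients of the two source images (an assumption;
    the paper combines its gradient loss with learned weight blocks)."""
    target = torch.maximum(sobel_gradient(ir), sobel_gradient(vis))
    return F.l1_loss(sobel_gradient(fused), target)
```

Such a loss term rewards the fused image for keeping the sharper of the two source gradients at every pixel, which is one straightforward way to encourage clearer texture details in the output.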
| Original language | English |
| --- | --- |
| Article number | 104486 |
| Pages (from-to) | 1-11 |
| Number of pages | 11 |
| Journal | Infrared Physics & Technology |
| Volume | 128 |
| Early online date | 6 Dec 2022 |
| DOIs | |
| Publication status | Published (in print/issue) - 12 Jan 2023 |
Bibliographical note
Funding Information: This work is supported by the National Key Technology R&D Program of China (No. 2018YFC0910500), the National Natural Science Foundation of China (Nos. 62272079, 61751203, 61972266, 61802040), the LiaoNing Revitalization Talents Program (No. XLYC2008017), the Innovation and Entrepreneurship Team of Dalian University (No. XQN202008), and the Natural Science Foundation of Liaoning Province (Nos. 2021-MS-344, 2021-KF-11-03, 2022-KF-12-14).
Publisher Copyright:
© 2022 Elsevier B.V.
Keywords
- Image fusion
- Unsupervised learning
- End-to-end model
- Infrared image
- Visible image