Multiscale aggregation and illumination-aware attention network for infrared and visible image fusion

Abstract

Image fusion plays a significant role in computer vision, since numerous applications benefit from the fusion results. Existing image fusion methods cannot perceive the most discriminative regions under varying illumination conditions; consequently, they fail to emphasize salient targets and discard the abundant texture details of the infrared and visible images. To address this problem, a multiscale aggregation and illumination-aware attention network (MAIANet) is proposed for infrared and visible image fusion. Specifically, MAIANet consists of four modules: a multiscale feature extraction module, a lightweight channel attention module, an image reconstruction module, and an illumination-aware module. The multiscale feature extraction module extracts multiscale features from the images. The lightweight channel attention module assigns a different weight to each channel so as to focus on the essential regions of the infrared and visible images. The illumination-aware module estimates the probability distribution of the illumination condition. Meanwhile, an illumination perception loss is formulated from the illumination probabilities to enable MAIANet to better adapt to changes in illumination. Experimental results on three datasets, namely MSRS, TNO, and RoadScene, verify the effectiveness of MAIANet in both qualitative and quantitative evaluations.
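The abstract does not give the internals of these modules, but a minimal PyTorch sketch can illustrate the two most distinctive ideas: a squeeze-and-excitation style channel attention block that reweights feature channels, and an intensity loss weighted by predicted illumination probabilities. The class names, the reduction ratio, the L1 distance, and the exact weighting scheme below are illustrative assumptions, not the paper's formulation.

```python
# Hypothetical sketch of two MAIANet-style components; the actual
# architecture and loss in the paper may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LightweightChannelAttention(nn.Module):
    """Squeeze-and-excitation style attention: global average pooling
    followed by a two-layer bottleneck that yields per-channel weights."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))   # (B, C) channel descriptors
        return x * w.view(b, c, 1, 1)     # reweight each channel map


def illumination_perception_loss(fused, ir, vis, p_day, p_night):
    """Intensity loss weighted by illumination probabilities: daytime
    scenes lean on the visible image, nighttime scenes on the infrared
    image (an assumed form of the illumination perception loss)."""
    return p_day * F.l1_loss(fused, vis) + p_night * F.l1_loss(fused, ir)


if __name__ == "__main__":
    feats = torch.randn(2, 32, 64, 64)
    out = LightweightChannelAttention(32)(feats)  # same shape, reweighted
    fused = torch.rand(2, 1, 64, 64)
    ir, vis = torch.rand_like(fused), torch.rand_like(fused)
    loss = illumination_perception_loss(fused, ir, vis, p_day=0.7, p_night=0.3)
```

In such a scheme the two probabilities would typically come from a small classification head (the illumination-aware module) and sum to one, so the loss shifts smoothly between the visible-dominated daytime case and the infrared-dominated nighttime case.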

Publication
Wiley Online Library
Mingliang Gao