In computer vision, multi-sensor image fusion is the process of combining relevant information from two or more images into a single image. The resulting image is more informative than any of the input images.
In remote sensing applications, the increasing availability of space-borne sensors motivates different image fusion algorithms. Several situations in image processing require both high spatial and high spectral resolution in a single image, but most available sensors cannot provide such data on their own. Image fusion techniques allow the integration of different sources of information, so that the fused image can have complementary spatial and spectral resolution characteristics. However, standard image fusion techniques can distort the spectral information of the multispectral data during fusion.
In satellite imaging, two types of images are available. The panchromatic image acquired by the satellite is transmitted at the highest available spatial resolution, while the multispectral data are transmitted at a coarser resolution, usually lower by a factor of two or four. At the receiving station, the panchromatic image is merged with the multispectral data to produce an image that conveys both kinds of information.
There are many methods for merging images. The most basic is the high-pass filtering technique. Later techniques are based on the Discrete Wavelet Transform (DWT), uniform rational filter banks, and the Laplacian pyramid.
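The high-pass filtering technique mentioned above can be sketched as follows: the high-frequency detail of the panchromatic image is extracted by subtracting a low-pass (blurred) version of it, and that detail is injected into an already upsampled multispectral band. This is a minimal sketch; the function names and the 3×3 box blur are illustrative choices, not from the source.

```python
import numpy as np

def box_blur(img, k=3):
    """Simple k x k mean filter with edge padding (stand-in for any low-pass filter)."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def hpf_fusion(pan, ms_band):
    """High-pass-filter fusion: inject panchromatic detail into one multispectral band."""
    # high-frequency detail = panchromatic image minus its low-pass version
    detail = pan - box_blur(pan)
    # ms_band is assumed to be already resampled to the panchromatic grid
    return ms_band + detail
```

Each multispectral band is fused independently with the same panchromatic detail, which is why the sketch operates on a single band.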
An image fusion algorithm using the DWT can be described in the following steps:
1. Resizing of the input images:
Given two input images (for example, image A and image B), both must be resized to the same square shape whose side length is a power of two.
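The resizing step can be sketched with nearest-neighbour resampling onto the next power-of-two square grid. This is a minimal sketch; the function name and the nearest-neighbour choice are illustrative assumptions, and any resampling method would do.

```python
import numpy as np

def to_pow2_square(img):
    """Resample a 2-D image onto an n x n grid, n = next power of two >= max(h, w)."""
    h, w = img.shape
    n = 1 << (max(h, w) - 1).bit_length()  # smallest power of two >= max(h, w)
    # nearest-neighbour index maps for rows and columns
    rows = np.arange(n) * h // n
    cols = np.arange(n) * w // n
    return img[rows][:, cols]
```

Applying this to both input images guarantees that their DWT subbands have matching shapes in the following steps.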
2. Computation of the two-dimensional DWT:
In this step, the 2D Discrete Wavelet Transform is applied to each of the resized images, decomposing it into a low-pass approximation subband and three detail subbands.
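The decomposition can be sketched with a hand-rolled one-level 2-D Haar transform, the simplest wavelet choice (the source does not prescribe a particular wavelet). The subband names LL, LH, HL, HH follow a common convention and are an assumption here.

```python
import numpy as np

def haar_dwt2(img):
    """One level of the 2-D Haar DWT: returns (LL, (LH, HL, HH))."""
    img = img.astype(float)
    # transform rows: orthonormal sum and difference of adjacent pixel pairs
    lo = (img[:, ::2] + img[:, 1::2]) / np.sqrt(2)
    hi = (img[:, ::2] - img[:, 1::2]) / np.sqrt(2)
    # transform columns of both intermediate results
    ll = (lo[::2] + lo[1::2]) / np.sqrt(2)   # approximation subband
    lh = (lo[::2] - lo[1::2]) / np.sqrt(2)   # horizontal detail
    hl = (hi[::2] + hi[1::2]) / np.sqrt(2)   # vertical detail
    hh = (hi[::2] - hi[1::2]) / np.sqrt(2)   # diagonal detail
    return ll, (lh, hl, hh)
```

In practice a library such as PyWavelets (`pywt.dwt2`) would be used instead of coding the transform by hand; the sketch only makes the subband structure explicit.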
3. Fusion rule:
The most commonly used fusion rule in the wavelet domain is maximum selection: the corresponding DWT coefficients of the two images are compared and the larger one is selected. While the low-pass subband is an approximation of the input image, the three detail subbands carry information about detail in the horizontal, vertical, and diagonal directions. Different fusion rules are applied to the approximation and detail subbands: the low-pass subbands are merged by simple averaging, since both contain approximations of the source images, while the detail subbands are merged by maximum selection.
4. Inverse discrete wavelet transform:
After the fused low-frequency and high-frequency subbands have been selected, the fused coefficients are passed through the inverse discrete wavelet transform (IDWT) to reconstruct the fused image.
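The reconstruction step can be sketched by inverting a one-level 2-D Haar decomposition. This is a minimal sketch assuming the fused coefficients are laid out as (LL, (LH, HL, HH)) from a matching orthonormal Haar analysis; in practice a library routine such as `pywt.idwt2` would be used.

```python
import numpy as np

def haar_idwt2(ll, details):
    """Invert one level of the 2-D Haar DWT with subbands (LL, (LH, HL, HH))."""
    lh, hl, hh = details
    h, w = ll.shape
    # undo the column transform: recover the row-lowpass and row-highpass images
    lo = np.empty((2 * h, w))
    hi = np.empty((2 * h, w))
    lo[::2], lo[1::2] = (ll + lh) / np.sqrt(2), (ll - lh) / np.sqrt(2)
    hi[::2], hi[1::2] = (hl + hh) / np.sqrt(2), (hl - hh) / np.sqrt(2)
    # undo the row transform: interleave sums and differences back into pixels
    img = np.empty((2 * h, 2 * w))
    img[:, ::2], img[:, 1::2] = (lo + hi) / np.sqrt(2), (lo - hi) / np.sqrt(2)
    return img
```

Because the Haar transform is orthonormal, applying this inverse to the fused subbands yields the final fused image at the full resolution of the resized inputs.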