Multifocus image fusion method for image acquisition of 3D objects

Appl Opt. 2018 Jun 1;57(16):4514-4523. doi: 10.1364/AO.57.004514.

Abstract

We propose a multifocus image fusion method for achieving all-in-focus images of three-dimensional objects based on the combination of transform-domain and spatial-domain techniques. First, the source images are decomposed into low-frequency and high-frequency components by the discrete wavelet transform. Next, a correlation coefficient is employed to measure the similarity among the low-frequency components. Then, to avoid disrupting the correlations among decomposition layers, the high-frequency components are compared after being transformed back to the spatial domain. In addition, a sliding window is used to evaluate the local saliency of the pixels more accurately. Finally, the fused image is synthesized from the source images and the saliency map. Variance, entropy, spatial frequency, mutual information, edge intensity, and the similarity measure (QAB/F) are used as metrics to evaluate the sharpness of the fused image. Experimental results demonstrate that the proposed method achieves better fusion performance than other widely used techniques. In the optical inspection of three-dimensional surfaces, the proposed method can obtain a complete, all-in-focus image of a scene captured at varying distances, providing a basis for subsequent defect identification.
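To make the pipeline described above concrete, the following is a minimal Python sketch of the same sequence of steps (DWT decomposition, correlation coefficient on the low-frequency bands, sliding-window saliency of the high-frequency content in the spatial domain, and pixel-wise synthesis from the source images). It assumes registered grayscale float images of equal size and uses PyWavelets, NumPy, and SciPy; the single-level db2 wavelet, the window size, the hard pixel-selection rule, and the synthetic demo are illustrative assumptions, not the exact rules of the paper.

```python
import numpy as np
import pywt
from scipy.ndimage import gaussian_filter, uniform_filter


def local_energy(band, win=9):
    """Sliding-window mean of squared values: a simple local activity score."""
    return uniform_filter(band * band, size=win)


def fuse_pair(img_a, img_b, wavelet="db2", win=9):
    """Fuse two registered grayscale source images (float arrays, same shape)."""
    # 1) Transform domain: single-level 2-D DWT of both sources.
    LLa, detail_a = pywt.dwt2(img_a, wavelet)
    LLb, detail_b = pywt.dwt2(img_b, wavelet)

    # 2) Correlation coefficient between the low-frequency (LL) bands as a
    #    similarity measure; the paper uses it to guide low-frequency fusion,
    #    here it is simply returned for inspection.
    r = np.corrcoef(LLa.ravel(), LLb.ravel())[0, 1]

    # 3) Spatial domain: reconstruct the high-frequency content of each source
    #    (LL set to zero) and score it with a sliding-window local energy.
    h, w = img_a.shape
    hi_a = pywt.idwt2((np.zeros_like(LLa), detail_a), wavelet)[:h, :w]
    hi_b = pywt.idwt2((np.zeros_like(LLb), detail_b), wavelet)[:h, :w]
    sal_a = local_energy(hi_a, win)
    sal_b = local_energy(hi_b, win)

    # 4) Saliency map and synthesis: take each pixel directly from the source
    #    judged sharper in its neighbourhood (hard selection is an assumption).
    mask = sal_a >= sal_b
    fused = np.where(mask, img_a, img_b)
    return fused, mask, r


if __name__ == "__main__":
    # Synthetic demo: a textured scene, defocused on opposite halves.
    rng = np.random.default_rng(0)
    scene = gaussian_filter(rng.random((256, 256)), 1.0)
    near, far = scene.copy(), scene.copy()
    near[:, 128:] = gaussian_filter(scene, 3.0)[:, 128:]  # right half blurred
    far[:, :128] = gaussian_filter(scene, 3.0)[:, :128]   # left half blurred
    fused, mask, r = fuse_pair(near, far)
    print(f"LL correlation: {r:.3f}, pixels taken from 'near': {mask.mean():.2%}")
```

In practice, a binary decision map produced this way is usually smoothed (for example with a majority or median filter) before synthesis to avoid seams at focus boundaries; that refinement is omitted here for brevity and is not part of the abstract's description.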