Qingyu Liu, Lei Chen, Yeguo Sun, and Lei Chen

Masked Face Image Inpainting Based on Generative Adversarial Network

Face image inpainting is a challenging task in computer vision due to the intricate semantic and textural structure of human faces. While existing deep learning-based methods have made progress, they often produce blurred or artifact-prone results when handling large occlusions such as face masks. To address these challenges, this paper proposes a novel generative adversarial network (GAN) framework tailored for masked face inpainting. The generator adopts a U-Net architecture enhanced with a multi-scale mixed-attention residual module (MMRM), which integrates multi-branch convolutions to capture diverse receptive fields and combines spatial and channel attention mechanisms to prioritize semantically relevant features. The decoder further enhances feature fusion through a channel attention mechanism that selectively emphasizes meaningful patterns during feature-map reconstruction. A realistic masked face dataset is synthesized from the CelebA database by dynamically adjusting mask positions, sizes, and angles based on facial landmarks, ensuring alignment with real-world scenarios. Quantitative and qualitative evaluations demonstrate that the proposed method outperforms conventional models both visually and on quantitative metrics. Ablation studies further validate the effectiveness of the MMRM and the attention mechanisms in preserving structural coherence and reducing artifacts.
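
The abstract does not spell out the MMRM's internals, but a rough PyTorch sketch can make the idea concrete: parallel convolution branches with different receptive fields are fused, weighted by channel and spatial attention, and added back to the input as a residual. The branch kernel sizes, attention designs, and hyper-parameters (e.g., the reduction ratio) below are illustrative assumptions, not the authors' exact module.

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (assumed design)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(self.pool(x))

class SpatialAttention(nn.Module):
    """Spatial attention over pooled channel statistics (assumed design)."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        return x * torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class MMRM(nn.Module):
    """Illustrative multi-scale mixed-attention residual block:
    three branches with different receptive fields, fused and
    reweighted by channel and spatial attention, plus a skip."""
    def __init__(self, channels):
        super().__init__()
        self.branch1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.branch2 = nn.Conv2d(channels, channels, 3, padding=2, dilation=2)
        self.branch3 = nn.Conv2d(channels, channels, 5, padding=2)
        self.fuse = nn.Conv2d(3 * channels, channels, 1)
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        feats = torch.cat(
            [self.branch1(x), self.branch2(x), self.branch3(x)], dim=1
        )
        out = self.sa(self.ca(self.act(self.fuse(feats))))
        return self.act(out + x)

# Example: one MMRM block on a 64-channel feature map.
block = MMRM(64)
y = block(torch.randn(1, 64, 32, 32))
print(y.shape)  # torch.Size([1, 64, 32, 32])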
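
Similarly, the landmark-driven dataset synthesis can be approximated by affine-warping an RGBA mask template onto each CelebA face, which varies the mask's position, size, and angle with the face geometry. The sketch below assumes 68-point dlib-style landmarks and hypothetical anchor indices; the paper's actual pipeline may differ.

import cv2
import numpy as np

def overlay_mask(face_bgr, landmarks, mask_rgba):
    """Warp a mask template onto a face via three landmark anchors.

    landmarks: (68, 2) array in the dlib convention (an assumption);
    the jaw and chin indices used here are a hypothetical choice.
    """
    left_jaw = landmarks[2]
    right_jaw = landmarks[14]
    chin = landmarks[8]

    h, w = mask_rgba.shape[:2]
    # Map the template's top corners and bottom-center onto the anchors;
    # one affine transform adjusts position, scale, and rotation at once.
    src = np.float32([[0, 0], [w, 0], [w / 2, h]])
    dst = np.float32([left_jaw, right_jaw, chin])
    M = cv2.getAffineTransform(src, dst)

    out_h, out_w = face_bgr.shape[:2]
    warped = cv2.warpAffine(mask_rgba, M, (out_w, out_h))

    # Alpha-blend the warped mask over the face image.
    alpha = warped[:, :, 3:4].astype(np.float32) / 255.0
    blended = face_bgr.astype(np.float32) * (1 - alpha) + warped[:, :, :3] * alpha
    return blended.astype(np.uint8)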

Reference:

DOI: 10.36244/ICJ.2025.2.2


Please cite this paper as follows:

Qingyu Liu, Lei Chen, Yeguo Sun, and Lei Chen, "Masked Face Image Inpainting Based on Generative Adversarial Network", Infocommunications Journal, Vol. XVII, No. 2, June 2025, pp. x-y, https://doi.org/10.36244/ICJ.2025.2.2