A Novel Generative Image Inpainting Model with Dense Gated Convolutional Network
[Abstract] Inpainting of damaged images is one of the most active research areas in computer image processing. The development of deep learning, especially the Convolutional Neural Network (CNN), has substantially improved inpainting quality. However, direct connections between convolutional layers can increase the risk of vanishing gradients or overfitting during training. In addition, pixel artifacts or visual inconsistencies may appear if the damaged area is inpainted directly. To address these problems, this paper proposes a novel Dense Gated Convolutional Network (DGCN) for generative image inpainting that modifies the gated convolutional network structure. First, a holistically-nested edge detector (HED) predicts the edge structure of the missing regions to guide the subsequent inpainting and reduce artifacts. Second, dense connections are added to the generative network to reduce the number of parameters and the risk of unstable training. Finally, experiments on the CelebA and Places2 datasets show that the proposed model achieves better inpainting results in terms of PSNR, SSIM, and visual quality than other classical image inpainting models. DGCN combines the advantages of gated convolution and dense connectivity, reducing network parameters while improving the inpainting performance of the network.
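To make the two core ingredients of the abstract concrete, the following is a minimal sketch of a gated convolution layer and a densely connected block in the spirit of DGCN. It assumes a PyTorch-style implementation; the channel sizes, activations, and the way the HED edge map is fused with the masked image are illustrative assumptions, not the authors' released code.

import torch
import torch.nn as nn

class GatedConv2d(nn.Module):
    """Gated convolution: a feature branch modulated by a learned soft gate."""
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        self.feature = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)
        self.gate = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)

    def forward(self, x):
        # Element-wise gating lets the network learn which pixels are valid.
        return torch.relu(self.feature(x)) * torch.sigmoid(self.gate(x))

class DenseGatedBlock(nn.Module):
    """Dense block: each gated convolution receives the concatenation of all
    earlier feature maps, shortening gradient paths and reusing features."""
    def __init__(self, in_ch, growth=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(
            GatedConv2d(in_ch + i * growth, growth) for i in range(num_layers)
        )

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)

# Hypothetical usage: the damaged image, binary mask, and an HED edge map
# are concatenated channel-wise and fed to the dense gated block.
image = torch.randn(1, 3, 256, 256)   # damaged RGB image
mask = torch.ones(1, 1, 256, 256)     # 1 = missing region
edges = torch.randn(1, 1, 256, 256)   # predicted edge map (e.g. from HED)
block = DenseGatedBlock(in_ch=5)
out = block(torch.cat([image, mask, edges], dim=1))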
[Subject Classification] Automation Engineering
[Keywords] Densely Connected Convolutional Networks; Gated Convolution; Image Inpainting; Generative Adversarial Networks