Learning a 3D-CNN and Transformer prior for hyperspectral image super-resolution
[Abstract] To address the ill-posed problem of hyperspectral image super-resolution (HSISR), a commonly employed technique is to design a regularization term based on the prior information of hyperspectral images (HSIs) to effectively constrain the objective function. Traditional model-based methods that rely on manually crafted priors are insufficient to fully characterize the properties of HSIs. Learning-based methods usually use a convolutional neural network (CNN) to learn the implicit priors of HSIs. However, the learning ability of a CNN is limited: it considers only the spatial characteristics of HSIs while ignoring the spectral characteristics, and convolution is not effective at modeling long-range dependencies, so there remains considerable room for improvement. In this paper, we propose a novel HSISR method that leverages the Transformer architecture instead of a CNN to learn the prior of HSIs. Specifically, we employ the proximal gradient algorithm to solve the HSISR model and simulate the iterative solution process with an unfolding network. The self-attention layers of the Transformer enable global spatial interaction, while a 3D-CNN placed after the Transformer layers better captures the spatio-spectral correlation of HSIs. Both quantitative and visual results on three widely used HSI datasets and a real-world dataset demonstrate that the proposed method achieves a considerable gain over all mainstream algorithms, including the most competitive conventional methods and recently proposed deep learning-based methods. The source code and trained models are publicly available at https://github.com/qingma2016/3DT-Net.
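As a rough illustration of the unfolding scheme described in the abstract, the sketch below pairs a data-fidelity gradient step with a learned Transformer/3D-CNN prior acting as the proximal operator. This is a minimal PyTorch sketch under assumed tensor shapes and layer sizes; the names (`TransformerPrior`, `unfolded_sr`, `degrade`, `degrade_T`) and all hyperparameters are illustrative and are not taken from the released 3DT-Net code.

```python
import torch
import torch.nn as nn

class TransformerPrior(nn.Module):
    """Hypothetical prior module: a Transformer encoder layer for global
    spatial interaction, followed by a 3D convolution to model
    spatio-spectral correlation. Layer sizes are illustrative only."""
    def __init__(self, bands=31, embed_dim=64, heads=4):
        super().__init__()
        self.embed = nn.Linear(bands, embed_dim)
        self.attn = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=heads, batch_first=True)
        self.unembed = nn.Linear(embed_dim, bands)
        # 3D conv treats (band, height, width) as a single-channel volume
        self.conv3d = nn.Conv3d(1, 1, kernel_size=3, padding=1)

    def forward(self, x):                        # x: (B, bands, H, W)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)    # (B, H*W, bands): pixels as tokens
        tokens = self.unembed(self.attn(self.embed(tokens)))
        z = tokens.transpose(1, 2).reshape(b, c, h, w)
        z = self.conv3d(z.unsqueeze(1)).squeeze(1)   # spatio-spectral refinement
        return x + z                             # residual prior update


def unfolded_sr(y, degrade, degrade_T, prior, steps=5, eta=0.5):
    """One possible unfolding of the proximal gradient iteration
    x <- prox(x - eta * A^T(A x - y)), with the learned prior standing in
    for the proximal operator. `degrade` / `degrade_T` are placeholders
    for the (unspecified) degradation operator and its adjoint."""
    x = degrade_T(y)                             # simple initialization
    for _ in range(steps):
        x = x - eta * degrade_T(degrade(x) - y)  # data-fidelity gradient step
        x = prior(x)                             # learned proximal mapping
    return x
```

In this sketch each unfolding stage alternates a gradient step on the data-fidelity term with the learned prior network, mirroring the iterative solution process the abstract describes; in practice the prior would be trained end-to-end across all stages.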
[Publication Date] 2023-12-01
[Keywords] FACE RECOGNITION; SUPERRESOLUTION; FUSION