Abstract
Multispectral (MS) sensors typically have low spatial resolution (LR), which limits their usefulness in remote-sensing applications. Sparse representation-based single-image super-resolution (SISR) that relies on a patch-based dictionary alone does not recover edge information from LR images satisfactorily. To overcome this, we propose a parallel SISR framework based on edge-preserving dictionary learning and sparse representations on compute unified device architecture (CUDA)-enabled graphics processing units (GPUs). To recover edges, multiple coupled dictionaries are learned, namely patch-based dictionaries for scale-invariant feature transform (SIFT) keypoint and non-keypoint patches. A joint reconstruction model is also designed based on SIFT keypoint-guided patch sparsity and non-local total variation (NLTV)-based gradient sparsity. Simulation results show that the proposed method not only performs better than state-of-the-art methods in terms of visual quality and objective criteria, but also improves runtime, implying great potential for real-time applications.
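As a rough illustration of the keypoint/non-keypoint patch partitioning idea summarised above, the sketch below (not the authors' implementation) uses OpenCV's SIFT detector to split overlapping LR patches into keypoint and non-keypoint groups and learns a separate dictionary for each with scikit-learn's MiniBatchDictionaryLearning. All function names, file names, patch sizes, and sparsity settings are illustrative assumptions; the paper's coupled LR/HR dictionary training, NLTV-regularised joint reconstruction, and CUDA parallelisation are not reproduced here.

```python
import numpy as np
import cv2
from sklearn.decomposition import MiniBatchDictionaryLearning


def extract_patches(img, size=8, step=4):
    """Collect overlapping patches (flattened) and their top-left coordinates."""
    patches, coords = [], []
    h, w = img.shape
    for y in range(0, h - size + 1, step):
        for x in range(0, w - size + 1, step):
            patches.append(img[y:y + size, x:x + size].ravel())
            coords.append((y, x))
    return np.asarray(patches, dtype=np.float64), coords


def split_by_sift_keypoints(img, coords, size=8):
    """Flag a patch as a 'keypoint patch' if any SIFT keypoint falls inside it."""
    sift = cv2.SIFT_create()
    keypoints = sift.detect(img.astype(np.uint8), None)
    kp_xy = np.array([kp.pt for kp in keypoints]) if keypoints else np.empty((0, 2))
    is_kp = np.zeros(len(coords), dtype=bool)
    for i, (y, x) in enumerate(coords):
        if kp_xy.size and np.any(
            (kp_xy[:, 0] >= x) & (kp_xy[:, 0] < x + size)
            & (kp_xy[:, 1] >= y) & (kp_xy[:, 1] < y + size)
        ):
            is_kp[i] = True
    return is_kp


def learn_group_dictionaries(patches, is_kp, n_atoms=256, sparsity=5):
    """Learn one dictionary per patch group (keypoint / non-keypoint),
    with OMP-based sparse coding configured for later use."""
    dictionaries = {}
    for label, mask in (("keypoint", is_kp), ("non_keypoint", ~is_kp)):
        data = patches[mask]
        data = data - data.mean(axis=1, keepdims=True)  # remove per-patch mean
        learner = MiniBatchDictionaryLearning(
            n_components=n_atoms,
            transform_algorithm="omp",
            transform_n_nonzero_coefs=sparsity,
            batch_size=64,
            random_state=0,
        )
        learner.fit(data)
        dictionaries[label] = learner
    return dictionaries


if __name__ == "__main__":
    lr = cv2.imread("lr_band.png", cv2.IMREAD_GRAYSCALE)  # hypothetical LR MS band
    patches, coords = extract_patches(lr)
    is_kp = split_by_sift_keypoints(lr, coords)
    dicts = learn_group_dictionaries(patches, is_kp)
    # Sparse codes of the keypoint patches against the keypoint dictionary:
    kp_data = patches[is_kp] - patches[is_kp].mean(axis=1, keepdims=True)
    codes = dicts["keypoint"].transform(kp_data)
```

Splitting the patch set before dictionary learning is what lets edge-bearing (keypoint) patches be coded against atoms trained only on similar structures; a single shared dictionary would blend edge and smooth-region statistics, which is the shortcoming the abstract attributes to patch-based dictionaries used alone.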
| Original language | English |
| --- | --- |
| Article number | 312 |
| Pages (from-to) | 1-22 |
| Number of pages | 22 |
| Journal | SN Computer Science |
| DOIs | |
| Publication status | Published (in print/issue) - 8 Apr 2023 |
Data Access Statement
The authors confirm that all data except LISS-III and LISS-IV can be made available as part of the article upon reasonable request. However, the LISS-III and LISS-IV data are obtained from the Indian Space Research Organization (ISRO), Dept. of Space (DoS), Govt. of India, so prior written permission from ISRO may be required before these data can be made available from the corresponding author upon reasonable request.

Keywords
- SIFT
- Super-resolution imaging
- Sparse representations
- Dictionary learning
- CUDA-enabled GP-GPU