Attention-Guided Dynamic Model Selection for Single Image Super-Resolution Using Deep Ensemble Learning
Keywords:
Super-resolution; single image super-resolution (SISR); attention mechanism; ensemble deep learning; model selection; image reconstruction
Abstract
The rapid growth of digital imaging technologies has made high-quality visual data increasingly accessible; however, the storage, transmission, and restoration of high-resolution images remain challenging in bandwidth-limited and resource-constrained environments. Although compression methods reduce file size, they may discard critical details required for scientific, medical, remote-sensing, and security applications. To address this limitation, this study proposes an attention-guided dynamic ensemble framework for Single Image Super-Resolution (SISR). The proposed method integrates several representative super-resolution models, including LapSRN, SRResNet, ResNeXt-based SR, SRCNN/FSRCNN, and ESPCN, and uses an attention-guided selection module to assign the most suitable model to each image region based on local characteristics such as edges, textures, and smooth areas. The selected outputs are then fused by a convolutional integration network to produce the final high-resolution image. Experiments on DIV2K and BSDS300 show that the proposed method improves reconstruction quality, particularly in structural similarity and texture preservation. On DIV2K, the proposed method achieved 33.40 dB PSNR and 0.9172 SSIM; on BSDS300, it achieved 28.13 dB PSNR and 0.8497 SSIM. These findings indicate that dynamic model selection can mitigate the limitations of individual super-resolution models and improve detail recovery in feature-diverse images.
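The attention-guided fusion described in the abstract can be illustrated with a minimal sketch. The snippet below is not the paper's implementation; the candidate predictions, the attention logits, and the per-pixel softmax-weighted fusion are illustrative assumptions showing how region-level attention scores could combine outputs from several super-resolution models.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of attention logits."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def fuse(candidate_values, attention_logits):
    """Attention-weighted fusion of per-model predictions for one pixel.

    candidate_values: predicted intensities from K candidate SR models.
    attention_logits: hypothetical scores from a selection module
    (e.g., favoring an edge-oriented model in high-gradient regions).
    """
    weights = softmax(attention_logits)
    return sum(w * v for w, v in zip(weights, candidate_values))

# Example: three candidate models predict slightly different values for
# one pixel; the (fixed, illustrative) logits favor the second model.
preds = [0.40, 0.55, 0.48]
logits = [0.1, 2.0, 0.5]
fused = fuse(preds, logits)
```

In the full framework, such weights would be produced per region by the attention module and the weighted outputs refined by the convolutional integration network rather than summed directly.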

