Browsing by Author "Gunturk, B. K."
Now showing 1 - 2 of 2
Article | Metadata only
Deep learning-based blind image super-resolution with iterative kernel reconstruction and noise estimation (Elsevier, 2023-08)
Ateş, Hasan Fehmi; Yildirim, S.; Gunturk, B. K.; Electrical & Electronics Engineering
Blind single image super-resolution (SISR) is a challenging task in image processing due to the ill-posed nature of the inverse problem. The complex degradations present in real-life images make this problem difficult to solve with naïve deep learning approaches, where models are often trained on synthetically generated image pairs. Most effort so far has focused on solving the inverse problem under constraints, such as a limited space of blur kernels and/or the assumption of noise-free input images. Yet the literature still lacks a well-generalized deep learning-based solution that performs well on images with unknown and highly complex degradations. In this paper, we propose IKR-Net (Iterative Kernel Reconstruction Network) for blind SISR. In the proposed approach, kernel estimation, noise estimation, and high-resolution image reconstruction are carried out iteratively by dedicated deep models. The iterative refinement yields significant improvement in both the reconstructed image and the estimated blur kernel, even for noisy inputs. IKR-Net provides a generalized solution that can handle any type of blur and any level of noise in the input low-resolution image, and it achieves state-of-the-art results in blind SISR, especially for noisy images with motion blur.

Conference Object | Metadata only
Dual camera based high spatio-temporal resolution video generation for wide area surveillance (IEEE, 2022)
Suluhan, Hasan Umut; Ates, H. F.; Gunturk, B. K.
Wide area surveillance (WAS) requires high spatio-temporal resolution (HSTR) video for better precision. As an alternative to expensive WAS systems, low-cost hybrid imaging systems can be used. This paper presents the use of multiple video feeds for the generation of HSTR video as an extension of reference-based super-resolution (RefSR). One feed captures video at high spatial resolution and low frame rate (HSLF), while the other simultaneously captures low spatial resolution, high frame rate (LSHF) video of the same scene. The goal is to create an HSTR video by fusing the HSLF and LSHF videos. We propose an end-to-end trainable deep network that performs optical flow (OF) estimation and frame reconstruction by combining inputs from both video feeds. The proposed architecture provides significant improvement over existing video frame interpolation and RefSR techniques in terms of the PSNR and SSIM metrics, and it can be deployed on drones with dual cameras.
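As a rough illustration of the dual-feed setup in the second entry (not the authors' network), the sketch below merges an HSLF feed with an LSHF feed into a single HSTR sequence: high-resolution frames are kept where they exist, and the gaps are filled from the temporally aligned low-resolution frames. The function names, the frame-rate ratio handling, and the nearest-neighbor upsampling are illustrative placeholders for the learned OF-based reconstruction described in the abstract.

```python
import numpy as np

def upsample_nn(frame, scale):
    """Nearest-neighbor spatial upsampling (stand-in for learned reconstruction)."""
    return np.repeat(np.repeat(frame, scale, axis=0), scale, axis=1)

def fuse_hstr(hslf, lshf, rate_ratio, scale):
    """Build an HSTR sequence from the two feeds.

    hslf: high-res frames captured every `rate_ratio` time steps
    lshf: low-res frames captured at every time step
    """
    hstr = []
    for t, lr in enumerate(lshf):
        if t % rate_ratio == 0:
            # A high-resolution frame exists at this time step: use it directly.
            hstr.append(hslf[t // rate_ratio])
        else:
            # No high-resolution frame here: fill the gap from the LSHF feed
            # (a learned fusion network would replace this naive upsampling).
            hstr.append(upsample_nn(lr, scale))
    return hstr
```

In the actual paper the gap frames would instead be synthesized from both feeds jointly, with optical flow aligning neighboring high-resolution frames.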
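The iterative refinement idea in the first entry (alternating between blur-kernel estimation and image reconstruction) can be sketched at toy scale with 1D blind deconvolution. This is only a conceptual stand-in: the paper's dedicated deep models are replaced here by plain gradient steps, noise estimation is omitted, and all names and sizes are illustrative.

```python
import numpy as np

def iterative_blind_deconv(y, x_len, k_len, steps=300, lr=0.005):
    """Alternately refine the signal x and the blur kernel k so that
    convolve(x, k) matches the blurred observation y (toy 1D analogue
    of alternating kernel/image refinement; not the paper's method)."""
    x = np.full(x_len, y.mean())      # crude initial signal estimate
    k = np.full(k_len, 1.0 / k_len)   # start from a uniform blur kernel
    for _ in range(steps):
        # Kernel refinement step: gradient of ||x * k - y||^2 w.r.t. k.
        r = np.convolve(x, k) - y
        k = k - lr * np.correlate(r, x, mode="valid")
        # Image refinement step: gradient of ||x * k - y||^2 w.r.t. x.
        r = np.convolve(x, k) - y
        x = x - lr * np.correlate(r, k, mode="valid")
    return x, k
```

Each pass improves both estimates jointly, mirroring how the iterative loop in the abstract refines the reconstructed image and the estimated kernel together.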