Super Image Resolution with Deep Learning Generative Models

Abstract
Single image super-resolution is a long-studied inverse problem that aims to infer a high-resolution (HR) image from a single low-resolution (LR) one. Sometimes an HR image can be inferred not only from its corresponding LR image but also with the guidance of an image from a different modality, e.g., RGB-guided depth image super-resolution; this is called multi-modal image super-resolution. In this thesis, we develop methods based on sparse modelling and deep neural networks to tackle both the single and multi-modal image super-resolution problems. We also extend the applications to general multi-modal image restoration and image fusion tasks.

Firstly, we present a method for single image super-resolution that aims to achieve the best trade-off between objective and perceptual image quality. We show that objective and perceptual quality are influenced by different elements of an image, and we use the stationary wavelet transform to separate these elements. A novel wavelet-domain style transfer algorithm is proposed to achieve the best trade-off between image distortion and perception.

Next, we develop a robust algorithm for RGB-guided depth image super-resolution by combining finite rate of innovation (FRI) theory and a multi-modal dictionary learning algorithm. In addition, to speed up the super-resolution process, we introduce a projection-based rapid upscaling algorithm that pre-calculates the projections from joint LR depth and HR intensity pairs to the HR depth.

Keywords: Finite Rate of Innovation (FRI), High-Resolution (HR), Low-Resolution (LR).
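The idea of separating the image elements that drive objective quality from those that drive perceptual quality can be illustrated with a stationary (undecimated) wavelet-style decomposition. The sketch below is a minimal one-level, Haar-like à trous split in NumPy, not the thesis's actual algorithm; the function and variable names are illustrative assumptions:

```python
import numpy as np

def stationary_split(img):
    """One level of an undecimated low-pass/detail decomposition.

    The low-pass band holds the coarse structure that mostly determines
    objective quality (e.g., PSNR); the detail band holds edges and
    textures that mostly determine perceptual quality.
    """
    kernel = np.array([0.25, 0.5, 0.25])  # simple smoothing filter
    # Filter rows, then columns, without downsampling (stationary).
    low = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    low = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, low)
    detail = img - low  # high-frequency residual
    return low, detail

img = np.random.rand(8, 8)
low, detail = stationary_split(img)
# The split is exactly invertible: img == low + detail.
assert np.allclose(low + detail, img)
```

Because the decomposition is undecimated and additive, the two bands can be processed independently (e.g., distortion-oriented restoration of `low`, perception-oriented style transfer on `detail`) and summed back without resampling artefacts.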