Multi-modal convolutional parameterisation network for guided image inverse problems

Research output: Contribution to journal › Article › peer-review


Abstract

There are several image inverse tasks, such as inpainting or super-resolution, that can be solved using deep internal learning, a paradigm in which a deep neural network finds a solution by learning from the sample itself rather than from a dataset. For example, Deep Image Prior fits a convolutional neural network to reproduce the known parts of the image (such as non-inpainted regions or a low-resolution version of the image). However, this approach is not well suited to samples composed of multiple modalities. In some domains, such as satellite image processing, accommodating multi-modal representations can be beneficial or even essential. In this work, the Multi-Modal Convolutional Parameterisation Network (MCPN) is proposed, in which a convolutional neural network approximates the information shared between multiple modes by combining a core shared network with modality-specific head networks. The results demonstrate that this approach can significantly outperform a single-mode convolutional parameterisation network on the guided image inverse problems of inpainting and super-resolution.
Original language: English
Article number: 69
Number of pages: 23
Journal: Journal of Imaging
Volume: 10
Issue number: 3
DOIs
Publication status: Published - 12 Mar 2024

Keywords

  • image synthesis
  • internal learning
  • image inpainting
  • image super-resolution
  • multi-modal learning
