Joint learning of variational representations and solvers for inverse problems with partially-observed data
Abstract
Designing appropriate variational regularization schemes is a crucial part of solving inverse problems: it makes them better-posed and guarantees that the solution of the associated optimization problem satisfies desirable properties. Recently, learning-based strategies have proven very efficient for solving inverse problems, by learning direct inversion schemes or plug-and-play regularizers from available pairs of true states and observations. In this paper, we go a step further and design an end-to-end framework that learns actual variational models for inverse problems in such a supervised setting. Both the variational cost and the gradient-based solver are stated as neural networks, using automatic differentiation for the latter. We jointly learn both components so as to minimize the data reconstruction error on the true states. This leads to a data-driven discovery of variational models. We consider an application to inverse problems with incomplete datasets (image inpainting and multivariate time series interpolation). We experimentally illustrate that this framework can yield a significant gain in reconstruction performance, including with respect to the direct minimization of the variational formulation derived from the known generative model.
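To make the idea concrete, the snippet below is a minimal sketch of such a scheme for masked observations: a trainable variational cost (data-fidelity term plus a learned prior) is paired with an unrolled gradient-descent solver whose inner gradients are obtained by automatic differentiation, and both are trained jointly against the true states. This is an illustrative PyTorch sketch under our own assumptions, not the authors' implementation; all names (`VariationalCost`, `UnrolledGradientSolver`, `phi`, the learned step size) are hypothetical.

```python
import torch
import torch.nn as nn


class VariationalCost(nn.Module):
    # Trainable cost U(x, y) = ||mask * (x - y)||^2 + lambda * ||x - phi(x)||^2,
    # where phi is a small convolutional prior (illustrative choice).
    def __init__(self, channels=1):
        super().__init__()
        self.phi = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )
        self.log_lam = nn.Parameter(torch.zeros(()))  # trade-off weight, learned in log scale

    def forward(self, x, y, mask):
        data_term = ((mask * (x - y)) ** 2).mean()    # fidelity on observed entries only
        prior_term = ((x - self.phi(x)) ** 2).mean()  # learned regularization term
        return data_term + self.log_lam.exp() * prior_term


class UnrolledGradientSolver(nn.Module):
    # Fixed number of gradient steps on the learned cost; the inner gradient dU/dx
    # comes from automatic differentiation, and create_graph=True keeps the steps
    # differentiable so the whole scheme can be trained end to end.
    def __init__(self, cost, n_iter=10):
        super().__init__()
        self.cost, self.n_iter = cost, n_iter
        self.step = nn.Parameter(torch.tensor(0.1))   # learned step size

    def forward(self, x0, y, mask):
        x = x0.clone().requires_grad_(True)
        for _ in range(self.n_iter):
            u = self.cost(x, y, mask)
            (grad,) = torch.autograd.grad(u, x, create_graph=True)
            x = x - self.step * grad
        return x


def training_step(solver, optimizer, x_true, y, mask):
    # Joint update of the cost and solver parameters so that the reconstruction
    # matches the true state (supervised setting).
    x_rec = solver(mask * y, y, mask)
    loss = ((x_rec - x_true) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In such a setup, a training loop would call `training_step` over mini-batches of (true state, observation, mask) triplets; at test time, the learned solver alone is applied to new partially-observed data.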