Sitemap

A list of all the posts and pages found on the site. For you robots out there, there is an XML version available for digesting as well.

Pages

Posts

Future Blog Post

less than 1 minute read

Published:

This post will show up by default. To disable scheduling of future posts, edit _config.yml and set future: false.
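For reference, here is the relevant setting in Jekyll's _config.yml at the site root (a minimal sketch of the standard academicpages setup):

```yaml
# _config.yml
future: false   # posts dated in the future are not published when the site builds
```

After rebuilding the site with this setting, any post whose date is still in the future is hidden until that date passes.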

Blog Post number 4

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 3

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 2

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 1

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

news

ICCV

Published:

CANUM

Published:

portfolio

publications

Learning Local Regularization for Variational Image Restoration

Published in Scale Space and Variational Methods in Computer Vision: 8th International Conference (SSVM 2021), 2021

In this work, we propose a framework to learn a local regularization model for solving general image restoration problems. This regularizer is defined with a fully convolutional neural network that sees the image through a receptive field corresponding to small image patches. The regularizer is then learned as a critic between unpaired distributions of clean and degraded patches using a Wasserstein generative adversarial network (WGAN) based energy. This yields a regularization function that can be incorporated in any image restoration problem. The efficiency of the framework is finally demonstrated on denoising and deblurring applications.

Recommended citation: Jean Prost, Antoine Houdard, Andrés Almansa, Nicolas Papadakis https://link.springer.com/chapter/10.1007/978-3-030-75549-2_29
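A rough formalization consistent with the abstract above (the notation is ours, not necessarily the paper's): the local regularizer $r_\theta$, a fully convolutional critic acting on image patches $P_i x$, is first trained with a WGAN-type objective on unpaired clean and degraded patches, and is then plugged into a variational restoration energy:

$$\max_{\theta\,:\,r_\theta\ \text{1-Lipschitz}}\ \mathbb{E}_{p \sim \mathcal{P}_{\mathrm{degraded}}}\big[r_\theta(p)\big] \;-\; \mathbb{E}_{p \sim \mathcal{P}_{\mathrm{clean}}}\big[r_\theta(p)\big],
\qquad
\hat{x} \in \arg\min_{x}\ \tfrac{1}{2}\,\|Ax - y\|^2 \;+\; \lambda \sum_i r_\theta(P_i x).$$

With this sign convention the critic scores degraded patches higher than clean ones, so minimizing the sum of patch scores drives the restored image toward the clean-patch distribution; $A$ and $y$ denote the degradation operator and the observation.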

SCOTCH and SODA: A Transformer Video Shadow Detection Framework

Published in Computer Vision and Pattern Recognition (CVPR 2023), 2023

Shadows in videos are difficult to detect because of the large shadow deformation between frames. In this work, we argue that accounting for shadow deformation is essential when designing a video shadow detection method. To this end, we introduce the shadow deformation attention trajectory (SODA), a new type of video self-attention module, specially designed to handle the large shadow deformations in videos. Moreover, we present a new shadow contrastive learning mechanism (SCOTCH) which aims at guiding the network to learn a unified shadow representation from massive positive shadow pairs across different videos. We demonstrate empirically the effectiveness of our two contributions in an ablation study. Furthermore, we show that SCOTCH and SODA significantly outperform existing techniques for video shadow detection. Code is available at the project page: https://lihaoliu-cambridge.github.io/scotch_and_soda/

Recommended citation: Lihao Liu, Jean Prost, Lei Zhu, Nicolas Papadakis, Pietro Lio, Carola-Bibiane Schönlieb, Angelica I Aviles-Rivero https://openaccess.thecvf.com/content/CVPR2023/papers/Liu_SCOTCH_and_SODA_A_Transformer_Video_Shadow_Detection_Framework_CVPR_2023_paper.pdf
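To make the contrastive-learning idea in the abstract concrete, below is a generic InfoNCE-style loss between shadow and non-shadow feature embeddings. This is a hedged sketch of the general mechanism, not the exact SCOTCH loss; all names are placeholders.

```python
import torch
import torch.nn.functional as F

def shadow_contrastive_loss(shadow_feats, nonshadow_feats, temperature=0.1):
    """Generic InfoNCE-style loss: pull shadow embeddings together,
    push them away from non-shadow embeddings (illustrative only)."""
    z_s = F.normalize(shadow_feats, dim=1)        # (N, d) shadow-region embeddings
    z_n = F.normalize(nonshadow_feats, dim=1)     # (M, d) non-shadow embeddings
    n = z_s.size(0)
    pos = z_s @ z_s.t() / temperature             # shadow-shadow similarities (positives)
    pos.fill_diagonal_(float("-inf"))             # exclude self-similarity
    neg = z_s @ z_n.t() / temperature             # shadow vs. non-shadow similarities (negatives)
    logits = torch.cat([pos, neg], dim=1)         # (N, N + M)
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    # positives for each anchor: every other shadow embedding
    pos_mask = torch.cat(
        [~torch.eye(n, dtype=torch.bool),
         torch.zeros(n, z_n.size(0), dtype=torch.bool)],
        dim=1,
    )
    return -log_prob[pos_mask].mean()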

Inverse problem regularization with hierarchical variational autoencoders

Published in International Conference on Computer Vision (ICCV 2023), 2023

In this paper, we propose to regularize ill-posed inverse problems using a deep hierarchical variational autoencoder (HVAE) as an image prior. The proposed method synthesizes the advantages of i) denoiser-based Plug & Play approaches and ii) generative model-based approaches to inverse problems. First, we exploit VAE properties to design an efficient algorithm that benefits from convergence guarantees of Plug-and-Play (PnP) methods. Second, our approach is not restricted to specialized datasets and the proposed PnP-HVAE model is able to solve image restoration problems on natural images of any size. Our experiments show that the proposed PnP-HVAE method is competitive with both SOTA denoiser-based PnP approaches and other SOTA restoration methods based on generative models.

Recommended citation: Jean Prost, Antoine Houdard, Nicolas Papadakis, Andrés Almansa https://openaccess.thecvf.com/content/ICCV2023/papers/Prost_Inverse_Problem_Regularization_with_Hierarchical_Variational_Autoencoders_ICCV_2023_paper.pdf
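As a reminder of the generic PnP scheme the abstract refers to, here is the standard PnP proximal-gradient iteration; our reading of the abstract is that the denoising/prior step is realized through the HVAE, but this is not the paper's exact algorithm:

$$x_{k+1} \;=\; D_{\sigma}\!\big(x_k - \tau\,\nabla f(x_k)\big),
\qquad f(x) \;=\; \tfrac{1}{2}\,\|Ax - y\|^2,$$

where $A$ is the degradation operator, $y$ the observation, $\tau$ a step size, and $D_\sigma$ the prior/denoising operator (here derived from the hierarchical VAE).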

Plug-and-Play image restoration with Stochastic deNOising REgularization

Published as a preprint, 2024

Plug-and-Play (PnP) algorithms are a class of iterative algorithms that address image inverse problems by combining a physical model and a deep neural network for regularization. Although they produce impressive image restoration results, these algorithms rely on a non-standard use of a denoiser on images that are less and less noisy along the iterations, which contrasts with recent algorithms based on Diffusion Models (DM), where the denoiser is applied only on re-noised images. We propose a new PnP framework, called Stochastic deNOising REgularization (SNORE), which applies the denoiser only on images with noise of the adequate level. It is based on an explicit stochastic regularization, which leads to a stochastic gradient descent algorithm to solve ill-posed inverse problems. A convergence analysis of this algorithm and its annealing extension is provided. Experimentally, we show that SNORE is competitive with state-of-the-art methods on deblurring and inpainting tasks, both quantitatively and qualitatively.

Recommended citation: Marien Renaud, Jean Prost, Arthur Leclaire, Nicolas Papadakis
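A schematic, numpy-level sketch of one SNORE-style iteration, reconstructed from the abstract (the denoiser is only ever applied to a re-noised image). The exact estimator and step-size schedule in the paper may differ; denoiser, A and At are placeholder callables.

```python
import numpy as np

def snore_step(x, y, A, At, denoiser, sigma, tau, lam, rng):
    """One schematic stochastic-gradient update: data-fidelity gradient plus a
    score-based regularization term evaluated on a re-noised iterate (sketch)."""
    x_noisy = x + sigma * rng.standard_normal(x.shape)   # re-noise the current iterate
    # Tweedie-style estimate of the (negative) smoothed-prior score at x_noisy (assumption)
    grad_reg = (x_noisy - denoiser(x_noisy, sigma)) / sigma**2
    grad_data = At(A(x) - y)                              # gradient of 0.5 * ||A x - y||^2
    return x - tau * (grad_data + lam * grad_reg)
```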

Efficient posterior sampling for diverse super-resolution with hierarchical VAE Prior

Published in 19th International Joint Conference on Computer Vision Theory and Applications (VISAPP2024), 2024

We investigate the problem of producing diverse solutions to an image super-resolution problem. From a probabilistic perspective, this can be done by sampling from the posterior distribution of an inverse problem, which requires the definition of a prior distribution on the high-resolution images. In this work, we propose to use a pretrained hierarchical variational autoencoder (HVAE) as a prior. We train a lightweight stochastic encoder to encode low-resolution images in the latent space of a pretrained HVAE. At inference, we combine the low-resolution encoder and the pretrained generative model to super-resolve an image. We demonstrate on the task of face super-resolution that our method provides an advantageous trade-off between the computational efficiency of conditional normalizing flow techniques and the sample quality of diffusion-based methods.

Recommended citation: Jean Prost, Antoine Houdard, Nicolas Papadakis, Andrés Almansa
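A minimal sketch of the inference procedure described in the abstract (module names are placeholders, not the paper's API): the lightweight stochastic encoder maps the low-resolution input to distributions over the pretrained HVAE's latent variables, and decoding several latent samples produces diverse super-resolved candidates.

```python
import torch

def diverse_super_resolve(lr_image, lr_encoder, hvae_decoder, n_samples=4):
    """Draw several plausible high-resolution reconstructions (sketch)."""
    samples = []
    for _ in range(n_samples):
        # sample hierarchical latents conditioned on the low-resolution input
        latents = [dist.rsample() for dist in lr_encoder(lr_image)]
        # decode with the frozen, pretrained HVAE generative model
        samples.append(hvae_decoder(latents))
    return torch.stack(samples)
```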

talks

Diverse image super-resolution with hierarchical variational autoencoders

Published:

Image super-resolution is an ill-posed inverse problem in the sense that diverse high-resolution candidates are plausible solutions for each single low-resolution image. In this work we propose to make use of deep hierarchical variational autoencoders (VAEs) to produce diverse super-resolution. Hierarchical VAEs have shown impressive results for the task of high-resolution image synthesis and provide a strong prior image model via a self-organized hierarchy of latent variables. We find that these structured latent variables are related to the image information at different scales. Based on this observation, we show that pretrained hierarchical VAEs can be repurposed to perform diverse super-resolution.

teaching

Teaching experience 1

Undergraduate course, University 1, Department, 2014

This is a description of a teaching experience. You can use markdown like any other post.

Teaching experience 2

Workshop, University 1, Department, 2015

This is a description of a teaching experience. You can use markdown like any other post.