We present a latent diffusion model over 3D scenes that can be trained using only 2D image data. To achieve this, we first design an autoencoder that maps multi-view images to 3D Gaussian splats, and simultaneously builds a compressed latent representation of these splats. We then train a multi-view diffusion model over this latent space to learn an efficient generative model. The pipeline requires neither object masks nor depth maps, and is suitable for complex scenes with arbitrary camera positions.
We conduct careful experiments on two large-scale datasets of complex real-world scenes -- MVImgNet and RealEstate10K. We show that our approach enables generating 3D scenes in as little as 0.2 seconds, either from scratch, from a single input view, or from sparse input views. It produces diverse and high-quality results while running an order of magnitude faster than non-latent diffusion models and earlier NeRF-based generative models.
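The pipeline described above has two stages: an autoencoder that compresses multi-view images into a latent and decodes that latent into 3D Gaussian splat parameters, and a diffusion model trained in the resulting latent space. The PyTorch sketch below is a hypothetical illustration of how these pieces could fit together; all module names, layer choices, tensor shapes, and the 14-channel per-pixel Gaussian parameterisation (position, scale, rotation, opacity, colour) are assumptions made for illustration, not the authors' implementation, and the differentiable splat rasteriser used for 2D reconstruction losses is omitted.

# Minimal sketch (assumptions only, not the authors' code): a multi-view
# autoencoder that maps images to a compressed latent and decodes the latent
# to per-pixel 3D Gaussian parameters, plus a denoiser that a latent
# diffusion model would be trained with on top of that latent space.
import torch
import torch.nn as nn


class MultiViewEncoder(nn.Module):
    """Encodes V posed views into a compressed latent (hypothetical shapes)."""
    def __init__(self, latent_dim=16):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, latent_dim, 3, stride=2, padding=1),
        )

    def forward(self, images):            # images: (B, V, 3, H, W)
        B, V = images.shape[:2]
        feats = self.backbone(images.flatten(0, 1))   # (B*V, D, H/4, W/4)
        return feats.unflatten(0, (B, V))             # (B, V, D, H/4, W/4)


class GaussianDecoder(nn.Module):
    """Decodes the latent into per-pixel Gaussian splat parameters:
    position (3) + scale (3) + rotation quaternion (4) + opacity (1) + colour (3)."""
    def __init__(self, latent_dim=16):
        super().__init__()
        self.head = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 14, 4, stride=2, padding=1),
        )

    def forward(self, latents):           # latents: (B, V, D, H/4, W/4)
        B, V = latents.shape[:2]
        params = self.head(latents.flatten(0, 1))     # (B*V, 14, H, W)
        return params.unflatten(0, (B, V))            # render with a splatting
                                                      # rasteriser for 2D losses


class LatentDenoiser(nn.Module):
    """Predicts the noise added to the multi-view latents at timestep t."""
    def __init__(self, latent_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(latent_dim + 1, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, latent_dim, 3, padding=1),
        )

    def forward(self, noisy_latents, t):  # noisy_latents: (B, V, D, h, w)
        B, V, D, h, w = noisy_latents.shape
        t_map = t.view(B, 1, 1, 1, 1).expand(B, V, 1, h, w)
        x = torch.cat([noisy_latents, t_map], dim=2).flatten(0, 1)
        return self.net(x).unflatten(0, (B, V))


if __name__ == "__main__":
    images = torch.randn(2, 4, 3, 64, 64)              # 2 scenes, 4 views each
    latents = MultiViewEncoder()(images)
    splats = GaussianDecoder()(latents)
    t = torch.rand(2)
    noise_pred = LatentDenoiser()(latents + 0.1 * torch.randn_like(latents), t)
    print(latents.shape, splats.shape, noise_pred.shape)

Conditioning on one or more input views (for the single-view and sparse-view settings reported above) would then amount to feeding the encoded latents of the observed views to the denoiser; the exact conditioning mechanism is not shown here.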
3D Gaussian scenes generated from scratch, without any input images.
Generated indoor scenes from the RealEstate10K dataset.
Given only a single input image, our model reconstructs the visible parts of the scene while also synthesizing plausible content in unobserved regions.
Given six input images, our model faithfully reconstructs the details visible in all images, producing high-quality novel views.
Given a single input image, our model can generate diverse plausible completions of the unobserved regions. Below we show multiple samples for the same input image, demonstrating the variety of back-side content our model can synthesize.
@article{henderson2024sampling,
  title={Sampling 3D Gaussian Scenes in Seconds with Latent Diffusion Models},
  author={Paul Henderson and Melonie de Almeida and Daniela Ivanova and Titas Anciukevi{\v{c}}ius},
  journal={arXiv preprint arXiv:2406.13099},
  year={2024}
}