Sampling 3D Gaussian Scenes in Seconds with Latent Diffusion Models

Paul Henderson, Melonie de Almeida, Daniela Ivanova, Titas Anciukevičius
University of Glasgow    University of Edinburgh
Teaser image demonstrating 3D Gaussian scene generation.

Our model generates diverse, high-quality 3D Gaussian scenes in as little as 0.2 seconds.

Abstract

We present a latent diffusion model over 3D scenes that can be trained using only 2D image data. To achieve this, we first design an autoencoder that maps multi-view images to 3D Gaussian splats and simultaneously builds a compressed latent representation of these splats. We then train a multi-view diffusion model over the latent space to learn an efficient generative model. This pipeline requires neither object masks nor depths, and is suitable for complex scenes with arbitrary camera positions.

We conduct careful experiments on two large-scale datasets of complex real-world scenes: MVImgNet and RealEstate10K. We show that our approach enables generating 3D scenes in as little as 0.2 seconds, either from scratch, from a single input view, or from sparse input views. It produces diverse and high-quality results while running an order of magnitude faster than non-latent diffusion models and earlier NeRF-based generative models.
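To make the two-stage design concrete, the PyTorch sketch below shows one plausible way to structure it. All module names, shapes, and losses here are our own illustrative assumptions, not the authors' implementation; in particular, camera poses and the differentiable Gaussian splatting renderer used for the 2D reconstruction loss are omitted.

import torch
import torch.nn as nn

class MultiViewEncoder(nn.Module):
    """Maps V input views to a compressed 2D latent (hypothetical)."""
    def __init__(self, latent_ch=8):
        super().__init__()
        # Shared per-view CNN; views fused by simple feature averaging here.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(64, latent_ch, 3, padding=1),
        )

    def forward(self, views):                  # views: (B, V, 3, H, W)
        B, V = views.shape[:2]
        feats = self.backbone(views.flatten(0, 1))          # (B*V, C, h, w)
        return feats.reshape(B, V, *feats.shape[1:]).mean(dim=1)

class SplatDecoder(nn.Module):
    """Decodes a latent into per-pixel 3D Gaussian parameters (hypothetical)."""
    def __init__(self, latent_ch=8, params_per_gaussian=14):
        # 14 = 3 (mean) + 3 (scale) + 4 (rotation) + 3 (colour) + 1 (opacity),
        # a typical Gaussian splat parametrization.
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_ch, 64, 4, stride=2, padding=1), nn.SiLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(32, params_per_gaussian, 3, padding=1),
        )

    def forward(self, z):                      # z: (B, latent_ch, h, w)
        raw = self.net(z)                      # (B, 14, H, W)
        return raw.flatten(2).transpose(1, 2)  # (B, H*W, 14) Gaussians

def diffusion_loss(denoiser, z):
    """Stage 2: standard epsilon-prediction loss over frozen-autoencoder latents."""
    t = torch.rand(z.shape[0], device=z.device)      # time in [0, 1)
    noise = torch.randn_like(z)
    alpha = torch.cos(0.5 * torch.pi * t).view(-1, 1, 1, 1)
    sigma = torch.sin(0.5 * torch.pi * t).view(-1, 1, 1, 1)
    z_t = alpha * z + sigma * noise                  # noised latent
    return nn.functional.mse_loss(denoiser(z_t, t), noise)

In stage 1, the encoder and decoder are trained end-to-end with a purely photometric loss, by splatting the decoded Gaussians into held-out views; no 3D supervision is needed. Running diffusion in the compressed latent space, rather than directly over millions of Gaussian parameters, is what makes sampling fast.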

Unconditional Generation on MVImgNet

3D Gaussian scenes generated from scratch, without any input images.

Unconditional Generation on RealEstate10K

Generated indoor scenes from the RealEstate10K dataset.

Single-Image Reconstruction

Given only a single input image, our model reconstructs the visible parts of the scene while also synthesizing plausible content in unobserved regions.

Single-Image Reconstruction on MVImgNet (Ours)

Single-Image Reconstruction on RealEstate10K (Ours)

Multi-View Reconstruction

Given six input images, our model faithfully reconstructs the details visible in the inputs, producing high-quality novel views.

6-View Reconstruction on MVImgNet

6-View Reconstruction on RealEstate10K

Diverse Back-View Generation

Given a single input image, our model can generate diverse plausible completions of the unobserved regions. Below we show multiple samples for the same input image, demonstrating the variety of back-side content our model can synthesize.

Diverse Samples from Single Input
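This diversity comes from the stochastic nature of diffusion sampling: each completion starts from a different noise latent while conditioning on the same input encoding. The sketch below illustrates the idea with a plain DDIM-style sampler in PyTorch; the denoiser signature denoiser(z_t, t, cond) and all hyperparameters are hypothetical choices for illustration, not necessarily the paper's actual sampler.

import torch

@torch.no_grad()
def sample_diverse(denoiser, cond, n_samples=4, n_steps=50, shape=(8, 32, 32)):
    """Draw several latents conditioned on the same single-view encoding."""
    # A different initial noise for each sample is what produces the diversity.
    z = torch.randn(n_samples, *shape)
    # Start slightly below t=1 so alpha = cos(pi*t/2) stays away from zero.
    ts = torch.linspace(0.99, 0.0, n_steps + 1)
    for t, t_next in zip(ts[:-1], ts[1:]):
        alpha = torch.cos(0.5 * torch.pi * t)
        sigma = torch.sin(0.5 * torch.pi * t)
        eps = denoiser(z, t.expand(n_samples), cond)  # predicted noise
        z0_hat = (z - sigma * eps) / alpha            # predicted clean latent
        a_n = torch.cos(0.5 * torch.pi * t_next)
        s_n = torch.sin(0.5 * torch.pi * t_next)
        z = a_n * z0_hat + s_n * eps                  # deterministic DDIM step
    return z  # decode each latent into Gaussian splats, then render

Because the update itself is deterministic, all variation between the returned samples is traceable to the initial noise draws, so each sample yields a distinct but plausible back-side completion once decoded.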

Citation

@article{henderson2024sampling,
    title={Sampling 3D Gaussian Scenes in Seconds with Latent Diffusion Models},
    author={Paul Henderson and Melonie de Almeida and Daniela Ivanova and Titas Anciukevi{\v{c}}ius},
    journal={arXiv preprint arXiv:2406.13099},
    year={2024}
}