SDF-StyleGAN: Implicit SDF-Based StyleGAN for 3D Shape Generation

Xin-Yang Zheng 1    Yang Liu 2    Peng-Shuai Wang 2    Xin Tong 2
1 Tsinghua University    2 Microsoft Research Asia   
Computer Graphics Forum (SGP 2022)

Abstract

We present a StyleGAN2-based deep learning approach for 3D shape generation, called SDF-StyleGAN, which aims to reduce the visual and geometric dissimilarity between generated shapes and a given shape collection. We extend StyleGAN2 to 3D generation, adopt the implicit signed distance function (SDF) as the 3D shape representation, and introduce two novel shape discriminators, operating globally and locally, that distinguish real from fake SDF values and gradients to significantly improve shape geometry and visual quality. We further complement the evaluation metrics of 3D generative models with shading-image-based Fréchet inception distance (FID) scores to better assess the visual quality and shape distribution of the generated shapes. Experiments on shape generation demonstrate the superior performance of SDF-StyleGAN over the state of the art. We further demonstrate the efficacy of SDF-StyleGAN in various tasks based on GAN inversion, including shape reconstruction, shape completion from partial point clouds, single-view image-based shape generation, and shape style editing. Extensive ablation studies justify the efficacy of our framework design. Our code and trained models are available at https://github.com/Zhengxinyang/SDF-StyleGAN.
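
The minimal PyTorch sketch below illustrates the kind of inputs the local shape discriminator described above operates on: SDF values and their spatial gradients queried from an implicit network at points inside a local patch. It is a simplified illustration under assumed interfaces, not the released implementation; the names `SDFNet`, `LocalDiscriminator`, and `sdf_values_and_gradients` are placeholders invented for this sketch (see the repository linked above for the actual code).

```python
import torch
import torch.nn as nn

class SDFNet(nn.Module):
    """Toy implicit SDF network: maps 3D points plus a latent code to signed distances."""
    def __init__(self, z_dim=128, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, pts, z):
        # pts: (B, N, 3), z: (B, z_dim) -> SDF values (B, N)
        z = z.unsqueeze(1).expand(-1, pts.shape[1], -1)
        return self.mlp(torch.cat([pts, z], dim=-1)).squeeze(-1)

class LocalDiscriminator(nn.Module):
    """Classifies a patch of N sample points from their SDF values and gradients."""
    def __init__(self, n_pts=64, hidden=256):
        super().__init__()
        # per point: 1 SDF value + 3 gradient components = 4 features
        self.net = nn.Sequential(
            nn.Linear(4 * n_pts, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, 1),
        )

    def forward(self, sdf_vals, sdf_grads):
        x = torch.cat([sdf_vals.unsqueeze(-1), sdf_grads], dim=-1)  # (B, N, 4)
        return self.net(x.flatten(1))                               # (B, 1) real/fake logit

def sdf_values_and_gradients(sdf_net, pts, z):
    """Query the implicit field and differentiate it with respect to the query points."""
    pts = pts.requires_grad_(True)
    vals = sdf_net(pts, z)
    grads = torch.autograd.grad(vals.sum(), pts, create_graph=True)[0]
    return vals, grads

if __name__ == "__main__":
    B, N = 2, 64
    sdf_net, disc = SDFNet(), LocalDiscriminator(n_pts=N)
    z = torch.randn(B, 128)
    patch = torch.rand(B, N, 3) * 0.2 - 0.1   # points inside a small local patch
    vals, grads = sdf_values_and_gradients(sdf_net, patch, z)
    print(disc(vals, grads).shape)            # torch.Size([2, 1])
```

In the actual method the SDF is decoded from a StyleGAN2-style feature volume rather than a plain MLP, and a global discriminator additionally judges samples drawn over the whole shape; the sketch only shows why gradients, which encode local surface orientation, are useful discriminator inputs alongside raw SDF values.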

Demo

Links

Paper [PDF]

Code [Github]

Supplemental [ZIP]

Citation [BibTeX]