OrthoPlanes: A Novel Representation for Better 3D-Awareness of GANs

1Shanghai AI Laboratory, 2Tsinghua University, 3School of Computer and Communication Sciences, EPFL


Overview Video

Abstract

We present a novel method for generating realistic and view-consistent images with fine geometry from 2D image collections. We propose a hybrid explicit-implicit representation called OrthoPlanes, which encodes fine-grained 3D information in feature maps that can be efficiently generated by modifying 2D StyleGANs. Compared to previous representations, ours offers better scalability and expressiveness, with clear and explicit spatial information. As a result, our method can handle more challenging viewing angles and synthesize articulated objects with high spatial degrees of freedom. Experiments demonstrate that our method achieves state-of-the-art results on the FFHQ and SHHQ datasets, both quantitatively and qualitatively.
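To make the idea of querying axis-aligned feature planes concrete, here is a minimal NumPy sketch of plane-based feature sampling in the spirit described above. It is illustrative only, not the paper's implementation: it assumes k planes per axis covering a [-1, 1]^3 volume, nearest-plane and nearest-pixel lookup instead of learned interpolation, and mean aggregation; the function and variable names are invented for this example.

```python
import numpy as np

def sample_orthoplanes(planes, points, k):
    """Sample features for 3D points from stacks of axis-aligned planes.

    planes: dict mapping axis index (0, 1, 2) -> array of shape (k, R, R, C),
            i.e. k feature planes orthogonal to that axis, spanning [-1, 1]^3.
    points: (N, 3) array of 3D coordinates in [-1, 1].
    Returns an (N, C) array of features, averaged over the three axes.
    """
    feats = []
    for axis in range(3):
        stack = planes[axis]                       # (k, R, R, C)
        r = stack.shape[1]
        # The coordinate along this axis selects the nearest plane in the stack.
        t = (points[:, axis] + 1.0) / 2.0 * (k - 1)
        idx = np.clip(np.round(t).astype(int), 0, k - 1)
        # The remaining two coordinates index into the selected plane.
        uv = np.delete(points, axis, axis=1)       # (N, 2)
        pix = np.clip(((uv + 1.0) / 2.0 * (r - 1)).round().astype(int), 0, r - 1)
        feats.append(stack[idx, pix[:, 0], pix[:, 1]])  # (N, C)
    return np.mean(feats, axis=0)
```

With k = 1 per axis this degenerates to a tri-plane-style lookup; using several planes per axis is what gives the representation its additional explicit depth resolution.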

Qualitative Results

Results on FFHQ and AFHQv2-Cats

Results on SHHQ

Downstream Applications

Interpolation



Style-Mixing


Results on FFHQ.

Results on SHHQ.


Pose-Controllable Avatar

Related Works

EG3D proposes a tri-plane representation for 3D-aware image synthesis.

StyleGAN-Human collects and annotates a large-scale human image dataset for StyleGAN-based human generation.

3DHumanGAN proposes a style-based generator architecture to generate 3D-aware human images from 2D image collections.

EVA3D proposes a compositional framework to generate animatable 3D-aware human images from 2D image collections.

AvatarGen proposes a disentangled framework to generate animatable 3D-aware human images from 2D image collections.

BibTeX

@misc{he2023orthoplanes,
      title={OrthoPlanes: A Novel Representation for Better 3D-Awareness of GANs},
      author={Honglin He and Zhuoqian Yang and Shikai Li and Bo Dai and Wayne Wu},
      year={2023},
      eprint={2309.15830},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}