
Image GANs meet Differentiable Rendering for Inverse Graphics and Interpretable 3D Neural Rendering

Published: 12 Jan 2021, Last Modified: 05 May 2023
ICLR 2021 Oral
Readers: Everyone
Keywords: Differentiable rendering, inverse graphics, GANs
Abstract: Differentiable rendering has paved the way to training neural networks to perform “inverse graphics” tasks such as predicting 3D geometry from monocular photographs. To train high-performing models, most of the current approaches rely on multi-view imagery, which is not readily available in practice. Recent Generative Adversarial Networks (GANs) that synthesize images, in contrast, seem to acquire 3D knowledge implicitly during training: object viewpoints can be changed simply by manipulating the latent codes. However, these latent codes often lack further physical interpretation, and thus GANs cannot easily be inverted to perform explicit 3D reasoning. In this paper, we aim to extract and disentangle 3D knowledge learned by generative models by utilizing differentiable renderers. Key to our approach is to exploit a GAN as a multi-view data generator to train an inverse graphics network using an off-the-shelf differentiable renderer, and to use the trained inverse graphics network as a teacher to disentangle the GAN's latent code into interpretable 3D properties. The entire architecture is trained iteratively using cycle-consistency losses. We show that our approach significantly outperforms state-of-the-art inverse graphics networks trained on existing datasets, both quantitatively and via user studies. We further showcase the disentangled GAN as a controllable 3D “neural renderer”, complementing traditional graphics renderers.
One-sentence Summary: We marry generative models with differentiable rendering to extract and disentangle 3D knowledge learned implicitly by generative image synthesis models
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Supplementary Material: zip
Data: [PASCAL3D+](https://paperswithcode.com/dataset/pascal3d-2), [ShapeNet](https://paperswithcode.com/dataset/shapenet)
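
The abstract describes an iterative loop: a frozen image GAN serves as a multi-view data generator, an inverse graphics network is trained against a differentiable renderer, and that network in turn supervises a disentangled "neural renderer", with cycle-consistency losses tying the stages together. The sketch below is a minimal, self-contained schematic of that loop, not the paper's implementation: the GAN, the differentiable renderer, and both networks are replaced by tiny stand-in MLPs, and all names, dimensions, and losses are illustrative assumptions.

```python
# Minimal schematic of the iterative training loop described in the abstract.
# Everything here is an illustrative stand-in (assumption): the real system
# uses a pretrained image GAN and an off-the-shelf differentiable renderer,
# both replaced by tiny MLPs so the sketch stays self-contained and runnable.
import torch
import torch.nn as nn

LATENT, CAM, SHAPE, IMG = 64, 3, 128, 32 * 32  # illustrative dimensions

# Frozen "GAN": its latent input is split so the viewpoint part can be varied.
gan = nn.Sequential(nn.Linear(LATENT + CAM, 256), nn.ReLU(), nn.Linear(256, IMG))
# Frozen stand-in "differentiable renderer": any differentiable map from
# (shape, camera) to pixels lets gradients reach the inverse graphics network.
render = nn.Linear(SHAPE + CAM, IMG)
for p in [*gan.parameters(), *render.parameters()]:
    p.requires_grad_(False)

# Inverse graphics network: image -> interpretable 3D properties.
inverse_net = nn.Sequential(nn.Linear(IMG, 256), nn.ReLU(),
                            nn.Linear(256, SHAPE + CAM))
# Disentangled generator ("neural renderer"): 3D properties -> image.
neural_renderer = nn.Sequential(nn.Linear(SHAPE + CAM, 256), nn.ReLU(),
                                nn.Linear(256, IMG))
opt = torch.optim.Adam([*inverse_net.parameters(),
                        *neural_renderer.parameters()], lr=1e-3)

for step in range(200):
    # 1) GAN as multi-view data generator: one object latent, four viewpoints.
    z = torch.randn(1, LATENT).repeat(4, 1)
    cams = torch.randn(4, CAM)
    imgs = gan(torch.cat([z, cams], dim=1))

    # 2) Inverse graphics: predicted 3D properties must re-render to the input,
    #    and the predicted shape must agree across views of the same object.
    pred = inverse_net(imgs)
    shape = pred[:, :SHAPE]
    recon_loss = (render(pred) - imgs).pow(2).mean()
    multiview_loss = shape.var(dim=0).mean()

    # 3) Cycle consistency: the neural renderer's output, decoded by the
    #    inverse net, should recover the 3D properties it was given.
    fake = neural_renderer(pred.detach())
    cycle_loss = (inverse_net(fake) - pred.detach()).pow(2).mean()

    opt.zero_grad()
    (recon_loss + multiview_loss + cycle_loss).backward()
    opt.step()
```

In the paper these roles are filled by real components: the multi-view images come from a GAN whose latent directions control viewpoint, and the renderer is a genuine differentiable rasterizer, so the same gradient path trains a network that predicts actual geometry, texture, and camera pose.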