49th SIGGRAPH 2022: Vancouver, BC, Canada - Conference Paper Track
- Munkhtsetseg Nandigjav, Niloy J. Mitra, Aaron Hertzmann:
SIGGRAPH '22: Special Interest Group on Computer Graphics and Interactive Techniques Conference, Vancouver, BC, Canada, August 7 - 11, 2022. ACM 2022, ISBN 978-1-4503-9337-9
Computational Photography
- Param Hanji, Rafal Mantiuk, Gabriel Eilertsen, Saghi Hajisharif, Jonas Unger:
Comparison of single image HDR reconstruction methods - the caveats of quality assessment. 1:1-1:8
Shape Analysis and Approximation
- Xianghao Xu, Yifan Ruan, Srinath Sridhar, Daniel Ritchie:
Unsupervised Kinematic Motion Detection for Part-segmented 3D Shape Collections. 2:1-2:9
- Xifeng Gao, Kui Wu, Zherong Pan:
Low-poly Mesh Generation for Building Models. 3:1-3:9
Volumes and Materials
- Jiahui Fan, Beibei Wang, Milos Hasan, Jian Yang, Ling-Qi Yan:
Neural Layered BRDFs. 4:1-4:8
- Yiwei Hu, Paul Guerrero, Milos Hasan, Holly E. Rushmeier, Valentin Deschaintre:
Node Graph Optimization Using Differentiable Proxies. 5:1-5:9
An Ode to Solvers
- Jiong Chen, Mathieu Desbrun:
Go Green: General Regularized Green's Functions for Elasticity. 6:1-6:8
Neural Objects, Materials and Illumination
- Ziang Cheng, Hongdong Li, Richard Hartley, Yinqiang Zheng, Imari Sato:
Diffeomorphic Neural Surface Parameterization for 3D and Reflectance Acquisition. 7:1-7:10
- Sayantan Datta, Derek Nowrouzezahrai, Christoph Schied, Zhao Dong:
Neural Shadow Mapping. 8:1-8:9
- Alexandr Kuznetsov, Xuezheng Wang, Krishna Mullia, Fujun Luan, Zexiang Xu, Milos Hasan, Ravi Ramamoorthi:
Rendering Neural Materials on Curved Surfaces. 9:1-9:9
Meshing and Mapping
- Karran Pandey, Jakob Andreas Bærentzen, Karan Singh:
Face Extrusion Quad Meshes. 10:1-10:9
New Wrinkles in Cloth and Shells
- Xiaoyu Pan, Jiaming Mai, Xinwei Jiang, Dongxue Tang, Jingxiang Li, Tianjia Shao, Kun Zhou, Xiaogang Jin, Dinesh Manocha:
Predicting Loose-Fitting Garment Deformations Using Bone-Driven Motion Networks. 11:1-11:10
Image/Video Editing and Generation
- Yuxin Zhang, Fan Tang, Weiming Dong, Haibin Huang, Chongyang Ma, Tong-Yee Lee, Changsheng Xu:
Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning. 12:1-12:8
- Anyi Rao, Linning Xu, Dahua Lin:
Shoot360: Normal View Video Creation from City Panorama Footage. 13:1-13:9
- Yuxuan Han, Ruicheng Wang, Jiaolong Yang:
Single-View View Synthesis in the Wild with Learned Adaptive Multiplane Images. 14:1-14:8
- Chitwan Saharia, William Chan, Huiwen Chang, Chris A. Lee, Jonathan Ho, Tim Salimans, David J. Fleet, Mohammad Norouzi:
Palette: Image-to-Image Diffusion Models. 15:1-15:10
- Yunzhe Liu, Rinon Gal, Amit H. Bermano, Baoquan Chen, Daniel Cohen-Or:
Self-Conditioned GANs for Image Editing. 16:1-16:9
Ray Tracing and Monte Carlo Methods
- Cyril Soler, Ronak Molazem, Kartic Subr:
A Theoretical Analysis of Compactness of the Light Transport Operator. 17:1-17:9
Sampling, Reconstruction and Appearance
- Jonghee Back, Binh-Son Hua, Toshiya Hachisuka, Bochang Moon:
Self-Supervised Post-Correction for Monte Carlo Denoising. 18:1-18:8
Sketches, Strokes, and Ropes
- Felix Hähnlein, Yulia Gryaditskaya, Alla Sheffer, Adrien Bousseau:
Symmetry-driven 3D Reconstruction from Concept Sketches. 19:1-19:8
- William Neveu, Ivan Puhachov, Bernhard Thomaszewski, Mikhail Bessmeltsev:
Stability-Aware Simplification of Curve Networks. 20:1-20:9
Design, Direct, Plan and Program
- Kartik Chandra, Tzu-Mao Li, Joshua B. Tenenbaum, Jonathan Ragan-Kelley:
Designing Perceptual Puzzles by Differentiating Probabilistic Programs. 21:1-21:9
Physics-Based Character Control
- Jungnam Park, Sehee Min, Phil Sik Chang, Jaedong Lee, Moon Seok Park, Jehee Lee:
Generative GaitNet. 22:1-22:9
- Seunghwan Lee, Phil Sik Chang, Jehee Lee:
Deep Compliant Control. 23:1-23:9
- Daniele Reda, Hung Yu Ling, Michiel van de Panne:
Learning to Brachiate via Simplified Model Imitation. 24:1-24:9
- Zhaoming Xie, Sebastian Starke, Hung Yu Ling, Michiel van de Panne:
Learning Soccer Juggling Skills with Layer-wise Mixture-of-Experts. 25:1-25:9
Large Scenes and Fast Neural Rendering
- Jiaming Sun, Xi Chen, Qianqian Wang, Zhengqi Li, Hadar Averbuch-Elor, Xiaowei Zhou, Noah Snavely:
Neural 3D Reconstruction in the Wild. 26:1-26:9
- Animesh Karnewar, Tobias Ritschel, Oliver Wang, Niloy J. Mitra:
ReLU Fields: The Little Non-linearity That Could. 27:1-27:9
Neural Geometry Processing
- Amir Belder, Gal Yefet, Ran Ben Izhak, Ayellet Tal:
Random Walks for Adversarial Meshes. 28:1-28:9
- Honghua Chen, Zeyong Wei, Yabin Xu, Mingqiang Wei, Jun Wang:
ImLoveNet: Misaligned Image-supported Registration Network for Low-overlap Point Cloud Pairs. 29:1-29:9
Convolutions and Neural Fields
- Thomas W. Mitchel, Noam Aigerman, Vladimir G. Kim, Michael Kazhdan:
Möbius Convolutions for Spherical CNNs. 30:1-30:9
- Hsueh-Ti Derek Liu, Francis Williams, Alec Jacobson, Sanja Fidler, Or Litany:
Learning Smooth Neural Functions via Lipschitz Regularization. 31:1-31:13
Display, Write, and Unwrap
- Suyeon Choi, Manu Gopakumar, Yifan Peng, Jonghyun Kim, Matthew O'Toole, Gordon Wetzstein:
Time-multiplexed Neural Holography: A Flexible Framework for Holographic Near-eye Displays with Fast Heavily-quantized Spatial Light Modulators. 32:1-32:9
- Jonghyun Kim, Manu Gopakumar, Suyeon Choi, Yifan Peng, Ward Lopes, Gordon Wetzstein:
Holographic Glasses for Virtual Reality. 33:1-33:9
- Ke Ma, Sagnik Das, Zhixin Shu, Dimitris Samaras:
Learning From Documents in the Wild to Improve Document Unwarping. 34:1-34:9
Fluid Simulation
- Amir Hossein Rabbani, Jean-Philippe Guertin, Damien Rioux-Lavoie, Arnaud Schoentgen, Kaitai Tong, Alexandre Sirois-Vigneux, Derek Nowrouzezahrai:
Compact Poisson Filters for Fast Fluid Simulation. 35:1-35:9
Benchmarks, Datasets and Learning
- Zhenyu Tang, Rohith Aralikatti, Anton Jeran Ratnarajah, Dinesh Manocha:
GWA: A Large High-Quality Acoustic Dataset for Audio Processing. 36:1-36:9
- Yongxu Jin, Yushan Han, Zhenglin Geng, Joseph Teran, Ronald Fedkiw:
Analytically Integratable Zero-restlength Springs for Capturing Dynamic Modes Unrepresented by Quasistatic Neural Networks. 37:1-37:9
Differentiable Rendering and Neural Fields
- Xi Deng, Fujun Luan, Bruce Walter, Kavita Bala, Steve Marschner:
Reconstructing Translucent Objects using Differentiable Rendering. 38:1-38:10
- Mojtaba Bemana, Karol Myszkowski, Jeppe Revall Frisvad, Hans-Peter Seidel, Tobias Ritschel:
Eikonal Fields for Refractive Novel-View Synthesis. 39:1-39:9
- Lei Xiao, Salah Nouri, Joel Hegland, Alberto Garcia Garcia, Douglas Lanman:
NeuralPassthrough: Learned Real-Time View Synthesis for VR. 40:1-40:9
- Towaki Takikawa, Alex Evans, Jonathan Tremblay, Thomas Müller, Morgan McGuire, Alec Jacobson, Sanja Fidler:
Variable Bitrate Neural Fields. 41:1-41:9
Reconstruction
- Marc Alexa:
-Functions Piecewise-linear Approximation from Noisy and Hermite Data. 42:1-42:9
Reflectance, Shading Models and Shaders
- Weizhen Huang, Sebastian Merzbach, Clara Callenberg, Doekele Stavenga, Matthias B. Hullin:
Rendering Iridescent Rock Dove Neck Feathers. 43:1-43:8
- Yuchi Huo, Shi Li, Yazhen Yuan, Xu Chen, Rui Wang, Wenting Zheng, Hai Lin, Hujun Bao:
ShaderTransformer: Predicting Shader Quality via One-shot Embedding for Fast Simplification. 44:1-44:9
Character Animation
- Zhize Zhou, Qing Shuai, Yize Wang, Qi Fang, Xiaopeng Ji, Fashuai Li, Hujun Bao, Xiaowei Zhou:
QuickPose: Real-time Multi-view Multi-person Pose Estimation in Crowded Scenes. 45:1-45:9
- Ikhsanul Habibie, Mohamed Elgharib, Kripasindhu Sarkar, Ahsan Abdullah, Simbarashe Nyatsanga, Michael Neff, Christian Theobalt:
A Motion Matching-based Framework for Controllable Gesture Synthesis from Speech. 46:1-46:9
- Tianxin Tao, Matthew Wilson, Ruiyu Gou, Michiel van de Panne:
Learning to Get Up. 47:1-47:10
Learning "In Style"
- Rameen Abdal, Peihao Zhu, John Femiani, Niloy J. Mitra, Peter Wonka:
CLIP2StyleGAN: Unsupervised Extraction of StyleGAN Edit Directions. 48:1-48:9
- Axel Sauer, Katja Schwarz, Andreas Geiger:
StyleGAN-XL: Scaling StyleGAN to Large Diverse Datasets. 49:1-49:10
- Ron Mokady, Omer Tov, Michal Yarom, Oran Lang, Inbar Mosseri, Tali Dekel, Daniel Cohen-Or, Michal Irani:
Self-Distilled StyleGAN: Towards Generation from Internet Photos. 50:1-50:9
Perception
- Phillip Guan, Olivier Mercier, Michael Shvartsman, Douglas Lanman:
Perceptual Requirements for Eye-Tracked Distortion Correction in VR. 51:1-51:8
Computational Design and Fabrication
- Iñigo Fermín Ezcurdia, Rafael Morales, Marco A. B. Andrade, Asier Marzo:
LeviPrint: Contactless Fabrication using Full Acoustic Trapping of Elongated Parts. 52:1-52:9
Phenomenological Animation
- Andreas Panayiotou, Theodoros Kyriakou, Marilena Lemonari, Yiorgos Chrysanthou, Panayiotis Charalambous:
CCP: Configurable Crowd Profiles. 53:1-53:10
- Hideki Todo, Kunihiko Kobayashi, Jin Katsuragi, Haruna Shimotahira, Shizuo Kaji, Yonghao Yue:
Stroke Transfer: Example-based Synthesis of Animatable Stroke Styles. 54:1-54:10
Neural Pets, People and Avatars
- Daoye Wang, Prashanth Chandran, Gaspard Zoss, Derek Bradley, Paulo F. U. Gotardo:
MoRF: Morphable Radiance Fields for Multiview Neural Head Modeling. 55:1-55:9
- Edoardo Remelli, Timur M. Bagautdinov, Shunsuke Saito, Chenglei Wu, Tomas Simon, Shih-En Wei, Kaiwen Guo, Zhe Cao, Fabian Prada, Jason M. Saragih, Yaser Sheikh:
Drivable Volumetric Avatars using Texel-Aligned Features. 56:1-56:9
- Qing Shuai, Chen Geng, Qi Fang, Sida Peng, Wenhao Shen, Xiaowei Zhou, Hujun Bao:
Novel View Synthesis of Human Interactions from Sparse Multi-view Videos. 57:1-57:10
Faces and Facial Animation
- Feitong Tan, Sean Fanello, Abhimitra Meka, Sergio Orts-Escolano, Danhang Tang, Rohit Pandey, Jonathan Taylor, Ping Tan, Yinda Zhang:
VoLux-GAN: A Generative Model for 3D Face Synthesis with HDRI Relighting. 58:1-58:9
- Yucheol Jung, Wonjong Jang, Soongjin Kim, Jiaolong Yang, Xin Tong, Seungyong Lee:
Deep Deformable 3D Caricatures with Learned Shape Control. 59:1-59:9
- Ran Yi, Zipeng Ye, Ruoyu Fan, Yezhi Shu, Yong-Jin Liu, Yu-Kun Lai, Paul L. Rosin:
Animating Portrait Line Drawings from a Single Face Photo and a Speech Signal. 60:1-60:8
- Xinya Ji, Hang Zhou, Kaisiyuan Wang, Qianyi Wu, Wayne Wu, Feng Xu, Xun Cao:
EAMM: One-Shot Emotional Talking Face via Audio-Based Emotion-Aware Motion Model. 61:1-61:10