SIGGRAPH Asia 2010 Posters: Seoul, Republic of Korea
- Marie-Paule Cani, Alla Sheffer:
ACM SIGGRAPH ASIA 2010 Posters, Seoul, Republic of Korea, December 15 - 18, 2010. ACM 2010, ISBN 978-1-4503-0524-2
Animation
- Lien Fan Shen:
Ghost interruption. 1:1
- Yann Savoye, Jean-Sébastien Franco:
Conversion of performance mesh animation into cage-based animation. 2:1
- Nobuhiko Mukai, Tomoaki Hirano:
Rendering method of multiple reflections on a semi-cylindrical mirror. 3:1
- Jeong-Sik Kim, Young-Ju Cho, Gyoung-Ah Lee, Myoung-Hee Kim:
A realistic 3D facial deformation constrained with facial asymmetry of healthy subjects. 4:1
- Nik Isrozaidi bin Nik Ismail, Masaki Oshita:
Data glove-based interface for real-time character motion control. 5:1
- Sriranjan Rasakatla, Bipin Indurkhya:
Optical flow based head tracking for camera mouse, immersive 3D and gaming. 6:1-6:2
- Petr Kmoch, Ugo Bonanni, Josef Pelikán:
Towards a GPU-only rod-based hair animation system. 7:1
- Tai-Wei Kan, Chin-Hung Teng:
Life Twitter: connecting everyday commodities with social networking service. 8:1
- Erin Ashenhurst, Thecla Schiphorst:
Scene(ic): performativity and visual narrative in amateur digital photography. 9:1
- Young-Mi Kim, Jong-Soo Choi:
Interactive oriental orchid read with the spirit and mind. 10:1
- Yuki Morimoto, Kenji Ono:
New cloth modeling for designing dyed patterns. 11:1
- Miji Park, Juhyun Eune, Suzung Kim:
Tangible user interface design for home automation energy management appliances. 12:1
- Yi-Heng Lee, Chao-Ming Wang:
Cell voice: touching minds and hearts through voice and light. 13:1
- Seokhwan Cheon:
Pondang: artificial creature ecology with evolutionary sound. 14:1
- Semi Kim, Hwanik Jo, Junghwan Sung, Hyohoun No, Byongsue Kang, Euisang Oh:
Pendulum, media art beyond the boundary between presence and absence. 15:1
- Christian M. Hahn, Paul J. Diefenbach:
Surge: an experiment in real-time music analysis for gaming. 16:1
Image & video processing
- Zhengguo Li, Zijian Zhu, Shiqian Wu, Susanto Rahardja:
Fast patching of moving regions for high dynamic range imaging. 17:1
- Nitin Singhal, Byungjun Son, Sungdae Cho:
Motion sketch. 18:1
- Daisuke Miyazaki, Saori Kagimoto, Masashi Baba, Naoki Asada:
Creating digital model of origami crane through recognition of origami states from image sequence. 19:1
- Masahiko Yoda, Kazuhisa Yanaka:
Real-time integral photography using extended fractional view method. 20:1
- Er Li, Xiaopeng Zhang, Wujun Che:
Fast and symmetry-aware quadrangulation. 21:1
- Cheolhun Jang, Seungyong Lee:
Object motion based video key-frame extraction. 22:1
Interaction
- Chao-Chi Hsu, Pey-Chwen Lin:
Interactive installation art. 23:1
- Kaori Onodera, Hiroki Imamura, Michiko Nishiyama, Kazuhiro Watanabe:
A glove-typed input device using hetero-core fiber sensors for 3D-CG modeling. 24:1
- Chieh Jen Chen, Chin-Hung Teng:
Reiki: the dark light. 25:1
- Jae Youn Shim, Hyun-Seong Sung, Seong-Whan Kim:
Cognitive laser: new gaming device for first person shooter games using laser shooter and laser-cognizable big screen. 26:1
- Tsukasa Mizumata, Ryuuki Sakamoto:
A pinch up gesture on multi-touch table with hover detection. 27:1
- Byongsue Kang, Euisang Oh, Junghwan Sung, Semi Kim, Hwanik Jo:
The system for activity-visualization of the experience game of smart phone. 28:1
- Yu Ebihara, Chihiro Kondo, Maki Sugimoto, Satoru Tokuhisa, Takuji Tokiwa, Kentaro Harada, Hiroaki Miyasho, Toshitugu Yasaka, Anusha I. Withana, Masa Inakage:
Composing sounds and images for public display using correlated KANSEI information. 29:1
- Kazutaka Mitobe, Junichi Kodama, Takeshi Miura, Hideo Tamamoto, Masafumi Suzuki, Noboru Yoshimura:
Developments of the learning assist system for dextrous finger movements: concept of "ubiquitous tenalai-docolo". 30:1
- Shinsuke Akabane, Johnson Leu, Ruri Araki, Jae Won Choi, Emily Chang, Saori Nakayama, Hayato Shibahara, Madoka Terasaki, Susumu Furukawa, Masa Inakage:
ZOOTOPIA: a tangible and accessible zoo for hospitalized children. 31:1
- Sriranjan Rasakatla, Madhav Krishna:
Gesture based control of snake robot and its simulated gaits. 32:1-32:2
- Sriranjan Rasakatla:
Multi-touch based on the metaphor of persistence of vision. 33:1-33:2
Modeling
- Shinji Koka, Kenshi Nomaki, Kimio Sugita, Kensei Tsuchida, Takeo Yaku:
Ridge detection with a drop of water principle. 34:1
- Kaisei Sakurai, Kazunori Miyata:
Procedural modeling of multiple rocks piled on flat ground. 35:1-35:2
- Naoki Kita, Kazunori Miyata:
A rule-based method for generating bookshelf models. 36:1-36:2
- Yoon-Seok Choi, JiHyung Lee, Bon-Ki Koo:
Intuitive 3D caricature face maker. 37:1
- Hongjun Li, Xiaopeng Zhang, Yi-Kuan Zhang:
Modeling trees with crown shape constraints. 38:1
- Seung-Chan Kim, Byung-Kil Han, Jeong-Yean Yang, Dong-Soo Kwon:
Interaction with objects inside a media space. 39:1
- Galina Pasko, Turlif Vilbrandt, Oleg Fryazinov, Alexander A. Pasko:
Bounding volumes for implicit intersections. 40:1-40:2
Production
- Yu Okano, Shogo Fukushima, Masahiro Furukawa, Hiroyuki Kajimoto:
Embedded motion: generating the perception of motion in peripheral vision. 41:1
- Jaehwan Kim, Jae-Hean Kim, Sang-Hyun Joo, Byoung-Tae Choi, Il-Kwon Jeong:
Volume matting: object tracking based matting tool. 42:1
- Duncan Tebbs:
A task-parallel programming language for interactive applications. 43:1
- Yi-Hsiu Chen, Wen-Shou Chou:
Chladni satellite: converting real-time information of solar wind into aural and visual experience. 44:1
- Chieh-Ming Chang, Wen-Shou Chou:
The design and application for interpersonal communication interface by using the technology of face detection and gaze tracking. 45:1
Rendering
- Kenta Matsubuchi, Hitomi Okajima, Kumiko Hori, Hidefumi Watanabe, Takafumi Saito:
Square deformed map with simultaneous expression of close and distant view. 46:1
- Yusuke Tokuyoshi, Shinji Ogaki, Sebastian Schoellhammer:
Final gathering using adaptive multiple importance sampling. 47:1
- Jiaze Wu, Changwen Zheng, Xiaohui Hu, Fanjiang Xu:
Lens dispersion simulation using dispersive lens model and spectral rendering method. 48:1
- Ningping Sun, Ryo Miyazaki, Naoki Yoshida:
Complex mapping with the interpolated Julia set and Mandelbrot set. 49:1
- Yunpeng Song, Fang Liu, James Xu:
Horizon-based screen-space ambient occlusion using mixture sampling. 50:1
- John Ferraris, Feng Tian, Christos Gatzidis:
Feature-based probability blending. 51:1
- Masashi Baba, Misa Yamamoto, Masayuki Mukunoki, Naoki Asada:
Camera model for inverse perspective. 52:1
- Dacre Denny, Bill Rogers:
Water history in a deferred shader: wet sand on the beach. 53:1
- Dohyeong Kim, Pio Claudio, Tae-Joon Kim, Sung-Eui Yoon:
Interactive view-dependent rendering with culling for articulated models in crowd scenes. 54:1
Virtual and augmented reality
- Tao Ren, Yewei Wang:
A Japanese text based mobile augmented reality application. 55:1
- Tatyana Koutepova, Yantong Liu, Xiao Lan, Jihyun Jeong:
Enhancing video games in real time with biofeedback data. 56:1
- Moohyun Cha, Byungil Choi:
Visualizing and experiencing harmful gases in the VR environment. 57:1
- Wataru Wakita, Katsuhito Akahane, Masaharu Isshiki, Hiromi T. Tanaka:
A realtime and direct-touch interaction for 3D woven cultural artifact exhibition. 58:1
- Keiho Imanishi, Megumi Nakao, Kotaro Minato:
Direct volume drilling of internal structures using a 2D pointing device. 59:1
- Kazuyoshi Nomura, Wataru Wakita, Hiromi T. Tanaka:
A 3D active touch interaction based on the meso-structure analyzing. 60:1
- Seokyeol Kim, Jihwan Park, Jinah Park:
Progressive mesh cutting for real-time haptic incision simulator. 61:1
- Matt Adcock, Chris Gunn:
Annotating with 'sticky' light for remote guidance. 62:1
- Ruben Zonenschein, Luiz Velho:
Visorama 2.0: a platform for multimedia gigapixel panoramas. 63:1
- Wendy Ann Mansilla, Jordi Puig, Andrew Perkis, Touradj Ebrahimi:
Flick flock: the distant and distinct characteristics of the masses in immersive aesthetic space. 64:1