9th ICMI 2007: Nagoya, Aichi, Japan
- Dominic W. Massaro, Kazuya Takeda, Deb Roy, Alexandros Potamianos:
Proceedings of the 9th International Conference on Multimodal Interfaces, ICMI 2007, Nagoya, Aichi, Japan, November 12-15, 2007. ACM 2007, ISBN 978-1-59593-817-6
Oral session 1: spontaneous behavior 1
- Ahmed Bilal Ashraf, Simon Lucey, Jeffrey F. Cohn, Tsuhan Chen, Zara Ambadar, Kenneth M. Prkachin, Patty Solomon, Barry-John Theobald:
The painful face: pain expression recognition using active appearance models. 9-14 - Gwen Littlewort, Marian Stewart Bartlett, Kang Lee:
Faces of pain: automated measurement of spontaneous facial expressions of genuine and posed pain. 15-21 - Shaogang Gong, Caifeng Shan, Tao Xiang:
Visual inference of human emotion and behaviour. 22-29
Oral session 2: spontaneous behavior 2
- Björn W. Schuller, Ronald Müller, Benedikt Hörnler, Anja Höthker, Hitoshi Konosu, Gerhard Rigoll:
Audiovisual recognition of spontaneous interest within conversations. 30-37 - Michel François Valstar, Hatice Gunes, Maja Pantic:
How to distinguish posed from spontaneous smiles using geometric features. 38-45 - Rana El Kaliouby, Alea Teeters:
Eliciting, capturing and tagging spontaneous facial affect in autism spectrum disorder. 46-53
Poster session 1
- Kazuhiro Morimoto, Chiyomi Miyajima, Norihide Kitaoka, Katunobu Itou, Kazuya Takeda:
Statistical segmentation and recognition of fingertip trajectories for a gesture interface. 54-57 - Andreas J. Schmid, Martin Hoffmann, Heinz Wörn:
A tactile language for intuitive human-robot communication. 58-65 - Yosuke Matsusaka, Mika Enomoto, Yasuharu Den:
Simultaneous prediction of dialog acts and address types in three-party conversations. 66-73 - Alexander Kasper, Regine Becher, Peter Steinhaus, Rüdiger Dillmann:
Developing and analyzing intuitive modes for interactive object modeling. 74-81 - Yuichi Sawamoto, Yuichi Koyama, Yasushi Hirano, Shoji Kajita, Kenji Mase, Kimiko Katsuyama, Kazunobu Yamauchi:
Extraction of important interactions in medical interviews using nonverbal information. 82-85 - Zhiwen Yu, Motoyuki Ozeki, Yohsuke Fujii, Yuichi Nakamura:
Towards smart meeting: enabling technologies and a real-world application. 86-93 - Jacques M. B. Terken, Irene Joris, Linda De Valk:
Multimodal cues for addressee-hood in triadic communication with a human information retrieval agent. 94-101 - Manolis Perakakis, Alexandros Potamianos:
The effect of input mode on inactivity and interaction times of multimodal systems. 102-109 - Ye Kyaw Thu, Yoshiyori Urano:
Positional mapping: keyboard mapping based on characters writing positions for mobile devices. 110-117 - Christine Szentgyorgyi, Edward Lank:
Five-key text input using rhythmic mappings. 118-121 - Paulo Barthelmess, Edward C. Kaiser, David McGee:
Toward content-aware multimodal tagging of personal photo collections. 122-125 - Zhihong Zeng, Maja Pantic, Glenn I. Roisman, Thomas S. Huang:
A survey of affect recognition methods: audio, visual and spontaneous expressions. 126-133 - Barry-John Theobald, Iain A. Matthews, Jeffrey F. Cohn, Steven M. Boker:
Real-time expression cloning using appearance models. 134-139 - Tomoko Yonezawa, Hirotake Yamazoe, Akira Utsumi, Shinji Abe:
Gaze-communicative behavior of stuffed-toy robot with joint attention and eye contact based on ambient gaze-tracking. 140-145 - Michael Rohs, Johannes Schöning, Martin Raubal, Georg Essl, Antonio Krüger:
Map navigation with mobile devices: virtual versus physical movement with and without visual context. 146-153
Oral session 3: cross-modality
- Maria Danninger, Leila Takayama, Qianying Wang, Courtney Schultz, Jörg Beringer, Paul Hofmann, Frankie James, Clifford Nass:
Can you talk or only touch-talk: A VoIP-based phone feature for quick, quiet, and private communication. 154-161 - Eve E. Hoggan, Stephen A. Brewster:
Designing audio and tactile crossmodal icons for mobile devices. 162-169 - Jaime Ruiz, Edward Lank:
A study on the scalability of non-preferred hand mode manipulation. 170-177
Poster session 2
- Susumu Harada, T. Scott Saponas, James A. Landay:
Voicepen: augmenting pen input with simultaneous non-linguistic vocalization. 178-185 - Shinya Kiriyama, Goh Yamamoto, Naofumi Otani, Shogo Ishikawa, Yoichi Takebayashi:
A large-scale behavior corpus including multi-angle video data for observing infants' long-term developmental processes. 186-192 - Thomas Pietrzak, Benoît Martin, Isabelle Pecci, Rami Saarinen, Roope Raisamo, Janne Järvi:
The micole architecture: multimodal support for inclusion of visually impaired children. 193-200 - Evandro Manara Miletto, Luciano Vargas Flores, Marcelo Soares Pimenta, Jérôme Rutily, Leonardo Santagada:
Interfaces for musical activities and interfaces for musicians are not the same: the case for codes, a web-based environment for cooperative music prototyping. 201-207 - Rony Kubat, Philip DeCamp, Brandon Roy:
Totalrecall: visualization and semi-automatic annotation of very large audio-visual corpora. 208-215 - Vitor Fernandes, Tiago João Vieira Guerreiro, Bruno Araújo, Joaquim A. Jorge, João Pereira:
Extensible middleware framework for multimodal interfaces in distributed environments. 216-219 - Jong-Seok Lee, Cheol Hoon Park:
Temporal filtering of visual speech for audio-visual speech recognition in acoustically and visually challenging environments. 220-227 - Tomoyuki Morita, Kenji Mase, Yasushi Hirano, Shoji Kajita:
Reciprocal attentive communication in remote meeting with a humanoid robot. 228-235 - Naveen Sundar Govindarajulu, Sriganesh Madhvanath:
Password management using doodles. 236-239 - Andrea Corradini:
A computational model for spatial expression resolution. 240-246 - Katherine Everitt, Susumu Harada, Jeff A. Bilmes, James A. Landay:
Disambiguating speech commands using physical context. 247-254
Oral session 4: meeting applications
- Kazuhiro Otsuka, Hiroshi Sawada, Junji Yamato:
Automatic inference of cross-modal nonverbal interactions in multiparty conversations: "who responds to whom, when, and how?" from gaze, head gestures, and utterances. 255-262 - Janienke Sturm, Olga Houben-van Herwijnen, Anke Eyck, Jacques M. B. Terken:
Influencing social dynamics in meetings through a peripheral display. 263-270 - Wen Dong, Bruno Lepri, Alessandro Cappelletti, Alex Pentland, Fabio Pianesi, Massimo Zancanaro:
Using the influence model to recognize functional roles in meetings. 271-278
Poster session 3
- Hiroko Tochigi, Kazuhiko Shinozawa, Norihiro Hagita:
User impressions of a stuffed doll robot's facing direction in animation systems. 279-284 - Kouzi Osaki, Tomio Watanabe, Michiya Yamamoto:
Speech-driven embodied entrainment character system with hand motion input in mobile environment. 285-290 - Meriam Horchani, Benjamin Caron, Laurence Nigay, Franck Panaget:
Natural multimodal dialogue systems: a configurable dialogue and presentation strategies component. 291-298 - Tobias Klug, Max Mühlhäuser:
Modeling human interaction resources to support the design of wearable multimodal systems. 299-306 - Edward Tse, Mark S. Hancock, Saul Greenberg:
Speech-filtered bubble ray: improving target acquisition on display walls. 307-314 - Natalie Ruiz, Ronnie Taib, Yu (David) Shi, Eric H. C. Choi, Fang Chen:
Using pen input features as indices of cognitive load. 315-318 - Werner Breitfuss, Helmut Prendinger, Mitsuru Ishizuka:
Automated generation of non-verbal behavior for virtual embodied characters. 319-322 - Sy Bor Wang, David Demirdjian, Trevor Darrell:
Detecting communication errors from visual cues during the system's conversational turn. 323-326 - Pilar Manchón Portillo, Carmen del Solar, Gabriel Amores Carredano, Guillermo Pérez García:
Multimodal interaction analysis in a smart house. 327-334 - Norman Lin, Shoji Kajita, Kenji Mase:
A multi-modal mobile device for learning japanese kanji characters through mnemonic stories. 335-338
Oral session 5: interactive systems 1
- Kia-Chuan Ng, Tillman Weyde, Oliver Larkin, Kerstin Neubarth, Thijs Koerselman, Bee Ong:
3d augmented mirror: a multimodal interface for string instrument learning and teaching with gesture support. 339-345 - Boris Brandherm, Helmut Prendinger, Mitsuru Ishizuka:
Interest estimation based on dynamic bayesian networks for visual attentive presentation agents. 346-349 - Athanasios K. Noulas, Ben J. A. Kröse:
On-line multi-modal speaker diarization. 350-357
Oral session 6: interactive systems 2
- Kazutaka Kurihara, Masataka Goto, Jun Ogata, Yosuke Matsusaka, Takeo Igarashi:
Presentation sensei: a presentation training system using speech and image processing. 358-365 - Yasuhiro Minami, Minako Sawaki, Kohji Dohsaka, Ryuichiro Higashinaka, Kentaro Ishizuka, Hideki Isozaki, Tatsushi Matsubayashi, Masato Miyoshi, Atsushi Nakamura, Takanobu Oba, Hiroshi Sawada, Takeshi Yamada, Eisaku Maeda:
The world of mushrooms: human-computer interaction prototype systems for ambient intelligence. 366-373 - Rock Leung, Karon E. MacLean, Martin Bue Bertelsen, Mayukh Saubhasik:
Evaluation of haptically augmented touchscreen gui elements under cognitive load. 374-381
Workshops
- Naoto Iwahashi, Mikio Nakano:
Multimodal interfaces in semantic interaction. 382 - Paulo Barthelmess, Edward C. Kaiser:
Workshop on tagging, mining and retrieval of human related activity information. 383-384 - Christopher Richard Wren, Yuri A. Ivanov:
Workshop on massive datasets. 385
Keynote talks
- Yuri Ivanov:
Interfacing life: a year in the life of a research lab. 1 - Norihiro Hagita:
The great challenge of multimodal interfaces towards symbiosis of human and robots. 2 - Dominic W. Massaro:
Just in time learning: implementing principles of multimodal processing and learning for education. 3-8