Journal on Multimodal User Interfaces, Volume 13
Volume 13, Number 1, March 2019
- Merijn Bruijnes, Jeroen Linssen, Dirk Heylen: Special issue editorial: Virtual Agents for Social Skills Training. 1-2
- Kim Veltman, Harmen de Weerd, Rineke Verbrugge: Training the use of theory of mind using artificial agents. 3-18
- Samuel Recht, Ouriel Grynszpan: The sense of social agency in gaze leading. 19-30
- Berglind Sveinbjörnsdóttir, Snorri Hjörvar Jóhannsson, Júlía Oddsdóttir, Tinna Þuríður Sigurðardóttir, Gunnar Ingi Valdimarsson, Hannes Högni Vilhjálmsson: Virtual discrete trial training for teacher trainees. 31-40
- Magalie Ochs, Daniel Mestre, Grégoire de Montcheuil, Jean-Marie Pergandi, Jorane Saubesty, Evelyne Lombardo, Daniel Francon, Philippe Blache: Training doctors' social skills to break bad news: evaluation of the impact of virtual environment displays on the sense of presence. 41-51
Volume 13, Number 2, June 2019
- Dirk Schnelle-Walka, David R. McGee, Bastian Pfleging: Multimodal interaction in automotive applications. 53-54
- Jason Sterkenburg, Steven Landry, Myounghoon Jeon: Design and evaluation of auditory-supported air gesture controls in vehicles. 55-70
- Michael Braun, Nora Broy, Bastian Pfleging, Florian Alt: Visualizing natural language interaction for conversational in-vehicle information systems to minimize driver distraction. 71-88
- Florian Roider, Sonja Rümelin, Bastian Pfleging, Tom Gross: Investigating the effects of modality switches on driver distraction and interaction efficiency in the car. 89-97
- Jana Fank, Natalie Tara Richardson, Frank Diermeyer: Anthropomorphising driver-truck interaction: a study on the current state of research and the introduction of two innovative concepts. 99-117
- Andreas Löcken, Fei Yan, Wilko Heuten, Susanne Boll: Investigating driver gaze behavior during lane changes using two visual cues: ambient light and focal icons. 119-136
- Seyedeh Maryam Fakhrhosseini, Myounghoon Jeon: How do angry drivers respond to emotional music? A comprehensive perspective on assessing emotion. 137-150
Volume 13, Number 3, September 2019
- Jiajun Yang, Thomas Hermann, Roberto Bresin: Introduction to the special issue on interactive sonification. 151-153
- Pieter-Jan Maes, Valerio Lorenzoni, Joren Six: The SoundBike: musical sonification strategies to enhance cyclists' spontaneous synchronization to external music. 155-166
- Valerio Lorenzoni, Pieter Van den Berghe, Pieter-Jan Maes, Tijl De Bie, Dirk De Clercq, Marc Leman: Design and validation of an auditory biofeedback system for modification of running parameters. 167-180
- Emma Frid, Ludvig Elblaus, Roberto Bresin: Interactive sonification of a fluid dance movement: an exploratory study. 181-189
- Radoslaw Niewiadomski, Maurizio Mancini, Andrea Cera, Stefano Piana, Corrado Canepa, Antonio Camurri: Does embodied training improve the recognition of mid-level expressive movement qualities sonification? 191-203
- Tim Ziemer, Holger Schultheis: Psychoacoustic auditory display for navigation: an auditory assistance system for spatial orientation tasks. 205-218
- Piotr Skulimowski, Mateusz Owczarek, Andrzej Radecki, Michal Bujacz, Dariusz Rzeszotarski, Pawel Strumillo: Interactive sonification of U-depth images in a navigation aid for the visually impaired. 219-230
- Antonio Polo, Xavier Sevillano: Musical Vision: an interactive bio-inspired sonification tool to convert images into music. 231-243
- KatieAnna E. Wolf, Rebecca Fiebrink: Personalised interactive sonification of musical performance data. 245-265
Volume 13, Number 4, December 2019
- Hsinfu Huang, Ta-chun Huang: Thumb touch control range and usability factors of virtual keys for smartphone games. 267-278
- Emma Frid, Jonas Moll, Roberto Bresin, Eva-Lotta Sallnäs Pysander: Haptic feedback combined with movement sonification using a friction sound improves task performance in a virtual throwing task. 279-290
- Emma Frid, Jonas Moll, Roberto Bresin, Eva-Lotta Sallnäs Pysander: Correction to: Haptic feedback combined with movement sonification using a friction sound improves task performance in a virtual throwing task. 291
- Yousra Bendaly Hlaoui, Lamia Zouhaier, Leila Ben Ayed: Model driven approach for adapting user interfaces to the context of accessibility: case of visually impaired users. 293-320
- Burak Benligiray, Cihan Topal, Cuneyt Akinlar: SliceType: fast gaze typing with a merging keyboard. 321-334
- Hyoung Il Son: The contribution of force feedback to human performance in the teleoperation of multiple unmanned aerial vehicles. 335-342
- Yogesh Kumar Meena, Hubert Cecotti, KongFatt Wong-Lin, Girijesh Prasad: Design and evaluation of a time adaptive multimodal virtual keyboard. 343-361
- Jinghua Li, Huarui Huai, Junbin Gao, Dehui Kong, Lichun Wang: Spatial-temporal dynamic hand gesture recognition via hybrid deep learning model. 363-371
- Niklas Rönnberg: Sonification supports perception of brightness contrast. 373-381
- Kunhee Ryu, Joong-Jae Lee, Jung-Min Park: GG Interaction: a gaze-grasp pose interaction for 3D virtual object selection. 383-393
- Nadia Elouali: Time Well Spent with multimodal mobile interactions. 395-404
- Hari Singh, Jaswinder Singh: Object acquisition and selection using automatic scanning and eye blinks in an HCI system. 405-417