IUI 2018: Tokyo, Japan - Companion
Proceedings of the 23rd International Conference on Intelligent User Interfaces Companion, Tokyo, Japan, March 7-11, 2018. ACM 2018
Demos
- Jun Goto, Taro Miyazaki, Yuka Takei, Kiminobu Makino: Automatic Tweet Detection based on Data Specified through News Production. 1:1-1:2
- Minghao Cai, Soh Masuko, Jiro Tanaka: Gesture-based Mobile Communication System Providing Side-by-side Shopping Feeling. 2:1-2:2
- Yiwei Zhang, Jiani Hu, Shumpei Sano, Toshihiko Yamasaki, Kiyoharu Aizawa: Computer Vision Based and FPRank Based Tag Recommendation for Social Popularity Enhancement. 3:1-3:2
- Kazuya Nakae, Koji Tsukada: Support System to Review Manufacturing Workshop through Multiple Videos. 4:1-4:2
- Meng-Chieh Ko, Zih-Hong Lin: Chatbot: A Chatbot for Business Card Management. 5:1-5:2
- Mizuki Okuyama, Yasushi Matoba, Itiro Siio: Cylindrical M-sequence Markers and its Application to AR Fitting System for Kimono Obi. 6:1-6:2
- Goro Otsubo: Search Interface for Deep Thinking. 7:1-7:2
- David Massimo, Elena Not, Francesco Ricci: User Behaviour Analysis in a Simulated IoT Augmented Space. 8:1-8:2
- Ayano Nishimura, Takayuki Itoh: Implementation of an Interactive System for the Translation of Lyrics. 9:1-9:2
- Qiyu Zhi, Suwen Lin, Shuai He, Ronald A. Metoyer, Nitesh V. Chawla: VisPod: Content-Based Audio Visual Navigation. 10:1-10:2
- Layne Jackson Hubbard, Boskin Erkocevic, Dylan Cassady, Chen Hao Cheng, Andrea Chamorro, Tom Yeh: MindScribe: Toward Intelligently Augmented Interactions in Highly Variable Early Childhood Environments. 11:1-11:2
- Xuan Wang, Chunmeng Lu, Soh Masuko, Jiro Tanaka: Interactive Online Shopping with Personalized Robot Agent. 12:1-12:2
- Yasuo Kawai, Yurie Kaizu, Kenta Kawahara, Youhei Obuchi, Satoshi Otsuka, Shiori Tomimatsu: Development of a Tsunami Evacuation Behavior Simulation System with Massive Evacuation Agents. 13:1-13:2
- Hayato Araki, Taichi Ikeda, Takumi Ozawa, Kenta Kawahara, Yasuo Kawai: Development of a Horror Game that Route Branches by the Player's Pulse Rate. 14:1-14:2
- Kunihiko Sato, Jun Rekimoto: Detecting Utterance Scenes of a Specific Person. 15:1-15:2
- Michal Shmueli-Scheuer, Tommy Sandbank, David Konopnicki, Ora Peled Nakash: Exploring the Universe of Egregious Conversations in Chatbots. 16:1-16:2
- Hanaë Rateau, Yosra Rekik, Edward Lank, Laurent Grisoni: Ether-Toolbars: Evaluating Off-Screen Toolbars for Mobile Interaction. 17:1-17:2
- Eran Toch, Netta Rager, Tal Florentin, Dan Linenberg, Daya Sellman, Noam Shomron: Augmented-Genomics: Protecting Privacy for Clinical Genomics with Inferential Interfaces. 18:1-18:2
- Alexander Prange, Michael Barz, Daniel Sonntag: Medical 3D Images in Multimodal Virtual Reality. 19:1-19:2
- Harshit Agrawal, Junichi Yamaoka, Yasuaki Kakehi: (author)rise: Artificial Intelligence Output Via the Human Body. 20:1-20:2
- Melissa Roemmele, Andrew S. Gordon: Automated Assistance for Creative Writing with an RNN Language Model. 21:1-21:2
- Benett Axtell, Cosmin Munteanu: Frame of Mind: Using Storytelling for Speech-Based Clustering of Family Pictures. 22:1-22:2
- Shiori Itou, Masaaki Iseki, Shingo Kato, Takamichi Nakamoto: Olfactory and Visual Presentation Using Olfactory Display Using SAW Atomizer and Solenoid Valves. 23:1-23:2
- Daisaku Shibata, Shoko Wakamiya, Kaoru Ito, Mai Miyabe, Ayae Kinoshita, Eiji Aramaki: VocabChecker: Measuring Language Abilities for Detecting Early Stage Dementia. 24:1-24:2
- Gopakumar Gopalakrishnan, Madhusudhan M. Aithal, Anjaneyulu Pasala: Visual Analytics of Organizational Performance Network. 25:1-25:2
- Yuanyuan Wang, Yihong Zhang, Panote Siriaraya, Yukiko Kawai, Adam Jatowt: Language Density Driven Route Navigation System for Pedestrians based on Twitter Data. 26:1-26:2
- Natsuki Hamanishi, Michinari Kono, Shunichi Suwa, Takashi Miyaki, Jun Rekimoto: Flufy: Recyclable and Edible Rapid Prototyping using Fluffed Sugar. 27:1-27:2
- Riku Takano, Ken Wakita: Fluid UI for HIGH-dimensional Analysis of Social Networks. 28:1-28:2
- Meg Pirrung, Nathan Hilliard, Nancy O'Brien, Artëm Yankov, Court D. Corley, Nathan O. Hodas: SHARKZOR: Human in the Loop ML for User-Defined Image Classification. 29:1-29:2
Posters
- Koichi Miyazaki, Hiroaki Tobita: SinkAmp: Interactive Sink to Detect Living Habits for Healthcare and Quality of Life. 30:1-30:2
- Daniel Reinhardt, Jörn Hurtienne: Cursor Entropy Reveals Decision Fatigue. 31:1-31:2
- Mondheera Pituxcoosuvarn, Toru Ishida, Naomi Yamashita, Toshiyuki Takasaki, Yumiko Mori: Supporting a Children's Workshop with Machine Translation. 32:1-32:2
- Yuki Umezawa, Takatsugu Hirayama, Yu Enokibori, Kenji Mase: Egocentric Video Multi-viewer for Analyzing Skilled Behaviors based on Gaze Object. 33:1-33:2
- Yichao Lu, Ruihai Dong, Barry Smyth: Convolutional Matrix Factorization for Recommendation Explanation. 34:1-34:2
- Ching-Chun Chen, Chia-Min Wu, I-Chao Shen, Bing-Yu Chen: A Deep Learning Based Method For 3D Human Pose Estimation From 2D Fisheye Images. 35:1-35:2
- Toshinori Hayashi, Yuanyuan Wang, Yukiko Kawai, Kazutoshi Sumiya: A Recommender System based on Detected Users' Complaints by Analyzing Reviews. 36:1-36:2
- Takashi Totsuka, Yuichiro Kinoshita, Shota Shiraga, Kentaro Go: Impression-based Fabrication: A Framework to Reflect Personal Preferences in the Fabrication Process. 37:1-37:2
- Suguru Arinami, Yu Suzuki: Information Display Method to Give the Non-Mechanical Impression by Imitating the Communication with Pets. 38:1-38:2
- Tiffany Ya Tang, Pinata Winoto: A Configurable and Contextually Expandable Interactive Picture Exchange Communication System (PECS) for Chinese Children with Autism. 39:1-39:2
- Tiffany Ya Tang, Pinata Winoto: Providing Adaptive and Personalized Visual Support based on Behavioral Tracking of Children with Autism for Assessing Reciprocity and Coordination Skills in a Joint Attention Training Application. 40:1-40:2
- Chi-Lan Yang, Hao-Chuan Wang: Can You Help Me without Knowing Much?: Exploring Cued-Knowledge Sharing for Instructors' Tutorial Generation. 41:1-41:2
- Bachar Senno, Pedro Barcha: Customizing User Experience with Adaptive Virtual Reality. 42:1-42:2
- Michal Shmueli-Scheuer, Jonathan Herzig, Tommy Sandbank, David Konopnicki: On the Expression of Agent Emotions in Customer Support Dialogs in Social Media. 43:1-43:2
- Heesun Kim, Dongeon Lee, Min Gyeong Kim, Hyejin Jang, Ji-Hyung Park: Omni-Gesture: A Hand Gesture Interface for Omnidirectional Devices. 44:1-44:2
- Iuliia Brishtel, Shoya Ishimaru, Olivier Augereau, Koichi Kise, Andreas Dengel: Assessing Cognitive Workload on Printed and Electronic Media using Eye-Tracker and EDA Wristband. 45:1-45:2
- Ceenu George, Malin Eiband, Michael Hufnagel, Heinrich Hussmann: Trusting Strangers in Immersive Virtual Reality. 46:1-46:2
- Kaya Okada, Mitsuo Yoshida, Takayuki Itoh, Tobias Czauderna, Kingsley Stephens: Spatio-Temporal Visualization of Tweet Data around Tokyo Disneyland Using VR. 47:1-47:2
- Anders Hast, Ekta Vats: An Intelligent User Interface for Efficient Semi-automatic Transcription of Historical Handwritten Documents. 48:1-48:2
- Hagit Ben-Shoshan, Osnat Mokryn: ActiveMap: Visual Analysis of Temporal Activity in Social Media Sites. 49:1-49:2
- Sangbong Yoo, Sujin Jeong, Yun Jang: Gaze Data Clustering and Analysis. 50:1-50:2
- Seraphina Yong, Hao-Chuan Wang: Using Spatialized Audio to Improve Human Spatial Knowledge Acquisition in Virtual Reality. 51:1-51:2
- Cedric Caremel, Gemma Liu, George Chernyshov, Kai Kunze: Muscle-Wire Glove: Pressure-Based Haptic Interface. 52:1-52:2
- Mei-Ling Chen, Hao-Chuan Wang: How Personal Experience and Technical Knowledge Affect Using Conversational Agents. 53:1-53:2
- Ricky J. Sethi, Catherine A. Buell, William P. Seeley: WAIVS: An Intelligent Interface for Visual Stylometry Using Semantic Workflows. 54:1-54:2
- June Han, Tom Hope: Pair Matching: Transdisciplinary Study for Introducing Computational Intelligence to Guide Dog Associations. 55:1-55:2
- Fatema Akbar, Ted Grover, Gloria Mark, Michelle X. Zhou: The Effects of Virtual Agents' Characteristics on User Impressions and Language Use. 56:1-56:2
- Felipe Costa, Sixun Ouyang, Peter Dolog, Aonghus Lawlor: Automatic Generation of Natural Language Explanations. 57:1-57:2
- Yong Zheng: Personality-Aware Decision Making In Educational Learning. 58:1-58:2
- Chun-Hua Tsai, Peter Brusilovsky: Explaining Social Recommendations to Casual Users: Design Principles and Opportunities. 59:1-59:2
- Petru-Vasile Cioata, Radu-Daniel Vatavu: In Tandem: Exploring Interactive Opportunities for Dual Input and Output on Two Smartwatches. 60:1-60:2
- Ziang Xiao, Yuqi Yao, Wai-Tat Fu: An Intelligent Educational Platform for Training Spatial Visualization Skills. 61:1-61:2
- Takumi Kawahara, Daisuke Iwai, Kosuke Sato: Dynamic Path Planning of Flying Projector Considering Collision Avoidance with Observer and Bright Projection. 62:1-62:2
- Paritosh Bahirat, Qizhang Sun, Bart P. Knijnenburg: Scenario Context v/s Framing and Defaults in Managing Privacy in Household IoT. 63:1-63:2
- Elayne Ruane, Théo Faure, Ross Smith, Dan Bean, Julie Carson-Berndsen, Anthony Ventresque: BoTest: a Framework to Test the Quality of Conversational Agents Using Divergent Input Examples. 64:1-64:2