ETRA 2010: Austin, Texas, USA
- Carlos Hitoshi Morimoto, Howell O. Istance, Aulikki Hyrskykari, Qiang Ji:
Proceedings of the 2010 Symposium on Eye-Tracking Research & Applications, ETRA 2010, Austin, Texas, USA, March 22-24, 2010. ACM 2010, ISBN 978-1-60558-994-7
Keynote address
- I. Scott MacKenzie:
An eye on input: research challenges in using the eye for computer input control. 11-12
Long papers 1 -- Advances in eye tracking technology
- Dan Witzner Hansen, Javier San Agustin, Arantxa Villanueva:
Homography normalization for robust gaze estimation in uncalibrated setups. 13-20
- John M. Franchak, Kari S. Kretch, Kasey C. Soska, Jason S. Babcock, Karen E. Adolph:
Head-mounted eye-tracking of infants' natural interactions: a new method. 21-27
- Dmitri Model, Moshe Eizenman:
User-calibration-free remote gaze estimation system. 29-36
Short papers 1 -- Eye tracking applications and data analysis
- Yun Zhang, Hong Fu, Zhen Liang, Zheru Chi, David Dagan Feng:
Eye movement as an interaction mechanism for relevance feedback in a content-based image retrieval system. 37-40
- Zhen Liang, Hong Fu, Yun Zhang, Zheru Chi, David Dagan Feng:
Content-based image retrieval using a combination of visual features and eye tracking data. 41-44
- David Rosengrant:
Gaze scribing in physics problem solving. 45-48
- Sheree Josephson, Michael E. Holmes:
Have you seen any of these men?: looking at whether eyewitnesses use scanpaths to recognize suspects in photo lineups. 49-52
- Minoru Nakayama, Yuko Hayashi:
Estimation of viewer's response for contextual understanding of tasks using features of eye-movements. 53-56
- Oleg V. Komogortsev, Sampath Jayarathna, Cecilia R. Aragon, Mahmoud Mechehoul:
Biometric identification via an oculomotor plant mathematical model. 57-60
Short papers 2 -- Poster presentations
- Roxanne L. Canosa:
Saliency-based decision support. 61-63
- Oleg V. Komogortsev, Sampath Jayarathna, Do Hyong Koh, Sandeep A. Munikrishne Gowda:
Qualitative and quantitative scoring and evaluation of the eye movement classification algorithms. 65-68
- Alberto Faro, Daniela Giordano, Concetto Spampinato, Davide De Tommaso, Simona Ullo:
An interactive interface for remote administration of clinical tests based on eye tracking. 69-72
- Alberto Faro, Daniela Giordano, Carmelo Pino, Concetto Spampinato:
Visual attention for implicit relevance feedback in a content based image retrieval. 73-76
- Javier San Agustin, Henrik H. T. Skovsgaard, Emilie Møllenbach, Maria Barret, Martin Tall, Dan Witzner Hansen, John Paulin Hansen:
Evaluation of a low-cost open-source gaze tracker. 77-80
- Craig Hennessey, Andrew T. Duchowski:
An open source eye-gaze interface: expanding the adoption of eye-gaze in everyday applications. 81-84
- Meredith McLendon, Ann McNamara, Tim McLaughlin, Ravindra Dwivedi:
Using eye tracking to investigate important cues for representative creature motion. 85-88
- Hans-Joachim Bieg, Lewis L. Chuang, Roland W. Fleming, Harald Reiterer, Heinrich H. Bülthoff:
Eye and pointer coordination in search and selection tasks. 89-92
- Mario H. Urbina, Maike Lorenz, Anke Huckauf:
Pies with EYEs: the limits of hierarchical pie menus in gaze control. 93-96
- Brian Daugherty, Andrew T. Duchowski, Donald H. House, Celambarasan Ramasamy:
Measuring vergence over stereoscopic video with a remote eye tracker. 97-100
- Thomas Grindinger, Andrew T. Duchowski, Michael W. Sawyer:
Group-wise similarity and classification of aggregate scanpaths. 101-104
- Melih Kandemir, Veli-Matti Saarinen, Samuel Kaski:
Inferring object relevance from gaze in dynamic scenes. 105-108
- Sophie Stellmach, Lennart E. Nacke, Raimund Dachselt:
Advanced gaze visualizations for three-dimensional virtual environments. 109-112
- Vasily G. Moshnyaga:
The use of eye tracking for PC energy management. 113-116
- Stefan Kohlbecher, Klaus Bartl, Stanislavs Bardins, Erich Schneider:
Low-latency combined eye and head tracking system for teleoperating a robotic head in real-time. 117-120
- Tobit Kollenberg, Alexander Neumann, Dorothe Schneider, Tessa-Karina Tews, Thomas Hermann, Helge J. Ritter, Angelika Dierker, Hendrik Koesling:
Visual search in the (un)real world: how head-mounted displays affect eye movements, head movements and target detection. 121-124
- Pieter J. Blignaut:
Visual span and other parameters for the generation of heatmaps. 125-128
- Jeffrey B. Mulligan, Kevin Gabayan:
Robust optical eye detection during head movement. 129-132
- Erik Wästlund, Kay Sponseller, Ola Pettersson:
What you see is where you go: testing a gaze-driven power wheelchair for individuals with severe multiple disabilities. 133-136
- Flavio Luiz Coutinho, Carlos Hitoshi Morimoto:
A depth compensation method for cross-ratio based eye tracking. 137-140
- Oskar Palinko, Andrew L. Kun, Alexander Shyrokov, Peter A. Heeman:
Estimating cognitive load using remote eye tracking in a driving simulator. 141-144
- Henrik H. T. Skovsgaard, Julio C. Mateo, John M. Flach, John Paulin Hansen:
Small-target selection with gaze alone. 145-148
- Geoffrey Tien, M. Stella Atkins, Bin Zheng, Colin Swindells:
Measuring situation awareness of surgeons in laparoscopic training. 149-152
- Juyeon Park, Emily Woods, Marilyn DeLong:
Quantification of aesthetic viewing using eye-tracking technology: the influence of previous training in apparel design. 153-155
- Kentaro Takemura, Yuji Kohashi, Tsuyoshi Suenaga, Jun Takamatsu, Tsukasa Ogasawara:
Estimating 3D point-of-regard and visualizing gaze trajectories under natural head movements. 157-160
- Yang Liu, Lawrence K. Cormack, Alan C. Bovik:
Natural scene statistics at stereo fixations. 161-164
- Michiya Yamamoto, Takashi Nagamatsu, Tomio Watanabe:
Development of eye-tracking pen display based on stereo bright pupil technique. 165-168
- Detlev Droege, Dietrich Paulus:
Pupil center detection in low resolution images. 169-172
- Tanya René Beelders, Pieter J. Blignaut:
Using vision and voice to create a multimodal interface for Microsoft Word 2007. 173-176
- Emilie Møllenbach, Martin Lillholm, Alastair G. Gale, John Paulin Hansen:
Single gaze gestures. 177-180
- Zakria Hussain, Kitsuchart Pasupa, John Shawe-Taylor:
Learning relevant eye movement feature spaces across users. 181-185
- Tomi Kinnunen, Filip Sedlak, Roman Bednarik:
Towards task-independent person authentication using eye movement signals. 187-190
- Yvonne Kammerer, Wolfgang Beinhauer:
Gaze-based web search: the impact of interface design on search result selection. 191-194
- Scott B. Stevenson, Austin Roorda, Girish Kumar:
Eye tracking with the adaptive optics scanning laser ophthalmoscope. 195-198
- Elias Daniel Guestrin, Moshe Eizenman:
Listing's and Donders' laws and the estimation of the point-of-gaze. 199-202
Long papers 2 -- Scanpath representation and comparison methods
- Joseph H. Goldberg, Jonathan I. Helfman:
Visual scanpath representation. 203-210
- Halszka Jarodzka, Kenneth Holmqvist, Marcus Nyström:
A vector-based, multidimensional scanpath similarity measure. 211-218
- Andrew T. Duchowski, Jason Driver, Sheriff Jolaoso, William Tan, Beverly N. Ramey, Ami Robbins:
Scanpath comparison revisited. 219-226
Long papers 3 -- Analysis and interpretation of eye movements
- Joseph H. Goldberg, Jonathan I. Helfman:
Scanpath clustering and aggregation. 227-234
- Wayne J. Ryan, Andrew T. Duchowski, Ellen A. Vincent, Dina Battisto:
Match-moving for area-based analysis of eye movements in natural tasks. 235-242
- Miquel Prats, Steve Garner, Iestyn Jowers, Alison McKay, Nieves Pedreira:
Interpretation of geometric shapes: an eye movement study. 243-250
Short papers 3 -- Advances in eye tracking technology
- Takashi Nagamatsu, Ryuichi Sugano, Yukina Iwamoto, Junzo Kamahara, Naoki Tanaka:
User-calibration-free gaze tracking with estimation of the horizontal angles between the visual and the optical axes of both eyes. 251-254
- Takashi Nagamatsu, Yukina Iwamoto, Junzo Kamahara, Naoki Tanaka, Michiya Yamamoto:
Gaze estimation method based on an aspherical model of the cornea: surface of revolution about the optical axis of the eye. 255-258
- Jeff Klingner:
The pupillometric precision of a remote video eye tracker. 259-262
- Margarita Vinnikov, Robert S. Allison:
Contingency evaluation of gaze-contingent displays for real-time visual field simulations. 263-266
- Daniel F. Pontillo, Thomas B. Kinsman, Jeff B. Pelz:
SemantiCode: using content similarity and database-driven matching to code wearable eyetracker gaze data. 267-270
- Carlos Hitoshi Morimoto, Arnon Amir:
Context switching for fast key selection in text entry applications. 271-274
Long papers 4 -- Analysis and understanding of visual tasks
- Jeff Klingner:
Fixation-aligned pupillary response averaging. 275-282
- Pernilla Qvarfordt, Jacob T. Biehl, Gene Golovchinsky, Tony Dunnigan:
Understanding the benefits of gaze enhanced visual search. 283-290
- David R. Hardoon, Kitsuchart Pasupa:
Image ranking with implicit feedback from eye movements. 291-298
Long papers 5 -- Gaze interfaces and interactions
- Yvonne Kammerer, Peter Gerjets:
How the interface design influences users' spontaneous trustworthiness evaluations of web search results: comparing a list and a grid interface. 299-306
- Michael Dorr, Halszka Jarodzka, Erhardt Barth:
Space-variant spatio-temporal filtering of video for gaze visualization and perceptual learning. 307-314
- Mario H. Urbina, Anke Huckauf:
Alternatives to single character entry and dwell time selection on eye typing. 315-322
Long papers 6 -- Eye tracking and accessibility
- Howell O. Istance, Aulikki Hyrskykari, Lauri Immonen, Santtu Mansikkamaa, Stephen Vickers:
Designing gaze gestures for gaming: an investigation of performance. 323-330
- Marco Porta, Alice Ravarelli, Giovanni Spagnoli:
ceCursor, a contextual eye cursor for general pointing in windows environments. 331-337
- Behrooz Ashtiani, I. Scott MacKenzie:
BlinkWrite2: an improved text entry method using eye blinks. 339-345