BELIV 2014: Paris, France
- Heidi Lam, Petra Isenberg, Tobias Isenberg, Michael Sedlmair:
Proceedings of the Fifth Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization, BELIV 2014, Paris, France, November 10, 2014. ACM 2014, ISBN 978-1-4503-3209-5
Rethinking evaluation level---abstracted task vs. in situ evaluation
- Matthew Brehmer, Michael Sedlmair, Stephen Ingram, Tamara Munzner:
Visualizing dimensionally-reduced data: interviews with analysts and a characterization of task sequences. 1-8
- Alexander Rind, Wolfgang Aigner, Markus Wagner, Silvia Miksch, Tim Lammarsch:
User tasks for evaluation: untangling the terminology throughout visualization design and development. 9-15
- Kirsten M. Winters, Denise H. Lach, Judith Bayard Cushing:
Considerations for characterizing domain problems. 16-22
- Michael Correll, Eric C. Alexander, Danielle Albers, Alper Sarikaya, Michael Gleicher:
Navigating reductionism and holism in evaluation. 23-26
Cognitive processes & interaction
- Eric D. Ragan, John R. Goodall:
Evaluation methodology for comparing memory and communication of analytic processes in visual analytics. 27-34
- Michael Smuc:
Just the other side of the coin?: from error- to insight-analysis. 35-40
- Khairi Reda, Andrew E. Johnson, Jason Leigh, Michael E. Papka:
Evaluating user behavior and strategy during visual exploration. 41-45
- John T. Stasko:
Value-driven evaluation of visualizations. 46-53
New techniques I---eye tracking
- Kuno Kurzhals, Cyrill Fabian Bopp, Jochen Bässler, Felix Ebinger, Daniel Weiskopf:
Benchmark data for evaluating visualization and analysis techniques for eye tracking for video stimuli. 54-60
- Kuno Kurzhals, Brian D. Fisher, Michael Burch, Daniel Weiskopf:
Evaluating visual analytics with eye tracking. 61-69
- Tanja Blascheck, Thomas Ertl:
Towards analyzing eye tracking data for evaluating interactive visualization systems. 70-77
New techniques II---crowdsourcing
- Nafees U. Ahmed, Klaus Mueller:
Gamification as a paradigm for the evaluation of visual analytics systems. 78-86
- Yuet Ling Wong, Niklas Elmqvist:
Crowdster: enabling social navigation in web-based visualization using crowdsourced evaluation. 87-94
- Alfie Abdul-Rahman, Karl J. Proctor, Brian Duffy, Min Chen:
Repeated measures design in crowdsourcing-based experiments for visualization. 95-102
Adopting methods from other fields
- Tanja Mercun:
Evaluation of information visualization techniques: analysing user experience with reaction cards. 103-109
- Alvin Tarrell, Ann L. Fruhling, Rita Borgo, Camilla Forsell, Georges G. Grinstein, Jean Scholtz:
Toward visualization-specific heuristic evaluation. 110-117
- Simone Kriglstein, Margit Pohl, Nikolaus Suchy, Johannes Gärtner, Theresia Gschwandtner, Silvia Miksch:
Experiences and challenges with evaluation methods in practice: a case study. 118-125
- Leslie M. Blaha, Dustin Arendt, Fairul Mohd-Zaid:
More bang for your research buck: toward recommender systems for visual analytics. 126-133
- Michaël Aupetit:
Sanity check for class-coloring-based evaluation of dimension reduction techniques. 134-141
Experience reports
- Sung-Hee Kim, Ji Soo Yi, Niklas Elmqvist:
Oopsy-daisy: failure stories in quantitative evaluation studies for visualizations. 142-146
- Matthew Brehmer, Sheelagh Carpendale, Bongshin Lee, Melanie Tory:
Pre-design empiricism for information visualization: scenarios, methods, and challenges. 147-151
- Linda T. Kaastra, Brian D. Fisher:
Field experiment methodology for pair analytics. 152-159
- Jean Scholtz, Oriana Love, Mark A. Whiting, Duncan Hodges, Lia R. Emanuel, Danaë Stanton Fraser:
Utility evaluation of models. 160-167