CS295J/Literature class 2.11

From VrlWiki
Revision as of 01:30, 14 September 2011 by Diem Tran (talk | contribs)
  • Project Ernestine A research project from way back that demonstrated that modeling the cognitive, motor, and perceptual tasks of telephone operators could predict the efficiency of a new user interface. Surprisingly, the new interface turned out to be less efficient than the old, low-tech version. This paper is just the kind of result I'd like to be able to publish about more complex user interfaces. (Owner: David Laidlaw, discussion: ?, discussant: ?)
  • Cognitive Strategies and Eye Movements for Searching Hierarchical Computer Displays This paper uses predictive modeling and eye-tracking data together to explain search behavior in hierarchical and non-hierarchical layouts. The layouts were lists of items organized in labeled groups, with the labels being either useful (hierarchical condition) or random (non-hierarchical condition). The research question is whether people use different strategies when searching for a target item in each condition. The authors compared their model's predictions to observed eye movements, found a good fit, and on that basis characterized the search strategies using the model. (Owner: Caroline Ziemkiewicz, discussion: ?, discussant: ?)
  • Mapping Human Whole-Brain Structural Networks with Diffusion MRI The authors use diffusion MRI to create network maps in significantly greater detail than previous models of physical connectivity. Their methods allow them to study live humans and model the interconnectivity of neuronal groups as networks with thousands of nodes, versus previous methods that studied fewer than 100 nodes in post-mortem animal subjects. Based on these new experimental methods, they demonstrate that the brain network has small-world structure. (Owner: Stephen Brawner, discussion: ?, discussant: ?)
  • This paper is a good entry point for Ware's other work on neural modeling for visualization. It describes how spatial receptor patterns in the visual cortex enable contour interpretation and related visualization tasks (e.g., particle advection in flow fields). There's also some good discussion about a perception-based approach to visualization, validating visual mappings with perceptual theories. (Owner: Steven Gomez, discussion: ?, discussant: ?)
  • Discusses the merits of muscle-computer interaction (here called muCI), an approach to HCI that detects forearm muscle activity rather than requiring manipulation of an object such as a mouse or keyboard. Michael Spector 14:16, 13 September 2011 (EDT)
  • Discusses a speech detection system that uses both auditory and visual cues to detect speech commands more accurately. It aims to recognize the user's intention to speak, and to ignore background noise or speech not directed at the system. Although fairly dated, this paper is relevant in that it discusses applications of cognition/perception to HCI. Michael Spector 14:16, 13 September 2011 (EDT)
  • The authors use a brain-sensing device to detect activations in the brain while a user performs a task. In this way, they can measure a range of cognitive workload states, known as subjective factors, which are difficult to measure with qualitative studies. They quantify users' workloads in three different UI tasks and show how the usability of a UI design choice corresponds to the low-level cognitive resources a task demands in the brain.
  How this paper relates to our project: since this paper presents a novel approach to measuring the effectiveness of a UI, it is worth considering in our process for evaluating the tools we develop.

(Owner: Diem Tran, discussion: ?, discussant: ?)