CS295J/Research proposal (draft 2)
Introduction
We propose a framework for interface evaluation and recommendation that integrates behavioral models and design guidelines from both cognitive science and HCI. Our framework behaves like a committee of specialized experts, where each expert provides its own assessment of the interface, given its particular knowledge of HCI or cognitive science. For example, an expert may provide an evaluation based on the GOMS method, Fitts's law, Maeda's design principles, or cognitive models of learning and memory. An aggregator collects all of these assessments and weights the opinions of each expert, and outputs to the developer a merged evaluation score and a weighted set of recommendations.
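To make the committee-of-experts structure concrete, here is a minimal Python sketch of the aggregator; the Expert and Assessment classes, the weighting scheme, and all names are hypothetical placeholders rather than a committed design.

    # Minimal sketch of the expert-committee architecture. Everything here
    # (Expert, Assessment, the weighting scheme) is hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Assessment:
        score: float                # normalized evaluation score in [0, 1]
        recommendations: list[str]  # suggested interface improvements

    class Expert:
        """One specialized evaluator, e.g. a Fitts's-law or GOMS module."""
        def assess(self, interface) -> Assessment:
            raise NotImplementedError

    class Aggregator:
        def __init__(self, weighted_experts):
            self.weighted_experts = weighted_experts  # list of (expert, weight)

        def evaluate(self, interface):
            results = [(w, e.assess(interface)) for e, w in self.weighted_experts]
            total = sum(w for w, _ in results)
            merged_score = sum(w * a.score for w, a in results) / total
            # Each recommendation inherits its expert's weight, yielding a
            # ranked, weighted set of suggestions for the developer.
            recs = sorted(((w, r) for w, a in results
                           for r in a.recommendations), reverse=True)
            return merged_score, recs

In this framing, adding a new source of cognitive or HCI knowledge means adding one more expert to the list, which is what keeps the architecture easy to extend.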
Systematic methods of estimating human performance with computer interfaces are used only sparingly, despite their obvious benefits, because of the overhead involved in implementing them. To test an interface, both manual coding systems like the GOMS variants and user simulations like those based on ACT-R/PM and EPIC require detailed pseudo-code descriptions of the user's workflow with the application interface. Any change to the interface then requires extensive changes to the pseudo-code, a major problem given the trial-and-error nature of interface design. Updating the models themselves is even more complicated: even an expert in CPM-GOMS, for example, cannot necessarily adapt it to account for results from new cognitive research.
Our proposal makes automatic interface evaluation easier to use in several ways. First, we propose to divide the input to the system into three separate parts: functionality, user traces, and interface. Because the functionality is separated from the interface, even radical interface changes require updating only the interface part of the input. The user traces are also defined over the functionality, so they too carry over across different interfaces. Second, the parallel modular architecture lowers the "entry cost" of using the tool. The system includes a broad array of evaluation modules, some very simple and others more complex. The simpler modules use only a subset of the input that a system like GOMS or ACT-R would require. This means that while more input still leads to better output, interface designers can get minimal evaluations from minimal information. For example, a visual search module may not require any functionality or user traces to determine whether all interface elements are distinct enough to be easy to find. Finally, a parallel modular architecture is much easier to augment with new cognitive and design evaluations.
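As a rough illustration of this three-part input and the graded entry cost, the sketch below lets each module declare which parts of the input it needs; the field names and the two example modules are assumptions for illustration, not part of the proposal itself.

    # Sketch of the three-part input; field names and modules are hypothetical.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class SystemInput:
        interface: object                        # layout and widgets
        functionality: Optional[object] = None   # what the application can do
        user_traces: Optional[object] = None     # goal-level usage logs

    class Module:
        requires = ("interface",)  # the subset of the input this module needs

        def can_run(self, inp: SystemInput) -> bool:
            return all(getattr(inp, name) is not None for name in self.requires)

    class VisualSearchModule(Module):
        # Needs only the interface: checks that elements are visually distinct.
        requires = ("interface",)

    class GOMSModule(Module):
        # A full GOMS-style analysis needs all three parts of the input.
        requires = ("interface", "functionality", "user_traces")

A designer who supplies only an interface description still gets the output of every module whose can_run check passes, which is the intended low entry cost.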
Overview of Contributions
Note: For reference, this is the aggregate set of contributions from last week. Maybe we can edit/add/remove from this as needed.
- Design and user-study evaluation of novel techniques for collecting and filtering user traces with respect to user goals.
- Extensible, low-cost architecture for integrating pupil-tracking, muscle-activity monitoring, and auditory recognition with user traces in existing applications.
- System for isolating cognitive, perceptual, and motor tasks from an interface design to generate CPM-GOMS models for analysis.
- Design and quantitative evaluation of semi-automated techniques for extracting critical paths from an existing CPM-GOMS model.
- Novel algorithm for analyzing and optimizing critical paths based on established research in cognitive science.
- A design tool that can provide a designer with recommendations for interface improvements, based on a unified matrix of cognitive principles and heuristic design guidelines.
- A language for abstractly representing user interfaces in terms of the layout of graphical components and the functional relationships between these components (see the sketch after this list).
- A system for generating interaction histories within user interfaces to facilitate individual and collaborative scientific discovery, and to enable researchers to more easily document and analyze user behavior.
- A system that takes user traces and creates a GOMS model that decomposes user actions into various cognitive, perceptual, and motor control tasks.
- Additional evaluation modules based on other cognitive and HCI models and guidelines.
- A design tool that can provide a designer with recommendations for interface improvements. These recommendations can be made for a specific type of user or for the average user, as expressed by a utility function.
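One possible shape for the interface-representation language mentioned in the list above is sketched here, assuming a simple graph of components and functional links; the component kinds and the relation vocabulary are invented for illustration.

    # Hypothetical sketch of an abstract UI description: graphical components
    # plus the functional relationships between them, toolkit-independent.
    from dataclasses import dataclass, field

    @dataclass
    class Component:
        name: str
        kind: str                          # e.g. "button", "list", "text_field"
        bounds: tuple[int, int, int, int]  # x, y, width, height

    @dataclass
    class InterfaceSpec:
        components: list[Component] = field(default_factory=list)
        # Functional links as (source, target, relation) triples, e.g.
        # ("search_button", "results_list", "populates").
        relations: list[tuple[str, str, str]] = field(default_factory=list)

    spec = InterfaceSpec(
        components=[
            Component("search_field", "text_field", (10, 10, 200, 24)),
            Component("search_button", "button", (220, 10, 60, 24)),
            Component("results_list", "list", (10, 44, 270, 300)),
        ],
        relations=[("search_button", "results_list", "populates")],
    )

Because the functional relations are stated separately from the layout, an evaluation module can reason about either aspect alone, matching the separation of functionality and interface described in the introduction.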
Background / Related Work
Methodology
Collecting User Traces
Owner: Trevor
Generalizing User Traces
Owner: Trevor
Evaluation and Recommendation via Modules
Owner: E.J., Jon?
Sample Modules
CPM-GOMS
Owner: Gideon, Steven
Cognitive/HCI Guidelines
Owner: E.J.
Fitts's Law
Interruptions
Owner: Andrew
Interaction Prediction
Owner: Trevor