CS295J/Contributions for class 12

== A mixed-initiative system for interface design ==  
Owner: Eric


Contributions are based on the following proposal:
=== Proposal Overview ===
Note: click here for [http://vrl.cs.brown.edu/wiki/images/b/b4/Flowchart.pdf flowchart].


We propose a framework for interface evaluation and recommendation that integrates behavioral models and design guidelines from both cognitive science and HCI. The framework behaves like a committee of specialized experts, each of which assesses the interface using its own particular knowledge of HCI or cognitive science. For example, an expert might base its evaluation on the GOMS method, Fitts's law, Maeda's design principles, or cognitive models of learning and memory. An aggregator collects these assessments, weights each expert's opinion according to its past accuracy, and outputs to the developer a merged evaluation score and a weighted set of recommendations.
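
A minimal sketch of how such a committee and aggregator could be organized in Python. The class names, the 0-1 scoring scale, the interface representation, and the toy Fitts's-law expert are illustrative assumptions, not part of the proposal itself.

<pre>
import math
from dataclasses import dataclass, field
from typing import Protocol


@dataclass
class Assessment:
    """One expert's opinion of an interface: a score plus recommendations."""
    score: float                                   # assumed 0-1 usability scale
    recommendations: list[str] = field(default_factory=list)


class Expert(Protocol):
    """Any evaluator (GOMS, Fitts's law, design heuristics, ...) exposes evaluate()."""
    name: str
    def evaluate(self, interface: dict) -> Assessment: ...


@dataclass
class FittsLawExpert:
    """Toy expert: penalizes interfaces whose targets are small and far away."""
    name: str = "fitts"

    def evaluate(self, interface: dict) -> Assessment:
        # Fitts index of difficulty per button (hypothetical 'distance'/'width' fields).
        ids = {b["label"]: math.log2(2 * b["distance"] / b["width"])
               for b in interface["buttons"]}
        mean_id = sum(ids.values()) / len(ids)
        recs = [f"enlarge or move button '{label}'"
                for label, idx in ids.items() if idx > mean_id]
        return Assessment(score=1.0 / (1.0 + mean_id), recommendations=recs)


class Aggregator:
    """Merges expert assessments, weighting each expert by its past accuracy."""

    def __init__(self, experts: list[Expert]):
        self.experts = experts
        self.weights = {e.name: 1.0 for e in experts}   # uniform trust to start

    def evaluate(self, interface: dict) -> tuple[float, list[tuple[float, str]]]:
        assessments = {e.name: e.evaluate(interface) for e in self.experts}
        total = sum(self.weights.values())
        merged = sum(self.weights[n] * a.score for n, a in assessments.items()) / total
        # Each recommendation carries the weight of the expert that produced it.
        recs = sorted(((self.weights[n], r)
                       for n, a in assessments.items()
                       for r in a.recommendations), reverse=True)
        return merged, recs
</pre>

Under this sketch, adding a GOMS or design-heuristics expert only means implementing another evaluate() method; the aggregator itself stays unchanged.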
Different users have different abilities and interface preferences. For example, a user at NASA probably cares more about interface accuracy than speed. By passing this information to our committee of experts, we can create interfaces that are tuned to maximize the utility of a particular user type.
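
As a small illustration of what tuning to a user type could mean, a user profile can be encoded as per-dimension preference weights and used to rank candidate interfaces by predicted utility. The dimensions, weights, and candidate scores below are made up for the example.

<pre>
# Predicted per-dimension scores for two hypothetical interface candidates,
# e.g. as produced by the committee of experts.
candidates = {
    "dense_toolbar": {"speed": 0.9, "accuracy": 0.6, "ease_for_novices": 0.4},
    "guided_wizard": {"speed": 0.5, "accuracy": 0.9, "ease_for_novices": 0.8},
}

# A NASA-style operator profile: accuracy dominates, raw speed matters less.
nasa_profile = {"speed": 0.1, "accuracy": 0.7, "ease_for_novices": 0.2}

def utility(scores: dict[str, float], profile: dict[str, float]) -> float:
    """Weighted sum of predicted scores under one user type's preferences."""
    return sum(weight * scores[dim] for dim, weight in profile.items())

best = max(candidates, key=lambda name: utility(candidates[name], nasa_profile))
print(best)   # 'guided_wizard' wins for this accuracy-weighted profile
</pre>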
We evaluate our framework through a series of user studies. Interfaces passed to our committee of experts receive evaluation scores along several dimensions, such as time, accuracy, and ease of use for novices versus experts. Comparing these predicted scores to the scores actually observed in user studies measures the framework's predictive performance. The aggregator can also retroactively re-weight the experts' opinions to determine which weighting would have best predicted user behavior for a given interface, and we can observe whether that weighting generalizes to other interface evaluations.
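
One way the retroactive re-weighting could work is a multiplicative-weights (Hedge-style) update, sketched below. The squared-error loss, learning rate, and example numbers are assumptions for illustration; checking generalization would mean applying the learned weights to interfaces held out of the studies.

<pre>
import math

def reweight(weights: dict[str, float],
             predictions: dict[str, float],
             observed: float,
             eta: float = 2.0) -> dict[str, float]:
    """Downweight experts whose prediction missed the observed user-study score."""
    updated = {name: w * math.exp(-eta * (predictions[name] - observed) ** 2)
               for name, w in weights.items()}
    total = sum(updated.values())
    return {name: w / total for name, w in updated.items()}   # renormalize

# Hypothetical data: three experts' predicted scores for two studied interfaces,
# alongside the scores actually observed in the user studies.
weights = {"goms": 1 / 3, "fitts": 1 / 3, "heuristics": 1 / 3}
studies = [
    ({"goms": 0.70, "fitts": 0.40, "heuristics": 0.60}, 0.65),
    ({"goms": 0.80, "fitts": 0.30, "heuristics": 0.70}, 0.75),
]
for predictions, observed in studies:
    weights = reweight(weights, predictions, observed)
# Experts whose predictions tracked the observed scores now carry more weight;
# reusing these weights on a new interface tests whether they generalize.
</pre>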


=== Contributions ===
* Coming soon...
