<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>http://vrl.cs.brown.edu/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Adam+Darlow</id>
	<title>VrlWiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="http://vrl.cs.brown.edu/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Adam+Darlow"/>
	<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php/Special:Contributions/Adam_Darlow"/>
	<updated>2026-04-19T22:27:24Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.1</generator>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal_(draft_2)&amp;diff=3694</id>
		<title>CS295J/Research proposal (draft 2)</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal_(draft_2)&amp;diff=3694"/>
		<updated>2009-08-21T19:39:48Z</updated>

		<summary type="html">&lt;p&gt;Adam Darlow: /* Preliminary Results */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
* Owners: [[User:Adam Darlow | Adam Darlow]], [[User:Eric Sodomka | Eric Sodomka]], [[User: Trevor O&#039;Brien | Trevor]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Propose:&#039;&#039;&#039; The design, application and evaluation of a novel, cognition-based, computational framework for assessing interface design and providing automated suggestions to optimize usability.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Evaluation Methodology:&#039;&#039;&#039;  Our techniques will be evaluated quantitatively through a series of user-study trials, as well as qualitatively by a team of expert interface designers. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Contributions and Significance:&#039;&#039;&#039;  We expect this work to make the following contributions:  &lt;br /&gt;
# design-space analysis and quantitative evaluation of cognition-based techniques for assessing user interfaces.  &lt;br /&gt;
# design and quantitative evaluation of techniques for suggesting optimized interface-design changes. &lt;br /&gt;
# an extensible, multimodal software architecture for capturing user traces integrated with pupil-tracking data, auditory recognition, and muscle-activity monitoring.  &lt;br /&gt;
# specification (language?) of how to define an interface evaluation module and how to integrate it into a larger system.  &lt;br /&gt;
# (there may be more here, like testing different cognitive models, generating a markup language to represent interfaces, maybe even a unique metric space for interface usability)&lt;br /&gt;
&lt;br /&gt;
--&lt;br /&gt;
&lt;br /&gt;
We propose a framework for interface evaluation and recommendation that integrates behavioral models and design guidelines from both cognitive science and HCI. Our framework behaves like a committee of specialized experts, where each expert provides its own assessment of the interface, given its particular knowledge of HCI or cognitive science. For example, an expert may provide an evaluation based on the GOMS method, Fitts&#039;s law, Maeda&#039;s design principles, or cognitive models of learning and memory. An aggregator collects all of these assessments, weights the opinion of each expert, and outputs to the developer a merged evaluation score and a weighted set of recommendations.&lt;br /&gt;
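As a minimal sketch of this committee architecture (the module names, weights, 0-to-1 scoring scale, and Fitts&#039;s-law constants below are illustrative assumptions, not part of the proposal), one expert might score pointing efficiency via Fitts&#039;s law while the aggregator merges per-expert scores as a weighted average:&lt;br /&gt;

```python
import math

def fitts_time(distance, width, a=0.1, b=0.15):
    """Fitts's law movement time; a and b are hypothetical
    device-specific constants (seconds)."""
    return a + b * math.log2(2 * distance / width)

def aggregate(expert_scores, weights):
    """Weighted average of per-expert usability scores on a 0-to-1 scale."""
    total = sum(weights[name] for name in expert_scores)
    return sum(weights[name] * score
               for name, score in expert_scores.items()) / total

# Hypothetical experts: each returns its own usability score.
scores = {"fitts": 0.8, "goms": 0.6, "maeda": 0.7}
weights = {"fitts": 1.0, "goms": 2.0, "maeda": 1.0}
merged = aggregate(scores, weights)  # 0.675
```

A low merged score would then arrive alongside the recommendations of the most heavily weighted dissenting experts.&lt;br /&gt;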
&lt;br /&gt;
Systematic methods of estimating human performance with computer interfaces are used only sparingly despite their obvious benefits, the reason being the overhead involved in implementing them. In order to test an interface, both manual coding systems like the GOMS variations and user simulations like those based on ACT-R/PM and EPIC require detailed pseudo-code descriptions of the user&#039;s workflow with the application interface. Any change to the interface then requires extensive changes to the pseudo-code, a major problem given the trial-and-error nature of interface design. Updating the models themselves is even more complicated. Even an expert in CPM-GOMS, for example, can&#039;t necessarily adapt it to take into account results from new cognitive research.&lt;br /&gt;
&lt;br /&gt;
Our proposal makes automatic interface evaluation easier to use in several ways. First, we propose to divide the input to the system into three separate parts: functionality, user traces, and interface. By separating the functionality from the interface, even radical interface changes will require updating only that part of the input. The user traces are also defined over the functionality, so they too translate across different interfaces. Second, the parallel modular architecture allows for a lower &amp;quot;entry cost&amp;quot; for using the tool. The system includes a broad array of evaluation modules, some very simple and some more complex. The simpler modules use only a subset of the input that a system like GOMS or ACT-R would require. This means that while more input will still lead to better output, interface designers can get minimal evaluations with only minimal information. For example, a visual search module may not require any functionality or user traces in order to determine whether all interface elements are distinct enough to be easy to find. Finally, a parallel modular architecture is much easier to augment with relevant cognitive and design evaluations.&lt;br /&gt;
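A minimal sketch of this three-part separation (the class and field names here are hypothetical, chosen only to illustrate the split):&lt;br /&gt;

```python
from dataclasses import dataclass

@dataclass
class Functionality:
    """What the application can do, independent of any interface."""
    operations: list  # e.g. ["open_file", "save_file"]

@dataclass
class Interface:
    """How each operation is exposed as a widget; swapping this part
    re-targets every trace without editing the traces themselves."""
    bindings: dict  # operation name mapped to a widget description

@dataclass
class UserTrace:
    """A recorded workflow, defined over operations rather than widgets."""
    steps: list

def replay(trace, interface):
    """Translate an operation-level trace into widget-level actions."""
    return [interface.bindings[op] for op in trace.steps]
```

Because traces reference operations, a radical interface redesign changes only the Interface part of the input, and a simple evaluation module can ignore whichever parts it does not need.&lt;br /&gt;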
&lt;br /&gt;
= Background / Related Work =&lt;br /&gt;
&lt;br /&gt;
Each person should add the background related to their specific aims.&lt;br /&gt;
&lt;br /&gt;
* Steven Ellis - Cognitive models of HCI, including GOMS variations and ACT-R&lt;br /&gt;
* EJ - Design Guidelines&lt;br /&gt;
* Jon - Perception and Action&lt;br /&gt;
* Andrew - Multiple task environments&lt;br /&gt;
* Gideon - Cognition and dual systems&lt;br /&gt;
* Ian - Interface design process&lt;br /&gt;
* Trevor - User trace collection methods (especially any eye-tracking, EEG, ... you want to suggest using)&lt;br /&gt;
&lt;br /&gt;
== Cognitive Models ==&lt;br /&gt;
I plan to port over most of the background on cognitive models of HCI from the old proposal&lt;br /&gt;
&lt;br /&gt;
Additions will comprise:&lt;br /&gt;
*CPM-GOMS as a bridge from GOMS architecture to the promising procedural optimization of the Model Human Processor&lt;br /&gt;
**Context of CPM development, discuss its relation to original GOMS and KLM&lt;br /&gt;
***Establish the tasks which were relevant for optimization when CPM was developed and note that its obsolescence may have been unavoidable&lt;br /&gt;
**Focus on CPM as the first step in transitioning from descriptive data, provided by mounting efforts in the cognitive sciences realm to discover the nature of task processing and accomplishment, to prescriptive algorithms which can predict an interface’s efficiency and suggest improvements&lt;br /&gt;
**CPM’s purpose as an abstraction of cognitive processing – a symbolic representation designed not for accuracy but for precision&lt;br /&gt;
**CPM’s successful trials, e.g. Ernestine&lt;br /&gt;
***Implications of this project include CPM’s ability to accurately estimate processing at a psychomotor level&lt;br /&gt;
***Project does suggest limitations, however, when one attempts to examine more complex tasks which involve deeper and more numerous cognitive processes&lt;br /&gt;
*ACT-R as an example of a progressive cognitive modeling tool&lt;br /&gt;
**A tool clearly built by and for cognitive scientists, and as a result presents a much more accurate view of human processing – helpful for our research&lt;br /&gt;
**Built-in automation, which now seems to be a standard feature of cognitive modeling tools&lt;br /&gt;
**Still an abstraction of cognitive processing, but makes adaptation to cutting-edge cognitive research findings an integral aspect of its modular structure&lt;br /&gt;
**Expand on its focus on multi-tasking, taking what was a huge advance between GOMS and its CPM variation and bringing the simulation several steps closer to approximating the nature of cognition in regards to HCI&lt;br /&gt;
**Far more accessible both for researchers and the lay user/designer in its portability to LISP, pre-construction of modules representing cognitive capacities and underlying algorithms modeling paths of cognitive processing&lt;br /&gt;
&lt;br /&gt;
==Design guidelines==&lt;br /&gt;
A multitude of rule sets exist for the design of not only interfaces, but also architecture, city planning, and software development.  They can range in scale from one primary rule to as many as Christopher Alexander&#039;s 253 rules for urban environments,&amp;lt;ref&amp;gt;[http://hci.rwth-aachen.de/materials/publications/borchers2000a.pdf Borchers, Jan O.  &amp;quot;A Pattern Approach to Interaction Design.&amp;quot;  2000.]&amp;lt;/ref&amp;gt; which he introduced along with the concept of design patterns in the 1970s.  Research has likewise been conducted on the use of these rules:&amp;lt;ref&amp;gt;http://stl.cs.queensu.ca/~graham/cisc836/lectures/readings/tetzlaff-guidelines.pdf&amp;lt;/ref&amp;gt; guidelines are often only partially understood, indistinct to the developer, and &amp;quot;fraught&amp;quot; with potential usability problems in real-world situations.&lt;br /&gt;
&lt;br /&gt;
===Application to AUE===&lt;br /&gt;
And yet, the vast majority of guideline sets, including the most popular rulesets, have been arrived at heuristically.  The most successful, such as Raskin&#039;s and Shneiderman&#039;s, have been forged from years of observation rather than empirical study and experimentation.  The problem is similar to the problem of circular logic faced by automated usability evaluations: an automated system is limited in the suggestions it can offer to a set of preprogrammed guidelines which have often not been subjected to rigorous experimentation.&amp;lt;ref&amp;gt;[http://www.eecs.berkeley.edu/Pubs/TechRpts/2000/CSD-00-1105.pdf Ivory, M and Hearst, M.  &amp;quot;The State of the Art in Automated Usability Evaluation of User Interfaces.&amp;quot; ACM Computing Surveys (CSUR), 2001.]&amp;lt;/ref&amp;gt;  In the vast majority of existing studies, emphasis has traditionally been placed on either the development of guidelines or the application of existing guidelines to automated evaluation.  A mutually-reinforcing development of both simultaneously has not been attempted.&lt;br /&gt;
&lt;br /&gt;
Overlap between rulesets is unavoidable.  For our purposes of evaluating existing rulesets efficiently, without extracting and analyzing each rule individually, it may be desirable to identify the overarching &#039;&#039;principles&#039;&#039; or &#039;&#039;philosophy&#039;&#039; (max. 2 or 3) for a given ruleset and to determine their quantitative relevance to problems of cognition.&lt;br /&gt;
&lt;br /&gt;
===Popular and seminal examples===&lt;br /&gt;
Shneiderman&#039;s [http://faculty.washington.edu/jtenenbg/courses/360/f04/sessions/schneidermanGoldenRules.html Eight Golden Rules] date to 1987 and are arguably the most cited.  They are heuristic, but can be loosely classified by cognitive objective: at least two rules apply primarily to &#039;&#039;repeated use&#039;&#039; as opposed to &#039;&#039;discoverability&#039;&#039;.  Up to five of Shneiderman&#039;s rules emphasize &#039;&#039;predictability&#039;&#039; in the outcomes of operations and &#039;&#039;increased feedback and control&#039;&#039; in the agency of the user.  His final rule, paradoxically, removes control from the user by suggesting a reduced short-term memory load, which we can arguably classify as &#039;&#039;simplicity&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Raskin&#039;s [http://www.mprove.de/script/02/raskin/designrules.html Design Rules] are classified into five principles by the author, augmented by definitions and supporting rules.  While one principle is primarily aesthetic (a design problem arguably out of the bounds of this proposal) and one is a basic endorsement of testing, the remaining three begin to reflect philosophies similar to Shneiderman&#039;s: reliability or &#039;&#039;predictability&#039;&#039;, &#039;&#039;simplicity&#039;&#039; or &#039;&#039;efficiency&#039;&#039; (which we can construe as two sides of the same coin), and, finally, a new concept he introduces: &#039;&#039;uninterruptibility&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Maeda&#039;s [http://lawsofsimplicity.com/?cat=5&amp;amp;order=ASC Laws of Simplicity] are fewer, and ostensibly emphasize &#039;&#039;simplicity&#039;&#039; exclusively, although elements of &#039;&#039;use&#039;&#039; as related by Shneiderman&#039;s rules and &#039;&#039;efficiency&#039;&#039; as defined by Raskin may be facets of this simplicity.  Google&#039;s corporate mission statement presents [http://www.google.com/corporate/ux.html Ten Principles], only half of which can be considered true interface guidelines.  &#039;&#039;Efficiency&#039;&#039; and &#039;&#039;simplicity&#039;&#039; are cited explicitly, aesthetics are once again noted as crucial, and working within a user&#039;s trust is another application of &#039;&#039;predictability&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
===Elements and goals of a guideline set===&lt;br /&gt;
Myriad rulesets exist, but variation becomes scarce: it indeed seems possible to parse these common rulesets into overarching principles that can be converted to or associated with quantifiable cognitive properties.  For example, it is likely that &#039;&#039;simplicity&#039;&#039; has an analogue in the short-term memory retention or visual retention of the user, vis-à-vis the rule of [http://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=j5q0VvOGExYC&amp;amp;oi=fnd&amp;amp;pg=PA357&amp;amp;dq=seven+plus+or+minus+two&amp;amp;ots=prI3PKJBar&amp;amp;sig=vOZnqpnkXKGYWxK6_XlA4I_CRyI Seven, Plus or Minus Two].  &#039;&#039;Predictability&#039;&#039; likewise may have an analogue in Activity Theory, with regard to a user&#039;s perceptual expectations for a given action; &#039;&#039;uninterruptibility&#039;&#039; has implications in cognitive task-switching;&amp;lt;ref&amp;gt;[http://portal.acm.org/citation.cfm?id=985692.985715&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Czerwinski, Horvitz, and White. &amp;quot;A Diary Study of Task Switching and Interruptions.&amp;quot;  Proceedings of the SIGCHI conference on Human factors in computing systems, 2004.]&amp;lt;/ref&amp;gt; and so forth.&lt;br /&gt;
&lt;br /&gt;
Within the scope of this proposal, we aim to reduce and refine these philosophies found in seminal rulesets and identify their logical cognitive analogues.  By assigning a quantifiable taxonomy to these principles, we will be able to rank and weight them with regard to their real-world applicability, developing a set of &amp;quot;meta-guidelines&amp;quot; and rules for applying them to a given interface in an automated manner.  Combined with cognitive models and multi-modal HCI analysis, we seek to develop, in parallel with these guidelines, the interface evaluation system responsible for their application.&lt;br /&gt;
&lt;br /&gt;
== Perception and Action (in progress) ==&lt;br /&gt;
&lt;br /&gt;
*Information Processing Approach&lt;br /&gt;
**Advantages&lt;br /&gt;
***Formalism eases translation of theory into scripting language&lt;br /&gt;
**Disadvantages&lt;br /&gt;
***Assumes symbolic representation&lt;br /&gt;
&lt;br /&gt;
*Ecological (Gibsonian) Approach&lt;br /&gt;
**Advantages&lt;br /&gt;
***Emphasis on bodily and environmental constraints&lt;br /&gt;
**Disadvantages&lt;br /&gt;
***Lack of formalism hinders translation of theory into scripting language&lt;br /&gt;
&lt;br /&gt;
The contributions section has been moved to a [[CS295J/Final contributions|standalone page]].&lt;br /&gt;
&lt;br /&gt;
= Preliminary Results =&lt;br /&gt;
&lt;br /&gt;
===Workflow, Multi-tasking, and Interruption===&lt;br /&gt;
&lt;br /&gt;
====I.  &#039;&#039;&#039;Goals&#039;&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
The goals of the preliminary work are to gain qualitative insight into how information workers practice metawork, and to determine whether people might be better supported by software that facilitates metawork and interruptions.  Thus, the preliminary work should investigate and demonstrate the need for, and impact of, the project&#039;s core goals.&lt;br /&gt;
&lt;br /&gt;
====II.  &#039;&#039;&#039;Methodology&#039;&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
Seven information workers, ages 20-38 (5 male, 2 female), were interviewed to determine which methods they use to &amp;quot;stay organized&amp;quot;.  An initial list of metawork strategies was established from two pilot interviews, and then a final list was compiled.  Participants then responded to a series of 17 questions designed to gain insight into their metawork strategies and process.  In addition, verbal interviews were conducted to get additional open-ended feedback.&lt;br /&gt;
&lt;br /&gt;
====III.  &#039;&#039;&#039;Final Results&#039;&#039;&#039;====&lt;br /&gt;
A histogram of methods people use to &amp;quot;stay organized&amp;quot; in terms of tracking things they need to do (TODOs), appointments and meetings, etc. is shown in the figure below.&lt;br /&gt;
&lt;br /&gt;
[[Image:AcbGraph.jpg]]&lt;br /&gt;
&lt;br /&gt;
In addition to these methods, participants also used a number of other methods, including:&lt;br /&gt;
&lt;br /&gt;
* iCal&lt;br /&gt;
* Notes written in xterms&lt;br /&gt;
* &amp;quot;Inbox zero&amp;quot; method of email organization&lt;br /&gt;
* iGoogle Notepad (for tasks)&lt;br /&gt;
* Tag emails as &amp;quot;TODO&amp;quot;, &amp;quot;Important&amp;quot;, etc.&lt;br /&gt;
* Things (Organizer Software)&lt;br /&gt;
* Physical items placed to &amp;quot;remind me of things&amp;quot;&lt;br /&gt;
* Sometimes arranging windows on the desktop&lt;br /&gt;
* Keeping browser tabs open&lt;br /&gt;
* Bookmarking web pages&lt;br /&gt;
* Keeping programs/files open, scrolled to certain locations, sometimes with items selected&lt;br /&gt;
&lt;br /&gt;
In addition, three participants said that when interrupted they &amp;quot;rarely&amp;quot; or &amp;quot;very rarely&amp;quot; were able to resume the task they were working on prior to the interruption.  Three of the participants said that they would not actively recommend their metawork strategies for other people, and two said that staying organized was &amp;quot;difficult&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Four participants were neutral to the idea of new tools to help them stay organized and three said that they would like to have such a tool/tools.&lt;br /&gt;
&lt;br /&gt;
====IV.  &#039;&#039;&#039;Discussion&#039;&#039;&#039;====&lt;br /&gt;
These results quantitatively support our hypothesis that there is no clearly dominant set of metawork strategies employed by information workers.  This highly fragmented landscape is surprising, even though most information workers work in a similar environment - at a desk, on the phone, in meetings - and with the same types of tools - computers, pens, paper, etc.  We believe this suggests that there are complex tradeoffs between these methods and that no single method is sufficient.  We therefore believe that users would be better supported by a new set of software-based metawork tools.&lt;br /&gt;
&lt;br /&gt;
===Causal perception of interfaces===&lt;br /&gt;
Owner: [[Adam]]&lt;br /&gt;
&lt;br /&gt;
====I.  &#039;&#039;&#039;Goals&#039;&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
The goals of the preliminary work are to demonstrate the importance of the principles of causal perception to how efficiently people learn to use a new interface. Causal reasoning is a fast-growing field in cognitive psychology which has demonstrated that much of how people perceive and understand the world is influenced by the causal relations they perceive. These preliminary results demonstrate that a novel human-computer interface is easier to learn when it can more naturally be understood in terms of causes (control elements) having effects (upon data elements). The demonstration focuses on the principle of causal order, that causes always precede their effects. While the issue of order (termed noun-verb or action-object order) has been addressed in the HCI literature (e.g., Shneiderman), it is commonly the opposite order that is championed, because choosing an object before an action limits the number of relevant actions. This project will demonstrate that, when all else is equal, the action-object order is easier for users to learn, presumably because it accords with causal order. The ultimate goal is to measure the adherence of interfaces to this and other principles of causal perception and inference.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====II.  &#039;&#039;&#039;Methodology&#039;&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
For the purpose of this demonstration, I created a game in which most objects are both controls and data. Each object has intrinsic properties which can be transferred to other objects. These properties can be modified by other objects&#039; intrinsic properties. For example, the blue object can color other objects blue, but if it has been modified by the gradient object, it colors other objects gradient blue. The goal is to create an object with a certain combination of properties. The game can be seen [http://www.cog.brown.edu:16080/~adarlow/HCI/ here]. There were two conditions, one consistent with causal order and one inconsistent. In the consistent condition, the participant had to click object A then object B in order to apply object A&#039;s property to object B. In the inconsistent condition, the order was reversed. If participants in the consistent condition solve the game faster than those in the inconsistent condition, this is taken as evidence that causal interpretation helps users learn novel interfaces.&lt;br /&gt;
&lt;br /&gt;
Nine students participated by playing the game and reporting the time and number of clicks it took them to complete the game. They were assigned randomly to the two conditions. Unfortunately, the random assignment put 6 participants in the inconsistent condition and only 3 in the consistent condition.&lt;br /&gt;
&lt;br /&gt;
====III.  &#039;&#039;&#039;Results&#039;&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
Despite the small sample size, the differences between the two groups were large enough to be statistically significant. Participants in the consistent condition completed the game in less time (M=2.22 minutes, SD=0.54) and fewer clicks (M=61 clicks, SD=15.4) than participants in the inconsistent condition (M=6.12 minutes, SD=2.28; M=140.7 clicks, SD=65.3).&lt;br /&gt;
&lt;br /&gt;
====IV.  &#039;&#039;&#039;Discussion&#039;&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
These results support our hypothesis that an interface is easier to learn and use when it satisfies people&#039;s expectations of causal systems. Participants who had to use a novel interface  took nearly three times as long to complete a task when the interface dynamics defied natural causal order. These results should be expanded to other causal principles and other interfaces. &lt;br /&gt;
Worth noting about this interface is the lack of delineation between commands and data: each object serves as both. This is not typical of current interfaces, but it could become more prevalent in the future. Modern interfaces give users more and more control over the interface, and as these manipulations become easier and more natural, they become a larger part of the typical workflow. Thus users will spend more time manipulating the interface, paving the way for meta-interface commands and tools. This progression seems even more natural in real-world simulation type environments like BumpTop, which attempt to capitalize on people&#039;s physical world intuitions. One real-world convention which these interfaces haven&#039;t yet adopted is that tools are objects, too. Just because we use a hammer to manipulate other objects doesn&#039;t mean we can&#039;t paint the hammer red. Identifying the causal principles that people use to understand and interact with the world will allow us to abstract away from the rigid adherence to real-world physics without losing the richness and intuition it provides.&lt;br /&gt;
&lt;br /&gt;
= [Criticisms] =&lt;br /&gt;
* Owner: [[User:Andrew Bragdon | Andrew Bragdon]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Any criticisms or questions we have regarding the proposal can go here.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;/div&gt;</summary>
		<author><name>Adam Darlow</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Final_contributions&amp;diff=3476</id>
		<title>CS295J/Final contributions</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Final_contributions&amp;diff=3476"/>
		<updated>2009-05-13T15:42:18Z</updated>

		<summary type="html">&lt;p&gt;Adam Darlow: added temporal patterns sub-project&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Specific Contributions =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[I have removed almost everything that does not describe itself as a contribution.  If your part is missing, a revision is in order, and you can find the former version on the page linked below (David)]&lt;br /&gt;
&lt;br /&gt;
[[CS295J/Thursday 2pm version of final contribution|Thursday 2pm version of final contribution]]&lt;br /&gt;
&lt;br /&gt;
== Workflow, Multi-tasking and Interruptions ==&lt;br /&gt;
* Owner: [[User:Andrew Bragdon | Andrew Bragdon]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Aims&#039;&#039;&#039;&lt;br /&gt;
* Develop a Theory of Multi-tasking Workflow and the Impact of Interruptions&lt;br /&gt;
* Build a Meta-work Assistance Tool&lt;br /&gt;
:# We will perform a series of ecologically-valid studies to compare user performance between a state of the art/popular task management system (control group) and our meta-work assistance tool (experimental group) &amp;lt;span style=&amp;quot;color: gray;&amp;quot;&amp;gt;[Didn&#039;t your study show that there is nothing close to a killer app in this department? - Steven]&amp;lt;/span&amp;gt; [By &amp;quot;state of the art&amp;quot; I mean a popular, well-recognized system (or systems), such as Outlook, that are used by large numbers of people.]&lt;br /&gt;
* Evaluate the Module-based Model of HCI by using it to predict the outcome of the above-mentioned evaluation; then compare with empirical results&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Contributions&#039;&#039;&#039;&lt;br /&gt;
* Theory of Multi-tasking Workflow and the Impact of Interruptions&lt;br /&gt;
:# We will undertake detailed studies to help understand the following questions:&lt;br /&gt;
::# How does the size of a user&#039;s working set impact interruption resumption time?&lt;br /&gt;
::# How does the size of a user&#039;s working set, when used for rapid multi-tasking, impact performance metrics?&lt;br /&gt;
::# How does a user interface which supports multiple simultaneous working sets benefit interruption resumption time?&lt;br /&gt;
:* Internal dependencies: none&lt;br /&gt;
* Empirical Evaluation of Meta-work Assistance Tool in an Ecologically-valid Context&lt;br /&gt;
:* Internal dependencies: Completion of a testable meta-work tool&lt;br /&gt;
* Baseline Comparison Between Module-based Model of HCI and Core Multi-tasking Study&lt;br /&gt;
:# We will compare the results of the above-mentioned study in multi-tasking against results predicted by the module-based model of HCI in this proposal; this will give us an important baseline comparison, particularly given that multi-tasking and interruption involve higher brain functioning and are therefore likely difficult to predict&lt;br /&gt;
:* Internal dependencies: Dependent on core study completion, as well as most of the rest of the proposal being completed to the point of being testable&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Demonstration (we might consider calling these deliverables)&#039;&#039;&#039;&lt;br /&gt;
* A predictive, quantitative model of controlled task performance and interruption resumption time for working sets of varying size&lt;br /&gt;
* An ecologically valid evaluation which shows that the completed meta-work assistance tool outperforms a state of the art system such as Outlook&lt;br /&gt;
* A comparison between the result predicted by the module-based model of HCI and the evaluation mentioned above; accompanied by a detailed analysis of factors contributing to the result&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;External Dependencies&#039;&#039;&#039;&lt;br /&gt;
* Dependent on most of the rest of the proposal being completed to the point of being usable to predict the outcome of the meta-work tool evaluation&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background and Related Work&#039;&#039;&#039;&lt;br /&gt;
* Salvucci et al.&#039;s integrated theory of the multi-tasking continuum unifies concurrent and sequential task performance&lt;br /&gt;
* Gloria Mark&#039;s research group has conducted a number of empirical studies into multi-tasking and interruption&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Preliminary Work&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* [[Study of Meta-work Tools and Strategies of 7 Information Workers]]&lt;br /&gt;
:* Results suggest there is not a single &amp;quot;ideal&amp;quot; meta-work tool today, motivating our proposal to build a new meta-work support tool based in part on a scientific theory of multi-tasking and interruptions.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Reviews&#039;&#039;&#039;&lt;br /&gt;
* Not in the expected format -- split into appropriate parts and label them as in the other sections (David)&lt;br /&gt;
&lt;br /&gt;
== Recording User-Interaction Primitives ==&lt;br /&gt;
&lt;br /&gt;
* Owner: [[User:Trevor O&#039;Brien | Trevor O&#039;Brien]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Aims&#039;&#039;&#039;&lt;br /&gt;
* Develop system for logging low-level user-interactions within existing applications.  By low-level interactions, we refer to those interactions for which a quantitative, predictive, GOMS-like model may be generated.&amp;lt;span style=&amp;quot;color: gray;&amp;quot;&amp;gt;[How do you propose to get around the problem Andrew brought up of accurately linking this low-level data to the interface/program&#039;s state at the moment it occurred? - Steven]&amp;lt;/span&amp;gt; &amp;lt;span style=&amp;quot;color: blue;&amp;quot;&amp;gt;[Low-level interactions will need to be logged along with a markup-language-style state description of the application itself.  This would either require access to the source code, or a detailed interface description from the interface designer that would allow for complete simulation of the application.]&amp;lt;/span&amp;gt;&lt;br /&gt;
* At the same scale of granularity, integrate explicit interface-interactions with multimodal-sensing data.  i.e. pupil-tracking, muscle-activity monitoring, auditory recognition, EEG data, and facial and posture-focused video. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Contributions&#039;&#039;&#039;&lt;br /&gt;
* Extensible, multi-modal, HCI framework for recording rich interaction-history data in existing applications.&amp;lt;span style=&amp;quot;color: gray;&amp;quot;&amp;gt;[Perhaps something should be said about sectioning this data off into modules - i.e. compiling all e-mail writing recordings to analyze separately from e-mail sorting recordings in order to facilitate analysis. - Steven]&amp;lt;/span&amp;gt; &amp;lt;span style=&amp;quot;color: blue;&amp;quot;&amp;gt;[I believe this is addressed in the &amp;quot;semantic-level chunking&amp;quot; section, given below.]&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Demonstration&#039;&#039;&#039;&lt;br /&gt;
* Techniques for pupil-tracking, auditory-recognition, and muscle-activity monitoring will be evaluated with respect to accuracy, sampling rate, and computational cost in a series of quantitative studies.  &amp;lt;span style=&amp;quot;color: gray;&amp;quot;&amp;gt;[Is there an explicit, quantifiable demonstration you can propose?  Such as some kind of conclusive comparison between automated capture techniques and manual techniques?  Or maybe a demonstration of some kind of methodical, but still low-level, procedure for clustering related interaction primitives? [[User:E J Kalafarski|E J Kalafarski]] 15:02, 29 April 2009 (UTC)]&amp;lt;/span&amp;gt; &amp;lt;span style=&amp;quot;color: blue;&amp;quot;&amp;gt;[Things like pupil-tracking can be quantitatively evaluated through user tests.  For instance, we could instruct users to shift the focus of their eyes from one on-screen object to another as quickly as possible some fixed number of times. We could then compare the results from our system to determine accuracy and speed, and by varying the size and distance of the target objects, we could obtain richer data for analysis.  With respect to clustering low-level interactions, I believe that issue is addressed in the following section.]&amp;lt;/span&amp;gt;&amp;lt;span style=&amp;quot;color: gray;&amp;quot;&amp;gt;[Should this take into account the need to match recordings to interface state? - Steven]&amp;lt;/span&amp;gt; &amp;lt;span style=&amp;quot;color: blue;&amp;quot;&amp;gt;[Possibly, if we go down the simulation route.  If we&#039;re actually logging all of the events from within the source code itself, I believe the demonstration of syncing low-level interaction with application-state is self-evident.]&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Dependencies&#039;&#039;&#039;&lt;br /&gt;
* Hardware for pupil-tracking and muscle-activity monitoring.  Commercial software packages for operating such hardware may also be required.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background and Related Work&#039;&#039;&#039;&lt;br /&gt;
* The utility of interaction histories with respect to assessing interface design has been demonstrated in &amp;lt;ref name = bob&amp;gt; [http://www.cs.brown.edu/people/trevor/Papers/Heer-2008-GraphicalHistories.pdf Graphical Histories for Visualization: Supporting Analysis, Communication, and Evaluation] (InfoVis 2008) &amp;lt;/ref&amp;gt;. &amp;lt;span style=&amp;quot;color: gray;&amp;quot;&amp;gt;Can you briefly summarize the utility with one sentence? Or is that what the next bullet does? (Jon)&amp;lt;/span&amp;gt; &amp;lt;span style=&amp;quot;color: blue;&amp;quot;&amp;gt;[In the instance of this work, utility was demonstrated with respect to automatically generating presentations from data explorations, and providing quantitative interface-usage analysis that aided future decisions for interface designers.  i.e., this operation was performed a lot, and this one was not; therefore we should focus on making the first more efficient, perhaps at the expense of the second.  The results of the work are fairly basic, but they do demonstrate that logging histories is a useful practice for a number of reasons.]&amp;lt;/span&amp;gt;&lt;br /&gt;
* In addition, data management histories have been shown effective in the visualization community in &amp;lt;ref&amp;gt; [http://www.cs.brown.edu/people/trevor/Papers/Callahan-2006-MED.pdf Callahan-2006-MED] &amp;lt;/ref&amp;gt;  &amp;lt;ref&amp;gt; [http://www.cs.brown.edu/people/trevor/Papers/Callahan-2006-VVM.pdf Callahan-2006-VVM] &amp;lt;/ref&amp;gt; &amp;lt;ref&amp;gt; [http://www.cs.brown.edu/people/trevor/Papers/Bavoil-2005-VEI.pdf Bavoil-2005-VEI]&amp;lt;/ref&amp;gt;, providing visualizations by analogy &amp;lt;ref&amp;gt; [http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;amp;arnumber=4376187&amp;amp;isnumber=4376125 Querying and Creating Visualizations by Analogy] &amp;lt;/ref&amp;gt; and offering automated suggestions &amp;lt;ref&amp;gt; [http://www.cs.utah.edu/~juliana/pub/tvcg-recommendation2008.pdf VisComplete: Automating Suggestions from Visualization Pipelines] &amp;lt;/ref&amp;gt;, which we expect to generalize to user interaction history.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;review&#039;&#039;&#039;&lt;br /&gt;
* This is recording everything possible during an interaction. It is necessary for our project. (Gideon)&lt;br /&gt;
* Two thumbs up. - Steven&lt;br /&gt;
&lt;br /&gt;
== Semantic-level Interaction Chunking ==&lt;br /&gt;
&lt;br /&gt;
* Owner: [[User:Trevor O&#039;Brien | Trevor O&#039;Brien]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Aims&#039;&#039;&#039;&lt;br /&gt;
* Develop techniques for &#039;&#039;chunking&#039;&#039; low-level interaction primitives into &#039;&#039;semantic-level&#039;&#039; interactions, given an application&#039;s functionality and data-context.  (And de-chunking? Invertible mapping needed?)&lt;br /&gt;
* Perform user-study evaluation to validate chunking methods.&lt;br /&gt;
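As an illustration of the chunking aim, a minimal sketch (the rule set and event names are invented for illustration) of applying designer-supplied rules that group runs of low-level primitives into semantic-level interactions:&lt;br /&gt;

```python
# Designer-supplied chunking rules (hypothetical): a run of low-level
# primitives maps onto one semantic-level interaction.
RULES = {
    ("mousedown", "mousemove", "mouseup"): "drag",
    ("mousedown", "mouseup"): "click",
}

def chunk(primitives):
    """Greedily match the longest rule at each position; primitives
    that match no rule pass through unchanged."""
    chunks, i = [], 0
    while i < len(primitives):
        for pattern in sorted(RULES, key=len, reverse=True):
            if tuple(primitives[i:i + len(pattern)]) == pattern:
                chunks.append(RULES[pattern])
                i += len(pattern)
                break
        else:
            chunks.append(primitives[i])
            i += 1
    return chunks

chunk(["mousedown", "mousemove", "mouseup", "mousedown", "mouseup"])
# -> ["drag", "click"]
```

A real rule would also consult the logged application state, not just the event sequence.&lt;br /&gt;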
&lt;br /&gt;
&#039;&#039;&#039;Contributions&#039;&#039;&#039;&lt;br /&gt;
* Design and user-study evaluation of semantic-chunking techniques for interaction.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Demonstration&#039;&#039;&#039;&lt;br /&gt;
* These techniques will be validated quantitatively with respect to efficiency and user error rate on a set of timed-tasks over a series of case studies.  In addition, qualitative feedback on user satisfaction will be incorporated in the evaluation of our methods.  &amp;lt;span style=&amp;quot;color: gray;&amp;quot;&amp;gt;[As above, how will this be validated quantitatively?  Comparing two methodologies for manual chunking?  What would these differing methodologies be? [[User:E J Kalafarski|E J Kalafarski]] 15:06, 29 April 2009 (UTC)]&amp;lt;/span&amp;gt; &amp;lt;span style=&amp;quot;color: blue;&amp;quot;&amp;gt;[I don&#039;t intend for these techniques to be performed manually.  Application-specific rules for chunking will be created manually, most likely by the interface designer, but those rules will be applied automatically to filter low-level primitives and application-states into semantic-level interactions.  The success of such methods could be evaluated by having automatically-generated semantic-level histories compared to those manually created by a user.  Of course, there&#039;s no guarantee that what a user manually creates is any good, so I think it would be more effective to have a quantitative study of user performance in which timed-tasks are completed with and without the ability to review and revisit semantic-level histories.  The test without any history at all would serve as a baseline, and then performance with histories created using different chunking rules would speak to the effectiveness of different strategies for chunking.]&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Dependencies&#039;&#039;&#039;&lt;br /&gt;
* Software framework for collecting user interactions.&amp;lt;span style=&amp;quot;color: gray;&amp;quot;&amp;gt;[I&#039;d like to see this fleshed out in a theoretical manner. - Steven]&amp;lt;/span&amp;gt; &amp;lt;span style=&amp;quot;color: blue;&amp;quot;&amp;gt;[I believe the previous section speaks to this question, but perhaps not with enough detail.  It seems appropriate to have this discussion in class.]&amp;lt;/span&amp;gt;&lt;br /&gt;
* Formal description of interface functionality.&lt;br /&gt;
* Description of data objects that can be manipulated through interaction.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;review&#039;&#039;&#039;&lt;br /&gt;
* With what level of accuracy can this be performed? I like the idea; it&#039;s worthy of a nice project in and of itself. (Gideon)  &amp;lt;span style=&amp;quot;color: blue;&amp;quot;&amp;gt;[This is a great question.  It really depends on what&#039;s meant by accuracy.  If the only level of semantics we want is that defined by the interface-designer, then it&#039;s not really an interesting question.  As we begin to talk about semantics with respect to what the user &#039;&#039;perceives&#039;&#039; or what the user&#039;s &#039;&#039;intent&#039;&#039; might be, it becomes more interesting, and the term accuracy becomes more ambiguous.  I suppose accuracy could be tested by having users verbally speak their intent as they use an interface, and compare their descriptions with the output of the system, but I think that might be a messy study.  Interesting to think about.]&amp;lt;/span&amp;gt;&lt;br /&gt;
&amp;lt;span style=&amp;quot;color: gray;&amp;quot;&amp;gt;Can you include something about the broader impact of this contribution? (Jon)&amp;lt;/span&amp;gt; &amp;lt;span style=&amp;quot;color: blue;&amp;quot;&amp;gt;[I think the broader impact here is offering interface designers tools to analyze interaction at a high enough level that they can begin to observe user&#039;s intent with an interface, and potentially infer overarching user-goals.  If these tools prove effective, and could be implemented in real-time, there&#039;s a wealth of possibility in adaptive interfaces.  In addition, the semantic-level interaction history construct has the potential to provide meaningful training data for machine-learning algorithms that could probabilistically model user-action and be used to offer suggested interactions.]&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Reconciling Usability Heuristics with Cognitive Theory ==&lt;br /&gt;
* Owner: [[User:E J Kalafarski|E J Kalafarski]] 14:56, 28 April 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Contribution&#039;&#039;&#039;: A framework for the unification of established heuristic usability guidelines and accepted cognitive principles.  We propose a matrix pairing cognitive principles and heuristic guidelines into specific interface improvement recommendations.&lt;br /&gt;
&lt;br /&gt;
[David: I&#039;m having trouble parsing this contribution.  How do you weight a framework?  Also, I&#039;d like to have a little more sense of how this might fit into the bigger picture.  The assignment didn&#039;t ask for that, but is there some way to provide some of that context?]&lt;br /&gt;
&amp;lt;span style=&amp;quot;color: gray;&amp;quot;&amp;gt;[I agree that the wording is a little confusing. The framework is not weighted, but rather it weights the different guidelines/principles. It might also be worth explaining how this is a useful contribution, e.g. does it allow for more accurate interface evaluation? -Eric]&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color: blue;&amp;quot;&amp;gt;[Language cleaned up.  While I still think it is important to give more &amp;quot;weight&amp;quot; to cognitive/heuristic principle pairs that agree more strongly than others and output more useful recommendations, I believe I&#039;ve pitched it wrong and made it too prominent an attribute of this module.  I&#039;ve adjusted. [[User:E J Kalafarski|E J Kalafarski]] 15:34, 1 May 2009 (UTC)]&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Demonstration&#039;&#039;&#039;: Three groups of experts anecdotally apply cognitive principles, heuristic usability principles, and a combination of the two.&lt;br /&gt;
* A human &amp;quot;cognition expert,&amp;quot; given a constrained, limited-functionality interface, develops an independent evaluative value for each interface element based on accepted cognitive principles.&lt;br /&gt;
* A human &amp;quot;usability expert&amp;quot; develops an independent evaluative value for each interface element based on accepted heuristic guidelines.&lt;br /&gt;
* A third expert applies several recommendations from a matrix of cognitive/heuristic principle pairs.&lt;br /&gt;
* User testing demonstrates the assumed efficacy and applicability of the principle pairs versus separate application of cognitive and heuristic guidelines.&lt;br /&gt;
* The matrix can be improved incrementally by applying &amp;quot;weight&amp;quot; to each cell, increasing its influence on the final output recommendations, based on the measurable success of this study.&lt;br /&gt;
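One way the proposed matrix could be realized in code, including the incremental reweighting step (a sketch only; the principle names, weights, and recommendation text are made up for illustration):&lt;br /&gt;

```python
# Hypothetical matrix: each cell pairs a cognitive principle with a
# heuristic guideline and carries a weight plus a recommendation.
matrix = {
    ("fitts_law", "minimize_pointing"):
        {"weight": 0.9, "rec": "Enlarge and co-locate frequent targets"},
    ("recognition_over_recall", "visibility_of_status"):
        {"weight": 0.6, "rec": "Expose the current mode in the toolbar"},
}

def recommendations(matrix, threshold=0.5):
    """Emit recommendations from cells whose weight clears the
    threshold, strongest pairs first."""
    cells = [(cell["weight"], cell["rec"]) for cell in matrix.values()
             if cell["weight"] >= threshold]
    return [rec for _, rec in sorted(cells, reverse=True)]

def reweight(matrix, pair, observed_success):
    """Incremental improvement: nudge a cell's weight toward the
    measured success of its recommendations in a study."""
    cell = matrix[pair]
    cell["weight"] = 0.5 * cell["weight"] + 0.5 * observed_success
```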
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color: gray;&amp;quot;&amp;gt;[There is still the question of how these weights are determined. Is this the job of the third expert? Or is the third expert given these weights, and he just determines how to apply them? -Eric]&amp;lt;/span&amp;gt; &amp;lt;span style=&amp;quot;color: blue;&amp;quot;&amp;gt;[Added more explanatory description of methodology as a bullet under Demo.  [[User:E J Kalafarski|E J Kalafarski]] 15:49, 1 May 2009 (UTC)]&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Dependency&#039;&#039;&#039;: A set of established cognitive principles, selected with an eye toward heuristic analogues.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Dependency&#039;&#039;&#039;: A set of established heuristic design guidelines, selected with an eye toward cognitive analogues.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;review&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* This seems like the 2d matrix? Is this implemented as a module? (Gideon) &amp;lt;span style=&amp;quot;color: blue;&amp;quot;&amp;gt;[Yes. [[User:E J Kalafarski|E J Kalafarski]] 15:49, 1 May 2009 (UTC)]&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Gideon&#039;s comment brings up an important point - in what form will the weights be presented? - Steven &amp;lt;span style=&amp;quot;color: blue;&amp;quot;&amp;gt;[Values which contribute to the &amp;quot;influence&amp;quot; of each recommendation. [[User:E J Kalafarski|E J Kalafarski]] 15:49, 1 May 2009 (UTC)]&amp;lt;/span&amp;gt;&lt;br /&gt;
* Could be more explicit about whether the experts are simulated agents or real people (Jon)&lt;br /&gt;
&lt;br /&gt;
== Evaluation for Recommendation and Incremental Improvement ==&lt;br /&gt;
* Owner: [[User:E J Kalafarski|E J Kalafarski]] 14:56, 28 April 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Contributions&#039;&#039;&#039;: Using a set of interface evaluation modules for analysis, we demonstrate the efficiency and accuracy of a method for aggregating their interface suggestions.  In this preliminary demonstration, we limit the scope to a strictly-defined interface and 2–3 CPM principles (e.g. Fitts&#039; Law and Affordance).  The method generates unified recommendations that are applied to the interface for incremental improvement in usability.&lt;br /&gt;
&lt;br /&gt;
[David: I&#039;m a little lost.  Can you make this long sentence into a couple shorter simpler ones?  I think I can imagine what you are getting at, but I&#039;m not sure.]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Demonstration&#039;&#039;&#039;: A limited but strictly-controlled study to demonstrate the efficacy and efficiency of automated aggregation of interface evaluations versus independent analysis.  The study would show not a total replacement, but a gain in evaluation speed weighed against the loss of control and features.&lt;br /&gt;
* Given a carefully-constrained interface, perhaps with as few as two buttons and a minimalist feature set, an expert interface designer is given the individual results of several basic evaluation modules and makes recommendations and suggestions for the design of the interface.&lt;br /&gt;
* An aggregation meta-module conducts a similar survey of the module outputs, producing recommendations and suggestions for improving the given interface.&lt;br /&gt;
* A separate, independent body of experts then implements the two sets of suggestions and performs a user study on the resultant interfaces, analyzing the usability change and comparing it to the time and resources committed by the evaluation expert and the aggregation module, respectively.&lt;br /&gt;
&lt;br /&gt;
[David: nice demonstration!]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Dependency&#039;&#039;&#039;: Module for the analysis of Fitts&#039; Law as it applies to the individual elements of a given interface.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Dependency&#039;&#039;&#039;: Module for the analysis of Affordance as it applies to the individual elements of a given interface.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Dependency&#039;&#039;&#039;: Framework for the input of a given interface to individual modules, and standardization of the output of said modules.&lt;br /&gt;
&lt;br /&gt;
[David: I think this also has, as a dependency, some kind of framework for the modules.  &amp;quot;narrow but comprehensive&amp;quot; sounds challenging.  ]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;review&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* I see this as being the &amp;quot;black box&amp;quot; for our architecture? If so, good. Wouldn&#039;t the dependencies be any/all modules? (Gideon) &amp;lt;span style=&amp;quot;color: blue;&amp;quot;&amp;gt;[I limited the scope of this initial demonstration. [[User:E J Kalafarski|E J Kalafarski]] 15:46, 1 May 2009 (UTC)]&amp;lt;/span&amp;gt;&lt;br /&gt;
* &amp;lt;span style=&amp;quot;color: gray;&amp;quot;&amp;gt;[I think this is the same as my contribution, and that we should merge them together. You&#039;ve covered some things that I didn&#039;t, such as a more detailed demonstration of how our framework is better than a human interface designer&#039;s evaluation. You also gave a demonstration of comparing recommendations, while I only covered evaluations. -Eric]&amp;lt;/span&amp;gt;&lt;br /&gt;
* I think this contribution and mine (the CPM-GOMS improvement) will need to be reconciled into a single unit. - Steven&lt;br /&gt;
&amp;lt;span style=&amp;quot;color: blue;&amp;quot;&amp;gt;[Is it feasible to combine all three of these contributions? [[User:E J Kalafarski|E J Kalafarski]] 15:47, 1 May 2009 (UTC)]&amp;lt;/span&amp;gt;&lt;br /&gt;
* Interesting to note that this could describe the overarching architecture but it depends upon components of the architecture to work.  Would it depend on any other modules? (Jon) &amp;lt;span style=&amp;quot;color: blue;&amp;quot;&amp;gt;[Yes, several modules are listed as dependencies. [[User:E J Kalafarski|E J Kalafarski]] 15:46, 1 May 2009 (UTC)]&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Evaluation Metrics ==&lt;br /&gt;
Owner: [[User:Gideon Goldin | Gideon Goldin]]&lt;br /&gt;
&lt;br /&gt;
:*Architecture Outputs&lt;br /&gt;
::* Time (time to complete task)&lt;br /&gt;
:::# In user studies, this module predicted time-to-completion more accurately than the standard CPM-GOMS model.&lt;br /&gt;
:::* Dependencies&lt;br /&gt;
:::# CPM-GOMS Module with proposed cognitive load extension&lt;br /&gt;
::* Cognitive Load&lt;br /&gt;
:::# User studies were conducted during which participants&#039; brains were scanned with EEG to measure activation levels.&lt;br /&gt;
:::# Self-report measures were obtained describing the amount of concentration/effort required on the part of the user.&lt;br /&gt;
:::# The literature base provided an accurate assessment of cognitive load in the tasks involved.&lt;br /&gt;
:::# Facial recognition allows us to approximate spans of high load (e.g., furrowed brow)&lt;br /&gt;
:::* Dependencies&lt;br /&gt;
::::# CPM-GOMS Module with cognitive load extension&lt;br /&gt;
::::# Facial Gesture Recognition Module&lt;br /&gt;
::::# Self-Report Measure&lt;br /&gt;
::::# Database of load per cognitive tasks&lt;br /&gt;
::* Frustration&lt;br /&gt;
:::# Frustration is measured by asking participants to respond on the self-report scale.&lt;br /&gt;
:::# Facial recognition allows us to approximate spans of frustration (e.g., frowning, sighing, furrowed brow)&lt;br /&gt;
:::# Galvanic Skin Response allows detection of emotional arousal&lt;br /&gt;
:::# Heart rate monitoring allows detection of emotional arousal&lt;br /&gt;
:::* Dependencies&lt;br /&gt;
::::# Facial Gesture Recognition Module&lt;br /&gt;
::::# Galvanic Skin Response input&lt;br /&gt;
::::# Heart rate response input&lt;br /&gt;
::::# Interface Efficiency Module&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;review&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Parallel Framework for Evaluation Modules ==&lt;br /&gt;
* Owner: [[User:Adam Darlow | Adam Darlow]], [[User:Eric Sodomka | Eric Sodomka]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section will describe in more detail the inputs, outputs and architecture that were presented in the introduction.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Contribution:&#039;&#039;&#039; Create a framework that provides better interface evaluations than currently existing techniques, and a module weighting system that provides better evaluations than any of its modules taken in isolation.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Demonstration:&#039;&#039;&#039; Run a series of user studies and compare users&#039; performance to expected performance, as given by the following interface evaluation methods:&lt;br /&gt;
# Traditional, manual interface evaluation &lt;br /&gt;
#* As a baseline.&lt;br /&gt;
# Using our system with a single module&lt;br /&gt;
#* &amp;quot;Are any of our individual modules better than currently existing methods of interface evaluation?&amp;quot;&lt;br /&gt;
# Using our system with multiple modules, but have aggregator give a fixed, equal weighting to each module&lt;br /&gt;
#* As a baseline for our aggregator: we want to show the value of adding dynamic weighting.&lt;br /&gt;
# Using our system with multiple modules, and allow the aggregator to adjust weightings for each module, but have each module get a single weighting for all dimensions (time, fatigue, etc.)&lt;br /&gt;
#* For validating the use of a dynamic weighting system.&lt;br /&gt;
# Using our system with multiple modules, and allow the aggregator to adjust weightings for each module, and allow the module to give different weightings to every dimension of the module (time, fatigue, etc.) &lt;br /&gt;
#* For validating the use of weighting across multiple utility dimensions.&lt;br /&gt;
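Demonstration variant 5 above can be sketched as follows (the module names, scores, and weights are invented for illustration):&lt;br /&gt;

```python
# Hypothetical sketch of the aggregator: each module reports a score
# per utility dimension (time, fatigue, ...), and the aggregator
# combines them with per-module, per-dimension weights.
module_scores = {
    "fitts":       {"time": 0.8, "fatigue": 0.4},
    "affordances": {"time": 0.5, "fatigue": 0.7},
}
weights = {
    "fitts":       {"time": 0.9, "fatigue": 0.2},
    "affordances": {"time": 0.3, "fatigue": 0.8},
}

def aggregate(scores, weights):
    """Weighted average of module scores, computed per dimension."""
    dims = {d for per_module in scores.values() for d in per_module}
    combined = {}
    for d in dims:
        num = sum(scores[m][d] * weights[m][d] for m in scores)
        den = sum(weights[m][d] for m in scores)
        combined[d] = num / den
    return combined
```

Variants 3 and 4 fall out as special cases: equal weights everywhere, or one weight per module copied across all dimensions.&lt;br /&gt;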
&lt;br /&gt;
&#039;&#039;&#039;Dependencies:&#039;&#039;&#039; Requires a good set of modules to plug into the framework.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;review&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* What exactly are the differences between this and EJ&#039;s earlier contribution? I think that if they are the same, this one is a bit more clear, IMHO. (Gideon)&lt;br /&gt;
&amp;lt;span style=&amp;quot;color: gray;&amp;quot;&amp;gt;[I have a similar question to Gideon&#039;s.  My hunch is that this defines a sort of &amp;quot;namespace&amp;quot; for the modules themselves, while the other contribution (the one you guys assigned to me) is more explicitly the aggregator of these outputs.  Correct me if I&#039;m wrong.  But if that&#039;s the case, the two contributions might need to be validated similarly/together. [[User:E J Kalafarski|E J Kalafarski]] 15:17, 29 April 2009 (UTC)]&amp;lt;/span&amp;gt; &amp;lt;span style=&amp;quot;color: blue;&amp;quot;&amp;gt;[I agree that this is pretty close to E.J.&#039;s contribution, and that maybe they should be merged. E.J. covered some things that I didn&#039;t, such as a more detailed demonstration of how our framework is better than a human interface designer&#039;s evaluation, and his gave a demonstration of comparing recommendations, while I only covered evaluations.]&amp;lt;/span&amp;gt;&lt;br /&gt;
* I feel like this structure&#039;s clarity may come at the expense of specificity; I&#039;d like to know more than that it&#039;s parallel. - Steven&lt;br /&gt;
* Very thorough demonstration(s)! (Jon)&lt;br /&gt;
* I think that the contribution is much more clearly stated here -- I would argue for integrating them together. (Andrew)&lt;br /&gt;
&lt;br /&gt;
== Anti-Pattern Recognition &amp;amp; Resolution ==&lt;br /&gt;
Owner: [[User:Ian Spector | Ian Spector]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Interface design patterns are defined as reusable elements which provide solutions to common problems. For instance, we expect that an arrow in the top left corner of a window which points to the left, when clicked upon, will take the user to a previous screen. Furthermore, we expect that buttons in the top left corner of a window will in some way relate to navigation. An anti-pattern is a design pattern which breaks standard design convention, creating more problems than it solves. An example of an anti-pattern would be replacing the &#039;back&#039; button in your web browser with a &#039;view history&#039; button.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Aims:&#039;&#039;&#039;&lt;br /&gt;
* Develop a framework for analyzing an interface and recognizing violations of established patterns of interaction&lt;br /&gt;
* Develop an algorithm to quantitatively rate the number of anti-pattern violations&lt;br /&gt;
* Integrate cognitive theory to provide suitable alternatives&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Contributions:&#039;&#039;&#039;&lt;br /&gt;
* Creation of established patterns of interaction as determined by user studies&lt;br /&gt;
* Creation of an algorithm to detect and match library elements to interface elements&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Demonstration:&#039;&#039;&#039;&lt;br /&gt;
* The viability of this module will be tested by providing it with a poorly designed interface. &lt;br /&gt;
** The design would be analyzed and tested by a usability expert, by typical users, and by the module itself; each would eventually provide a value indicating how strongly the interface exhibits anti-patterns.&lt;br /&gt;
** Based on the results, we would expect the expert and module reports to be similar&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Dependencies:&#039;&#039;&#039;&lt;br /&gt;
* Library of standard design patterns&lt;br /&gt;
* Affordances module&lt;br /&gt;
* Custom design pattern libraries (optional)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Inputs:&#039;&#039;&#039;&lt;br /&gt;
* Formal interface description&lt;br /&gt;
* Tasks which can be performed within the interface&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Outputs:&#039;&#039;&#039;&lt;br /&gt;
* Identification of interface elements whose placement or function are contrary to the pattern library&lt;br /&gt;
* Recommendations for alternative functionality or placement of such elements.&lt;br /&gt;
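A minimal sketch of the detection step (the library contents, element fields, and the back-button example from the introduction are illustrative, not a proposed schema):&lt;br /&gt;

```python
# Hypothetical pattern library: (element role, expected region) maps
# to the action a user would conventionally expect.
PATTERN_LIBRARY = {
    ("back_button", "top_left"): "navigate_back",
}

def find_anti_patterns(elements):
    """elements: list of dicts with 'role', 'region', and 'action'.
    Returns elements whose action contradicts the pattern library,
    each with a note about what was expected."""
    violations = []
    for el in elements:
        expected = PATTERN_LIBRARY.get((el["role"], el["region"]))
        if expected is not None and el["action"] != expected:
            violations.append((el, f"expected {expected}"))
    return violations

# The introduction's example: a 'view history' action where the
# conventional 'back' behavior is expected.
ui = [{"role": "back_button", "region": "top_left",
       "action": "view_history"}]
find_anti_patterns(ui)
```

The quantitative rating in the Aims could then be a function of the number and weight of violations returned.&lt;br /&gt;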
&lt;br /&gt;
== Sample Modules ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Fitts&#039;s Law ===&lt;br /&gt;
* Owner: [[User:Jon Ericson | Jon Ericson]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Aims&#039;&#039;&#039;&lt;br /&gt;
*Provide an estimate of the time required to complete various tasks that have been decomposed into formalized sequences of interactions with interface elements, and provide evaluations and recommendations for optimizing the time required to complete those tasks using the interface.&lt;br /&gt;
*Integrate this module into our novel cognitive framework for interface evaluation.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Contributions&#039;&#039;&#039;&lt;br /&gt;
* [Not sure here.  Is this really novel?]  &amp;lt;span style=&amp;quot;color: gray;&amp;quot;&amp;gt;[I like it.  I think it&#039;s necessary.  I think we demonstrated in class that this has not been formalized and standardized already. [[User:E J Kalafarski|E J Kalafarski]] 15:22, 29 April 2009 (UTC)]&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Demonstrations&#039;&#039;&#039;&lt;br /&gt;
*We will demonstrate the feasibility of this module by generating a formal description of an interface for scientific visualization and a formal description of a task to perform with the interface, and by correlating the estimated time to complete the task based on Fitts&#039;s Law with the actual time required based on several user traces.  &amp;lt;span style=&amp;quot;color: gray;&amp;quot;&amp;gt;[Check. [[User:E J Kalafarski|E J Kalafarski]] 15:22, 29 April 2009 (UTC)]&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Dependencies&#039;&#039;&#039;&lt;br /&gt;
*Requires a formal description of the interface with graph nodes representing clickable interface elements and graph edges representing the physical (on-screen) distance between adjacent nodes. &amp;lt;span style=&amp;quot;color: gray;&amp;quot;&amp;gt;[Is this graph format you&#039;re suggesting part of the proposal, or is the &amp;quot;language&amp;quot; itself a dependency from literature? [[User:E J Kalafarski|E J Kalafarski]] 15:22, 29 April 2009 (UTC)]&amp;lt;/span&amp;gt; &amp;lt;span style=&amp;quot;color: blue;&amp;quot;&amp;gt;I think the graph structure is just a good way of representing something that will be derived from an algorithm that simply measures the distances between interface elements -- it&#039;s not something novel that we&#039;re developing.&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Module Description&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*Inputs &amp;lt;span style=&amp;quot;color: gray;&amp;quot;&amp;gt;[In theory could the non-semantic inputs be automatically read from the interface? - Steven]&amp;lt;/span&amp;gt;&lt;br /&gt;
**A formal description of the interface and its elements (e.g. buttons).&lt;br /&gt;
**A formal description of a particular task and the possible paths through a subset of interface elements that permit the user to accomplish that task.&lt;br /&gt;
**The physical distances between interface elements along those paths.&lt;br /&gt;
**The width of those elements along the most likely axes of motion.&lt;br /&gt;
**Device (e.g. mouse) characteristics including start/stop time and the inherent speed limitations of the device.&lt;br /&gt;
&lt;br /&gt;
*Output&lt;br /&gt;
**The module will then use the Shannon formulation of Fitts&#039;s Law to compute the average time needed to complete the task along those paths.&lt;br /&gt;
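The computation can be sketched as follows; the constants a and b are placeholders to be fit from device data, and the path format mirrors the graph inputs above:&lt;br /&gt;

```python
import math

def fitts_time(distance, width, a=0.1, b=0.15):
    """Shannon formulation of Fitts's Law: MT = a + b * log2(D/W + 1).
    a and b are device-dependent constants (the defaults here are
    illustrative, to be estimated from user traces)."""
    return a + b * math.log2(distance / width + 1)

def task_time(path):
    """Sum predicted movement times along a path of (distance, width)
    pairs between successive interface elements."""
    return sum(fitts_time(d, w) for d, w in path)

task_time([(300, 40), (120, 20)])
```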
&lt;br /&gt;
=== Affordances ===&lt;br /&gt;
* Owner: [[User:Jon Ericson | Jon Ericson]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Aims&#039;&#039;&#039;&lt;br /&gt;
*To provide interface evaluations and recommendations based on a measure of the extent to which the user perceives the relevant affordances of the interface when performing specified tasks.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Contributions&#039;&#039;&#039;&lt;br /&gt;
*A quantitative measure of the extent to which an interface suggests to the user the actions that it is capable of performing.&lt;br /&gt;
*A quantitative, indirect measure of the extent to which an interface facilitates (or hinders) the use of fast perceptual mechanisms.&lt;br /&gt;
&amp;lt;span style=&amp;quot;color: gray;&amp;quot;&amp;gt;[Again, I&#039;m a fan.  I don&#039;t think this has been formalized already. [[User:E J Kalafarski|E J Kalafarski]] 15:24, 29 April 2009 (UTC)]&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Demonstrations&#039;&#039;&#039;&lt;br /&gt;
*We will demonstrate the feasibility of this module through the following experiment:&lt;br /&gt;
**Specify a task for a user to perform with scientific visualization software.&lt;br /&gt;
**There should be several different ways to complete the task (paths through the space of possible interface actions).&lt;br /&gt;
**Some of these paths will be more direct than others.&lt;br /&gt;
**We will then measure the number of task-relevant affordances that were perceived and acted upon by analyzing the user trace, and the time required to complete the task.&lt;br /&gt;
**Use the formula: (affordances perceived) / [(relevant affordances present) * (time to complete task)].&lt;br /&gt;
**Correlate the resulting scores with verbal reports on naturalness and ease-of-use for the interface.&lt;br /&gt;
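The formula in the demonstration above, stated as code (the function name is ours; the formula itself is from the bullet above):&lt;br /&gt;

```python
def affordance_score(perceived, present, time_to_complete):
    """Proposed measure: (affordances perceived) /
    [(relevant affordances present) * (time to complete task)].
    Higher scores mean more of the relevant affordances were noticed,
    and more quickly."""
    if present == 0 or time_to_complete <= 0:
        raise ValueError("need at least one relevant affordance and positive time")
    return perceived / (present * time_to_complete)

affordance_score(3, 4, 12.5)  # 3 of 4 affordances perceived in 12.5 s
```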
&lt;br /&gt;
&#039;&#039;&#039;Dependencies&#039;&#039;&#039;&lt;br /&gt;
*Requires a method for capturing user traces that include formalized records of interface elements used for a particular, pre-specified task or set of tasks.&lt;br /&gt;
*Providing suggestions/recommendations will require interaction with other modules that analyze the perceptual salience of interface elements.&lt;br /&gt;
&amp;lt;span style=&amp;quot;color: gray;&amp;quot;&amp;gt;[Some kind of formalized list or arbitrary classification of affordances might be necessary to limit the scope of this contribution. [[User:E J Kalafarski|E J Kalafarski]] 15:24, 29 April 2009 (UTC)]&amp;lt;/span&amp;gt;&amp;lt;span style=&amp;quot;color: blue;&amp;quot;&amp;gt;I&#039;m not sure I understand the comment -- are you asking how affordances will be formalized?&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Description of the Module&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*Module Inputs&lt;br /&gt;
**Formalized descriptions of...&lt;br /&gt;
***Interface elements&lt;br /&gt;
***Their associated actions&lt;br /&gt;
***The functions of those actions&lt;br /&gt;
***A particular task&lt;br /&gt;
***User traces for that task &amp;lt;span style=&amp;quot;color: gray;&amp;quot;&amp;gt;[Could this module benefit from eye-tracking? - Steven]&amp;lt;/span&amp;gt; &amp;lt;span style=&amp;quot;color: blue;&amp;quot;&amp;gt;Response: Absolutely, and it would also benefit from some measure of endogenous attention as well because determining whether someone perceived an affordance requires knowing where they were looking, what they were attending to, and whether or not they actually performed the afforded action.&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Processing&lt;br /&gt;
**Inputs (1-4) are used to generate a &amp;quot;user-independent&amp;quot; space of possible functions that the interface is capable of performing with respect to a given task -- what the interface &amp;quot;affords&amp;quot; the user.  From this set of possible interactions, our model determines the subset of optimal paths for performing the task.  The user trace (5) is then used to determine which functions were actually performed during the task, and this record is compared against the optimal-path data to measure the extent to which affordances of the interface are present but not perceived.  &lt;br /&gt;
&lt;br /&gt;
*Output&lt;br /&gt;
**The output of this module is a simple ratio of (affordances perceived) / [(relevant affordances present) * (time to complete task)] which provides a quantitative measure of the extent to which the interface is &amp;quot;natural&amp;quot; to use for a particular task.&lt;br /&gt;
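&lt;br /&gt;
One way to make the processing and output steps concrete is the sketch below, which treats a user trace as a sequence of interface actions and the relevant affordances as a set; all names are illustrative assumptions:&lt;br /&gt;

```python
def evaluate_trace(relevant_affordances, user_trace, task_time_s):
    # Affordances that were actually acted upon during the task.
    perceived = relevant_affordances.intersection(user_trace)
    # The module output: (affordances perceived) /
    # [(relevant affordances present) * (time to complete task)].
    score = len(perceived) / (len(relevant_affordances) * task_time_s)
    return perceived, score

relevant = {"drag-rotate", "scroll-zoom", "double-click-focus"}
trace = ["menu-open", "scroll-zoom", "drag-rotate", "menu-close"]
perceived, score = evaluate_trace(relevant, trace, 40)
print(sorted(perceived))  # ['drag-rotate', 'scroll-zoom']
```

A fuller implementation would first derive the set of relevant affordances from the optimal paths rather than take it as given.&lt;br /&gt;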
&lt;br /&gt;
== Temporal Patterns in Interface Interpretation ==&lt;br /&gt;
&lt;br /&gt;
* Owner: [[User:Adam Darlow | Adam Darlow]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Aims&#039;&#039;&#039;&lt;br /&gt;
* Identify the interface factors that enable users to generate a causal theory of an application&#039;s behavior and whether that theory will be mechanistic or intentional.&lt;br /&gt;
* Apply cognitive theory of event perception to identify the cues by which users segment events under each of the overarching theories.&lt;br /&gt;
* Use the above knowledge to evaluate and improve the intuitiveness and discoverability of application interfaces.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Contributions&#039;&#039;&#039;&lt;br /&gt;
* Formulations of temporal interaction &#039;&#039;design patterns&#039;&#039; for use by interface designers. For example, a design pattern for rigid physical link would state that it is a mechanistic relation between two interface items whereby when one item is moved by the user, the other item makes exactly the same movements in synchrony. It would also state how much movement is sufficient to generate the impression of the rigid link.&lt;br /&gt;
* Implementation and evaluation of the addition of temporal patterns to an existing interface for more intuitive and discoverable interactions.&lt;br /&gt;
* Implementation of an interface evaluation module that measures the presence and consistency of temporal patterns in an interface.&lt;br /&gt;
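&lt;br /&gt;
To illustrate what a temporal-pattern evaluation might check, the sketch below tests one pattern, the rigid physical link, by asking whether two items keep a (nearly) constant positional offset while moving. The representation and names are assumptions, not a fixed design:&lt;br /&gt;

```python
def is_rigidly_linked(trace_a, trace_b, tol=1.0):
    # trace_a, trace_b: per-frame (x, y) positions of two interface items.
    # A rigid link means one item mirrors the other item's movements in
    # synchrony, i.e. the offset between them stays (nearly) constant.
    if len(trace_a) != len(trace_b):
        return False
    dx0 = trace_b[0][0] - trace_a[0][0]
    dy0 = trace_b[0][1] - trace_a[0][1]
    return all(
        tol >= abs((bx - ax) - dx0) and tol >= abs((by - ay) - dy0)
        for (ax, ay), (bx, by) in zip(trace_a, trace_b)
    )

a = [(0, 0), (5, 0), (10, 2)]
b = [(20, 10), (25, 10), (30, 12)]  # same movements, constant offset
print(is_rigidly_linked(a, b))  # True
```

The evaluation module contribution above could then report how consistently such patterns hold across the interface.&lt;br /&gt;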
&lt;br /&gt;
&#039;&#039;&#039;Demonstration&#039;&#039;&#039;&lt;br /&gt;
* We will demonstrate significant improvements to the memorability and discoverability of functions in existing interfaces by adding temporal patterns to the interface interactions. We will show that these patterns affect how users conceptualize the application and how they interact with it. &lt;br /&gt;
* We will demonstrate that the corresponding evaluation module can determine when an interface is lacking in temporal patterns or has temporal patterns that are inconsistent in terms of a mechanistic or intentional conceptualization of the application. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Dependencies&#039;&#039;&#039;&lt;br /&gt;
* Video capture of user interfaces synchronized with traces of the user interactions.&lt;br /&gt;
* The parallel evaluation framework proposed elsewhere in this proposal.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Background and Related Work&#039;&#039;&#039;&lt;br /&gt;
* Increased computational power allows us to go beyond metaphors from the real world and use realistic dynamic interactions to make interfaces more intuitive. The naive approach to this has been to simulate a complete physical environment &amp;lt;ref&amp;gt; [http://portal.acm.org/citation.cfm?id=1124772.1124965 Keepin’ It Real: Pushing the Desktop Metaphor with Physics, Piles and the Pen] &amp;lt;/ref&amp;gt;. While effective at making functionality more intuitive and discoverable, this approach has only been applied to file organization, and it is unclear how it generalizes to other kinds of applications.&lt;br /&gt;
* Research in cognitive science suggests a more sophisticated approach. People automatically develop high level conceptualizations of a display based on certain low-level patterns of temporal dynamics &amp;lt;ref&amp;gt;[http://vrl.cs.brown.edu/wiki/Scholl-2000-PCA Perceptual causality and animacy] &amp;lt;/ref&amp;gt;. Different patterns of interaction among elements suggest different causal or intentional interpretations. People use these interpretations when predicting how the system will behave. Therefore, having temporal dynamics based on a coherent conceptualization can make functions more intuitive, memorable and discoverable.&lt;br /&gt;
* System conceptualization also influences how events in that system are segregated &amp;lt;ref&amp;gt; [http://www.michaelwmorris.com/UsefulPapers/ZacksTversky2001.pdf Event Structure in Perception and Conception] &amp;lt;/ref&amp;gt;. The user will make different generalizations from an interaction event depending on whether it is perceived as a single complex event or a series of simpler ones. This affects the user&#039;s ability to remember the interaction and to discover other possible interactions.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Significance&#039;&#039;&#039;&lt;br /&gt;
* To the best of my knowledge, no existing research has studied the general application of temporal dynamics from the real world to human-computer interface design beyond systems that simulate a physical environment. This approach aims to extend the advantages of simulated physical environments to other real world metaphors in interface design.&lt;br /&gt;
* One goal of this project is to open up a new field of research in the use of realistic temporal interactions to harness users&#039; existing intuitions. Simulated physical environments &amp;lt;ref&amp;gt; [http://portal.acm.org/citation.cfm?id=1124772.1124965 Keepin’ It Real: Pushing the Desktop Metaphor with Physics, Piles and the Pen] &amp;lt;/ref&amp;gt; use rich temporal interactions, but are limited to isolated projects because of the overhead in designing the system and the focus on the desktop metaphor. Other real world metaphors are ubiquitous in interface design &amp;lt;ref&amp;gt; [http://portal.acm.org/citation.cfm?doid=1188816.1188820 The reification of metaphor as a design tool] &amp;lt;/ref&amp;gt; but don&#039;t take advantage of users&#039; sensitivity to temporal dynamics. Our research aims to demonstrate that temporal dynamics can be applied to many interfaces with different metaphors.&lt;br /&gt;
* Another goal of the project is to better understand how different conceptualizations of an application lead users to interact with it in different ways. For example, when thinking of what a function would be called, a user with a mechanistic conceptualization of the interface might look for something denoting the process of the function, while a user with an intentional conceptualization might look for something denoting the end result of the function. If both interface designers and interface users think about interfaces in terms of these (and other) high level conceptualizations, this can lead to more consistent, intuitive and enjoyable interfaces. The idea of application conceptualization is wide open for research. As useful as it is, the distinction between mechanistic and intentional is only the tip of the iceberg.&lt;/div&gt;</summary>
		<author><name>Adam Darlow</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal_(draft_2)&amp;diff=3163</id>
		<title>CS295J/Research proposal (draft 2)</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal_(draft_2)&amp;diff=3163"/>
		<updated>2009-04-24T15:34:57Z</updated>

		<summary type="html">&lt;p&gt;Adam Darlow: /* Module Inputs (Incomplete) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
* Owners: [[User:Adam Darlow | Adam Darlow]], [[User:Eric Sodomka | Eric Sodomka]], [[User: Trevor O&#039;Brien | Trevor]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Propose:&#039;&#039;&#039; The design, application and evaluation of a novel, cognition-based, computational framework for assessing interface design and providing automated suggestions to optimize usability.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Evaluation Methodology:&#039;&#039;&#039;  Our techniques will be evaluated quantitatively through a series of user-study trials, as well as qualitatively by a team of expert interface designers. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Contributions and Significance:&#039;&#039;&#039;  We expect this work to make the following contributions:  &lt;br /&gt;
# design-space analysis and quantitative evaluation of cognition-based techniques for assessing user interfaces.  &lt;br /&gt;
# design and quantitative evaluation of techniques for suggesting optimized interface-design changes. &lt;br /&gt;
# an extensible, multimodal software architecture for capturing user traces integrated with pupil-tracking data, auditory recognition, and muscle-activity monitoring.  &lt;br /&gt;
# specification (language?) of how to define an interface evaluation module and how to integrate it into a larger system.  &lt;br /&gt;
# (there may be more here, like testing different cognitive models, generating a markup language to represent interfaces, maybe even a unique metric space for interface usability)&lt;br /&gt;
&lt;br /&gt;
--&lt;br /&gt;
&lt;br /&gt;
We propose a framework for interface evaluation and recommendation that integrates behavioral models and design guidelines from both cognitive science and HCI. Our framework behaves like a committee of specialized experts, where each expert provides its own assessment of the interface, given its particular knowledge of HCI or cognitive science. For example, an expert may provide an evaluation based on the GOMS method, Fitts&#039;s law, Maeda&#039;s design principles, or cognitive models of learning and memory. An aggregator collects all of these assessments and weights the opinions of each expert, and outputs to the developer a merged evaluation score and a weighted set of recommendations.&lt;br /&gt;
&lt;br /&gt;
Systematic methods of estimating human performance with computer interfaces are used only sparingly despite their obvious benefits, the reason being the overhead involved in implementing them. In order to test an interface, both manual coding systems like the GOMS variations and user simulations like those based on ACT-R/PM and EPIC require detailed pseudo-code descriptions of the user workflow with the application interface. Any change to the interface then requires extensive changes to the pseudo-code, a major problem because of the trial-and-error nature of interface design. Updating the models themselves is even more complicated. Even an expert in CPM-GOMS, for example, can&#039;t necessarily adapt it to take into account results from new cognitive research.&lt;br /&gt;
&lt;br /&gt;
Our proposal makes automatic interface evaluation easier to use in several ways. First of all, we propose to divide the input to the system into three separate parts: functionality, user traces, and interface. By separating the functionality from the interface, even radical interface changes will require updating only that part of the input. The user traces are also defined over the functionality so that they too translate across different interfaces. Second, the parallel modular architecture allows for a lower &amp;quot;entry cost&amp;quot; for using the tool. The system includes a broad array of evaluation modules, some of which are very simple and some more complex. The simpler modules use only a subset of the input that a system like GOMS or ACT-R would require. This means that while more input will still lead to better output, interface designers can get minimal evaluations with only minimal information. For example, a visual search module may not require any functionality or user traces in order to determine whether all interface elements are distinct enough to be easy to find. Finally, a parallel modular architecture is much easier to augment with relevant cognitive and design evaluations.&lt;br /&gt;
&lt;br /&gt;
= Background / Related Work =&lt;br /&gt;
&lt;br /&gt;
Each person should add the background related to their specific aims.&lt;br /&gt;
&lt;br /&gt;
* Steven Ellis - Cognitive models of HCI, including GOMS variations and ACT-R&lt;br /&gt;
* EJ - Design Guidelines&lt;br /&gt;
* Jon - Perception and Action&lt;br /&gt;
* Andrew - Multiple task environments&lt;br /&gt;
* Gideon - Cognition and dual systems&lt;br /&gt;
* Ian - Interface design process&lt;br /&gt;
* Trevor - User trace collection methods (especially any eye-tracking, EEG, ... you want to suggest using)&lt;br /&gt;
&lt;br /&gt;
== Cognitive Models ==&lt;br /&gt;
I plan to port over most of the background on cognitive models of HCI from the old proposal&lt;br /&gt;
&lt;br /&gt;
Additions will comprise:&lt;br /&gt;
*CPM-GOMS as a bridge from GOMS architecture to the promising procedural optimization of the Model Human Processor&lt;br /&gt;
**Context of CPM development, discuss its relation to original GOMS and KLM&lt;br /&gt;
***Establish the tasks which were relevant for optimization when CPM was developed and note that its obsolescence may have been unavoidable&lt;br /&gt;
**Focus on CPM as the first step in transitioning from descriptive data, provided by mounting efforts in the cognitive sciences to discover the nature of task processing and accomplishment, to prescriptive algorithms that can predict an interface’s efficiency and suggest improvements&lt;br /&gt;
**CPM’s purpose as an abstraction of cognitive processing – a symbolic representation not designed for accuracy but precision&lt;br /&gt;
**CPM’s successful trials, e.g. Ernestine&lt;br /&gt;
***Implications of this project include CPM’s ability to accurately estimate processing at a psychomotor level&lt;br /&gt;
***Project does suggest limitations, however, when one attempts to examine more complex tasks which involve deeper and more numerous cognitive processes&lt;br /&gt;
*ACT-R as an example of a progressive cognitive modeling tool&lt;br /&gt;
**A tool clearly built by and for cognitive scientists, and as a result presents a much more accurate view of human processing – helpful for our research&lt;br /&gt;
**Built-in automation, which now seems to be a standard feature of cognitive modeling tools&lt;br /&gt;
**Still an abstraction of cognitive processing, but makes adaptation to cutting-edge cognitive research findings an integral aspect of its modular structure&lt;br /&gt;
**Expand on its focus on multi-tasking, taking what was a huge advance between GOMS and its CPM variation and bringing the simulation several steps closer to approximating the nature of cognition with regard to HCI&lt;br /&gt;
**Far more accessible both for researchers and the lay user/designer in its portability to LISP, pre-construction of modules representing cognitive capacities and underlying algorithms modeling paths of cognitive processing&lt;br /&gt;
&lt;br /&gt;
==Design guidelines==&lt;br /&gt;
A multitude of rule sets exist for the design of not only interfaces, but architecture, city planning, and software development.  They can range in scale from one primary rule to as many as Christopher Alexander&#039;s 253 rules for urban environments,&amp;lt;ref&amp;gt;[http://hci.rwth-aachen.de/materials/publications/borchers2000a.pdf Borchers, Jan O.  &amp;quot;A Pattern Approach to Interaction Design.&amp;quot;  2000.]&amp;lt;/ref&amp;gt; which he introduced with the concept of design patterns in the 1970s.  Study has likewise been conducted on the use of these rules:&amp;lt;ref&amp;gt;http://stl.cs.queensu.ca/~graham/cisc836/lectures/readings/tetzlaff-guidelines.pdf&amp;lt;/ref&amp;gt; guidelines are often only partially understood, indistinct to the developer, and &amp;quot;fraught&amp;quot; with potential usability problems given a real-world situation.&lt;br /&gt;
&lt;br /&gt;
===Application to AUE===&lt;br /&gt;
And yet, the vast majority of guideline sets, including the most popular rulesets, have been arrived at heuristically.  The most successful, such as Raskin&#039;s and Shneiderman&#039;s, have been forged from years of observation instead of empirical study and experimentation.  The problem is similar to the problem of circular logic faced by automated usability evaluations: an automated system is limited in the suggestions it can offer to a set of preprogrammed guidelines which have often not been subjected to rigorous experimentation.&amp;lt;ref&amp;gt;[http://www.eecs.berkeley.edu/Pubs/TechRpts/2000/CSD-00-1105.pdf Ivory, M and Hearst, M.  &amp;quot;The State of the Art in Automated Usability Evaluation of User Interfaces.&amp;quot; ACM Computing Surveys (CSUR), 2001.]&amp;lt;/ref&amp;gt;  In the vast majority of existing studies, emphasis has traditionally been placed on either the development of guidelines or the application of existing guidelines to automated evaluation.  A mutually-reinforcing development of both simultaneously has not been attempted.&lt;br /&gt;
&lt;br /&gt;
Overlap between rulesets is inevitable.  For our purposes of evaluating existing rulesets efficiently, without extracting and analyzing each rule individually, it may be desirable to identify the overarching &#039;&#039;principles&#039;&#039; or &#039;&#039;philosophy&#039;&#039; (max. 2 or 3) for a given ruleset and determine their quantitative relevance to problems of cognition.&lt;br /&gt;
&lt;br /&gt;
===Popular and seminal examples===&lt;br /&gt;
Shneiderman&#039;s [http://faculty.washington.edu/jtenenbg/courses/360/f04/sessions/schneidermanGoldenRules.html Eight Golden Rules] date to 1987 and are arguably the most cited.  They are heuristic, but can be somewhat classified by cognitive objective: at least two rules apply primarily to &#039;&#039;repeated use&#039;&#039;, versus &#039;&#039;discoverability&#039;&#039;.  Up to five of Shneiderman&#039;s rules emphasize &#039;&#039;predictability&#039;&#039; in the outcomes of operations and &#039;&#039;increased feedback and control&#039;&#039; in the agency of the user.  His final rule, paradoxically, removes control from the user by suggesting a reduced short-term memory load, which we can arguably classify as &#039;&#039;simplicity&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Raskin&#039;s [http://www.mprove.de/script/02/raskin/designrules.html Design Rules] are classified into five principles by the author, augmented by definitions and supporting rules.  While one principle is primarily aesthetic (a design problem arguably out of the bounds of this proposal) and one is a basic endorsement of testing, the remaining three begin to reflect philosophies similar to Shneiderman&#039;s: reliability or &#039;&#039;predictability&#039;&#039;, &#039;&#039;simplicity&#039;&#039; or &#039;&#039;efficiency&#039;&#039; (which we can construe as two sides of the same coin), and finally a concept of &#039;&#039;uninterruptibility&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Maeda&#039;s [http://lawsofsimplicity.com/?cat=5&amp;amp;order=ASC Laws of Simplicity] are fewer, and ostensibly emphasize &#039;&#039;simplicity&#039;&#039; exclusively, although elements of &#039;&#039;use&#039;&#039; as related by Shneiderman&#039;s rules and &#039;&#039;efficiency&#039;&#039; as defined by Raskin may be facets of this simplicity.  Google&#039;s corporate mission statement presents [http://www.google.com/corporate/ux.html Ten Principles], only half of which can be considered true interface guidelines.  &#039;&#039;Efficiency&#039;&#039; and &#039;&#039;simplicity&#039;&#039; are cited explicitly, aesthetics are once again noted as crucial, and working within a user&#039;s trust is another application of &#039;&#039;predictability&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
===Elements and goals of a guideline set===&lt;br /&gt;
Myriad rulesets exist, but variation becomes scarce—it indeed seems possible to parse these common rulesets into overarching principles that can be converted to or associated with quantifiable cognitive properties.  For example, it is likely &#039;&#039;simplicity&#039;&#039; has an analogue in the short-term memory retention or visual retention of the user, vis-à-vis the rule of [http://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=j5q0VvOGExYC&amp;amp;oi=fnd&amp;amp;pg=PA357&amp;amp;dq=seven+plus+or+minus+two&amp;amp;ots=prI3PKJBar&amp;amp;sig=vOZnqpnkXKGYWxK6_XlA4I_CRyI Seven, Plus or Minus Two].  &#039;&#039;Predictability&#039;&#039; likewise may have an analogue in Activity Theory, with regard to a user&#039;s perceptual expectations for a given action; &#039;&#039;uninterruptibility&#039;&#039; has implications in cognitive task-switching;&amp;lt;ref&amp;gt;[http://portal.acm.org/citation.cfm?id=985692.985715&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Czerwinski, Horvitz, and White. &amp;quot;A Diary Study of Task Switching and Interruptions.&amp;quot;  Proceedings of the SIGCHI conference on Human factors in computing systems, 2004.]&amp;lt;/ref&amp;gt; and so forth.&lt;br /&gt;
&lt;br /&gt;
Within the scope of this proposal, we aim to reduce and refine these philosophies found in seminal rulesets and identify their logical cognitive analogues.  By assigning a quantifiable taxonomy to these principles, we will be able to rank and weight them with regard to their real-world applicability, developing a set of &amp;quot;meta-guidelines&amp;quot; and rules for applying them to a given interface in an automated manner.  Combined with cognitive models and multi-modal HCI analysis, we seek to develop, in parallel with these guidelines, the interface evaluation system responsible for their application.&lt;br /&gt;
&lt;br /&gt;
== Perception and Action (in progress) ==&lt;br /&gt;
&lt;br /&gt;
*Information Processing Approach&lt;br /&gt;
**Advantages&lt;br /&gt;
***Formalism eases translation of theory into scripting language&lt;br /&gt;
**Disadvantages&lt;br /&gt;
***Assumes symbolic representation&lt;br /&gt;
&lt;br /&gt;
*Ecological (Gibsonian) Approach&lt;br /&gt;
**Advantages&lt;br /&gt;
***Emphasis on bodily and environmental constraints&lt;br /&gt;
**Disadvantages&lt;br /&gt;
***Lack of formalism hinders translation of theory into scripting language&lt;br /&gt;
&lt;br /&gt;
= Specific Aims and Contributions (to be separated later) =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Specific Aims&#039;&#039;&#039;&lt;br /&gt;
* Incorporate interaction history mechanisms into a set of existing applications.&lt;br /&gt;
* Perform user-study evaluation of history-collection techniques.&lt;br /&gt;
* Distill a set of cognitive principles/models, and evaluate empirically?&lt;br /&gt;
* Build/buy sensing system to include pupil-tracking, muscle-activity monitoring, auditory recognition.&lt;br /&gt;
* Design techniques for manual/semi-automated/automated construction of &amp;lt;insert favorite cognitive model here&amp;gt; from interaction histories and sensing data.&lt;br /&gt;
* Design system for posterior analysis of interaction history w.r.t. &amp;lt;insert favorite cognitive model here&amp;gt;, evaluating critical path &amp;lt;or equivalent&amp;gt; trajectories.&lt;br /&gt;
* Design cognition-based techniques for detecting bottlenecks in critical paths, and offering optimized alternatives. &lt;br /&gt;
* Perform quantitative user-study evaluations, collect qualitative feedback from expert interface designers.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Contributions&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Design-space analysis and quantitative evaluation of cognition-based techniques for assessing user interfaces. &lt;br /&gt;
* Design and quantitative evaluation of techniques for suggesting optimized interface-design changes. &lt;br /&gt;
* An extensible, multimodal software architecture for capturing user traces integrated with pupil-tracking data, auditory recognition, and muscle-activity monitoring. &lt;br /&gt;
* (there may be more here, like testing different cognitive models, generating a markup language to represent interfaces, maybe even a unique metric space for interface usability) &lt;br /&gt;
&lt;br /&gt;
--&lt;br /&gt;
&lt;br /&gt;
See the [http://vrl.cs.brown.edu/wiki/images/b/b1/Flowchart2.pdf flowchart] for a visual overview of our aims.&lt;br /&gt;
&lt;br /&gt;
In order to use this framework, a designer will have to provide:&lt;br /&gt;
* Functional specification - what are the possible interactions between the user and the application. This can be thought of as method signatures, with a name (e.g., setVolume), direction (to user or from user) and a list of value types (boolean, number, text, video, ...) for each interaction.&lt;br /&gt;
* GUI specification - a mapping of interactions to interface elements (e.g., setVolume is mapped to the grey knob in the bottom left corner with clockwise turning increasing the input number).&lt;br /&gt;
* Functional user traces - sequences of representative ways in which the application is used. Instead of writing them, the designer could have users use the application with a trial interface and then use our methods to generalize the user traces beyond the specific interface (The second method is depicted in the diagram). As a form of pre-processing, the system also generates an interaction transition matrix which lists the probability of each type of interaction given the previous interaction.&lt;br /&gt;
* Utility function - this is a weighting of various performance metrics (time, cognitive load, fatigue, etc.), where the weighting expresses the importance of a particular dimension to the user. For example, a user at NASA probably cares more about interface accuracy than speed. By passing this information to our committee of experts, we can create interfaces that are tuned to maximize the utility of a particular user type.&lt;br /&gt;
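&lt;br /&gt;
As a concrete sketch of the last two inputs, the utility function can be a simple weighting over performance metrics, and the interaction transition matrix can be estimated from functional user traces. Names and weights here are illustrative assumptions:&lt;br /&gt;

```python
# Utility function: weights express how much each performance dimension
# matters to this user type (e.g. accuracy over speed for a NASA user).
utility_weights = {"time": 0.2, "accuracy": 0.6, "fatigue": 0.2}

def utility(metrics, weights):
    # Weighted sum of per-dimension performance scores.
    return sum(weights[d] * metrics[d] for d in weights)

def transition_matrix(trace):
    # P(next interaction | previous interaction), estimated from a trace.
    counts = {}
    for prev, nxt in zip(trace, trace[1:]):
        row = counts.setdefault(prev, {})
        row[nxt] = row.get(nxt, 0) + 1
    return {p: {n: c / sum(row.values()) for n, c in row.items()}
            for p, row in counts.items()}

trace = ["showVideo", "setVolume", "setVolume", "showVideo", "setVolume"]
print(transition_matrix(trace))
# {'showVideo': {'setVolume': 1.0}, 'setVolume': {'setVolume': 0.5, 'showVideo': 0.5}}
```

The transition matrix is the pre-processing artifact mentioned above; modules can consult it without needing the full traces.&lt;br /&gt;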
&lt;br /&gt;
Each of the modules can use all of this information or a subset of it. Our approach stresses flexibility and the ability to give more meaningful feedback the more information is provided. After processing the information sent by the system of experts, the aggregator will output:&lt;br /&gt;
* An evaluation of the interface. Evaluations are expressed both in terms of the utility function components (i.e. time, fatigue, cognitive load, etc.), and in terms of the overall utility for this interface (as defined by the utility function). These evaluations are given in the form of an efficiency curve, where the utility received on each dimension can change as the user becomes more accustomed to the interface. &lt;br /&gt;
* Suggested improvements for the GUI are also output. These suggestions are meant to optimize the utility function that was input to the system. If a user values accuracy over time, interface suggestions will be made accordingly.&lt;br /&gt;
&lt;br /&gt;
== Collecting User Traces ==&lt;br /&gt;
* Owner: [[User:Trevor O&#039;Brien | Trevor O&#039;Brien]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Given an interface, our first step is to run users on the interface and log these user interactions. We want to log actions at a sufficiently low level so that a GOMS model can be generated from the data. When possible, we&#039;d also like to log data using additional sensing technologies, such as pupil-tracking, muscle-activity monitoring and auditory recognition; this information will help to analyze the explicit contributions of perception, cognition and motor skills with respect to user performance.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Generalizing User Traces ==&lt;br /&gt;
* Owner: [[User:Trevor O&#039;Brien | Trevor O&#039;Brien]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;The user traces that are collected are tied to a specific interface. In order to use them with different interfaces to the same application, they should be generalized to be based only on the functional description of the application and the user&#039;s goal hierarchy. This would abstract away from actions like accessing a menu.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;In addition to specific user traces, many modules could use a transition probability matrix based on interaction predictions.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Parallel Framework for Evaluation Modules ==&lt;br /&gt;
* Owner: [[User:Adam Darlow | Adam Darlow]], [[User:Eric Sodomka | Eric Sodomka]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section will describe in more detail the inputs, outputs and architecture that were presented in the introduction.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== Module Inputs (Incomplete) ====&lt;br /&gt;
* A set of utility dimensions {d1, d2, ...} are defined in the framework. These could be {d1=&amp;quot;time&amp;quot;, d2=&amp;quot;fatigue&amp;quot;, ...}&lt;br /&gt;
* A set of interaction functions. These specify all of the information that the application wants to give users or get from them. It is not tied to a specific interface. For example, the fact that an applications shows videos would be included here. Whether it displays them embedded, in a pop-up window or fullscreen would not.&lt;br /&gt;
* A mapping of interaction functions to interface elements (e.g., buttons, windows, dials,...). Lots of optional information describing visual properties, associated text, physical interactions (e.g., turning the dial clockwise increases the input value) and timing.&lt;br /&gt;
* Functional user traces - sequences of interaction functions that represent typical user interactions with the application. Could include a goal hierarchy, in which case the function sequence is at the bottom of the hierarchy.&lt;br /&gt;
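&lt;br /&gt;
One possible concrete encoding of these inputs is sketched below; the schema is a hypothetical illustration of the shapes described above, not a committed format:&lt;br /&gt;

```python
# Utility dimensions defined by the framework.
utility_dimensions = ["time", "fatigue"]

# Interaction functions: interface-independent signatures.
interaction_functions = {
    "setVolume": {"direction": "from_user", "args": ["number"]},
    "showVideo": {"direction": "to_user", "args": ["video"]},
}

# Mapping of interaction functions to interface elements, with optional
# visual, textual, physical, and timing detail.
gui_mapping = {
    "setVolume": {"element": "dial", "physical": "clockwise increases value"},
    "showVideo": {"element": "embedded player"},
}

# A functional user trace, optionally hung off a goal hierarchy, with the
# function sequence at the bottom of the hierarchy.
functional_trace = {
    "goal": "watch a clip at a comfortable volume",
    "functions": ["showVideo", "setVolume"],
}
```

Note that `gui_mapping` is the only part tied to a specific interface; the rest survives radical interface changes unchanged.&lt;br /&gt;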
&lt;br /&gt;
==== Module Outputs ====&lt;br /&gt;
* Every module outputs at least one of the following:&lt;br /&gt;
** An evaluation of the interface&lt;br /&gt;
*** This can be on any or all of the utility dimensions, e.g. evaluation={d1=score1, d2=score2, ...}&lt;br /&gt;
*** This can alternately be an overall evaluation, ignoring dimensions, e.g. evaluation={score}&lt;br /&gt;
**** In this case, the aggregator will treat this as the module giving the same score to all dimensions. Which dimension this evaluator actually predicts well on can be learned by the aggregator over time.&lt;br /&gt;
** Recommendation(s) for improving the interface&lt;br /&gt;
***This can be a textual description of what changes the designer should make&lt;br /&gt;
***This can alternately be a transformation that can automatically be applied to the interface language (without designer intervention)&lt;br /&gt;
***In addition to the textual or transformational description of the recommendation, a &amp;quot;change in evaluation&amp;quot; is output to describe specifically how the recommended change will improve the interface&lt;br /&gt;
****Recommendation = {description=&amp;quot;make this change&amp;quot;, Δevaluation={d1=score1, d2=score2, ...}}&lt;br /&gt;
****Like before, this Δevaluation can cover any number of dimensions, or it can be generic.&lt;br /&gt;
***Either a single recommendation or a set of recommendations can be output&lt;br /&gt;
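&lt;br /&gt;
A minimal sketch of this output structure in Python; the class and field names (ModuleOutput, Recommendation, delta_evaluation) are illustrative, not part of the framework specification:&lt;br /&gt;
&lt;br /&gt;
```python
# Hypothetical sketch of the module output structure described above.
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    description: str               # textual or transformational change
    delta_evaluation: dict = field(default_factory=dict)   # e.g. {"time": 0.1}

@dataclass
class ModuleOutput:
    evaluation: dict = field(default_factory=dict)       # per-dimension, or {"overall": s}
    recommendations: list = field(default_factory=list)  # zero or more Recommendations

out = ModuleOutput(
    evaluation={"time": 0.7, "fatigue": 0.4},
    recommendations=[Recommendation("double the submit button width",
                                    {"time": 0.1})],
)
```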
&lt;br /&gt;
==== Aggregator Inputs ====&lt;br /&gt;
The aggregator receives as input the outputs of all the modules.&lt;br /&gt;
&lt;br /&gt;
==== Aggregator Outputs ====&lt;br /&gt;
&lt;br /&gt;
Outputs for the aggregator are the same as the outputs for each module. The difference is that the aggregator will consider all the module outputs, and arrive at a merged output based on the past performance of each of the modules.&lt;br /&gt;
&lt;br /&gt;
== Evaluation and Recommendation via Modules ==&lt;br /&gt;
* Owner: [[User:E J Kalafarski | E J Kalafarski]]&lt;br /&gt;
&lt;br /&gt;
A &amp;quot;meta-module&amp;quot; called the aggregator will be responsible for assembling and formatting the output of all other modules into a structure that is both extensible and immediately usable by either an automated designer or a human designer.&lt;br /&gt;
&lt;br /&gt;
===Requirements===&lt;br /&gt;
The aggregator&#039;s functionality, then, is defined by its &#039;&#039;&#039;inputs&#039;&#039;&#039;, the outputs of the other modules, and the desired &#039;&#039;&#039;output&#039;&#039;&#039; of the system as a whole, per its position in the architecture.  Its purpose is largely formatting and reconciliation of the products of the multitudinous (and extensible) modules.  The output of the aggregator must meet several requirements: first, to generate a set of human-readable suggestions for the improvement of the given interface; second, to generate a machine-readable, but also analyzable, evaluation of the various characteristics of the interface and accompanying user traces.&lt;br /&gt;
&lt;br /&gt;
From these specifications, it is logical to assume that a common language or format will be required for the output of individual modules.  We propose an XML-based file format, allowing: (1) a section for the standardized identification of problem areas, applicable rules, and proposed improvements, generalized by the individual module and mapped to a single element, or group of elements, in the original interface specification; (2) a section for specification of generalizable &amp;quot;utility&amp;quot; functions, allowing a module to specify how much a measurable quantity of utility is positively or negatively affected by properties of the input interface; (3) new, user-definable sections for evaluations of the given interface not covered by the first two sections.  The first two sections are capable of conveying the vast majority of module outputs predicted at this time, but the XML can extensibly allow modules to pass on whatever information may become prominent in the future.&lt;br /&gt;
&lt;br /&gt;
===Specification===&lt;br /&gt;
 &amp;lt;module id=&amp;quot;Fitts-Law&amp;quot;&amp;gt;&lt;br /&gt;
 	&amp;lt;interface-elements&amp;gt;&lt;br /&gt;
 		&amp;lt;element&amp;gt;&lt;br /&gt;
 			&amp;lt;desc&amp;gt;submit button&amp;lt;/desc&amp;gt;&lt;br /&gt;
 			&amp;lt;problem&amp;gt;&lt;br /&gt;
 				&amp;lt;desc&amp;gt;size&amp;lt;/desc&amp;gt;&lt;br /&gt;
 				&amp;lt;suggestion&amp;gt;width *= 2&amp;lt;/suggestion&amp;gt;&lt;br /&gt;
 				&amp;lt;suggestion&amp;gt;height *= 2&amp;lt;/suggestion&amp;gt;&lt;br /&gt;
 				&amp;lt;human-suggestion&amp;gt;Increase size relative to other elements&amp;lt;/human-suggestion&amp;gt;&lt;br /&gt;
 			&amp;lt;/problem&amp;gt;&lt;br /&gt;
 		&amp;lt;/element&amp;gt;&lt;br /&gt;
 	&amp;lt;/interface-elements&amp;gt;&lt;br /&gt;
 	&amp;lt;utility&amp;gt;&lt;br /&gt;
 		&amp;lt;dimension&amp;gt;&lt;br /&gt;
 			&amp;lt;desc&amp;gt;time&amp;lt;/desc&amp;gt;&lt;br /&gt;
 			&amp;lt;value&amp;gt;0:15:35&amp;lt;/value&amp;gt;&lt;br /&gt;
 		&amp;lt;/dimension&amp;gt;&lt;br /&gt;
 		&amp;lt;dimension&amp;gt;&lt;br /&gt;
 			&amp;lt;desc&amp;gt;frustration&amp;lt;/desc&amp;gt;&lt;br /&gt;
 			&amp;lt;value&amp;gt;pulling hair out&amp;lt;/value&amp;gt;&lt;br /&gt;
 		&amp;lt;/dimension&amp;gt;&lt;br /&gt;
 		&amp;lt;dimension&amp;gt;&lt;br /&gt;
 			&amp;lt;desc&amp;gt;efficiency&amp;lt;/desc&amp;gt;&lt;br /&gt;
 			&amp;lt;value&amp;gt;13.2s/KPM task&amp;lt;/value&amp;gt;&lt;br /&gt;
 			&amp;lt;value&amp;gt;0.56m/CPM task&amp;lt;/value&amp;gt;&lt;br /&gt;
 		&amp;lt;/dimension&amp;gt;&lt;br /&gt;
 	&amp;lt;/utility&amp;gt;&lt;br /&gt;
 	&amp;lt;tasks&amp;gt;&lt;br /&gt;
 		&amp;lt;task&amp;gt;&lt;br /&gt;
 			&amp;lt;desc&amp;gt;complete form&amp;lt;/desc&amp;gt;&lt;br /&gt;
 		&amp;lt;/task&amp;gt;&lt;br /&gt;
 		&amp;lt;task&amp;gt;&lt;br /&gt;
 			&amp;lt;desc&amp;gt;lookup SSN&amp;lt;/desc&amp;gt;&lt;br /&gt;
 		&amp;lt;/task&amp;gt;&lt;br /&gt;
 		&amp;lt;task&amp;gt;&lt;br /&gt;
 			&amp;lt;desc&amp;gt;format phone number&amp;lt;/desc&amp;gt;&lt;br /&gt;
 		&amp;lt;/task&amp;gt;&lt;br /&gt;
 	&amp;lt;/tasks&amp;gt;&lt;br /&gt;
 &amp;lt;/module&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Logic===&lt;br /&gt;
The file provided by each module is then the input to the aggregator.  The aggregator&#039;s most straightforward function is the compilation of the &amp;quot;problem areas&amp;quot;: assembling them, noting problem areas and suggestions that are recommended by more than one module, and weighting those accordingly in its final report.  These weightings can begin in an equal state, but the aggregator should be capable of learning iteratively which modules&#039; results are most relevant to the user and updating the weightings accordingly.  This may be accomplished with manual tuning or with a machine-learning algorithm capable of determining which modules most often agree with the others.&lt;br /&gt;
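&lt;br /&gt;
As a rough illustration of the weighting just described, the sketch below merges per-dimension scores from several modules, weighting each module by a (learned) reliability; the dictionary-based output format and the module names are assumptions for illustration only:&lt;br /&gt;
&lt;br /&gt;
```python
from collections import defaultdict

# Illustrative aggregator sketch: merge per-dimension scores from several
# modules, weighting each module by its (learned) reliability.
def aggregate(module_outputs, weights=None):
    if weights is None:                    # weightings begin in an equal state
        weights = {name: 1.0 for name in module_outputs}
    merged, total = defaultdict(float), defaultdict(float)
    for name, scores in module_outputs.items():
        for dim, score in scores.items():
            merged[dim] += weights[name] * score
            total[dim] += weights[name]
    return {dim: merged[dim] / total[dim] for dim in merged}

outputs = {"Fitts-Law": {"time": 0.8},
           "HCI-Guidelines": {"time": 0.6, "frustration": 0.4}}
merged = aggregate(outputs)   # "time" is averaged across both modules
```
&lt;br /&gt;
Learning the weights over time, for example by tracking each module&#039;s agreement with the others or with user feedback, would replace the equal initial state.&lt;br /&gt;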
&lt;br /&gt;
Secondly, the aggregator compiles the utility functions provided by the module specs.  This, again, is a summation of similarly-described values from the various modules.&lt;br /&gt;
&lt;br /&gt;
When confronted with user-defined sections of the XML spec, the aggregator is primarily responsible for compiling them and sending them along to the output of the machine.  Even if the aggregator does not recognize a section or property of the evaluative spec, if it sees the property reported by more than one module it should be capable of aggregating these intelligently.  In future versions of the spec, it should be possible for a module to provide instructions for the aggregator on how to handle unrecognized sections of the XML.&lt;br /&gt;
&lt;br /&gt;
From these compilations, then, the aggregator should be capable of outputting both aggregated human-readable suggestions on interface improvements for a human designer and a comprehensive evaluation of the interface&#039;s effectiveness at the given task traces.  Again, this is dependent on the specification of the system as a whole, but is likely to include measures and comparisons, graphs of task versus utility, and quantitative measures of an element&#039;s effectiveness.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section is necessarily defined by the output of the individual modules (which I already expect to be of varied and arbitrary structure) and the desired output of the machine as a whole.  It will likely need to be revised heavily after other modules and the &amp;quot;Parallel Framework&amp;quot; section are defined.&#039;&#039; [[User:E J Kalafarski|E J Kalafarski]] 12:34, 24 April 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section describes the aggregator, which takes the output of multiple independent modules and aggregates the results to provide (1) an evaluation and (2) recommendations for the user interface. We should explain how the aggregator weights the output of different modules (this could be based on historical performance of each module, or perhaps based on E.J.&#039;s cognitive/HCI guidelines).&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Sample Modules ==&lt;br /&gt;
&lt;br /&gt;
=== CPM-GOMS ===&lt;br /&gt;
* Owners: [[User:Steven Ellis | Steven Ellis]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This module will provide interface evaluations and suggestions based on a CPM-GOMS model of cognition for the given interface. It will provide a quantitative, predictive, cognition-based parameterization of usability. From empirically collected data, user trajectories through the model (critical paths) will be examined, highlighting bottlenecks within the interface, and offering suggested alterations to the interface to induce more optimal user trajectories.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
I’m hoping to have some input on this section, because it seems to be the crux of the “black box” into which we take the inputs of interface description, user traces, etc. and get our outputs (time, recommendations, etc.).  I know at least a few people have pretty strong thoughts on the matter and we ought to discuss the final structure.&lt;br /&gt;
&lt;br /&gt;
That said – my proposal for the module:&lt;br /&gt;
*In my opinion the concept of the Model Human Processor (at least as applied in CPM) is outdated – it’s too economic/overly parsimonious in its conception of human activity.  I think we need to create a structure which accounts for more realistic conditions of HCI including multitasking, aspects of distributed cognition (and other relevant uses of tools – as far as I can tell CPM doesn’t take into account any sort of productivity aids), executive control processes of attention, etc.  ACT-R appears to take steps towards this but we would probably need to look at their algorithms to know for sure.&lt;br /&gt;
*Critical paths will continue to play an important role – we should in fact emphasize that part of this tool’s purpose will be a description not only of ways in which the interface should be modified to best fit a critical path, but also ways in which users ought to be instructed in their use of the path.  This feedback mechanism could be bidirectional – if the model’s predictions of the user’s goals are incorrect, the critical path determined will also be incorrect and the interface inherently suboptimal.  The user could be prompted with a tooltip explaining in brief why and how the interface has changed, along with options to revert, select other configurations (euphemized by goals), and to view a short video detailing how to properly use the interface.&lt;br /&gt;
*Call me crazy, but if we assume designers will be willing to code a model of their interfaces into our ACT-R-esque language, could we allow that model to be fairly transparent to the user, who could use a GUI to input their goals to find an analogue in the program, which would subsequently rearrange its interface to fit the user’s needs?  Even if not useful to the users, such dynamic modeling could really help designers (IMO)&lt;br /&gt;
*I think the model should do its best to accept models written for ACT-R and whatever other cognitive models there are out there – gives us the best chance of early adoption&lt;br /&gt;
*I would particularly appreciate input on the number/complexity/type of inputs we’ll be using, as well as the same qualities for the output.&lt;br /&gt;
&lt;br /&gt;
=== HCI Guidelines ===&lt;br /&gt;
* Owner: [[User:E J Kalafarski | E J Kalafarski]]&lt;br /&gt;
&lt;br /&gt;
Shneiderman&#039;s Eight Golden Rules and Jakob Nielsen&#039;s Ten Heuristics are perhaps the most famous and well-regarded heuristic design guidelines to emerge over the last twenty years.  Although the explicit theoretical basis for such heuristics is controversial and not well-explored, the empirical success of these guidelines is established and accepted.  This module will parse out up to three or four common (that is, intersecting) principles from these accepted guidelines and apply them to the input interface.&lt;br /&gt;
&lt;br /&gt;
As an example, we identify an analogous principle that appears in Shneiderman (&amp;quot;Reduce short-term memory load&amp;quot;)&amp;lt;ref&amp;gt;http://faculty.washington.edu/jtenenbg/courses/360/f04/sessions/schneidermanGoldenRules.html&amp;lt;/ref&amp;gt; and Nielsen (&amp;quot;Recognition rather than recall/Minimize the user&#039;s memory load&amp;quot;)&amp;lt;ref&amp;gt;http://www.useit.com/papers/heuristic/heuristic_list.html&amp;lt;/ref&amp;gt;.  The input interface is then evaluated for the consideration of the principle, based on an explicit formal description of the interface, such as XAML or XUL.  The module attempts to determine how effectively the interface demonstrates the principle.  When analyzing an interface for several principles that may be conflicting or opposing in a given context, the module makes use of a hard-coded but iterative (and evolving) weighting of these principles, based on (1) how often they appear in the training set of accepted sets of guidelines, (2) how analogous a heuristic principle is to a cognitive principle in a parallel training set, and (3) how effective the principle&#039;s associated suggestion is found to be using a feedback mechanism.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Inputs&#039;&#039;&#039;&amp;lt;br/&amp;gt;&lt;br /&gt;
# A formal description of the interface and its elements (e.g. buttons).&lt;br /&gt;
# A formal description of a particular task and the possible paths through a subset of interface elements that permit the user to accomplish that task.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Output&#039;&#039;&#039;&amp;lt;br/&amp;gt;&lt;br /&gt;
Standard XML-formatted file containing problem areas of the input interface, suggestions for each problem area based on principles that were found to have a strong application to a problem element and the problem itself, and a human-readable generated analysis of the element&#039;s affinity for the principle.  Quantitative outputs will not be possible based on heuristic guidelines, and the &amp;quot;utility&amp;quot; section of this module&#039;s output is likely to be blank.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section could include an example or two of established design guidelines that could easily be implemented as modules.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Fitts&#039;s Law ===&lt;br /&gt;
* Owner: [[User:Jon Ericson | Jon Ericson]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This module provides an estimate of the required time to complete various tasks that have been decomposed into formalized sequences of interactions with interface elements, and will provide evaluations and recommendations for optimizing the time required to complete those tasks using the interface.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Inputs&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
1. A formal description of the interface and its elements (e.g. buttons).&lt;br /&gt;
&lt;br /&gt;
2. A formal description of a particular task and the possible paths through a subset of interface elements that permit the user to accomplish that task.&lt;br /&gt;
&lt;br /&gt;
3. The physical distances between interface elements along those paths.&lt;br /&gt;
&lt;br /&gt;
4. The width of those elements along the most likely axes of motion.&lt;br /&gt;
&lt;br /&gt;
5. Device (e.g. mouse) characteristics including start/stop time and the inherent speed limitations of the device.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Output&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The module will then use the Shannon formulation of Fitts&#039;s Law to compute the average time needed to complete the task along those paths.&lt;br /&gt;
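&lt;br /&gt;
A sketch of this computation, using the Shannon formulation MT = a + b * log2(D/W + 1); the device constants a and b below are placeholders that would come from input (5), not measured values:&lt;br /&gt;
&lt;br /&gt;
```python
import math

# Shannon formulation of Fitts's law: MT = a + b * log2(D/W + 1), where D is
# the distance to a target and W its width along the axis of motion.
# The constants a and b here are illustrative, not measured device values.
def fitts_time(distance, width, a=0.1, b=0.15):
    return a + b * math.log2(distance / width + 1)

def task_time(path, a=0.1, b=0.15):
    """Sum predicted movement times over a path of (distance, width) pairs."""
    return sum(fitts_time(d, w, a, b) for d, w in path)

# A task that touches three elements in sequence:
total = round(task_time([(300, 20), (150, 50), (80, 80)]), 3)   # -> 1.35
```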
&lt;br /&gt;
=== Affordances ===&lt;br /&gt;
* Owner: [[User:Jon Ericson | Jon Ericson]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This simple module will provide interface evaluations and recommendations based on a measure of the extent to which the user perceives the relevant affordances of the interface when performing a number of specified tasks.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Inputs&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Formalized descriptions of...&lt;br /&gt;
&lt;br /&gt;
1. Interface elements&lt;br /&gt;
&lt;br /&gt;
2. Their associated actions&lt;br /&gt;
&lt;br /&gt;
3. The functions of those actions&lt;br /&gt;
&lt;br /&gt;
4. A particular task&lt;br /&gt;
&lt;br /&gt;
5. User traces for that task.  &lt;br /&gt;
&lt;br /&gt;
Inputs (1-4) are then used to generate a &amp;quot;user-independent&amp;quot; space of possible functions that the interface is capable of performing with respect to a given task -- what the interface &amp;quot;affords&amp;quot; the user.  From this set of possible interactions, our model will then determine the subset of optimal paths for performing a particular task.  The user trace (5) is then used to determine which functions were actually performed in the course of a given task of interest; this information is then compared to the optimal path data to determine the extent to which affordances of the interface are present but not perceived.  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Output&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The output of this module is a simple ratio of (affordances perceived) / [(relevant affordances present) * (time to complete task)] which provides a quantitative measure of the extent to which the interface is &amp;quot;natural&amp;quot; to use for a particular task.&lt;br /&gt;
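&lt;br /&gt;
The ratio can be sketched as follows; the affordance names and the time unit (seconds) are purely illustrative:&lt;br /&gt;
&lt;br /&gt;
```python
# Minimal sketch of the naturalness ratio described above.
def naturalness(perceived, present, task_time_s):
    """(affordances perceived) / ((relevant affordances present) * time)."""
    if not present or task_time_s <= 0:
        raise ValueError("need at least one relevant affordance and t > 0")
    return len(perceived & present) / (len(present) * task_time_s)

present = {"drag-to-reorder", "double-click-to-edit", "right-click-menu"}
perceived = {"drag-to-reorder", "right-click-menu"}   # taken from the user trace
score = naturalness(perceived, present, task_time_s=10.0)   # 2 / (3 * 10)
```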
&lt;br /&gt;
=== Workflow, Multi-tasking and Interruptions ===&lt;br /&gt;
* Owner: [[User:Andrew Bragdon | Andrew Bragdon]]&lt;br /&gt;
&lt;br /&gt;
There are, at least, two levels at which users work ([http://portal.acm.org/citation.cfm?id=985692.985707&amp;amp;coll=portal&amp;amp;dl=ACM&amp;amp;CFID=20781736&amp;amp;CFTOKEN=83176621 Gonzales, et al., 2004]).  Users accomplish individual low-level tasks which are part of larger &#039;&#039;working spheres&#039;&#039;; for example, an office worker might send several emails, create several Post-It (TM) note reminders, and then edit a word document, each of these smaller tasks being part of a single larger working sphere of &amp;quot;adding a new section to the website.&amp;quot;  Thus, it is important to understand this larger workflow context - which often involves extensive levels of multi-tasking, as well as switching between a variety of computing devices and traditional tools, such as notebooks.  In this study it was found that the information workers surveyed typically switch individual tasks every 2 minutes and have many simultaneous working spheres which they switch between, on average every 12 minutes.  This frenzied pace of switching tasks and switching working spheres suggests that users will not be using a single application or device for a long period of time, and that affordances to support this characteristic pattern of information work are important.&lt;br /&gt;
&lt;br /&gt;
The purpose of this module is to integrate existing work on multi-tasking, interruption and higher-level workflow into a framework which can predict user recovery times from interruptions.  Specifically, the goals of this framework will be to:&lt;br /&gt;
&lt;br /&gt;
* Understand the role of the larger workflow context in user interfaces&lt;br /&gt;
* Understand the impact of interruptions on user workflow&lt;br /&gt;
* Understand how to design software which fits into the larger working spheres in which information work takes place&lt;br /&gt;
&lt;br /&gt;
It is important to point out that because workflow and multi-tasking rely heavily on higher-level brain functioning, it is unrealistic within the scope of this grant to propose a system which can predict user performance given a description of a set of arbitrary software programs.  Therefore, we believe this module will function in a much more qualitative role, providing context to the rest of the model.  Specifically, our findings related to interruption and multi-tasking will advance the basic research question of &amp;quot;how do users react to interruptions when using working sets of varying sizes?&amp;quot;  This core HCI contribution will help to inform the rest of the outputs of the model in a qualitative manner.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Inputs&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
N/A&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Outputs&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
N/A&lt;br /&gt;
&lt;br /&gt;
=== Working Memory Load ===&lt;br /&gt;
* Owner: [[User:Gideon Goldin | Gideon Goldin]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This module measures how much information the user needs to retain in memory while interacting with the interface and makes suggestions for improvements.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Models&lt;br /&gt;
** Baddeley&#039;s Model of Working Memory&amp;lt;ref&amp;gt;http://en.wikipedia.org/wiki/Baddeley%27s_model_of_working_memory&amp;lt;/ref&amp;gt;&lt;br /&gt;
*** Episodic Buffer&lt;br /&gt;
** George Miller&#039;s &amp;quot;The magic number 7 plus or minus 2&amp;quot;&amp;lt;ref&amp;gt;http://en.wikipedia.org/wiki/The_Magical_Number_Seven,_Plus_or_Minus_Two&amp;lt;/ref&amp;gt;&lt;br /&gt;
** The magic number 4&amp;lt;ref&amp;gt;Cowan, N. (2001). The magical number 4 in short-term memory: A reconsideration of mental storage capacity. Behavioral and Brain Sciences, 24, 87-185.&amp;lt;/ref&amp;gt;&lt;br /&gt;
* Chunking&amp;lt;ref&amp;gt;http://en.wikipedia.org/wiki/Chunking_(psychology)&amp;lt;/ref&amp;gt;&lt;br /&gt;
* Priming&amp;lt;ref&amp;gt;http://en.wikipedia.org/wiki/Priming_(psychology)&amp;lt;/ref&amp;gt;&lt;br /&gt;
* Subitizing and Counting&amp;lt;ref&amp;gt;http://en.wikipedia.org/wiki/Subitizing&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Inputs&#039;&#039;&#039;&lt;br /&gt;
* Visual Stimuli&lt;br /&gt;
* Audio Stimuli&lt;br /&gt;
&#039;&#039;&#039;Outputs&#039;&#039;&#039;&lt;br /&gt;
* Remembered percepts&lt;br /&gt;
* Half-Life of percepts&lt;br /&gt;
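&lt;br /&gt;
As a hedged illustration of how the capacity limits above might be applied, the sketch below counts the chunks a display forces the user to hold and flags the count against Cowan&#039;s ~4-chunk limit (Miller&#039;s 7 plus or minus 2 would use capacity=7); the chunking itself is assumed to be supplied by the caller:&lt;br /&gt;
&lt;br /&gt;
```python
# Illustrative sketch only: estimate working-memory load of a display by
# counting the chunks the user must hold, and flag it against a capacity
# limit. The grouping of items into chunks is supplied by the caller.
def memory_load(chunks, capacity=4):
    """Return (number of chunks, whether the capacity limit is exceeded)."""
    n = len(chunks)
    return n, n > capacity

# A phone number held as raw digits versus three chunks:
raw = list("4015551234")          # 10 chunks: over capacity
chunked = ["401", "555", "1234"]  # 3 chunks: within capacity
```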
&lt;br /&gt;
=== Automaticity of Interaction ===&lt;br /&gt;
* Owner: [[User:Gideon Goldin | Gideon Goldin]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Measures how easily the interaction with the interface becomes automatic with experience and makes suggestions for improvements.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Learning&amp;lt;ref&amp;gt;http://en.wikipedia.org/wiki/Learning#Mathematical_models_of_learning&amp;lt;/ref&amp;gt;&lt;br /&gt;
** Logan&#039;s instance theory of automatization&amp;lt;ref&amp;gt;http://74.125.95.132/search?q=cache:IZ-Zccsu3SEJ:psych.wisc.edu/ugstudies/psych733/logan_1988.pdf+logan+isntance+teory&amp;amp;cd=1&amp;amp;hl=en&amp;amp;ct=clnk&amp;amp;gl=us&amp;amp;client=firefox-a&amp;lt;/ref&amp;gt;&lt;br /&gt;
* Fluency&lt;br /&gt;
** As meta-cognitive information &amp;lt;ref&amp;gt;http://www.sciencedirect.com/science?_ob=ArticleURL&amp;amp;_udi=B6VH9-4SM7PFK-4&amp;amp;_user=10&amp;amp;_rdoc=1&amp;amp;_fmt=&amp;amp;_orig=search&amp;amp;_sort=d&amp;amp;view=c&amp;amp;_acct=C000050221&amp;amp;_version=1&amp;amp;_urlVersion=0&amp;amp;_userid=10&amp;amp;md5=10cd279fa80958981fcc3c06684c09af&amp;lt;/ref&amp;gt;&lt;br /&gt;
** As a cognitive &#039;&#039;heuristic&#039;&#039;&amp;lt;ref&amp;gt;http://en.wikipedia.org/wiki/Fluency_heuristic&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Inputs&#039;&#039;&#039;&lt;br /&gt;
* Interface&lt;br /&gt;
* User goals&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Outputs&#039;&#039;&#039;&lt;br /&gt;
* Learning curve&lt;br /&gt;
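&lt;br /&gt;
One plausible form for the learning-curve output is the power law of practice, T(n) = T1 * n^(-alpha); the constants below are illustrative and would in practice be fit to user traces:&lt;br /&gt;
&lt;br /&gt;
```python
# Sketch of a learning-curve output assuming the power law of practice.
# T1 (initial time) and alpha (learning rate) are illustrative constants.
def trial_time(n, t1=10.0, alpha=0.4):
    """Predicted completion time (seconds) on the n-th repetition of a task."""
    return t1 * n ** (-alpha)

curve = [round(trial_time(n), 2) for n in range(1, 6)]
# times fall monotonically toward automatic performance
```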
&lt;br /&gt;
== Integration into the Design Process ==&lt;br /&gt;
* Owner: [[User:Ian Spector | Ian Spector]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section outlines the process of designing an HCI interface and at what stages our proposal fits in and how.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Preliminary Results =&lt;br /&gt;
&lt;br /&gt;
===Workflow, Multi-tasking, and Interruption===&lt;br /&gt;
&lt;br /&gt;
====I.  &#039;&#039;&#039;Goals&#039;&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
The goals of the preliminary work are to gain qualitative insight into how information workers practice metawork, and to determine whether people might be better supported with software that facilitates metawork and interruptions.  Thus, the preliminary work should investigate and demonstrate the need for, and impact of, the core goals of the project.&lt;br /&gt;
&lt;br /&gt;
====II.  &#039;&#039;&#039;Methodology&#039;&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
Seven information workers, ages 20-38 (5 male, 2 female), were interviewed to determine which methods they use to &amp;quot;stay organized&amp;quot;.  An initial list of metawork strategies was established from two pilot interviews, and then a final list was compiled.  Participants then responded to a series of 17 questions designed to gain insight into their metawork strategies and process.  In addition, verbal interviews were conducted to get additional open-ended feedback.&lt;br /&gt;
&lt;br /&gt;
====III.  &#039;&#039;&#039;Final Results&#039;&#039;&#039;====&lt;br /&gt;
A histogram of methods people use to &amp;quot;stay organized&amp;quot; in terms of tracking things they need to do (TODOs), appointments and meetings, etc. is shown in the figure below.&lt;br /&gt;
&lt;br /&gt;
[[Image:AcbGraph.jpg]]&lt;br /&gt;
&lt;br /&gt;
In addition to these methods, participants also used a number of other methods, including:&lt;br /&gt;
&lt;br /&gt;
* iCal&lt;br /&gt;
* Notes written in xterms&lt;br /&gt;
* &amp;quot;Inbox zero&amp;quot; method of email organization&lt;br /&gt;
* iGoogle Notepad (for tasks)&lt;br /&gt;
* Tag emails as &amp;quot;TODO&amp;quot;, &amp;quot;Important&amp;quot;, etc.&lt;br /&gt;
* Things (Organizer Software)&lt;br /&gt;
* Physical items placed to &amp;quot;remind me of things&amp;quot;&lt;br /&gt;
* Sometimes arranging windows on desk&lt;br /&gt;
* Keeping browser tabs open&lt;br /&gt;
* Bookmarking web pages&lt;br /&gt;
* Keep programs/files open scrolled to certain locations sometimes with things selected&lt;br /&gt;
&lt;br /&gt;
In addition, three participants said that when interrupted they &amp;quot;rarely&amp;quot; or &amp;quot;very rarely&amp;quot; were able to resume the task they were working on prior to the interruption.  Three of the participants said that they would not actively recommend their metawork strategies for other people, and two said that staying organized was &amp;quot;difficult&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Four participants were neutral to the idea of new tools to help them stay organized and three said that they would like to have such a tool/tools.&lt;br /&gt;
&lt;br /&gt;
====IV.  &#039;&#039;&#039;Discussion&#039;&#039;&#039;====&lt;br /&gt;
These results quantitatively support our hypothesis that there is no clearly dominant set of metawork strategies employed by information workers.  This highly fragmented landscape is surprising, given that most information workers work in a similar environment - at a desk, on the phone, in meetings - and with the same types of tools - computers, pens, paper, etc.  We believe this suggests that there are complex tradeoffs between these methods and that no single method is sufficient.  We therefore believe that users will be better supported by a new set of software-based metawork tools.&lt;br /&gt;
&lt;br /&gt;
= [Criticisms] =&lt;br /&gt;
* Owner: [[User:Andrew Bragdon | Andrew Bragdon]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Any criticisms or questions we have regarding the proposal can go here.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;/div&gt;</summary>
		<author><name>Adam Darlow</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal_(draft_2)&amp;diff=3162</id>
		<title>CS295J/Research proposal (draft 2)</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal_(draft_2)&amp;diff=3162"/>
		<updated>2009-04-24T15:23:17Z</updated>

		<summary type="html">&lt;p&gt;Adam Darlow: /* Introduction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
* Owners: [[User:Adam Darlow | Adam Darlow]], [[User:Eric Sodomka | Eric Sodomka]], [[User: Trevor O&#039;Brien | Trevor]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Propose:&#039;&#039;&#039; The design, application and evaluation of a novel, cognition-based, computational framework for assessing interface design and providing automated suggestions to optimize usability.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Evaluation Methodology:&#039;&#039;&#039;  Our techniques will be evaluated quantitatively through a series of user-study trials, as well as qualitatively by a team of expert interface designers. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Contributions and Significance:&#039;&#039;&#039;  We expect this work to make the following contributions:  &lt;br /&gt;
# design-space analysis and quantitative evaluation of cognition-based techniques for assessing user interfaces.  &lt;br /&gt;
# design and quantitative evaluation of techniques for suggesting optimized interface-design changes. &lt;br /&gt;
# an extensible, multimodal software architecture for capturing user traces integrated with pupil-tracking data, auditory recognition, and muscle-activity monitoring.  &lt;br /&gt;
# specification (language?) of how to define an interface evaluation module and how to integrate it into a larger system.  &lt;br /&gt;
# (there may be more here, like testing different cognitive models, generating a markup language to represent interfaces, maybe even a unique metric space for interface usability)&lt;br /&gt;
&lt;br /&gt;
--&lt;br /&gt;
&lt;br /&gt;
We propose a framework for interface evaluation and recommendation that integrates behavioral models and design guidelines from both cognitive science and HCI. Our framework behaves like a committee of specialized experts, where each expert provides its own assessment of the interface, given its particular knowledge of HCI or cognitive science. For example, an expert may provide an evaluation based on the GOMS method, Fitts&#039;s law, Maeda&#039;s design principles, or cognitive models of learning and memory. An aggregator collects all of these assessments and weights the opinions of each expert, and outputs to the developer a merged evaluation score and a weighted set of recommendations.&lt;br /&gt;
&lt;br /&gt;
Systematic methods of estimating human performance with computer interfaces are used only sparingly despite their obvious benefits, largely because of the overhead involved in implementing them. In order to test an interface, both manual coding systems like the GOMS variations and user simulations like those based on ACT-R/PM and EPIC require detailed pseudo-code descriptions of the user workflow with the application interface. Any change to the interface then requires extensive changes to the pseudo-code, a major problem given the trial-and-error nature of interface design. Updating the models themselves is even more complicated: even an expert in CPM-GOMS, for example, can&#039;t necessarily adapt it to take into account results from new cognitive research.&lt;br /&gt;
&lt;br /&gt;
Our proposal makes automatic interface evaluation easier to use in several ways. First, we propose to divide the input to the system into three separate parts: functionality, user traces, and interface. By separating the functionality from the interface, even radical interface changes will require updating only that part of the input. The user traces are also defined over the functionality, so they too translate across different interfaces. Second, the parallel modular architecture allows for a lower &amp;quot;entry cost&amp;quot; for using the tool. The system includes a broad array of evaluation modules, some of which are very simple and some more complex. The simpler modules use only a subset of the input that a system like GOMS or ACT-R would require. This means that while more input will still lead to better output, interface designers can get minimal evaluations with only minimal information. For example, a visual search module may not require any functionality or user traces in order to determine whether all interface elements are distinct enough to be easy to find. Finally, a parallel modular architecture is much easier to augment with relevant cognitive and design evaluations.&lt;br /&gt;
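&lt;br /&gt;
The three-part separation can be sketched as follows; the function names and interface mappings are invented for illustration:&lt;br /&gt;
&lt;br /&gt;
```python
# Hypothetical sketch of the three-part input: functionality and user traces
# are defined independently of any particular interface, so only the mapping
# changes with a redesign. All names here are invented for illustration.
functionality = {"show_video", "submit_form", "lookup_record"}

# Interface A and a radical redesign B map the same functions differently:
interface_a = {"show_video": "embedded player", "submit_form": "button",
               "lookup_record": "search box"}
interface_b = {"show_video": "fullscreen player", "submit_form": "menu item",
               "lookup_record": "search box"}

# User traces are sequences of interaction functions, so they carry over
# unchanged between the two interfaces:
trace = ["lookup_record", "show_video", "submit_form"]
realized_a = [interface_a[f] for f in trace]
realized_b = [interface_b[f] for f in trace]
```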
&lt;br /&gt;
= Background / Related Work =&lt;br /&gt;
&lt;br /&gt;
Each person should add the background related to their specific aims.&lt;br /&gt;
&lt;br /&gt;
* Steven Ellis - Cognitive models of HCI, including GOMS variations and ACT-R&lt;br /&gt;
* EJ - Design Guidelines&lt;br /&gt;
* Jon - Perception and Action&lt;br /&gt;
* Andrew - Multiple task environments&lt;br /&gt;
* Gideon - Cognition and dual systems&lt;br /&gt;
* Ian - Interface design process&lt;br /&gt;
* Trevor - User trace collection methods (especially any eye-tracking, EEG, ... you want to suggest using)&lt;br /&gt;
&lt;br /&gt;
== Cognitive Models ==&lt;br /&gt;
I plan to port over most of the background on cognitive models of HCI from the old proposal&lt;br /&gt;
&lt;br /&gt;
Additions will comprise:&lt;br /&gt;
*CPM-GOMS as a bridge from GOMS architecture to the promising procedural optimization of the Model Human Processor&lt;br /&gt;
**Context of CPM development, discuss its relation to original GOMS and KLM&lt;br /&gt;
***Establish the tasks which were relevant for optimization when CPM was developed and note that its obsolescence may have been unavoidable&lt;br /&gt;
**Focus on CPM as the first step in transitioning from descriptive data, provided by mounting efforts in the cognitive sciences realm to discover the nature of task processing and accomplishment, to prescriptive algorithms which can predict an interface’s efficiency and suggest improvements&lt;br /&gt;
**CPM’s purpose as an abstraction of cognitive processing – a symbolic representation not designed for accuracy but precision&lt;br /&gt;
**CPM’s successful trials, e.g. Ernestine&lt;br /&gt;
***Implications of this project include CPM’s ability to accurately estimate processing at a psychomotor level&lt;br /&gt;
***Project does suggest limitations, however, when one attempts to examine more complex tasks which involve deeper and more numerous cognitive processes&lt;br /&gt;
*ACT-R as an example of a progressive cognitive modeling tool&lt;br /&gt;
**A tool clearly built by and for cognitive scientists, and as a result presents a much more accurate view of human processing – helpful for our research&lt;br /&gt;
**Built-in automation, which now seems to be a standard feature of cognitive modeling tools&lt;br /&gt;
**Still an abstraction of cognitive processing, but makes adaptation to cutting-edge cognitive research findings an integral aspect of its modular structure&lt;br /&gt;
**Expand on its focus on multi-tasking, taking what was a huge advance between GOMS and its CPM variation and bringing the simulation several steps closer to approximating the nature of cognition in regards to HCI&lt;br /&gt;
**Far more accessible both for researchers and the lay user/designer in its portability to LISP, pre-construction of modules representing cognitive capacities and underlying algorithms modeling paths of cognitive processing&lt;br /&gt;
&lt;br /&gt;
==Design guidelines==&lt;br /&gt;
A multitude of rule sets exist for the design of not only interfaces, but also architecture, city planning, and software development.  They range in scale from a single primary rule to as many as Christopher Alexander&#039;s 253 rules for urban environments,&amp;lt;ref&amp;gt;[http://hci.rwth-aachen.de/materials/publications/borchers2000a.pdf Borchers, Jan O.  &amp;quot;A Pattern Approach to Interaction Design.&amp;quot;  2000.]&amp;lt;/ref&amp;gt; which he introduced along with the concept of design patterns in the 1970s.  Study has likewise been conducted on the use of these rules:&amp;lt;ref&amp;gt;http://stl.cs.queensu.ca/~graham/cisc836/lectures/readings/tetzlaff-guidelines.pdf&amp;lt;/ref&amp;gt; guidelines are often only partially understood, indistinct to the developer, and &amp;quot;fraught&amp;quot; with potential usability problems in real-world situations.&lt;br /&gt;
&lt;br /&gt;
===Application to AUE===&lt;br /&gt;
And yet, the vast majority of guideline sets, including the most popular rulesets, have been arrived at heuristically.  The most successful, such as Raskin&#039;s and Shneiderman&#039;s, have been forged from years of observation rather than empirical study and experimentation.  The problem is similar to the circular logic faced by automated usability evaluations: an automated system is limited in the suggestions it can offer to a set of preprogrammed guidelines which have often not been subjected to rigorous experimentation.&amp;lt;ref&amp;gt;[http://www.eecs.berkeley.edu/Pubs/TechRpts/2000/CSD-00-1105.pdf Ivory, M and Hearst, M.  &amp;quot;The State of the Art in Automated Usability Evaluation of User Interfaces.&amp;quot; ACM Computing Surveys (CSUR), 2001.]&amp;lt;/ref&amp;gt;  In the vast majority of existing studies, emphasis has traditionally been placed on either the development of guidelines or the application of existing guidelines to automated evaluation.  A mutually reinforcing development of both simultaneously has not been attempted.&lt;br /&gt;
&lt;br /&gt;
Overlap between rulesets is unavoidable.  For our purposes of evaluating existing rulesets efficiently, without extracting and analyzing each rule individually, it may be desirable to identify the overarching &#039;&#039;principles&#039;&#039; or &#039;&#039;philosophy&#039;&#039; (at most two or three) for a given ruleset and to determine their quantitative relevance to problems of cognition.&lt;br /&gt;
&lt;br /&gt;
===Popular and seminal examples===&lt;br /&gt;
Shneiderman&#039;s [http://faculty.washington.edu/jtenenbg/courses/360/f04/sessions/schneidermanGoldenRules.html Eight Golden Rules] date to 1987 and are arguably the most cited.  They are heuristic, but can be roughly classified by cognitive objective: at least two rules apply primarily to &#039;&#039;repeated use&#039;&#039;, versus &#039;&#039;discoverability&#039;&#039;.  Up to five of Shneiderman&#039;s rules emphasize &#039;&#039;predictability&#039;&#039; in the outcomes of operations and &#039;&#039;increased feedback and control&#039;&#039; in the agency of the user.  His final rule, paradoxically, removes control from the user by calling for a reduced short-term memory load, which we can arguably classify as &#039;&#039;simplicity&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Raskin&#039;s [http://www.mprove.de/script/02/raskin/designrules.html Design Rules] are classified into five principles by the author, augmented by definitions and supporting rules.  While one principle is primarily aesthetic (a design problem arguably out of the bounds of this proposal) and one is a basic endorsement of testing, the remaining three begin to reflect philosophies similar to Shneiderman&#039;s: reliability or &#039;&#039;predictability&#039;&#039;; &#039;&#039;simplicity&#039;&#039; or &#039;&#039;efficiency&#039;&#039; (which we can construe as two sides of the same coin); and, finally, a concept of &#039;&#039;uninterruptibility&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Maeda&#039;s [http://lawsofsimplicity.com/?cat=5&amp;amp;order=ASC Laws of Simplicity] are fewer, and ostensibly emphasize &#039;&#039;simplicity&#039;&#039; exclusively, although elements of &#039;&#039;use&#039;&#039; as related by Shneiderman&#039;s rules and &#039;&#039;efficiency&#039;&#039; as defined by Raskin may be facets of this simplicity.  Google&#039;s corporate mission statement presents [http://www.google.com/corporate/ux.html Ten Principles], only half of which can be considered true interface guidelines.  &#039;&#039;Efficiency&#039;&#039; and &#039;&#039;simplicity&#039;&#039; are cited explicitly, aesthetics are once again noted as crucial, and working within a user&#039;s trust is another application of &#039;&#039;predictability&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
===Elements and goals of a guideline set===&lt;br /&gt;
Myriad rulesets exist, but variation becomes scarce; it indeed seems possible to parse these common rulesets into overarching principles that can be converted to or associated with quantifiable cognitive properties.  For example, it is likely that &#039;&#039;simplicity&#039;&#039; has an analogue in the short-term memory retention or visual retention of the user, vis-à-vis the rule of [http://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=j5q0VvOGExYC&amp;amp;oi=fnd&amp;amp;pg=PA357&amp;amp;dq=seven+plus+or+minus+two&amp;amp;ots=prI3PKJBar&amp;amp;sig=vOZnqpnkXKGYWxK6_XlA4I_CRyI Seven, Plus or Minus Two].  &#039;&#039;Predictability&#039;&#039; likewise may have an analogue in Activity Theory, with regard to a user&#039;s perceptual expectations for a given action; &#039;&#039;uninterruptibility&#039;&#039; has implications in cognitive task-switching;&amp;lt;ref&amp;gt;[http://portal.acm.org/citation.cfm?id=985692.985715&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Czerwinski, Horvitz, and White. &amp;quot;A Diary Study of Task Switching and Interruptions.&amp;quot;  Proceedings of the SIGCHI conference on Human factors in computing systems, 2004.]&amp;lt;/ref&amp;gt; and so forth.&lt;br /&gt;
&lt;br /&gt;
Within the scope of this proposal, we aim to reduce and refine these philosophies found in seminal rulesets and identify their logical cognitive analogues.  By assigning a quantifiable taxonomy to these principles, we will be able to rank and weight them with regard to their real-world applicability, developing a set of &amp;quot;meta-guidelines&amp;quot; and rules for applying them to a given interface in an automated manner.  Combined with cognitive models and multi-modal HCI analysis, we seek to develop, in parallel with these guidelines, the interface evaluation system responsible for their application.&lt;br /&gt;
&lt;br /&gt;
== Perception and Action (in progress) ==&lt;br /&gt;
&lt;br /&gt;
*Information Processing Approach&lt;br /&gt;
**Advantages&lt;br /&gt;
***Formalism eases translation of theory into scripting language&lt;br /&gt;
**Disadvantages&lt;br /&gt;
***Assumes symbolic representation&lt;br /&gt;
&lt;br /&gt;
*Ecological (Gibsonian) Approach&lt;br /&gt;
**Advantages&lt;br /&gt;
***Emphasis on bodily and environmental constraints&lt;br /&gt;
**Disadvantages&lt;br /&gt;
***Lack of formalism hinders translation of theory into scripting language&lt;br /&gt;
&lt;br /&gt;
= Specific Aims and Contributions (to be separated later) =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Specific Aims&#039;&#039;&#039;&lt;br /&gt;
* Incorporate interaction history mechanisms into a set of existing applications.&lt;br /&gt;
* Perform user-study evaluation of history-collection techniques.&lt;br /&gt;
* Distill a set of cognitive principles/models, and evaluate empirically?&lt;br /&gt;
* Build/buy sensing system to include pupil-tracking, muscle-activity monitoring, auditory recognition.&lt;br /&gt;
* Design techniques for manual/semi-automated/automated construction of &amp;lt;insert favorite cognitive model here&amp;gt; from interaction histories and sensing data.&lt;br /&gt;
* Design system for posterior analysis of interaction history w.r.t. &amp;lt;insert favorite cognitive model here&amp;gt;, evaluating critical path &amp;lt;or equivalent&amp;gt; trajectories.&lt;br /&gt;
* Design cognition-based techniques for detecting bottlenecks in critical paths, and offering optimized alternatives. &lt;br /&gt;
* Perform quantitative user-study evaluations, collect qualitative feedback from expert interface designers.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Contributions&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Design-space analysis and quantitative evaluation of cognition-based techniques for assessing user interfaces. &lt;br /&gt;
* Design and quantitative evaluation of techniques for suggesting optimized interface-design changes. &lt;br /&gt;
* An extensible, multimodal software architecture for capturing user traces integrated with pupil-tracking data, auditory recognition, and muscle-activity monitoring. &lt;br /&gt;
* (there may be more here, like testing different cognitive models, generating a markup language to represent interfaces, maybe even a unique metric space for interface usability) &lt;br /&gt;
&lt;br /&gt;
--&lt;br /&gt;
&lt;br /&gt;
See the [http://vrl.cs.brown.edu/wiki/images/b/b1/Flowchart2.pdf flowchart] for a visual overview of our aims.&lt;br /&gt;
&lt;br /&gt;
In order to use this framework, a designer will have to provide:&lt;br /&gt;
* Functional specification - what are the possible interactions between the user and the application. This can be thought of as method signatures, with a name (e.g., setVolume), direction (to user or from user) and a list of value types (boolean, number, text, video, ...) for each interaction.&lt;br /&gt;
* GUI specification - a mapping of interactions to interface elements (e.g., setVolume is mapped to the grey knob in the bottom left corner with clockwise turning increasing the input number).&lt;br /&gt;
* Functional user traces - sequences of representative ways in which the application is used. Instead of writing them, the designer could have users use the application with a trial interface and then use our methods to generalize the user traces beyond the specific interface (The second method is depicted in the diagram). As a form of pre-processing, the system also generates an interaction transition matrix which lists the probability of each type of interaction given the previous interaction.&lt;br /&gt;
* Utility function - this is a weighting of various performance metrics (time, cognitive load, fatigue, etc.), where the weighting expresses the importance of a particular dimension to the user. For example, a user at NASA probably cares more about interface accuracy than speed. By passing this information to our committee of experts, we can create interfaces that are tuned to maximize the utility of a particular user type.&lt;br /&gt;
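The utility function above can be read as a weighted combination of performance metrics. A minimal sketch, with illustrative metric names, weights, and scores (none of these values are part of the proposal):&lt;br /&gt;

```python
# Sketch: a utility function as a weighted sum of performance metrics.
# Metric names, weights, and scores below are illustrative assumptions.

def utility(scores, weights):
    """Combine per-dimension scores into a single utility value.

    scores  -- dict mapping metric name to a normalized score (higher is better)
    weights -- dict mapping metric name to its importance for this user type
    """
    return sum(weights[d] * scores[d] for d in weights)

# A NASA-style user who values accuracy over speed:
nasa_weights = {"time": 0.2, "accuracy": 0.6, "cognitive_load": 0.2}
scores = {"time": 0.5, "accuracy": 0.9, "cognitive_load": 0.7}
print(utility(scores, nasa_weights))  # roughly 0.78
```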
&lt;br /&gt;
Each of the modules can use all of this information or a subset of it. Our approach stresses flexibility and the ability to give more meaningful feedback the more information is provided. After processing the information sent by the system of experts, the aggregator will output:&lt;br /&gt;
* An evaluation of the interface. Evaluations are expressed both in terms of the utility function components (i.e. time, fatigue, cognitive load, etc.), and in terms of the overall utility for this interface (as defined by the utility function). These evaluations are given in the form of an efficiency curve, where the utility received on each dimension can change as the user becomes more accustomed to the interface. &lt;br /&gt;
* Suggested improvements for the GUI are also output. These suggestions are meant to optimize the utility function that was input to the system. If a user values accuracy over time, interface suggestions will be made accordingly.&lt;br /&gt;
&lt;br /&gt;
== Collecting User Traces ==&lt;br /&gt;
* Owner: [[User:Trevor O&#039;Brien | Trevor O&#039;Brien]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Given an interface, our first step is to run users on the interface and log these user interactions. We want to log actions at a sufficiently low level so that a GOMS model can be generated from the data. When possible, we&#039;d also like to log data using additional sensing technologies, such as pupil-tracking, muscle-activity monitoring and auditory recognition; this information will help to analyze the explicit contributions of perception, cognition and motor skills with respect to user performance.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Generalizing User Traces ==&lt;br /&gt;
* Owner: [[User:Trevor O&#039;Brien | Trevor O&#039;Brien]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;The user traces that are collected are tied to a specific interface. In order to use them with different interfaces to the same application, they should be generalized to be based only on the functional description of the application and the user&#039;s goal hierarchy. This would abstract away from actions like accessing a menu.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;In addition to specific user traces, many modules could use a transition probability matrix based on interaction predictions.&#039;&#039;&lt;br /&gt;
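One plausible way to estimate such a transition probability matrix from functional user traces (the interaction names here are hypothetical examples):&lt;br /&gt;

```python
from collections import Counter, defaultdict

def transition_matrix(traces):
    """Estimate P(next interaction | previous interaction) from user traces.

    traces -- list of interaction-name sequences,
              e.g. [["open", "setVolume", "play"], ...]
    Returns a nested dict: matrix[prev][next] = probability.
    """
    counts = defaultdict(Counter)
    for trace in traces:
        # Count each consecutive pair of interactions.
        for prev, nxt in zip(trace, trace[1:]):
            counts[prev][nxt] += 1
    # Normalize each row of counts into probabilities.
    return {prev: {nxt: n / sum(c.values()) for nxt, n in c.items()}
            for prev, c in counts.items()}

traces = [["open", "setVolume", "play"], ["open", "play"]]
m = transition_matrix(traces)
print(m["open"])  # {'setVolume': 0.5, 'play': 0.5}
```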
&lt;br /&gt;
== Parallel Framework for Evaluation Modules ==&lt;br /&gt;
* Owner: [[User:Adam Darlow | Adam Darlow]], [[User:Eric Sodomka | Eric Sodomka]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section will describe in more detail the inputs, outputs and architecture that were presented in the introduction.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== Module Inputs (Incomplete) ====&lt;br /&gt;
* A set of utility dimensions {d1, d2, ...} are defined in the framework. These could be {d1=&amp;quot;time&amp;quot;, d2=&amp;quot;fatigue&amp;quot;, ...}&lt;br /&gt;
&lt;br /&gt;
==== Module Outputs ====&lt;br /&gt;
* Every module outputs at least one of the following:&lt;br /&gt;
** An evaluation of the interface&lt;br /&gt;
*** This can be on any or all of the utility dimensions, e.g. evaluation={d1=score1, d2=score2, ...}&lt;br /&gt;
*** This can alternately be an overall evaluation, ignoring dimensions, e.g. evaluation={score}&lt;br /&gt;
**** In this case, the aggregator will treat this as the module giving the same score to all dimensions. Which dimension this evaluator actually predicts well on can be learned by the aggregator over time.&lt;br /&gt;
** Recommendation(s) for improving the interface&lt;br /&gt;
***This can be a textual description of what changes the designer should make&lt;br /&gt;
***This can alternately be a transformation that can automatically be applied to the interface language (without designer intervention)&lt;br /&gt;
***In addition to the textual or transformational description of the recommendation, a &amp;quot;change in evaluation&amp;quot; is output to describe how specifically the value will improve the interface&lt;br /&gt;
****Recommendation = {description=&amp;quot;make this change&amp;quot;, Δevaluation={d1=score1, d2=score2, ...}}&lt;br /&gt;
****Like before, this Δevaluation can cover any number of dimensions, or it can be generic.&lt;br /&gt;
***Either a single recommendation or a set of recommendations can be output&lt;br /&gt;
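The output structure described above might be represented as follows; all field names are illustrative, and the Fitts-Law scores are invented for the example:&lt;br /&gt;

```python
# Sketch of one module's output: an evaluation over utility dimensions plus
# recommendations carrying a textual description, a machine-applicable
# transformation, and a predicted change in evaluation. Values are invented.
fitts_output = {
    "evaluation": {"time": 0.4, "fatigue": 0.7},  # per-dimension scores
    "recommendations": [
        {
            "description": "Enlarge the submit button",  # textual suggestion
            "transformation": "width *= 2",              # automatic change
            "delta_evaluation": {"time": 0.1},           # predicted improvement
        }
    ],
}

def is_valid_output(out):
    """A module must emit at least one of: an evaluation, or recommendations."""
    return bool(out.get("evaluation")) or bool(out.get("recommendations"))

print(is_valid_output(fitts_output))  # True
```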
&lt;br /&gt;
==== Aggregator Inputs ====&lt;br /&gt;
The aggregator receives as input the outputs of all the modules.&lt;br /&gt;
&lt;br /&gt;
==== Aggregator Outputs ====&lt;br /&gt;
&lt;br /&gt;
Outputs for the aggregator are the same as the outputs for each module. The difference is that the aggregator will consider all the module outputs, and arrive at a merged output based on the past performance of each of the modules.&lt;br /&gt;
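A minimal sketch of such a merge, assuming per-module weights learned from past performance (module names, scores, and weights are illustrative):&lt;br /&gt;

```python
def aggregate(module_outputs, weights):
    """Merge per-module evaluations into one score per utility dimension,
    weighting each module by its (learned) past performance.

    module_outputs -- dict: module id to {dimension: score}
    weights        -- dict: module id to weight (normalized per dimension)
    """
    merged = {}
    for mod, evaluation in module_outputs.items():
        for dim, score in evaluation.items():
            total, wsum = merged.get(dim, (0.0, 0.0))
            merged[dim] = (total + weights[mod] * score, wsum + weights[mod])
    # Each dimension's score is the weighted mean over modules that rated it.
    return {dim: total / wsum for dim, (total, wsum) in merged.items()}

outs = {"Fitts-Law": {"time": 0.8}, "HCI-Guidelines": {"time": 0.6, "fatigue": 0.5}}
w = {"Fitts-Law": 2.0, "HCI-Guidelines": 1.0}
print(aggregate(outs, w))  # time is pulled toward the heavier Fitts-Law score
```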
&lt;br /&gt;
== Evaluation and Recommendation via Modules ==&lt;br /&gt;
* Owner: [[User:E J Kalafarski | E J Kalafarski]]&lt;br /&gt;
&lt;br /&gt;
A &amp;quot;meta-module&amp;quot; called the aggregator will be responsible for assembling and formatting the output of all other modules into a structure that is both extensible and immediately usable by either an automated designer or a human designer.&lt;br /&gt;
&lt;br /&gt;
===Requirements===&lt;br /&gt;
The aggregator&#039;s functionality, then, is defined by its &#039;&#039;&#039;inputs&#039;&#039;&#039;, the outputs of the other modules, and the desired &#039;&#039;&#039;output&#039;&#039;&#039; of the system as a whole, per its position in the architecture.  Its purpose is largely formatting and reconciliation of the products of the multitudinous (and extensible) modules.  The output of the aggregator must meet several requirements: first, to generate a set of human-readable suggestions for the improvement of the given interface; second, to generate a machine-readable, but also analyzable, evaluation of the various characteristics of the interface and accompanying user traces.&lt;br /&gt;
&lt;br /&gt;
From these specifications, it is logical to assume that a common language or format will be required for the output of individual modules.  We propose an XML-based file format, allowing: (1) a section for the standardized identification of problem areas, applicable rules, and proposed improvements, generalized by the individual module and mapped to a single element, or group of elements, in the original interface specification; (2) a section for specification of generalizable &amp;quot;utility&amp;quot; functions, allowing a module to specify how much a measurable quantity of utility is positively or negatively affected by properties of the input interface; (3) new, user-definable sections for evaluations of the given interface not covered by the first two sections.  The first two sections are capable of conveying the vast majority of module outputs predicted at this time, but the XML can extensibly allow modules to pass on whatever information may become prominent in the future.&lt;br /&gt;
&lt;br /&gt;
===Specification===&lt;br /&gt;
 &amp;lt;module id=&amp;quot;Fitts-Law&amp;quot;&amp;gt;&lt;br /&gt;
 	&amp;lt;interface-elements&amp;gt;&lt;br /&gt;
 		&amp;lt;element&amp;gt;&lt;br /&gt;
 			&amp;lt;desc&amp;gt;submit button&amp;lt;/desc&amp;gt;&lt;br /&gt;
 			&amp;lt;problem&amp;gt;&lt;br /&gt;
 				&amp;lt;desc&amp;gt;size&amp;lt;/desc&amp;gt;&lt;br /&gt;
 				&amp;lt;suggestion&amp;gt;width *= 2&amp;lt;/suggestion&amp;gt;&lt;br /&gt;
 				&amp;lt;suggestion&amp;gt;height *= 2&amp;lt;/suggestion&amp;gt;&lt;br /&gt;
 				&amp;lt;human-suggestion&amp;gt;Increase size relative to other elements&amp;lt;/human-suggestion&amp;gt;&lt;br /&gt;
 			&amp;lt;/problem&amp;gt;&lt;br /&gt;
 		&amp;lt;/element&amp;gt;&lt;br /&gt;
 	&amp;lt;/interface-elements&amp;gt;&lt;br /&gt;
 	&amp;lt;utility&amp;gt;&lt;br /&gt;
 		&amp;lt;dimension&amp;gt;&lt;br /&gt;
 			&amp;lt;desc&amp;gt;time&amp;lt;/desc&amp;gt;&lt;br /&gt;
 			&amp;lt;value&amp;gt;0:15:35&amp;lt;/value&amp;gt;&lt;br /&gt;
 		&amp;lt;/dimension&amp;gt;&lt;br /&gt;
 		&amp;lt;dimension&amp;gt;&lt;br /&gt;
 			&amp;lt;desc&amp;gt;frustration&amp;lt;/desc&amp;gt;&lt;br /&gt;
 			&amp;lt;value&amp;gt;pulling hair out&amp;lt;/value&amp;gt;&lt;br /&gt;
 		&amp;lt;/dimension&amp;gt;&lt;br /&gt;
 		&amp;lt;dimension&amp;gt;&lt;br /&gt;
 			&amp;lt;desc&amp;gt;efficiency&amp;lt;/desc&amp;gt;&lt;br /&gt;
 			&amp;lt;value&amp;gt;13.2s/KPM task&amp;lt;/value&amp;gt;&lt;br /&gt;
 			&amp;lt;value&amp;gt;0.56m/CPM task&amp;lt;/value&amp;gt;&lt;br /&gt;
 		&amp;lt;/dimension&amp;gt;&lt;br /&gt;
 	&amp;lt;/utility&amp;gt;&lt;br /&gt;
 	&amp;lt;tasks&amp;gt;&lt;br /&gt;
 		&amp;lt;task&amp;gt;&lt;br /&gt;
 			&amp;lt;desc&amp;gt;complete form&amp;lt;/desc&amp;gt;&lt;br /&gt;
 		&amp;lt;/task&amp;gt;&lt;br /&gt;
 		&amp;lt;task&amp;gt;&lt;br /&gt;
 			&amp;lt;desc&amp;gt;lookup SSN&amp;lt;/desc&amp;gt;&lt;br /&gt;
 		&amp;lt;/task&amp;gt;&lt;br /&gt;
 		&amp;lt;task&amp;gt;&lt;br /&gt;
 			&amp;lt;desc&amp;gt;format phone number&amp;lt;/desc&amp;gt;&lt;br /&gt;
 		&amp;lt;/task&amp;gt;&lt;br /&gt;
 	&amp;lt;/tasks&amp;gt;&lt;br /&gt;
 &amp;lt;/module&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Logic===&lt;br /&gt;
This file provided by each module is then the input for the aggregator.  The aggregator&#039;s most straightforward function is the compilation of the &amp;quot;problem areas,&amp;quot; assembling them and noting problem areas and suggestions that are recommended by more than one module, and weighting them accordingly in its final report.  These weightings can begin in an equal state, but the aggregator should be capable of learning iteratively which modules&#039; results are most relevant to the user and update weightings accordingly.  This may need to be accomplished with manual tuning, or a machine-learning algorithm capable of determining which modules most often agree with others.&lt;br /&gt;
&lt;br /&gt;
Secondly, the aggregator compiles the utility functions provided by the module specs.  This, again, is a summation of similarly-described values from the various modules.&lt;br /&gt;
&lt;br /&gt;
When confronted with user-defined sections of the XML spec, the aggregator is primarily responsible for compiling them and sending them along to the output of the machine.  Even if the aggregator does not recognize a section or property of the evaluative spec, if it sees the property reported by more than one module it should be capable of aggregating these intelligently.  In future versions of the spec, it should be possible for a module to provide instructions for the aggregator on how to handle unrecognized sections of the XML.&lt;br /&gt;
&lt;br /&gt;
From these compilations, then, the aggregator should be capable of outputting both aggregated human-readable suggestions on interface improvements for a human designer, as well as a comprehensive evaluation of the interface&#039;s effectiveness at the given task traces.  Again, this is dependent on the specification of the system as a whole, but is likely to include measures and comparisons, graphings of task versus utility, and quantitative measures of an element&#039;s effectiveness.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section is necessarily defined by the output of the individual modules (which I already expect to be of varied and arbitrary structure) and the desired output of the machine as a whole.  It will likely need to be revised heavily after other modules and the &amp;quot;Parallel Framework&amp;quot; section are defined.&#039;&#039; [[User:E J Kalafarski|E J Kalafarski]] 12:34, 24 April 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section describes the aggregator, which takes the output of multiple independent modules and aggregates the results to provide (1) an evaluation and (2) recommendations for the user interface. We should explain how the aggregator weights the output of different modules (this could be based on historical performance of each module, or perhaps based on E.J.&#039;s cognitive/HCI guidelines).&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Sample Modules ==&lt;br /&gt;
&lt;br /&gt;
=== CPM-GOMS ===&lt;br /&gt;
* Owners: [[User:Steven Ellis | Steven Ellis]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This module will provide interface evaluations and suggestions based on a CPM-GOMS model of cognition for the given interface. It will provide a quantitative, predictive, cognition-based parameterization of usability. From empirically collected data, user trajectories through the model (critical paths) will be examined, highlighting bottlenecks within the interface, and offering suggested alterations to the interface to induce more optimal user trajectories.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
I’m hoping to have some input on this section, because it seems to be the crux of the “black box” into which we take the inputs of interface description, user traces, etc. and get our outputs (time, recommendations, etc.).  I know at least a few people have pretty strong thoughts on the matter and we ought to discuss the final structure.&lt;br /&gt;
&lt;br /&gt;
That said – my proposal for the module:&lt;br /&gt;
*In my opinion the concept of the Model Human Processor (at least as applied in CPM) is outdated – it’s too economic/overly parsimonious in its conception of human activity.  I think we need to create a structure which accounts for more realistic conditions of HCI including multitasking, aspects of distributed cognition (and other relevant uses of tools – as far as I can tell CPM doesn’t take into account any sort of productivity aids), executive control processes of attention, etc.  ACT-R appears to take steps towards this but we would probably need to look at their algorithms to know for sure.&lt;br /&gt;
*Critical paths will continue to play an important role – we should in fact emphasize that part of this tool’s purpose will be a description not only of ways in which the interface should be modified to best fit a critical path, but also ways in which users ought to be instructed in their use of the path.  This feedback mechanism could be bidirectional – if the model’s predictions of the user’s goals are incorrect, the critical path determined will also be incorrect and the interface inherently suboptimal.  The user could be prompted with a tooltip explaining in brief why and how the interface has changed, along with options to revert, select other configurations (euphemized by goals), and to view a short video detailing how to properly use the interface.&lt;br /&gt;
*Call me crazy but, if we assume designers will be willing to code a model of their interfaces into our ACT-R-esque language, could we allow that model to be fairly transparent to the user, who could use a gui to input their goals to find an analogue in the program which would subsequently rearrange its interface to fit the user’s needs?  Even if not useful to the users, such dynamic modeling could really help designers (IMO)&lt;br /&gt;
*I think the model should do its best to accept models written for ACT-R and whatever other cognitive models there are out there – gives us the best chance of early adoption&lt;br /&gt;
*I would particularly appreciate input on the number/complexity/type of inputs we’ll be using, as well as the same qualities for the output.&lt;br /&gt;
&lt;br /&gt;
=== HCI Guidelines ===&lt;br /&gt;
* Owner: [[User:E J Kalafarski | E J Kalafarski]]&lt;br /&gt;
&lt;br /&gt;
Shneiderman&#039;s Eight Golden Rules and Jakob Nielsen&#039;s Ten Heuristics are perhaps the most famous and well-regarded heuristic design guidelines to emerge over the last twenty years.  Although the explicit theoretical basis for such heuristics is controversial and not well explored, the empirical success of these guidelines is established and accepted.  This module will parse out up to three or four common (that is, intersecting) principles from these accepted guidelines and apply them to the input interface.&lt;br /&gt;
&lt;br /&gt;
As an example, we identify an analogous principle that appears in Shneiderman (&amp;quot;Reduce short-term memory load&amp;quot;)&amp;lt;ref&amp;gt;http://faculty.washington.edu/jtenenbg/courses/360/f04/sessions/schneidermanGoldenRules.html&amp;lt;/ref&amp;gt; and Nielsen (&amp;quot;Recognition rather than recall/Minimize the user&#039;s memory load&amp;quot;)&amp;lt;ref&amp;gt;http://www.useit.com/papers/heuristic/heuristic_list.html&amp;lt;/ref&amp;gt;.  The input interface is then evaluated for its consideration of the principle, based on an explicit formal description of the interface, such as XAML or XUL.  The module attempts to determine how effectively the interface demonstrates the principle.  When analyzing an interface for several principles that may be conflicting or opposing in a given context, the module makes use of a hard-coded but iterative (and evolving) weighting of these principles, based on (1) how often they appear in the training set of accepted sets of guidelines, (2) how analogous a heuristic principle is to a cognitive principle in a parallel training set, and (3) how effective the principle&#039;s associated suggestion is found to be using a feedback mechanism.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Inputs&#039;&#039;&#039;&amp;lt;br/&amp;gt;&lt;br /&gt;
# A formal description of the interface and its elements (e.g. buttons).&lt;br /&gt;
# A formal description of a particular task and the possible paths through a subset of interface elements that permit the user to accomplish that task.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Output&#039;&#039;&#039;&amp;lt;br/&amp;gt;&lt;br /&gt;
Standard XML-formatted file containing problem areas of the input interface, suggestions for each problem area based on principles that were found to have a strong application to a problem element and the problem itself, and a human-readable generated analysis of the element&#039;s affinity for the principle.  Quantitative outputs will not be possible based on heuristic guidelines, and the &amp;quot;utility&amp;quot; section of this module&#039;s output is likely to be blank.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section could include an example or two of established design guidelines that could easily be implemented as modules.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Fitts&#039;s Law ===&lt;br /&gt;
* Owner: [[User:Jon Ericson | Jon Ericson]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This module estimates the time required to complete tasks that have been decomposed into formalized sequences of interactions with interface elements, and provides evaluations and recommendations for reducing that time.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Inputs&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
1. A formal description of the interface and its elements (e.g. buttons).&lt;br /&gt;
&lt;br /&gt;
2. A formal description of a particular task and the possible paths through a subset of interface elements that permit the user to accomplish that task.&lt;br /&gt;
&lt;br /&gt;
3. The physical distances between interface elements along those paths.&lt;br /&gt;
&lt;br /&gt;
4. The width of those elements along the most likely axes of motion.&lt;br /&gt;
&lt;br /&gt;
5. Device (e.g. mouse) characteristics including start/stop time and the inherent speed limitations of the device.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Output&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The module will then use the Shannon formulation of Fitts&#039;s Law to compute the average time needed to complete the task along those paths.&lt;br /&gt;
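A minimal sketch of that computation, assuming illustrative (not calibrated) device constants a and b, with the per-target (distance, width) pairs supplied by inputs 3 and 4:

```python
import math

def fitts_time(distance, width, a=0.1, b=0.15):
    """Shannon formulation of Fitts's law: MT = a + b * log2(D/W + 1).
    a and b are device-dependent constants (the defaults here are
    illustrative placeholders); distance and width are measured along the
    most likely axis of motion."""
    return a + b * math.log2(distance / width + 1)

def path_time(steps, a=0.1, b=0.15):
    """Predicted time for one path through the interface: the sum of the
    movement times for each (distance, width) step along the path."""
    return sum(fitts_time(d, w, a, b) for d, w in steps)

# Three pointing movements along a hypothetical task path (pixels).
total = path_time([(400, 40), (120, 20), (250, 60)])
```

Averaging path_time over the possible paths for a task, weighted by how likely users are to take each one, yields the module's average completion-time estimate.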
&lt;br /&gt;
=== Affordances ===&lt;br /&gt;
* Owner: [[User:Jon Ericson | Jon Ericson]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This simple module will provide interface evaluations and recommendations based on a measure of the extent to which the user perceives the relevant affordances of the interface when performing a number of specified tasks.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Inputs&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
Formalized descriptions of...&lt;br /&gt;
&lt;br /&gt;
1. Interface elements&lt;br /&gt;
&lt;br /&gt;
2. Their associated actions&lt;br /&gt;
&lt;br /&gt;
3. The functions of those actions&lt;br /&gt;
&lt;br /&gt;
4. A particular task&lt;br /&gt;
&lt;br /&gt;
5. User traces for that task.  &lt;br /&gt;
&lt;br /&gt;
Inputs (1-4) are then used to generate a &amp;quot;user-independent&amp;quot; space of possible functions that the interface is capable of performing with respect to a given task -- what the interface &amp;quot;affords&amp;quot; the user.  From this set of possible interactions, our model will then determine the subset of optimal paths for performing a particular task.  The user trace (5) is then used to determine which functions were actually performed in the course of a given task of interest; this information is then compared to the optimal-path data to determine the extent to which affordances of the interface are present but not perceived.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Output&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The output of this module is a simple ratio, (affordances perceived) / [(relevant affordances present) * (time to complete task)], which provides a quantitative measure of the extent to which the interface is &amp;quot;natural&amp;quot; to use for a particular task.&lt;br /&gt;
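That ratio is straightforward to compute once the three quantities have been extracted from the inputs; the sketch below simply restates the formula, with hypothetical counts and task time:

```python
def naturalness(perceived, present, task_time):
    """(affordances perceived) /
       [(relevant affordances present) * (time to complete task)].
    Higher scores indicate an interface that feels more natural for the
    task; present and task_time are assumed to be positive."""
    return perceived / (present * task_time)

# A user who perceived 6 of 8 relevant affordances in a 30-second task.
score = naturalness(perceived=6, present=8, task_time=30.0)
```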
&lt;br /&gt;
=== Workflow, Multi-tasking and Interruptions ===&lt;br /&gt;
* Owner: [[User:Andrew Bragdon | Andrew Bragdon]]&lt;br /&gt;
&lt;br /&gt;
There are, at least, two levels at which users work ([http://portal.acm.org/citation.cfm?id=985692.985707&amp;amp;coll=portal&amp;amp;dl=ACM&amp;amp;CFID=20781736&amp;amp;CFTOKEN=83176621 Gonzales, et al., 2004]).  Users accomplish individual low-level tasks which are part of larger &#039;&#039;working spheres&#039;&#039;; for example, an office worker might send several emails, create several Post-It (TM) note reminders, and then edit a Word document, each of these smaller tasks being part of a single larger working sphere of &amp;quot;adding a new section to the website.&amp;quot;  Thus, it is important to understand this larger workflow context, which often involves extensive multi-tasking as well as switching between a variety of computing devices and traditional tools, such as notebooks.  In that study it was found that the information workers surveyed typically switch individual tasks every 2 minutes and maintain many simultaneous working spheres, between which they switch on average every 12 minutes.  This frenzied pace of switching tasks and working spheres suggests that users will not be using a single application or device for a long period of time, and that affordances to support this characteristic pattern of information work are important.&lt;br /&gt;
&lt;br /&gt;
The purpose of this module is to integrate existing work on multi-tasking, interruption and higher-level workflow into a framework which can predict user recovery times from interruptions.  Specifically, the goals of this framework will be to:&lt;br /&gt;
&lt;br /&gt;
* Understand the role of the larger workflow context in user interfaces&lt;br /&gt;
* Understand the impact of interruptions on user workflow&lt;br /&gt;
* Understand how to design software which fits into the larger working spheres in which information work takes place&lt;br /&gt;
&lt;br /&gt;
It is important to point out that, because workflow and multi-tasking rely heavily on higher-level brain functioning, it is unrealistic within the scope of this grant to propose a system which can predict user performance given a description of a set of arbitrary software programs.  Therefore, we believe this module will function in a largely qualitative role, providing context to the rest of the model.  Specifically, our findings related to interruption and multi-tasking will advance the basic research question of &amp;quot;how do users react to interruptions when using working sets of varying sizes?&amp;quot;  This core HCI contribution will help to inform the rest of the outputs of the model in a qualitative manner.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Inputs&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
N/A&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Outputs&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
N/A&lt;br /&gt;
&lt;br /&gt;
=== Working Memory Load ===&lt;br /&gt;
* Owner: [[User:Gideon Goldin | Gideon Goldin]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This module measures how much information the user needs to retain in memory while interacting with the interface and makes suggestions for improvements.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Models&lt;br /&gt;
** Baddeley&#039;s Model of Working Memory&amp;lt;ref&amp;gt;http://en.wikipedia.org/wiki/Baddeley%27s_model_of_working_memory&amp;lt;/ref&amp;gt;&lt;br /&gt;
*** Episodic Buffer&lt;br /&gt;
** George Miller&#039;s &amp;quot;The magic number 7 plus or minus 2&amp;quot;&amp;lt;ref&amp;gt;http://en.wikipedia.org/wiki/The_Magical_Number_Seven,_Plus_or_Minus_Two&amp;lt;/ref&amp;gt;&lt;br /&gt;
** The magic number 4&amp;lt;ref&amp;gt;Cowan, N. (2001). The magical number 4 in short-term memory: A reconsideration of mental storage capacity. Behavioral and Brain Sciences, 24, 87-185.&amp;lt;/ref&amp;gt;&lt;br /&gt;
* Chunking&amp;lt;ref&amp;gt;http://en.wikipedia.org/wiki/Chunking_(psychology)&amp;lt;/ref&amp;gt;&lt;br /&gt;
* Priming&amp;lt;ref&amp;gt;http://en.wikipedia.org/wiki/Priming_(psychology)&amp;lt;/ref&amp;gt;&lt;br /&gt;
* Subitizing and Counting&amp;lt;ref&amp;gt;http://en.wikipedia.org/wiki/Subitizing&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Inputs&#039;&#039;&#039;&lt;br /&gt;
* Visual Stimuli&lt;br /&gt;
* Audio Stimuli&lt;br /&gt;
&#039;&#039;&#039;Outputs&#039;&#039;&#039;&lt;br /&gt;
* Remembered percepts&lt;br /&gt;
* Half-Life of percepts&lt;br /&gt;
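As an illustration of the listed outputs, the sketch below models the half-life of percepts with simple exponential decay and caps capacity at Cowan's magical number 4; both the decay function and the capacity value are our assumptions, since the module does not commit to a quantitative model:

```python
def retention(t, half_life):
    """Fraction of a percept still retained after t seconds, assuming
    simple exponential decay (an illustrative choice, not the module's
    committed model)."""
    return 0.5 ** (t / half_life)

def remembered_percepts(n_presented, t, half_life, capacity=4):
    """Expected number of percepts still in working memory, capped at a
    capacity in the spirit of Cowan's magical number 4."""
    return min(n_presented, capacity) * retention(t, half_life)

# Ten stimuli presented; expected recall after one half-life has elapsed.
expected = remembered_percepts(10, 5.0, half_life=5.0)
```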
&lt;br /&gt;
=== Automaticity of Interaction ===&lt;br /&gt;
* Owner: [[User:Gideon Goldin | Gideon Goldin]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Measures how easily the interaction with the interface becomes automatic with experience and makes suggestions for improvements.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Learning&amp;lt;ref&amp;gt;http://en.wikipedia.org/wiki/Learning#Mathematical_models_of_learning&amp;lt;/ref&amp;gt;&lt;br /&gt;
** Logan&#039;s instance theory of automatization&amp;lt;ref&amp;gt;http://74.125.95.132/search?q=cache:IZ-Zccsu3SEJ:psych.wisc.edu/ugstudies/psych733/logan_1988.pdf+logan+isntance+teory&amp;amp;cd=1&amp;amp;hl=en&amp;amp;ct=clnk&amp;amp;gl=us&amp;amp;client=firefox-a&amp;lt;/ref&amp;gt;&lt;br /&gt;
* Fluency&lt;br /&gt;
** As meta-cognitive information &amp;lt;ref&amp;gt;http://www.sciencedirect.com/science?_ob=ArticleURL&amp;amp;_udi=B6VH9-4SM7PFK-4&amp;amp;_user=10&amp;amp;_rdoc=1&amp;amp;_fmt=&amp;amp;_orig=search&amp;amp;_sort=d&amp;amp;view=c&amp;amp;_acct=C000050221&amp;amp;_version=1&amp;amp;_urlVersion=0&amp;amp;_userid=10&amp;amp;md5=10cd279fa80958981fcc3c06684c09af&amp;lt;/ref&amp;gt;&lt;br /&gt;
** As a cognitive &#039;&#039;heuristic&#039;&#039;&amp;lt;ref&amp;gt;http://en.wikipedia.org/wiki/Fluency_heuristic&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Inputs&#039;&#039;&#039;&lt;br /&gt;
* Interface&lt;br /&gt;
* User goals&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Outputs&#039;&#039;&#039;&lt;br /&gt;
* Learning curve&lt;br /&gt;
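Logan's instance theory predicts that response times speed up as a power function of practice, so the learning-curve output could take the form sketched below; the parameter values are illustrative placeholders, not fitted:

```python
def predicted_rt(n_trials, asymptote=0.3, gain=1.2, rate=0.5):
    """Power-law learning curve RT(N) = a + b * N**(-c), the form predicted
    by Logan's instance theory of automatization.  asymptote, gain and
    rate stand in for parameters that would be fitted to user data."""
    return asymptote + gain * n_trials ** (-rate)

# Predicted response times over the first ten interactions with the interface.
curve = [predicted_rt(n) for n in range(1, 11)]
```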
&lt;br /&gt;
== Integration into the Design Process ==&lt;br /&gt;
* Owner: [[User:Ian Spector | Ian Spector]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section outlines the process of designing an HCI interface and at what stages our proposal fits in and how.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Preliminary Results =&lt;br /&gt;
&lt;br /&gt;
===Workflow, Multi-tasking, and Interruption===&lt;br /&gt;
&lt;br /&gt;
====I.  &#039;&#039;&#039;Goals&#039;&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
The goals of the preliminary work are to gain qualitative insight into how information workers practice metawork, and to determine whether people might be better supported by software which facilitates metawork and interruptions.  Thus, the preliminary work should investigate, and demonstrate, the need for and impact of the core goals of the project.&lt;br /&gt;
&lt;br /&gt;
====II.  &#039;&#039;&#039;Methodology&#039;&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
Seven information workers, ages 20-38 (5 male, 2 female), were interviewed to determine which methods they use to &amp;quot;stay organized&amp;quot;.  An initial list of metawork strategies was established from two pilot interviews, and then a final list was compiled.  Participants then responded to a series of 17 questions designed to gain insight into their metawork strategies and process.  In addition, verbal interviews were conducted to get additional open-ended feedback.&lt;br /&gt;
&lt;br /&gt;
====III.  &#039;&#039;&#039;Final Results&#039;&#039;&#039;====&lt;br /&gt;
A histogram of the methods people use to &amp;quot;stay organized&amp;quot; - tracking things they need to do (TODOs), appointments and meetings, etc. - is shown in the figure below.&lt;br /&gt;
&lt;br /&gt;
[[Image:AcbGraph.jpg]]&lt;br /&gt;
&lt;br /&gt;
In addition, participants reported using a number of other methods, including:&lt;br /&gt;
&lt;br /&gt;
* iCal&lt;br /&gt;
* Notes written in xterms&lt;br /&gt;
* &amp;quot;Inbox zero&amp;quot; method of email organization&lt;br /&gt;
* iGoogle Notepad (for tasks)&lt;br /&gt;
* Tag emails as &amp;quot;TODO&amp;quot;, &amp;quot;Important&amp;quot;, etc.&lt;br /&gt;
* Things (Organizer Software)&lt;br /&gt;
* Physical items placed to &amp;quot;remind me of things&amp;quot;&lt;br /&gt;
* Sometimes arranging windows on desk&lt;br /&gt;
* Keeping browser tabs open&lt;br /&gt;
* Bookmarking web pages&lt;br /&gt;
* Keeping programs/files open, scrolled to certain locations, sometimes with items selected&lt;br /&gt;
&lt;br /&gt;
In addition, three participants said that when interrupted they &amp;quot;rarely&amp;quot; or &amp;quot;very rarely&amp;quot; were able to resume the task they were working on prior to the interruption.  Three of the participants said that they would not actively recommend their metawork strategies for other people, and two said that staying organized was &amp;quot;difficult&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Four participants were neutral to the idea of new tools to help them stay organized and three said that they would like to have such a tool/tools.&lt;br /&gt;
&lt;br /&gt;
====IV.  &#039;&#039;&#039;Discussion&#039;&#039;&#039;====&lt;br /&gt;
These results quantitatively support our hypothesis that there is no clearly dominant set of metawork strategies employed by information workers.  This highly fragmented landscape is surprising, given that most information workers work in a similar environment - at a desk, on the phone, in meetings - and with the same types of tools - computers, pens, paper, etc.  We believe this suggests that there are complex tradeoffs between these methods and that no single method is sufficient.  We therefore believe that users will be better supported by a new set of software-based metawork tools.&lt;br /&gt;
&lt;br /&gt;
= [Criticisms] =&lt;br /&gt;
* Owner: [[User:Andrew Bragdon | Andrew Bragdon]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Any criticisms or questions we have regarding the proposal can go here.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;/div&gt;</summary>
		<author><name>Adam Darlow</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal_(draft_2)&amp;diff=3083</id>
		<title>CS295J/Research proposal (draft 2)</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal_(draft_2)&amp;diff=3083"/>
		<updated>2009-04-21T16:38:18Z</updated>

		<summary type="html">&lt;p&gt;Adam Darlow: /* Inputs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
* Owners: [[User:Adam Darlow | Adam Darlow]], [[User:Eric Sodomka | Eric Sodomka]]&lt;br /&gt;
&lt;br /&gt;
We propose a framework for interface evaluation and recommendation that integrates behavioral models and design guidelines from both cognitive science and HCI. Our framework behaves like a committee of specialized experts, where each expert provides its own assessment of the interface, given its particular knowledge of HCI or cognitive science. For example, an expert may provide an evaluation based on the GOMS method, Fitts&#039;s law, Maeda&#039;s design principles, or cognitive models of learning and memory. An aggregator collects all of these assessments and weights the opinions of each expert, and outputs to the developer a merged evaluation score and a weighted set of recommendations.&lt;br /&gt;
&lt;br /&gt;
Systematic methods of estimating human performance with computer interfaces are used only sparingly despite their obvious benefits, chiefly because of the overhead involved in implementing them. In order to test an interface, both manual coding systems like the GOMS variations and user simulations like those based on ACT-R/PM and EPIC require detailed pseudo-code descriptions of the user&#039;s workflow with the application interface. Any change to the interface then requires extensive changes to the pseudo-code, a major problem given the trial-and-error nature of interface design. Updating the models themselves is even more complicated: even an expert in CPM-GOMS, for example, cannot necessarily adapt it to take into account results from new cognitive research.&lt;br /&gt;
&lt;br /&gt;
Our proposal makes automatic interface evaluation easier to use in several ways. First, we propose to divide the input to the system into three separate parts: functionality, user traces, and interface. By separating the functionality from the interface, even radical interface changes will require updating only that part of the input. The user traces are also defined over the functionality, so they too translate across different interfaces. Second, the parallel modular architecture allows for a lower &amp;quot;entry cost&amp;quot; for using the tool. The system includes a broad array of evaluation modules, some of which are very simple and some more complex. The simpler modules use only a subset of the input that a system like GOMS or ACT-R would require. This means that while more input will still lead to better output, interface designers can get minimal evaluations with only minimal information. For example, a visual search module may not require any functionality or user traces in order to determine whether all interface elements are distinct enough to be easy to find. Finally, a parallel modular architecture is much easier to augment with relevant cognitive and design evaluations.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Background / Related Work =&lt;br /&gt;
&lt;br /&gt;
Each person should add the background related to their specific aims.&lt;br /&gt;
&lt;br /&gt;
* Steven Ellis - Cognitive models of HCI, including GOMS variations and ACT-R&lt;br /&gt;
* EJ - Design Guidelines&lt;br /&gt;
* Jon - Perception and Action&lt;br /&gt;
* Andrew - Multiple task environments&lt;br /&gt;
* Gideon - Cognition and dual systems&lt;br /&gt;
* Ian - Interface design process&lt;br /&gt;
* Trevor - User trace collection methods (especially any eye-tracking, EEG, ... you want to suggest using)&lt;br /&gt;
&lt;br /&gt;
= Specific Aims and Contributions (to be separated later) =&lt;br /&gt;
See the [http://vrl.cs.brown.edu/wiki/images/b/b1/Flowchart2.pdf flowchart] for a visual overview of our aims.&lt;br /&gt;
&lt;br /&gt;
== Inputs ==&lt;br /&gt;
In order to use this framework, a designer will have to provide:&lt;br /&gt;
* Functional specification - the possible interactions between the user and the application. These can be thought of as method signatures, with a name (e.g., setVolume), direction (to user or from user) and a list of value types (boolean, number, text, video, ...) for each interaction.&lt;br /&gt;
* GUI specification - a mapping of interactions to interface elements (e.g., setVolume is mapped to the grey knob in the bottom left corner with clockwise turning increasing the input number).&lt;br /&gt;
* Functional user traces - sequences of representative ways in which the application is used. Instead of writing them by hand, the designer could have users exercise the application with a trial interface and then use our methods to generalize the user traces beyond that specific interface (the second method is depicted in the diagram). As a form of pre-processing, the system also generates an interaction transition matrix, which lists the probability of each type of interaction given the previous interaction.&lt;br /&gt;
&lt;br /&gt;
Each of the modules can use all of this information or a subset of it. Our approach stresses flexibility and the ability to give more meaningful feedback the more information is provided.&lt;br /&gt;
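The interaction transition matrix produced during pre-processing can be estimated directly from the functional user traces; a minimal sketch, with hypothetical interaction names:

```python
from collections import Counter, defaultdict

def transition_matrix(traces):
    """Estimate P(next interaction | previous interaction) from functional
    user traces.  Each trace is a sequence of interaction names drawn from
    the functional specification (the names below are hypothetical)."""
    counts = defaultdict(Counter)
    for trace in traces:
        for prev, nxt in zip(trace, trace[1:]):
            counts[prev][nxt] += 1
    # Normalize each row of counts into conditional probabilities.
    return {prev: {nxt: c / sum(nxts.values()) for nxt, c in nxts.items()}
            for prev, nxts in counts.items()}

# Two short traces over hypothetical interactions.
matrix = transition_matrix([["openFile", "play", "setVolume"],
                            ["openFile", "setVolume", "play"]])
```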
&lt;br /&gt;
== Outputs ==&lt;br /&gt;
&lt;br /&gt;
Also passed as input is the utility function to optimize over. This utility function is a weighting of various performance metrics (time, cognitive load, fatigue, etc.), where the weighting expresses the importance of a particular dimension to the user. For example, a user at NASA probably cares more about interface accuracy than speed. By passing this information to our committee of experts, we can create interfaces that are tuned to maximize the utility of a particular user type.&lt;br /&gt;
&lt;br /&gt;
As output, the aggregator will provide an evaluation of the interface and a set of recommended improvements. Evaluations are expressed both in terms of the utility function components (i.e. time, fatigue, cognitive load, etc.), and in terms of the overall utility for this interface (as defined by the utility function). These evaluations are given in the form of an efficiency curve, where the utility received on each dimension can change as the user becomes more accustomed to the interface. &lt;br /&gt;
&lt;br /&gt;
Suggested improvements for the GUI are also output. These suggestions are meant to optimize the utility function that was input to the system. If a user values accuracy over time, interface suggestions will be made accordingly.&lt;br /&gt;
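The utility computation described above reduces to a weighted sum over the per-dimension evaluations; the metric names, values, and weights below are hypothetical:

```python
def overall_utility(metrics, weights):
    """Merge per-dimension evaluations into one utility score using the
    user-supplied weighting.  Both arguments map dimension names to
    numbers; the weights express how much this user type values each
    dimension."""
    return sum(weights[k] * metrics[k] for k in weights)

# A NASA-style profile that values accuracy over speed.
metrics = {"time": 0.6, "accuracy": 0.9, "cognitive_load": 0.7}
weights = {"time": 0.1, "accuracy": 0.7, "cognitive_load": 0.2}
u = overall_utility(metrics, weights)
```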
&lt;br /&gt;
== Collecting User Traces ==&lt;br /&gt;
* Owner: [[User:Trevor O&#039;Brien | Trevor O&#039;Brien]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Given an interface, our first step is to run users on the interface and log these user interactions. We want to log actions at a sufficiently low level so that a GOMS model can be generated from the data. When possible, we&#039;d also like to log data using additional sensing technologies, such as pupil-tracking, muscle-activity monitoring and auditory recognition; this information will help to analyze the explicit contributions of perception, cognition and motor skills with respect to user performance.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Generalizing User Traces ==&lt;br /&gt;
* Owner: [[User:Trevor O&#039;Brien | Trevor O&#039;Brien]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;The user traces that are collected are tied to a specific interface. In order to use them with different interfaces to the same application, they should be generalized to be based only on the functional description of the application and the user&#039;s goal hierarchy. This would abstract away from actions like accessing a menu.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;In addition to specific user traces, many modules could use a transition probability matrix based on interaction predictions.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Parallel Framework for Evaluation Modules ==&lt;br /&gt;
* Owner: [[User:Adam Darlow | Adam Darlow]], [[User:Eric Sodomka | Eric Sodomka]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section will describe in more detail the inputs, outputs and architecture that were presented in the introduction.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Evaluation and Recommendation via Modules ==&lt;br /&gt;
* Owner: [[User:E J Kalafarski | E J Kalafarski]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section describes the aggregator, which takes the output of multiple independent modules and aggregates the results to provide (1) an evaluation and (2) recommendations for the user interface. We should explain how the aggregator weights the output of different modules (this could be based on historical performance of each module, or perhaps based on E.J.&#039;s cognitive/HCI guidelines).&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Sample Modules ==&lt;br /&gt;
&lt;br /&gt;
=== CPM-GOMS ===&lt;br /&gt;
* Owners: [[User:Steven Ellis | Steven Ellis]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This module will provide interface evaluations and suggestions based on a CPM-GOMS model of cognition for the given interface. It will provide a quantitative, predictive, cognition-based parameterization of usability. From empirically collected data, user trajectories through the model (critical paths) will be examined, highlighting bottlenecks within the interface, and offering suggested alterations to the interface to induce more optimal user trajectories.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== HCI Guidelines ===&lt;br /&gt;
* Owner: [[User:E J Kalafarski | E J Kalafarski]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section could include an example or two of established design guidelines that could easily be implemented as modules.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Fitts&#039;s Law ===&lt;br /&gt;
* Owner: [[User:Jon Ericson | Jon Ericson]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This simple module will use Fitts&#039;s Law to provide interface evaluations and recommendations.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Affordances ===&lt;br /&gt;
* Owner: [[User:Jon Ericson | Jon Ericson]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This simple module will provide interface evaluations and recommendations based on perceived affordances and if possible a comparison to actual affordances.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Interruptions ===&lt;br /&gt;
* Owner: [[User:Andrew Bragdon | Andrew Bragdon]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;While most usability testing focuses on low-level task performance, there is also previous work suggesting that users also work at a higher, working sphere level. This module attempts to evaluate a given interface with respect to these higher-level considerations, such as task switching.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Working Memory Load ===&lt;br /&gt;
* Owner: [[User:Gideon Goldin | Gideon Goldin]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This module measures how much information the user needs to retain in memory while interacting with the interface and makes suggestions for improvements.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Automaticity of Interaction ===&lt;br /&gt;
* Owner: [[User:Gideon Goldin | Gideon Goldin]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Measures how easily the interaction with the interface becomes automatic with experience and makes suggestions for improvements.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Integration into the Design Process ==&lt;br /&gt;
* Owner: [[User:Ian Spector | Ian Spector]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section outlines the process of designing an HCI interface and at what stages our proposal fits in and how.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Preliminary Results =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Each person should come up with a single paragraph describing fictional (or not) preliminary results pertaining to their owned specific aims and contributions.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= [Criticisms] =&lt;br /&gt;
* Owner: [[User:Andrew Bragdon | Andrew Bragdon]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Any criticisms or questions we have regarding the proposal can go here.&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Adam Darlow</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal_(draft_2)&amp;diff=3080</id>
		<title>CS295J/Research proposal (draft 2)</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal_(draft_2)&amp;diff=3080"/>
		<updated>2009-04-21T16:23:19Z</updated>

		<summary type="html">&lt;p&gt;Adam Darlow: /* Overview of Contributions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
* Owners: [[User:Adam Darlow | Adam Darlow]], [[User:Eric Sodomka | Eric Sodomka]]&lt;br /&gt;
&lt;br /&gt;
We propose a framework for interface evaluation and recommendation that integrates behavioral models and design guidelines from both cognitive science and HCI. Our framework behaves like a committee of specialized experts, where each expert provides its own assessment of the interface, given its particular knowledge of HCI or cognitive science. For example, an expert may provide an evaluation based on the GOMS method, Fitts&#039;s law, Maeda&#039;s design principles, or cognitive models of learning and memory. An aggregator collects all of these assessments and weights the opinions of each expert, and outputs to the developer a merged evaluation score and a weighted set of recommendations.&lt;br /&gt;
&lt;br /&gt;
Systematic methods of estimating human performance with computer interfaces are used only sparingly despite their obvious benefits, chiefly because of the overhead involved in implementing them. In order to test an interface, both manual coding systems like the GOMS variations and user simulations like those based on ACT-R/PM and EPIC require detailed pseudo-code descriptions of the user&#039;s workflow with the application interface. Any change to the interface then requires extensive changes to the pseudo-code, a major problem given the trial-and-error nature of interface design. Updating the models themselves is even more complicated: even an expert in CPM-GOMS, for example, cannot necessarily adapt it to take into account results from new cognitive research.&lt;br /&gt;
&lt;br /&gt;
Our proposal makes automatic interface evaluation easier to use in several ways. First, we propose to divide the input to the system into three separate parts: functionality, user traces, and interface. By separating the functionality from the interface, even radical interface changes will require updating only that part of the input. The user traces are also defined over the functionality, so they too translate across different interfaces. Second, the parallel modular architecture allows for a lower &amp;quot;entry cost&amp;quot; for using the tool. The system includes a broad array of evaluation modules, some of which are very simple and some more complex. The simpler modules use only a subset of the input that a system like GOMS or ACT-R would require. This means that while more input will still lead to better output, interface designers can get minimal evaluations with only minimal information. For example, a visual search module may not require any functionality or user traces in order to determine whether all interface elements are distinct enough to be easy to find. Finally, a parallel modular architecture is much easier to augment with relevant cognitive and design evaluations.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Background / Related Work =&lt;br /&gt;
&lt;br /&gt;
Each person should add the background related to their specific aims.&lt;br /&gt;
&lt;br /&gt;
* Steven Ellis - Cognitive models of HCI, including GOMS variations and ACT-R&lt;br /&gt;
* EJ - Design Guidelines&lt;br /&gt;
* Jon - Perception and Action&lt;br /&gt;
* Andrew - Multiple task environments&lt;br /&gt;
* Gideon - Cognition and dual systems&lt;br /&gt;
* Ian - Interface design process&lt;br /&gt;
* Trevor - User trace collection methods (especially any eye-tracking, EEG, ... you want to suggest using)&lt;br /&gt;
&lt;br /&gt;
= Specific Aims and Contributions (to be separated later) =&lt;br /&gt;
See the [http://vrl.cs.brown.edu/wiki/images/b/b1/Flowchart2.pdf flowchart] for a visual overview of our aims.&lt;br /&gt;
&lt;br /&gt;
== Inputs ==&lt;br /&gt;
&lt;br /&gt;
== Outputs ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Collecting User Traces ==&lt;br /&gt;
* Owner: [[User:Trevor O&#039;Brien | Trevor O&#039;Brien]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Given an interface, our first step is to run users on the interface and log these user interactions. We want to log actions at a sufficiently low level so that a GOMS model can be generated from the data. When possible, we&#039;d also like to log data using additional sensing technologies, such as pupil-tracking, muscle-activity monitoring and auditory recognition; this information will help to analyze the explicit contributions of perception, cognition and motor skills with respect to user performance.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Generalizing User Traces ==&lt;br /&gt;
* Owner: [[User:Trevor O&#039;Brien | Trevor O&#039;Brien]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;The user traces that are collected are tied to a specific interface. In order to use them with different interfaces to the same application, they should be generalized to be based only on the functional description of the application and the user&#039;s goal hierarchy. This would abstract away from actions like accessing a menu.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;In addition to specific user traces, many modules could use a transition probability matrix based on interaction predictions.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Parallel Framework for Evaluation Modules ==&lt;br /&gt;
* Owner: [[User:Adam Darlow | Adam Darlow]], [[User:Eric Sodomka | Eric Sodomka]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section will describe in more detail the inputs, outputs and architecture that were presented in the introduction.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Evaluation and Recommendation via Modules ==&lt;br /&gt;
* Owner: [[User:E J Kalafarski | E J Kalafarski]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section describes the aggregator, which takes the output of multiple independent modules and aggregates the results to provide (1) an evaluation and (2) recommendations for the user interface. We should explain how the aggregator weights the output of different modules (this could be based on historical performance of each module, or perhaps based on E.J.&#039;s cognitive/HCI guidelines).&#039;&#039;&lt;br /&gt;
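&#039;&#039;A minimal sketch of one candidate weighting scheme (all module names, scores and weights here are hypothetical): each module reports a score and a list of recommendations, and the aggregator averages the scores by module weight and ranks recommendations by the total weight of the modules endorsing them:&#039;&#039;&lt;br /&gt;

```python
def aggregate(module_reports, weights):
    """Combine per-module evaluations into one score and a ranked
    recommendation list.

    module_reports maps a module name to (score, [recommendations]);
    weights maps a module name to its trust weight, which could come
    from the module's historical predictive accuracy.
    """
    total_weight = sum(weights[m] for m in module_reports)
    # Weighted average of the per-module evaluation scores.
    score = sum(weights[m] * s for m, (s, _) in module_reports.items()) / total_weight
    # Each recommendation accumulates the weight of every module endorsing it.
    endorsement = {}
    for m, (_, recs) in module_reports.items():
        for rec in recs:
            endorsement[rec] = endorsement.get(rec, 0.0) + weights[m]
    ranked = sorted(endorsement, key=endorsement.get, reverse=True)
    return score, ranked

reports = {"fitts": (0.8, ["enlarge OK button"]),
           "memory": (0.4, ["show breadcrumb trail", "enlarge OK button"])}
score, recs = aggregate(reports, {"fitts": 2.0, "memory": 1.0})
```

&#039;&#039;Here a recommendation made by several modules outranks one made by a single module, which is one simple way the committee metaphor could be realized.&#039;&#039;&lt;br /&gt;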
&lt;br /&gt;
== Sample Modules ==&lt;br /&gt;
&lt;br /&gt;
=== CPM-GOMS ===&lt;br /&gt;
* Owners: [[User:Steven Ellis | Steven Ellis]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This module will provide interface evaluations and suggestions based on a CPM-GOMS model of cognition for the given interface. It will provide a quantitative, predictive, cognition-based parameterization of usability. From empirically collected data, user trajectories through the model (critical paths) will be examined, highlighting bottlenecks within the interface, and offering suggested alterations to the interface to induce more optimal user trajectories.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== HCI Guidelines ===&lt;br /&gt;
* Owner: [[User:E J Kalafarski | E J Kalafarski]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section could include an example or two of established design guidelines that could easily be implemented as modules.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Fitts&#039;s Law ===&lt;br /&gt;
* Owner: [[User:Jon Ericson | Jon Ericson]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This simple module will use Fitts&#039;s Law to provide interface evaluations and recommendations.&#039;&#039;&lt;br /&gt;
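&#039;&#039;For concreteness, a minimal sketch of the underlying computation, using the Shannon formulation of Fitts&#039;s law with placeholder coefficients (in practice a and b would be fit per input device from user-trace data):&#039;&#039;&lt;br /&gt;

```python
import math

def fitts_time(distance, width, a=0.1, b=0.15):
    """Predicted pointing time in seconds under the Shannon
    formulation of Fitts's law: MT = a + b * log2(D/W + 1).

    The coefficients a and b are placeholders, not calibrated values.
    """
    return a + b * math.log2(distance / width + 1)

# A small, distant target is predicted to be slower to acquire
# than a large, nearby one.
print(fitts_time(distance=600, width=20))
print(fitts_time(distance=100, width=80))
```

&#039;&#039;Summing such predictions over the pointing actions in a user trace gives the module a per-interface cost, and comparing costs across candidate layouts yields its recommendations.&#039;&#039;&lt;br /&gt;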
&lt;br /&gt;
=== Affordances ===&lt;br /&gt;
* Owner: [[User:Jon Ericson | Jon Ericson]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This simple module will provide interface evaluations and recommendations based on perceived affordances and if possible a comparison to actual affordances.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Interruptions ===&lt;br /&gt;
* Owner: [[User:Andrew Bragdon | Andrew Bragdon]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;While most usability testing focuses on low-level task performance, prior work suggests that users also organize their work at a higher level, that of the working sphere. This module evaluates a given interface with respect to these higher-level considerations, such as task switching.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Working Memory Load ===&lt;br /&gt;
* Owner: [[User:Gideon Goldin | Gideon Goldin]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This module measures how much information the user needs to retain in memory while interacting with the interface and makes suggestions for improvements.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Automaticity of Interaction ===&lt;br /&gt;
* Owner: [[User:Gideon Goldin | Gideon Goldin]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Measures how easily the interaction with the interface becomes automatic with experience and makes suggestions for improvements.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Integration into the Design Process ==&lt;br /&gt;
* Owner: [[User:Ian Spector | Ian Spector]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section outlines the interface design process and describes at which stages, and how, our proposal fits into it.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Preliminary Results =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Each person should write a single paragraph describing fictional (or not) preliminary results pertaining to the specific aims and contributions they own.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= [Criticisms] =&lt;br /&gt;
* Owner: [[User:Andrew Bragdon | Andrew Bragdon]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Any criticisms or questions we have regarding the proposal can go here.&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Adam Darlow</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal_(draft_2)&amp;diff=3076</id>
		<title>CS295J/Research proposal (draft 2)</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal_(draft_2)&amp;diff=3076"/>
		<updated>2009-04-21T15:16:30Z</updated>

		<summary type="html">&lt;p&gt;Adam Darlow: /* Background / Related Work */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
* Owners: [[User:Adam Darlow | Adam Darlow]], [[User:Eric Sodomka | Eric Sodomka]]&lt;br /&gt;
&lt;br /&gt;
We propose a framework for interface evaluation and recommendation that integrates behavioral models and design guidelines from both cognitive science and HCI. Our framework behaves like a committee of specialized experts, where each expert provides its own assessment of the interface, given its particular knowledge of HCI or cognitive science. For example, an expert may provide an evaluation based on the GOMS method, Fitts&#039;s law, Maeda&#039;s design principles, or cognitive models of learning and memory. An aggregator collects all of these assessments and weights the opinions of each expert, and outputs to the developer a merged evaluation score and a weighted set of recommendations.&lt;br /&gt;
&lt;br /&gt;
Systematic methods of estimating human performance with computer interfaces are used only sparingly despite their obvious benefits, because of the overhead involved in implementing them. In order to test an interface, both manual coding systems like the GOMS variations and user simulations like those based on ACT-R/PM and EPIC require detailed pseudo-code descriptions of the user&#039;s workflow with the application interface. Any change to the interface then requires extensive changes to the pseudo-code, a major problem given the trial-and-error nature of interface design. Updating the models themselves is even more complicated: even an expert in CPM-GOMS, for example, cannot necessarily adapt it to account for results from new cognitive research.&lt;br /&gt;
&lt;br /&gt;
Our proposal makes automatic interface evaluation easier to use in several ways. First, we propose to divide the input to the system into three separate parts: functionality, user traces and interface. By separating the functionality from the interface, even radical interface changes will require updating only that part of the input. The user traces are also defined over the functionality, so they too translate across different interfaces. Second, the parallel modular architecture lowers the &amp;quot;entry cost&amp;quot; of using the tool. The system includes a broad array of evaluation modules, some very simple and some more complex. The simpler modules use only a subset of the input that a system like GOMS or ACT-R would require, which means that while more input will still lead to better output, interface designers can get minimal evaluations with only minimal information. For example, a visual search module may not require any functionality or user traces in order to determine whether all interface elements are distinct enough to be easy to find. Finally, a parallel modular architecture is much easier to augment with relevant cognitive and design evaluations.&lt;br /&gt;
&lt;br /&gt;
= Overview of Contributions =&lt;br /&gt;
* Owners: [[User:Adam Darlow | Adam Darlow]], [[User:Eric Sodomka | Eric Sodomka]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note: For reference, this is the aggregate set of contributions from last week. We can edit, add to or prune this list as needed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Design and user-study evaluation of novel techniques for collecting and filtering user traces with respect to user goals.&lt;br /&gt;
* Extensible, low-cost architecture for integrating pupil-tracking, muscle-activity monitoring, and auditory recognition with user traces in existing applications.&lt;br /&gt;
* System for isolating cognitive, perceptual, and motor tasks from an interface design to generate CPM-GOMS models for analysis.&lt;br /&gt;
* Design and quantitative evaluation of semi-automated techniques for extracting critical paths from an existing CPM-GOMS model.&lt;br /&gt;
* Novel algorithm for analyzing and optimizing critical paths based on established research in cognitive science.&lt;br /&gt;
* A design tool that can provide a designer with recommendations for interface improvements, based on a &#039;&#039;unified matrix&#039;&#039; of cognitive principles and heuristic design guidelines.&lt;br /&gt;
* The creation of a language for &#039;&#039;abstractly representing user interfaces&#039;&#039; in terms of the layout of graphical components and the functional relationships between these components.&lt;br /&gt;
* A system for &#039;&#039;generating interaction histories&#039;&#039; within user interfaces to facilitate individual and collaborative scientific discovery, and to enable researchers to more easily document and analyze user behavior.&lt;br /&gt;
* A system that takes user traces and creates a GOMS model that decomposes user actions  into various cognitive, perceptual, and motor control tasks. &lt;br /&gt;
* The development of other evaluation methods using various cognitive/HCI models and guidelines.&lt;br /&gt;
* A design tool that can provide a designer with recommendations for interface improvements. These recommendations can be made for a specific type of user or for the average user, as expressed by a utility function.&lt;br /&gt;
&lt;br /&gt;
= Background / Related Work =&lt;br /&gt;
&lt;br /&gt;
Each person should add the background related to their specific aims.&lt;br /&gt;
&lt;br /&gt;
* Steven Ellis - Cognitive models of HCI, including GOMS variations and ACT-R&lt;br /&gt;
* EJ - Design Guidelines&lt;br /&gt;
* Jon - Perception and Action&lt;br /&gt;
* Andrew - Multiple task environments&lt;br /&gt;
* Gideon - Cognition and dual systems&lt;br /&gt;
* Ian - Interface design process&lt;br /&gt;
* Trevor - User trace collection methods (especially any eye-tracking, EEG, ... you want to suggest using)&lt;br /&gt;
&lt;br /&gt;
= Specific Aims and Contributions (to be separated later) =&lt;br /&gt;
&#039;&#039;&#039;TODO: Add some intro paragraph here.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Collecting User Traces ==&lt;br /&gt;
* Owner: [[User:Trevor O&#039;Brien | Trevor O&#039;Brien]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Given an interface, our first step is to run users on it and log their interactions. We want to log actions at a sufficiently low level that a GOMS model can be generated from the data. When possible, we&#039;d also like to log data using additional sensing technologies, such as pupil tracking, muscle-activity monitoring and auditory recognition; this information will help us isolate the respective contributions of perception, cognition and motor skills to user performance.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Generalizing User Traces ==&lt;br /&gt;
* Owner: [[User:Trevor O&#039;Brien | Trevor O&#039;Brien]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;The user traces that are collected are tied to a specific interface. In order to use them with different interfaces to the same application, they should be generalized to be based only on the functional description of the application and the user&#039;s goal hierarchy. This would abstract away from actions like accessing a menu.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;In addition to specific user traces, many modules could use a transition probability matrix based on interaction predictions.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Parallel Framework for Evaluation Modules ==&lt;br /&gt;
* Owner: [[User:Adam Darlow | Adam Darlow]], [[User:Eric Sodomka | Eric Sodomka]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section will describe in more detail the inputs, outputs and architecture that were presented in the introduction.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Evaluation and Recommendation via Modules ==&lt;br /&gt;
* Owner: [[User:E J Kalafarski | E J Kalafarski]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section describes the aggregator, which takes the output of multiple independent modules and aggregates the results to provide (1) an evaluation and (2) recommendations for the user interface. We should explain how the aggregator weights the output of different modules (this could be based on historical performance of each module, or perhaps based on E.J.&#039;s cognitive/HCI guidelines).&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Sample Modules ==&lt;br /&gt;
&lt;br /&gt;
=== CPM-GOMS ===&lt;br /&gt;
* Owners: [[User:Steven Ellis | Steven Ellis]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This module will provide interface evaluations and suggestions based on a CPM-GOMS model of cognition for the given interface. It will provide a quantitative, predictive, cognition-based parameterization of usability. From empirically collected data, user trajectories through the model (critical paths) will be examined, highlighting bottlenecks within the interface, and offering suggested alterations to the interface to induce more optimal user trajectories.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== HCI Guidelines ===&lt;br /&gt;
* Owner: [[User:E J Kalafarski | E J Kalafarski]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section could include an example or two of established design guidelines that could easily be implemented as modules.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Fitts&#039;s Law ===&lt;br /&gt;
* Owner: [[User:Jon Ericson | Jon Ericson]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This simple module will use Fitts&#039;s Law to provide interface evaluations and recommendations.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Affordances ===&lt;br /&gt;
* Owner: [[User:Jon Ericson | Jon Ericson]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This simple module will provide interface evaluations and recommendations based on perceived affordances and if possible a comparison to actual affordances.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Interruptions ===&lt;br /&gt;
* Owner: [[User:Andrew Bragdon | Andrew Bragdon]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;While most usability testing focuses on low-level task performance, prior work suggests that users also organize their work at a higher level, that of the working sphere. This module evaluates a given interface with respect to these higher-level considerations, such as task switching.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Working Memory Load ===&lt;br /&gt;
* Owner: [[User:Gideon Goldin | Gideon Goldin]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This module measures how much information the user needs to retain in memory while interacting with the interface and makes suggestions for improvements.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Automaticity of Interaction ===&lt;br /&gt;
* Owner: [[User:Gideon Goldin | Gideon Goldin]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Measures how easily the interaction with the interface becomes automatic with experience and makes suggestions for improvements.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Integration into the Design Process ==&lt;br /&gt;
* Owner: [[User:Ian Spector | Ian Spector]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section outlines the interface design process and describes at which stages, and how, our proposal fits into it.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Preliminary Results =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Each person should write a single paragraph describing fictional (or not) preliminary results pertaining to the specific aims and contributions they own.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= [Criticisms] =&lt;br /&gt;
* Owner: [[User:Andrew Bragdon | Andrew Bragdon]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Any criticisms or questions we have regarding the proposal can go here.&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Adam Darlow</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal_(draft_2)&amp;diff=3075</id>
		<title>CS295J/Research proposal (draft 2)</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal_(draft_2)&amp;diff=3075"/>
		<updated>2009-04-21T15:10:45Z</updated>

		<summary type="html">&lt;p&gt;Adam Darlow: /* Specific Aims and Contributions (to be separated later) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
* Owners: [[User:Adam Darlow | Adam Darlow]], [[User:Eric Sodomka | Eric Sodomka]]&lt;br /&gt;
&lt;br /&gt;
We propose a framework for interface evaluation and recommendation that integrates behavioral models and design guidelines from both cognitive science and HCI. Our framework behaves like a committee of specialized experts, where each expert provides its own assessment of the interface, given its particular knowledge of HCI or cognitive science. For example, an expert may provide an evaluation based on the GOMS method, Fitts&#039;s law, Maeda&#039;s design principles, or cognitive models of learning and memory. An aggregator collects all of these assessments and weights the opinions of each expert, and outputs to the developer a merged evaluation score and a weighted set of recommendations.&lt;br /&gt;
&lt;br /&gt;
Systematic methods of estimating human performance with computer interfaces are used only sparingly despite their obvious benefits, because of the overhead involved in implementing them. In order to test an interface, both manual coding systems like the GOMS variations and user simulations like those based on ACT-R/PM and EPIC require detailed pseudo-code descriptions of the user&#039;s workflow with the application interface. Any change to the interface then requires extensive changes to the pseudo-code, a major problem given the trial-and-error nature of interface design. Updating the models themselves is even more complicated: even an expert in CPM-GOMS, for example, cannot necessarily adapt it to account for results from new cognitive research.&lt;br /&gt;
&lt;br /&gt;
Our proposal makes automatic interface evaluation easier to use in several ways. First, we propose to divide the input to the system into three separate parts: functionality, user traces and interface. By separating the functionality from the interface, even radical interface changes will require updating only that part of the input. The user traces are also defined over the functionality, so they too translate across different interfaces. Second, the parallel modular architecture lowers the &amp;quot;entry cost&amp;quot; of using the tool. The system includes a broad array of evaluation modules, some very simple and some more complex. The simpler modules use only a subset of the input that a system like GOMS or ACT-R would require, which means that while more input will still lead to better output, interface designers can get minimal evaluations with only minimal information. For example, a visual search module may not require any functionality or user traces in order to determine whether all interface elements are distinct enough to be easy to find. Finally, a parallel modular architecture is much easier to augment with relevant cognitive and design evaluations.&lt;br /&gt;
&lt;br /&gt;
= Overview of Contributions =&lt;br /&gt;
* Owners: [[User:Adam Darlow | Adam Darlow]], [[User:Eric Sodomka | Eric Sodomka]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note: For reference, this is the aggregate set of contributions from last week. We can edit, add to or prune this list as needed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Design and user-study evaluation of novel techniques for collecting and filtering user traces with respect to user goals.&lt;br /&gt;
* Extensible, low-cost architecture for integrating pupil-tracking, muscle-activity monitoring, and auditory recognition with user traces in existing applications.&lt;br /&gt;
* System for isolating cognitive, perceptual, and motor tasks from an interface design to generate CPM-GOMS models for analysis.&lt;br /&gt;
* Design and quantitative evaluation of semi-automated techniques for extracting critical paths from an existing CPM-GOMS model.&lt;br /&gt;
* Novel algorithm for analyzing and optimizing critical paths based on established research in cognitive science.&lt;br /&gt;
* A design tool that can provide a designer with recommendations for interface improvements, based on a &#039;&#039;unified matrix&#039;&#039; of cognitive principles and heuristic design guidelines.&lt;br /&gt;
* The creation of a language for &#039;&#039;abstractly representing user interfaces&#039;&#039; in terms of the layout of graphical components and the functional relationships between these components.&lt;br /&gt;
* A system for &#039;&#039;generating interaction histories&#039;&#039; within user interfaces to facilitate individual and collaborative scientific discovery, and to enable researchers to more easily document and analyze user behavior.&lt;br /&gt;
* A system that takes user traces and creates a GOMS model that decomposes user actions  into various cognitive, perceptual, and motor control tasks. &lt;br /&gt;
* The development of other evaluation methods using various cognitive/HCI models and guidelines.&lt;br /&gt;
* A design tool that can provide a designer with recommendations for interface improvements. These recommendations can be made for a specific type of user or for the average user, as expressed by a utility function.&lt;br /&gt;
&lt;br /&gt;
= Background / Related Work =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Specific Aims and Contributions (to be separated later) =&lt;br /&gt;
&#039;&#039;&#039;TODO: Add some intro paragraph here.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Collecting User Traces ==&lt;br /&gt;
* Owner: [[User:Trevor O&#039;Brien | Trevor O&#039;Brien]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Given an interface, our first step is to run users on it and log their interactions. We want to log actions at a sufficiently low level that a GOMS model can be generated from the data. When possible, we&#039;d also like to log data using additional sensing technologies, such as pupil tracking, muscle-activity monitoring and auditory recognition; this information will help us isolate the respective contributions of perception, cognition and motor skills to user performance.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Generalizing User Traces ==&lt;br /&gt;
* Owner: [[User:Trevor O&#039;Brien | Trevor O&#039;Brien]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;The user traces that are collected are tied to a specific interface. In order to use them with different interfaces to the same application, they should be generalized to be based only on the functional description of the application and the user&#039;s goal hierarchy. This would abstract away from actions like accessing a menu.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;In addition to specific user traces, many modules could use a transition probability matrix based on interaction predictions.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Parallel Framework for Evaluation Modules ==&lt;br /&gt;
* Owner: [[User:Adam Darlow | Adam Darlow]], [[User:Eric Sodomka | Eric Sodomka]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section will describe in more detail the inputs, outputs and architecture that were presented in the introduction.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Evaluation and Recommendation via Modules ==&lt;br /&gt;
* Owner: [[User:E J Kalafarski | E J Kalafarski]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section describes the aggregator, which takes the output of multiple independent modules and aggregates the results to provide (1) an evaluation and (2) recommendations for the user interface. We should explain how the aggregator weights the output of different modules (this could be based on historical performance of each module, or perhaps based on E.J.&#039;s cognitive/HCI guidelines).&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Sample Modules ==&lt;br /&gt;
&lt;br /&gt;
=== CPM-GOMS ===&lt;br /&gt;
* Owners: [[User:Steven Ellis | Steven Ellis]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This module will provide interface evaluations and suggestions based on a CPM-GOMS model of cognition for the given interface. It will provide a quantitative, predictive, cognition-based parameterization of usability. From empirically collected data, user trajectories through the model (critical paths) will be examined, highlighting bottlenecks within the interface, and offering suggested alterations to the interface to induce more optimal user trajectories.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== HCI Guidelines ===&lt;br /&gt;
* Owner: [[User:E J Kalafarski | E J Kalafarski]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section could include an example or two of established design guidelines that could easily be implemented as modules.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Fitts&#039;s Law ===&lt;br /&gt;
* Owner: [[User:Jon Ericson | Jon Ericson]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This simple module will use Fitts&#039;s Law to provide interface evaluations and recommendations.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Affordances ===&lt;br /&gt;
* Owner: [[User:Jon Ericson | Jon Ericson]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This simple module will provide interface evaluations and recommendations based on perceived affordances and if possible a comparison to actual affordances.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Interruptions ===&lt;br /&gt;
* Owner: [[User:Andrew Bragdon | Andrew Bragdon]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;While most usability testing focuses on low-level task performance, prior work suggests that users also organize their work at a higher level, that of the working sphere. This module evaluates a given interface with respect to these higher-level considerations, such as task switching.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Working Memory Load ===&lt;br /&gt;
* Owner: [[User:Gideon Goldin | Gideon Goldin]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This module measures how much information the user needs to retain in memory while interacting with the interface and makes suggestions for improvements.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Automaticity of Interaction ===&lt;br /&gt;
* Owner: [[User:Gideon Goldin | Gideon Goldin]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Measures how easily the interaction with the interface becomes automatic with experience and makes suggestions for improvements.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Integration into the Design Process ==&lt;br /&gt;
* Owner: [[User:Ian Spector | Ian Spector]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section outlines the interface design process and describes at which stages, and how, our proposal fits into it.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Preliminary Results =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Each person should write a single paragraph describing fictional (or not) preliminary results pertaining to the specific aims and contributions they own.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= [Criticisms] =&lt;br /&gt;
* Owner: [[User:Andrew Bragdon | Andrew Bragdon]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Any criticisms or questions we have regarding the proposal can go here.&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Adam Darlow</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal_(draft_2)&amp;diff=3074</id>
		<title>CS295J/Research proposal (draft 2)</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal_(draft_2)&amp;diff=3074"/>
		<updated>2009-04-21T15:06:01Z</updated>

		<summary type="html">&lt;p&gt;Adam Darlow: /* Preliminary Results */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
* Owners: [[User:Adam Darlow | Adam Darlow]], [[User:Eric Sodomka | Eric Sodomka]]&lt;br /&gt;
&lt;br /&gt;
We propose a framework for interface evaluation and recommendation that integrates behavioral models and design guidelines from both cognitive science and HCI. Our framework behaves like a committee of specialized experts, where each expert provides its own assessment of the interface, given its particular knowledge of HCI or cognitive science. For example, an expert may provide an evaluation based on the GOMS method, Fitts&#039;s law, Maeda&#039;s design principles, or cognitive models of learning and memory. An aggregator collects all of these assessments and weights the opinions of each expert, and outputs to the developer a merged evaluation score and a weighted set of recommendations.&lt;br /&gt;
&lt;br /&gt;
Systematic methods of estimating human performance with computer interfaces are used only sparingly despite their obvious benefits, because of the overhead involved in implementing them. In order to test an interface, both manual coding systems like the GOMS variations and user simulations like those based on ACT-R/PM and EPIC require detailed pseudo-code descriptions of the user&#039;s workflow with the application interface. Any change to the interface then requires extensive changes to the pseudo-code, a major problem given the trial-and-error nature of interface design. Updating the models themselves is even more complicated: even an expert in CPM-GOMS, for example, cannot necessarily adapt it to account for results from new cognitive research.&lt;br /&gt;
&lt;br /&gt;
Our proposal makes automatic interface evaluation easier to use in several ways. First, we propose to divide the input to the system into three separate parts: functionality, user traces and interface. By separating the functionality from the interface, even radical interface changes will require updating only that part of the input. The user traces are also defined over the functionality, so they too translate across different interfaces. Second, the parallel modular architecture lowers the &amp;quot;entry cost&amp;quot; of using the tool. The system includes a broad array of evaluation modules, some very simple and some more complex. The simpler modules use only a subset of the input that a system like GOMS or ACT-R would require, which means that while more input will still lead to better output, interface designers can get minimal evaluations with only minimal information. For example, a visual search module may not require any functionality or user traces in order to determine whether all interface elements are distinct enough to be easy to find. Finally, a parallel modular architecture is much easier to augment with relevant cognitive and design evaluations.&lt;br /&gt;
&lt;br /&gt;
= Overview of Contributions =&lt;br /&gt;
* Owners: [[User:Adam Darlow | Adam Darlow]], [[User:Eric Sodomka | Eric Sodomka]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note: For reference, this is the aggregate set of contributions from last week. We can edit, add to or prune this list as needed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Design and user-study evaluation of novel techniques for collecting and filtering user traces with respect to user goals.&lt;br /&gt;
* Extensible, low-cost architecture for integrating pupil-tracking, muscle-activity monitoring, and auditory recognition with user traces in existing applications.&lt;br /&gt;
* System for isolating cognitive, perceptual, and motor tasks from an interface design to generate CPM-GOMS models for analysis.&lt;br /&gt;
* Design and quantitative evaluation of semi-automated techniques for extracting critical paths from an existing CPM-GOMS model.&lt;br /&gt;
* Novel algorithm for analyzing and optimizing critical paths based on established research in cognitive science.&lt;br /&gt;
* A design tool that can provide a designer with recommendations for interface improvements, based on a &#039;&#039;unified matrix&#039;&#039; of cognitive principles and heuristic design guidelines.&lt;br /&gt;
* The creation of a language for &#039;&#039;abstractly representing user interfaces&#039;&#039; in terms of the layout of graphical components and the functional relationships between these components.&lt;br /&gt;
* A system for &#039;&#039;generating interaction histories&#039;&#039; within user interfaces to facilitate individual and collaborative scientific discovery, and to enable researchers to more easily document and analyze user behavior.&lt;br /&gt;
* A system that takes user traces and creates a GOMS model that decomposes user actions into various cognitive, perceptual, and motor control tasks.&lt;br /&gt;
* The development of other evaluation methods using various cognitive/HCI models and guidelines.&lt;br /&gt;
* A design tool that can provide a designer with recommendations for interface improvements. These recommendations can be made for a specific type of user or for the average user, as expressed by a utility function.&lt;br /&gt;
&lt;br /&gt;
= Background / Related Work =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Specific Aims and Contributions (to be separated later) =&lt;br /&gt;
&#039;&#039;&#039;TODO: Add some intro paragraph here.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Collecting User Traces ==&lt;br /&gt;
* Owner: [[User:Trevor O&#039;Brien | Trevor O&#039;Brien]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Given an interface, our first step is to run users on the interface and log these user interactions. We want to log actions at a sufficiently low level so that a GOMS model can be generated from the data. When possible, we&#039;d also like to log data using additional sensing technologies, such as pupil-tracking, muscle-activity monitoring and auditory recognition; this information will help to analyze the explicit contributions of perception, cognition and motor skills with respect to user performance.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Generalizing User Traces ==&lt;br /&gt;
* Owner: [[User:Trevor O&#039;Brien | Trevor O&#039;Brien]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;The user traces that are collected are tied to a specific interface. In order to use them with different interfaces to the same application, they should be generalized to be based only on the functional description of the application and the user&#039;s goal hierarchy. This would abstract away from actions like accessing a menu.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;In addition to specific user traces, many modules could use a transition probability matrix based on interaction predictions.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Evaluation and Recommendation via Modules ==&lt;br /&gt;
* Owner: [[User:E J Kalafarski | E J Kalafarski]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section describes the aggregator, which takes the output of multiple independent modules and aggregates the results to provide (1) an evaluation and (2) recommendations for the user interface. We should explain how the aggregator weights the output of different modules (this could be based on historical performance of each module, or perhaps based on E.J.&#039;s cognitive/HCI guidelines).&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Sample Modules ==&lt;br /&gt;
&lt;br /&gt;
=== CPM-GOMS ===&lt;br /&gt;
* Owner: [[User:Steven Ellis | Steven Ellis]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This module will provide interface evaluations and suggestions based on a CPM-GOMS model of cognition for the given interface. It will provide a quantitative, predictive, cognition-based parameterization of usability. From empirically collected data, user trajectories through the model (critical paths) will be examined, highlighting bottlenecks within the interface, and offering suggested alterations to the interface to induce more optimal user trajectories.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== HCI Guidelines ===&lt;br /&gt;
* Owner: [[User:E J Kalafarski | E J Kalafarski]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section could include an example or two of established design guidelines that could easily be implemented as modules.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Fitts&#039;s Law ===&lt;br /&gt;
* Owner: [[User:Jon Ericson | Jon Ericson]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This simple module will use Fitts&#039;s Law to provide interface evaluations and recommendations.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Affordances ===&lt;br /&gt;
* Owner: [[User:Jon Ericson | Jon Ericson]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This simple module will provide interface evaluations and recommendations based on perceived affordances and, if possible, a comparison to actual affordances.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Interruptions ===&lt;br /&gt;
* Owner: [[User:Andrew Bragdon | Andrew Bragdon]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;While most usability testing focuses on low-level task performance, previous work suggests that users also operate at a higher, working-sphere level. This module attempts to evaluate a given interface with respect to these higher-level considerations, such as task switching.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Working Memory Load ===&lt;br /&gt;
* Owner: [[User:Gideon Goldin | Gideon Goldin]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This module measures how much information the user needs to retain in memory while interacting with the interface and makes suggestions for improvements.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Automaticity of Interaction ===&lt;br /&gt;
* Owner: [[User:Gideon Goldin | Gideon Goldin]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This module measures how easily interaction with the interface becomes automatic with experience and makes suggestions for improvements.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Integration into the Design Process ==&lt;br /&gt;
* Owner: [[User:Ian Spector | Ian Spector]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section outlines the process of designing a user interface and describes where and how our proposed system fits into that process.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Preliminary Results =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Each person should come up with a single paragraph describing fictional (or not) preliminary results pertaining to their owned specific aims and contributions.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= [Criticisms] =&lt;br /&gt;
* Owner: [[User:Andrew Bragdon | Andrew Bragdon]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Any criticisms or questions we have regarding the proposal can go here.&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Adam Darlow</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal_(draft_2)&amp;diff=3073</id>
		<title>CS295J/Research proposal (draft 2)</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal_(draft_2)&amp;diff=3073"/>
		<updated>2009-04-21T15:04:14Z</updated>

		<summary type="html">&lt;p&gt;Adam Darlow: /* Specific Aims and Contributions (to be separated later) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
* Owners: [[User:Adam Darlow | Adam Darlow]], [[User:Eric Sodomka | Eric Sodomka]]&lt;br /&gt;
&lt;br /&gt;
We propose a framework for interface evaluation and recommendation that integrates behavioral models and design guidelines from both cognitive science and HCI. Our framework behaves like a committee of specialized experts, where each expert provides its own assessment of the interface, given its particular knowledge of HCI or cognitive science. For example, an expert may provide an evaluation based on the GOMS method, Fitts&#039;s law, Maeda&#039;s design principles, or cognitive models of learning and memory. An aggregator collects all of these assessments and weights the opinions of each expert, and outputs to the developer a merged evaluation score and a weighted set of recommendations.&lt;br /&gt;
&lt;br /&gt;
Systematic methods of estimating human performance with computer interfaces are used only sparingly despite their obvious benefits, because of the overhead involved in implementing them. In order to test an interface, both manual coding systems like the GOMS variations and user simulations like those based on ACT-R/PM and EPIC require detailed pseudo-code descriptions of the user&#039;s workflow with the application interface. Any change to the interface then requires extensive changes to the pseudo-code, a major problem given the trial-and-error nature of interface design. Updating the models themselves is even more complicated. Even an expert in CPM-GOMS, for example, can&#039;t necessarily adapt it to take into account results from new cognitive research.&lt;br /&gt;
&lt;br /&gt;
Our proposal makes automatic interface evaluation easier to use in several ways. First, we propose to divide the input to the system into three separate parts: functionality, user traces, and interface. By separating the functionality from the interface, even radical interface changes will require updating only that part of the input. The user traces are also defined over the functionality, so they too translate across different interfaces. Second, the parallel modular architecture allows for a lower &amp;quot;entry cost&amp;quot; for using the tool. The system includes a broad array of evaluation modules, some very simple and some more complex. The simpler modules use only a subset of the input that a system like GOMS or ACT-R would require. This means that while more input will still lead to better output, interface designers can get minimal evaluations with only minimal information. For example, a visual search module may not require any functionality or user traces in order to determine whether all interface elements are distinct enough to be easy to find. Finally, a parallel modular architecture is much easier to augment with relevant cognitive and design evaluations.&lt;br /&gt;
&lt;br /&gt;
= Overview of Contributions =&lt;br /&gt;
* Owners: [[User:Adam Darlow | Adam Darlow]], [[User:Eric Sodomka | Eric Sodomka]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note: For reference, this is the aggregate set of contributions from last week. Maybe we can edit/add/remove from this as needed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Design and user-study evaluation of novel techniques for collecting and filtering user traces with respect to user goals.&lt;br /&gt;
* Extensible, low-cost architecture for integrating pupil-tracking, muscle-activity monitoring, and auditory recognition with user traces in existing applications.&lt;br /&gt;
* System for isolating cognitive, perceptual, and motor tasks from an interface design to generate CPM-GOMS models for analysis.&lt;br /&gt;
* Design and quantitative evaluation of semi-automated techniques for extracting critical paths from an existing CPM-GOMS model.&lt;br /&gt;
* Novel algorithm for analyzing and optimizing critical paths based on established research in cognitive science.&lt;br /&gt;
* A design tool that can provide a designer with recommendations for interface improvements, based on a &#039;&#039;unified matrix&#039;&#039; of cognitive principles and heuristic design guidelines.&lt;br /&gt;
* The creation of a language for &#039;&#039;abstractly representing user interfaces&#039;&#039; in terms of the layout of graphical components and the functional relationships between these components.&lt;br /&gt;
* A system for &#039;&#039;generating interaction histories&#039;&#039; within user interfaces to facilitate individual and collaborative scientific discovery, and to enable researchers to more easily document and analyze user behavior.&lt;br /&gt;
* A system that takes user traces and creates a GOMS model that decomposes user actions into various cognitive, perceptual, and motor control tasks.&lt;br /&gt;
* The development of other evaluation methods using various cognitive/HCI models and guidelines.&lt;br /&gt;
* A design tool that can provide a designer with recommendations for interface improvements. These recommendations can be made for a specific type of user or for the average user, as expressed by a utility function.&lt;br /&gt;
&lt;br /&gt;
= Background / Related Work =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Specific Aims and Contributions (to be separated later) =&lt;br /&gt;
&#039;&#039;&#039;TODO: Add some intro paragraph here.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Collecting User Traces ==&lt;br /&gt;
* Owner: [[User:Trevor O&#039;Brien | Trevor O&#039;Brien]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Given an interface, our first step is to run users on the interface and log these user interactions. We want to log actions at a sufficiently low level so that a GOMS model can be generated from the data. When possible, we&#039;d also like to log data using additional sensing technologies, such as pupil-tracking, muscle-activity monitoring and auditory recognition; this information will help to analyze the explicit contributions of perception, cognition and motor skills with respect to user performance.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Generalizing User Traces ==&lt;br /&gt;
* Owner: [[User:Trevor O&#039;Brien | Trevor O&#039;Brien]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;The user traces that are collected are tied to a specific interface. In order to use them with different interfaces to the same application, they should be generalized to be based only on the functional description of the application and the user&#039;s goal hierarchy. This would abstract away from actions like accessing a menu.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;In addition to specific user traces, many modules could use a transition probability matrix based on interaction predictions.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Evaluation and Recommendation via Modules ==&lt;br /&gt;
* Owner: [[User:E J Kalafarski | E J Kalafarski]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section describes the aggregator, which takes the output of multiple independent modules and aggregates the results to provide (1) an evaluation and (2) recommendations for the user interface. We should explain how the aggregator weights the output of different modules (this could be based on historical performance of each module, or perhaps based on E.J.&#039;s cognitive/HCI guidelines).&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Sample Modules ==&lt;br /&gt;
&lt;br /&gt;
=== CPM-GOMS ===&lt;br /&gt;
* Owner: [[User:Steven Ellis | Steven Ellis]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This module will provide interface evaluations and suggestions based on a CPM-GOMS model of cognition for the given interface. It will provide a quantitative, predictive, cognition-based parameterization of usability. From empirically collected data, user trajectories through the model (critical paths) will be examined, highlighting bottlenecks within the interface, and offering suggested alterations to the interface to induce more optimal user trajectories.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== HCI Guidelines ===&lt;br /&gt;
* Owner: [[User:E J Kalafarski | E J Kalafarski]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section could include an example or two of established design guidelines that could easily be implemented as modules.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Fitts&#039;s Law ===&lt;br /&gt;
* Owner: [[User:Jon Ericson | Jon Ericson]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This simple module will use Fitts&#039;s Law to provide interface evaluations and recommendations.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Affordances ===&lt;br /&gt;
* Owner: [[User:Jon Ericson | Jon Ericson]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This simple module will provide interface evaluations and recommendations based on perceived affordances and, if possible, a comparison to actual affordances.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Interruptions ===&lt;br /&gt;
* Owner: [[User:Andrew Bragdon | Andrew Bragdon]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;While most usability testing focuses on low-level task performance, previous work suggests that users also operate at a higher, working-sphere level. This module attempts to evaluate a given interface with respect to these higher-level considerations, such as task switching.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Working Memory Load ===&lt;br /&gt;
* Owner: [[User:Gideon Goldin | Gideon Goldin]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This module measures how much information the user needs to retain in memory while interacting with the interface and makes suggestions for improvements.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Automaticity of Interaction ===&lt;br /&gt;
* Owner: [[User:Gideon Goldin | Gideon Goldin]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This module measures how easily interaction with the interface becomes automatic with experience and makes suggestions for improvements.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Integration into the Design Process ==&lt;br /&gt;
* Owner: [[User:Ian Spector | Ian Spector]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section outlines the process of designing a user interface and describes where and how our proposed system fits into that process.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Preliminary Results =&lt;br /&gt;
* Owner: [[User:Ian Spector | Ian Spector]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Initially, let&#039;s make up some fictional (but reasonable) preliminary results that we&#039;d like to see and think we can accomplish before submitting the proposal.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= [Criticisms] =&lt;br /&gt;
* Owner: [[User:Andrew Bragdon | Andrew Bragdon]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Any criticisms or questions we have regarding the proposal can go here.&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Adam Darlow</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal_(draft_2)&amp;diff=3072</id>
		<title>CS295J/Research proposal (draft 2)</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal_(draft_2)&amp;diff=3072"/>
		<updated>2009-04-21T15:02:23Z</updated>

		<summary type="html">&lt;p&gt;Adam Darlow: /* Methodology */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
* Owners: [[User:Adam Darlow | Adam Darlow]], [[User:Eric Sodomka | Eric Sodomka]]&lt;br /&gt;
&lt;br /&gt;
We propose a framework for interface evaluation and recommendation that integrates behavioral models and design guidelines from both cognitive science and HCI. Our framework behaves like a committee of specialized experts, where each expert provides its own assessment of the interface, given its particular knowledge of HCI or cognitive science. For example, an expert may provide an evaluation based on the GOMS method, Fitts&#039;s law, Maeda&#039;s design principles, or cognitive models of learning and memory. An aggregator collects all of these assessments and weights the opinions of each expert, and outputs to the developer a merged evaluation score and a weighted set of recommendations.&lt;br /&gt;
&lt;br /&gt;
Systematic methods of estimating human performance with computer interfaces are used only sparingly despite their obvious benefits, because of the overhead involved in implementing them. In order to test an interface, both manual coding systems like the GOMS variations and user simulations like those based on ACT-R/PM and EPIC require detailed pseudo-code descriptions of the user&#039;s workflow with the application interface. Any change to the interface then requires extensive changes to the pseudo-code, a major problem given the trial-and-error nature of interface design. Updating the models themselves is even more complicated. Even an expert in CPM-GOMS, for example, can&#039;t necessarily adapt it to take into account results from new cognitive research.&lt;br /&gt;
&lt;br /&gt;
Our proposal makes automatic interface evaluation easier to use in several ways. First, we propose to divide the input to the system into three separate parts: functionality, user traces, and interface. By separating the functionality from the interface, even radical interface changes will require updating only that part of the input. The user traces are also defined over the functionality, so they too translate across different interfaces. Second, the parallel modular architecture allows for a lower &amp;quot;entry cost&amp;quot; for using the tool. The system includes a broad array of evaluation modules, some very simple and some more complex. The simpler modules use only a subset of the input that a system like GOMS or ACT-R would require. This means that while more input will still lead to better output, interface designers can get minimal evaluations with only minimal information. For example, a visual search module may not require any functionality or user traces in order to determine whether all interface elements are distinct enough to be easy to find. Finally, a parallel modular architecture is much easier to augment with relevant cognitive and design evaluations.&lt;br /&gt;
&lt;br /&gt;
= Overview of Contributions =&lt;br /&gt;
* Owners: [[User:Adam Darlow | Adam Darlow]], [[User:Eric Sodomka | Eric Sodomka]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note: For reference, this is the aggregate set of contributions from last week. Maybe we can edit/add/remove from this as needed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Design and user-study evaluation of novel techniques for collecting and filtering user traces with respect to user goals.&lt;br /&gt;
* Extensible, low-cost architecture for integrating pupil-tracking, muscle-activity monitoring, and auditory recognition with user traces in existing applications.&lt;br /&gt;
* System for isolating cognitive, perceptual, and motor tasks from an interface design to generate CPM-GOMS models for analysis.&lt;br /&gt;
* Design and quantitative evaluation of semi-automated techniques for extracting critical paths from an existing CPM-GOMS model.&lt;br /&gt;
* Novel algorithm for analyzing and optimizing critical paths based on established research in cognitive science.&lt;br /&gt;
* A design tool that can provide a designer with recommendations for interface improvements, based on a &#039;&#039;unified matrix&#039;&#039; of cognitive principles and heuristic design guidelines.&lt;br /&gt;
* The creation of a language for &#039;&#039;abstractly representing user interfaces&#039;&#039; in terms of the layout of graphical components and the functional relationships between these components.&lt;br /&gt;
* A system for &#039;&#039;generating interaction histories&#039;&#039; within user interfaces to facilitate individual and collaborative scientific discovery, and to enable researchers to more easily document and analyze user behavior.&lt;br /&gt;
* A system that takes user traces and creates a GOMS model that decomposes user actions into various cognitive, perceptual, and motor control tasks.&lt;br /&gt;
* The development of other evaluation methods using various cognitive/HCI models and guidelines.&lt;br /&gt;
* A design tool that can provide a designer with recommendations for interface improvements. These recommendations can be made for a specific type of user or for the average user, as expressed by a utility function.&lt;br /&gt;
&lt;br /&gt;
= Background / Related Work =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Specific Aims and Contributions (to be separated later) =&lt;br /&gt;
&#039;&#039;&#039;TODO: Add some intro paragraph here.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Collecting User Traces ==&lt;br /&gt;
* Owner: [[User:Trevor O&#039;Brien | Trevor O&#039;Brien]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Given an interface, our first step is to run users on the interface and log these user interactions. We want to log actions at a sufficiently low level so that a GOMS model can be generated from the data. When possible, we&#039;d also like to log data using additional sensing technologies, such as pupil-tracking, muscle-activity monitoring and auditory recognition; this information will help to analyze the explicit contributions of perception, cognition and motor skills with respect to user performance.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Generalizing User Traces ==&lt;br /&gt;
* Owner: [[User:Trevor O&#039;Brien | Trevor O&#039;Brien]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;The user traces that are collected are tied to a specific interface. In order to use them with different interfaces to the same application, they should be generalized to be based only on the functional description of the application and the user&#039;s goal hierarchy. This would abstract away from actions like accessing a menu.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;In addition to specific user traces, many modules could use a transition probability matrix based on interaction predictions.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Evaluation and Recommendation via Modules ==&lt;br /&gt;
* Owner: [[User:E J Kalafarski | E J Kalafarski]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section describes the aggregator, which takes the output of multiple independent modules and aggregates the results to provide (1) an evaluation and (2) recommendations for the user interface. We should explain how the aggregator weights the output of different modules (this could be based on historical performance of each module, or perhaps based on E.J.&#039;s cognitive/HCI guidelines).&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Sample Modules ==&lt;br /&gt;
&lt;br /&gt;
=== CPM-GOMS ===&lt;br /&gt;
* Owner: [[User:Steven Ellis | Steven Ellis]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This module will provide interface evaluations and suggestions based on a CPM-GOMS model of cognition for the given interface. It will provide a quantitative, predictive, cognition-based parameterization of usability. From empirically collected data, user trajectories through the model (critical paths) will be examined, highlighting bottlenecks within the interface, and offering suggested alterations to the interface to induce more optimal user trajectories.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== HCI Guidelines ===&lt;br /&gt;
* Owner: [[User:E J Kalafarski | E J Kalafarski]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section could include an example or two of established design guidelines that could easily be implemented as modules.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Fitts&#039;s Law ===&lt;br /&gt;
* Owner: [[User:Jon Ericson | Jon Ericson]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This simple module will use Fitts&#039;s Law to provide interface evaluations and recommendations.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Affordances ===&lt;br /&gt;
* Owner: [[User:Jon Ericson | Jon Ericson]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This simple module will provide interface evaluations and recommendations based on perceived affordances and, if possible, a comparison to actual affordances.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Interruptions ===&lt;br /&gt;
* Owner: [[User:Andrew Bragdon | Andrew Bragdon]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;While most usability testing focuses on low-level task performance, previous work suggests that users also operate at a higher, working-sphere level. This module attempts to evaluate a given interface with respect to these higher-level considerations, such as task switching.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Working Memory Load ===&lt;br /&gt;
* Owner: [[User:Gideon Goldin | Gideon Goldin]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This module measures how much information the user needs to retain in memory while interacting with the interface and makes suggestions for improvements.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Automaticity of Interaction ===&lt;br /&gt;
* Owner: [[User:Gideon Goldin | Gideon Goldin]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This module measures how easily interaction with the interface becomes automatic with experience and makes suggestions for improvements.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Preliminary Results =&lt;br /&gt;
* Owner: [[User:Ian Spector | Ian Spector]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Initially, let&#039;s make up some fictional (but reasonable) preliminary results that we&#039;d like to see and think we can accomplish before submitting the proposal.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= [Criticisms] =&lt;br /&gt;
* Owner: [[User:Andrew Bragdon | Andrew Bragdon]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Any criticisms or questions we have regarding the proposal can go here.&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Adam Darlow</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal_(draft_2)&amp;diff=3071</id>
		<title>CS295J/Research proposal (draft 2)</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal_(draft_2)&amp;diff=3071"/>
		<updated>2009-04-21T14:55:42Z</updated>

		<summary type="html">&lt;p&gt;Adam Darlow: Undo revision 3070 by Adam Darlow (Talk)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
* Owners: [[User:Adam Darlow | Adam Darlow]], [[User:Eric Sodomka | Eric Sodomka]]&lt;br /&gt;
&lt;br /&gt;
We propose a framework for interface evaluation and recommendation that integrates behavioral models and design guidelines from both cognitive science and HCI. Our framework behaves like a committee of specialized experts, where each expert provides its own assessment of the interface, given its particular knowledge of HCI or cognitive science. For example, an expert may provide an evaluation based on the GOMS method, Fitts&#039;s law, Maeda&#039;s design principles, or cognitive models of learning and memory. An aggregator collects all of these assessments and weights the opinions of each expert, and outputs to the developer a merged evaluation score and a weighted set of recommendations.&lt;br /&gt;
&lt;br /&gt;
Systematic methods of estimating human performance with computer interfaces are used only sparingly despite their obvious benefits, because of the overhead involved in implementing them. In order to test an interface, both manual coding systems like the GOMS variations and user simulations like those based on ACT-R/PM and EPIC require detailed pseudo-code descriptions of the user&#039;s workflow with the application interface. Any change to the interface then requires extensive changes to the pseudo-code, a major problem given the trial-and-error nature of interface design. Updating the models themselves is even more complicated. Even an expert in CPM-GOMS, for example, can&#039;t necessarily adapt it to take into account results from new cognitive research.&lt;br /&gt;
&lt;br /&gt;
Our proposal makes automatic interface evaluation easier to use in several ways. First, we propose to divide the input to the system into three separate parts: functionality, user traces, and interface. By separating the functionality from the interface, even radical interface changes will require updating only that part of the input. The user traces are also defined over the functionality, so they too translate across different interfaces. Second, the parallel modular architecture allows for a lower &amp;quot;entry cost&amp;quot; for using the tool. The system includes a broad array of evaluation modules, some very simple and some more complex. The simpler modules use only a subset of the input that a system like GOMS or ACT-R would require. This means that while more input will still lead to better output, interface designers can get minimal evaluations with only minimal information. For example, a visual search module may not require any functionality or user traces in order to determine whether all interface elements are distinct enough to be easy to find. Finally, a parallel modular architecture is much easier to augment with relevant cognitive and design evaluations.&lt;br /&gt;
&lt;br /&gt;
= Overview of Contributions =&lt;br /&gt;
* Owners: [[User:Adam Darlow | Adam Darlow]], [[User:Eric Sodomka | Eric Sodomka]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note: For reference, this is the aggregate set of contributions from last week. We can edit/add/remove from this as needed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Design and user-study evaluation of novel techniques for collecting and filtering user traces with respect to user goals.&lt;br /&gt;
* Extensible, low-cost architecture for integrating pupil-tracking, muscle-activity monitoring, and auditory recognition with user traces in existing applications.&lt;br /&gt;
* System for isolating cognitive, perceptual, and motor tasks from an interface design to generate CPM-GOMS models for analysis.&lt;br /&gt;
* Design and quantitative evaluation of semi-automated techniques for extracting critical paths from an existing CPM-GOMS model.&lt;br /&gt;
* Novel algorithm for analyzing and optimizing critical paths based on established research in cognitive science.&lt;br /&gt;
* A design tool that can provide a designer with recommendations for interface improvements, based on a &#039;&#039;unified matrix&#039;&#039; of cognitive principles and heuristic design guidelines.&lt;br /&gt;
* The creation of a language for &#039;&#039;abstractly representing user interfaces&#039;&#039; in terms of the layout of graphical components and the functional relationships between these components.&lt;br /&gt;
* A system for &#039;&#039;generating interaction histories&#039;&#039; within user interfaces to facilitate individual and collaborative scientific discovery, and to enable researchers to more easily document and analyze user behavior.&lt;br /&gt;
* A system that takes user traces and creates a GOMS model that decomposes user actions  into various cognitive, perceptual, and motor control tasks. &lt;br /&gt;
* The development of other evaluation methods using various cognitive/HCI models and guidelines.&lt;br /&gt;
* A design tool that can provide a designer with recommendations for interface improvements. These recommendations can be made for a specific type of user or for the average user, as expressed by a utility function.&lt;br /&gt;
&lt;br /&gt;
= Background / Related Work =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Methodology =&lt;br /&gt;
&#039;&#039;&#039;TODO: Add some intro paragraph here.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Collecting User Traces ==&lt;br /&gt;
* Owner: [[User:Trevor O&#039;Brien | Trevor O&#039;Brien]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Given an interface, our first step is to run users on the interface and log these user interactions. We want to log actions at a sufficiently low level so that a GOMS model can be generated from the data. When possible, we&#039;d also like to log data using additional sensing technologies, such as pupil-tracking, muscle-activity monitoring and auditory recognition; this information will help to analyze the explicit contributions of perception, cognition and motor skills with respect to user performance.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Generalizing User Traces ==&lt;br /&gt;
* Owner: [[User:Trevor O&#039;Brien | Trevor O&#039;Brien]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;The user traces that are collected are tied to a specific interface. In order to use them with different interfaces to the same application, they should be generalized to be based only on the functional description of the application and the user&#039;s goal hierarchy. This would abstract away from actions like accessing a menu.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;In addition to specific user traces, many modules could use a transition probability matrix based on interaction predictions.&#039;&#039;&lt;br /&gt;
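To make the transition probability matrix concrete, here is a minimal sketch of how one might be estimated from generalized traces. The function name and the trace representation (sequences of abstract action names) are illustrative assumptions, not a committed part of the proposal:&lt;br /&gt;

```python
from collections import Counter, defaultdict

def transition_matrix(traces):
    """Estimate a first-order transition probability matrix from user
    traces, where each trace is a sequence of abstract action names
    (functional actions, not interface-specific widget clicks).
    Returns a dict mapping action to a dict of next-action probabilities."""
    counts = defaultdict(Counter)
    for trace in traces:
        for current, nxt in zip(trace, trace[1:]):
            counts[current][nxt] += 1
    matrix = {}
    for action, nexts in counts.items():
        total = sum(nexts.values())
        matrix[action] = {a: n / total for a, n in nexts.items()}
    return matrix

# Hypothetical traces over functional actions rather than widgets:
traces = [["open", "edit", "save"], ["open", "edit", "edit", "save"]]
probs = transition_matrix(traces)
```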
&lt;br /&gt;
== Evaluation and Recommendation via Modules ==&lt;br /&gt;
* Owner: [[User:E J Kalafarski | E J Kalafarski]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section describes the aggregator, which takes the output of multiple independent modules and aggregates the results to provide (1) an evaluation and (2) recommendations for the user interface. We should explain how the aggregator weights the output of different modules (this could be based on historical performance of each module, or perhaps based on E.J.&#039;s cognitive/HCI guidelines).&#039;&#039;&lt;br /&gt;
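As a sketch of one possible weighting scheme, the aggregator could take a simple weighted average of module scores and rank recommendations by their module&#039;s weight. The function and data shapes below are illustrative assumptions, not a committed design:&lt;br /&gt;

```python
def aggregate(module_outputs, weights):
    """Combine per-module evaluations into one merged score and a
    ranked recommendation list. Each module output is a
    (score, recommendations) pair; weights map module name to a
    weight, e.g. derived from each module's historical accuracy."""
    total_weight = sum(weights[name] for name in module_outputs)
    merged_score = sum(
        weights[name] * score for name, (score, _) in module_outputs.items()
    ) / total_weight
    ranked = []
    for name, (_, recs) in module_outputs.items():
        for rec in recs:
            ranked.append((weights[name], rec))
    ranked.sort(reverse=True)  # highest-weight recommendations first
    return merged_score, [rec for _, rec in ranked]

# Hypothetical module outputs and weights:
merged, recommendations = aggregate(
    {"fitts": (0.8, ["enlarge the save button"]),
     "memory": (0.4, ["reduce the number of modes"])},
    {"fitts": 2.0, "memory": 1.0},
)
```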
&lt;br /&gt;
== Sample Modules ==&lt;br /&gt;
&lt;br /&gt;
=== CPM-GOMS ===&lt;br /&gt;
* Owners: [[User:Steven Ellis | Steven Ellis]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This module will provide interface evaluations and suggestions based on a CPM-GOMS model of cognition for the given interface. It will provide a quantitative, predictive, cognition-based parameterization of usability. From empirically collected data, user trajectories through the model (critical paths) will be examined, highlighting bottlenecks within the interface, and offering suggested alterations to the interface to induce more optimal user trajectories.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== HCI Guidelines ===&lt;br /&gt;
* Owner: [[User:E J Kalafarski | E J Kalafarski]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section could include an example or two of established design guidelines that could easily be implemented as modules.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Fitts&#039;s Law ===&lt;br /&gt;
* Owner: [[User:Jon Ericson | Jon Ericson]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This simple module will use Fitts&#039;s Law to provide interface evaluations and recommendations.&#039;&#039;&lt;br /&gt;
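For concreteness, a minimal sketch of the underlying computation using the Shannon formulation of Fitts&#039;s law; the intercept and slope are device- and user-dependent, and the default values here are placeholders only:&lt;br /&gt;

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Predicted time (seconds) to acquire a target of the given width
    at the given distance, via the Shannon formulation of Fitts's law:
    MT = a + b * log2(D/W + 1).
    The intercept a and slope b must be fit per device and user;
    the defaults are illustrative placeholders."""
    index_of_difficulty = math.log2(distance / width + 1.0)
    return a + b * index_of_difficulty

# A small, distant button should be slower to hit than a large, nearby one.
near_large = fitts_movement_time(distance=100, width=50)
far_small = fitts_movement_time(distance=600, width=10)
```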
&lt;br /&gt;
=== Affordances ===&lt;br /&gt;
* Owner: [[User:Jon Ericson | Jon Ericson]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This simple module will provide interface evaluations and recommendations based on perceived affordances and, if possible, a comparison to actual affordances.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Interruptions ===&lt;br /&gt;
* Owner: [[User:Andrew Bragdon | Andrew Bragdon]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;While most usability testing focuses on low-level task performance, prior work suggests that users also operate at a higher, working-sphere level. This module attempts to evaluate a given interface with respect to these higher-level considerations, such as task switching.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Working Memory Load ===&lt;br /&gt;
* Owner: [[User:Gideon Goldin | Gideon Goldin]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This module measures how much information the user needs to retain in memory while interacting with the interface and makes suggestions for improvements.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Automaticity of Interaction ===&lt;br /&gt;
* Owner: [[User:Gideon Goldin | Gideon Goldin]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Measures how easily the interaction with the interface becomes automatic with experience and makes suggestions for improvements.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Preliminary Results =&lt;br /&gt;
* Owner: [[User:Ian Spector | Ian Spector]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Initially, let&#039;s make up some fictional (but reasonable) preliminary results that we&#039;d like to see and think we can accomplish before submitting the proposal.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= [Criticisms] =&lt;br /&gt;
* Owner: [[User:Andrew Bragdon | Andrew Bragdon]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Any criticisms or questions we have regarding the proposal can go here.&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Adam Darlow</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal_(draft_2)&amp;diff=3070</id>
		<title>CS295J/Research proposal (draft 2)</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal_(draft_2)&amp;diff=3070"/>
		<updated>2009-04-21T14:55:00Z</updated>

		<summary type="html">&lt;p&gt;Adam Darlow: /* [Criticisms] */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
* Owners: [[User:Adam Darlow | Adam Darlow]], [[User:Eric Sodomka | Eric Sodomka]]&lt;br /&gt;
&lt;br /&gt;
We propose a framework for interface evaluation and recommendation that integrates behavioral models and design guidelines from both cognitive science and HCI. Our framework behaves like a committee of specialized experts, where each expert provides its own assessment of the interface, given its particular knowledge of HCI or cognitive science. For example, an expert may provide an evaluation based on the GOMS method, Fitts&#039;s law, Maeda&#039;s design principles, or cognitive models of learning and memory. An aggregator collects these assessments, weights the opinion of each expert, and outputs to the developer a merged evaluation score and a weighted set of recommendations.&lt;br /&gt;
&lt;br /&gt;
Systematic methods of estimating human performance with computer interfaces are used only rarely, despite their clear benefits, because of the overhead involved in applying them. To test an interface, both manual coding systems such as the GOMS variants and user simulations such as those based on ACT-R/PM and EPIC require detailed pseudo-code descriptions of the user&#039;s workflow with the application interface. Any change to the interface then requires extensive changes to the pseudo-code, a major problem given the trial-and-error nature of interface design. Updating the models themselves is even more complicated: even an expert in CPM-GOMS, for example, cannot necessarily adapt it to take into account results from new cognitive research.&lt;br /&gt;
&lt;br /&gt;
Our proposal makes automatic interface evaluation easier to use in several ways. First, we propose to divide the input to the system into three separate parts: functionality, user traces, and interface. Because the functionality is separated from the interface, even radical interface changes require updating only that part of the input. The user traces are likewise defined over the functionality, so they too carry over across different interfaces. Second, the parallel modular architecture lowers the &amp;quot;entry cost&amp;quot; of using the tool. The system includes a broad array of evaluation modules, some very simple and some more complex. The simpler modules use only a subset of the input that a system like GOMS or ACT-R would require, so while more input still leads to better output, interface designers can get minimal evaluations from minimal information. For example, a visual search module may not require any functionality or user traces to determine whether all interface elements are distinct enough to be easy to find. Finally, a parallel modular architecture is much easier to augment with new cognitive and design evaluations.&lt;br /&gt;
&lt;br /&gt;
= Overview of Contributions =&lt;br /&gt;
* Owners: [[User:Adam Darlow | Adam Darlow]], [[User:Eric Sodomka | Eric Sodomka]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note: For reference, this is the aggregate set of contributions from last week. We can edit/add/remove from this as needed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Design and user-study evaluation of novel techniques for collecting and filtering user traces with respect to user goals.&lt;br /&gt;
* Extensible, low-cost architecture for integrating pupil-tracking, muscle-activity monitoring, and auditory recognition with user traces in existing applications.&lt;br /&gt;
* System for isolating cognitive, perceptual, and motor tasks from an interface design to generate CPM-GOMS models for analysis.&lt;br /&gt;
* Design and quantitative evaluation of semi-automated techniques for extracting critical paths from an existing CPM-GOMS model.&lt;br /&gt;
* Novel algorithm for analyzing and optimizing critical paths based on established research in cognitive science.&lt;br /&gt;
* A design tool that can provide a designer with recommendations for interface improvements, based on a &#039;&#039;unified matrix&#039;&#039; of cognitive principles and heuristic design guidelines.&lt;br /&gt;
* The creation of a language for &#039;&#039;abstractly representing user interfaces&#039;&#039; in terms of the layout of graphical components and the functional relationships between these components.&lt;br /&gt;
* A system for &#039;&#039;generating interaction histories&#039;&#039; within user interfaces to facilitate individual and collaborative scientific discovery, and to enable researchers to more easily document and analyze user behavior.&lt;br /&gt;
* A system that takes user traces and creates a GOMS model that decomposes user actions  into various cognitive, perceptual, and motor control tasks. &lt;br /&gt;
* The development of other evaluation methods using various cognitive/HCI models and guidelines.&lt;br /&gt;
* A design tool that can provide a designer with recommendations for interface improvements. These recommendations can be made for a specific type of user or for the average user, as expressed by a utility function.&lt;br /&gt;
&lt;br /&gt;
= Background / Related Work =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Methodology =&lt;br /&gt;
&#039;&#039;&#039;TODO: Add some intro paragraph here.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Collecting User Traces ==&lt;br /&gt;
* Owner: [[User:Trevor O&#039;Brien | Trevor O&#039;Brien]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Given an interface, our first step is to run users on the interface and log these user interactions. We want to log actions at a sufficiently low level so that a GOMS model can be generated from the data. When possible, we&#039;d also like to log data using additional sensing technologies, such as pupil-tracking, muscle-activity monitoring and auditory recognition; this information will help to analyze the explicit contributions of perception, cognition and motor skills with respect to user performance.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Generalizing User Traces ==&lt;br /&gt;
* Owner: [[User:Trevor O&#039;Brien | Trevor O&#039;Brien]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;The user traces that are collected are tied to a specific interface. In order to use them with different interfaces to the same application, they should be generalized to be based only on the functional description of the application and the user&#039;s goal hierarchy. This would abstract away from actions like accessing a menu.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;In addition to specific user traces, many modules could use a transition probability matrix based on interaction predictions.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Evaluation and Recommendation via Modules ==&lt;br /&gt;
* Owner: [[User:E J Kalafarski | E J Kalafarski]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section describes the aggregator, which takes the output of multiple independent modules and aggregates the results to provide (1) an evaluation and (2) recommendations for the user interface. We should explain how the aggregator weights the output of different modules (this could be based on historical performance of each module, or perhaps based on E.J.&#039;s cognitive/HCI guidelines).&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Sample Modules ==&lt;br /&gt;
&lt;br /&gt;
=== CPM-GOMS ===&lt;br /&gt;
* Owners: [[User:Steven Ellis | Steven Ellis]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This module will provide interface evaluations and suggestions based on a CPM-GOMS model of cognition for the given interface. It will provide a quantitative, predictive, cognition-based parameterization of usability. From empirically collected data, user trajectories through the model (critical paths) will be examined, highlighting bottlenecks within the interface, and offering suggested alterations to the interface to induce more optimal user trajectories.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== HCI Guidelines ===&lt;br /&gt;
* Owner: [[User:E J Kalafarski | E J Kalafarski]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section could include an example or two of established design guidelines that could easily be implemented as modules.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Fitts&#039;s Law ===&lt;br /&gt;
* Owner: [[User:Jon Ericson | Jon Ericson]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This simple module will use Fitts&#039;s Law to provide interface evaluations and recommendations.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Affordances ===&lt;br /&gt;
* Owner: [[User:Jon Ericson | Jon Ericson]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This simple module will provide interface evaluations and recommendations based on perceived affordances and, if possible, a comparison to actual affordances.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Interruptions ===&lt;br /&gt;
* Owner: [[User:Andrew Bragdon | Andrew Bragdon]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;While most usability testing focuses on low-level task performance, prior work suggests that users also operate at a higher, working-sphere level. This module attempts to evaluate a given interface with respect to these higher-level considerations, such as task switching.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Working Memory Load ===&lt;br /&gt;
* Owner: [[User:Gideon Goldin | Gideon Goldin]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This module measures how much information the user needs to retain in memory while interacting with the interface and makes suggestions for improvements.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Automaticity of Interaction ===&lt;br /&gt;
* Owner: [[User:Gideon Goldin | Gideon Goldin]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Measures how easily the interaction with the interface becomes automatic with experience and makes suggestions for improvements.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Preliminary Results =&lt;br /&gt;
* Owner: [[User:Ian Spector | Ian Spector]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Initially, let&#039;s make up some fictional (but reasonable) preliminary results that we&#039;d like to see and think we can accomplish before submitting the proposal.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= [Criticisms] =&lt;br /&gt;
* Owner: [[User:Andrew Bragdon | Andrew Bragdon]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Any criticisms or questions we have regarding the proposal can go here (NP-completeness...).&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Adam Darlow</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal_(draft_2)&amp;diff=3069</id>
		<title>CS295J/Research proposal (draft 2)</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal_(draft_2)&amp;diff=3069"/>
		<updated>2009-04-21T14:47:08Z</updated>

		<summary type="html">&lt;p&gt;Adam Darlow: /* Generalizing User Traces */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
* Owners: [[User:Adam Darlow | Adam Darlow]], [[User:Eric Sodomka | Eric Sodomka]]&lt;br /&gt;
&lt;br /&gt;
We propose a framework for interface evaluation and recommendation that integrates behavioral models and design guidelines from both cognitive science and HCI. Our framework behaves like a committee of specialized experts, where each expert provides its own assessment of the interface, given its particular knowledge of HCI or cognitive science. For example, an expert may provide an evaluation based on the GOMS method, Fitts&#039;s law, Maeda&#039;s design principles, or cognitive models of learning and memory. An aggregator collects these assessments, weights the opinion of each expert, and outputs to the developer a merged evaluation score and a weighted set of recommendations.&lt;br /&gt;
&lt;br /&gt;
Systematic methods of estimating human performance with computer interfaces are used only rarely, despite their clear benefits, because of the overhead involved in applying them. To test an interface, both manual coding systems such as the GOMS variants and user simulations such as those based on ACT-R/PM and EPIC require detailed pseudo-code descriptions of the user&#039;s workflow with the application interface. Any change to the interface then requires extensive changes to the pseudo-code, a major problem given the trial-and-error nature of interface design. Updating the models themselves is even more complicated: even an expert in CPM-GOMS, for example, cannot necessarily adapt it to take into account results from new cognitive research.&lt;br /&gt;
&lt;br /&gt;
Our proposal makes automatic interface evaluation easier to use in several ways. First, we propose to divide the input to the system into three separate parts: functionality, user traces, and interface. Because the functionality is separated from the interface, even radical interface changes require updating only that part of the input. The user traces are likewise defined over the functionality, so they too carry over across different interfaces. Second, the parallel modular architecture lowers the &amp;quot;entry cost&amp;quot; of using the tool. The system includes a broad array of evaluation modules, some very simple and some more complex. The simpler modules use only a subset of the input that a system like GOMS or ACT-R would require, so while more input still leads to better output, interface designers can get minimal evaluations from minimal information. For example, a visual search module may not require any functionality or user traces to determine whether all interface elements are distinct enough to be easy to find. Finally, a parallel modular architecture is much easier to augment with new cognitive and design evaluations.&lt;br /&gt;
&lt;br /&gt;
= Overview of Contributions =&lt;br /&gt;
* Owners: [[User:Adam Darlow | Adam Darlow]], [[User:Eric Sodomka | Eric Sodomka]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note: For reference, this is the aggregate set of contributions from last week. We can edit/add/remove from this as needed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Design and user-study evaluation of novel techniques for collecting and filtering user traces with respect to user goals.&lt;br /&gt;
* Extensible, low-cost architecture for integrating pupil-tracking, muscle-activity monitoring, and auditory recognition with user traces in existing applications.&lt;br /&gt;
* System for isolating cognitive, perceptual, and motor tasks from an interface design to generate CPM-GOMS models for analysis.&lt;br /&gt;
* Design and quantitative evaluation of semi-automated techniques for extracting critical paths from an existing CPM-GOMS model.&lt;br /&gt;
* Novel algorithm for analyzing and optimizing critical paths based on established research in cognitive science.&lt;br /&gt;
* A design tool that can provide a designer with recommendations for interface improvements, based on a &#039;&#039;unified matrix&#039;&#039; of cognitive principles and heuristic design guidelines.&lt;br /&gt;
* The creation of a language for &#039;&#039;abstractly representing user interfaces&#039;&#039; in terms of the layout of graphical components and the functional relationships between these components.&lt;br /&gt;
* A system for &#039;&#039;generating interaction histories&#039;&#039; within user interfaces to facilitate individual and collaborative scientific discovery, and to enable researchers to more easily document and analyze user behavior.&lt;br /&gt;
* A system that takes user traces and creates a GOMS model that decomposes user actions  into various cognitive, perceptual, and motor control tasks. &lt;br /&gt;
* The development of other evaluation methods using various cognitive/HCI models and guidelines.&lt;br /&gt;
* A design tool that can provide a designer with recommendations for interface improvements. These recommendations can be made for a specific type of user or for the average user, as expressed by a utility function.&lt;br /&gt;
&lt;br /&gt;
= Background / Related Work =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Methodology =&lt;br /&gt;
&#039;&#039;&#039;TODO: Add some intro paragraph here.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Collecting User Traces ==&lt;br /&gt;
* Owner: [[User:Trevor O&#039;Brien | Trevor O&#039;Brien]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Given an interface, our first step is to run users on the interface and log these user interactions. We want to log actions at a sufficiently low level so that a GOMS model can be generated from the data. When possible, we&#039;d also like to log data using additional sensing technologies, such as pupil-tracking, muscle-activity monitoring and auditory recognition; this information will help to analyze the explicit contributions of perception, cognition and motor skills with respect to user performance.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Generalizing User Traces ==&lt;br /&gt;
* Owner: [[User:Trevor O&#039;Brien | Trevor O&#039;Brien]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;The user traces that are collected are tied to a specific interface. In order to use them with different interfaces to the same application, they should be generalized to be based only on the functional description of the application and the user&#039;s goal hierarchy. This would abstract away from actions like accessing a menu.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;In addition to specific user traces, many modules could use a transition probability matrix based on interaction predictions.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Evaluation and Recommendation via Modules ==&lt;br /&gt;
* Owner: [[User:E J Kalafarski | E J Kalafarski]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section describes the aggregator, which takes the output of multiple independent modules and aggregates the results to provide (1) an evaluation and (2) recommendations for the user interface. We should explain how the aggregator weights the output of different modules (this could be based on historical performance of each module, or perhaps based on E.J.&#039;s cognitive/HCI guidelines).&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Sample Modules ==&lt;br /&gt;
&lt;br /&gt;
=== CPM-GOMS ===&lt;br /&gt;
* Owners: [[User:Steven Ellis | Steven Ellis]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This module will provide interface evaluations and suggestions based on a CPM-GOMS model of cognition for the given interface. It will provide a quantitative, predictive, cognition-based parameterization of usability. From empirically collected data, user trajectories through the model (critical paths) will be examined, highlighting bottlenecks within the interface, and offering suggested alterations to the interface to induce more optimal user trajectories.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== HCI Guidelines ===&lt;br /&gt;
* Owner: [[User:E J Kalafarski | E J Kalafarski]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section could include an example or two of established design guidelines that could easily be implemented as modules.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Fitts&#039;s Law ===&lt;br /&gt;
* Owner: [[User:Jon Ericson | Jon Ericson]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This simple module will use Fitts&#039;s Law to provide interface evaluations and recommendations.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Affordances ===&lt;br /&gt;
* Owner: [[User:Jon Ericson | Jon Ericson]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This simple module will provide interface evaluations and recommendations based on perceived affordances and, if possible, a comparison to actual affordances.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Interruptions ===&lt;br /&gt;
* Owner: [[User:Andrew Bragdon | Andrew Bragdon]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;While most usability testing focuses on low-level task performance, prior work suggests that users also operate at a higher, working-sphere level. This module attempts to evaluate a given interface with respect to these higher-level considerations, such as task switching.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Working Memory Load ===&lt;br /&gt;
* Owner: [[User:Gideon Goldin | Gideon Goldin]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This module measures how much information the user needs to retain in memory while interacting with the interface and makes suggestions for improvements.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Automaticity of Interaction ===&lt;br /&gt;
* Owner: [[User:Gideon Goldin | Gideon Goldin]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Measures how easily the interaction with the interface becomes automatic with experience and makes suggestions for improvements.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Preliminary Results =&lt;br /&gt;
* Owner: [[User:Ian Spector | Ian Spector]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Initially, let&#039;s make up some fictional (but reasonable) preliminary results that we&#039;d like to see and think we can accomplish before submitting the proposal.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= [Criticisms] =&lt;br /&gt;
* Owner: [[User:Andrew Bragdon | Andrew Bragdon]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Any criticisms or questions we have regarding the proposal can go here.&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Adam Darlow</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal_(draft_2)&amp;diff=3068</id>
		<title>CS295J/Research proposal (draft 2)</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal_(draft_2)&amp;diff=3068"/>
		<updated>2009-04-21T14:45:55Z</updated>

		<summary type="html">&lt;p&gt;Adam Darlow: /* Generalizing User Traces */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
* Owners: [[User:Adam Darlow | Adam Darlow]], [[User:Eric Sodomka | Eric Sodomka]]&lt;br /&gt;
&lt;br /&gt;
We propose a framework for interface evaluation and recommendation that integrates behavioral models and design guidelines from both cognitive science and HCI. Our framework behaves like a committee of specialized experts, where each expert provides its own assessment of the interface, given its particular knowledge of HCI or cognitive science. For example, an expert may provide an evaluation based on the GOMS method, Fitts&#039;s law, Maeda&#039;s design principles, or cognitive models of learning and memory. An aggregator collects these assessments, weights the opinion of each expert, and outputs to the developer a merged evaluation score and a weighted set of recommendations.&lt;br /&gt;
&lt;br /&gt;
Systematic methods of estimating human performance with computer interfaces are used only rarely, despite their clear benefits, because of the overhead involved in applying them. To test an interface, both manual coding systems such as the GOMS variants and user simulations such as those based on ACT-R/PM and EPIC require detailed pseudo-code descriptions of the user&#039;s workflow with the application interface. Any change to the interface then requires extensive changes to the pseudo-code, a major problem given the trial-and-error nature of interface design. Updating the models themselves is even more complicated: even an expert in CPM-GOMS, for example, cannot necessarily adapt it to take into account results from new cognitive research.&lt;br /&gt;
&lt;br /&gt;
Our proposal makes automatic interface evaluation easier to use in several ways. First, we propose to divide the input to the system into three separate parts: functionality, user traces, and interface. Because the functionality is separated from the interface, even radical interface changes require updating only that part of the input. The user traces are likewise defined over the functionality, so they too carry over across different interfaces. Second, the parallel modular architecture lowers the &amp;quot;entry cost&amp;quot; of using the tool. The system includes a broad array of evaluation modules, some very simple and some more complex. The simpler modules use only a subset of the input that a system like GOMS or ACT-R would require, so while more input still leads to better output, interface designers can get minimal evaluations from minimal information. For example, a visual search module may not require any functionality or user traces to determine whether all interface elements are distinct enough to be easy to find. Finally, a parallel modular architecture is much easier to augment with new cognitive and design evaluations.&lt;br /&gt;
&lt;br /&gt;
= Overview of Contributions =&lt;br /&gt;
* Owners: [[User:Adam Darlow | Adam Darlow]], [[User:Eric Sodomka | Eric Sodomka]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note: For reference, this is the aggregate set of contributions from last week. Maybe we can edit/add/remove from this as needed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Design and user-study evaluation of novel techniques for collecting and filtering user traces with respect to user goals.&lt;br /&gt;
* Extensible, low-cost architecture for integrating pupil-tracking, muscle-activity monitoring, and auditory recognition with user traces in existing applications.&lt;br /&gt;
* System for isolating cognitive, perceptual, and motor tasks from an interface design to generate CPM-GOMS models for analysis.&lt;br /&gt;
* Design and quantitative evaluation of semi-automated techniques for extracting critical paths from an existing CPM-GOMS model.&lt;br /&gt;
* Novel algorithm for analyzing and optimizing critical paths based on established research in cognitive science.&lt;br /&gt;
* A design tool that can provide a designer with recommendations for interface improvements, based on a &#039;&#039;unified matrix&#039;&#039; of cognitive principles and heuristic design guidelines.&lt;br /&gt;
* The creation of a language for &#039;&#039;abstractly representing user interfaces&#039;&#039; in terms of the layout of graphical components and the functional relationships between these components.&lt;br /&gt;
* A system for &#039;&#039;generating interaction histories&#039;&#039; within user interfaces to facilitate individual and collaborative scientific discovery, and to enable researchers to more easily document and analyze user behavior.&lt;br /&gt;
* A system that takes user traces and creates a GOMS model that decomposes user actions into various cognitive, perceptual, and motor-control tasks.&lt;br /&gt;
* The development of other evaluation methods using various cognitive/HCI models and guidelines.&lt;br /&gt;
* A design tool that can provide a designer with recommendations for interface improvements. These recommendations can be made for a specific type of user or for the average user, as expressed by a utility function.&lt;br /&gt;
&lt;br /&gt;
= Background / Related Work =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Methodology =&lt;br /&gt;
&#039;&#039;&#039;TODO: Add some intro paragraph here.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Collecting User Traces ==&lt;br /&gt;
* Owner: [[User:Trevor O&#039;Brien | Trevor O&#039;Brien]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Given an interface, our first step is to run users on the interface and log these user interactions. We want to log actions at a sufficiently low level so that a GOMS model can be generated from the data. When possible, we&#039;d also like to log data using additional sensing technologies, such as pupil-tracking, muscle-activity monitoring and auditory recognition; this information will help to analyze the explicit contributions of perception, cognition and motor skills with respect to user performance.&#039;&#039;&lt;br /&gt;
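A minimal sketch of such low-level logging (assuming a callback-based UI toolkit; the event kinds and fields are invented for illustration):&lt;br /&gt;

```python
import time

class TraceLogger:
    """Collects timestamped, low-level interaction events (clicks,
    keystrokes, focus changes) from which a GOMS model can later be built."""

    def __init__(self):
        self.events = []

    def log(self, kind, target, **details):
        # Each event keeps its wall-clock time so durations between
        # operators can be recovered during model construction.
        self.events.append({"t": time.time(), "kind": kind,
                            "target": target, **details})

# A toolkit would call, e.g., logger.log("click", "save_button", x=40, y=12)
# from its event callbacks; sensor streams (pupil, EMG, audio markers)
# could be logged through the same interface.
```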
&lt;br /&gt;
== Generalizing User Traces ==&lt;br /&gt;
* Owner: [[User:Trevor O&#039;Brien | Trevor O&#039;Brien]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;The user traces that are collected are tied to a specific interface. To use them with different interfaces to the same application, they should be generalized to depend only on the functional description of the application and the user&#039;s goal hierarchy, abstracting away interface-specific actions like accessing a menu.&lt;br /&gt;
&lt;br /&gt;
In addition to specific user traces, many modules could use a transition probability matrix based on interaction predictions.&#039;&#039;&lt;br /&gt;
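One plausible form for such a matrix, estimated directly from traces (the trace format here, lists of functional actions, is assumed purely for illustration):&lt;br /&gt;

```python
from collections import Counter, defaultdict

def transition_matrix(traces):
    """Estimate P(next action | current action) from observed traces.
    Each trace is a list of functional actions, e.g. ["open", "edit", "save"]."""
    counts = defaultdict(Counter)
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            counts[a][b] += 1
    # Normalize each row of counts into probabilities.
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}
```

Modules that need only aggregate behavior (e.g., predicting which commands follow which) could consume this matrix instead of raw traces.&lt;br /&gt;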
&lt;br /&gt;
== Evaluation and Recommendation via Modules ==&lt;br /&gt;
* Owner: [[User:E J Kalafarski | E J Kalafarski]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section describes the aggregator, which takes the output of multiple independent modules and aggregates the results to provide (1) an evaluation and (2) recommendations for the user interface. We should explain how the aggregator weights the output of different modules (this could be based on historical performance of each module, or perhaps based on E.J.&#039;s cognitive/HCI guidelines).&#039;&#039;&lt;br /&gt;
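One simple weighting scheme the aggregator could use is a weighted average of module scores, with recommendations ranked by their module&#039;s weight; everything below is a hypothetical sketch, and all names and numbers are invented.&lt;br /&gt;

```python
# Hypothetical aggregator sketch. Weights might come from each module's
# historical predictive accuracy; here they are given directly.

def aggregate(module_outputs, weights):
    """module_outputs maps module name to (score, recommendations);
    weights maps module name to a nonnegative weight."""
    total_w = sum(weights[m] for m in module_outputs)
    score = sum(weights[m] * s for m, (s, _) in module_outputs.items()) / total_w
    recs = []
    for m, (_, module_recs) in module_outputs.items():
        for r in module_recs:
            recs.append((weights[m], r))
    recs.sort(reverse=True)  # highest-weight recommendations first
    return score, recs
```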
&lt;br /&gt;
== Sample Modules ==&lt;br /&gt;
&lt;br /&gt;
=== CPM-GOMS ===&lt;br /&gt;
* Owner: [[User:Steven Ellis | Steven Ellis]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This module will provide interface evaluations and suggestions based on a CPM-GOMS model of cognition for the given interface. It will provide a quantitative, predictive, cognition-based parameterization of usability. From empirically collected data, user trajectories through the model (critical paths) will be examined, highlighting bottlenecks within the interface, and offering suggested alterations to the interface to induce more optimal user trajectories.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== HCI Guidelines ===&lt;br /&gt;
* Owner: [[User:E J Kalafarski | E J Kalafarski]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section could include an example or two of established design guidelines that could easily be implemented as modules.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Fitts&#039;s Law ===&lt;br /&gt;
* Owner: [[User:Jon Ericson | Jon Ericson]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This simple module will use Fitts&#039;s Law to provide interface evaluations and recommendations.&#039;&#039;&lt;br /&gt;
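As a concrete illustration, the Shannon formulation of Fitts&#039;s law predicts movement time from target distance and width; the constants a and b below are placeholders that would be fit per input device.&lt;br /&gt;

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.1):
    """Shannon formulation: MT = a + b * log2(D / W + 1).
    a and b are device-dependent constants (placeholder values here,
    in seconds); distance and width share any one unit."""
    return a + b * math.log2(distance / width + 1)
```

Larger or closer targets lower the predicted time, which is the basis for this module&#039;s recommendations (e.g., enlarging frequently used buttons).&lt;br /&gt;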
&lt;br /&gt;
=== Affordances ===&lt;br /&gt;
* Owner: [[User:Jon Ericson | Jon Ericson]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This simple module will provide interface evaluations and recommendations based on perceived affordances and if possible a comparison to actual affordances.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Interruptions ===&lt;br /&gt;
* Owner: [[User:Andrew Bragdon | Andrew Bragdon]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;While most usability testing focuses on low-level task performance, previous work suggests that users also operate at a higher, working-sphere level. This module evaluates a given interface with respect to these higher-level considerations, such as task switching.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Working Memory Load ===&lt;br /&gt;
* Owner: [[User:Gideon Goldin | Gideon Goldin]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This module measures how much information the user needs to retain in memory while interacting with the interface and makes suggestions for improvements.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Automaticity of Interaction ===&lt;br /&gt;
* Owner: [[User:Gideon Goldin | Gideon Goldin]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Measures how easily the interaction with the interface becomes automatic with experience and makes suggestions for improvements.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Preliminary Results =&lt;br /&gt;
* Owner: [[User:Ian Spector | Ian Spector]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Initially, let&#039;s make up some fictional (but reasonable) preliminary results that we&#039;d like to see and think we can accomplish before submitting the proposal.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= [Criticisms] =&lt;br /&gt;
* Owner: [[User:Andrew Bragdon | Andrew Bragdon]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Any criticisms or questions we have regarding the proposal can go here.&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Adam Darlow</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal_(draft_2)&amp;diff=3066</id>
		<title>CS295J/Research proposal (draft 2)</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal_(draft_2)&amp;diff=3066"/>
		<updated>2009-04-21T14:43:18Z</updated>

		<summary type="html">&lt;p&gt;Adam Darlow: /* Collecting User Traces */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
* Owners: [[User:Adam Darlow | Adam Darlow]], [[User:Eric Sodomka | Eric Sodomka]]&lt;br /&gt;
&lt;br /&gt;
We propose a framework for interface evaluation and recommendation that integrates behavioral models and design guidelines from both cognitive science and HCI. Our framework behaves like a committee of specialized experts, where each expert provides its own assessment of the interface, given its particular knowledge of HCI or cognitive science. For example, an expert may provide an evaluation based on the GOMS method, Fitts&#039;s law, Maeda&#039;s design principles, or cognitive models of learning and memory. An aggregator collects all of these assessments and weights the opinions of each expert, and outputs to the developer a merged evaluation score and a weighted set of recommendations.&lt;br /&gt;
&lt;br /&gt;
Systematic methods of estimating human performance with computer interfaces are used only sparingly despite their obvious benefits, largely because of the overhead involved in implementing them. To test an interface, both manual coding systems like the GOMS variants and user simulations like those based on ACT-R/PM and EPIC require detailed pseudo-code descriptions of the user&#039;s workflow with the application interface. Any change to the interface then requires extensive changes to the pseudo-code, a major problem given the trial-and-error nature of interface design. Updating the models themselves is even more complicated: even an expert in CPM-GOMS, for example, cannot necessarily adapt it to take into account results from new cognitive research.&lt;br /&gt;
&lt;br /&gt;
Our proposal makes automatic interface evaluation easier to use in several ways. First, we propose to divide the input to the system into three separate parts: functionality, user traces, and interface. By separating the functionality from the interface, even radical interface changes require updating only the interface part of the input. The user traces are also defined over the functionality, so they too translate across different interfaces. Second, the parallel modular architecture lowers the &amp;quot;entry cost&amp;quot; of using the tool. The system includes a broad array of evaluation modules, some very simple and some more complex. The simpler modules use only a subset of the input that a system like GOMS or ACT-R would require, so while more input still leads to better output, interface designers can get minimal evaluations from minimal information. For example, a visual search module may not require any functionality or user traces to determine whether all interface elements are distinct enough to be easy to find. Finally, a parallel modular architecture is much easier to augment with new cognitive and design evaluations.&lt;br /&gt;
&lt;br /&gt;
= Overview of Contributions =&lt;br /&gt;
* Owners: [[User:Adam Darlow | Adam Darlow]], [[User:Eric Sodomka | Eric Sodomka]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note: For reference, this is the aggregate set of contributions from last week. Maybe we can edit/add/remove from this as needed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Design and user-study evaluation of novel techniques for collecting and filtering user traces with respect to user goals.&lt;br /&gt;
* Extensible, low-cost architecture for integrating pupil-tracking, muscle-activity monitoring, and auditory recognition with user traces in existing applications.&lt;br /&gt;
* System for isolating cognitive, perceptual, and motor tasks from an interface design to generate CPM-GOMS models for analysis.&lt;br /&gt;
* Design and quantitative evaluation of semi-automated techniques for extracting critical paths from an existing CPM-GOMS model.&lt;br /&gt;
* Novel algorithm for analyzing and optimizing critical paths based on established research in cognitive science.&lt;br /&gt;
* A design tool that can provide a designer with recommendations for interface improvements, based on a &#039;&#039;unified matrix&#039;&#039; of cognitive principles and heuristic design guidelines.&lt;br /&gt;
* The creation of a language for &#039;&#039;abstractly representing user interfaces&#039;&#039; in terms of the layout of graphical components and the functional relationships between these components.&lt;br /&gt;
* A system for &#039;&#039;generating interaction histories&#039;&#039; within user interfaces to facilitate individual and collaborative scientific discovery, and to enable researchers to more easily document and analyze user behavior.&lt;br /&gt;
* A system that takes user traces and creates a GOMS model that decomposes user actions into various cognitive, perceptual, and motor-control tasks.&lt;br /&gt;
* The development of other evaluation methods using various cognitive/HCI models and guidelines.&lt;br /&gt;
* A design tool that can provide a designer with recommendations for interface improvements. These recommendations can be made for a specific type of user or for the average user, as expressed by a utility function.&lt;br /&gt;
&lt;br /&gt;
= Background / Related Work =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Methodology =&lt;br /&gt;
&#039;&#039;&#039;TODO: Add some intro paragraph here.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Collecting User Traces ==&lt;br /&gt;
* Owner: [[User:Trevor O&#039;Brien | Trevor O&#039;Brien]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Given an interface, our first step is to run users on the interface and log these user interactions. We want to log actions at a sufficiently low level so that a GOMS model can be generated from the data. When possible, we&#039;d also like to log data using additional sensing technologies, such as pupil-tracking, muscle-activity monitoring and auditory recognition; this information will help to analyze the explicit contributions of perception, cognition and motor skills with respect to user performance.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Generalizing User Traces ==&lt;br /&gt;
* Owner: [[User:Trevor O&#039;Brien | Trevor O&#039;Brien]]&lt;br /&gt;
&lt;br /&gt;
== Evaluation and Recommendation via Modules ==&lt;br /&gt;
* Owner: [[User:E J Kalafarski | E J Kalafarski]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section describes the aggregator, which takes the output of multiple independent modules and aggregates the results to provide (1) an evaluation and (2) recommendations for the user interface. We should explain how the aggregator weights the output of different modules (this could be based on historical performance of each module, or perhaps based on E.J.&#039;s cognitive/HCI guidelines).&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Sample Modules ==&lt;br /&gt;
&lt;br /&gt;
=== CPM-GOMS ===&lt;br /&gt;
* Owner: [[User:Steven Ellis | Steven Ellis]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This module will provide interface evaluations and suggestions based on a CPM-GOMS model of cognition for the given interface. It will provide a quantitative, predictive, cognition-based parameterization of usability. From empirically collected data, user trajectories through the model (critical paths) will be examined, highlighting bottlenecks within the interface, and offering suggested alterations to the interface to induce more optimal user trajectories.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== HCI Guidelines ===&lt;br /&gt;
* Owner: [[User:E J Kalafarski | E J Kalafarski]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section could include an example or two of established design guidelines that could easily be implemented as modules.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Fitts&#039;s Law ===&lt;br /&gt;
* Owner: [[User:Jon Ericson | Jon Ericson]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This simple module will use Fitts&#039;s Law to provide interface evaluations and recommendations.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Affordances ===&lt;br /&gt;
* Owner: [[User:Jon Ericson | Jon Ericson]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This simple module will provide interface evaluations and recommendations based on perceived affordances and if possible a comparison to actual affordances.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Interruptions ===&lt;br /&gt;
* Owner: [[User:Andrew Bragdon | Andrew Bragdon]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;While most usability testing focuses on low-level task performance, previous work suggests that users also operate at a higher, working-sphere level. This module evaluates a given interface with respect to these higher-level considerations, such as task switching.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Working Memory Load ===&lt;br /&gt;
* Owner: [[User:Gideon Goldin | Gideon Goldin]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This module measures how much information the user needs to retain in memory while interacting with the interface and makes suggestions for improvements.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Automaticity of Interaction ===&lt;br /&gt;
* Owner: [[User:Gideon Goldin | Gideon Goldin]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Measures how easily the interaction with the interface becomes automatic with experience and makes suggestions for improvements.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Preliminary Results =&lt;br /&gt;
* Owner: [[User:Ian Spector | Ian Spector]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Initially, let&#039;s make up some fictional (but reasonable) preliminary results that we&#039;d like to see and think we can accomplish before submitting the proposal.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= [Criticisms] =&lt;br /&gt;
* Owner: [[User:Andrew Bragdon | Andrew Bragdon]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Any criticisms or questions we have regarding the proposal can go here.&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Adam Darlow</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal_(draft_2)&amp;diff=3065</id>
		<title>CS295J/Research proposal (draft 2)</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal_(draft_2)&amp;diff=3065"/>
		<updated>2009-04-21T14:42:50Z</updated>

		<summary type="html">&lt;p&gt;Adam Darlow: /* Collecting User Traces */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
* Owners: [[User:Adam Darlow | Adam Darlow]], [[User:Eric Sodomka | Eric Sodomka]]&lt;br /&gt;
&lt;br /&gt;
We propose a framework for interface evaluation and recommendation that integrates behavioral models and design guidelines from both cognitive science and HCI. Our framework behaves like a committee of specialized experts, where each expert provides its own assessment of the interface, given its particular knowledge of HCI or cognitive science. For example, an expert may provide an evaluation based on the GOMS method, Fitts&#039;s law, Maeda&#039;s design principles, or cognitive models of learning and memory. An aggregator collects all of these assessments and weights the opinions of each expert, and outputs to the developer a merged evaluation score and a weighted set of recommendations.&lt;br /&gt;
&lt;br /&gt;
Systematic methods of estimating human performance with computer interfaces are used only sparingly despite their obvious benefits, largely because of the overhead involved in implementing them. To test an interface, both manual coding systems like the GOMS variants and user simulations like those based on ACT-R/PM and EPIC require detailed pseudo-code descriptions of the user&#039;s workflow with the application interface. Any change to the interface then requires extensive changes to the pseudo-code, a major problem given the trial-and-error nature of interface design. Updating the models themselves is even more complicated: even an expert in CPM-GOMS, for example, cannot necessarily adapt it to take into account results from new cognitive research.&lt;br /&gt;
&lt;br /&gt;
Our proposal makes automatic interface evaluation easier to use in several ways. First, we propose to divide the input to the system into three separate parts: functionality, user traces, and interface. By separating the functionality from the interface, even radical interface changes require updating only the interface part of the input. The user traces are also defined over the functionality, so they too translate across different interfaces. Second, the parallel modular architecture lowers the &amp;quot;entry cost&amp;quot; of using the tool. The system includes a broad array of evaluation modules, some very simple and some more complex. The simpler modules use only a subset of the input that a system like GOMS or ACT-R would require, so while more input still leads to better output, interface designers can get minimal evaluations from minimal information. For example, a visual search module may not require any functionality or user traces to determine whether all interface elements are distinct enough to be easy to find. Finally, a parallel modular architecture is much easier to augment with new cognitive and design evaluations.&lt;br /&gt;
&lt;br /&gt;
= Overview of Contributions =&lt;br /&gt;
* Owners: [[User:Adam Darlow | Adam Darlow]], [[User:Eric Sodomka | Eric Sodomka]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note: For reference, this is the aggregate set of contributions from last week. Maybe we can edit/add/remove from this as needed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Design and user-study evaluation of novel techniques for collecting and filtering user traces with respect to user goals.&lt;br /&gt;
* Extensible, low-cost architecture for integrating pupil-tracking, muscle-activity monitoring, and auditory recognition with user traces in existing applications.&lt;br /&gt;
* System for isolating cognitive, perceptual, and motor tasks from an interface design to generate CPM-GOMS models for analysis.&lt;br /&gt;
* Design and quantitative evaluation of semi-automated techniques for extracting critical paths from an existing CPM-GOMS model.&lt;br /&gt;
* Novel algorithm for analyzing and optimizing critical paths based on established research in cognitive science.&lt;br /&gt;
* A design tool that can provide a designer with recommendations for interface improvements, based on a &#039;&#039;unified matrix&#039;&#039; of cognitive principles and heuristic design guidelines.&lt;br /&gt;
* The creation of a language for &#039;&#039;abstractly representing user interfaces&#039;&#039; in terms of the layout of graphical components and the functional relationships between these components.&lt;br /&gt;
* A system for &#039;&#039;generating interaction histories&#039;&#039; within user interfaces to facilitate individual and collaborative scientific discovery, and to enable researchers to more easily document and analyze user behavior.&lt;br /&gt;
* A system that takes user traces and creates a GOMS model that decomposes user actions into various cognitive, perceptual, and motor-control tasks.&lt;br /&gt;
* The development of other evaluation methods using various cognitive/HCI models and guidelines.&lt;br /&gt;
* A design tool that can provide a designer with recommendations for interface improvements. These recommendations can be made for a specific type of user or for the average user, as expressed by a utility function.&lt;br /&gt;
&lt;br /&gt;
= Background / Related Work =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Methodology =&lt;br /&gt;
&#039;&#039;&#039;TODO: Add some intro paragraph here.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Collecting User Traces ==&lt;br /&gt;
* Owner: [[User:Trevor O&#039;Brien | Trevor O&#039;Brien]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Given an interface, our first step is to run users on the interface and log these user interactions. We want to log actions at a sufficiently low level so that a GOMS model can be generated from the data. When possible, we&#039;d also like to log data using additional sensing technologies, such as pupil-tracking, muscle-activity monitoring and auditory recognition; this information will help to analyze the explicit contributions of perception, cognition and motor skills with respect to user performance.&lt;br /&gt;
In addition to specific user traces, many modules could use a transition probability matrix based on interaction predictions.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Generalizing User Traces ==&lt;br /&gt;
* Owner: [[User:Trevor O&#039;Brien | Trevor O&#039;Brien]]&lt;br /&gt;
&lt;br /&gt;
== Evaluation and Recommendation via Modules ==&lt;br /&gt;
* Owner: [[User:E J Kalafarski | E J Kalafarski]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section describes the aggregator, which takes the output of multiple independent modules and aggregates the results to provide (1) an evaluation and (2) recommendations for the user interface. We should explain how the aggregator weights the output of different modules (this could be based on historical performance of each module, or perhaps based on E.J.&#039;s cognitive/HCI guidelines).&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Sample Modules ==&lt;br /&gt;
&lt;br /&gt;
=== CPM-GOMS ===&lt;br /&gt;
* Owner: [[User:Steven Ellis | Steven Ellis]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This module will provide interface evaluations and suggestions based on a CPM-GOMS model of cognition for the given interface. It will provide a quantitative, predictive, cognition-based parameterization of usability. From empirically collected data, user trajectories through the model (critical paths) will be examined, highlighting bottlenecks within the interface, and offering suggested alterations to the interface to induce more optimal user trajectories.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== HCI Guidelines ===&lt;br /&gt;
* Owner: [[User:E J Kalafarski | E J Kalafarski]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section could include an example or two of established design guidelines that could easily be implemented as modules.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Fitts&#039;s Law ===&lt;br /&gt;
* Owner: [[User:Jon Ericson | Jon Ericson]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This simple module will use Fitts&#039;s Law to provide interface evaluations and recommendations.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Affordances ===&lt;br /&gt;
* Owner: [[User:Jon Ericson | Jon Ericson]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This simple module will provide interface evaluations and recommendations based on perceived affordances and if possible a comparison to actual affordances.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Interruptions ===&lt;br /&gt;
* Owner: [[User:Andrew Bragdon | Andrew Bragdon]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;While most usability testing focuses on low-level task performance, previous work suggests that users also operate at a higher, working-sphere level. This module evaluates a given interface with respect to these higher-level considerations, such as task switching.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Working Memory Load ===&lt;br /&gt;
* Owner: [[User:Gideon Goldin | Gideon Goldin]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This module measures how much information the user needs to retain in memory while interacting with the interface and makes suggestions for improvements.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Automaticity of Interaction ===&lt;br /&gt;
* Owner: [[User:Gideon Goldin | Gideon Goldin]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Measures how easily the interaction with the interface becomes automatic with experience and makes suggestions for improvements.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Preliminary Results =&lt;br /&gt;
* Owner: [[User:Ian Spector | Ian Spector]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Initially, let&#039;s make up some fictional (but reasonable) preliminary results that we&#039;d like to see and think we can accomplish before submitting the proposal.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= [Criticisms] =&lt;br /&gt;
* Owner: [[User:Andrew Bragdon | Andrew Bragdon]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Any criticisms or questions we have regarding the proposal can go here.&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Adam Darlow</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal_(draft_2)&amp;diff=3062</id>
		<title>CS295J/Research proposal (draft 2)</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal_(draft_2)&amp;diff=3062"/>
		<updated>2009-04-21T14:37:38Z</updated>

		<summary type="html">&lt;p&gt;Adam Darlow: /* Sample Modules */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
* Owners: [[User:Adam Darlow | Adam Darlow]], [[User:Eric Sodomka | Eric Sodomka]]&lt;br /&gt;
&lt;br /&gt;
We propose a framework for interface evaluation and recommendation that integrates behavioral models and design guidelines from both cognitive science and HCI. Our framework behaves like a committee of specialized experts, where each expert provides its own assessment of the interface, given its particular knowledge of HCI or cognitive science. For example, an expert may provide an evaluation based on the GOMS method, Fitts&#039;s law, Maeda&#039;s design principles, or cognitive models of learning and memory. An aggregator collects all of these assessments and weights the opinions of each expert, and outputs to the developer a merged evaluation score and a weighted set of recommendations.&lt;br /&gt;
&lt;br /&gt;
Systematic methods of estimating human performance with computer interfaces are used only sparingly despite their obvious benefits, largely because of the overhead involved in implementing them. To test an interface, both manual coding systems like the GOMS variants and user simulations like those based on ACT-R/PM and EPIC require detailed pseudo-code descriptions of the user&#039;s workflow with the application interface. Any change to the interface then requires extensive changes to the pseudo-code, a major problem given the trial-and-error nature of interface design. Updating the models themselves is even more complicated: even an expert in CPM-GOMS, for example, cannot necessarily adapt it to take into account results from new cognitive research.&lt;br /&gt;
&lt;br /&gt;
Our proposal makes automatic interface evaluation easier to use in several ways. First, we propose to divide the input to the system into three separate parts: functionality, user traces, and interface. By separating the functionality from the interface, even radical interface changes require updating only the interface part of the input. The user traces are also defined over the functionality, so they too translate across different interfaces. Second, the parallel modular architecture lowers the &amp;quot;entry cost&amp;quot; of using the tool. The system includes a broad array of evaluation modules, some very simple and some more complex. The simpler modules use only a subset of the input that a system like GOMS or ACT-R would require, so while more input still leads to better output, interface designers can get minimal evaluations from minimal information. For example, a visual search module may not require any functionality or user traces to determine whether all interface elements are distinct enough to be easy to find. Finally, a parallel modular architecture is much easier to augment with new cognitive and design evaluations.&lt;br /&gt;
&lt;br /&gt;
= Overview of Contributions =&lt;br /&gt;
* Owners: [[User:Adam Darlow | Adam Darlow]], [[User:Eric Sodomka | Eric Sodomka]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note: For reference, this is the aggregate set of contributions from last week. Maybe we can edit/add/remove from this as needed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Design and user-study evaluation of novel techniques for collecting and filtering user traces with respect to user goals.&lt;br /&gt;
* Extensible, low-cost architecture for integrating pupil-tracking, muscle-activity monitoring, and auditory recognition with user traces in existing applications.&lt;br /&gt;
* System for isolating cognitive, perceptual, and motor tasks from an interface design to generate CPM-GOMS models for analysis.&lt;br /&gt;
* Design and quantitative evaluation of semi-automated techniques for extracting critical paths from an existing CPM-GOMS model.&lt;br /&gt;
* Novel algorithm for analyzing and optimizing critical paths based on established research in cognitive science.&lt;br /&gt;
* A design tool that can provide a designer with recommendations for interface improvements, based on a &#039;&#039;unified matrix&#039;&#039; of cognitive principles and heuristic design guidelines.&lt;br /&gt;
* The creation of a language for &#039;&#039;abstractly representing user interfaces&#039;&#039; in terms of the layout of graphical components and the functional relationships between these components.&lt;br /&gt;
* A system for &#039;&#039;generating interaction histories&#039;&#039; within user interfaces to facilitate individual and collaborative scientific discovery, and to enable researchers to more easily document and analyze user behavior.&lt;br /&gt;
* A system that takes user traces and creates a GOMS model that decomposes user actions into various cognitive, perceptual, and motor control tasks.&lt;br /&gt;
* The development of other evaluation methods using various cognitive/HCI models and guidelines.&lt;br /&gt;
* A design tool that can provide a designer with recommendations for interface improvements. These recommendations can be made for a specific type of user or for the average user, as expressed by a utility function.&lt;br /&gt;
&lt;br /&gt;
= Background / Related Work =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Methodology =&lt;br /&gt;
&#039;&#039;&#039;TODO: Add some intro paragraph here.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Collecting User Traces ==&lt;br /&gt;
* Owner: [[User:Trevor O&#039;Brien | Trevor O&#039;Brien]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Given an interface, our first step is to run users on the interface and log these user interactions. We want to log actions at a sufficiently low level so that a GOMS model can be generated from the data. When possible, we&#039;d also like to log data using additional sensing technologies, such as pupil-tracking, muscle-activity monitoring and auditory recognition; this information will help to analyze the explicit contributions of perception, cognition and motor skills with respect to user performance.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Generalizing User Traces ==&lt;br /&gt;
* Owner: [[User:Trevor O&#039;Brien | Trevor O&#039;Brien]]&lt;br /&gt;
&lt;br /&gt;
== Evaluation and Recommendation via Modules ==&lt;br /&gt;
* Owners: [[User:E J Kalafarski | E J Kalafarski]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section describes the aggregator, which takes the output of multiple independent modules and aggregates the results to provide (1) an evaluation and (2) recommendations for the user interface. We should explain how the aggregator weights the output of different modules (this could be based on historical performance of each module, or perhaps based on E.J.&#039;s cognitive/HCI guidelines).&#039;&#039;&lt;br /&gt;
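&#039;&#039;The weighting scheme above could be sketched as follows; this is a minimal illustration under assumed inputs, not a committed design. Module names, scores, weights, and recommendation strings are all illustrative placeholders.&#039;&#039;&lt;br /&gt;

```python
# Hypothetical sketch of the aggregator: each module returns a usability
# score in [0, 1] plus a list of recommendations, and the aggregator merges
# them with per-module weights (which could be derived from each module's
# historical accuracy). All names and values below are illustrative.

def aggregate(module_outputs, weights):
    """module_outputs: {name: (score, [recommendations])};
    weights: {name: weight}. Returns a merged score and weighted recs."""
    total = sum(weights[m] for m in module_outputs)
    merged = sum(weights[m] * s for m, (s, _) in module_outputs.items()) / total
    # Attach each recommendation's module weight, then rank by it.
    recs = [(weights[m], r)
            for m, (_, rs) in module_outputs.items() for r in rs]
    recs.sort(reverse=True)  # highest-weight recommendations first
    return merged, recs

outputs = {
    "fitts": (0.8, ["enlarge the Save button"]),
    "memory_load": (0.5, ["group related form fields"]),
}
score, recs = aggregate(outputs, {"fitts": 2.0, "memory_load": 1.0})
# score is (2.0*0.8 + 1.0*0.5) / 3.0 = 0.7
```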
&lt;br /&gt;
== Sample Modules ==&lt;br /&gt;
&lt;br /&gt;
=== CPM-GOMS ===&lt;br /&gt;
* Owners: [[User:Steven Ellis | Steven Ellis]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This module will provide interface evaluations and suggestions based on a CPM-GOMS model of cognition for the given interface. It will provide a quantitative, predictive, cognition-based parameterization of usability. From empirically collected data, user trajectories through the model (critical paths) will be examined, highlighting bottlenecks within the interface, and offering suggested alterations to the interface to induce more optimal user trajectories.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== HCI Guidelines ===&lt;br /&gt;
* Owner: [[User:E J Kalafarski | E J Kalafarski]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section could include an example or two of established design guidelines that could easily be implemented as modules.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Fitts&#039;s Law ===&lt;br /&gt;
* Owner: [[User:Jon Ericson | Jon Ericson]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This simple module will use Fitts&#039;s Law to provide interface evaluations and recommendations.&#039;&#039;&lt;br /&gt;
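&#039;&#039;A minimal sketch of such a check, assuming the Shannon formulation MT = a + b · log2(D/W + 1); the constants a and b here are illustrative placeholders, not empirically fitted values.&#039;&#039;&lt;br /&gt;

```python
# Sketch of a Fitts's-law evaluation using the Shannon formulation.
# The coefficients a and b are placeholder values for illustration.
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Predicted time (in seconds) to acquire a target of size `width`
    at `distance`, both in the same units (e.g. pixels)."""
    index_of_difficulty = math.log2(distance / width + 1)
    return a + b * index_of_difficulty

# A small, distant target is predicted to take longer than a large, near
# one, so the module could recommend enlarging or relocating such targets.
near_large = fitts_movement_time(distance=100, width=50)
far_small = fitts_movement_time(distance=800, width=10)
```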
&lt;br /&gt;
=== Affordances ===&lt;br /&gt;
* Owner: [[User:Jon Ericson | Jon Ericson]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This simple module will provide interface evaluations and recommendations based on perceived affordances and if possible a comparison to actual affordances.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Interruptions ===&lt;br /&gt;
* Owner: [[User:Andrew Bragdon | Andrew Bragdon]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;While most usability testing focuses on low-level task performance, prior work suggests that users also operate at a higher, working-sphere level. This module attempts to evaluate a given interface with respect to these higher-level considerations, such as task switching.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Interaction Prediction ===&lt;br /&gt;
* Owner: [[User:Trevor O&#039;Brien | Trevor O&#039;Brien]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Given the history of user interactions, we may be able to improve user performance by modifying the interface so that frequently performed interactions from a given state are more readily accessible to the user. This module recommends such modifications based on user traces.&#039;&#039;&lt;br /&gt;
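&#039;&#039;One possible shape for this module, sketched under assumptions: traces are sequences of (state, action) pairs, and a `steps_to_reach` map records how many steps an action currently takes from a state. All state and action names are hypothetical.&#039;&#039;&lt;br /&gt;

```python
# Hypothetical sketch of interaction prediction from user traces: count how
# often each action follows each interface state, then flag actions that are
# frequent but buried behind more than `max_steps` steps.
from collections import Counter, defaultdict

def action_frequencies(traces):
    """traces: list of [(state, action), ...] sequences.
    Returns {state: Counter of actions taken from that state}."""
    freq = defaultdict(Counter)
    for trace in traces:
        for state, action in trace:
            freq[state][action] += 1
    return freq

def recommend_shortcuts(freq, steps_to_reach, max_steps=1):
    """Suggest surfacing actions that are frequently used but require more
    than `max_steps` steps to reach, per the `steps_to_reach` map."""
    recs = []
    for state, counter in freq.items():
        for action, count in counter.most_common():
            if steps_to_reach.get((state, action), 0) > max_steps:
                recs.append((state, action, count))
    return recs

traces = [[("editor", "save"), ("editor", "export")], [("editor", "export")]]
freq = action_frequencies(traces)
recs = recommend_shortcuts(freq, {("editor", "export"): 3})
```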
&lt;br /&gt;
=== Working Memory Load ===&lt;br /&gt;
* Owner: [[User:Gideon Goldin | Gideon Goldin]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This module measures how much information the user needs to retain in memory while interacting with the interface and makes suggestions for improvements.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Automaticity of Interaction ===&lt;br /&gt;
* Owner: [[User:Gideon Goldin | Gideon Goldin]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Measures how easily the interaction with the interface becomes automatic with experience and makes suggestions for improvements.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Preliminary Results =&lt;br /&gt;
* Owner: [[User:Ian Spector | Ian Spector]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Initially, let&#039;s make up some fictional (but reasonable) preliminary results that we&#039;d like to see and think we can accomplish before submitting the proposal.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= [Criticisms] =&lt;br /&gt;
* Owner: [[User:Andrew Bragdon | Andrew Bragdon]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Any criticisms or questions we have regarding the proposal can go here.&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Adam Darlow</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal_(draft_2)&amp;diff=3061</id>
		<title>CS295J/Research proposal (draft 2)</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal_(draft_2)&amp;diff=3061"/>
		<updated>2009-04-21T14:35:22Z</updated>

		<summary type="html">&lt;p&gt;Adam Darlow: /* Sample Modules */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
* Owners: [[User:Adam Darlow | Adam Darlow]], [[User:Eric Sodomka | Eric Sodomka]]&lt;br /&gt;
&lt;br /&gt;
We propose a framework for interface evaluation and recommendation that integrates behavioral models and design guidelines from both cognitive science and HCI. Our framework behaves like a committee of specialized experts, where each expert provides its own assessment of the interface, given its particular knowledge of HCI or cognitive science. For example, an expert may provide an evaluation based on the GOMS method, Fitts&#039;s law, Maeda&#039;s design principles, or cognitive models of learning and memory. An aggregator collects all of these assessments, weights the opinion of each expert, and outputs to the developer a merged evaluation score and a weighted set of recommendations.&lt;br /&gt;
&lt;br /&gt;
Systematic methods of estimating human performance with computer interfaces are used only sparingly despite their obvious benefits, the reason being the overhead involved in implementing them. In order to test an interface, both manual coding systems like the GOMS variations and user simulations like those based on ACT-R/PM and EPIC require detailed pseudo-code descriptions of the user&#039;s workflow with the application interface. Any change to the interface then requires extensive changes to the pseudo-code, a major problem because of the trial-and-error nature of interface design. Updating the models themselves is even more complicated. Even an expert in CPM-GOMS, for example, can&#039;t necessarily adapt it to take into account results from new cognitive research.&lt;br /&gt;
&lt;br /&gt;
Our proposal makes automatic interface evaluation easier to use in several ways. First, we propose to divide the input to the system into three separate parts: functionality, user traces, and interface. By separating the functionality from the interface, even radical interface changes will require updating only that part of the input. The user traces are also defined over the functionality so that they too translate across different interfaces. Second, the parallel modular architecture allows for a lower &amp;quot;entry cost&amp;quot; for using the tool. The system includes a broad array of evaluation modules, some very simple and some more complex. The simpler modules use only a subset of the input that a system like GOMS or ACT-R would require. This means that while more input will still lead to better output, interface designers can get minimal evaluations with only minimal information. For example, a visual search module may not require any functionality or user traces in order to determine whether all interface elements are distinct enough to be easy to find. Finally, a parallel modular architecture is much easier to augment with relevant cognitive and design evaluations.&lt;br /&gt;
&lt;br /&gt;
= Overview of Contributions =&lt;br /&gt;
* Owners: [[User:Adam Darlow | Adam Darlow]], [[User:Eric Sodomka | Eric Sodomka]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note: For reference, this is the aggregate set of contributions from last week. Maybe we can edit/add/remove from this as needed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Design and user-study evaluation of novel techniques for collecting and filtering user traces with respect to user goals.&lt;br /&gt;
* Extensible, low-cost architecture for integrating pupil-tracking, muscle-activity monitoring, and auditory recognition with user traces in existing applications.&lt;br /&gt;
* System for isolating cognitive, perceptual, and motor tasks from an interface design to generate CPM-GOMS models for analysis.&lt;br /&gt;
* Design and quantitative evaluation of semi-automated techniques for extracting critical paths from an existing CPM-GOMS model.&lt;br /&gt;
* Novel algorithm for analyzing and optimizing critical paths based on established research in cognitive science.&lt;br /&gt;
* A design tool that can provide a designer with recommendations for interface improvements, based on a &#039;&#039;unified matrix&#039;&#039; of cognitive principles and heuristic design guidelines.&lt;br /&gt;
* The creation of a language for &#039;&#039;abstractly representing user interfaces&#039;&#039; in terms of the layout of graphical components and the functional relationships between these components.&lt;br /&gt;
* A system for &#039;&#039;generating interaction histories&#039;&#039; within user interfaces to facilitate individual and collaborative scientific discovery, and to enable researchers to more easily document and analyze user behavior.&lt;br /&gt;
* A system that takes user traces and creates a GOMS model that decomposes user actions into various cognitive, perceptual, and motor control tasks.&lt;br /&gt;
* The development of other evaluation methods using various cognitive/HCI models and guidelines.&lt;br /&gt;
* A design tool that can provide a designer with recommendations for interface improvements. These recommendations can be made for a specific type of user or for the average user, as expressed by a utility function.&lt;br /&gt;
&lt;br /&gt;
= Background / Related Work =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Methodology =&lt;br /&gt;
&#039;&#039;&#039;TODO: Add some intro paragraph here.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Collecting User Traces ==&lt;br /&gt;
* Owner: [[User:Trevor O&#039;Brien | Trevor O&#039;Brien]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Given an interface, our first step is to run users on the interface and log these user interactions. We want to log actions at a sufficiently low level so that a GOMS model can be generated from the data. When possible, we&#039;d also like to log data using additional sensing technologies, such as pupil-tracking, muscle-activity monitoring and auditory recognition; this information will help to analyze the explicit contributions of perception, cognition and motor skills with respect to user performance.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Generalizing User Traces ==&lt;br /&gt;
* Owner: [[User:Trevor O&#039;Brien | Trevor O&#039;Brien]]&lt;br /&gt;
&lt;br /&gt;
== Evaluation and Recommendation via Modules ==&lt;br /&gt;
* Owners: [[User:E J Kalafarski | E J Kalafarski]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section describes the aggregator, which takes the output of multiple independent modules and aggregates the results to provide (1) an evaluation and (2) recommendations for the user interface. We should explain how the aggregator weights the output of different modules (this could be based on historical performance of each module, or perhaps based on E.J.&#039;s cognitive/HCI guidelines).&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Sample Modules ==&lt;br /&gt;
&lt;br /&gt;
=== CPM-GOMS ===&lt;br /&gt;
* Owners: [[User:Steven Ellis | Steven Ellis]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This module will provide interface evaluations and suggestions based on a CPM-GOMS model of cognition for the given interface. It will provide a quantitative, predictive, cognition-based parameterization of usability. From empirically collected data, user trajectories through the model (critical paths) will be examined, highlighting bottlenecks within the interface, and offering suggested alterations to the interface to induce more optimal user trajectories.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Cognitive/HCI Guidelines ===&lt;br /&gt;
* Owner: [[User:E J Kalafarski | E J Kalafarski]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This module weights various interface guidelines from the HCI and cognitive science literature, and uses these weights to provide suggested improvements for the given interface. It identifies interface elements that are detrimental to user performance and suggests effective alternatives.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Fitts&#039;s Law ===&lt;br /&gt;
* Owner: [[User:Jon Ericson | Jon Ericson]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This simple module will use Fitts&#039;s Law to provide interface evaluations and recommendations.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Affordances ===&lt;br /&gt;
* Owner: [[User:Jon Ericson | Jon Ericson]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This simple module will provide interface evaluations and recommendations based on perceived affordances and if possible a comparison to actual affordances.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Interruptions ===&lt;br /&gt;
* Owner: [[User:Andrew Bragdon | Andrew Bragdon]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;While most usability testing focuses on low-level task performance, prior work suggests that users also operate at a higher, working-sphere level. This module attempts to evaluate a given interface with respect to these higher-level considerations, such as task switching.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Interaction Prediction ===&lt;br /&gt;
* Owner: [[User:Trevor O&#039;Brien | Trevor O&#039;Brien]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Given the history of user interactions, we may be able to improve user performance by modifying the interface so that frequently performed interactions from a given state are more readily accessible to the user. This module recommends such modifications based on user traces.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Working Memory Load ===&lt;br /&gt;
* Owner: [[User:Gideon Goldin | Gideon Goldin]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This module measures how much information the user needs to retain in memory while interacting with the interface and makes suggestions for improvements.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Automaticity of Interaction ===&lt;br /&gt;
* Owner: [[User:Gideon Goldin | Gideon Goldin]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Measures how easily the interaction with the interface becomes automatic with experience and makes suggestions for improvements.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Preliminary Results =&lt;br /&gt;
* Owner: [[User:Ian Spector | Ian Spector]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Initially, let&#039;s make up some fictional (but reasonable) preliminary results that we&#039;d like to see and think we can accomplish before submitting the proposal.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= [Criticisms] =&lt;br /&gt;
* Owner: [[User:Andrew Bragdon | Andrew Bragdon]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Any criticisms or questions we have regarding the proposal can go here.&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Adam Darlow</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal_(draft_2)&amp;diff=3060</id>
		<title>CS295J/Research proposal (draft 2)</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal_(draft_2)&amp;diff=3060"/>
		<updated>2009-04-21T14:31:38Z</updated>

		<summary type="html">&lt;p&gt;Adam Darlow: /* CPM-GOMS */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
* Owners: [[User:Adam Darlow | Adam Darlow]], [[User:Eric Sodomka | Eric Sodomka]]&lt;br /&gt;
&lt;br /&gt;
We propose a framework for interface evaluation and recommendation that integrates behavioral models and design guidelines from both cognitive science and HCI. Our framework behaves like a committee of specialized experts, where each expert provides its own assessment of the interface, given its particular knowledge of HCI or cognitive science. For example, an expert may provide an evaluation based on the GOMS method, Fitts&#039;s law, Maeda&#039;s design principles, or cognitive models of learning and memory. An aggregator collects all of these assessments, weights the opinion of each expert, and outputs to the developer a merged evaluation score and a weighted set of recommendations.&lt;br /&gt;
&lt;br /&gt;
Systematic methods of estimating human performance with computer interfaces are used only sparingly despite their obvious benefits, the reason being the overhead involved in implementing them. In order to test an interface, both manual coding systems like the GOMS variations and user simulations like those based on ACT-R/PM and EPIC require detailed pseudo-code descriptions of the user&#039;s workflow with the application interface. Any change to the interface then requires extensive changes to the pseudo-code, a major problem because of the trial-and-error nature of interface design. Updating the models themselves is even more complicated. Even an expert in CPM-GOMS, for example, can&#039;t necessarily adapt it to take into account results from new cognitive research.&lt;br /&gt;
&lt;br /&gt;
Our proposal makes automatic interface evaluation easier to use in several ways. First, we propose to divide the input to the system into three separate parts: functionality, user traces, and interface. By separating the functionality from the interface, even radical interface changes will require updating only that part of the input. The user traces are also defined over the functionality so that they too translate across different interfaces. Second, the parallel modular architecture allows for a lower &amp;quot;entry cost&amp;quot; for using the tool. The system includes a broad array of evaluation modules, some very simple and some more complex. The simpler modules use only a subset of the input that a system like GOMS or ACT-R would require. This means that while more input will still lead to better output, interface designers can get minimal evaluations with only minimal information. For example, a visual search module may not require any functionality or user traces in order to determine whether all interface elements are distinct enough to be easy to find. Finally, a parallel modular architecture is much easier to augment with relevant cognitive and design evaluations.&lt;br /&gt;
&lt;br /&gt;
= Overview of Contributions =&lt;br /&gt;
* Owners: [[User:Adam Darlow | Adam Darlow]], [[User:Eric Sodomka | Eric Sodomka]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note: For reference, this is the aggregate set of contributions from last week. Maybe we can edit/add/remove from this as needed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Design and user-study evaluation of novel techniques for collecting and filtering user traces with respect to user goals.&lt;br /&gt;
* Extensible, low-cost architecture for integrating pupil-tracking, muscle-activity monitoring, and auditory recognition with user traces in existing applications.&lt;br /&gt;
* System for isolating cognitive, perceptual, and motor tasks from an interface design to generate CPM-GOMS models for analysis.&lt;br /&gt;
* Design and quantitative evaluation of semi-automated techniques for extracting critical paths from an existing CPM-GOMS model.&lt;br /&gt;
* Novel algorithm for analyzing and optimizing critical paths based on established research in cognitive science.&lt;br /&gt;
* A design tool that can provide a designer with recommendations for interface improvements, based on a &#039;&#039;unified matrix&#039;&#039; of cognitive principles and heuristic design guidelines.&lt;br /&gt;
* The creation of a language for &#039;&#039;abstractly representing user interfaces&#039;&#039; in terms of the layout of graphical components and the functional relationships between these components.&lt;br /&gt;
* A system for &#039;&#039;generating interaction histories&#039;&#039; within user interfaces to facilitate individual and collaborative scientific discovery, and to enable researchers to more easily document and analyze user behavior.&lt;br /&gt;
* A system that takes user traces and creates a GOMS model that decomposes user actions into various cognitive, perceptual, and motor control tasks.&lt;br /&gt;
* The development of other evaluation methods using various cognitive/HCI models and guidelines.&lt;br /&gt;
* A design tool that can provide a designer with recommendations for interface improvements. These recommendations can be made for a specific type of user or for the average user, as expressed by a utility function.&lt;br /&gt;
&lt;br /&gt;
= Background / Related Work =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Methodology =&lt;br /&gt;
&#039;&#039;&#039;TODO: Add some intro paragraph here.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Collecting User Traces ==&lt;br /&gt;
* Owner: [[User:Trevor O&#039;Brien | Trevor O&#039;Brien]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Given an interface, our first step is to run users on the interface and log these user interactions. We want to log actions at a sufficiently low level so that a GOMS model can be generated from the data. When possible, we&#039;d also like to log data using additional sensing technologies, such as pupil-tracking, muscle-activity monitoring and auditory recognition; this information will help to analyze the explicit contributions of perception, cognition and motor skills with respect to user performance.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Generalizing User Traces ==&lt;br /&gt;
* Owner: [[User:Trevor O&#039;Brien | Trevor O&#039;Brien]]&lt;br /&gt;
&lt;br /&gt;
== Evaluation and Recommendation via Modules ==&lt;br /&gt;
* Owners: [[User:E J Kalafarski | E J Kalafarski]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section describes the aggregator, which takes the output of multiple independent modules and aggregates the results to provide (1) an evaluation and (2) recommendations for the user interface. We should explain how the aggregator weights the output of different modules (this could be based on historical performance of each module, or perhaps based on E.J.&#039;s cognitive/HCI guidelines).&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Sample Modules ==&lt;br /&gt;
&lt;br /&gt;
=== CPM-GOMS ===&lt;br /&gt;
* Owners: [[User:Steven Ellis | Steven Ellis]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This module will provide interface evaluations and suggestions based on a CPM-GOMS model of cognition for the given interface. It will provide a quantitative, predictive, cognition-based parameterization of usability. From empirically collected data, user trajectories through the model (critical paths) will be examined, highlighting bottlenecks within the interface, and offering suggested alterations to the interface to induce more optimal user trajectories.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Cognitive/HCI Guidelines ===&lt;br /&gt;
* Owner: [[User:E J Kalafarski | E J Kalafarski]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This module weights various interface guidelines from the HCI and cognitive science literature, and uses these weights to provide suggested improvements for the given interface. It identifies interface elements that are detrimental to user performance and suggests effective alternatives.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Fitts&#039;s Law ===&lt;br /&gt;
* Owner: [[User:Jon Ericson | Jon Ericson]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This simple module will use Fitts&#039;s Law to provide interface evaluations and recommendations.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Affordances ===&lt;br /&gt;
* Owner: [[User:Jon Ericson | Jon Ericson]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This simple module will provide interface evaluations and recommendations based on perceived affordances and if possible a comparison to actual affordances.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Interruptions ===&lt;br /&gt;
* Owner: [[User:Andrew Bragdon | Andrew Bragdon]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;While most usability testing focuses on low-level task performance, prior work suggests that users also operate at a higher, working-sphere level. This module attempts to evaluate a given interface with respect to these higher-level considerations, such as task switching.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Interaction Prediction ===&lt;br /&gt;
* Owner: [[User:Trevor O&#039;Brien | Trevor O&#039;Brien]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Given the history of user interactions, we may be able to improve user performance by modifying the interface so that frequently performed interactions from a given state are more readily accessible to the user. This module recommends such modifications based on user traces.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Preliminary Results =&lt;br /&gt;
* Owner: [[User:Ian Spector | Ian Spector]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Initially, let&#039;s make up some fictional (but reasonable) preliminary results that we&#039;d like to see and think we can accomplish before submitting the proposal.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= [Criticisms] =&lt;br /&gt;
* Owner: [[User:Andrew Bragdon | Andrew Bragdon]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Any criticisms or questions we have regarding the proposal can go here.&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Adam Darlow</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal_(draft_2)&amp;diff=3059</id>
		<title>CS295J/Research proposal (draft 2)</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal_(draft_2)&amp;diff=3059"/>
		<updated>2009-04-21T14:30:58Z</updated>

		<summary type="html">&lt;p&gt;Adam Darlow: /* Evaluation and Recommendation via Modules */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
* Owners: [[User:Adam Darlow | Adam Darlow]], [[User:Eric Sodomka | Eric Sodomka]]&lt;br /&gt;
&lt;br /&gt;
We propose a framework for interface evaluation and recommendation that integrates behavioral models and design guidelines from both cognitive science and HCI. Our framework behaves like a committee of specialized experts, where each expert provides its own assessment of the interface, given its particular knowledge of HCI or cognitive science. For example, an expert may provide an evaluation based on the GOMS method, Fitts&#039;s law, Maeda&#039;s design principles, or cognitive models of learning and memory. An aggregator collects all of these assessments, weights the opinion of each expert, and outputs to the developer a merged evaluation score and a weighted set of recommendations.&lt;br /&gt;
&lt;br /&gt;
Systematic methods of estimating human performance with computer interfaces are used only sparingly despite their obvious benefits, the reason being the overhead involved in implementing them. In order to test an interface, both manual coding systems like the GOMS variations and user simulations like those based on ACT-R/PM and EPIC require detailed pseudo-code descriptions of the user&#039;s workflow with the application interface. Any change to the interface then requires extensive changes to the pseudo-code, a major problem because of the trial-and-error nature of interface design. Updating the models themselves is even more complicated. Even an expert in CPM-GOMS, for example, can&#039;t necessarily adapt it to take into account results from new cognitive research.&lt;br /&gt;
&lt;br /&gt;
Our proposal makes automatic interface evaluation easier to use in several ways. First, we propose to divide the input to the system into three separate parts: functionality, user traces, and interface. By separating the functionality from the interface, even radical interface changes will require updating only that part of the input. The user traces are also defined over the functionality so that they too translate across different interfaces. Second, the parallel modular architecture allows for a lower &amp;quot;entry cost&amp;quot; for using the tool. The system includes a broad array of evaluation modules, some very simple and some more complex. The simpler modules use only a subset of the input that a system like GOMS or ACT-R would require. This means that while more input will still lead to better output, interface designers can get minimal evaluations with only minimal information. For example, a visual search module may not require any functionality or user traces in order to determine whether all interface elements are distinct enough to be easy to find. Finally, a parallel modular architecture is much easier to augment with relevant cognitive and design evaluations.&lt;br /&gt;
&lt;br /&gt;
= Overview of Contributions =&lt;br /&gt;
* Owners: [[User:Adam Darlow | Adam Darlow]], [[User:Eric Sodomka | Eric Sodomka]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note: For reference, this is the aggregate set of contributions from last week. Maybe we can edit/add/remove from this as needed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Design and user-study evaluation of novel techniques for collecting and filtering user traces with respect to user goals.&lt;br /&gt;
* Extensible, low-cost architecture for integrating pupil-tracking, muscle-activity monitoring, and auditory recognition with user traces in existing applications.&lt;br /&gt;
* System for isolating cognitive, perceptual, and motor tasks from an interface design to generate CPM-GOMS models for analysis.&lt;br /&gt;
* Design and quantitative evaluation of semi-automated techniques for extracting critical paths from an existing CPM-GOMS model.&lt;br /&gt;
* Novel algorithm for analyzing and optimizing critical paths based on established research in cognitive science.&lt;br /&gt;
* A design tool that can provide a designer with recommendations for interface improvements, based on a &#039;&#039;unified matrix&#039;&#039; of cognitive principles and heuristic design guidelines.&lt;br /&gt;
* The creation of a language for &#039;&#039;abstractly representing user interfaces&#039;&#039; in terms of the layout of graphical components and the functional relationships between these components.&lt;br /&gt;
* A system for &#039;&#039;generating interaction histories&#039;&#039; within user interfaces to facilitate individual and collaborative scientific discovery, and to enable researchers to more easily document and analyze user behavior.&lt;br /&gt;
* A system that takes user traces and creates a GOMS model that decomposes user actions  into various cognitive, perceptual, and motor control tasks. &lt;br /&gt;
* The development of other evaluation methods using various cognitive/HCI models and guidelines.&lt;br /&gt;
* A design tool that can provide a designer with recommendations for interface improvements. These recommendations can be made for a specific type of user or for the average user, as expressed by a utility function.&lt;br /&gt;
&lt;br /&gt;
= Background / Related Work =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Methodology =&lt;br /&gt;
&#039;&#039;&#039;TODO: Add some intro paragraph here.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Collecting User Traces ==&lt;br /&gt;
* Owner: [[User:Trevor O&#039;Brien | Trevor O&#039;Brien]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Given an interface, our first step is to run users on the interface and log these user interactions. We want to log actions at a sufficiently low level so that a GOMS model can be generated from the data. When possible, we&#039;d also like to log data using additional sensing technologies, such as pupil-tracking, muscle-activity monitoring and auditory recognition; this information will help to analyze the explicit contributions of perception, cognition and motor skills with respect to user performance.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Generalizing User Traces ==&lt;br /&gt;
* Owner: [[User:Trevor O&#039;Brien | Trevor O&#039;Brien]]&lt;br /&gt;
&lt;br /&gt;
== Evaluation and Recommendation via Modules ==&lt;br /&gt;
* Owners: [[User:E J Kalafarski | E J Kalafarski]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section describes the aggregator, which takes the output of multiple independent modules and aggregates the results to provide (1) an evaluation and (2) recommendations for the user interface. We should explain how the aggregator weights the output of different modules (this could be based on historical performance of each module, or perhaps based on E.J.&#039;s cognitive/HCI guidelines).&#039;&#039;&lt;br /&gt;
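As one illustrative possibility (not a committed design), the aggregator&#039;s weighting could look like the following sketch, where the module names, scores, and weights are all hypothetical placeholders:&lt;br /&gt;

```python
# Hypothetical sketch of the aggregator: each module returns a usability score
# in [0, 1] plus a list of recommendations; the aggregator merges the scores
# with per-module weights and ranks recommendations by their module's weight.
def aggregate(module_outputs, weights):
    """module_outputs: dict name to (score, [recommendations]);
    weights: dict name to float. Returns (merged score, ranked recs)."""
    total = sum(weights[name] for name in module_outputs)
    merged = sum(weights[name] * score
                 for name, (score, _) in module_outputs.items()) / total
    ranked = sorted(
        ((weights[name], rec)
         for name, (_, recs) in module_outputs.items()
         for rec in recs),
        key=lambda pair: pair[0], reverse=True)
    return merged, [rec for _, rec in ranked]
```

The weights here might start uniform and later be fit to each module&#039;s historical predictive accuracy, per the open question above.&lt;br /&gt;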
&lt;br /&gt;
== Sample Modules ==&lt;br /&gt;
&lt;br /&gt;
=== CPM-GOMS ===&lt;br /&gt;
* Owners: [[User:Gideon Goldin | Gideon Goldin]], [[User:Steven Ellis | Steven Ellis]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This module will provide interface evaluations and suggestions based on a CPM-GOMS model of cognition for the given interface. It will provide a quantitative, predictive, cognition-based parameterization of usability. From empirically collected data, user trajectories through the model (critical paths) will be examined, highlighting bottlenecks within the interface, and offering suggested alterations to the interface to induce more optimal user trajectories.&#039;&#039;&lt;br /&gt;
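To make the critical-path idea concrete, here is a minimal sketch (an assumed data structure for illustration, not the proposed system) that treats a CPM-GOMS schedule as a DAG of operators with durations and dependencies and extracts the longest path, whose length is the predicted task time:&lt;br /&gt;

```python
# Illustrative sketch: nodes are cognitive/perceptual/motor operators with
# durations in ms; deps maps each operator to its prerequisites. The critical
# path is the longest dependency chain through the DAG.
def critical_path(durations, deps):
    """durations: op to ms; deps: op to list of prerequisite ops."""
    finish, prev = {}, {}
    def visit(op):
        if op in finish:
            return finish[op]
        start = 0
        for d in deps.get(op, []):
            if visit(d) > start:
                start, prev[op] = finish[d], d
        finish[op] = start + durations[op]
        return finish[op]
    for op in durations:
        visit(op)
    end = max(finish, key=finish.get)  # operator that finishes last
    path = [end]
    while path[-1] in prev:
        path.append(prev[path[-1]])
    return list(reversed(path)), finish[end]
```

Operators off the critical path could be sped up without changing the predicted time, which is exactly why bottleneck identification matters.&lt;br /&gt;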
&lt;br /&gt;
=== Cognitive/HCI Guidelines ===&lt;br /&gt;
* Owner: [[User:E J Kalafarski | E J Kalafarski]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This module weights various interface guidelines from the HCI and cognitive science literature, and uses these weights to provide suggested improvements for the given interface. It identifies interface elements that are detrimental to user performance and suggests effective alternatives.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Fitts&#039;s Law ===&lt;br /&gt;
* Owner: [[User:Jon Ericson | Jon Ericson]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This simple module will use Fitts&#039;s Law to provide interface evaluations and recommendations.&#039;&#039;&lt;br /&gt;
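For concreteness, the module&#039;s core computation would be Fitts&#039;s Law in its Shannon formulation, MT = a + b log2(D/W + 1); the constants a and b below are illustrative placeholders, not empirically fitted values:&lt;br /&gt;

```python
import math

# Minimal sketch: predicted movement time to a target at distance D with
# width W. a and b are device-specific regression constants; the values
# here are placeholders for illustration only.
def fitts_movement_time(distance, width, a=0.050, b=0.150):
    """Predicted pointing time in seconds (Shannon formulation)."""
    return a + b * math.log2(distance / width + 1.0)
```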
&lt;br /&gt;
=== Affordances ===&lt;br /&gt;
* Owner: [[User:Jon Ericson | Jon Ericson]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This simple module will provide interface evaluations and recommendations based on perceived affordances and if possible a comparison to actual affordances.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Interruptions ===&lt;br /&gt;
* Owner: [[User:Andrew Bragdon | Andrew Bragdon]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;While most usability testing focuses on low-level task performance, previous work suggests that users also operate at a higher, working-sphere level. This module attempts to evaluate a given interface with respect to these higher-level considerations, such as task switching.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Interaction Prediction ===&lt;br /&gt;
* Owner: [[User:Trevor O&#039;Brien | Trevor O&#039;Brien]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Given the history of user interactions, we may be able to improve user performance by modifying the interface so that frequently performed interactions from a given state are more readily accessible to the user. This module recommends such modifications based on user traces.&#039;&#039;&lt;br /&gt;
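A minimal sketch of how such recommendations might be derived from traces (the (state, action) trace format below is an assumption for illustration):&lt;br /&gt;

```python
from collections import Counter, defaultdict

# Illustrative sketch: count, per interface state, how often each action is
# performed from that state across all logged traces, then recommend surfacing
# the most frequent actions (e.g. as shortcuts) for that state.
def frequent_actions(traces, top_n=3):
    """traces: list of sequences of (state, action) pairs."""
    counts = defaultdict(Counter)
    for trace in traces:
        for state, action in trace:
            counts[state][action] += 1
    return {state: [a for a, _ in c.most_common(top_n)]
            for state, c in counts.items()}
```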
&lt;br /&gt;
= Preliminary Results =&lt;br /&gt;
* Owner: [[User:Ian Spector | Ian Spector]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Initially, let&#039;s make up some fictional (but reasonable) preliminary results that we&#039;d like to see and think we can accomplish before submitting the proposal.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= [Criticisms] =&lt;br /&gt;
* Owner: [[User:Andrew Bragdon | Andrew Bragdon]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Any criticisms or questions we have regarding the proposal can go here.&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Adam Darlow</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal_(draft_2)&amp;diff=3058</id>
		<title>CS295J/Research proposal (draft 2)</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal_(draft_2)&amp;diff=3058"/>
		<updated>2009-04-21T14:30:28Z</updated>

		<summary type="html">&lt;p&gt;Adam Darlow: added affordances&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
* Owners: [[User:Adam Darlow | Adam Darlow]], [[User:Eric Sodomka | Eric Sodomka]]&lt;br /&gt;
&lt;br /&gt;
We propose a framework for interface evaluation and recommendation that integrates behavioral models and design guidelines from both cognitive science and HCI. Our framework behaves like a committee of specialized experts, where each expert provides its own assessment of the interface, given its particular knowledge of HCI or cognitive science. For example, an expert may provide an evaluation based on the GOMS method, Fitts&#039;s law, Maeda&#039;s design principles, or cognitive models of learning and memory. An aggregator collects these assessments, weights each expert&#039;s opinion, and outputs to the developer a merged evaluation score and a weighted set of recommendations.&lt;br /&gt;
&lt;br /&gt;
Systematic methods of estimating human performance with computer interfaces are used only sparingly despite their obvious benefits, largely because of the overhead involved in implementing them. To test an interface, both manual coding systems like the GOMS variants and user simulations like those based on ACT-R/PM and EPIC require detailed pseudo-code descriptions of the user&#039;s workflow with the application interface. Any change to the interface then requires extensive changes to the pseudo-code, a major problem given the trial-and-error nature of interface design. Updating the models themselves is even more complicated: even an expert in CPM-GOMS, for example, cannot necessarily adapt it to take results from new cognitive research into account.&lt;br /&gt;
&lt;br /&gt;
Our proposal makes automatic interface evaluation easier to use in several ways. First, we propose to divide the input to the system into three separate parts: functionality, user traces, and interface. By separating the functionality from the interface, even radical interface changes will require updating only the interface part of the input. The user traces are also defined over the functionality, so they too translate across different interfaces. Second, the parallel modular architecture allows for a lower &amp;quot;entry cost&amp;quot; for using the tool. The system includes a broad array of evaluation modules, some of which are very simple and some more complex. The simpler modules use only a subset of the input that a system like GOMS or ACT-R would require. This means that while more input will still lead to better output, interface designers can get minimal evaluations with only minimal information. For example, a visual search module may not require any functionality or user traces in order to determine whether all interface elements are distinct enough to be easy to find. Finally, a parallel modular architecture is much easier to augment with relevant cognitive and design evaluations.&lt;br /&gt;
&lt;br /&gt;
= Overview of Contributions =&lt;br /&gt;
* Owners: [[User:Adam Darlow | Adam Darlow]], [[User:Eric Sodomka | Eric Sodomka]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note: For reference, this is the aggregate set of contributions from last week. Maybe we can edit/add/remove from this as needed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Design and user-study evaluation of novel techniques for collecting and filtering user traces with respect to user goals.&lt;br /&gt;
* Extensible, low-cost architecture for integrating pupil-tracking, muscle-activity monitoring, and auditory recognition with user traces in existing applications.&lt;br /&gt;
* System for isolating cognitive, perceptual, and motor tasks from an interface design to generate CPM-GOMS models for analysis.&lt;br /&gt;
* Design and quantitative evaluation of semi-automated techniques for extracting critical paths from an existing CPM-GOMS model.&lt;br /&gt;
* Novel algorithm for analyzing and optimizing critical paths based on established research in cognitive science.&lt;br /&gt;
* A design tool that can provide a designer with recommendations for interface improvements, based on a &#039;&#039;unified matrix&#039;&#039; of cognitive principles and heuristic design guidelines.&lt;br /&gt;
* The creation of a language for &#039;&#039;abstractly representing user interfaces&#039;&#039; in terms of the layout of graphical components and the functional relationships between these components.&lt;br /&gt;
* A system for &#039;&#039;generating interaction histories&#039;&#039; within user interfaces to facilitate individual and collaborative scientific discovery, and to enable researchers to more easily document and analyze user behavior.&lt;br /&gt;
* A system that takes user traces and creates a GOMS model that decomposes user actions  into various cognitive, perceptual, and motor control tasks. &lt;br /&gt;
* The development of other evaluation methods using various cognitive/HCI models and guidelines.&lt;br /&gt;
* A design tool that can provide a designer with recommendations for interface improvements. These recommendations can be made for a specific type of user or for the average user, as expressed by a utility function.&lt;br /&gt;
&lt;br /&gt;
= Background / Related Work =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Methodology =&lt;br /&gt;
&#039;&#039;&#039;TODO: Add some intro paragraph here.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Collecting User Traces ==&lt;br /&gt;
* Owner: [[User:Trevor O&#039;Brien | Trevor O&#039;Brien]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Given an interface, our first step is to run users on the interface and log these user interactions. We want to log actions at a sufficiently low level so that a GOMS model can be generated from the data. When possible, we&#039;d also like to log data using additional sensing technologies, such as pupil-tracking, muscle-activity monitoring and auditory recognition; this information will help to analyze the explicit contributions of perception, cognition and motor skills with respect to user performance.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Generalizing User Traces ==&lt;br /&gt;
* Owner: [[User:Trevor O&#039;Brien | Trevor O&#039;Brien]]&lt;br /&gt;
&lt;br /&gt;
== Evaluation and Recommendation via Modules ==&lt;br /&gt;
* Owners: [[User:E J Kalafarski | E J Kalafarski]], [[User:Jon Ericson | Jon Ericson]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This section describes the aggregator, which takes the output of multiple independent modules and aggregates the results to provide (1) an evaluation and (2) recommendations for the user interface. We should explain how the aggregator weights the output of different modules (this could be based on historical performance of each module, or perhaps based on E.J.&#039;s cognitive/HCI guidelines).&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Sample Modules ==&lt;br /&gt;
&lt;br /&gt;
=== CPM-GOMS ===&lt;br /&gt;
* Owners: [[User:Gideon Goldin | Gideon Goldin]], [[User:Steven Ellis | Steven Ellis]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This module will provide interface evaluations and suggestions based on a CPM-GOMS model of cognition for the given interface. It will provide a quantitative, predictive, cognition-based parameterization of usability. From empirically collected data, user trajectories through the model (critical paths) will be examined, highlighting bottlenecks within the interface, and offering suggested alterations to the interface to induce more optimal user trajectories.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Cognitive/HCI Guidelines ===&lt;br /&gt;
* Owner: [[User:E J Kalafarski | E J Kalafarski]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This module weights various interface guidelines from the HCI and cognitive science literature, and uses these weights to provide suggested improvements for the given interface. It identifies interface elements that are detrimental to user performance and suggests effective alternatives.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Fitts&#039;s Law ===&lt;br /&gt;
* Owner: [[User:Jon Ericson | Jon Ericson]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This simple module will use Fitts&#039;s Law to provide interface evaluations and recommendations.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Affordances ===&lt;br /&gt;
* Owner: [[User:Jon Ericson | Jon Ericson]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;This simple module will provide interface evaluations and recommendations based on perceived affordances and if possible a comparison to actual affordances.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Interruptions ===&lt;br /&gt;
* Owner: [[User:Andrew Bragdon | Andrew Bragdon]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;While most usability testing focuses on low-level task performance, previous work suggests that users also operate at a higher, working-sphere level. This module attempts to evaluate a given interface with respect to these higher-level considerations, such as task switching.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Interaction Prediction ===&lt;br /&gt;
* Owner: [[User:Trevor O&#039;Brien | Trevor O&#039;Brien]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Given the history of user interactions, we may be able to improve user performance by modifying the interface so that frequently performed interactions from a given state are more readily accessible to the user. This module recommends such modifications based on user traces.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= Preliminary Results =&lt;br /&gt;
* Owner: [[User:Ian Spector | Ian Spector]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Initially, let&#039;s make up some fictional (but reasonable) preliminary results that we&#039;d like to see and think we can accomplish before submitting the proposal.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
= [Criticisms] =&lt;br /&gt;
* Owner: [[User:Andrew Bragdon | Andrew Bragdon]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Any criticisms or questions we have regarding the proposal can go here.&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Adam Darlow</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal_(draft_2)&amp;diff=3026</id>
		<title>CS295J/Research proposal (draft 2)</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal_(draft_2)&amp;diff=3026"/>
		<updated>2009-04-21T03:52:00Z</updated>

		<summary type="html">&lt;p&gt;Adam Darlow: /* Introduction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction =&lt;br /&gt;
&lt;br /&gt;
We propose a framework for interface evaluation and recommendation that integrates behavioral models and design guidelines from both cognitive science and HCI. Our framework behaves like a committee of specialized experts, where each expert provides its own assessment of the interface, given its particular knowledge of HCI or cognitive science. For example, an expert may provide an evaluation based on the GOMS method, Fitts&#039;s law, Maeda&#039;s design principles, or cognitive models of learning and memory. An aggregator collects these assessments, weights each expert&#039;s opinion, and outputs to the developer a merged evaluation score and a weighted set of recommendations.&lt;br /&gt;
&lt;br /&gt;
Systematic methods of estimating human performance with computer interfaces are used only sparingly despite their obvious benefits, largely because of the overhead involved in implementing them. To test an interface, both manual coding systems like the GOMS variants and user simulations like those based on ACT-R/PM and EPIC require detailed pseudo-code descriptions of the user&#039;s workflow with the application interface. Any change to the interface then requires extensive changes to the pseudo-code, a major problem given the trial-and-error nature of interface design. Updating the models themselves is even more complicated: even an expert in CPM-GOMS, for example, cannot necessarily adapt it to take results from new cognitive research into account.&lt;br /&gt;
&lt;br /&gt;
Our proposal makes automatic interface evaluation easier to use in several ways. First, we propose to divide the input to the system into three separate parts: functionality, user traces, and interface. By separating the functionality from the interface, even radical interface changes will require updating only the interface part of the input. The user traces are also defined over the functionality, so they too translate across different interfaces. Second, the parallel modular architecture allows for a lower &amp;quot;entry cost&amp;quot; for using the tool. The system includes a broad array of evaluation modules, some of which are very simple and some more complex. The simpler modules use only a subset of the input that a system like GOMS or ACT-R would require. This means that while more input will still lead to better output, interface designers can get minimal evaluations with only minimal information. For example, a visual search module may not require any functionality or user traces in order to determine whether all interface elements are distinct enough to be easy to find. Finally, a parallel modular architecture is much easier to augment with relevant cognitive and design evaluations.&lt;br /&gt;
&lt;br /&gt;
= Overview of Contributions =&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note: For reference, this is the aggregate set of contributions from last week. Maybe we can edit/add/remove from this as needed.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Design and user-study evaluation of novel techniques for collecting and filtering user traces with respect to user goals.&lt;br /&gt;
* Extensible, low-cost architecture for integrating pupil-tracking, muscle-activity monitoring, and auditory recognition with user traces in existing applications.&lt;br /&gt;
* System for isolating cognitive, perceptual, and motor tasks from an interface design to generate CPM-GOMS models for analysis.&lt;br /&gt;
* Design and quantitative evaluation of semi-automated techniques for extracting critical paths from an existing CPM-GOMS model.&lt;br /&gt;
* Novel algorithm for analyzing and optimizing critical paths based on established research in cognitive science.&lt;br /&gt;
* A design tool that can provide a designer with recommendations for interface improvements, based on a &#039;&#039;unified matrix&#039;&#039; of cognitive principles and heuristic design guidelines.&lt;br /&gt;
* The creation of a language for &#039;&#039;abstractly representing user interfaces&#039;&#039; in terms of the layout of graphical components and the functional relationships between these components.&lt;br /&gt;
* A system for &#039;&#039;generating interaction histories&#039;&#039; within user interfaces to facilitate individual and collaborative scientific discovery, and to enable researchers to more easily document and analyze user behavior.&lt;br /&gt;
* A system that takes user traces and creates a GOMS model that decomposes user actions  into various cognitive, perceptual, and motor control tasks. &lt;br /&gt;
* The development of other evaluation methods using various cognitive/HCI models and guidelines.&lt;br /&gt;
* A design tool that can provide a designer with recommendations for interface improvements. These recommendations can be made for a specific type of user or for the average user, as expressed by a utility function.&lt;br /&gt;
&lt;br /&gt;
= Background / Related Work =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Methodology =&lt;br /&gt;
== Collecting User Traces ==&lt;br /&gt;
 &lt;br /&gt;
Owner: Trevor&lt;br /&gt;
&lt;br /&gt;
== Generalizing User Traces ==&lt;br /&gt;
&lt;br /&gt;
Owner: Trevor&lt;br /&gt;
&lt;br /&gt;
== Evaluation and Recommendation via Modules ==&lt;br /&gt;
&lt;br /&gt;
Owner: E.J., Jon?&lt;br /&gt;
&lt;br /&gt;
== Sample Modules ==&lt;br /&gt;
&lt;br /&gt;
=== CPM-GOMS ===&lt;br /&gt;
Owner: Gideon, Steven&lt;br /&gt;
&lt;br /&gt;
=== Cognitive/HCI Guidelines ===&lt;br /&gt;
Owner: E.J.&lt;br /&gt;
&lt;br /&gt;
=== Fitts&#039;s Law ===&lt;br /&gt;
&lt;br /&gt;
=== Interruptions ===&lt;br /&gt;
Owner: Andrew&lt;br /&gt;
&lt;br /&gt;
=== Interaction Prediction ===&lt;br /&gt;
Owner: Trevor&lt;br /&gt;
&lt;br /&gt;
= Preliminary Results =&lt;br /&gt;
&lt;br /&gt;
= [Criticisms] =&lt;/div&gt;</summary>
		<author><name>Adam Darlow</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Assignments&amp;diff=2957</id>
		<title>CS295J/Assignments</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Assignments&amp;diff=2957"/>
		<updated>2009-04-15T20:43:04Z</updated>

		<summary type="html">&lt;p&gt;Adam Darlow: added OpenGazer link&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Assignment 11 (out April 10, due April 17, 2009) ==&lt;br /&gt;
&lt;br /&gt;
=== Part A (due Tuesday at noon) ===&lt;br /&gt;
First, revise any contribution that you added to the proposal to reflect our converging view of what we are proposing. Given the evolution of the proposal since the contributions were originally created, if you wish to start from a fresh set of contributions, you can outline them [[CS295J/Contributions for class 12 |here]].&lt;br /&gt;
&lt;br /&gt;
Own one or more of the possible tasks below (enough to spend 8-10 hours on).  Edit this page to indicate your ownership and to describe enough details of what you&#039;ll do for the group to avoid duplication.  It&#039;s OK, for example, for two folks to try using or extending CPM-GOMS with different extensions or a different application; indicating what your extension/application is, though, would be helpful.  It&#039;s also fine to add tasks to the list.&lt;br /&gt;
&lt;br /&gt;
=== Part B (due class time) ===&lt;br /&gt;
&lt;br /&gt;
Revise any parts of the preliminary results that you have added.  That may include adding things that you would like to do, taking out pieces that don&#039;t fit into our worldview any longer, or revising things.  The &amp;quot;CPM-GOMS for gmail&amp;quot; project presented today, for example, should go into the preliminary work section to establish viability of CPM-GOMS in the context of a more modern application.&lt;br /&gt;
&lt;br /&gt;
Complete owned task(s).&lt;br /&gt;
&lt;br /&gt;
=== Possible Tasks (also consider those from last week -- add here if you pick one) ===&lt;br /&gt;
&lt;br /&gt;
* Possible additions to the proposed research&lt;br /&gt;
** Experiments with EEG to determine whether it can inform modeling (&#039;&#039;&#039;Andrew Bragdon&#039;&#039;&#039; -- I will look into the EEG literature from the CHI community and determine what could be put in the proposal on this topic)&lt;br /&gt;
** Maybe similar experiments with muscle sensing or body motion&lt;br /&gt;
* Possible preliminary experiments&lt;br /&gt;
** Try some EEG&lt;br /&gt;
** Try some low-cost muscle tracking&lt;br /&gt;
** Try to decode some captured events&lt;br /&gt;
** Try to get some pupil-tracking thing going and sync&#039;ed with other events (I found an open-source eye-tracking program called [http://www.inference.phy.cam.ac.uk/opengazer/ OpenGazer], but it&#039;s written for Linux and might require considerable work to port. Would someone with a Linux machine like to try it out? Adam)&lt;br /&gt;
** Try some light-weight JavaScript-based mouse tracking (will see how far I can get in 10 hours) ([[User:E J Kalafarski|E J Kalafarski]] 15:16, 14 April 2009 (UTC))&lt;br /&gt;
* Other&lt;br /&gt;
** Get &amp;quot;Human information processor model&amp;quot; concept into related work; Project Ernestine is the canonical example, but other more recent work too&lt;br /&gt;
** Revise proposal introduction again, emphasis on our new converging view ([[User:E J Kalafarski|E J Kalafarski]] 15:13, 14 April 2009 (UTC), [[User:Trevor|Trevor O&#039;Brien]])&lt;br /&gt;
** Will revise and expand &amp;quot;big world&amp;quot; significance and contributions, based on the output/architecture we discussed last week ([[User:E J Kalafarski|E J Kalafarski]] 15:15, 14 April 2009 (UTC), [[User:Trevor|Trevor O&#039;Brien]])&lt;br /&gt;
* CPM-GOMS&lt;br /&gt;
** Extend the cognition element in the CPM-GOMS model to account for dual-process theories of reasoning. (OWNER - Gideon)&lt;br /&gt;
** Extend the model by incorporating an &#039;Affect&#039; output, instead of just relying on time as the only dependent variable. (SUGGESTED BY - Gideon; Ian (I like this idea))&lt;br /&gt;
&lt;br /&gt;
== Assignment 10 (out April 3, due April 10, 2009) ==&lt;br /&gt;
&lt;br /&gt;
=== Part A (due Tuesday at noon) ===&lt;br /&gt;
Own one or more of the possible tasks below (enough to spend 8-10 hours on).  Edit this page to indicate your ownership and to describe enough details of what you&#039;ll do for the group to avoid duplication.  It&#039;s OK, for example, for two folks to try using or extending CPM-GOMS with different extensions or a different application; indicating what your extension/application is, though, would be helpful.  It&#039;s also fine to add tasks to the list.&lt;br /&gt;
&lt;br /&gt;
=== Part B (due class time) ===&lt;br /&gt;
&lt;br /&gt;
Complete owned task(s).&lt;br /&gt;
&lt;br /&gt;
=== Possible Tasks ===&lt;br /&gt;
&lt;br /&gt;
* read CPM-GOMS and consider as theme (&#039;&#039;&#039;[[User:E J Kalafarski|E J Kalafarski]]&#039;&#039;&#039;, &#039;&#039;&#039;Eric&#039;&#039;&#039;, &#039;&#039;&#039;Jon&#039;&#039;&#039;, &#039;&#039;&#039;Gideon&#039;&#039;&#039;, &#039;&#039;&#039;[[User:Trevor O&#039;Brien|Trevor]]&#039;&#039;&#039;, &#039;&#039;&#039;[[User:Steven Ellis|Steven]]&#039;&#039;&#039;, &#039;&#039;&#039;Ian&#039;&#039;&#039;)&lt;br /&gt;
* converge proposal (ID weaknesses &amp;amp; fix or propose fixes)&lt;br /&gt;
** make contributions and aims agree (&#039;&#039;&#039;[[User:Trevor O&#039;Brien|Trevor]]&#039;&#039;&#039;)&lt;br /&gt;
** make contributions and aims compelling and significant&lt;br /&gt;
*** intro acceptable (merge the two best) (&#039;&#039;&#039;[[User:E J Kalafarski|E J Kalafarski]]&#039;&#039;&#039;: will rewrite intro, merging the two best &amp;quot;in progress&amp;quot; intros, with an emphasis on the project&#039;s interdisciplinary aspect)&lt;br /&gt;
*** improves world, lots of people, in big ways, long time (&#039;&#039;&#039;[[User:E J Kalafarski|E J Kalafarski]]&#039;&#039;&#039;: will brainstorm and add to &amp;quot;big picture&amp;quot; contributions, but someone else should brainstorm this as well, &#039;&#039;&#039;Eric&#039;&#039;&#039;)&lt;br /&gt;
*** extends knowledge, increased productivity (designers and users) (&#039;&#039;&#039;Eric&#039;&#039;&#039;)&lt;br /&gt;
*** bg &amp;amp; significance section consistent with contribs/aims (&#039;&#039;&#039;Ian&#039;&#039;&#039;)&lt;br /&gt;
* try CPM-GOMS, maybe w/ 1 extension (&#039;&#039;&#039;Gideon&#039;&#039;&#039;, &#039;&#039;&#039;[[User:Steven Ellis|Steven]]&#039;&#039;&#039;)&lt;br /&gt;
* what&#039;s input and output (architecture)? (&#039;&#039;&#039;[[User:E J Kalafarski|E J Kalafarski]]&#039;&#039;&#039;: will attempt to come up with an architecture that accommodates all seven proposed modules and presents a simple, useful final product, &#039;&#039;Jon: Will brainstorm b/c I&#039;ve been wondering about this for a long time&#039;&#039;, &#039;&#039;&#039;Adam&#039;&#039;&#039;, &#039;&#039;&#039;[[User:Trevor O&#039;Brien|Trevor]]&#039;&#039;&#039;)&lt;br /&gt;
* Gajos paper (&#039;&#039;&#039;[[User:E J Kalafarski|E J Kalafarski]]&#039;&#039;&#039;: will read, consider architecture method, &#039;&#039;&#039;Adam&#039;&#039;&#039;, &#039;&#039;&#039;[[User:Trevor O&#039;Brien|Trevor]]&#039;&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
== Assignment 9 (out March 20, due April 3, 2009) ==&lt;br /&gt;
&lt;br /&gt;
Part A is what we talked about in class -- a revision of the introduction to re-evaluate the big picture.  Part B is the original assignment.  While I&#039;ve asked for both below, it may not be realistic given the time available.  Please make sure to finish A by class time and to do as much of B as you can.&lt;br /&gt;
&lt;br /&gt;
=== Part A, due noon Tuesday March 31 ===&lt;br /&gt;
&lt;br /&gt;
Refine your introduction to be consistent with the theme of bringing knowledge of cognitive and perceptual modeling to bear in a principled way on human-computer interface design.  Evaluate each of the suggested contributions for their impact, their relevance to the theme, and their cost.  Triage as appropriate to achieve the best overall proposal given the 5 years with 5 people scope.&lt;br /&gt;
&lt;br /&gt;
* [[CS295J/Proposal intros from class 9|Link to Intros]]&lt;br /&gt;
* [[CS295J/Triage for class 10|Link to Triages]]&lt;br /&gt;
&lt;br /&gt;
=== Part B, due at class time ===&lt;br /&gt;
&lt;br /&gt;
Complete your 30 hour preliminary work project and writeup in the proposal.  It should explicitly support the contributions of the proposal.  Consider this a hard, externally imposed deadline, i.e., you have to have something as finished as possible that could be reviewed by an outside reviewer.  Trim scope, if necessary, but finish!&lt;br /&gt;
&lt;br /&gt;
Pick another gap in the proposal and plug it.  Any section that you have &amp;quot;owned&amp;quot; should also be &amp;quot;final&amp;quot; and reviewable.&lt;br /&gt;
&lt;br /&gt;
====Gaps====&lt;br /&gt;
* &amp;lt;s&amp;gt;Significance (expand further with unified description of relatable components)&amp;lt;/s&amp;gt;&lt;br /&gt;
* Specific contributions &amp;gt; Multi-modal HCI (question marks, needs content)&lt;br /&gt;
* Background &amp;gt; Workflow analysis &amp;gt; Interface improvements (incomplete)&lt;br /&gt;
* Research plan (we may have to cut this section, it is woefully underdeveloped)&lt;br /&gt;
&lt;br /&gt;
Read [[CS295J/Literature to read for class 10|&amp;quot;must reads&amp;quot;]] -- add one if you want.&lt;br /&gt;
&lt;br /&gt;
== Assignment 8 (out March 13, due March 20, 2009) ==&lt;br /&gt;
=== Part A, due Tuesday noon ===&lt;br /&gt;
* Outline a coherent 250 word summary to a coherent proposal [[CS295J/Proposal intros from class 9|here]]&lt;br /&gt;
* Select a gap to own&lt;br /&gt;
** &#039;&#039;&#039;Andrew Bragdon&#039;&#039;&#039;:  Metawork Support Tool proposal; will examine integrating this into the main proposal vs. making a separate proposal.&lt;br /&gt;
** &#039;&#039;&#039;EJ&#039;&#039;&#039;: &amp;quot;Significance/intellectual merit&amp;quot; section is currently bare, I can take a stab at that.  I believe this can/needs to incorporate a gap Trevor identified, &amp;quot;mapping between individual contributions and centralized theme of the proposal;&amp;quot; I&#039;ll try to start to illustrate the relationships between our individual projects. [[User:E J Kalafarski|E J Kalafarski]] 12:20, 17 March 2009 (UTC)&lt;br /&gt;
**&#039;&#039;&#039;Jon&#039;&#039;&#039;: The &amp;quot;models of perception&amp;quot; section needs to be revised and expanded.  I&#039;ve changed &amp;quot;Gibsonianism&amp;quot; to &amp;quot;The Ecological Approach to Perception&amp;quot; and I&#039;ve added a section on &amp;quot;The Computational Approach to Perception&amp;quot;; I will update these for Friday.&lt;br /&gt;
**&#039;&#039;&#039;Gideon&#039;&#039;&#039;: More up-to-date background section on distributed cognition. I know Jon is doing this for the other areas, but I feel that this is very important.&lt;br /&gt;
** &#039;&#039;&#039;Trevor&#039;&#039;&#039;:  I&#039;ll take a pass through the &#039;&#039;Specific Aims&#039;&#039; section, which currently lacks specificity.   --- [[User:Trevor O&amp;amp;#39;Brien|Trevor O&amp;amp;#39;Brien]] 15:29, 17 March 2009 (UTC) &lt;br /&gt;
**&#039;&#039;&#039;Eric&#039;&#039;&#039;: I&#039;ll flesh out the &#039;&#039;Workflow Analysis&#039;&#039; section. What&#039;s there could be cleaned up, and more could be added, particularly on learning from interaction histories.&lt;br /&gt;
**&#039;&#039;&#039;Steven&#039;&#039;&#039;: There&#039;s currently no background (slash anything) on the topic of embodied models of cognition, I&#039;ll fill in some background on that.&lt;br /&gt;
* suggest 0.5 &amp;quot;must read&amp;quot; [[CS295J/Literature to read for class 9|papers]]&lt;br /&gt;
&lt;br /&gt;
=== Part B, due in class ===&lt;br /&gt;
* Finish writing the coherent 250 word summary to a coherent proposal [[CS295J/Proposal intros from class 9|still here]]&lt;br /&gt;
* Fill your gap&lt;br /&gt;
* read &amp;quot;must read&amp;quot;s for discussion w.r.t. proposal relevance&lt;br /&gt;
* get to another week&#039;s worth of results in your preliminary results -- make sure to be consistent with your summary!&lt;br /&gt;
* be prepared to tell us about those new results in class, tied to the intellectual claims&lt;br /&gt;
&lt;br /&gt;
== Assignment 7 (out March 6, due March 13, 2009) ==&lt;br /&gt;
=== Part A, due Tuesday noon ===&lt;br /&gt;
* review [[CS295J/Research proposal|proposal]] for&lt;br /&gt;
** intellectual contribution (1-2 paragraphs)&lt;br /&gt;
** major gaps (bullet list) (don&#039;t duplicate gaps already listed)&lt;br /&gt;
** review goes [[CS295J/Proposal reviews from class 8|here]]&lt;br /&gt;
** feel free to improve any part of the proposal rather than criticizing it, if you wish :-)&lt;br /&gt;
* suggest 0.5 &amp;quot;must read&amp;quot; [[CS295J/Literature to read for class 8|papers]]&lt;br /&gt;
&lt;br /&gt;
=== Part B, due in class ===&lt;br /&gt;
* read &amp;quot;must read&amp;quot;s for discussion w.r.t. proposal relevance&lt;br /&gt;
* get to another week&#039;s worth of results in your preliminary results&lt;br /&gt;
* be prepared to tell us about those new results in class, tied to the intellectual claims&lt;br /&gt;
&lt;br /&gt;
== Assignment 6 (out February 27, due March 6, 2009) ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* If you are not familiar with NIH proposals, skim through [http://vis.cs.brown.edu/docs/pdf/bib/Laidlaw-2005-DA2.pdf this active proposal] to get a sense of the sections.  The [http://vis.cs.brown.edu/docs/pdf/bib/NIH-2001-PHS.pdf guide to NIH proposals] may also be helpful.&lt;br /&gt;
* Create some preliminary results&lt;br /&gt;
** Start the work you proposed in your poster for assignment 4&lt;br /&gt;
** Start the work someone else proposed :-)&lt;br /&gt;
** Create a more detailed critique-enabled workflow of a scientific user&lt;br /&gt;
** Working in pairs is fine; preferably, pair members should have different backgrounds&lt;br /&gt;
* Write them up as a subsection of the preliminary results section of the proposal.&lt;br /&gt;
** By Wednesday 9am have an outline of the section you will produce&lt;br /&gt;
*** This should be in past tense, as it will be when the work is done.&lt;br /&gt;
*** At the top level, it should state how the preliminary results enable the overall multi-year proposal or how they demonstrate feasibility of some questionable or risky part.&lt;br /&gt;
*** Check out the NIH proposal for examples, albeit in a different domain.&lt;br /&gt;
** By class have as much filled in from that outline as possible&lt;br /&gt;
* Add any &amp;quot;must reads&amp;quot; to the [[CS295J/Literature to read for class 6]] page.  You are &amp;quot;expected&amp;quot; to add 1/2 of a reference.&lt;br /&gt;
* Read and be prepared to discuss the &amp;quot;must reads&amp;quot;.&lt;br /&gt;
** Put in fictional placeholders for parts that are not done by Friday.&lt;br /&gt;
* By Wednesday 5pm e-mail constructive comments about at least 1/2 of the outlines to the entire class&lt;br /&gt;
* Success criteria for this assignment&lt;br /&gt;
*# Proposal section will demonstrate a feasibility or prerequisite for interesting research&lt;br /&gt;
*# Results by class are complete and concrete enough that they are interesting, even though some parts may be missing&lt;br /&gt;
&lt;br /&gt;
== Assignment 5 (out February 20, due February 27, 2009) ==&lt;br /&gt;
# Videotape a 15-20 minute interactive session with a user doing a visually challenging task for which they are at least an advanced beginner (i.e., they don&#039;t have to look up what to do, but they have not yet internalized the operations and made them subconscious).  Analyze the session and report your observations and conclusions.&lt;br /&gt;
&lt;br /&gt;
== Assignment 4 (out February 13, due February 20, 2009) ==&lt;br /&gt;
&lt;br /&gt;
# Be prepared to decide on a subset of applications to critique.  Flesh out at least one potential application in [[CS295J/Application Critiques]], including a specific workflow.  Modify any others you want, particularly in terms of arguments for or against proceeding with them.  These should be completed by Thursday noon.&lt;br /&gt;
#* Some possibilities: Google scholar, Mathematica, Tableau, Google notebook, Matlab, paper, media wiki, Ensight Gold, AVS, VisTrails.  Note that a given piece of software could be represented more than once with a different workflow.&lt;br /&gt;
#* Possible criteria: main purpose is analysis, perhaps scientific; interactive; scientific; amenable to cognition-driven improvements; interesting; fun&lt;br /&gt;
# Bring your ranking of the subset of (application+workflow)s we should critique&lt;br /&gt;
# Be prepared to decide on the [[CS295J/Model elements]] of cognition we will emulate to critique each application.  Revise some part of this list so that it can be used as a concrete basis for evaluation.&lt;br /&gt;
# Add any &amp;quot;must reads&amp;quot; to the [[CS295J/Literature to read for class 5]] page.  You are &amp;quot;expected&amp;quot; to add 1/3 of a reference.&lt;br /&gt;
# Read the &amp;quot;must reads&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
== Assignment 3 (out February 6, due February 13) ==&lt;br /&gt;
* Flesh out and support a contribution within the proposal&lt;br /&gt;
** Add any new references to the literature section.&lt;br /&gt;
** Many of our readings date from 8+ years ago; check to make sure your contribution hasn&#039;t already been done.&lt;br /&gt;
** If any of the new references are &amp;quot;must reads&amp;quot;, add to the [[CS295J/Literature to read for class 4]] (2/13/09) page.  You are &amp;quot;expected&amp;quot; to add 1/2 of a reference.&lt;br /&gt;
** Estimate the impact of the contribution.&lt;br /&gt;
** Estimate the risk and costs of the contribution.&lt;br /&gt;
** Propose a 3-week (30 hour) result that you could create to demonstrate feasibility.&lt;br /&gt;
** Bring a printout of your contribution concept to class.  It should be legible from 2 meters away, so use big text.  You can hand-write it on posterboard or paste together printouts.&lt;br /&gt;
* Bring a list of holes in the proposal -- e.g., &amp;quot;background section needs something on workflow capture.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== Assignment 2 (out January 30, 2009) ==&lt;br /&gt;
* Refine literature short summaries to include the relationship to our project.&lt;br /&gt;
* Draft by Tuesday noon a subsection in the background section of the [[CS295J/Research proposal]], making sure to include citations to the relevant materials in the literature page.  Add any new references to the literature page.&lt;br /&gt;
* Read and comment/edit by Thursday noon all background sections&lt;br /&gt;
* Revise your background section by Friday noon (so David can print before class)&lt;br /&gt;
&lt;br /&gt;
* Add to or refine one or more specific contribution in the [[CS295J/Research proposal]]; each contribution must have a list of ways it can be demonstrated.  Some will become part of the preliminary results, others will be parts of the future work that will be proposed.&lt;br /&gt;
* Add to or refine one or more specific aim in the [[CS295J/Research proposal]]; make it consistent with the contribution you added.&lt;br /&gt;
&lt;br /&gt;
* Identify the one additional most important paper for us to read this week also by Tuesday noon; be prepared to summarize relevance in 2 minutes in class&lt;br /&gt;
* Read those &amp;quot;most important&amp;quot; papers for class discussion.&lt;br /&gt;
&lt;br /&gt;
* Be prepared to summarize to the class your contributions to the background, contributions, and aims sections.&lt;br /&gt;
* If there is preliminary work that will help to make decisions about contributions and aims, please get started on it (and be ready to report on what you&#039;d like to do or what you have done).&lt;br /&gt;
&lt;br /&gt;
== Assignment 1 (out January 23, 2009) ==&lt;br /&gt;
* spend 10 hours adding to any part of the wiki you think is relevant&lt;br /&gt;
* by Monday noon add any potential readings.  If you&#039;ve got a tentative summary evaluation, go ahead and add it.  It&#039;s ok to edit folks&#039; summary evaluations, but try to make the result more accurate or precise without losing information.&lt;br /&gt;
* by Wednesday noon finish with any summary evaluation and also identify at least one as-relevant-as-possible reading as yours.  Put your name on that entry in the reading list as the &amp;quot;owner&amp;quot; so that there are no duplicates.&lt;br /&gt;
* by Wednesday 5pm -- select 2 additional relevant readings that are owned and that you will read by Friday and be prepared to discuss.  Put your name as a &amp;quot;discussant&amp;quot; in the reading list; there should be a max of two discussants per reading.&lt;br /&gt;
* by Friday class -- author a summary description, less than 250 words, in the wiki of how the reading you own relates to our project.  Be prepared to describe, in two minutes, how your reading relates to the project. Also be prepared for everyone in class to discuss after your description.  You may bring notes for yourself, but no slides.  The wiki page for your reading will be displayed while you talk.&lt;br /&gt;
* by Friday class -- read and be prepared to discuss the other two readings you chose.&lt;br /&gt;
* by Friday class -- make one more wiki page titled &amp;quot;&#039;&#039;&#039;&amp;lt;Last-Name&amp;gt; week 1&#039;&#039;&#039;&amp;quot; with a list of the keys for the citations you added, the readings you summarized, the reading you presented, the two readings you were a discussant on, and any other readings you did in detail.&lt;br /&gt;
* Let me know if you have any kind of problems.  You should be spending right around 10 hours -- if that&#039;s a problem, let&#039;s talk.&lt;br /&gt;
* The [[../How Tos|How Tos]] page has some tips.  Edit or add as you find others.&lt;/div&gt;</summary>
		<author><name>Adam Darlow</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Assignments&amp;diff=2888</id>
		<title>CS295J/Assignments</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Assignments&amp;diff=2888"/>
		<updated>2009-04-07T15:26:08Z</updated>

		<summary type="html">&lt;p&gt;Adam Darlow: /* Possible Tasks */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Assignment 10 (out April 3, due April 10, 2009) ==&lt;br /&gt;
&lt;br /&gt;
=== Part A (due Tuesday at noon) ===&lt;br /&gt;
Own one or more of the possible tasks below (enough to spend 8-10 hours on).  Edit this page to indicate your ownership and to describe enough details of what you&#039;ll do for the group to avoid duplication.  It&#039;s ok, for example, for two folks to try using or extending CPM-GOMS with different extensions or a different application; indicating what your extension/application is, though, would be helpful.  It&#039;s also fine to add tasks to the list.&lt;br /&gt;
&lt;br /&gt;
=== Part B (due class time) ===&lt;br /&gt;
&lt;br /&gt;
Complete owned task(s).&lt;br /&gt;
&lt;br /&gt;
=== Possible Tasks ===&lt;br /&gt;
&lt;br /&gt;
* read CPM-GOMS and consider as theme (&#039;&#039;&#039;[[User:E J Kalafarski|E J Kalafarski]]&#039;&#039;&#039;, &#039;&#039;&#039;Eric&#039;&#039;&#039;, &#039;&#039;&#039;Jon&#039;&#039;&#039;, &#039;&#039;&#039;Gideon&#039;&#039;&#039;)&lt;br /&gt;
* converge proposal (ID weaknesses &amp;amp; fix or propose fixes)&lt;br /&gt;
** make contributions and aims agree&lt;br /&gt;
** make contributions and aims compelling and significant&lt;br /&gt;
*** intro acceptable (merge the two best) (&#039;&#039;&#039;[[User:E J Kalafarski|E J Kalafarski]]&#039;&#039;&#039;: will rewrite intro, merging the two best &amp;quot;in progress&amp;quot; intros, with an emphasis on the project&#039;s interdisciplinary aspect)&lt;br /&gt;
*** improves world, lots of people, in big ways, long time (&#039;&#039;&#039;[[User:E J Kalafarski|E J Kalafarski]]&#039;&#039;&#039;: will brainstorm and add to &amp;quot;big picture&amp;quot; contributions, but someone else should brainstorm this as well, &#039;&#039;&#039;Eric&#039;&#039;&#039;)&lt;br /&gt;
*** extends knowledge, increased productivity (designers and users) (&#039;&#039;&#039;Eric&#039;&#039;&#039;)&lt;br /&gt;
*** bg &amp;amp; significance section consistent with contribs/aims&lt;br /&gt;
* try CPM-GOMS, maybe w/ 1 extension (&#039;&#039;&#039;Gideon&#039;&#039;&#039;)&lt;br /&gt;
* what&#039;s input and output (architecture)? (&#039;&#039;&#039;[[User:E J Kalafarski|E J Kalafarski]]&#039;&#039;&#039;: will attempt to come up with an architecture that accommodates all seven proposed modules and presents a simple, useful final product, &#039;&#039;Jon: Will brainstorm b/c I&#039;ve been wondering about this for a long time&#039;&#039;, &#039;&#039;&#039;Adam&#039;&#039;&#039;)&lt;br /&gt;
* Gajos paper (&#039;&#039;&#039;[[User:E J Kalafarski|E J Kalafarski]]&#039;&#039;&#039;: will read, consider architecture method, &#039;&#039;&#039;Adam&#039;&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
== Assignment 9 (out March 20, due April 3, 2009) ==&lt;br /&gt;
&lt;br /&gt;
Part A is what we talked about in class -- a revision of the introduction to re-evaluate the big picture.  Part B is the original assignment.  While I&#039;ve asked for both below, it may not be realistic given the time available.  Please make sure to finish A by class time and to do as much of B as you can.&lt;br /&gt;
&lt;br /&gt;
=== Part A, due noon Tuesday March 31 ===&lt;br /&gt;
&lt;br /&gt;
Refine your introduction to be consistent with the theme of bringing knowledge of cognitive and perceptual modeling to bear in a principled way on human-computer interface design.  Evaluate each of the suggested contributions for their impact, their relevance to the theme, and their cost.  Triage as appropriate to achieve the best overall proposal given the 5 years with 5 people scope.&lt;br /&gt;
&lt;br /&gt;
* [[CS295J/Proposal intros from class 9|Link to Intros]]&lt;br /&gt;
* [[CS295J/Triage for class 10|Link to Triages]]&lt;br /&gt;
&lt;br /&gt;
=== Part B, due at class time ===&lt;br /&gt;
&lt;br /&gt;
Complete your 30 hour preliminary work project and writeup in the proposal.  It should explicitly support the contributions of the proposal.  Consider this a hard, externally imposed deadline, i.e., you have to have something as finished as possible that could be reviewed by an outside reviewer.  Trim scope, if necessary, but finish!&lt;br /&gt;
&lt;br /&gt;
Pick another gap in the proposal and plug it.  Any section that you have &amp;quot;owned&amp;quot; should also be &amp;quot;final&amp;quot; and reviewable.&lt;br /&gt;
&lt;br /&gt;
====Gaps====&lt;br /&gt;
* &amp;lt;s&amp;gt;Significance (expand further with unified description of relatable components)&amp;lt;/s&amp;gt;&lt;br /&gt;
* Specific contributions &amp;gt; Multi-modal HCI (question marks, needs content)&lt;br /&gt;
* Background &amp;gt; Workflow analysis &amp;gt; Interface improvements (incomplete)&lt;br /&gt;
* Research plan (we may have to cut this section, it is woefully underdeveloped)&lt;br /&gt;
&lt;br /&gt;
Read [[CS295J/Literature to read for class 10|&amp;quot;must reads&amp;quot;]] -- add one if you want.&lt;br /&gt;
&lt;br /&gt;
== Assignment 8 (out March 13, due March 20, 2009) ==&lt;br /&gt;
=== Part A, due Tuesday noon ===&lt;br /&gt;
* Outline a coherent 250 word summary to a coherent proposal [[CS295J/Proposal intros from class 9|here]]&lt;br /&gt;
* Select a gap to own&lt;br /&gt;
** &#039;&#039;&#039;Andrew Bragdon&#039;&#039;&#039;:  Metawork Support Tool proposal; will examine integrating this into the main proposal vs. making a separate proposal.&lt;br /&gt;
** &#039;&#039;&#039;EJ&#039;&#039;&#039;: &amp;quot;Significance/intellectual merit&amp;quot; section is currently bare, I can take a stab at that.  I believe this can/needs to incorporate a gap Trevor identified, &amp;quot;mapping between individual contributions and centralized theme of the proposal;&amp;quot; I&#039;ll try to start to illustrate the relationships between our individual projects. [[User:E J Kalafarski|E J Kalafarski]] 12:20, 17 March 2009 (UTC)&lt;br /&gt;
**&#039;&#039;&#039;Jon&#039;&#039;&#039;: The &amp;quot;models of perception&amp;quot; section needs to be revised and expanded.  I&#039;ve changed &amp;quot;Gibsonianism&amp;quot; to &amp;quot;The Ecological Approach to Perception&amp;quot; and I&#039;ve added a section on &amp;quot;The Computational Approach to Perception&amp;quot;; I will update these for Friday.&lt;br /&gt;
**&#039;&#039;&#039;Gideon&#039;&#039;&#039;: More up-to-date background section on distributed cognition. I know Jon is doing this for the other areas, but I feel that this is very important.&lt;br /&gt;
** &#039;&#039;&#039;Trevor&#039;&#039;&#039;:  I&#039;ll take a pass through the &#039;&#039;Specific Aims&#039;&#039; section, which currently lacks specificity.   --- [[User:Trevor O&amp;amp;#39;Brien|Trevor O&amp;amp;#39;Brien]] 15:29, 17 March 2009 (UTC) &lt;br /&gt;
**&#039;&#039;&#039;Eric&#039;&#039;&#039;: I&#039;ll flesh out the &#039;&#039;Workflow Analysis&#039;&#039; section. What&#039;s there could be cleaned up, and more could be added, particularly on learning from interaction histories.&lt;br /&gt;
**&#039;&#039;&#039;Steven&#039;&#039;&#039;: There&#039;s currently no background (slash anything) on the topic of embodied models of cognition, I&#039;ll fill in some background on that.&lt;br /&gt;
* suggest 0.5 &amp;quot;must read&amp;quot; [[CS295J/Literature to read for class 9|papers]]&lt;br /&gt;
&lt;br /&gt;
=== Part B, due in class ===&lt;br /&gt;
* Finish writing the coherent 250 word summary to a coherent proposal [[CS295J/Proposal intros from class 9|still here]]&lt;br /&gt;
* Fill your gap&lt;br /&gt;
* read &amp;quot;must read&amp;quot;s for discussion w.r.t. proposal relevance&lt;br /&gt;
* get to another week&#039;s worth of results in your preliminary results -- make sure to be consistent with your summary!&lt;br /&gt;
* be prepared to tell us about those new results in class, tied to the intellectual claims&lt;br /&gt;
&lt;br /&gt;
== Assignment 7 (out March 6, due March 13, 2009) ==&lt;br /&gt;
=== Part A, due Tuesday noon ===&lt;br /&gt;
* review [[CS295J/Research proposal|proposal]] for&lt;br /&gt;
** intellectual contribution (1-2 paragraphs)&lt;br /&gt;
** major gaps (bullet list) (don&#039;t duplicate gaps already listed)&lt;br /&gt;
** review goes [[CS295J/Proposal reviews from class 8|here]]&lt;br /&gt;
** feel free to improve any part of the proposal rather than criticizing it, if you wish :-)&lt;br /&gt;
* suggest 0.5 &amp;quot;must read&amp;quot; [[CS295J/Literature to read for class 8|papers]]&lt;br /&gt;
&lt;br /&gt;
=== Part B, due in class ===&lt;br /&gt;
* read &amp;quot;must read&amp;quot;s for discussion w.r.t. proposal relevance&lt;br /&gt;
* get to another week&#039;s worth of results in your preliminary results&lt;br /&gt;
* be prepared to tell us about those new results in class, tied to the intellectual claims&lt;br /&gt;
&lt;br /&gt;
== Assignment 6 (out February 27, due March 6, 2009) ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* If you are not familiar with NIH proposals, skim through [http://vis.cs.brown.edu/docs/pdf/bib/Laidlaw-2005-DA2.pdf this active proposal] to get a sense of the sections.  The [http://vis.cs.brown.edu/docs/pdf/bib/NIH-2001-PHS.pdf guide to NIH proposals] may also be helpful.&lt;br /&gt;
* Create some preliminary results&lt;br /&gt;
** Start the work you proposed in your poster for assignment 4&lt;br /&gt;
** Start the work someone else proposed :-)&lt;br /&gt;
** Create a more detailed critique-enabled workflow of a scientific user&lt;br /&gt;
** Working in pairs is fine; preferably, pair members should have different backgrounds&lt;br /&gt;
* Write them up as a subsection of the preliminary results section of the proposal.&lt;br /&gt;
** By Wednesday 9am have an outline of the section you will produce&lt;br /&gt;
*** This should be in past tense, as it will be when the work is done.&lt;br /&gt;
*** At the top level, it should state how the preliminary results enable the overall multi-year proposal or how they demonstrate feasibility of some questionable or risky part.&lt;br /&gt;
*** Check out the NIH proposal for examples, albeit in a different domain.&lt;br /&gt;
** By class have as much filled in from that outline as possible&lt;br /&gt;
* Add any &amp;quot;must reads&amp;quot; to the [[CS295J/Literature to read for class 6]] page.  You are &amp;quot;expected&amp;quot; to add 1/2 of a reference.&lt;br /&gt;
* Read and be prepared to discuss the &amp;quot;must reads&amp;quot;.&lt;br /&gt;
** Put in fictional placeholders for parts that are not done by Friday.&lt;br /&gt;
* By Wednesday 5pm e-mail constructive comments about at least 1/2 of the outlines to the entire class&lt;br /&gt;
* Success criteria for this assignment&lt;br /&gt;
*# Proposal section will demonstrate a feasibility or prerequisite for interesting research&lt;br /&gt;
*# Results by class are complete and concrete enough that they are interesting, even though some parts may be missing&lt;br /&gt;
&lt;br /&gt;
== Assignment 5 (out February 20, due February 27, 2009) ==&lt;br /&gt;
# Videotape a 15-20 minute interactive session with a user doing a visually challenging task for which they are at least an advanced beginner (i.e., they don&#039;t have to look up what to do, but they have not yet internalized the operations and made them subconscious).  Analyze the session and report your observations and conclusions.&lt;br /&gt;
&lt;br /&gt;
== Assignment 4 (out February 13, due February 20, 2009) ==&lt;br /&gt;
&lt;br /&gt;
# Be prepared to decide on a subset of applications to critique.  Flesh out at least one potential application in [[CS295J/Application Critiques]], including a specific workflow.  Modify any others you want, particularly in terms of arguments for or against proceeding with them.  These should be completed by Thursday noon.&lt;br /&gt;
#* Some possibilities: Google scholar, Mathematica, Tableau, Google notebook, Matlab, paper, media wiki, Ensight Gold, AVS, VisTrails.  Note that a given piece of software could be represented more than once with a different workflow.&lt;br /&gt;
#* Possible criteria: main purpose is analysis, perhaps scientific; interactive; scientific; amenable to cognition-driven improvements; interesting; fun&lt;br /&gt;
# Bring your ranking of the subset of (application+workflow)s we should critique&lt;br /&gt;
# Be prepared to decide on the [[CS295J/Model elements]] of cognition we will emulate to critique each application.  Revise some part of this list so that it can be used as a concrete basis for evaluation.&lt;br /&gt;
# Add any &amp;quot;must reads&amp;quot; to the [[CS295J/Literature to read for class 5]] page.  You are &amp;quot;expected&amp;quot; to add 1/3 of a reference.&lt;br /&gt;
# Read the &amp;quot;must reads&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
== Assignment 3 (out February 6, due February 13) ==&lt;br /&gt;
* Flesh out and support a contribution within the proposal&lt;br /&gt;
** Add any new references to the literature section.&lt;br /&gt;
** Many of our readings date from 8+ years ago; check to make sure your contribution hasn&#039;t already been done.&lt;br /&gt;
** If any of the new references are &amp;quot;must reads&amp;quot;, add to the [[CS295J/Literature to read for class 4]] (2/13/09) page.  You are &amp;quot;expected&amp;quot; to add 1/2 of a reference.&lt;br /&gt;
** Estimate the impact of the contribution.&lt;br /&gt;
** Estimate the risk and costs of the contribution.&lt;br /&gt;
** Propose a 3-week (30 hour) result that you could create to demonstrate feasibility.&lt;br /&gt;
** Bring a printout of your contribution concept to class.  It should be legible from 2 meters away, so use big text.  You can hand-write it on posterboard or paste together printouts.&lt;br /&gt;
* Bring a list of holes in the proposal -- e.g., &amp;quot;background section needs something on workflow capture.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== Assignment 2 (out January 30, 2009) ==&lt;br /&gt;
* Refine literature short summaries to include the relationship to our project.&lt;br /&gt;
* Draft by Tuesday noon a subsection in the background section of the [[CS295J/Research proposal]], making sure to include citations to the relevant materials in the literature page.  Add any new references to the literature page.&lt;br /&gt;
* Read and comment/edit by Thursday noon all background sections&lt;br /&gt;
* Revise your background section by Friday noon (so David can print before class)&lt;br /&gt;
&lt;br /&gt;
* Add to or refine one or more specific contribution in the [[CS295J/Research proposal]]; each contribution must have a list of ways it can be demonstrated.  Some will become part of the preliminary results, others will be parts of the future work that will be proposed.&lt;br /&gt;
* Add to or refine one or more specific aim in the [[CS295J/Research proposal]]; make it consistent with the contribution you added.&lt;br /&gt;
&lt;br /&gt;
* Identify the one additional most important paper for us to read this week also by Tuesday noon; be prepared to summarize relevance in 2 minutes in class&lt;br /&gt;
* Read those &amp;quot;most important&amp;quot; papers for class discussion.&lt;br /&gt;
&lt;br /&gt;
* Be prepared to summarize to the class your contributions to the background, contributions, and aims sections.&lt;br /&gt;
* If there is preliminary work that will help to make decisions about contributions and aims, please get started on it (and be ready to report on what you&#039;d like to do or what you have done).&lt;br /&gt;
&lt;br /&gt;
== Assignment 1 (out January 23, 2009) ==&lt;br /&gt;
* spend 10 hours adding to any part of the wiki you think is relevant&lt;br /&gt;
* by Monday noon add any potential readings.  If you&#039;ve got a tentative summary evaluation, go ahead and add it.  It&#039;s ok to edit folks&#039; summary evaluations, but try to make the result more accurate or precise without losing information.&lt;br /&gt;
* by Wednesday noon finish with any summary evaluation and also identify at least one as-relevant-as-possible reading as yours.  Put your name on that entry in the reading list as the &amp;quot;owner&amp;quot; so that there are no duplicates.&lt;br /&gt;
* by Wednesday 5pm -- select 2 additional relevant readings that are owned and that you will read by Friday and be prepared to discuss.  Put your name as a &amp;quot;discussant&amp;quot; in the reading list; there should be a max of two discussants per reading.&lt;br /&gt;
* by Friday class -- author a summary description, less than 250 words, in the wiki of how the reading you own relates to our project.  Be prepared to describe, in two minutes, how your reading relates to the project. Also be prepared for everyone in class to discuss after your description.  You may bring notes for yourself, but no slides.  The wiki page for your reading will be displayed while you talk.&lt;br /&gt;
* by Friday class -- read and be prepared to discuss the other two readings you chose.&lt;br /&gt;
* by Friday class -- make one more wiki page titled &amp;quot;&#039;&#039;&#039;&amp;lt;Last-Name&amp;gt; week 1&#039;&#039;&#039;&amp;quot; with a list of the keys for the citations you added, the readings you summarized, the reading you presented, the two readings you were a discussant on, and any other readings you did in detail.&lt;br /&gt;
* Let me know if you have any kind of problems.  You should be spending right around 10 hours -- if that&#039;s a problem, let&#039;s talk.&lt;br /&gt;
* The [[../How Tos|How Tos]] page has some tips.  Edit or add as you find others.&lt;/div&gt;</summary>
		<author><name>Adam Darlow</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature_to_read_for_class_9&amp;diff=2523</id>
		<title>CS295J/Literature to read for class 9</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature_to_read_for_class_9&amp;diff=2523"/>
		<updated>2009-03-17T14:45:53Z</updated>

		<summary type="html">&lt;p&gt;Adam Darlow: Added cognitive load from wikipedia&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;*[http://www.cise.ufl.edu/~lok/teaching/hci-f08/ HCI Course at UF] Benjamin Lok. This is a course I took when at UF. The slides and links might be useful for those wanting a more of a background in HCI. (Gideon)&lt;br /&gt;
&lt;br /&gt;
*[http://portal.acm.org/citation.cfm?id=26115.26124 Applying cognitive psychology to user-interface design] Marshall, Nelson, and Gardiner, 1987.  Dated, but sounds very relevant.  Contains a chapter called &amp;quot;Design Guidelines.&amp;quot;  Unfortunately, I have not yet been able to find this book anywhere, nor its text online.  Will keep looking, but this may not be readable by Friday. [[User:E J Kalafarski|E J Kalafarski]] 12:13, 17 March 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
*[http://www.aaai.org/Papers/Workshops/1998/WS-98-09/WS98-09-020.pdf Acting on a visual world: The role of multimodal perception in HCI].  Wolff, F., Angelia, A., Romary, L. (1998).  Short paper looking at the effects of perceptual organization on multimodal communication when using an interface. (Jon)&lt;br /&gt;
&lt;br /&gt;
*[http://en.wikipedia.org/wiki/Cognitive_load Cognitive Load]. Relevant to the problem of learning curves. One prediction from cognitive load theory is that when using an interface to do complex tasks, learning by doing becomes ineffective because of the cumulative working memory load. It also makes the important distinction of intrinsic-extrinsic load that we were discussing. (Adam)&lt;/div&gt;</summary>
		<author><name>Adam Darlow</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Proposal_intros_from_class_9&amp;diff=2486</id>
		<title>CS295J/Proposal intros from class 9</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Proposal_intros_from_class_9&amp;diff=2486"/>
		<updated>2009-03-13T21:45:22Z</updated>

		<summary type="html">&lt;p&gt;Adam Darlow: /* Collaborative */  Added a second option for a collaboration starting point (too polite to delete)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Collaborative==&lt;br /&gt;
&lt;br /&gt;
We propose to integrate theories of cognition, models of perception, rules of design, and concepts from the discipline of human-computer interaction to develop a predictive model of user performance in interacting with computer software for visual and analytical work.  Our proposed model comprises a set of computational elements representing components of human cognition, memory, or perception.  The collective abilities and limitations of these elements can be used to provide feedback on the likely efficacy of user interaction techniques.&lt;br /&gt;
&lt;br /&gt;
The choice of human computational elements will be guided by several models or theories of cognition and perception, including Gestalt, Distributed, Gibson, ???(where pathway, when pathway)???, ???working-memory???, ..., and ???.  The list of elements will be extensible.  The framework coupling them will allow for experimental predictions of utility of user interfaces that can be verified against human performance.&lt;br /&gt;
&lt;br /&gt;
Coupling the system with users will involve a data capture mechanism for collecting the communications between a user interface and a user.  These will be primarily event based, and will include a new low-cost camera-based eye-tracking system.&lt;br /&gt;
&lt;br /&gt;
During early development, existing interfaces will be evaluated manually to characterize their &lt;br /&gt;
&lt;br /&gt;
(we need some way to specify interaction techniques...)&lt;br /&gt;
&lt;br /&gt;
==Collaborative 2==&lt;br /&gt;
&lt;br /&gt;
Existing guidelines for designing human computer interfaces are based on experience, intuition and introspection. Because there is no common theoretical foundation, many sets of guidelines have emerged and there is no way to compare or unify them. We propose to develop a theoretical foundation for interface design by drawing on recent advances in cognitive science, the study of how people think, perceive and interact with the world. We will distill a broad range of principles and computational models of cognition that are relevant to interface design and use them to compare and unify existing guidelines. Where possible we will use computational models to enable richer automatic interface assessment than is currently available.&lt;br /&gt;
&lt;br /&gt;
A large part of our project will be to broaden the range of cognitive theories that are used in HCI design. Only a few low level theories of perception and action, such as Fitts&#039;s law, have garnered general acceptance in the HCI community because they are simple, make quantitative predictions and apply without modification to a broad range of tasks and interfaces. Our aim is to produce similar predictive models that apply to higher levels of cognition, including higher level vision, learning, memory, attention and task management. &lt;br /&gt;
&lt;br /&gt;
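As a rough illustration of the kind of quantitative, broadly applicable model mentioned above, Fitts's law (in its common Shannon formulation) predicts pointing time from target distance and width. The sketch below is illustrative only; the constants a and b are made-up placeholders, which in practice are fitted empirically per device and user.

```python
# Illustrative sketch of Fitts's law (Shannon formulation).
# Movement time MT = a plus b times the index of difficulty
# ID = log2(D/W + 1), where D is distance to the target and W its width.
import math

def fitts_mt(distance, width, a=0.1, b=0.15):
    """Predicted movement time in seconds; a and b are hypothetical fitted constants."""
    index_of_difficulty = math.log2(distance / width + 1)
    return a + b * index_of_difficulty

# A nearby, wide target is predicted to be faster to hit than a distant, narrow one:
print(fitts_mt(100, 50))   # ID = log2(3)
print(fitts_mt(800, 10))   # ID = log2(81)
```

The law's appeal for interface assessment is exactly this shape: two measurable interface properties in, one quantitative performance prediction out.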
Much of our work will focus on how cognitive principles can enable interface design to go beyond the functionality of the individual application.  Much research has accumulated regarding how people manage multiple tasks, and we will apply it to derive principles for designing an interface not only with its own purpose in mind, but so that it both helps maintain focus in a multi-tasking environment and minimizes the cost of switching to other tasks or applications in the same working sphere.  The newer approach of distributed cognition also provides a different perspective by examining the human-computer system as a unified cognitive entity.  We will extract and test principles from this literature on how to ensure that the human part of the system is responsible only for those parts of the task for which it is more capable than the computer.&lt;/div&gt;</summary>
		<author><name>Adam Darlow</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Proposal_reviews_from_class_8&amp;diff=2412</id>
		<title>CS295J/Proposal reviews from class 8</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Proposal_reviews_from_class_8&amp;diff=2412"/>
		<updated>2009-03-10T22:06:12Z</updated>

		<summary type="html">&lt;p&gt;Adam Darlow: /* Adam */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;These are reviews and intellectual contributions for the [[CS295J/Research proposal|research proposal]], as specified in [[CS295J/Assignments#Part_A.2C_due_Tuesday_noon|assignment 7, part A]]. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Name ==&lt;br /&gt;
&lt;br /&gt;
=== Intellectual merit ===&lt;br /&gt;
&lt;br /&gt;
(your paragraph here)&lt;br /&gt;
&lt;br /&gt;
=== Gaps ===&lt;br /&gt;
&lt;br /&gt;
* Your&lt;br /&gt;
* Bullet&lt;br /&gt;
* List&lt;br /&gt;
* Here&lt;br /&gt;
&lt;br /&gt;
== Gideon ==&lt;br /&gt;
&lt;br /&gt;
=== Intellectual merit ===&lt;br /&gt;
A meta-analysis of a subset of the cognitive psychology and human-computer interaction literature presents evidence that interactions between humans and computers can be improved by taking into account the cognitive resources required for different types of tasks. It is well known that humans and computers excel at different types of tasks, but the field has not made an explicit effort to standardize a set of guidelines that interface designers may use when developing computer systems. For people and computers to function in an optimized, complementary fashion, we still need a systematic way of distributing tasks amongst them.&lt;br /&gt;
&lt;br /&gt;
It is often the case that what computers excel at, humans have difficulty with (e.g., arithmetic), while the opposite is also true (e.g., pattern recognition). After a preliminary search of the literature, we&#039;ve identified common tasks in today&#039;s software whose designs neglect this performance dichotomy. Designers have not appropriately addressed these gaps, largely due to a lack of multidisciplinary research. Our meta-analysis presents data that supports our view on two different tasks: 3D shape rotation and face recognition.&lt;br /&gt;
&lt;br /&gt;
We have demonstrated in just a 30-hour study that computer-assisted 3D shape rotation is consistently preferred over human-only mental rotation. Complementarily, humans consistently outperform modern computer systems at face recognition. Sometimes these performance gaps are obvious due to a lack of technology or common sense; however, that is not always the case. Our preliminary study clearly demonstrates that systems benefit from consistent, rule-based task-distribution guidelines. Furthermore, the literature is rich with other types of tasks which await systematic exploitation. Upon further study, the community will benefit from a tested and systematic approach to designing improved human-computer interfaces. &lt;br /&gt;
&lt;br /&gt;
=== Gaps ===&lt;br /&gt;
&lt;br /&gt;
* Your&lt;br /&gt;
* Bullet&lt;br /&gt;
* List&lt;br /&gt;
* Here&lt;br /&gt;
&lt;br /&gt;
== EJ ==&lt;br /&gt;
&lt;br /&gt;
=== Intellectual merit ===&lt;br /&gt;
While attempts have been made in the past to apply cognitive theory to the task of developing human-computer interfaces, there remains much work to be done.  No standard and widespread model for the cognitive interaction with a computer exists.  The roles of perception and cognition, while examined and studied independently, are often at odds with empirical and successful design guidelines in practice.  Methods of study and evaluation, such as eye-tracking and workflow analysis, are still governed primarily by the needs at the end of the development process, with no quantitative model capable of influencing efficiency and consistency in the field.&lt;br /&gt;
&lt;br /&gt;
We demonstrate in wide-ranging preliminary work that cognitive theory has a tangible and valuable role in all the stages of interface design and evaluation: models of distributed cognition can exert useful influence on the design of interfaces and the guidelines that govern it; algorithmic workflow analysis can lead to new interaction methods, including predictive options; a model of human perception can greatly enhance the usefulness of multimodal user study techniques; a better understanding of &#039;&#039;why&#039;&#039; classical strategies work will bring us closer to the &amp;quot;holy grail&amp;quot; of automated interface evaluation and recommendation.  We can bring the field further down many of these only partially-explored avenues in the years ahead.&lt;br /&gt;
&lt;br /&gt;
=== Gaps ===&lt;br /&gt;
* Project summary needs expanding and context, but this may be impossible before we solidify other sections&lt;br /&gt;
* Background is strong, and does not have an excess of context, but that may be desirable here.  Some notes/outlines/questions need to be answered/expanded/removed.&lt;br /&gt;
* Is &amp;quot;Significance&amp;quot; supposed to be significantly (ha) different from a projection of the influence and effects of the study?  Perhaps these estimates should be parsed out of contributions and placed here.&lt;br /&gt;
* Need stronger distinction/clarification between &amp;quot;Aims&amp;quot; and &amp;quot;Contributions.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Eric ==&lt;br /&gt;
&lt;br /&gt;
=== Intellectual merit ===&lt;br /&gt;
&lt;br /&gt;
With the emergence of documented interaction histories in scientific visualization comes a new source of data for predicting user interactions. Correct prediction and corresponding UI modifications allow for a more personalized interface that can improve the user&#039;s efficiency in data exploration and enable groups of researchers working on the same type of task to learn from one another more efficiently.&lt;br /&gt;
&lt;br /&gt;
In a 30-hour preliminary study, we have implemented a basic interaction prediction module using a relational Markov model and shown through a series of user studies that it predicts on average 35% better than chance. We have also created a module that provides basic recommendations to the user based on these interaction predictions, and have shown that the user clicks on a recommended action 20% of the time, leading to an average task speedup of 8%. &lt;br /&gt;
&lt;br /&gt;
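The relational Markov model itself is not reproduced here; as a rough illustration only, the core idea of predicting the next action from logged interaction histories can be sketched with a plain first-order Markov predictor. All class, action, and variable names below are hypothetical.

```python
# Minimal sketch (not the project's actual module): a first-order Markov
# next-action predictor trained on logged interaction histories.
from collections import defaultdict, Counter

class MarkovPredictor:
    def __init__(self):
        # Transition counts: previous action mapped to a Counter of next actions.
        self.counts = defaultdict(Counter)

    def train(self, histories):
        """histories: iterable of action sequences, e.g. [["open", "zoom", "save"], ...]."""
        for seq in histories:
            for prev, nxt in zip(seq, seq[1:]):
                self.counts[prev][nxt] += 1

    def predict(self, last_action):
        """Return the most frequently observed next action, or None if unseen."""
        nexts = self.counts.get(last_action)
        if not nexts:
            return None
        return nexts.most_common(1)[0][0]

demo = MarkovPredictor()
demo.train([["open", "zoom", "save"],
            ["open", "zoom", "rotate"],
            ["open", "zoom", "save"]])
print(demo.predict("zoom"))  # prints: save
```

A recommendation module could then surface the top predicted action (or the top few) in the UI, which is the kind of basic recommendation the preliminary study evaluated.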
The tasks for future work are twofold: first, we will improve and generalize our prediction module to allow for more accurate predictions in a wide variety of interfaces, including those with a larger number of possible actions and states. Second, we will further study how, given predictions of future interactions, to modify the interface beyond giving basic recommendations. Ultimately, research in both of these directions will allow researchers to more efficiently glean information from complex data, enabling them to more quickly and easily contribute to their respective fields.&lt;br /&gt;
&lt;br /&gt;
=== Gaps ===&lt;br /&gt;
&lt;br /&gt;
* A few places in the proposal are cut off mid-sentence or are otherwise incomplete (e.g., some unfinished lists with &amp;quot;???&amp;quot; bullet points)&lt;br /&gt;
* The project summary makes it sound like our projects are more integrated than perhaps they are. Maybe we need to either re-evaluate our project summary, or better unify our individual projects somehow.&lt;br /&gt;
&lt;br /&gt;
== Adam ==&lt;br /&gt;
&lt;br /&gt;
Note: EJ and I have generated joint specific aims and contributions and plan to generate unified preliminary results.&lt;br /&gt;
&lt;br /&gt;
Second note: Judging from the other examples, I misunderstood the assignment and construed this section as a review of intellectual merit instead of writing an intellectual merit section of my own. I&#039;ll add one soon.&lt;br /&gt;
&lt;br /&gt;
=== Intellectual merit ===&lt;br /&gt;
&lt;br /&gt;
The intellectual merit of the program is currently hard to determine because it is so fractionated. It seems to me that we have two or three goals:&lt;br /&gt;
# Using cognitive principles to develop new design guidelines, especially ones that are above the level of the individual task. The content related to working spheres and distributed cognition mostly belongs here.&lt;br /&gt;
# Using cognitive principles to make existing design principles more quantifiable. This could involve building an integrated cognitive model, but I think it&#039;s more plausible to generate separate measures for different cognitive/design principles.&lt;br /&gt;
# Improving methods for assessing interfaces. To the degree that this involves using cognitive simulations, it overlaps with goal number 2. Development of an eye-tracking method or an information integration framework would be unrelated to goal number 2. I don&#039;t think anyone has been pursuing either of these ideas and I would recommend dropping them.&lt;br /&gt;
&lt;br /&gt;
I think reorganizing our efforts so that they all fit together, either along these lines or some other way, is essential at this point.&lt;br /&gt;
&lt;br /&gt;
=== Gaps ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Currently I perceive more gaps than content. See above.&lt;/div&gt;</summary>
		<author><name>Adam Darlow</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Proposal_reviews_from_class_8&amp;diff=2411</id>
		<title>CS295J/Proposal reviews from class 8</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Proposal_reviews_from_class_8&amp;diff=2411"/>
		<updated>2009-03-10T22:05:14Z</updated>

		<summary type="html">&lt;p&gt;Adam Darlow: /* Adam */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;These are reviews and intellectual contributions for the [[CS295J/Research proposal|research proposal]], as specified in [[CS295J/Assignments#Part_A.2C_due_Tuesday_noon|assignment 7, part A]]. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Name ==&lt;br /&gt;
&lt;br /&gt;
=== Intellectual merit ===&lt;br /&gt;
&lt;br /&gt;
(your paragraph here)&lt;br /&gt;
&lt;br /&gt;
=== Gaps ===&lt;br /&gt;
&lt;br /&gt;
* Your&lt;br /&gt;
* Bullet&lt;br /&gt;
* List&lt;br /&gt;
* Here&lt;br /&gt;
&lt;br /&gt;
== Gideon ==&lt;br /&gt;
&lt;br /&gt;
=== Intellectual merit ===&lt;br /&gt;
A meta-analysis of a subset of the cognitive psychology and human-computer interaction literature presents evidence that interactions between humans and computers can be improved by taking into account the cognitive resources required for different types of tasks. It is well known that humans and computers excel at different types of tasks, but the field has not made an explicit effort to standardize a set of guidelines that interface designers may use when developing computer systems. For people and computers to function in an optimized, complementary fashion, we still need a systematic way of distributing tasks amongst them.&lt;br /&gt;
&lt;br /&gt;
It is often the case that what computers excel at, humans have difficulty with (e.g., arithmetic), while the opposite is also true (e.g., pattern recognition). After a preliminary search of the literature, we&#039;ve identified common tasks in today&#039;s software whose designs neglect this performance dichotomy. Designers have not appropriately addressed these gaps, largely due to a lack of multidisciplinary research. Our meta-analysis presents data that supports our view on two different tasks: 3D shape rotation and face recognition.&lt;br /&gt;
&lt;br /&gt;
We have demonstrated in just a 30-hour study that computer-assisted 3D shape rotation is consistently preferred over human-only mental rotation. Complementarily, humans consistently outperform modern computer systems at face recognition. Sometimes these performance gaps are obvious due to a lack of technology or common sense; however, that is not always the case. Our preliminary study clearly demonstrates that systems benefit from consistent, rule-based task-distribution guidelines. Furthermore, the literature is rich with other types of tasks which await systematic exploitation. Upon further study, the community will benefit from a tested and systematic approach to designing improved human-computer interfaces. &lt;br /&gt;
&lt;br /&gt;
=== Gaps ===&lt;br /&gt;
&lt;br /&gt;
* Your&lt;br /&gt;
* Bullet&lt;br /&gt;
* List&lt;br /&gt;
* Here&lt;br /&gt;
&lt;br /&gt;
== EJ ==&lt;br /&gt;
&lt;br /&gt;
=== Intellectual merit ===&lt;br /&gt;
While attempts have been made in the past to apply cognitive theory to the task of developing human-computer interfaces, there remains much work to be done.  No standard and widespread model for the cognitive interaction with a computer exists.  The roles of perception and cognition, while examined and studied independently, are often at odds with empirical and successful design guidelines in practice.  Methods of study and evaluation, such as eye-tracking and workflow analysis, are still governed primarily by the needs at the end of the development process, with no quantitative model capable of influencing efficiency and consistency in the field.&lt;br /&gt;
&lt;br /&gt;
We demonstrate in wide-ranging preliminary work that cognitive theory has a tangible and valuable role in all the stages of interface design and evaluation: models of distributed cognition can exert useful influence on the design of interfaces and the guidelines that govern it; algorithmic workflow analysis can lead to new interaction methods, including predictive options; a model of human perception can greatly enhance the usefulness of multimodal user study techniques; a better understanding of &#039;&#039;why&#039;&#039; classical strategies work will bring us closer to the &amp;quot;holy grail&amp;quot; of automated interface evaluation and recommendation.  We can bring the field further down many of these only partially-explored avenues in the years ahead.&lt;br /&gt;
&lt;br /&gt;
=== Gaps ===&lt;br /&gt;
* Project summary needs expanding and context, but this may be impossible before we solidify other sections&lt;br /&gt;
* Background is strong, and does not have an excess of context, but that may be desirable here.  Some notes/outlines/questions need to be answered/expanded/removed.&lt;br /&gt;
* Is &amp;quot;Significance&amp;quot; supposed to be significantly (ha) different from a projection of the influence and effects of the study?  Perhaps these estimates should be parsed out of contributions and placed here.&lt;br /&gt;
* Need stronger distinction/clarification between &amp;quot;Aims&amp;quot; and &amp;quot;Contributions.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Eric ==&lt;br /&gt;
&lt;br /&gt;
=== Intellectual merit ===&lt;br /&gt;
&lt;br /&gt;
With the emergence of documented interaction histories in scientific visualization comes a new source of data for predicting user interactions. Correct prediction and corresponding UI modifications allow for a more personalized interface that can improve the user&#039;s efficiency in data exploration and enable groups of researchers working on the same type of task to learn from one another more efficiently.&lt;br /&gt;
&lt;br /&gt;
In a 30-hour preliminary study, we have implemented a basic interaction prediction module using a relational Markov model and shown through a series of user studies that it predicts on average 35% better than chance. We have also created a module that provides basic recommendations to the user based on these interaction predictions, and have shown that the user clicks on a recommended action 20% of the time, leading to an average task speedup of 8%. &lt;br /&gt;
&lt;br /&gt;
The tasks for future work are twofold: first, we will improve and generalize our prediction module to allow for more accurate predictions in a wide variety of interfaces, including those with a larger number of possible actions and states. Second, we will further study how, given predictions of future interactions, to modify the interface beyond giving basic recommendations. Ultimately, research in both of these directions will allow researchers to more efficiently glean information from complex data, enabling them to more quickly and easily contribute to their respective fields.&lt;br /&gt;
&lt;br /&gt;
=== Gaps ===&lt;br /&gt;
&lt;br /&gt;
* A few places in the proposal are cut off mid-sentence or are otherwise incomplete (e.g., some unfinished lists with &amp;quot;???&amp;quot; bullet points)&lt;br /&gt;
* The project summary makes it sound like our projects are more integrated than perhaps they are. Maybe we need to either re-evaluate our project summary, or better unify our individual projects somehow.&lt;br /&gt;
&lt;br /&gt;
== Adam ==&lt;br /&gt;
&lt;br /&gt;
Note: EJ and I have generated joint specific aims and contributions and plan to generate unified preliminary results.&lt;br /&gt;
Second note: Judging from the other examples, I misunderstood the assignment and construed this section as a review of intellectual merit instead of writing an intellectual merit section of my own. I&#039;ll add one soon.&lt;br /&gt;
&lt;br /&gt;
=== Intellectual merit ===&lt;br /&gt;
&lt;br /&gt;
The intellectual merit of the program is currently hard to determine because it is so fractionated. It seems to me that we have two or three goals:&lt;br /&gt;
# Using cognitive principles to develop new design guidelines, especially ones that are above the level of the individual task. The content related to working spheres and distributed cognition mostly belongs here.&lt;br /&gt;
# Using cognitive principles to make existing design principles more quantifiable. This could involve building an integrated cognitive model, but I think it&#039;s more plausible to generate separate measures for different cognitive/design principles.&lt;br /&gt;
# Improving methods for assessing interfaces. To the degree that this involves using cognitive simulations, it overlaps with goal number 2. Development of an eye-tracking method or an information integration framework would be unrelated to goal number 2. I don&#039;t think anyone has been pursuing either of these ideas and I would recommend dropping them.&lt;br /&gt;
&lt;br /&gt;
I think reorganizing our efforts so that they all fit together, either along these lines or some other way, is essential at this point.&lt;br /&gt;
&lt;br /&gt;
=== Gaps ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Currently I perceive more gaps than content. See above.&lt;/div&gt;</summary>
		<author><name>Adam Darlow</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Proposal_reviews_from_class_8&amp;diff=2410</id>
		<title>CS295J/Proposal reviews from class 8</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Proposal_reviews_from_class_8&amp;diff=2410"/>
		<updated>2009-03-10T22:03:10Z</updated>

		<summary type="html">&lt;p&gt;Adam Darlow: Added my review&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;These are reviews and intellectual contributions for the [[CS295J/Research proposal|research proposal]], as specified in [[CS295J/Assignments#Part_A.2C_due_Tuesday_noon|assignment 7, part A]]. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Name ==&lt;br /&gt;
&lt;br /&gt;
=== Intellectual merit ===&lt;br /&gt;
&lt;br /&gt;
(your paragraph here)&lt;br /&gt;
&lt;br /&gt;
=== Gaps ===&lt;br /&gt;
&lt;br /&gt;
* Your&lt;br /&gt;
* Bullet&lt;br /&gt;
* List&lt;br /&gt;
* Here&lt;br /&gt;
&lt;br /&gt;
== Gideon ==&lt;br /&gt;
&lt;br /&gt;
=== Intellectual merit ===&lt;br /&gt;
A meta-analysis of a subset of the cognitive psychology and human-computer interaction literature presents evidence that interactions between humans and computers can be improved by taking into account the cognitive resources required for different types of tasks. It is well known that humans and computers excel at different types of tasks, but the field has not made an explicit effort to standardize a set of guidelines that interface designers may use when developing computer systems. For people and computers to function in an optimized, complementary fashion, we still need a systematic way of distributing tasks amongst them.&lt;br /&gt;
&lt;br /&gt;
It is often the case that what computers excel at, humans have difficulty with (e.g., arithmetic), while the opposite is also true (e.g., pattern recognition). After a preliminary search of the literature, we&#039;ve identified common tasks in today&#039;s software whose designs neglect this performance dichotomy. Designers have not appropriately addressed these gaps, largely due to a lack of multidisciplinary research. Our meta-analysis presents data that supports our view on two different tasks: 3D shape rotation and face recognition.&lt;br /&gt;
&lt;br /&gt;
We have demonstrated in just a 30-hour study that computer-assisted 3D shape rotation is consistently preferred over human-only mental rotation. Complementarily, humans consistently outperform modern computer systems at face recognition. Sometimes these performance gaps are obvious due to a lack of technology or common sense; however, that is not always the case. Our preliminary study clearly demonstrates that systems benefit from consistent, rule-based task-distribution guidelines. Furthermore, the literature is rich with other types of tasks which await systematic exploitation. Upon further study, the community will benefit from a tested and systematic approach to designing improved human-computer interfaces. &lt;br /&gt;
&lt;br /&gt;
=== Gaps ===&lt;br /&gt;
&lt;br /&gt;
* Your&lt;br /&gt;
* Bullet&lt;br /&gt;
* List&lt;br /&gt;
* Here&lt;br /&gt;
&lt;br /&gt;
== EJ ==&lt;br /&gt;
&lt;br /&gt;
=== Intellectual merit ===&lt;br /&gt;
While attempts have been made in the past to apply cognitive theory to the task of developing human-computer interfaces, there remains much work to be done.  No standard and widespread model for the cognitive interaction with a computer exists.  The roles of perception and cognition, while examined and studied independently, are often at odds with empirical and successful design guidelines in practice.  Methods of study and evaluation, such as eye-tracking and workflow analysis, are still governed primarily by the needs at the end of the development process, with no quantitative model capable of influencing efficiency and consistency in the field.&lt;br /&gt;
&lt;br /&gt;
We demonstrate in wide-ranging preliminary work that cognitive theory has a tangible and valuable role in all the stages of interface design and evaluation: models of distributed cognition can exert useful influence on the design of interfaces and the guidelines that govern it; algorithmic workflow analysis can lead to new interaction methods, including predictive options; a model of human perception can greatly enhance the usefulness of multimodal user study techniques; a better understanding of &#039;&#039;why&#039;&#039; classical strategies work will bring us closer to the &amp;quot;holy grail&amp;quot; of automated interface evaluation and recommendation.  We can bring the field further down many of these only partially-explored avenues in the years ahead.&lt;br /&gt;
&lt;br /&gt;
=== Gaps ===&lt;br /&gt;
* Project summary needs expanding and context, but this may be impossible before we solidify other sections&lt;br /&gt;
* Background is strong, and does not have an excess of context, but that may be desirable here.  Some notes/outlines/questions need to be answered/expanded/removed.&lt;br /&gt;
* Is &amp;quot;Significance&amp;quot; supposed to be significantly (ha) different from a projection of the influence and effects of the study?  Perhaps these estimates should be parsed out of contributions and placed here.&lt;br /&gt;
* Need stronger distinction/clarification between &amp;quot;Aims&amp;quot; and &amp;quot;Contributions.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Eric ==&lt;br /&gt;
&lt;br /&gt;
=== Intellectual merit ===&lt;br /&gt;
&lt;br /&gt;
With the emergence of documented interaction histories in scientific visualization comes a new source of data for predicting user interactions. Correct prediction and corresponding UI modifications allow for a more personalized interface that can improve the user&#039;s efficiency in data exploration and enable groups of researchers working on the same type of task to learn from one another more efficiently.&lt;br /&gt;
&lt;br /&gt;
In a 30-hour preliminary study, we have implemented a basic interaction prediction module using a relational Markov model and shown through a series of user studies that it predicts on average 35% better than chance. We have also created a module that provides basic recommendations to the user based on these interaction predictions, and have shown that the user clicks on a recommended action 20% of the time, leading to an average task speedup of 8%. &lt;br /&gt;
&lt;br /&gt;
The tasks for future work are twofold: first, we will improve and generalize our prediction module to allow for more accurate predictions in a wide variety of interfaces, including those with a larger number of possible actions and states. Second, we will further study how, given predictions of future interactions, to modify the interface beyond giving basic recommendations. Ultimately, research in both of these directions will allow researchers to more efficiently glean information from complex data, enabling them to more quickly and easily contribute to their respective fields.&lt;br /&gt;
&lt;br /&gt;
=== Gaps ===&lt;br /&gt;
&lt;br /&gt;
* A few places in the proposal are cut off mid-sentence or are otherwise incomplete (e.g., some unfinished lists with &amp;quot;???&amp;quot; bullet points)&lt;br /&gt;
* The project summary makes it sound like our projects are more integrated than perhaps they are. Maybe we need to either re-evaluate our project summary, or better unify our individual projects somehow.&lt;br /&gt;
&lt;br /&gt;
== Adam ==&lt;br /&gt;
&lt;br /&gt;
Note: EJ and I have generated joint specific aims and contributions and plan to generate unified preliminary results.&lt;br /&gt;
&lt;br /&gt;
=== Intellectual merit ===&lt;br /&gt;
&lt;br /&gt;
The intellectual merit of the program is currently hard to determine because it is so fragmented. It seems to me that we have two or three goals:&lt;br /&gt;
# Using cognitive principles to develop new design guidelines, especially ones that are above the level of the individual task. The content related to working spheres and distributed cognition mostly belongs here.&lt;br /&gt;
# Using cognitive principles to make existing design principles more quantifiable. This could involve building an integrated cognitive model, but I think it&#039;s more plausible to generate separate measures for different cognitive/design principles.&lt;br /&gt;
# Improving methods for assessing interfaces. To the degree that this involves using cognitive simulations, it overlaps with goal number 2. Development of an eye-tracking method or an information integration framework would be unrelated to goal number 2. I don&#039;t think anyone has been pursuing either of these ideas and I would recommend dropping them.&lt;br /&gt;
&lt;br /&gt;
I think reorganizing our efforts so that they all fit together, either along these lines or some other way, is essential at this point.&lt;br /&gt;
&lt;br /&gt;
=== Gaps ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Currently I perceive more gaps than content. See above.&lt;/div&gt;</summary>
		<author><name>Adam Darlow</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal&amp;diff=2409</id>
		<title>CS295J/Research proposal</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal&amp;diff=2409"/>
		<updated>2009-03-10T21:57:56Z</updated>

		<summary type="html">&lt;p&gt;Adam Darlow: /* Specific Aims */ Merged EJ&amp;#039;s and my ideas&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Project summary==&lt;br /&gt;
&lt;br /&gt;
We propose to integrate theories of cognition, models of perception, rules of design, and concepts from the discipline of human-computer interaction to develop a predictive model of user performance in interacting with computer software for visual and analytical work.  Our proposed model comprises a set of computational elements representing components of human cognition, memory, or perception.  The collective abilities and limitations of these elements can be used to provide feedback on the likely efficacy of user interaction techniques.&lt;br /&gt;
&lt;br /&gt;
The choice of human computational elements will be guided by several models or theories of cognition and perception, including Gestalt, Distributed, Gibson, ???(where pathway, when pathway)???, ???working-memory???, ..., and ???.  The list of elements will be extensible.  The framework coupling them will allow for experimental predictions of utility of user interfaces that can be verified against human performance.&lt;br /&gt;
&lt;br /&gt;
Coupling the system with users will involve a data capture mechanism for collecting the communications between a user interface and a user.  These will be primarily event-based, and will include a new low-cost camera-based eye-tracking system.&lt;br /&gt;
&lt;br /&gt;
During early development, existing interfaces will be evaluated manually to characterize their &lt;br /&gt;
&lt;br /&gt;
(we need some way to specify interaction techniques...)&lt;br /&gt;
&lt;br /&gt;
==Specific Contributions==&lt;br /&gt;
# A model of human cognitive and perceptual abilities when using computers&lt;br /&gt;
## Impact: Such a model would allow us to predict human performance with interfaces.  Validation of the model would allow us to more rapidly converge on ideal interfaces while simultaneously ruling out sub-optimal ones.&lt;br /&gt;
## 3-Week Feasibility Study: (1) Distill (perhaps from review articles) the major findings in all of the relevant subfields into a set of principles that will be grouped to form appropriate model components.  Some of the most relevant subfields include: memory, attention, visual perception, psychoacoustics, task switching, categorization, event perception, haptics (ANY OTHERS?).  (2)  Use these to devise a simple predictive model of human interaction.  This model could simply consist of the set of design principles but should allow some form of quantitative scoring/evaluation of interfaces.  (3) Develop a small set (5-10) of candidate GUIs that are designed to help the user accomplish the same overarching task(s) (e.g. importing, analyzing, and flexibly graphing data in Matlab).   (4)  Test/validate the model by comparing predicted performance to actual performance with the GUIs.&lt;br /&gt;
## Risks/Costs: There are always potential risks to any human subjects that might participate in testing the model which might necessitate IRB approval.  Costs might include: (1) paying for any necessary hardware and software for developing, displaying, and testing the GUIs, and (2) paying for human subjects.&lt;br /&gt;
# Something about design rules collected and merged&lt;br /&gt;
## Something comparing these collected rules to a baseline (establishing their value)&lt;br /&gt;
## ???&lt;br /&gt;
# A predictive, fully-integrated model of user workflow which encompasses low-level tasks, working spheres, communication chains, interruptions and multi-tasking. (OWNER: Andrew Bragdon)&lt;br /&gt;
##Traditionally, software design and usability testing are focused on low-level task performance.  However, prior work (Gonzales, et al.) provides strong empirical evidence that users also work at a higher, &#039;&#039;working sphere&#039;&#039; level.  Su et al. develop a predictive model of task switching based on communication chains.  Our model will specifically identify and predict key aspects of higher-level information work behaviors, such as task switching.  We will conduct initial exploratory studies to test specific instances of this high-level hypothesis.  We will then use the refined model to identify specific predictions for the outcome of a formal, ecologically valid study involving a complex, non-trivial application.&lt;br /&gt;
## Impact: To truly design computing systems which are designed around the way users work, we must understand &#039;&#039;how&#039;&#039; users work.  To do this, we need to establish a predictive model of user workflow that encompasses multiple levels of workflow: individual task items, larger goal-oriented working spheres, multi-tasking behavior, and communication chains.  Current information work systems are almost always designed around the lowest level of workflow, the individual task, and do not take into account the larger workflow context.  Fundamentally, a predictive model would allow us to design computing systems which significantly increase worker productivity in the United States and around the world, by designing these systems around the way people work.&lt;br /&gt;
## Risk and Costs: Risk will play an important factor in this research, and thus a core goal of our research agenda will be to manage this risk.  The most effective way to do this will be to compartmentalize the risk by conducting empirical investigations - which will form the basis for the model - into the separate areas: low-level tasks, working spheres, communication chains, interruptions and multi-tasking in parallel.  While one experiment may become bogged down in details, the others will be able to advance sufficiently to contribute to a strong core model, even if one or two facets encounter setbacks during the course of the research agenda.  The primary cost drivers will be the preliminary empirical evaluations, the final system implementation, and the final experiments which will be designed to support the original hypothesis.  The cost will span student support, both Ph.D. and Master&#039;s students, as well as full-time research staff.  Projected cost: $1.5 million over three years.&lt;br /&gt;
## 3-week Feasibility Study:  To ascertain the feasibility of this project we will conduct an initial pilot test to investigate the core idea: a predictive model of user workflow.  We will spend 1 week studying the real workflow of several people through job shadowing.  We will then create two systems designed to help a user accomplish some simple information work task.  One system will be designed to take larger workflow into account (experimental group), while one will not (control group).  In a synthetic environment, participants will perform a controlled series of tasks while receiving interruptions at controlled times.  If the two groups perform roughly the same, then we will need to reassess this avenue of research.  However, if the two groups perform differently, then our pilot test will have lent support to our approach and core hypothesis.&lt;br /&gt;
# A low-overhead mechanism for capturing event-based interactions between a user and a computer, including web-cam based eye tracking.  (should we buy or find out about borrowing use of pupil tracker?)  &#039;&#039;Should we include other methods of interaction here?  Audio recognition seems to be the lowest cost.  It would seem that a system that took into account head-tracking, audio, and fingertip or some other high-DOF input would provide a very strong foundation for a multi-modal HCI system.  It may be more interesting to create a software toolkit that allows for synchronized usage of those inputs than a low-cost hardware setup for pupil-tracking.  I agree pupil-tracking is useful, but developing something in-house may not be the strongest contribution we can make with our time.&#039;&#039; (Trevor)&lt;br /&gt;
## Accuracy study of eye tracking (2 cameras?  double as an input device?)&lt;br /&gt;
## ???&lt;br /&gt;
# A classification of standard design guidelines into overarching principles and measurement of the  relation of each design principle with quantifiable cognitive principles. (Owners: EJ and Adam)&lt;br /&gt;
## A ranking/weighting of the relevance of each cognitive principle to each design principle.&lt;br /&gt;
## Specific rules for designing/assessing interfaces with respect to leveraging each cognitive principle for its most closely related design principles.&lt;br /&gt;
# A systematic method to determine task distribution based on psychological principles. (Owner - [http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon])&lt;br /&gt;
## The theory of distributed cognition (Clark-1998-TEM) is a well-suited basis for constructing a human-computer interaction framework (Hollan-2000-DCF). However, a systematic method of determining which tasks should be distributed to which agent in a distributed environment has yet to be clearly defined. I propose that the cognitive psychology literature is rife with empirical evidence on task performance variables. Dual-process theories (Evans-2003-ITM) usually agree on the nature of high-level cognitive operations which are used in human reasoning. It is also argued that low-level processing, which is based on perceptual similarity, contiguity, and association, is determined by a set of autonomous subsystems. In general, one might say that humans excel at these low-level functions in relation to traditional von Neumann architectures (and even current neural networks), but recently, cognitive science has been less focused on research on high-level reasoning in humans as the evidence is showing that we rarely engage in such demanding cognitive operations. Therefore, we believe that an optimal configuration for distributed cognition amongst people and computers will take advantage of these specialties and deficiencies, and distribute tasks accordingly. Our research project will contribute a set of guidelines, or heuristics, to allow engineers to effectively determine which tasks should be assigned to which entities.&lt;br /&gt;
## The impact of this contribution will be a systematic approach to interface design. Engineers will finally have a well-documented standard about how to determine what operations humans should be responsible for, and which should be off-loaded to the computer.&lt;br /&gt;
## There is no risk for this contribution. Costs associated will be an extensive search of cognitive psychology literature for the past 25 years or so. Research on memory, reasoning, perception and more will be required in order to conduct a complete and accurate assessment.&lt;br /&gt;
## In the course of 30 hours, we may perform a hand-analysis of the empirical results in psychology in order to develop a set of approximately 10 guidelines or rules.&lt;br /&gt;
# A method for collecting data on user performance in cognitive, perceptual, and motor-control tasks that requires less monetary cost, allows for a greater number of samples, and measures user improvements over time. (Owner - Eric)&lt;br /&gt;
##In order to reach a model of human cognitive and perceptual abilities when using computers, experimental analysis of human performance on these tasks will likely be necessary. User studies can often aid in this analysis, but they are expensive, time-consuming, and subject to user fatigue. Alternatively, we propose a web-based method for evaluating user performance in perceptual, motor-control, and cognitive tasks. The idea is to take a task that would normally be measured through user studies in a laboratory and map this task into a simple online game. Somewhat similar work has been done by Popovic et al. at the University of Washington, in that they took the task of folding proteins and mapped it into an online game ( http://www.economist.com/displaystory.cfm?story_id=11326188 ) with much success. We will analyze the value of this method by comparing it to similar tasks performed in laboratory experiments, both in terms of user performance and deployment costs.&lt;br /&gt;
##By converting the task into a simple game, we hope to reduce the problem of user fatigue. Additionally, if the game is played on a social networking site, we are able to track basic information of users who perform the tasks and, more importantly, can identify returning users. Thus, we can track not only a user&#039;s performance, but also how they improve at a given task over time.&lt;br /&gt;
##There are no clear risks involved with this study. Potential costs would be those required for development of each experiment and for web hosting.&lt;br /&gt;
##As a prototype, we can select one particular task to map into a simple online game. To check for feasibility, we need to ensure that the results we get from our proposed method are similar to the results found in laboratory settings. There are two possible effects we must test for: bias in results due to the mapping into a game, and bias in results due to the sample of subjects or any change caused by the web-based component. A simple test would be to first test users in a lab using traditional methods as a baseline, and then see how in-lab performance with the game-based mapping differs from that baseline. This will determine if the game appropriately measures the given task. Next, an online version of the game can be introduced, and performance can be compared with the laboratory settings. If performance is similar in all of these tests, we have found a method for measuring low-level tasks that allows for many samples and minimal cost.&lt;br /&gt;
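For comparing game-based and laboratory measurements without assuming equal variances, something like Welch&#039;s t-test could be used; the sketch below uses purely hypothetical completion times.&lt;br /&gt;

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic and approximate degrees of freedom
    for two independent samples with unequal variances."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances (n-1)
    se2 = va / na + vb / nb
    t = (mean(sample_a) - mean(sample_b)) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for degrees of freedom.
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical task completion times (seconds) in the two settings.
lab    = [12.1, 11.8, 13.0, 12.4, 11.9, 12.7]
online = [12.3, 12.0, 13.1, 12.6, 12.2, 12.9]
t, df = welch_t(lab, online)
print(round(t, 3), round(df, 1))  # a small |t| here suggests the settings agree
```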
&lt;br /&gt;
==Specific Aims==&lt;br /&gt;
&lt;br /&gt;
# build X&lt;br /&gt;
# build Y&lt;br /&gt;
# run experiment Z&lt;br /&gt;
# compare X with existing approach Q&lt;br /&gt;
# Develop a scoring system for interfaces to evaluate the degree to which all changes and causal relations are tracked by motion cues that are contiguous in time and/or space.&lt;br /&gt;
# Accurately assess computational and psychological costs for tasks and subtasks.  To do this, we will develop two non-trivial prototype systems; a conventional control system and a novel system which is based on our model of task switching.  We will use our model to make specific predictions about relative task performance and user affect responses, and then test these predictions empirically in a formal study.&lt;br /&gt;
# Develop model that accounts for qualitatively different psychological tasks&lt;br /&gt;
# Test model on real-world data&lt;br /&gt;
# Extract general design principles that are common across multiple sets of guidelines.&lt;br /&gt;
# Extract quantifiable cognitive principles that are relevant to HCI.&lt;br /&gt;
# Collect ratings of the design and cognitive principles on a range of interfaces and use this data to generate a relevance matrix.&lt;br /&gt;
# Generate specific rules for designing/assessing interfaces with regard to highly correlated pairs of design and cognitive principles.&lt;br /&gt;
# Investigate the possibility of using cognitive simulations to generate such assessments.&lt;br /&gt;
# Long-term goal: build a mixed-initiative interface generation system that takes some basic GUI requirements from the designer as input and attempts to maximize its score on the &amp;quot;evaluation rubric&amp;quot; within these constraints. (Eric)&lt;br /&gt;
# Develop model component to predict performance following interruptions or changes between work spheres.&lt;br /&gt;
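For the relevance matrix mentioned in the aims above, one simple realization would be a matrix of Pearson correlations between interface ratings on each design principle and each cognitive principle; the principle names and ratings below are entirely hypothetical.&lt;br /&gt;

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length rating vectors."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical 1-5 ratings of four interfaces on one design principle
# and two cognitive principles.
design_ratings = {"consistency": [5, 2, 4, 1]}
cognitive_ratings = {"working_memory_load": [4, 2, 5, 1],
                     "visual_grouping":     [2, 4, 1, 5]}

relevance = {(d, c): round(pearson(dv, cv), 2)
             for d, dv in design_ratings.items()
             for c, cv in cognitive_ratings.items()}
print(relevance)  # consistency: +0.9 with working_memory_load, -0.9 with visual_grouping
```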
&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
===Models of cognition===&lt;br /&gt;
There are several models of cognition, ranging from fundamental aspects of neurological processing to extremely high-level psychological analysis.  Three main theories seem to have become recognized as the most helpful in conceptualizing the actual process of HCI.  These models all agree that one cannot accurately analyze HCI by viewing the user without context, but the extent and nature of this context varies greatly.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[http://en.wikipedia.org/wiki/Activity_Theory Activity Theory]&#039;&#039;&#039;, developed in the early 20th century by Russian psychologists S.L. Rubinstein and A.N. Leontiev, posits the existence of four discrete aspects of human-computer interaction.  The &amp;quot;Subject&amp;quot; is the human interacting with the item, who possesses an &amp;quot;Object&amp;quot; (e.g. a goal) which they hope to accomplish by using a tool.  The Subject conceptualizes the realization of the Object via an &amp;quot;Action&amp;quot;, which may be as simple or complex as is necessary.  The Action is made up of one or more &amp;quot;Operations&amp;quot;, the most fundamental level of interaction including typing, clicking, etc.&lt;br /&gt;
&lt;br /&gt;
A key concept in Activity Theory is that of the artifact, which mediates all interaction.  The computer itself need not be the only artifact in HCI - others include all sorts of signs, algorithmic methods, instruments, etc.&lt;br /&gt;
&lt;br /&gt;
A longer synopsis of Activity Theory may be found at [http://mcs.open.ac.uk/yr258/act_theory/ this website].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The [http://en.wikipedia.org/wiki/Situated_cognition Situated Action Model]&#039;&#039;&#039; focuses on emergent behavior, emphasizing the subjective aspect of human-computer interaction and the therefore-necessary allowance for a wide variety of users.  This model proposes the least amount of contextual interaction, and seems to maintain that the interactive experience is determined entirely by the user&#039;s ability to use the system in question.  While limiting, this concept of usability can be very informative when designing for less tech-savvy users.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[http://en.wikipedia.org/wiki/Distributed_cognition Distributed Cognition]&#039;&#039;&#039; proposes that the computer (or, as in Activity Theory, any other artifact) can be used and ought to be thought of as an extension of the mental processing of the human.  This is not to say that the two are of equal or even comparable cognitive abilities, but that each has unique strengths and that recognition of and planning around these relative advantages can lead to increased efficiency and effectiveness.  The rotation of blocks in Tetris serves as a perfect example of this sort of cognitive symbiosis.&lt;br /&gt;
&lt;br /&gt;
(Steven)&lt;br /&gt;
&lt;br /&gt;
====Workflow Context====&lt;br /&gt;
There are at least two levels at which users work ([http://portal.acm.org/citation.cfm?id=985692.985707&amp;amp;coll=portal&amp;amp;dl=ACM&amp;amp;CFID=20781736&amp;amp;CFTOKEN=83176621 Gonzales, et al., 2004]).  Users accomplish individual low-level tasks which are part of larger &#039;&#039;working spheres&#039;&#039;; for example, an office worker might send several emails, create several Post-It (TM) note reminders, and then edit a Word document, each of these smaller tasks being part of a single larger working sphere of &amp;quot;adding a new section to the website.&amp;quot;  Thus, it is important to understand this larger workflow context - which often involves extensive levels of multi-tasking, as well as switching between a variety of computing devices and traditional tools, such as notebooks.  In this study it was found that the information workers surveyed typically switch individual tasks every 2 minutes and have many simultaneous working spheres which they switch between, on average every 12 minutes.  This frenzied pace of switching tasks and switching working spheres suggests that users will not be using a single application or device for a long period of time, and that affordances to support this characteristic pattern of information work are important.&lt;br /&gt;
&lt;br /&gt;
Czerwinski, et al. conducted a [http://portal.acm.org/citation.cfm?id=985692.985715&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 diary study] of task switching and interruptions of users in 2004.  This study showed that task complexity, task duration, length of absence, and number of interruptions all affected the users&#039; own perceived difficulty of switching tasks.  [http://delivery.acm.org/10.1145/1250000/1240730/p677-iqbal.pdf?key1=1240730&amp;amp;key2=4525483321&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Iqbal, et al.] studied task disruption and recovery in a field study, and found that users often visited several applications as a result of an alert, such as a new email notification, and that 27% of task suspensions resulted in 2 hours or more of disruption.  Users in the study said that losing context was a significant problem in switching tasks, and led in part to the length of some of these disruptions.  This work hints at the importance of providing cues to users to maintain and regain lost context during task switching.&lt;br /&gt;
&lt;br /&gt;
(Andrew Bragdon - OWNER)&lt;br /&gt;
&lt;br /&gt;
The problem of task switching is exacerbated when some tasks are more routine than others. When a person intends to switch from a routine task to a novel task at some later time, they often forget the context of the original task ([http://web.ebscohost.com/ehost/pdf?vid=1&amp;amp;hid=7&amp;amp;sid=54ec1e22-3df2-462c-b484-7a7c052c2173%40SRCSM1 Aarts et al., 1999]). Also, if both tasks are done in the same context, with the same tools or with the same materials, people have difficulty inhibiting the routine task while doing the novel task (Stroop, 1935). This inhibition also makes switching back to the routine task slower (Allport et al., 1994). All of these problems can be alleviated to some degree by salient cues in the environment. Enacting the intention to switch becomes easier when there is a salient reminder at the appropriate time (McDaniel and Einstein, 1993) and associating different environmental cues with different goals can automatically trigger appropriate behavior ([http://web.ebscohost.com/ehost/pdf?vid=1&amp;amp;hid=4&amp;amp;sid=68f77032-f093-4139-a833-760d2217b513%40sessionmgr9 Aarts and Dijksterhuis, 2003]).&lt;br /&gt;
&lt;br /&gt;
(Adam)&lt;br /&gt;
&lt;br /&gt;
(Edited by Andrew)&lt;br /&gt;
&lt;br /&gt;
====Quantitative Models: Fitts&#039;s Law, Steering Law====&lt;br /&gt;
Fitts&#039;s law and the steering law are examples of quantitative models that predict user performance when using certain types of user interfaces.  In addition to these classic models, [http://delivery.acm.org/10.1145/1250000/1240850/p1495-cao.pdf?key1=1240850&amp;amp;key2=9904483321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Cao and Zhai] developed and validated a quantitative model of human performance of pen stroke gestures in 2007.  [http://tlaloc.sfsu.edu/~lank/research/appearing/FSS604LankE.pdf Lank and Saund] utilized a model which used curvature to predict the speed of a pen as it moved across a surface to help disambiguate target selection intent.&lt;br /&gt;
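For reference, Fitts&#039;s law in its common Shannon formulation predicts movement time as MT = a + b log2(D/W + 1), where D is the distance to the target, W is its width, and a and b are empirically fitted device constants. A minimal sketch, with made-up constants:&lt;br /&gt;

```python
import math

def fitts_movement_time(a, b, distance, width):
    """Shannon formulation of Fitts's law: MT = a + b * log2(D/W + 1)."""
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty

# Hypothetical fitted constants (a: reaction overhead in s, b: s/bit).
a, b = 0.1, 0.15
# Doubling the target width at a fixed distance lowers the predicted time.
t_small = fitts_movement_time(a, b, distance=512, width=16)  # ID = log2(33)
t_large = fitts_movement_time(a, b, distance=512, width=32)  # ID = log2(17)
print(round(t_small, 3), round(t_large, 3))  # 0.857 0.713
```

This kind of closed-form prediction is what makes the model directly usable for scoring candidate interface layouts.&lt;br /&gt;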
&lt;br /&gt;
In addition, quantitative models are often tested against new interfaces to verify that they hold.  For example, [http://portal.acm.org/citation.cfm?id=1054972.1055012&amp;amp;coll=GUIDE&amp;amp;dl=ACM&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Grossman et al.] verified that their Bubble Cursor approach to enlarging effective pointing target sizes obeyed Fitts&#039;s law for actual distance traveled.&lt;br /&gt;
&lt;br /&gt;
In addition to formal models, machine learning techniques have been applied to modeling user interaction as well.  For example, [http://delivery.acm.org/10.1145/1250000/1240669/p271-hurst.pdf?key1=1240669&amp;amp;key2=6465483321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Hurst, et al.], used a learning classifier, trained on low-level mouse and keyboard usage patterns, to identify novice and expert users dynamically with accuracies as high as 91%.  This classifier was then used to provide different information and feedback to the user as appropriate.&lt;br /&gt;
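Hurst et al. trained a classifier on low-level usage features; as a much-simplified sketch of the general idea (not their method), a nearest-centroid classifier over hypothetical per-user interaction features might look like this:&lt;br /&gt;

```python
from statistics import mean

def train_centroids(labeled_features):
    """Compute a per-label mean feature vector (nearest-centroid training)."""
    by_label = {}
    for label, feats in labeled_features:
        by_label.setdefault(label, []).append(feats)
    return {label: tuple(mean(dim) for dim in zip(*rows))
            for label, rows in by_label.items()}

def classify(centroids, feats):
    """Assign the label whose centroid is closest in squared distance."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(centroids[label], feats))

# Hypothetical features: (mean pause between events in s, menu selection time in s).
training = [
    ("novice", (1.8, 3.2)), ("novice", (2.1, 2.9)),
    ("expert", (0.4, 1.1)), ("expert", (0.6, 0.9)),
]
centroids = train_centroids(training)
print(classify(centroids, (0.5, 1.0)))  # expert
```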
&lt;br /&gt;
(Andrew Bragdon - OWNER)&lt;br /&gt;
&lt;br /&gt;
====Distributed cognition====&lt;br /&gt;
&lt;br /&gt;
Distributed cognition is a theory in which thinking takes place both inside and outside the brain. Humans have a great ability to use tools and to incorporate their environments into their sphere of thinking. Clark puts it nicely in [[http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature#Cognition Clark-1994-TEM]].&lt;br /&gt;
&lt;br /&gt;
Therefore, optimal configurations when considering HCI design will treat the brain, person, interface, and computer as a single holistic cognitive system.&lt;br /&gt;
&lt;br /&gt;
In practical terms, the issue at hand for our proposal is how to best maximize utility by distributing the cognitive tasks at hand to different components of the whole system. Simply, which tasks can we off-load to the computer to do for us, faster and more accurately? What tasks should we purposely leave the computer out of?&lt;br /&gt;
&lt;br /&gt;
Typically, the tasks most eligible to be off-loaded are the ones we perform poorly on. Conveniently, we excel at the tasks that computers perform poorly on. Here are a few examples:&lt;br /&gt;
&lt;br /&gt;
*Computers&#039; areas of expertise: number crunching, memory, logical reasoning, precision&lt;br /&gt;
*Humans&#039; areas of expertise: associative thought, real-world knowledge, social behavior, alogical reasoning, imprecision&lt;br /&gt;
&lt;br /&gt;
Using this division of cognitive labor allows us to optimize task workflows. Ignoring it creates strain and bottlenecks at the computer, the human, or the interface. The field of HCI is full of examples of failures which can be attributed to not recognizing which tasks should be handled by which sub-system.&lt;br /&gt;
&lt;br /&gt;
As a heuristic to divide thinking, one might turn to the dual-process theory literature [http://vrl.cs.brown.edu/wiki/CS295J/Literature Evans-2003-ITM]. What is most often called System 1 is what humans are good at, while System 2 tasks are what computers do well.&lt;br /&gt;
&lt;br /&gt;
(Owner - [http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon])&lt;br /&gt;
&lt;br /&gt;
====Information Processing Approach to Cognition====&lt;br /&gt;
The dominant approach in Cognitive Science is called information processing. It sees cognition as a system that takes in information from the environment, forms mental representations and manipulates those representations in order to create the information needed to achieve its goals. This approach includes three levels of analysis originally proposed by Marr (will cite):&lt;br /&gt;
# Computational - What are the goals of a process or representation? What are the inputs and desired outputs required of a system which performs a task? Models at this level of analysis are often considered normative models, because any agent wanting to perform the task should conform to them. Rational agent models of decision making, for example, belong at this level of analysis.&lt;br /&gt;
# Process/Algorithmic - What are the processes or algorithms involved in how humans perform the task? This is the most common level of analysis as it focuses on the mental representations, manipulations and computational faculties involved in actual human processing. Algorithmic descriptions of human capabilities and limitations, such as working memory size, belong at this level of analysis.&lt;br /&gt;
# Implementation - How are the processes and algorithms realized in actual biological computation? Dopamine theories of reward learning, for example, belong at this level of analysis.&lt;br /&gt;
The information processing approach is often contrasted with the distributed cognition approach. Its advantage is that it finds general mechanisms that are valid across many different contexts and situations. Its disadvantage is that it can have difficulty explaining the rich interactions between people and their environment.&lt;br /&gt;
&lt;br /&gt;
In considering users as information processors, interfaces should take into account people&#039;s computational limitations on short term memory, learning and vision as well as the algorithms and representations that they use to process information and pursue goals.&lt;br /&gt;
&lt;br /&gt;
(Adam)&lt;br /&gt;
&lt;br /&gt;
===Models of perception===&lt;br /&gt;
&lt;br /&gt;
====Gibsonianism====&lt;br /&gt;
(relevant stuff about Gibson&#039;s theory)&lt;br /&gt;
We will build on top of this theory/model by ... .&lt;br /&gt;
&lt;br /&gt;
Gibsonianism, named after James J. Gibson and more commonly referred to as ecological psychology, is an epistemological direct realist theory of perception and action.  In contrast to information processing and cognitivist approaches, which generally assume that perception is a constructive process operating on impoverished sense-data inputs (e.g. photoreceptor activity) to generate representations of the world with added structure and meaning (e.g. a mental or neural &amp;quot;picture&amp;quot; of a chair), ecological psychology treats perception as direct, non-inferential, unmediated (by retinal images or mental representations) epistemic contact with behaviorally-relevant features of the environment (Warren, 2005).  The possibilities for action that the environment offers a given animal are taken to be specified by information available in structured energy distributions (e.g. the optic array of light arriving at the eyes), and these possibilities for action constitute the affordances of the environment with respect to that animal (Gibson, 1986).&lt;br /&gt;
&lt;br /&gt;
Gibson&#039;s notion of affordance has many implications for our enterprise; however, it is worth noting that the original definition of affordance emphasizes possibilities for action and not their relative likelihoods.  For example, for most humans, laptop computer screens afford puncturing with Swiss Army knives; however, it is unlikely that a user will attempt to retrieve an electronic coupon by carving it out of their monitor.  This example illustrates that interfaces often afford a class of actions that are undesirable from the perspective of both the designer and the user.&lt;br /&gt;
&lt;br /&gt;
(Jon)&lt;br /&gt;
&lt;br /&gt;
===Design guidelines===&lt;br /&gt;
&lt;br /&gt;
A multitude of rule sets exists for the design of not only interfaces, but also architecture, city planning, and software development.  They range in scale from one primary rule to as many as Christopher Alexander&#039;s 253 rules for urban environments,&amp;lt;ref&amp;gt;http://hci.rwth-aachen.de/materials/publications/borchers2000a.pdf&amp;lt;/ref&amp;gt; which he introduced along with the concept of design patterns in the 1970s.  Studies have likewise been conducted on the use of these rules:&amp;lt;ref&amp;gt;http://stl.cs.queensu.ca/~graham/cisc836/lectures/readings/tetzlaff-guidelines.pdf&amp;lt;/ref&amp;gt; guidelines are often only partially understood, indistinct to the developer, and &amp;quot;fraught&amp;quot; with potential usability problems in real-world situations.&lt;br /&gt;
&lt;br /&gt;
====Application to AUE====&lt;br /&gt;
&lt;br /&gt;
And yet the vast majority of guideline sets, including the most popular rulesets, have been arrived at heuristically.  The most successful, such as Raskin&#039;s and Shneiderman&#039;s, have been forged from years of observation rather than empirical study and experimentation.  The problem is similar to the circular logic faced by automated usability evaluations: an automated system is limited to offering suggestions from a set of preprogrammed guidelines, which have often not been subjected to rigorous experimentation.&amp;lt;ref&amp;gt;http://www.eecs.berkeley.edu/Pubs/TechRpts/2000/CSD-00-1105.pdf&amp;lt;/ref&amp;gt;  In the vast majority of existing studies, emphasis has been placed on either the development of guidelines or the application of existing guidelines to automated evaluation.  A mutually reinforcing development of both simultaneously has not been attempted.&lt;br /&gt;
&lt;br /&gt;
Overlap between rulesets is inevitable.  For our purposes of evaluating existing rulesets efficiently, without extracting and analyzing each rule individually, it may be desirable to identify the overarching &#039;&#039;principles&#039;&#039; or &#039;&#039;philosophy&#039;&#039; (at most two or three) of a given ruleset and to determine their quantitative relevance to problems of cognition.&lt;br /&gt;
&lt;br /&gt;
====Popular and seminal examples====&lt;br /&gt;
Shneiderman&#039;s [http://faculty.washington.edu/jtenenbg/courses/360/f04/sessions/schneidermanGoldenRules.html Eight Golden Rules] date to 1987 and are arguably the most cited.  They are heuristic, but can be roughly classified by cognitive objective: at least two rules apply primarily to &#039;&#039;repeated use&#039;&#039;, as opposed to &#039;&#039;discoverability&#039;&#039;.  Up to five of Shneiderman&#039;s rules emphasize &#039;&#039;predictability&#039;&#039; in the outcomes of operations and &#039;&#039;increased feedback and control&#039;&#039; in the agency of the user.  His final rule, paradoxically, removes control from the user by suggesting a reduced short-term memory load, which we can arguably classify as &#039;&#039;simplicity&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Raskin&#039;s [http://www.mprove.de/script/02/raskin/designrules.html Design Rules] are classified by the author into five principles, augmented by definitions and supporting rules.  One principle is primarily aesthetic (a design problem arguably outside the bounds of this proposal) and one is a basic endorsement of testing; the remaining three reflect philosophies similar to Shneiderman&#039;s: reliability or &#039;&#039;predictability&#039;&#039;; &#039;&#039;simplicity&#039;&#039; or &#039;&#039;efficiency&#039;&#039; (which we can construe as two sides of the same coin); and a new concept of &#039;&#039;uninterruptibility&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Maeda&#039;s [http://lawsofsimplicity.com/?cat=5&amp;amp;order=ASC Laws of Simplicity] are fewer and ostensibly emphasize &#039;&#039;simplicity&#039;&#039; exclusively, although elements of &#039;&#039;use&#039;&#039; as related by Shneiderman&#039;s rules and &#039;&#039;efficiency&#039;&#039; as defined by Raskin may be facets of this simplicity.  Google&#039;s corporate mission statement presents [http://www.google.com/corporate/ux.html Ten Principles], only half of which can be considered true interface guidelines.  &#039;&#039;Efficiency&#039;&#039; and &#039;&#039;simplicity&#039;&#039; are cited explicitly, aesthetics are once again noted as crucial, and working within a user&#039;s trust is another application of &#039;&#039;predictability&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
====Elements and goals of a guideline set====&lt;br /&gt;
&lt;br /&gt;
Myriad rulesets exist, but variation among them is scarce; it indeed seems possible to parse these common rulesets into overarching principles that can be converted to, or associated with, quantifiable cognitive properties.  For example, &#039;&#039;simplicity&#039;&#039; likely has an analogue in the short-term memory retention or visual retention of the user, vis-à-vis the rule of [http://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=j5q0VvOGExYC&amp;amp;oi=fnd&amp;amp;pg=PA357&amp;amp;dq=seven+plus+or+minus+two&amp;amp;ots=prI3PKJBar&amp;amp;sig=vOZnqpnkXKGYWxK6_XlA4I_CRyI Seven, Plus or Minus Two].  &#039;&#039;Predictability&#039;&#039; likewise may have an analogue in Activity Theory, with regard to a user&#039;s perceptual expectations for a given action; &#039;&#039;uninterruptibility&#039;&#039; has implications for cognitive task-switching;&amp;lt;ref&amp;gt;http://portal.acm.org/citation.cfm?id=985692.985715&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774&amp;lt;/ref&amp;gt; and so forth.&lt;br /&gt;
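As a first illustration of mapping &#039;&#039;simplicity&#039;&#039; to short-term memory, a minimal sketch (assuming a hypothetical &amp;quot;ui&amp;quot; mapping of containers to their visible controls) might flag any grouping that exceeds the upper bound of the seven-plus-or-minus-two estimate:&lt;br /&gt;

```python
# A toy operationalization of the simplicity analogue: flag any container
# whose number of simultaneously visible choices exceeds the upper bound of
# the seven-plus-or-minus-two short-term memory estimate. The bound and the
# ui structure are illustrative assumptions, not part of the proposal.
STM_UPPER_BOUND = 9  # seven plus two

def overloaded_containers(ui):
    # ui maps container names to lists of visible child controls
    return sorted(name for name, children in ui.items()
                  if len(children) > STM_UPPER_BOUND)
```

A real evaluator would of course need a richer notion of what counts as one &amp;quot;chunk&amp;quot;, but even this crude count makes the guideline testable.&lt;br /&gt;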
&lt;br /&gt;
Within the scope of this proposal, we aim to reduce and refine these philosophies found in seminal rulesets and identify their logical cognitive analogues.  By assigning a quantifiable taxonomy to these principles, we will be able to rank and weight them with regard to their real-world applicability, developing a set of &amp;quot;meta-guidelines&amp;quot; and rules for applying them to a given interface in an automated manner.  Combined with cognitive models and multi-modal HCI analysis, we seek to develop, in parallel with these guidelines, the interface evaluation system responsible for their application. [[User:E J Kalafarski|E J Kalafarski]] 15:21, 6 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
:I. Introduction, applications of guidelines&lt;br /&gt;
::A. Application to automated usability evaluations (AUE)&lt;br /&gt;
:II. Popular and seminal examples&lt;br /&gt;
::A. Shneiderman&lt;br /&gt;
::B. Google&lt;br /&gt;
::C. Maeda&lt;br /&gt;
::D. Existing international standards&lt;br /&gt;
:III. Elements of guideline sets, relationship to design patterns&lt;br /&gt;
:IV. Goals for potentially developing a guideline set within the scope of this proposal&lt;br /&gt;
&lt;br /&gt;
===User interface evaluations===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Interaction capture====&lt;br /&gt;
&lt;br /&gt;
[http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4376144 Yi et al.] have surveyed the visualization literature and categorized the types of interactions that users face.  They are as follows:&lt;br /&gt;
# Select: mark something as interesting &lt;br /&gt;
# Explore: show me something else &lt;br /&gt;
# Reconfigure: show me a different arrangement &lt;br /&gt;
# Encode: show me a different representation&lt;br /&gt;
# Abstract/Elaborate: show me more or less detail &lt;br /&gt;
# Filter: show me something conditionally &lt;br /&gt;
# Connect: show me related items &lt;br /&gt;
&lt;br /&gt;
Different GUI components may be able to perform the same type of interaction. We would like to categorize the GUI components or patterns that are used to bring about these interactions, giving us a library of components we can use to complete a given task. The goal is to create components for a given interaction that minimize cost to the user. Because the cost of a component likely depends on the other components used, the goal of the designer might be to choose a combination of components that minimizes this total cost. To do this, we need a way to measure costs, which is discussed in the next section.&lt;br /&gt;
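The cost-minimizing choice of components described above can be sketched as a brute-force search.  All component names, per-use costs, and the style-consistency penalty below are hypothetical placeholders, standing in for measured interaction costs:&lt;br /&gt;

```python
import itertools

# A brute-force sketch of choosing one GUI component per interaction type so
# that total user cost is minimized. Component names and costs are made up.
components = {
    "select": {"checkbox": 1.0, "lasso": 2.5},
    "filter": {"slider": 1.5, "query_box": 3.0},
    "connect": {"highlight_links": 1.0, "brushing": 2.0},
}

def total_cost(choice):
    # Base cost plus a penalty for each distinct widget style beyond the
    # first, modeling the idea that component costs are interdependent.
    base = sum(components[task][widget] for task, widget in choice.items())
    styles = len(set(choice.values()))
    return base + 0.5 * (styles - 1)

def best_combination():
    tasks = sorted(components)
    options = (sorted(components[t]) for t in tasks)
    choices = [dict(zip(tasks, widgets)) for widgets in itertools.product(*options)]
    return min(choices, key=total_cost)
```

With these toy numbers the cheapest widget for each interaction wins, but a heavier consistency penalty could instead favor reusing one widget style across tasks, which is exactly the interdependence the text describes.&lt;br /&gt;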
&lt;br /&gt;
====Cost-based analyses====&lt;br /&gt;
In [http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4658124 A Framework of Interaction Costs in Information Visualization], Lam performs a survey of 32 user studies and classifies several types of costs that can be used for qualitative interface evaluation. The classification scheme is based on Donald Norman&#039;s [http://en.wikipedia.org/wiki/Seven_stages_of_action Seven Stages of Action] from his book, [http://www.amazon.com/Design-Everyday-Things-Donald-Norman/dp/0385267746 The Design of Everyday Things] ([http://www.networksplus.net/tracyj/everydaythings.pdf summary]).&lt;br /&gt;
&lt;br /&gt;
# &#039;&#039;&#039;Decision costs:&#039;&#039;&#039; How does user performance decrease when there is an overwhelming amount of data to observe or too many possible actions to take?&lt;br /&gt;
# &#039;&#039;&#039;System-power costs:&#039;&#039;&#039; How does the user translate a high-level goal into a sequence of allowable actions by the interface?&lt;br /&gt;
# &#039;&#039;&#039;Multiple input mode costs:&#039;&#039;&#039; Cost of providing an action selection system that is not unified, for example, if there is one button that does two different things, depending on context. &lt;br /&gt;
# &#039;&#039;&#039;Physical-motion costs:&#039;&#039;&#039; Physical cost to the user to interact with the interface, for example, measuring mouse movement costs with Fitts&#039; Law.&lt;br /&gt;
# &#039;&#039;&#039;Visual-cluttering costs:&#039;&#039;&#039; Cost due to unwanted visual distractions, such as a mouse hovering pop-up occluding part of the screen.&lt;br /&gt;
# &#039;&#039;&#039;View- and state-change costs:&#039;&#039;&#039; When the user causes the interface to change views, the new view should be consistent with the old one, in that it should meet the user&#039;s expectations of where things should be, based on his knowledge of the old view.&lt;br /&gt;
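As one concrete instance of a physical-motion cost, Fitts&#039; Law can be computed directly from target geometry; the intercept and slope constants in this sketch are illustrative rather than empirically fitted:&lt;br /&gt;

```python
import math

# Physical-motion cost via Fitts law (Shannon formulation): movement time
# grows with the index of difficulty log2(D/W + 1), where D is the distance
# to the target and W is its width.
A_SEC = 0.1   # assumed reaction-time intercept, seconds
B_SEC = 0.15  # assumed slope, seconds per bit

def movement_time(distance_px, width_px):
    index_of_difficulty = math.log2(distance_px / width_px + 1.0)
    return A_SEC + B_SEC * index_of_difficulty

# A distant, small target costs more than a near, large one:
far_small = movement_time(800, 20)
near_large = movement_time(100, 50)
```

In an automated evaluator, such per-action times could be summed along a task&#039;s required mouse path to compare candidate layouts.&lt;br /&gt;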
&lt;br /&gt;
==== Evaluation in practice ====&lt;br /&gt;
User interfaces are usually evaluated in practice using two methods: &#039;&#039;usability inspection methods&#039;&#039;, where a programmer or one or more experts evaluates the interface through inspection; or &#039;&#039;usability testing&#039;&#039;, where empirical tests are performed with a group of naive human users. Usability inspection methods include [http://en.wikipedia.org/wiki/Cognitive_walkthrough Cognitive walkthrough], [http://en.wikipedia.org/wiki/Heuristic_evaluation Heuristic evaluation], and [http://en.wikipedia.org/wiki/Pluralistic_walkthrough Pluralistic walkthrough]. While these inspection methods do not use naive human subjects, their details might be useful in helping to formalize what interactions occur between a user and an interface, and what each interaction&#039;s costs are for a given design.&lt;br /&gt;
&lt;br /&gt;
[http://portal.acm.org/citation.cfm?id=108862 Jeffries et al.] provide a real-world comparison of two usability inspection methods (heuristic evaluation and cognitive walkthrough), usability testing, and adherence to published software guidelines for interface design.&lt;br /&gt;
&lt;br /&gt;
=== Multimodal HCI ===&lt;br /&gt;
&lt;br /&gt;
Continued advancements in several signal processing techniques have given rise to a multitude of mechanisms that allow for rich, multimodal, human-computer interaction.  These include systems for head-tracking, eye- or pupil-tracking, fingertip tracking, recognition of speech, and detection of electrical impulses in the brain, among others [http://vr.kjist.ac.kr/~dhong/website/paperworks/hci2002coursePapers/April24/Sharma98.pdf Sharma-1998-TMH].  With ever-increasing computing power, integrating these systems in real-time applications has become a plausible endeavor. &lt;br /&gt;
&lt;br /&gt;
==== Head-tracking ====&lt;br /&gt;
:In virtual, stereoscopic environments, head-tracking has been exploited with great success to create an immersive effect, allowing a user to move freely and naturally while dynamically updating the user’s viewpoint in a visual environment.  Head-tracking has been employed in non-immersive settings as well, though careful consideration must be paid to account for unintended movements by the user, which may result in distracting visual effects.&lt;br /&gt;
&lt;br /&gt;
==== Pupil-tracking ====&lt;br /&gt;
:Pupil-tracking has been studied a great deal in the field of Cognitive Science ... (need some examples here from CogSci).  In the HCI community, pupil-tracking has traditionally been used for post-hoc analysis of interface designs, and is particularly prevalent in web interface design.  An alternative use of pupil-tracking is to employ it in real time as an actual mode of interaction.  This has been examined in relatively few cases (citations), typically using the eyes to control an onscreen cursor.  As with head-tracking, implementations of pupil-tracking must account for unintended eye movements, which are incredibly frequent.&lt;br /&gt;
&lt;br /&gt;
==== Fingertip-tracking, Gestural recognition ====&lt;br /&gt;
:Fingertip tracking and gestural recognition of the hands are the subjects of much research in the HCI community, particularly in the Virtual Environment and Augmented Reality disciplines.  Less implicit than head or pupil-tracking, gestural recognition of the hands may draw upon the wealth of precedents readily observed in natural human interactions.  As sensing technologies become less obtrusive and more robust, this method of interaction has the potential to become quite effective.&lt;br /&gt;
&lt;br /&gt;
==== Speech Recognition ====&lt;br /&gt;
:Speech recognition has improved considerably, though effectively implementing its desired effects remains non-trivial in many applications.  (More on this later.)&lt;br /&gt;
&lt;br /&gt;
==== Brain Activity Detection ====&lt;br /&gt;
:The use of electroencephalograms (EEGs) in HCI is quite recent, and with limited degrees of freedom, few robust interfaces have been designed around them.  Some recent advances in the pragmatic use of EEGs in HCI research can be seen in [http://portal.acm.org/citation.cfm?id=1357054.1357187&amp;amp;coll=Portal&amp;amp;dl=GUIDE&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Grimes et al.]  The possibility of using brain function to interface with a machine is cause for great excitement in the HCI community, and further advances in non-invasive techniques for accessing brain function may allow teleo-HCI to become a reality.&lt;br /&gt;
&lt;br /&gt;
In sum, the synchronized use of these modes of interaction makes it possible to architect an HCI system capable of sensing and interpreting many of the mechanisms humans use to transmit information to one another.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
=== Workflow analysis ===&lt;br /&gt;
&lt;br /&gt;
Research in workflow and interaction analysis remains relatively sparse, though its utility would appear to be manifold.  Tools for such analysis have the potential to facilitate data navigation, provide search mechanisms, and allow for more efficient collaborative discovery.  In addition, awareness and caching of interaction histories readily allows for explanatory presentations of results and has the potential to provide training data for machine learning mechanisms.&lt;br /&gt;
&lt;br /&gt;
VisTrails is an optimized workflow system developed at the SCI Institute at the University of Utah and implemented within their VTK visualization package.  The primary purpose of the system is to increase performance when working with multiple visualizations simultaneously.  This is accomplished by storing low-level workflow processes to reduce computational redundancy.  Three papers on VisTrails can be found here: [http://www.cs.brown.edu/people/trevor/Papers/Callahan-2006-MED.pdf Callahan-2006-MED], [http://www.cs.brown.edu/people/trevor/Papers/Callahan-2006-VVM.pdf Callahan-2006-VVM], [http://www.cs.brown.edu/people/trevor/Papers/Bavoil-2005-VEI.pdf Bavoil-2005-VEI]&lt;br /&gt;
&lt;br /&gt;
Jeff Heer of Stanford (formerly Berkeley) has presented work on using graphical interaction histories within the Tableau InfoVis application.  Though geared toward two-dimensional visualizations with clearly defined events, his work offers some very useful design guidelines for working with interaction histories, including some evaluation from the deployment of his techniques within Tableau.  His paper from InfoVis &#039;08 can be seen here: [http://www.cs.brown.edu/people/trevor/Papers/Heer-2008-GraphicalHistories.pdf Heer-2008-GraphicalHistories]&lt;br /&gt;
&lt;br /&gt;
Trevor&#039;s preliminary work on using interaction histories in 3D, time-varying scientific visualizations, presented at Vis &#039;08, can be seen here: [http://www.cs.brown.edu/people/trevor/trevor_iweb/Publications_files/obrien-2008-visDemo.pdf Abstract], [http://www.cs.brown.edu/people/trevor/trevor_iweb/Publications_files/obrien-2008-visPoster.pdf Poster]&lt;br /&gt;
&lt;br /&gt;
Optimizing workflows that have been captured -- Tovi?&lt;br /&gt;
&lt;br /&gt;
Does ethnography fit in here?&lt;br /&gt;
&lt;br /&gt;
==Significance==&lt;br /&gt;
&lt;br /&gt;
==Preliminary results==&lt;br /&gt;
&lt;br /&gt;
=== Gideon ===&lt;br /&gt;
&lt;br /&gt;
====Preliminary Psychological Measures Show the Need for a Standardized Division of Cognitive Labor Across Humans and Computers.====&lt;br /&gt;
&lt;br /&gt;
A meta-analysis of a subset of the cognitive psychology and human-computer interaction literature presents evidence that interactions between humans and computers can be improved by taking into account the cognitive resources required for different types of tasks. It is well known that humans and computers excel at different types of tasks, but the field has not made an explicit effort to standardize a set of guidelines that interface designers may use when developing computer systems. For people and computers to function in an optimized, complementary fashion, we still need a systematic way of distributing tasks amongst them.&lt;br /&gt;
&lt;br /&gt;
It is often the case that what computers excel at, humans have difficulty with (e.g., arithmetic); the opposite is also true (e.g., pattern recognition). After a preliminary search of the literature, we have identified common tasks in today&#039;s software that neglect this performance dichotomy. Designers in the computer industry have not appropriately addressed these gaps, due to a lack of multidisciplinary research. Our meta-analysis presents supporting data for two different tasks: 3D shape rotation and face recognition.&lt;br /&gt;
&lt;br /&gt;
In just a 30-hour study, we have demonstrated that computer-assisted 3D shape rotation is consistently preferred over human-only mental rotation. Complementarily, humans consistently outperform modern computer systems in face recognition. Sometimes these performance gaps are obvious, owing to a lack of technology or of common sense; however, that is not always the case. Our preliminary study demonstrates that systems benefit from consistent, rule-based task-distribution guidelines. Furthermore, the literature is rich with other types of tasks that await systematic exploitation. Upon further study, the community will benefit from a tested and systematic approach for designing improved human-computer interfaces.&lt;br /&gt;
&lt;br /&gt;
=== Steven ===&lt;br /&gt;
**C.1.3 Terms&lt;br /&gt;
*C.2 Garage Band study&lt;br /&gt;
:This study presented a participant with the Apple Garage Band program.  A sample song had been synthesized using the program, and parts of it “broken” in various ways (e.g. pitch changed, track segment deleted, etc.).  The user was then tasked with returning the song to its original form.&lt;br /&gt;
**C.2.1 Feasibility of a creative problem-solving task&lt;br /&gt;
::This task required users to approach the song from a fairly mechanistic perspective, as segments that sounded incorrect could only be fixed via the manipulation of track variables.  Finding such broken segments required a qualitative listening trial, however, so the user had to correlate the song’s aural product with its technical components.  These individual problems allow each user’s intuitive problem-solving process to be assessed several times within one GUI without the process becoming repetitive.  If the GUI adheres to proper design rules and maintains task saliency, the user ought to have minimal difficulty performing the first task; even if there is an initial learning curve associated with the program, the user ought to see continual improvement in the effort required to solve each task.&lt;br /&gt;
**C.2.2 Unique aspects and pertinence&lt;br /&gt;
::The task revealed a few very interesting flaws in the program’s user interface design, as well as a few intriguing insights into the user’s process for reaching the goal.  The interface was designed to appear elegant and to convey ease of use, but in doing so it seems to have neglected sign salience for the average user.  Although the program can create very high quality music, it is bundled with Mac OS X and thus ought to be fairly easily comprehended by the average user.  The study showed, however, that the user had great difficulty finding certain functions, and in some cases even tasks as simple as dragging a track segment to extend its length were achieved only after multiple attempts.&lt;br /&gt;
::The user’s behavior was both puzzling and insightful.  He replayed the entire song several times throughout the trial, before and after each task and often during it.  He would also replay the segment of the song on which he was working several times between steps.  In attempting to fix the segment with an altered pitch, for example, the user played through the entire song three times, then played only the region in question repeatedly, then every segment besides the altered one, then raised the volume of the altered segment and replayed the region again, all before adjusting the pitch and repeating this process of multiple plays.  This behavior reveals that the user prepared to achieve the intermediate goal of pitch adjustment not by determining the segment that needed adjustment and then proceeding, but by mentally arranging the entire task beforehand.  That is to say, the user did not recognize his goal and proceed piecemeal by clicking the concerned segment and adjusting the pitch by trial and error.  Instead, he replayed the song until he knew the degree and direction of the needed pitch change, as well as how the adjusted track ought to fit with the other tracks, before proceeding to the editing process.&lt;br /&gt;
*C.3 Photoshop study&lt;br /&gt;
:This study presented a participant with the Adobe Photoshop program.  Five images were pre-loaded, four of various monochromatic photographs from periods before color photography and one of a modern photograph of a man in New York City.  The user was then tasked with altering the modern photograph to appear similar to one or more of the older photographs.  The goal was one final, edited version of the original photograph.&lt;br /&gt;
**C.3.1 Feasibility of a creative sandbox task&lt;br /&gt;
::This task allowed users to approach the image from many viewpoints.  Because there was no inherent goal of converting the modern image to grayscale or incorporating specific aspects of the older images, the user could choose from myriad aspects of the older photos to which to conform the modern photo.  As a result, the task requires that the GUI have extremely high task saliency, with readily understood terms and editing processes that match those intuited by the user.  Such a task is thus less a measure of the individual user’s approach to the goal than a measure of the interface’s ability to cope with a wide range of user goals.&lt;br /&gt;
**C.3.2 Unique aspects and pertinence&lt;br /&gt;
*C.4 Lessons Learned&lt;br /&gt;
**C.4.1 Setup of study, metric methods&lt;br /&gt;
**C.4.2 Possibility for future critique-enabled user workflow&lt;br /&gt;
&lt;br /&gt;
===Trevor and Eric===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Interaction Histories&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Project idea:&#039;&#039;&#039;  Generating interaction histories within scientific visualization applications to facilitate individual and collaborative scientific discovery. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Preliminary Work&#039;&#039;&#039;&lt;br /&gt;
# A software infrastructure for caching and filtering interaction events has been developed within an existing scientific application aimed at exploring animal kinematic data captured via high-speed x-ray and CT.&lt;br /&gt;
# Methods for visualizing, editing, and sharing interaction histories have been designed and implemented.&lt;br /&gt;
# Methods for annotating and querying interactions have been implemented.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Preliminary Work from the Future&#039;&#039;&#039;&lt;br /&gt;
# Software model extended to two other interactive visualizations -- a flow visualization application, and a protein visualization application.&lt;br /&gt;
# User study to examine techniques for automatic history generation&lt;br /&gt;
## Automatic creation v. semi-automatic creation v. manual creation&lt;br /&gt;
# Timed-task pilot study performed to validate utility of interaction history techniques&lt;br /&gt;
## Task performance with histories v. without histories&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;General Outline of Tasks&#039;&#039;&#039;&lt;br /&gt;
# Capture user interaction history&lt;br /&gt;
# Predict user interactions, given interaction history&lt;br /&gt;
## Use a relational markov model?&lt;br /&gt;
# Modify the UI, given predicted user interactions&lt;br /&gt;
# Evaluate this modified UI&lt;br /&gt;
## Compare performance with and without UI modifications&lt;br /&gt;
## Evaluate performance when predicted interactions are incorrect x% of the time&lt;br /&gt;
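The prediction step in the outline above might be prototyped as follows.  This plain first-order Markov chain is a simpler stand-in for the relational Markov model mentioned in the outline, and the action names are invented for illustration:&lt;br /&gt;

```python
from collections import Counter, defaultdict

# A minimal first-order Markov predictor over a captured interaction history:
# count observed action-to-action transitions, then predict the most frequent
# successor of the last action.
def train(history):
    transitions = defaultdict(Counter)
    for prev, nxt in zip(history, history[1:]):
        transitions[prev][nxt] += 1
    return transitions

def predict(transitions, last_action):
    counts = transitions.get(last_action)
    if not counts:
        return None  # action never observed, or never followed by anything
    return counts.most_common(1)[0][0]

history = ["select", "zoom", "select", "zoom", "select", "filter"]
model = train(history)
```

The predicted next action could then drive the UI modifications of step 3, with the evaluation of step 4 measuring what happens when the prediction is wrong.&lt;br /&gt;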
&lt;br /&gt;
===EJ===&lt;br /&gt;
&#039;&#039;&#039;Problem:&#039;&#039;&#039; There currently exists no metric for evaluating interfaces that attempts to reconcile popular and successful heuristic design guidelines with cognitive theory.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Preliminary Work&#039;&#039;&#039;&lt;br /&gt;
I. A mapping of empirically-effective heuristic design guidelines to fundamental cognitive principles.&lt;br /&gt;
:*A set of &amp;quot;common&amp;quot; design guidelines is arrived at through survey of popular and effective heuristic guidelines in use.&lt;br /&gt;
:*Proposed common design guidelines:&lt;br /&gt;
 Discoverability&lt;br /&gt;
 Flexibility&lt;br /&gt;
 Appropriate visual presentation&lt;br /&gt;
 Predictability&lt;br /&gt;
 Consistency&lt;br /&gt;
 Simplicity&lt;br /&gt;
 Memory load reduction&lt;br /&gt;
 Feedback&lt;br /&gt;
 Task match&lt;br /&gt;
 User control&lt;br /&gt;
 Efficiency&lt;br /&gt;
:*Proposed cognitive principles:&lt;br /&gt;
 Affordance&lt;br /&gt;
 Visual cue&lt;br /&gt;
 Cognitive load&lt;br /&gt;
 Chunking&lt;br /&gt;
 Activity&lt;br /&gt;
 Actability&lt;br /&gt;
II. A weighting or priority for each of these analogues.&lt;br /&gt;
:*These can begin as binary estimations based on empirical evidence.&lt;br /&gt;
:*Through experimentation, these values should converge to discrete priority values for each analogue, allowing a ranking of analogues.&lt;br /&gt;
III. A system for applying these analogues and respective priority to the evaluation of an interface.&lt;br /&gt;
:*This can occur manually or in an automated fashion.&lt;br /&gt;
:*In this step (or possibly in a separate step), analogues should be assigned a &#039;&#039;suggestion&#039;&#039; or potential correction to provide in the event of &amp;quot;failure&amp;quot; of a particular test by a given interface.&lt;br /&gt;
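Steps II and III above could be combined in a scoring sketch like the following, where all weights, test results, and suggestion strings are hypothetical placeholders to be replaced by experimentally derived values:&lt;br /&gt;

```python
# Score an interface by weighted guideline analogues and emit a canned
# suggestion for each failed analogue (step III). Weights would converge
# from the binary estimations of step II; these numbers are made up.
WEIGHTS = {"affordance": 3, "visual_cue": 2, "cognitive_load": 3, "chunking": 1}

SUGGESTIONS = {
    "affordance": "make clickable regions look clickable",
    "cognitive_load": "reduce the number of simultaneous choices on screen",
}

def score(results):
    # results maps analogue name to True (passed) or False (failed)
    passed = sum(w for name, w in WEIGHTS.items() if results.get(name, False))
    return passed / sum(WEIGHTS.values())

def suggestions(results):
    # one correction per failed analogue that has a known fix
    return [SUGGESTIONS[name] for name in sorted(SUGGESTIONS)
            if not results.get(name, False)]
```

Whether the pass/fail results come from manual inspection or automated tests is exactly the choice left open in step III.&lt;br /&gt;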
&lt;br /&gt;
===EJ and Jon===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Photoshop Study&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
We demonstrate that the efficiency of performing subtasks in Photoshop can be predicted by a simple model of human perceptual and cognitive abilities.  In particular, we show that several of the tools commonly used to perform basic operations in Photoshop often violate the user&#039;s expectations of how those tools should work or where they ought to be located within the interface.  These violations can be categorized as follows: (1) unintuitive relationships between adjustments to common tool parameters and their perceptual results (e.g. adjusting the Magic Wand tool&#039;s &amp;quot;tolerance&amp;quot; setting often leads to unintended selections); (2) inefficient means of adjusting tool parameters (e.g. adjusting the &amp;quot;tolerance&amp;quot; setting by clicking, typing a number, hitting enter, observing the results, and iterating until the desired perceptual effect is achieved); (3) mismatches between the user&#039;s expectations for the names and locations of tools (or menu items) and their actual names and locations (e.g. resizing a picture via the &amp;quot;transform&amp;quot; menu item); (4) the availability of a tool in multiple locations, which imposes a cognitive load on the user searching for that tool, while the various contexts influence the user&#039;s expectations about the tool&#039;s effects; (5) [what else?], etc.&lt;br /&gt;
&lt;br /&gt;
===Jon===&lt;br /&gt;
&lt;br /&gt;
We have developed a model of human cognitive and perceptual abilities that allows us to predict human performance and thereby converge on ideal interfaces while simultaneously ruling out sub-optimal ones.  The model consists of a set of design principles combined with an extensive catalog of human perceptual and cognitive constraints on interface design.  The effectiveness of this model has been demonstrated by using the combined set of principles to assign scores to a set of GUIs designed to help a user accomplish the same overarching task, and comparing those scores with actual user performance.&lt;br /&gt;
&lt;br /&gt;
===Eric===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Project idea:&#039;&#039;&#039; Introduce a method for collecting data on user performance in cognitive, perceptual, and motor-control tasks that requires less monetary cost, allows for a greater number of samples, and measures user improvements over time.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Preliminary Work&#039;&#039;&#039;&lt;br /&gt;
Create a simple &amp;quot;brain training&amp;quot;-style game in which users must perform a simple cognitive task. As an example, perhaps the task is to manipulate shape1 into shape2, given a simple set of operators. Each of the user&#039;s actions (mouse movements, button clicks, etc.) will be documented along with the state of the game at that time. By varying the interface for different users, we can see how it affects performance on both cognitive and low-level tasks.&lt;br /&gt;
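The logging described above might look like the following minimal sketch, where the field names, action names, and game-state keys are all illustrative assumptions:&lt;br /&gt;

```python
import json
import time

# A minimal event logger for the proposed game study: every input event is
# recorded with a timestamp and a snapshot of the game state, so cognitive
# and motor-level measures can be recomputed offline.
class EventLog:
    def __init__(self):
        self.events = []

    def record(self, action, game_state):
        self.events.append({
            "t": time.time(),
            "action": action,
            "state": dict(game_state),  # snapshot, not a live reference
        })

    def dump(self):
        # serialize for upload from an online, in-browser deployment
        return json.dumps(self.events)

log = EventLog()
log.record("rotate_cw", {"shape": "L", "target": "J", "moves": 3})
```

Because the same log format works in the lab and online, the comparison between the two settings described below reduces to comparing two collections of these event streams.&lt;br /&gt;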
&lt;br /&gt;
Preliminary tests will first measure user performance in a laboratory setting. We will run subjects on two different interfaces and compare the differences in performance. We will then perform the same test in an online setting and evaluate how performance differs in this case, which is more subject to user interruptions and noisy data. If performance is similar across these tests, we have found a method for measuring low-level tasks that allows for many samples at minimal cost. If performance differs greatly, the noise introduced by users playing in a casual setting may make the project infeasible.&lt;br /&gt;
&lt;br /&gt;
===Andrew Bragdon===&lt;br /&gt;
&lt;br /&gt;
====I.  &#039;&#039;&#039;Goals&#039;&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
A.  Develop a qualitative theory for predicting user performance with and without automatic meta-work tools for saving and resuming context.&lt;br /&gt;
&lt;br /&gt;
B.  Formative study should inspire 30-hour feasibility study&lt;br /&gt;
&lt;br /&gt;
C.  30-hour feasibility study should give a high-level indication into the relative merit of such an approach&lt;br /&gt;
&lt;br /&gt;
====II.  &#039;&#039;&#039;Formative Study&#039;&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
A.  &#039;&#039;&#039;Description of methodology/experimental procedure&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
1.  In a highly-controlled task environment, understand how an integrated model comprising current theories in perception and HCI can predict/explain task performance&lt;br /&gt;
&lt;br /&gt;
2.  Users trained in the task extensively to control for learning (learning aspect of task could be investigated in future study)&lt;br /&gt;
&lt;br /&gt;
B. &#039;&#039;&#039;Results&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
1. Users favored continuous rotation over static view or meditative pauses.&lt;br /&gt;
&lt;br /&gt;
2. Hand gesticulation seemed to be used as a method of validation in stereo views.&lt;br /&gt;
&lt;br /&gt;
3. Several verbal comments regarding occlusion suggest drawbacks to the tube view.&lt;br /&gt;
&lt;br /&gt;
4. Rotating is beneficial and perhaps necessary for this task.&lt;br /&gt;
&lt;br /&gt;
5. No references to external, offscreen information. In fact, very rarely did participants glance away from the screen.&lt;br /&gt;
&lt;br /&gt;
6. Our assessment of a typical session beginning with a new dataset:&lt;br /&gt;
&lt;br /&gt;
   1. Data loads, split second decision to begin rotating.&lt;br /&gt;
   2. Continuous rotation until proper viewpoint determined.&lt;br /&gt;
   3. Rocking back and forth interaction about the optimal viewpoint. (Thinking?)&lt;br /&gt;
   4. [In stereo views, participants were noted tilting their heads fairly consistently.] &lt;br /&gt;
&lt;br /&gt;
C.  &#039;&#039;&#039;Discussion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
1.  Some of the users&#039; strategies can be explained by current theories in perception&lt;br /&gt;
&lt;br /&gt;
D.  &#039;&#039;&#039;Conclusion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
1.  Accurately modeling low-level performance does not seem to account for higher-level workflow processes.&lt;br /&gt;
&lt;br /&gt;
====III.  &#039;&#039;&#039;30-hour study&#039;&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
A.  Now that we have investigated highly controlled task performance, explore larger workflow context&lt;br /&gt;
&lt;br /&gt;
B.  Perform a study examining how interruptions affect user performance, explore what coping strategies users employ in such an environment, and explore whether a simple tool - working sets - improves performance by allowing developers to save, load, and switch between window and window-state configurations&lt;br /&gt;
&lt;br /&gt;
1.  Software developers (a good example of challenging, creative information work) will receive task requests by email; notifications will appear on their screen in real time&lt;br /&gt;
&lt;br /&gt;
2.  Each email will have a different priority (e.g., low, high, emergency)&lt;br /&gt;
&lt;br /&gt;
3.  Participants will be asked to manage priorities effectively to accomplish the tasks given&lt;br /&gt;
&lt;br /&gt;
4.  Once they begin working, we will &amp;quot;interrupt&amp;quot; them at controlled times with new task requests of different priorities&lt;br /&gt;
&lt;br /&gt;
5.  We will analyze their actions for coping strategies, meta-work, and working spheres to try to understand how the larger workflow context is affected by interruptions&lt;br /&gt;
&lt;br /&gt;
6.  Goal will be to run 4-6 people, &#039;&#039;&#039;would like feedback on this&#039;&#039;&#039;.  Can control for experience by recruiting experienced developers and students from the general population of Brown University.&lt;br /&gt;
&lt;br /&gt;
7.  Note on progress so far: I &amp;quot;ran&amp;quot; myself and found that switching tasks incurred a huge cost in returning to what I was doing.  I think that tools for aiding in switching between working sets will significantly benefit developers in particular, and information workers in general.  Some coping strategies I used: writing things on paper, typing notes into Notepad, and keeping previously used tabs open.  Sometimes it would get too chaotic and I would need to close all open windows and reset from my notes.  Overall, I would say that task switching/pausing/resuming in Visual Studio (test application) is not well supported.&lt;br /&gt;
&lt;br /&gt;
===Adam Darlow===&lt;br /&gt;
&lt;br /&gt;
I.  &#039;&#039;&#039;Goals&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A.  Evaluate the interactions between various cognitive principles and design principles. There are three basic relations:&lt;br /&gt;
&lt;br /&gt;
   1. The cognitive principle is the motivation behind the design principle.&lt;br /&gt;
   2. The cognitive principle suggests a method for achieving the design principle.&lt;br /&gt;
   3. The cognitive principle and design principle are unrelated. (Hopefully few)&lt;br /&gt;
&lt;br /&gt;
B.  Make design rules which are suggested by the combination of a cognitive principle and a design principle.&lt;br /&gt;
&lt;br /&gt;
II.  &#039;&#039;&#039;Description of methodology/experimental procedure&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A.  Collect commonly accepted design principles from the literature on interface design and well-established cognitive principles from the cognitive psychology literature, and construct a matrix that crosses them. Most cells in the matrix should suggest specific design rules.&lt;br /&gt;
&lt;br /&gt;
III. &#039;&#039;&#039;Results&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As a preliminary effort, I have chosen the following two cognitive principles and three design principles:&lt;br /&gt;
&lt;br /&gt;
Cognitive principles&lt;br /&gt;
&lt;br /&gt;
#C1. People derive complex associations and causal interpretations from temporal correlations and patterns.&lt;br /&gt;
&lt;br /&gt;
#C2. People have limited working memory (7 ± 2 items), but each slot can hold a chunk of related information.&lt;br /&gt;
&lt;br /&gt;
Design Principles (from Maeda (TBD link))&lt;br /&gt;
&lt;br /&gt;
#D1. Achieve simplicity through thoughtful reduction.&lt;br /&gt;
&lt;br /&gt;
#D2. Organization makes a system of many appear fewer.&lt;br /&gt;
&lt;br /&gt;
#D3. Knowledge makes everything simpler.&lt;br /&gt;
&lt;br /&gt;
The resulting matrix entries are as follows:&lt;br /&gt;
&lt;br /&gt;
#C1 + D1. Remove extraneous correlations. Things shouldn&#039;t consistently and apparently change or happen in conjunction unless they are actually related and their relation is important to the user.&lt;br /&gt;
&lt;br /&gt;
#C1 + D2. Use temporal and spatial contiguity to help users organize and group multiple events meaningfully.&lt;br /&gt;
&lt;br /&gt;
#C1 + D3. Use temporal correlations to effectively teach the important causal relations inherent in the interface.&lt;br /&gt;
&lt;br /&gt;
#C2 + D1. Reduce the interface such that a user has to be aware of no more than 5 items simultaneously.&lt;br /&gt;
&lt;br /&gt;
#C2 + D2. Groups of semantically related items can for many purposes be treated as a single item.&lt;br /&gt;
&lt;br /&gt;
#C2 + D3. Teach users how things are related so that they can be chunked.&lt;br /&gt;
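&lt;br /&gt;
As an illustration (ours, not part of the proposal), the matrix above can be represented as a lookup table keyed by (cognitive principle, design principle) pairs, with rule strings abbreviating the entries in the text:&lt;br /&gt;
&lt;br /&gt;
```python
# The six matrix entries above, keyed by (cognitive, design) principle.
matrix = {
    ("C1", "D1"): "Remove extraneous correlations.",
    ("C1", "D2"): "Use temporal/spatial contiguity to group events.",
    ("C1", "D3"): "Use temporal correlations to teach causal relations.",
    ("C2", "D1"): "Keep simultaneous items to five or fewer.",
    ("C2", "D2"): "Treat groups of related items as single items.",
    ("C2", "D3"): "Teach relations so items can be chunked.",
}

def rules_for(cognitive):
    """All design rules derived from one cognitive principle."""
    return [rule for (c, d), rule in sorted(matrix.items()) if c == cognitive]

print(len(rules_for("C2")))  # prints 3
```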
&lt;br /&gt;
&lt;br /&gt;
IV.  &#039;&#039;&#039;Conclusion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
V.  &#039;&#039;&#039;Future Directions&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To expand the matrix and evaluate the resulting design rules.&lt;br /&gt;
&lt;br /&gt;
==Research plan==&lt;br /&gt;
&lt;br /&gt;
We can speculate here about the details of a longer-term research plan, but it may not be necessary to actually flesh out this part of the &amp;quot;proposal&amp;quot;.  There does need to be enough to define what the overall proposed work is, but that may show up in earlier sections.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Adam Darlow</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal&amp;diff=2408</id>
		<title>CS295J/Research proposal</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal&amp;diff=2408"/>
		<updated>2009-03-10T21:56:44Z</updated>

		<summary type="html">&lt;p&gt;Adam Darlow: /* Specific Contributions */  Merged EJ&amp;#039;s and my ideas&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Project summary==&lt;br /&gt;
&lt;br /&gt;
We propose to integrate theories of cognition, models of perception, rules of design, and concepts from the discipline of human-computer interaction to develop a predictive model of user performance in interacting with computer software for visual and analytical work.  Our proposed model comprises a set of computational elements representing components of human cognition, memory, or perception.  The collective abilities and limitations of these elements can be used to provide feedback on the likely efficacy of user interaction techniques.&lt;br /&gt;
&lt;br /&gt;
The choice of human computational elements will be guided by several models or theories of cognition and perception, including Gestalt, Distributed, Gibson, ???(where pathway, when pathway)???, ???working-memory???, ..., and ???.  The list of elements will be extensible.  The framework coupling them will allow for experimental predictions of utility of user interfaces that can be verified against human performance.&lt;br /&gt;
&lt;br /&gt;
Coupling the system with users will involve a data-capture mechanism for collecting the communications between a user interface and a user.  These will be primarily event-based, and will include a new low-cost, camera-based eye-tracking system.&lt;br /&gt;
&lt;br /&gt;
During early development, existing interfaces will be evaluated manually to characterize their &lt;br /&gt;
&lt;br /&gt;
(we need some way to specify interaction techniques...)&lt;br /&gt;
&lt;br /&gt;
==Specific Contributions==&lt;br /&gt;
# A model of human cognitive and perceptual abilities when using computers&lt;br /&gt;
## Impact: Such a model would allow us to predict human performance with interfaces.  Validation of the model would allow us to more rapidly converge on ideal interfaces while simultaneously ruling out sub-optimal ones.&lt;br /&gt;
## 3-Week Feasibility Study: (1) Distill (perhaps from review articles) the major findings in all of the relevant subfields into a set of principles that will be grouped to form appropriate model components.  Some of the most relevant subfields include: memory, attention, visual perception, psychoacoustics, task switching, categorization, event perception, haptics (ANY OTHERS?).  (2)  Use these to devise a simple predictive model of human interaction.  This model could simply consist of the set of design principles but should allow some form of quantitative scoring/evaluation of interfaces.  (3) Develop a small set (5-10) of candidate GUIs that are designed to help the user accomplish the same overarching task(s) (e.g. importing, analyzing, and flexibly graphing data in Matlab).   (4)  Test/validate the model by comparing predicted performance to actual performance with the GUIs.&lt;br /&gt;
## Risks/Costs: Testing the model with human subjects carries the usual risks to participants and might necessitate IRB approval.  Costs might include: (1) paying for any necessary hardware and software for developing, displaying, and testing the GUIs, and (2) paying human subjects.&lt;br /&gt;
# Something about design rules collected and merged&lt;br /&gt;
## Something comparing these collected rules to a baseline (establishing their value)&lt;br /&gt;
## ???&lt;br /&gt;
# A predictive, fully-integrated model of user workflow which encompasses low-level tasks, working spheres, communication chains, interruptions and multi-tasking. (OWNER: Andrew Bragdon)&lt;br /&gt;
##Traditionally, software design and usability testing are focused on low-level task performance.  However, prior work (Gonzales et al.) provides strong empirical evidence that users also work at a higher, &#039;&#039;working sphere&#039;&#039; level.  Su et al. develop a predictive model of task switching based on communication chains.  Our model will specifically identify and predict key aspects of higher-level information work behaviors, such as task switching.  We will conduct initial exploratory studies to test specific instances of this high-level hypothesis.  We will then use the refined model to identify specific predictions for the outcome of a formal, ecologically valid study involving a complex, non-trivial application.&lt;br /&gt;
## Impact: To design computing systems around the way users work, we must understand &#039;&#039;how&#039;&#039; users work.  To do this, we need to establish a predictive model of user workflow that encompasses multiple levels of workflow: individual task items, larger goal-oriented working spheres, multi-tasking behavior, and communication chains.  Current information work systems are almost always designed around the lowest level of workflow, the individual task, and do not take into account the larger workflow context.  Fundamentally, a predictive model would allow us to design computing systems that significantly increase worker productivity in the United States and around the world.&lt;br /&gt;
## Risk and Costs: Risk will be an important factor in this research, and thus a core goal of our research agenda will be to manage it.  The most effective way to do this will be to compartmentalize the risk by conducting the empirical investigations - which will form the basis for the model - into the separate areas of low-level tasks, working spheres, communication chains, interruptions, and multi-tasking in parallel.  While one experiment may become bogged down in details, the others will be able to advance sufficiently to contribute to a strong core model, even if one or two facets encounter setbacks during the course of the research agenda.  The primary cost drivers will be the preliminary empirical evaluations, the final system implementation, and the final experiments designed to support the original hypothesis.  The cost will span student support, both Ph.D. and Master&#039;s students, as well as full-time research staff.  Projected cost: $1.5 million over three years.&lt;br /&gt;
## 3-week Feasibility Study:  To ascertain the feasibility of this project, we will conduct an initial pilot test to investigate the core idea: a predictive model of user workflow.  We will spend 1 week studying the real workflow of several people through job shadowing.  We will then create two systems designed to help a user accomplish some simple information work task.  One system will be designed to take larger workflow into account (experimental group), while one will not (control group).  In a synthetic environment, participants will perform a controlled series of tasks while receiving interruptions at controlled times.  If the two groups perform roughly the same, then we will need to reassess this avenue of research.  However, if the two groups perform differently, then our pilot test will have lent support to our approach and core hypothesis.&lt;br /&gt;
# A low-overhead mechanism for capturing event-based interactions between a user and a computer, including web-cam based eye tracking.  (should we buy or find out about borrowing use of pupil tracker?)  &#039;&#039;Should we include other methods of interaction here?  Audio recognition seems to be the lowest cost.  It would seem that a system that took into account head-tracking, audio, and fingertip or some other high-DOF input would provide a very strong foundation for a multi-modal HCI system.  It may be more interesting to create a software toolkit that allows for synchronized usage of those inputs than a low-cost hardware setup for pupil-tracking.  I agree pupil-tracking is useful, but developing something in-house may not be the strongest contribution we can make with our time.&#039;&#039; (Trevor)&lt;br /&gt;
## Accuracy study of eye tracking (2 cameras?  double as an input device?)&lt;br /&gt;
## ???&lt;br /&gt;
# A classification of standard design guidelines into overarching principles and measurement of the  relation of each design principle with quantifiable cognitive principles. (Owners: EJ and Adam)&lt;br /&gt;
## A ranking/weighting of the relevance of each cognitive principle to each design principle.&lt;br /&gt;
## Specific rules for designing/assessing interfaces with respect to leveraging each cognitive principle for its most closely related design principles.&lt;br /&gt;
# A systematic method to determine task distribution based on psychological principles. (Owner - [http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon])&lt;br /&gt;
## The theory of distributed cognition (Clark-1998-TEM) is a well-suited basis for constructing a human-computer interaction framework (Hollan-2000-DCF). However, a systematic method of determining which tasks should be distributed to which agent in a distributed environment has yet to be clearly defined. I propose that the cognitive psychology literature is replete with empirical evidence on task-performance variables. Dual-process theories (Evans-2003-ITM) usually agree on the nature of the high-level cognitive operations used in human reasoning. It is also argued that low-level processing, which is based on perceptual similarity, contiguity, and association, is carried out by a set of autonomous subsystems. In general, humans excel at these low-level functions relative to traditional von Neumann architectures (and even current neural networks), while cognitive science has recently focused less on high-level human reasoning, as the evidence shows that we rarely engage in such demanding cognitive operations. Therefore, we believe that an optimal configuration for distributed cognition amongst people and computers will take advantage of these specialties and deficiencies, and distribute tasks accordingly. Our research project will contribute a set of guidelines, or heuristics, that allows engineers to effectively determine which tasks should be assigned to which entities.&lt;br /&gt;
## The impact of this contribution will be a systematic approach to interface design. Engineers will finally have a well-documented standard for determining which operations humans should be responsible for and which should be off-loaded to the computer.&lt;br /&gt;
## There is no risk for this contribution. The associated costs will be an extensive search of the cognitive psychology literature from roughly the past 25 years. Research on memory, reasoning, perception, and more will be required in order to conduct a complete and accurate assessment.&lt;br /&gt;
## In the course of 30 hours, we may perform a hand-analysis of the empirical results in psychology in order to develop a set of approximately 10 guidelines or rules.&lt;br /&gt;
# A method for collecting data on user performance in cognitive, perceptual, and motor-control tasks that incurs lower monetary cost, allows for a greater number of samples, and measures user improvement over time. (Owner - Eric)&lt;br /&gt;
##In order to reach a model of human cognitive and perceptual abilities when using computers, experimental analysis of human performance on these tasks will likely be necessary. User studies can often aid in this analysis, but they require substantial money and time and are subject to user fatigue. Alternatively, we propose a web-based method for evaluating user performance in perceptual, motor-control, and cognitive tasks. The idea is to take a task that would normally be measured through user studies in a laboratory and map it into a simple online game. Somewhat similar work has been done by Popovic et al. at the University of Washington, who took the task of folding proteins and mapped it into an online game ( http://www.economist.com/displaystory.cfm?story_id=11326188 ) with much success. We will analyze the value of this method by comparing it to similar tasks performed in laboratory experiments, both in terms of user performance and deployment costs.&lt;br /&gt;
##By converting the task into a simple game, we hope to reduce the problem of user fatigue. Additionally, if the game is played on a social networking site, we are able to track basic information of users who perform the tasks and, more importantly, can identify returning users. Thus, we can track not only a user&#039;s performance, but also how they improve at a given task over time.&lt;br /&gt;
##There are no clear risks involved with this study. Potential costs would be those required for development of each experiment and for web hosting.&lt;br /&gt;
##As a prototype, we can select one particular task to map into a simple online game. To check for feasibility, we need to ensure that the results we get from our proposed method are similar to the results found in laboratory settings. There are two possible effects we must test for: bias in results due to the mapping into a game, and bias in results due to the sample of subjects or any change caused by the web-based component. A simple test would be to first test users in a lab using traditional methods as a baseline, and then see how performance differs from that in laboratory tests using the game-based mapping. This will determine whether the game appropriately measures the given task. Next, an online version of the game can be introduced, and performance can be compared with the laboratory settings. If performance is similar in all of these tests, we have found a method for measuring low-level tasks that allows for many samples at minimal cost.&lt;br /&gt;
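&lt;br /&gt;
The lab-versus-online comparison could be quantified along these lines; a minimal sketch with invented completion times, using a Welch t statistic (a real analysis would also report degrees of freedom and a p-value):&lt;br /&gt;
&lt;br /&gt;
```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two samples with possibly unequal variances."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    return (mean(a) - mean(b)) / (va + vb) ** 0.5

# Invented task-completion times (seconds) in the two settings.
lab = [41.2, 38.5, 44.0, 40.1, 39.7]
online = [43.0, 47.5, 41.8, 50.2, 44.9]
t = welch_t(lab, online)  # negative here: lab subjects were faster
```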
&lt;br /&gt;
==Specific Aims==&lt;br /&gt;
&lt;br /&gt;
# build X&lt;br /&gt;
# build Y&lt;br /&gt;
# run experiment Z&lt;br /&gt;
# compare X with existing approach Q&lt;br /&gt;
# Develop a scoring system for interfaces to evaluate the degree to which all changes and causal relations are tracked by motion cues that are contiguous in time and/or space.&lt;br /&gt;
# Accurately assess computational and psychological costs for tasks and subtasks.  To do this, we will develop two non-trivial prototype systems; a conventional control system and a novel system which is based on our model of task switching.  We will use our model to make specific predictions about relative task performance and user affect responses, and then test these predictions empirically in a formal study.&lt;br /&gt;
# Develop a model that accounts for qualitatively different psychological tasks&lt;br /&gt;
# Test model on real-world data&lt;br /&gt;
# Build classification for design guidelines based on cognitive analogues [[User:E J Kalafarski|E J Kalafarski]] 16:18, 6 February 2009 (UTC)&lt;br /&gt;
# Parse and classify existing guidelines by this new metric [[User:E J Kalafarski|E J Kalafarski]] 16:18, 6 February 2009 (UTC)&lt;br /&gt;
# Build an automated evaluation rubric for applying these classifications to interfaces in development [[User:E J Kalafarski|E J Kalafarski]] 16:18, 6 February 2009 (UTC)&lt;br /&gt;
# Long-term goal: build a mixed-initiative interface generation system that takes some basic GUI requirements from the designer as input and attempts to maximize its score on the &amp;quot;evaluation rubric&amp;quot; within these constraints. (Eric)&lt;br /&gt;
# Develop model component to predict performance following interruptions or changes between work spheres.&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
===Models of cognition===&lt;br /&gt;
There are several models of cognition, ranging from fundamental aspects of neurological processing to extremely high-level psychological analysis.  Three main theories seem to have become recognized as the most helpful in conceptualizing the actual process of HCI.  These models all agree that one cannot accurately analyze HCI by viewing the user without context, but the extent and nature of this context varies greatly.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[http://en.wikipedia.org/wiki/Activity_Theory Activity Theory]&#039;&#039;&#039;, developed in the early 20th century by Russian psychologists S.L. Rubinstein and A.N. Leontiev, posits the existence of four discrete aspects of human-computer interaction.  The &amp;quot;Subject&amp;quot; is the human interacting with the item, who possesses an &amp;quot;Object&amp;quot; (e.g. a goal) which they hope to accomplish by using a tool.  The Subject conceptualizes the realization of the Object via an &amp;quot;Action&amp;quot;, which may be as simple or complex as is necessary.  The Action is made up of one or more &amp;quot;Operations&amp;quot;, the most fundamental level of interaction including typing, clicking, etc.&lt;br /&gt;
&lt;br /&gt;
A key concept in Activity Theory is that of the artifact, which mediates all interaction.  The computer itself need not be the only artifact in HCI - others include all sorts of signs, algorithmic methods, instruments, etc.&lt;br /&gt;
&lt;br /&gt;
A longer synopsis of Activity Theory may be found at [http://mcs.open.ac.uk/yr258/act_theory/ this website].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The [http://en.wikipedia.org/wiki/Situated_cognition Situated Action Model]&#039;&#039;&#039; focuses on emergent behavior, emphasizing the subjective aspect of human-computer interaction and the therefore-necessary allowance for a wide variety of users.  This model proposes the least amount of contextual interaction, and seems to maintain that the interactive experience is determined entirely by the user&#039;s ability to use the system in question.  While limiting, this concept of usability can be very informative when designing for less tech-savvy users.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[http://en.wikipedia.org/wiki/Distributed_cognition Distributed Cognition]&#039;&#039;&#039; proposes that the computer (or, as in Activity Theory, any other artifact) can be used and ought to be thought of as an extension of the mental processing of the human.  This is not to say that the two are of equal or even comparable cognitive abilities, but that each has unique strengths and that recognition of and planning around these relative advantages can lead to increased efficiency and effectiveness.  The rotation of blocks in Tetris serves as a perfect example of this sort of cognitive symbiosis.&lt;br /&gt;
&lt;br /&gt;
(Steven)&lt;br /&gt;
&lt;br /&gt;
====Workflow Context====&lt;br /&gt;
There are at least two levels at which users work ([http://portal.acm.org/citation.cfm?id=985692.985707&amp;amp;coll=portal&amp;amp;dl=ACM&amp;amp;CFID=20781736&amp;amp;CFTOKEN=83176621 Gonzales et al., 2004]).  Users accomplish individual low-level tasks which are part of larger &#039;&#039;working spheres&#039;&#039;; for example, an office worker might send several emails, create several Post-It (TM) note reminders, and then edit a Word document, each of these smaller tasks being part of a single larger working sphere of &amp;quot;adding a new section to the website.&amp;quot;  Thus, it is important to understand this larger workflow context - which often involves extensive levels of multi-tasking, as well as switching between a variety of computing devices and traditional tools, such as notebooks.  In this study it was found that the information workers surveyed typically switch individual tasks every 2 minutes and have many simultaneous working spheres which they switch between, on average every 12 minutes.  This frenzied pace of switching tasks and switching working spheres suggests that users will not be using a single application or device for a long period of time, and that affordances to support this characteristic pattern of information work are important.&lt;br /&gt;
&lt;br /&gt;
Czerwinski et al. conducted a [http://portal.acm.org/citation.cfm?id=985692.985715&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 diary study] of task switching and interruptions of users in 2004.  This study showed that task complexity, task duration, length of absence, and number of interruptions all affected the users&#039; own perceived difficulty of switching tasks.  [http://delivery.acm.org/10.1145/1250000/1240730/p677-iqbal.pdf?key1=1240730&amp;amp;key2=4525483321&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Iqbal et al.] studied task disruption and recovery in a field study, and found that users often visited several applications as a result of an alert, such as a new email notification, and that 27% of task suspensions resulted in 2 hours or more of disruption.  Users in the study said that losing context was a significant problem in switching tasks, and led in part to the length of some of these disruptions.  This work hints at the importance of providing cues to users to maintain and regain lost context during task switching.&lt;br /&gt;
&lt;br /&gt;
(Andrew Bragdon - OWNER)&lt;br /&gt;
&lt;br /&gt;
The problem of task switching is exacerbated when some tasks are more routine than others. When a person intends to switch from a routine task to a novel task at some later time, they often forget the context of the original task ([http://web.ebscohost.com/ehost/pdf?vid=1&amp;amp;hid=7&amp;amp;sid=54ec1e22-3df2-462c-b484-7a7c052c2173%40SRCSM1 Aarts et al., 1999]). Also, if both tasks are done in the same context, with the same tools or with the same materials, people have difficulty inhibiting the routine task while doing the novel task (Stroop, 1935). This inhibition also makes switching back to the routine task slower (Allport et al., 1994). All of these problems can be alleviated to some degree by salient cues in the environment. Enacting the intention to switch becomes easier when there is a salient reminder at the appropriate time (McDaniel and Einstein, 1993) and associating different environmental cues with different goals can automatically trigger appropriate behavior ([http://web.ebscohost.com/ehost/pdf?vid=1&amp;amp;hid=4&amp;amp;sid=68f77032-f093-4139-a833-760d2217b513%40sessionmgr9 Aarts and Dijksterhuis, 2003]).&lt;br /&gt;
&lt;br /&gt;
(Adam)&lt;br /&gt;
&lt;br /&gt;
(Edited by Andrew)&lt;br /&gt;
&lt;br /&gt;
====Quantitative Models: Fitts&#039;s law, Steering Law====&lt;br /&gt;
Fitts&#039;s law and the steering law are examples of quantitative models that predict user performance when using certain types of user interfaces.  In addition to these classic models, [http://delivery.acm.org/10.1145/1250000/1240850/p1495-cao.pdf?key1=1240850&amp;amp;key2=9904483321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Cao and Zhai] developed and validated a quantitative model of human performance of pen stroke gestures in 2007.  [http://tlaloc.sfsu.edu/~lank/research/appearing/FSS604LankE.pdf Lank and Saund] utilized a model which used curvature to predict the speed of a pen as it moved across a surface to help disambiguate target selection intent.&lt;br /&gt;
&lt;br /&gt;
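For concreteness, the two models named above can be written out as follows (a sketch; the constants a and b are device- and user-specific, and the values here are invented):&lt;br /&gt;
&lt;br /&gt;
```python
import math

def fitts_mt(D, W, a=0.1, b=0.15):
    """Fitts's law (Shannon form): MT = a + b * log2(D/W + 1),
    for a target of width W at distance D."""
    return a + b * math.log2(D / W + 1)

def steering_mt(A, W, a=0.1, b=0.15):
    """Steering law for a straight tunnel of length A and width W:
    MT = a + b * (A / W)."""
    return a + b * (A / W)

# Halving the target width raises the index of difficulty by roughly
# one bit, and thus the predicted movement time by roughly b seconds.
mt_wide = fitts_mt(512, 64)
mt_narrow = fitts_mt(512, 32)
```
&lt;br /&gt;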
In addition, quantitative models are often tested against new interfaces to verify that they hold.  For example, [http://portal.acm.org/citation.cfm?id=1054972.1055012&amp;amp;coll=GUIDE&amp;amp;dl=ACM&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Grossman et al.] verified that their Bubble Cursor approach to enlarging effective pointing target sizes obeyed Fitts&#039;s law for actual distance traveled.&lt;br /&gt;
&lt;br /&gt;
In addition to formal models, machine learning techniques have been applied to modeling user interaction.  For example, [http://delivery.acm.org/10.1145/1250000/1240669/p271-hurst.pdf?key1=1240669&amp;amp;key2=6465483321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Hurst et al.] used a learning classifier, trained on low-level mouse and keyboard usage patterns, to identify novice and expert users dynamically with accuracies as high as 91%.  This classifier was then used to provide different information and feedback to the user as appropriate.&lt;br /&gt;
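&lt;br /&gt;
Hurst et al.&#039;s actual feature set and classifier differ from the following; this nearest-centroid sketch over invented mouse-usage features only illustrates the general approach of classifying users from low-level interaction statistics:&lt;br /&gt;

```python
def centroid(rows):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def classify(sample, centroids):
    """Assign `sample` to the label of the nearest centroid (squared Euclidean)."""
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return min(centroids, key=lambda label: dist2(sample, centroids[label]))

# Invented features per session: [mean mouse velocity, pauses/min, menu dwell (s)]
training = {
    "novice": [[120, 14, 2.8], [110, 16, 3.1], [130, 12, 2.5]],
    "expert": [[310, 3, 0.7], [290, 4, 0.9], [330, 2, 0.6]],
}
centroids = {label: centroid(rows) for label, rows in training.items()}

label = classify([300, 3, 0.8], centroids)  # fast, fluent usage pattern
```

A deployed system would, as in Hurst et al., use richer features and a properly trained classifier; the point is only that the expertise label can then drive adaptive feedback.&lt;br /&gt;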
&lt;br /&gt;
(Andrew Bragdon - OWNER)&lt;br /&gt;
&lt;br /&gt;
====Distributed cognition====&lt;br /&gt;
&lt;br /&gt;
Distributed cognition is a theory in which cognitive processes extend beyond the brain into the body and the environment. Humans have a remarkable ability to use tools and to incorporate their environments into their sphere of thinking. Clark puts it nicely in [http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature#Cognition Clark-1994-TEM].&lt;br /&gt;
&lt;br /&gt;
On this view, optimal HCI designs treat the brain, person, interface, and computer as one holistic cognitive system.&lt;br /&gt;
&lt;br /&gt;
In practical terms, the issue at hand for our proposal is how to best maximize utility by distributing the cognitive tasks at hand to different components of the whole system. Simply, which tasks can we off-load to the computer to do for us, faster and more accurately? What tasks should we purposely leave the computer out of?&lt;br /&gt;
&lt;br /&gt;
Typically, the tasks most eligible for off-loading are those we perform poorly; conveniently, the tasks computers perform poorly are often the ones at which we excel. A few examples:&lt;br /&gt;
&lt;br /&gt;
*Computers&#039; areas of expertise: number crunching, memory, logical reasoning, precision&lt;br /&gt;
*Humans&#039; areas of expertise: associative thought, real-world knowledge, social behavior, alogical reasoning, tolerance for imprecision&lt;br /&gt;
&lt;br /&gt;
Using this division of cognitive labor allows us to optimize task work flows. Ignoring it puts strain and bottlenecks at either the computer, the human, or the interface. The field of HCI is full of examples of failures which can be attributed to not recognizing which tasks should be handled by which sub-system.&lt;br /&gt;
&lt;br /&gt;
As a heuristic for dividing this thinking, one might turn to the dual-process theory literature [http://vrl.cs.brown.edu/wiki/CS295J/Literature Evans-2003-ITM]. What is most often called System 1 covers what humans are good at, while System 2 tasks are what computers do well.&lt;br /&gt;
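&lt;br /&gt;
A toy sketch of such a division of labor, with invented relative-cost scores standing in for empirically measured performance:&lt;br /&gt;

```python
# Invented relative-cost scores (lower = better suited); illustrative only.
COSTS = {
    "arithmetic":       {"human": 9, "computer": 1},
    "face recognition": {"human": 1, "computer": 7},
    "exact recall":     {"human": 8, "computer": 1},
    "social judgement": {"human": 2, "computer": 9},
}

def divide_labor(costs):
    """Assign each task to whichever agent performs it at lower cost."""
    return {task: min(agents, key=agents.get) for task, agents in costs.items()}

assignment = divide_labor(COSTS)
```

Real assignments would rarely be this clean, since many tasks are best shared between agents rather than handed wholesale to one, but the per-task cost table is the shape of guideline this section argues for.&lt;br /&gt;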
&lt;br /&gt;
(Owner - [http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon])&lt;br /&gt;
&lt;br /&gt;
====Information Processing Approach to Cognition====&lt;br /&gt;
The dominant approach in Cognitive Science is called information processing. It sees cognition as a system that takes in information from the environment, forms mental representations and manipulates those representations in order to create the information needed to achieve its goals. This approach includes three levels of analysis originally proposed by Marr (will cite):&lt;br /&gt;
# Computational - What are the goals of a process or representation? What are the inputs and desired outputs required of a system which performs a task? Models at this level of analysis are often considered normative models, because any agent wanting to perform the task should conform to them. Rational agent models of decision making, for example, belong at this level of analysis.&lt;br /&gt;
# Process/Algorithmic - What are the processes or algorithms involved in how humans perform the task? This is the most common level of analysis as it focuses on the mental representations, manipulations and computational faculties involved in actual human processing. Algorithmic descriptions of human capabilities and limitations, such as working memory size, belong at this level of analysis.&lt;br /&gt;
# Implementation - How are the processes and algorithms realized in actual biological computation? Dopamine theories of reward learning, for example, belong at this level of analysis.&lt;br /&gt;
The information processing approach is often contrasted with the distributed cognition approach. Its advantage is that it finds general mechanisms that are valid across many different contexts and situations. Its disadvantage is that it can have difficulty explaining the rich interactions between people and their environment.&lt;br /&gt;
&lt;br /&gt;
In considering users as information processors, interfaces should take into account people&#039;s computational limitations on short term memory, learning and vision as well as the algorithms and representations that they use to process information and pursue goals.&lt;br /&gt;
&lt;br /&gt;
(Adam)&lt;br /&gt;
&lt;br /&gt;
===Models of perception===&lt;br /&gt;
&lt;br /&gt;
====Gibsonianism====&lt;br /&gt;
(relevant stuff about Gibson&#039;s theory)&lt;br /&gt;
We will build on top of this theory/model by ... .&lt;br /&gt;
&lt;br /&gt;
Gibsonianism, named after James J. Gibson and more commonly referred to as ecological psychology, is an epistemological direct realist theory of perception and action.  In contrast to information-processing and cognitivist approaches, which generally assume that perception is a constructive process operating on impoverished sense-data inputs (e.g. photoreceptor activity) to generate representations of the world with added structure and meaning (e.g. a mental or neural &amp;quot;picture&amp;quot; of a chair), ecological psychology treats perception as direct, non-inferential, unmediated (by retinal images or mental representations) epistemic contact with behaviorally relevant features of the environment (Warren, 2005).  The possibilities for action that the environment offers a given animal are taken to be specified by information available in structured energy distributions (e.g. the optic array of light arriving at the eyes), and these possibilities for action constitute the affordances of the environment with respect to that animal (Gibson, 1986).&lt;br /&gt;
&lt;br /&gt;
Gibson&#039;s notion of affordance has many implications for our enterprise; however, it is worth noting that the original definition of affordance emphasizes possibilities for action and not their relative likelihoods.  For example, for most humans, laptop computer screens afford puncturing with Swiss Army knives, yet it is unlikely that a user will attempt to retrieve an electronic coupon by carving it out of their monitor.  This example illustrates that interfaces often afford a class of actions that are undesirable from the perspective of both the designer and the user.&lt;br /&gt;
&lt;br /&gt;
(Jon)&lt;br /&gt;
&lt;br /&gt;
===Design guidelines===&lt;br /&gt;
&lt;br /&gt;
A multitude of rule sets exist for the design of not only interfaces, but also architecture, city planning, and software development.  They range in scale from one primary rule to as many as Christopher Alexander&#039;s 253 rules for urban environments,&amp;lt;ref&amp;gt;http://hci.rwth-aachen.de/materials/publications/borchers2000a.pdf&amp;lt;/ref&amp;gt; which he introduced along with the concept of design patterns in the 1970s.  Studies have likewise been conducted on the use of these rules:&amp;lt;ref&amp;gt;http://stl.cs.queensu.ca/~graham/cisc836/lectures/readings/tetzlaff-guidelines.pdf&amp;lt;/ref&amp;gt; guidelines are often only partially understood, indistinct to the developer, and &amp;quot;fraught&amp;quot; with potential usability problems in real-world situations.&lt;br /&gt;
&lt;br /&gt;
====Application to AUE====&lt;br /&gt;
&lt;br /&gt;
And yet the vast majority of guideline sets, including the most popular rulesets, have been arrived at heuristically.  The most successful, such as Raskin&#039;s and Shneiderman&#039;s, have been forged from years of observation rather than empirical study and experimentation.  The problem is similar to the circular logic faced by automated usability evaluations: an automated system is limited to offering suggestions from a set of preprogrammed guidelines that have often not been subjected to rigorous experimentation.&amp;lt;ref&amp;gt;http://www.eecs.berkeley.edu/Pubs/TechRpts/2000/CSD-00-1105.pdf&amp;lt;/ref&amp;gt;  In the vast majority of existing studies, emphasis has been placed on either the development of guidelines or the application of existing guidelines to automated evaluation.  A mutually reinforcing development of both simultaneously has not been attempted.&lt;br /&gt;
&lt;br /&gt;
Overlap between rulesets is unavoidable.  For our purposes of evaluating existing rulesets efficiently, without extracting and analyzing each rule individually, it may be desirable to identify the overarching &#039;&#039;principles&#039;&#039; or &#039;&#039;philosophy&#039;&#039; (at most two or three) of a given ruleset and determine their quantitative relevance to problems of cognition.&lt;br /&gt;
&lt;br /&gt;
====Popular and seminal examples====&lt;br /&gt;
Shneiderman&#039;s [http://faculty.washington.edu/jtenenbg/courses/360/f04/sessions/schneidermanGoldenRules.html Eight Golden Rules] date to 1987 and are arguably the most cited.  They are heuristic, but can be loosely classified by cognitive objective: at least two rules apply primarily to &#039;&#039;repeated use&#039;&#039;, versus &#039;&#039;discoverability&#039;&#039;.  Up to five of Shneiderman&#039;s rules emphasize &#039;&#039;predictability&#039;&#039; in the outcomes of operations and &#039;&#039;increased feedback and control&#039;&#039; in the agency of the user.  His final rule, paradoxically, removes control from the user by prescribing a reduced short-term memory load, which we can arguably classify as &#039;&#039;simplicity&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Raskin&#039;s [http://www.mprove.de/script/02/raskin/designrules.html Design Rules] are classified into five principles by the author, augmented by definitions and supporting rules.  While one principle is primarily aesthetic (a design problem arguably outside the bounds of this proposal) and one is a basic endorsement of testing, the remaining three reflect philosophies similar to Shneiderman&#039;s: reliability or &#039;&#039;predictability&#039;&#039;; &#039;&#039;simplicity&#039;&#039; or &#039;&#039;efficiency&#039;&#039; (which we can construe as two sides of the same coin); and, finally, a new concept of &#039;&#039;uninterruptibility&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Maeda&#039;s [http://lawsofsimplicity.com/?cat=5&amp;amp;order=ASC Laws of Simplicity] are fewer, and ostensibly emphasize &#039;&#039;simplicity&#039;&#039; exclusively, although elements of &#039;&#039;use&#039;&#039; as related by Shneiderman&#039;s rules and &#039;&#039;efficiency&#039;&#039; as defined by Raskin may be facets of this simplicity.  Google&#039;s corporate mission statement presents [http://www.google.com/corporate/ux.html Ten Principles], only half of which can be considered true interface guidelines.  &#039;&#039;Efficiency&#039;&#039; and &#039;&#039;simplicity&#039;&#039; are cited explicitly, aesthetics are once again noted as crucial, and working within a user&#039;s trust is another application of &#039;&#039;predictability&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
====Elements and goals of a guideline set====&lt;br /&gt;
&lt;br /&gt;
Myriad rulesets exist, but the variation among them is scarce: it seems possible to parse these common rulesets into overarching principles that can be converted to, or associated with, quantifiable cognitive properties.  For example, &#039;&#039;simplicity&#039;&#039; likely has an analogue in the short-term memory retention or visual retention of the user, vis-à-vis the rule of [http://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=j5q0VvOGExYC&amp;amp;oi=fnd&amp;amp;pg=PA357&amp;amp;dq=seven+plus+or+minus+two&amp;amp;ots=prI3PKJBar&amp;amp;sig=vOZnqpnkXKGYWxK6_XlA4I_CRyI Seven, Plus or Minus Two].  &#039;&#039;Predictability&#039;&#039; likewise may have an analogue in Activity Theory, with regard to a user&#039;s perceptual expectations for a given action; &#039;&#039;uninterruptibility&#039;&#039; has implications for cognitive task-switching;&amp;lt;ref&amp;gt;http://portal.acm.org/citation.cfm?id=985692.985715&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774&amp;lt;/ref&amp;gt; and so forth.&lt;br /&gt;
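&lt;br /&gt;
As a trivial illustration of converting one such principle into an automatable check, consider flagging screens that exceed the &amp;quot;seven, plus or minus two&amp;quot; working-memory bound. The screen-specification format here is a made-up stand-in for whatever interface description an automated evaluator would actually consume:&lt;br /&gt;

```python
WORKING_MEMORY_LIMIT = 7 + 2  # upper bound of "seven, plus or minus two"

def check_simplicity(screens, limit=WORKING_MEMORY_LIMIT):
    """Flag screens that present more simultaneous controls than the limit.
    `screens` maps a screen name to its list of visible controls."""
    return [name for name, controls in screens.items() if len(controls) > limit]

# Hypothetical interface specification:
ui = {
    "login": ["user", "password", "submit"],
    "settings": ["option%d" % i for i in range(12)],
}
flagged = check_simplicity(ui)
```

A real evaluator would weight such checks against the other principles rather than applying any one of them as a hard rule.&lt;br /&gt;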
&lt;br /&gt;
Within the scope of this proposal, we aim to reduce and refine these philosophies found in seminal rulesets and identify their logical cognitive analogues.  By assigning a quantifiable taxonomy to these principles, we will be able to rank and weight them with regard to their real-world applicability, developing a set of &amp;quot;meta-guidelines&amp;quot; and rules for applying them to a given interface in an automated manner.  Combined with cognitive models and multi-modal HCI analysis, we seek to develop, in parallel with these guidelines, the interface evaluation system responsible for their application. [[User:E J Kalafarski|E J Kalafarski]] 15:21, 6 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
:I. Introduction, applications of guidelines&lt;br /&gt;
::A. Application to automated usability evaluations (AUE)&lt;br /&gt;
:II. Popular and seminal examples&lt;br /&gt;
::A. Shneiderman&lt;br /&gt;
::B. Google&lt;br /&gt;
::C. Maeda&lt;br /&gt;
::D. Existing international standards&lt;br /&gt;
:III. Elements of guideline sets, relationship to design patterns&lt;br /&gt;
:IV. Goals for potentially developing a guideline set within the scope of this proposal&lt;br /&gt;
&lt;br /&gt;
===User interface evaluations===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Interaction capture====&lt;br /&gt;
&lt;br /&gt;
[http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4376144 Yi et al.] have performed a survey of the visualization literature and categorized the different types of interactions users are faced with. They are as follows:&lt;br /&gt;
# Select: mark something as interesting &lt;br /&gt;
# Explore: show me something else &lt;br /&gt;
# Reconfigure: show me a different arrangement &lt;br /&gt;
# Encode: show me a different representation&lt;br /&gt;
# Abstract/Elaborate: show me more or less detail &lt;br /&gt;
# Filter: show me something conditionally &lt;br /&gt;
# Connect: show me related items &lt;br /&gt;
&lt;br /&gt;
Different GUI components may be able to perform the same type of interaction. We would like to categorize the GUI components or patterns used to bring about these interactions. This gives us a library of components we can use to complete a given task. The goal is to create components for a given interaction that minimize cost to the user. Because the cost of a component likely depends on the other components used, the goal of the designer might be to choose a combination of components that minimizes this cost. To do this, we need a way to measure costs, which is discussed in the next section.&lt;br /&gt;
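&lt;br /&gt;
Assuming per-component costs have already been measured somehow, the simplest (additive, non-interacting) version of this selection problem is an exhaustive search. The components and cost scores below are hypothetical:&lt;br /&gt;

```python
import itertools

# Hypothetical candidate components per interaction intent, each with an
# invented user-cost score (real costs would interact across components).
CANDIDATES = {
    "Filter":  [("checkbox list", 3), ("query box", 5)],
    "Explore": [("pan and zoom", 2), ("paging buttons", 4)],
    "Encode":  [("dropdown", 2), ("toolbar", 3)],
}

def cheapest_combination(candidates):
    """Exhaustively search all component combinations for the minimum
    total cost, assuming costs simply add."""
    best = min(itertools.product(*candidates.values()),
               key=lambda combo: sum(cost for _, cost in combo))
    return {intent: name for intent, (name, _) in zip(candidates, best)}

choice = cheapest_combination(CANDIDATES)
```

Because real component costs interact, an additive search like this is only a baseline; the cost measures themselves are the harder problem.&lt;br /&gt;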
&lt;br /&gt;
====Cost-based analyses====&lt;br /&gt;
In [http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4658124 A Framework of Interaction Costs in Information Visualization], Lam performs a survey of 32 user studies and classifies several types of costs that can be used for qualitative interface evaluation. The classification scheme is based on Donald Norman&#039;s [http://en.wikipedia.org/wiki/Seven_stages_of_action Seven Stages of Action] from his book, [http://www.amazon.com/Design-Everyday-Things-Donald-Norman/dp/0385267746 The Design of Everyday Things] ([http://www.networksplus.net/tracyj/everydaythings.pdf summary]).&lt;br /&gt;
&lt;br /&gt;
# &#039;&#039;&#039;Decision costs:&#039;&#039;&#039; How does user performance decrease when there is an overwhelming amount of data to observe or too many possible actions to take?&lt;br /&gt;
# &#039;&#039;&#039;System-power costs:&#039;&#039;&#039; How does the user translate a high-level goal into a sequence of allowable actions by the interface?&lt;br /&gt;
# &#039;&#039;&#039;Multiple input mode costs:&#039;&#039;&#039; Cost of providing an action selection system that is not unified, for example, if there is one button that does two different things, depending on context. &lt;br /&gt;
# &#039;&#039;&#039;Physical-motion costs:&#039;&#039;&#039; Physical cost to the user to interact with the interface, for example, measuring mouse movement costs with Fitts&#039; Law.&lt;br /&gt;
# &#039;&#039;&#039;Visual-cluttering costs:&#039;&#039;&#039; Cost due to unwanted visual distractions, such as a mouse hovering pop-up occluding part of the screen.&lt;br /&gt;
# &#039;&#039;&#039;View- and State-change costs:&#039;&#039;&#039; When the user causes the interface to change views, the new view should be consistent with the old one, in that it should meet the user&#039;s expectations of where things will be in the new view, based on their knowledge of the old one.&lt;br /&gt;
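&lt;br /&gt;
A sketch of how Lam&#039;s categories could back a quantitative comparison between designs; the per-category scores and the uniform weights are invented for illustration:&lt;br /&gt;

```python
LAM_CATEGORIES = [
    "decision", "system-power", "multiple-input-mode",
    "physical-motion", "visual-clutter", "view-change",
]

def total_cost(profile, weights=None):
    """Weighted sum of per-category cost estimates for one design."""
    if weights is None:
        weights = {c: 1.0 for c in LAM_CATEGORIES}  # uniform by default
    return sum(weights[c] * profile.get(c, 0.0) for c in LAM_CATEGORIES)

# Invented per-category scores for two hypothetical designs of the same tool:
design_a = {"decision": 4, "physical-motion": 2, "visual-clutter": 5}
design_b = {"decision": 2, "physical-motion": 6, "visual-clutter": 1}

better = min(("A", design_a), ("B", design_b),
             key=lambda d: total_cost(d[1]))[0]
```

The weights are where the empirical work lives: they would have to be calibrated against the user-study trials proposed for our evaluation methodology.&lt;br /&gt;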
&lt;br /&gt;
==== Evaluation in practice ====&lt;br /&gt;
User interfaces are usually evaluated in practice using two methods: &#039;&#039;usability inspection methods&#039;&#039;, in which a programmer or one or more experts evaluates the interface through inspection; or &#039;&#039;usability testing&#039;&#039;, in which empirical tests are performed with a group of naive human users. Usability inspection methods include [http://en.wikipedia.org/wiki/Cognitive_walkthrough Cognitive walkthrough], [http://en.wikipedia.org/wiki/Heuristic_evaluation Heuristic evaluation], and [http://en.wikipedia.org/wiki/Pluralistic_walkthrough Pluralistic walkthrough]. While these inspection methods do not use naive human subjects, the details of the methods might be useful in helping to formalize what interactions occur between a user and an interface, and what each interaction&#039;s costs are for a given design.&lt;br /&gt;
&lt;br /&gt;
[http://portal.acm.org/citation.cfm?id=108862 Jeffries et al.] provide a real-world comparison of two usability inspection methods (heuristic evaluation and cognitive walkthrough), usability testing, and the application of published software guidelines for interface design.&lt;br /&gt;
&lt;br /&gt;
=== Multimodal HCI ===&lt;br /&gt;
&lt;br /&gt;
Continued advancements in several signal processing techniques have given rise to a multitude of mechanisms that allow for rich, multimodal, human-computer interaction.  These include systems for head-tracking, eye- or pupil-tracking, fingertip tracking, recognition of speech, and detection of electrical impulses in the brain, among others [http://vr.kjist.ac.kr/~dhong/website/paperworks/hci2002coursePapers/April24/Sharma98.pdf Sharma-1998-TMH].  With ever-increasing computing power, integrating these systems in real-time applications has become a plausible endeavor. &lt;br /&gt;
&lt;br /&gt;
==== Head-tracking ====&lt;br /&gt;
:In virtual, stereoscopic environments, head-tracking has been exploited with great success to create an immersive effect, allowing a user to move freely and naturally while dynamically updating the user’s viewpoint in a visual environment.  Head-tracking has been employed in non-immersive settings as well, though careful consideration must be paid to account for unintended movements by the user, which may result in distracting visual effects.&lt;br /&gt;
&lt;br /&gt;
==== Pupil-tracking ====&lt;br /&gt;
:Pupil-tracking has been studied a great deal in the field of Cognitive Science ... (need some examples here from CogSci).  In the HCI community, pupil-tracking has traditionally been used for post-hoc analysis of interface designs, and is particularly prevalent in web interface design.  An alternative utility of pupil-tracking is to employ it in real-time as an actual mode of interaction.  This has been examined in relatively few cases (citations), where typically the eyes are used to control a cursor onscreen.  Like head-tracking, implementations of pupil-tracking must be conscious of unintended eye movements, which are incredibly frequent.&lt;br /&gt;
&lt;br /&gt;
==== Fingertip-tracking, Gestural recognition ====&lt;br /&gt;
:Fingertip tracking and gestural recognition of the hands are the subjects of much research in the HCI community, particularly in the Virtual Environment and Augmented Reality disciplines.  Less implicit than head or pupil-tracking, gestural recognition of the hands may draw upon the wealth of precedents readily observed in natural human interactions.  As sensing technologies become less obtrusive and more robust, this method of interaction has the potential to become quite effective.&lt;br /&gt;
&lt;br /&gt;
==== Speech Recognition ====&lt;br /&gt;
:Speech recognition is becoming much better, though effective implementation of its desired effects is non-trivial in many applications.  (More on this later).&lt;br /&gt;
&lt;br /&gt;
==== Brain Activity Detection ====&lt;br /&gt;
:The use of electroencephalograms (EEGs) in HCI is quite recent, and with limited degrees of freedom, few robust interfaces have been designed around it.  Some recent advances in the pragmatic use of EEGs in HCI research can be seen in [http://portal.acm.org/citation.cfm?id=1357054.1357187&amp;amp;coll=Portal&amp;amp;dl=GUIDE&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Grimes et al.]  The possibility of using brain function to interface with a machine is cause for great excitement in the HCI community, and further advances in non-invasive techniques for accessing brain function may allow teleo-HCI to become a reality.  &lt;br /&gt;
&lt;br /&gt;
In sum, the synchronized usage of these modes of interaction makes it possible to architect an HCI system capable of sensing and interpreting many of the mechanisms humans use to transmit information to one another.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
=== Workflow analysis ===&lt;br /&gt;
&lt;br /&gt;
Research in workflow and interaction analysis remains relatively sparse, though its utility would appear to be many-fold.  Tools for such analysis have the potential to facilitate data navigation, provide search mechanisms, and allow for more efficient collaborative discovery.  In addition, awareness and caching of interaction histories readily allows for explanatory presentations of results, and has the potential to provide training data for machine learning mechanisms.&lt;br /&gt;
&lt;br /&gt;
VisTrails is an optimized workflow system developed at the SCI Institute at the University of Utah and implemented atop the VTK visualization toolkit.  The primary purpose of the system is to increase performance when working with multiple visualizations simultaneously.  This is accomplished by storing low-level workflow processes to reduce computational redundancy.  Three papers on VisTrails can be found here: [http://www.cs.brown.edu/people/trevor/Papers/Callahan-2006-MED.pdf Callahan-2006-MED], [http://www.cs.brown.edu/people/trevor/Papers/Callahan-2006-VVM.pdf Callahan-2006-VVM], [http://www.cs.brown.edu/people/trevor/Papers/Bavoil-2005-VEI.pdf Bavoil-2005-VEI]&lt;br /&gt;
&lt;br /&gt;
Jeff Heer of Stanford (formerly Berkeley) has presented work on using Graphical Interaction Histories within the Tableau InfoVis application.  Though geared toward two-dimensional visualizations with clearly defined events, his work offers some very useful design guidelines for working with interaction histories, including some evaluation from the deployment of his techniques within Tableau.  His paper from InfoVis &#039;08 can be seen here: [http://www.cs.brown.edu/people/trevor/Papers/Heer-2008-GraphicalHistories.pdf Heer-2008-GraphicalHistories]&lt;br /&gt;
&lt;br /&gt;
If you want to check out some of Trevor&#039;s work having to do with using interaction histories in 3D, time-varying scientific visualizations, his preliminary work that was presented at Vis &#039;08 can be seen here: [http://www.cs.brown.edu/people/trevor/trevor_iweb/Publications_files/obrien-2008-visDemo.pdf Abstract], [http://www.cs.brown.edu/people/trevor/trevor_iweb/Publications_files/obrien-2008-visPoster.pdf Poster]&lt;br /&gt;
&lt;br /&gt;
Optimizing workflows that have been captured -- Tovi?&lt;br /&gt;
&lt;br /&gt;
Does ethnography fit in here?&lt;br /&gt;
&lt;br /&gt;
==Significance==&lt;br /&gt;
&lt;br /&gt;
==Preliminary results==&lt;br /&gt;
&lt;br /&gt;
=== Gideon ===&lt;br /&gt;
&lt;br /&gt;
====Preliminary Psychological Measures Show the Need for a Standardized Division of Cognitive Labor Across Humans and Computers.====&lt;br /&gt;
&lt;br /&gt;
A meta-analysis of a subset of the cognitive psychology and human-computer interaction literature presents evidence that interactions between humans and computers can be improved by taking into account the cognitive resources required for different types of tasks. It is well known that humans and computers excel at different types of tasks, but the field has not made an explicit effort to standardize a set of guidelines that interface designers may use when developing computer systems. For people and computers to function in an optimized, complementary fashion, we still need a systematic way of distributing tasks amongst them.&lt;br /&gt;
&lt;br /&gt;
It is often the case that what computers excel at, humans have difficulty with (e.g., arithmetic), while the opposite is also true (e.g., pattern recognition). After a preliminary search of the literature, we have explored common tasks in today&#039;s software that neglect this performance dichotomy. Designers have not appropriately addressed these gaps, in part owing to a lack of multidisciplinary research. Our meta-analysis presents data that supports our view on two different tasks: 3D shape rotation and face recognition.&lt;br /&gt;
&lt;br /&gt;
We have demonstrated in just a 30-hour study that computer-assisted 3D shape rotation is consistently preferred over human-only mental rotation. Complementarily, humans consistently outperform modern computer systems at face recognition. Sometimes these performance gaps are obvious, stemming from a lack of technology or common sense; however, that is not always the case. Our preliminary study demonstrates that systems benefit from consistent, rule-based task-distribution guidelines. Furthermore, the literature is rich with other types of tasks that await systematic exploitation. Upon further study, the community will benefit from a tested and systematic approach to designing improved human-computer interfaces.&lt;br /&gt;
&lt;br /&gt;
=== Steven ===&lt;br /&gt;
**C.1.3 Terms&lt;br /&gt;
*C.2 Garage Band study&lt;br /&gt;
:This study presented a participant with the Apple Garage Band program.  A sample song had been synthesized using the program, and parts of it “broken” in various ways (e.g. pitch changed, track segment deleted, etc.).  The user was then tasked with returning the song to its original form.&lt;br /&gt;
**C.2.1 Feasibility of a creative problem-solving task&lt;br /&gt;
::This task required users to approach the song from a fairly mechanistic perspective, as segments which sounded incorrect could only be fixed via the manipulation of track variables.  The search for such broken segments required a qualitative listening trial, however, and the user thus had to correlate the song’s aural product with its technical components.  These individual problems allow each user’s intuitive problem-solving process to be assessed several times within a single GUI without the process becoming repetitive.  If the GUI adheres to proper design rules and maintains task saliency, the user ought to have minimal difficulty in performing the first task, but even if there is an initial learning curve associated with the program, the user ought to see continual improvement in the effort required to solve each task.&lt;br /&gt;
**C.2.2 Unique aspects and pertinence&lt;br /&gt;
::The task revealed a few very interesting flaws in the program’s user interface design, as well as a few intriguing insights into the user’s process for reaching the goal.  The interface was designed to appear elegant and to convey ease of use, but in doing so it seems to have neglected sign salience for the average user.  Although the program has the ability to create very high-quality music, it is bundled with Mac OS X and thus ought to be fairly easily comprehended by the average user.  The study showed that the user had great difficulty in finding certain functions, however, and in some cases even tasks as simple as dragging a track segment to extend its length were achieved only after multiple attempts.&lt;br /&gt;
::The user’s behavior was both puzzling and insightful.  He replayed the entire song several times throughout the trial, both before and after each task and often during.  He also would replay the segment of the song upon which he was working several times between steps.  In attempting to fix the segment which had an altered pitch, for example, the user played through the entire song three times, then played only the concerned region repeatedly, then every segment besides the one with an altered pitch, then raised the volume of the altered pitch segment and replayed the region again; all before altering the pitch and repeating this process of multiple plays all over again.  This example reveals that the user had a great desire to prepare himself for achieving the intermediate goal of pitch adjustment not by determining the segment which needed adjustment and then proceeding, but by mentally arranging the entire task before proceeding.  That is to say, the user did not recognize his goal and proceed piecemeal by clicking the concerned segment and using a trial-and-error method to adjust the pitch.  Instead, he replayed the song until he knew the degree and direction with which he would need to change the pitch, as well as how the adjusted track ought to fit with the other tracks, all before proceeding to the editing process.&lt;br /&gt;
*C.3 Photoshop study&lt;br /&gt;
:This study presented a participant with the Adobe Photoshop program.  Five images were pre-loaded, four of various monochromatic photographs from periods before color photography and one of a modern photograph of a man in New York City.  The user was then tasked with altering the modern photograph to appear similar to one or more of the older photographs.  The goal was one final, edited version of the original photograph.&lt;br /&gt;
**C.3.1 Feasibility of a creative sandbox task&lt;br /&gt;
::This task allowed users to approach the image from many viewpoints.  Because there was not an inherent goal of converting the modern image to grayscale or incorporating certain aspects of the older images into it, the user could choose from myriad aspects of the older photos to which to conform the modern photo.  As a result, the task requires that the GUI have extremely high task saliency, with readily understood terms and editing processes that match those intuited by the user.  Such a task is thus less a measure of the individual user’s approach to the goal than a measure of the interface’s ability to cope with a myriad of user goals.&lt;br /&gt;
**C.3.2 Unique aspects and pertinence&lt;br /&gt;
*C.4 Lessons Learned&lt;br /&gt;
**C.4.1 Setup of study, metric methods&lt;br /&gt;
**C.4.2 Possibility for future critique-enabled user workflow&lt;br /&gt;
&lt;br /&gt;
===Trevor and Eric===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Interaction Histories&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Project idea:&#039;&#039;&#039;  Generating interaction histories within scientific visualization applications to facilitate individual and collaborative scientific discovery. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Preliminary Work&#039;&#039;&#039;&lt;br /&gt;
# A software infrastructure for caching and filtering interaction events has been developed within an existing scientific application aimed at exploring animal kinematic data captured via high-speed x-ray and CT.&lt;br /&gt;
# Methods for visualizing, editing, and sharing interaction histories have been designed and implemented.&lt;br /&gt;
# Methods for annotating and querying interactions have been implemented.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Preliminary Work from the Future&#039;&#039;&#039;&lt;br /&gt;
# Software model extended to two other interactive visualizations -- a flow visualization application, and a protein visualization application.&lt;br /&gt;
# User study to examine techniques for automatic history generation&lt;br /&gt;
## Automatic creation v. semi-automatic creation v. manual creation&lt;br /&gt;
# Timed-task pilot study performed to validate utility of interaction history techniques&lt;br /&gt;
## Task performance with histories v. without histories&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;General Outline of Tasks&#039;&#039;&#039;&lt;br /&gt;
# Capture user interaction history&lt;br /&gt;
# Predict user interactions, given interaction history&lt;br /&gt;
## Use a relational Markov model?&lt;br /&gt;
# Modify the UI, given predicted user interactions&lt;br /&gt;
# Evaluate this modified UI&lt;br /&gt;
## Compare performance with and without UI modifications&lt;br /&gt;
## Evaluate performance when predicted interactions are incorrect x% of the time&lt;br /&gt;
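For the prediction step above, a first-order Markov model over logged events is a reasonable baseline before moving to a relational Markov model.  A minimal sketch (the event names are hypothetical, not from the captured histories):&lt;br /&gt;

```python
from collections import defaultdict

class InteractionPredictor:
    """First-order Markov model over a stream of UI interaction events."""

    def __init__(self):
        # counts[prev][nxt] = times event nxt immediately followed prev
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, history):
        for prev, nxt in zip(history, history[1:]):
            self.counts[prev][nxt] += 1

    def predict(self, last_event):
        """Most frequent successor of last_event, or None if unseen."""
        successors = self.counts.get(last_event)
        if not successors:
            return None
        return max(successors, key=successors.get)

# hypothetical captured history from a visualization session
history = ["open_file", "rotate_view", "annotate", "rotate_view", "annotate", "save"]
model = InteractionPredictor()
model.train(history)
print(model.predict("rotate_view"))  # annotate
```

Taking the most frequent successor gives the maximum-likelihood next event; a relational model would additionally condition on event attributes such as tool, window, or dataset.&lt;br /&gt;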
&lt;br /&gt;
===EJ===&lt;br /&gt;
&#039;&#039;&#039;Problem:&#039;&#039;&#039; There currently exists no metric for evaluating interfaces that attempts to reconcile popular and successful heuristic design guidelines with cognitive theory.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Preliminary Work&#039;&#039;&#039;&lt;br /&gt;
I. A mapping of empirically-effective heuristic design guidelines to fundamental cognitive principles.&lt;br /&gt;
:*A set of &amp;quot;common&amp;quot; design guidelines is arrived at through a survey of popular and effective heuristic guidelines in use.&lt;br /&gt;
:*Proposed common design guidelines:&lt;br /&gt;
 Discoverability&lt;br /&gt;
 Flexibility&lt;br /&gt;
 Appropriate visual presentation&lt;br /&gt;
 Predictability&lt;br /&gt;
 Consistency&lt;br /&gt;
 Simplicity&lt;br /&gt;
 Memory load reduction&lt;br /&gt;
 Feedback&lt;br /&gt;
 Task match&lt;br /&gt;
 User control&lt;br /&gt;
 Efficiency&lt;br /&gt;
:*Proposed cognitive principles:&lt;br /&gt;
 Affordance&lt;br /&gt;
 Visual cue&lt;br /&gt;
 Cognitive load&lt;br /&gt;
 Chunking&lt;br /&gt;
 Activity&lt;br /&gt;
 Actability&lt;br /&gt;
II. A weighting or priority for each of these analogues.&lt;br /&gt;
:*These can begin as binary estimations based on empirical evidence.&lt;br /&gt;
:*Through experimentation, these values should converge to discrete priority values for each analogue, allowing a ranking of analogues.&lt;br /&gt;
III. A system for applying these analogues and respective priority to the evaluation of an interface.&lt;br /&gt;
:*This can occur manually or in an automated fashion.&lt;br /&gt;
:*In this step (or possibly in a separate step), analogues should be assigned a &#039;&#039;suggestion&#039;&#039; or potential correction to provide in the event of &amp;quot;failure&amp;quot; of a particular test by a given interface.&lt;br /&gt;
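As a sketch of part III, the priorities from part II could drive a simple weighted checklist: a failed analogue contributes nothing to the score and surfaces its suggestion, ordered by priority.  The analogue names, weights, and suggestion below are illustrative placeholders, not converged values:&lt;br /&gt;

```python
# Hypothetical analogue weights: (guideline, cognitive principle) pairs
# mapped to priorities.  These begin as rough binary estimates and would
# converge to discrete values through experimentation.
ANALOGUE_WEIGHTS = {
    ("memory load reduction", "chunking"): 3,
    ("consistency", "cognitive load"): 2,
    ("discoverability", "affordance"): 2,
    ("feedback", "visual cue"): 1,
}

SUGGESTIONS = {
    ("memory load reduction", "chunking"): "group related controls into labeled clusters",
}

def score_interface(test_results):
    """test_results maps each analogue to True (pass) or False (fail)."""
    score = 0
    failures = []
    for analogue, passed in test_results.items():
        weight = ANALOGUE_WEIGHTS.get(analogue, 1)
        if passed:
            score = score + weight
        else:
            hint = SUGGESTIONS.get(analogue, "no suggestion recorded")
            failures.append((weight, analogue, hint))
    failures.sort(reverse=True)  # highest-priority failures first
    return score, failures

score, failures = score_interface({
    ("consistency", "cognitive load"): True,
    ("memory load reduction", "chunking"): False,
})
print(score, failures[0][2])
```

The manual version of this step is the same checklist applied by an evaluator; the automated version would replace the hand-entered pass/fail map with instrumented tests.&lt;br /&gt;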
&lt;br /&gt;
===EJ and Jon===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Photoshop Study&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
We demonstrate that the efficiency of performing subtasks in Photoshop can be predicted by a simple model of human perceptual and cognitive abilities.  In particular, we show that several of the tools commonly used to perform basic operations in Photoshop often violate the user&#039;s expectations of how those tools should work or where they ought to be located within the interface.  These violations can be categorized as follows: (1) unintuitive relationships between adjustments to common tool parameters and their perceptual results (e.g. adjusting the Magic Wand tool&#039;s &amp;quot;tolerance&amp;quot; setting often leads to unintended selections); (2) inefficient means of adjusting tool parameters (e.g. adjusting the &amp;quot;tolerance&amp;quot; setting by clicking, typing a number, hitting enter, observing the result, and iterating until the desired perceptual effect is achieved); (3) mismatches between the names and locations the user expects for tools or menu items and their actual names and locations (e.g. resizing a picture via the &amp;quot;transform&amp;quot; menu item); (4) availability of a tool in multiple locations, which imposes a cognitive load on the user searching for that tool, while the various contexts shape the user&#039;s expectations about its effects; (5) [what else?]&lt;br /&gt;
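Violation (2) can be made concrete with a Keystroke-Level Model estimate.  Using the standard KLM operator times, one pass of the click-type-enter-observe loop already costs several seconds; the operator sequence below is our assumed decomposition of that loop, not a measured one:&lt;br /&gt;

```python
# Keystroke-Level Model operator estimates, in seconds
# (standard values from Card, Moran, and Newell).
K = 0.2    # keystroke
P = 1.1    # point with the mouse
B = 0.1    # mouse button press
H = 0.4    # home hands between mouse and keyboard
M = 1.35   # mental preparation

# One pass of the tolerance loop: decide on a value, point to the
# field, click, home to the keyboard, type a two-digit value plus
# Enter, then mentally evaluate the perceptual result.
one_iteration = M + P + B + H + 3 * K + M
print(round(one_iteration, 2))  # about 4.9 s per attempt
```

Five such iterations approach half a minute, the kind of quantitative gap a direct-manipulation control (a single M + P followed by a continuous drag) would be predicted to close.&lt;br /&gt;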
&lt;br /&gt;
===Jon===&lt;br /&gt;
&lt;br /&gt;
We have developed a model of human cognitive and perceptual abilities that allows us to predict human performance and thereby converge on ideal interfaces while simultaneously ruling out sub-optimal ones.  The model consists of a set of design principles combined with an extensive catalog of human perceptual and cognitive constraints on interface design.  The effectiveness of this model has been demonstrated by using the combined set of principles to assign scores to a set of GUIs designed to help a user accomplish the same overarching task, and comparing those scores with actual user performance.&lt;br /&gt;
&lt;br /&gt;
===Eric===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Project idea:&#039;&#039;&#039; Introduce a method for collecting data on user performance in cognitive, perceptual, and motor-control tasks that requires less monetary cost, allows for a greater number of samples, and measures user improvements over time.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Preliminary Work&#039;&#039;&#039;&lt;br /&gt;
Create a simple &amp;quot;brain training&amp;quot;-style game in which users must perform a simple cognitive task. As an example, perhaps the task is to manipulate shape1 into shape2, given a simple set of operators. Each of the user&#039;s actions (mouse movements, button clicks, etc.) will be documented along with the state of the game at that time. By varying the interface for different users, we can see how it affects performance in terms of cognitive and low-level tasks.&lt;br /&gt;
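Documenting each action alongside the game state can be as lightweight as an append-only log of JSON lines, one record per event (the field names are placeholders, not a fixed schema):&lt;br /&gt;

```python
import json
import time

def log_event(logfile, event_type, game_state, detail=None):
    """Append one timestamped interaction record as a line of JSON."""
    record = {
        "t": time.time(),          # wall-clock timestamp of the event
        "event": event_type,       # e.g. "mouse_move", "button_click"
        "state": game_state,       # snapshot of the current game state
        "detail": detail,          # event-specific payload, if any
    }
    logfile.write(json.dumps(record) + "\n")

# usage sketch: record a single click during a shape-manipulation trial
with open("session.log", "w") as f:
    log_event(f, "button_click", {"shape": "triangle", "step": 3}, {"button": "rotate"})
```

One line per event keeps the overhead negligible for a web deployment and lets the log be replayed against the game state later.&lt;br /&gt;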
&lt;br /&gt;
Preliminary tests will first measure user performance in a laboratory setting. We will run subjects on two different interfaces and compare the differences in performance. We will then perform this same test in an online setting, and will evaluate how performance differs in this case, which is more subject to user interruptions and noisy data. If performance is similar in all of these tests, we have found a method for measuring low-level tasks that allows for many samples and minimal cost. If performance differs greatly, it may be that the amount of noise introduced by users playing in a casual setting may make the project infeasible.&lt;br /&gt;
&lt;br /&gt;
===Andrew Bragdon===&lt;br /&gt;
&lt;br /&gt;
====I.  &#039;&#039;&#039;Goals&#039;&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
A.  Develop a qualitative theory for predicting user performance with and without automatic meta-work tools for saving and resuming context.&lt;br /&gt;
&lt;br /&gt;
B.  Formative study should inspire 30-hour feasibility study&lt;br /&gt;
&lt;br /&gt;
C.  30-hour feasibility study should give a high-level indication into the relative merit of such an approach&lt;br /&gt;
&lt;br /&gt;
====II.  &#039;&#039;&#039;Formative Study&#039;&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
A.  &#039;&#039;&#039;Description of methodology/experimental procedure&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
1.  In a highly-controlled task environment, understand how an integrated model comprising current theories in perception and HCI can predict/explain task performance&lt;br /&gt;
&lt;br /&gt;
2.  Users trained in the task extensively to control for learning (learning aspect of task could be investigated in future study)&lt;br /&gt;
&lt;br /&gt;
B. &#039;&#039;&#039;Results&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
1. Users favored continuous rotation over static view or meditative pauses.&lt;br /&gt;
&lt;br /&gt;
2. Hand gesticulation seemed to be used as a method of validation in stereo views.&lt;br /&gt;
&lt;br /&gt;
3. Several verbal comments regarding occlusion suggest drawbacks to the tube view.&lt;br /&gt;
&lt;br /&gt;
4. Rotating is beneficial and perhaps necessary for this task.&lt;br /&gt;
&lt;br /&gt;
5. No references to external, offscreen information. In fact, very rarely did participants glance away from the screen.&lt;br /&gt;
&lt;br /&gt;
6. Our assessment of a typical session beginning with a new dataset:&lt;br /&gt;
&lt;br /&gt;
   1. Data loads, split second decision to begin rotating.&lt;br /&gt;
   2. Continuous rotation until proper viewpoint determined.&lt;br /&gt;
   3. Rocking back and forth interaction about the optimal viewpoint. (Thinking?)&lt;br /&gt;
   4. [In stereo views, participants were noted tilting their heads fairly consistently.] &lt;br /&gt;
&lt;br /&gt;
C.  &#039;&#039;&#039;Discussion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
1.  Some of the users&#039; strategies can be explained by current theories in perception&lt;br /&gt;
&lt;br /&gt;
D.  &#039;&#039;&#039;Conclusion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
1.  Accurately modeling low-level performance does not seem to account for higher-level workflow processes.&lt;br /&gt;
&lt;br /&gt;
====III.  &#039;&#039;&#039;30-hour study&#039;&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
A.  Now that we have investigated highly controlled task performance, explore larger workflow context&lt;br /&gt;
&lt;br /&gt;
B.  Perform a study examining how interruptions affect user performance, what coping strategies users employ in such an environment, and whether a simple tool, working sets, helps improve performance by allowing developers to save, load, and switch between window and window-state configurations&lt;br /&gt;
&lt;br /&gt;
1.  Software developers (a good example of challenging, creative information work) will receive task requests by email; notifications will appear on their screens in real time&lt;br /&gt;
&lt;br /&gt;
2.  Each email will have a different priority (e.g., low, high, emergency)&lt;br /&gt;
&lt;br /&gt;
3.  Participants will be asked to manage priorities effectively to accomplish the tasks given&lt;br /&gt;
&lt;br /&gt;
4.  Once they begin working, we will &amp;quot;interrupt&amp;quot; them at controlled times with new task requests of different priorities&lt;br /&gt;
&lt;br /&gt;
5.  We will analyze their actions for coping strategies, metawork, and working spheres to try to understand how the larger workflow context is affected by interruptions&lt;br /&gt;
&lt;br /&gt;
6.  Goal will be to run 4-6 people, &#039;&#039;&#039;would like feedback on this&#039;&#039;&#039;.  Can control for experience by recruiting experienced developers and students from the general population of Brown University.&lt;br /&gt;
&lt;br /&gt;
7.  Note on progress so far: I &amp;quot;ran&amp;quot; myself and found that switching tasks incurred a huge cost in returning to what I was doing.  I think that tools for aiding in switching between working sets will significantly benefit developers in particular, and information workers in general.  Some coping strategies I used: writing things on paper, typing notes into Notepad, and keeping previously used tabs open.  Sometimes it would get too chaotic and I would need to close all open windows and reset from my notes.  Overall, I would say that task switching/pausing/resuming in Visual Studio (test application) is not well supported.&lt;br /&gt;
&lt;br /&gt;
===Adam Darlow===&lt;br /&gt;
&lt;br /&gt;
I.  &#039;&#039;&#039;Goals&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A.  Evaluate the interactions between various cognitive principles and design principles. There are three basic relations:&lt;br /&gt;
&lt;br /&gt;
   1. The cognitive principle is the motivation behind the design principle.&lt;br /&gt;
   2. The cognitive principle suggests a method for achieving the design principle.&lt;br /&gt;
   3. The cognitive principle and design principle are unrelated. (Hopefully few)&lt;br /&gt;
&lt;br /&gt;
B.  Make design rules which are suggested by the combination of a cognitive principle and a design principle.&lt;br /&gt;
&lt;br /&gt;
II.  &#039;&#039;&#039;Description of methodology/experimental procedure&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A.  Collecting commonly accepted design principles from the literature on interface design and well established cognitive principles from the cognitive psychology literature and constructing a matrix which crosses them. Most squares in the matrix should suggest specific design rules.&lt;br /&gt;
&lt;br /&gt;
III. &#039;&#039;&#039;Results&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As a preliminary effort, I have chosen the following two cognitive principles and three design principles:&lt;br /&gt;
&lt;br /&gt;
Cognitive principles&lt;br /&gt;
&lt;br /&gt;
#C1. People derive complex associations and causal interpretations from temporal correlations and patterns.&lt;br /&gt;
&lt;br /&gt;
#C2. People have limited working memory (7 ± 2 items), but each slot can hold a chunk of related information.&lt;br /&gt;
&lt;br /&gt;
Design Principles (from Maeda (TBD link))&lt;br /&gt;
&lt;br /&gt;
#D1. Achieve simplicity through thoughtful reduction.&lt;br /&gt;
&lt;br /&gt;
#D2. Organization makes a system of many appear fewer.&lt;br /&gt;
&lt;br /&gt;
#D3. Knowledge makes everything simpler.&lt;br /&gt;
&lt;br /&gt;
The resulting matrix entries are as follows:&lt;br /&gt;
&lt;br /&gt;
#C1 + D1. Remove extraneous correlations. Things shouldn&#039;t consistently and apparently change or happen in conjunction unless they are actually related and their relation is important to the user.&lt;br /&gt;
&lt;br /&gt;
#C1 + D2. Use temporal and spatial contiguity to help users organize and group multiple events meaningfully.&lt;br /&gt;
&lt;br /&gt;
#C1 + D3. Use temporal correlations to effectively teach the important causal relations inherent in the interface.&lt;br /&gt;
&lt;br /&gt;
#C2 + D1. Reduce the interface such that a user has to be aware of no more than 5 items simultaneously.&lt;br /&gt;
&lt;br /&gt;
#C2 + D2. Groups of semantically related items can for many purposes be treated as a single item.&lt;br /&gt;
&lt;br /&gt;
#C2 + D3. Teach users how things are related so that related items can be chunked.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
IV.  &#039;&#039;&#039;Conclusion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
V.  &#039;&#039;&#039;Future Directions&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To expand the matrix and evaluate the resulting design rules.&lt;br /&gt;
&lt;br /&gt;
==Research plan==&lt;br /&gt;
&lt;br /&gt;
We can speculate here about the details of a longer-term research plan, but it may not be necessary to actually flesh out this part of the &amp;quot;proposal&amp;quot;.  There does need to be enough to define what the overall proposed work is, but that may show up in earlier sections.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Adam Darlow</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal&amp;diff=2398</id>
		<title>CS295J/Research proposal</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal&amp;diff=2398"/>
		<updated>2009-03-09T23:13:59Z</updated>

		<summary type="html">&lt;p&gt;Adam Darlow: /* Workflow Context */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Project summary==&lt;br /&gt;
&lt;br /&gt;
We propose to integrate theories of cognition, models of perception, rules of design, and concepts from the discipline of human-computer interaction to develop a predictive model of user performance in interacting with computer software for visual and analytical work.  Our proposed model comprises a set of computational elements representing components of human cognition, memory, or perception.  The collective abilities and limitations of these elements can be used to provide feedback on the likely efficacy of user interaction techniques.&lt;br /&gt;
&lt;br /&gt;
The choice of human computational elements will be guided by several models or theories of cognition and perception, including Gestalt, Distributed, Gibson, ???(where pathway, when pathway)???, ???working-memory???, ..., and ???.  The list of elements will be extensible.  The framework coupling them will allow for experimental predictions of utility of user interfaces that can be verified against human performance.&lt;br /&gt;
&lt;br /&gt;
Coupling the system with users will involve a data capture mechanism for collecting the communications between a user interface and a user.  These will be primarily event-based, and will include a new low-cost camera-based eye-tracking system.&lt;br /&gt;
&lt;br /&gt;
During early development, existing interfaces will be evaluated manually to characterize their &lt;br /&gt;
&lt;br /&gt;
(we need some way to specify interaction techniques...)&lt;br /&gt;
&lt;br /&gt;
==Specific Contributions==&lt;br /&gt;
# A model of human cognitive and perceptual abilities when using computers&lt;br /&gt;
## Impact: Such a model would allow us to predict human performance with interfaces.  Validation of the model would allow us to more rapidly converge on ideal interfaces while simultaneously ruling out sub-optimal ones.&lt;br /&gt;
## 3-Week Feasibility Study: (1) Distill (perhaps from review articles) the major findings in all of the relevant subfields into a set of principles that will be grouped to form appropriate model components.  Some of the most relevant subfields include: memory, attention, visual perception, psychoacoustics, task switching, categorization, event perception, haptics (ANY OTHERS?).  (2)  Use these to devise a simple predictive model of human interaction.  This model could simply consist of the set of design principles but should allow some form of quantitative scoring/evaluation of interfaces.  (3) Develop a small set (5-10) of candidate GUIs that are designed to help the user accomplish the same overarching task(s) (e.g. importing, analyzing, and flexibly graphing data in Matlab).   (4)  Test/validate the model by comparing predicted performance to actual performance with the GUIs.&lt;br /&gt;
## Risks/Costs: There are always potential risks to any human subjects that might participate in testing the model which might necessitate IRB approval.  Costs might include: (1) paying for any necessary hardware and software for developing, displaying, and testing the GUIs, and (2) paying for human subjects.&lt;br /&gt;
# Something about design rules collected and merged&lt;br /&gt;
## Something comparing these collected rules to a baseline (establishing their value)&lt;br /&gt;
## ???&lt;br /&gt;
# A predictive, fully-integrated model of user workflow which encompasses low-level tasks, working spheres, communication chains, interruptions and multi-tasking. (OWNER: Andrew Bragdon)&lt;br /&gt;
##Traditionally, software design and usability testing are focused on low-level task performance.  However, prior work (Gonzales, et al.) provides strong empirical evidence that users also work at a higher, &#039;&#039;working sphere&#039;&#039; level.  Su, et al., develop a predictive model of task switching based on communication chains.  Our model will specifically identify and predict key aspects of higher-level information work behaviors, such as task switching.  We will conduct initial exploratory studies to test specific instances of this high-level hypothesis.  We will then use the refined model to identify specific predictions for the outcome of a formal, ecologically valid study involving a complex, non-trivial application.&lt;br /&gt;
## Impact: To truly design computing systems which are designed around the way users work, we must understand &#039;&#039;how&#039;&#039; users work.  To do this, we need to establish a predictive model of user workflow that encompasses multiple levels of workflow: individual task items, larger goal-oriented working spheres, multi-tasking behavior, and communication chains.  Current information work systems are almost always designed around the lowest level of workflow, the individual task, and do not take into account the larger workflow context.  Fundamentally, a predictive model would allow us to design computing systems which significantly increase worker productivity in the United States and around the world, by designing these systems around the way people work.&lt;br /&gt;
## Risk and Costs: Risk will play an important factor in this research, and thus a core goal of our research agenda will be to manage this risk.  The most effective way to do this will be to compartmentalize the risk by conducting empirical investigations - which will form the basis for the model - into the separate areas: low-level tasks, working spheres, communication chains, interruptions and multi-tasking in parallel.  While one experiment may become bogged down in details, the others will be able to advance sufficiently to contribute to a strong core model, even if one or two facets encounter setbacks during the course of the research agenda.  The primary cost drivers will be the preliminary empirical evaluations, the final system implementation, and the final experiments which will be designed to support the original hypothesis.  The cost will span student support, both Ph.D. and Master&#039;s students, as well as full-time research staff.  Projected cost: $1.5 million over three years.&lt;br /&gt;
## 3-week Feasibility Study:  To ascertain the feasibility of this project we will conduct an initial pilot test to investigate the core idea: a predictive model of user workflow.  We will spend 1 week studying the real workflow of several people through job shadowing.  We will then create two systems designed to help a user accomplish some simple information work task.  One system will be designed to take larger workflow into account (experimental group), while one will not (control group).  In a synthetic environment, participants will perform a controlled series of tasks while receiving interruptions at controlled times.  If the two groups perform roughly the same, then we will need to reassess this avenue of research.  However, if the two groups perform differently, then our pilot test will have lent support to our approach and core hypothesis.&lt;br /&gt;
# A low-overhead mechanism for capturing event-based interactions between a user and a computer, including web-cam based eye tracking.  (should we buy or find out about borrowing use of pupil tracker?)  &#039;&#039;Should we include other methods of interaction here?  Audio recognition seems to be the lowest cost.  It would seem that a system that took into account head-tracking, audio, and fingertip or some other high-DOF input would provide a very strong foundation for a multi-modal HCI system.  It may be more interesting to create a software toolkit that allows for synchronized usage of those inputs than a low-cost hardware setup for pupil-tracking.  I agree pupil-tracking is useful, but developing something in-house may not be the strongest contribution we can make with our time.&#039;&#039; (Trevor)&lt;br /&gt;
## Accuracy study of eye tracking (2 cameras?  double as an input device?)&lt;br /&gt;
## ???&lt;br /&gt;
# A classification of standard design guidelines into overarching principles with quantifiable cognitive analogues &#039;&#039;&#039;OWNER&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 16:16, 6 February 2009 (UTC)&lt;br /&gt;
## A ranking/weighting for these principles based on their cognitive relevance&lt;br /&gt;
## An interface evaluation mechanism for the application of these principles and their associated guidelines.&lt;br /&gt;
# A systematic method to determine task distribution based on psychological principles. (Owner - [http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon])&lt;br /&gt;
## The theory of distributed cognition (Clark-1998-TEM) is a well-suited basis for constructing a human-computer interaction framework (Hollan-2000-DCF). However, a systematic method of determining which tasks should be distributed to which agent in a distributed environment has yet to be clearly defined. I propose that the cognitive psychology literature is rich with empirical evidence on task performance variables. Dual-process theories (Evans-2003-ITM) usually agree on the nature of the high-level cognitive operations used in human reasoning. It is also argued that low-level processing, which is based on perceptual similarity, contiguity, and association, is carried out by a set of autonomous subsystems. In general, humans excel at these low-level functions relative to traditional von Neumann architectures (and even current neural networks), but cognitive science has recently focused less on high-level reasoning in humans, as the evidence shows that we rarely engage in such demanding cognitive operations. Therefore, we believe that an optimal configuration for distributed cognition among people and computers will take advantage of these specialties and deficiencies and distribute tasks accordingly. Our research project will contribute a set of guidelines, or heuristics, to allow engineers to effectively determine which tasks should be assigned to which entities.&lt;br /&gt;
## The impact of this contribution will be a systematic approach to interface design. Engineers will finally have a well-documented standard about how to determine what operations humans should be responsible for, and which should be off-loaded to the computer.&lt;br /&gt;
## There is no risk for this contribution. The associated cost will be an extensive search of the cognitive psychology literature of roughly the past 25 years. Research on memory, reasoning, perception, and more will be required in order to conduct a complete and accurate assessment.&lt;br /&gt;
## In the course of 30 hours, we may perform a hand-analysis of the empirical results in psychology in order to develop a set of approximately 10 guidelines or rules.&lt;br /&gt;
# A method for collecting data on user performance in cognitive, perceptual, and motor-control tasks that requires less monetary cost, allows for a greater number of samples, and measures user improvements over time. (Owner - Eric)&lt;br /&gt;
##In order to reach a model of human cognitive and perceptual abilities when using computers, experimental analysis of human performance on these tasks will likely be necessary. User studies can often aid in this analysis, but they require substantial money and time and are subject to user fatigue. As an alternative, we propose a web-based method for evaluating user performance in perceptual, motor-control, and cognitive tasks. The idea is to take a task that would normally be measured through user studies in a laboratory and map it into a simple online game. Somewhat similar work has been done by Popovic et al. at the University of Washington, who mapped the task of folding proteins into an online game ( http://www.economist.com/displaystory.cfm?story_id=11326188 ) with much success. We will analyze the value of this method by comparing it to similar tasks performed in laboratory experiments, both in terms of user performance and deployment costs.&lt;br /&gt;
##By converting the task into a simple game, we hope to reduce the problem of user fatigue. Additionally, if the game is played on a social networking site, we are able to track basic information of users who perform the tasks and, more importantly, can identify returning users. Thus, we can track not only a user&#039;s performance, but also how they improve at a given task over time.&lt;br /&gt;
##There are no clear risks involved with this study. Potential costs would be those required for development of each experiment and for web hosting.&lt;br /&gt;
##As a prototype, we can select one particular task to map into a simple online game. To check feasibility, we need to ensure that the results from our proposed method are similar to those found in laboratory settings. There are two possible effects we must test for: bias in results due to the mapping into a game, and bias due to the sample of subjects or other changes caused by the web-based component. A simple test would be to first test users in a lab using traditional methods as a baseline, and then see how performance differs in laboratory tests using the game-based mapping. This will determine whether the game appropriately measures the given task. Next, an online version of the game can be introduced, and performance can be compared with the laboratory settings. If performance is similar across all of these tests, we have found a method for measuring low-level tasks that allows for many samples at minimal cost.&lt;br /&gt;
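For the lab-versus-online comparison in the prototype, a Welch t statistic on task-completion times is one simple way to quantify whether performance is similar; the samples below are made-up illustration data, not results:&lt;br /&gt;

```python
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch t statistic for two independent samples (unequal variances)."""
    va, vb = variance(sample_a), variance(sample_b)
    se = (va / len(sample_a) + vb / len(sample_b)) ** 0.5
    return (mean(sample_a) - mean(sample_b)) / se

# made-up task-completion times in seconds
lab    = [41.2, 38.7, 45.0, 40.1, 39.5]
online = [44.8, 52.3, 39.9, 48.0, 50.2]
print(round(welch_t(lab, online), 2))  # about -2.52
```

A statistic near zero would support similarity between settings; a large magnitude would point to the casual-setting noise discussed above.&lt;br /&gt;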
&lt;br /&gt;
==Specific Aims==&lt;br /&gt;
&lt;br /&gt;
# build X&lt;br /&gt;
# build Y&lt;br /&gt;
# run experiment Z&lt;br /&gt;
# compare X with existing approach Q&lt;br /&gt;
# Develop a scoring system for interfaces to evaluate the degree to which all changes and causal relations are tracked by motion cues that are contiguous in time and/or space.&lt;br /&gt;
# Accurately assess computational and psychological costs for tasks and subtasks.  To do this, we will develop two non-trivial prototype systems; a conventional control system and a novel system which is based on our model of task switching.  We will use our model to make specific predictions about relative task performance and user affect responses, and then test these predictions empirically in a formal study.&lt;br /&gt;
# Develop model that accounts for qualitatively different psychological tasks&lt;br /&gt;
# Test model on real-world data&lt;br /&gt;
# Build classification for design guidelines based on cognitive analogues [[User:E J Kalafarski|E J Kalafarski]] 16:18, 6 February 2009 (UTC)&lt;br /&gt;
# Parse and classify existing guidelines by this new metric [[User:E J Kalafarski|E J Kalafarski]] 16:18, 6 February 2009 (UTC)&lt;br /&gt;
# Build an automated evaluation rubric for applying these classifications to interfaces in development [[User:E J Kalafarski|E J Kalafarski]] 16:18, 6 February 2009 (UTC)&lt;br /&gt;
# Long-term goal: build a mixed-initiative interface generation system that takes some basic GUI requirements from the designer as input and attempts to maximize its score on the &amp;quot;evaluation rubric&amp;quot; within these constraints. (Eric)&lt;br /&gt;
# Develop model component to predict performance following interruptions or changes between work spheres.&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
===Models of cognition===&lt;br /&gt;
There are several models of cognition, ranging from fundamental aspects of neurological processing to extremely high-level psychological analysis.  Three main theories have come to be recognized as the most helpful in conceptualizing the actual process of HCI.  These models all agree that one cannot accurately analyze HCI by viewing the user without context, but the extent and nature of this context varies greatly.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[http://en.wikipedia.org/wiki/Activity_Theory Activity Theory]&#039;&#039;&#039;, developed in the early 20th century by Russian psychologists S.L. Rubinstein and A.N. Leontiev, posits the existence of four discrete aspects of human-computer interaction.  The &amp;quot;Subject&amp;quot; is the human interacting with the item, who possesses an &amp;quot;Object&amp;quot; (e.g. a goal) which they hope to accomplish by using a tool.  The Subject conceptualizes the realization of the Object via an &amp;quot;Action&amp;quot;, which may be as simple or complex as is necessary.  The Action is made up of one or more &amp;quot;Operations&amp;quot;, the most fundamental level of interaction including typing, clicking, etc.&lt;br /&gt;
&lt;br /&gt;
A key concept in Activity Theory is that of the artifact, which mediates all interaction.  The computer itself need not be the only artifact in HCI - others include all sorts of signs, algorithmic methods, instruments, etc.&lt;br /&gt;
&lt;br /&gt;
A longer synopsis of Activity Theory may be found at [http://mcs.open.ac.uk/yr258/act_theory/ this website].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The [http://en.wikipedia.org/wiki/Situated_cognition Situated Action Model]&#039;&#039;&#039; focuses on emergent behavior, emphasizing the subjective aspect of human-computer interaction and the consequent need to accommodate a wide variety of users.  This model proposes the least amount of contextual interaction, and seems to maintain that the interactive experience is determined entirely by the user&#039;s ability to use the system in question.  While limiting, this concept of usability can be very informative when designing for less tech-savvy users.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[http://en.wikipedia.org/wiki/Distributed_cognition Distributed Cognition]&#039;&#039;&#039; proposes that the computer (or, as in Activity Theory, any other artifact) can be used and ought to be thought of as an extension of the mental processing of the human.  This is not to say that the two are of equal or even comparable cognitive abilities, but that each has unique strengths and that recognition of and planning around these relative advantages can lead to increased efficiency and effectiveness.  The rotation of blocks in Tetris serves as a perfect example of this sort of cognitive symbiosis.&lt;br /&gt;
&lt;br /&gt;
(Steven)&lt;br /&gt;
&lt;br /&gt;
====Workflow Context====&lt;br /&gt;
There are at least two levels at which users work ([http://portal.acm.org/citation.cfm?id=985692.985707&amp;amp;coll=portal&amp;amp;dl=ACM&amp;amp;CFID=20781736&amp;amp;CFTOKEN=83176621 Gonzales, et al., 2004]).  Users accomplish individual low-level tasks which are part of larger &#039;&#039;working spheres&#039;&#039;; for example, an office worker might send several emails, create several Post-It (TM) note reminders, and then edit a Word document, each of these smaller tasks being part of a single larger working sphere of &amp;quot;adding a new section to the website.&amp;quot;  Thus, it is important to understand this larger workflow context, which often involves extensive multi-tasking as well as switching between a variety of computing devices and traditional tools, such as notebooks.  In this study it was found that the information workers surveyed typically switch individual tasks every 2 minutes and maintain many simultaneous working spheres, between which they switch, on average, every 12 minutes.  This frenzied pace of switching tasks and working spheres suggests that users will not be using a single application or device for a long period of time, and that affordances to support this characteristic pattern of information work are important.&lt;br /&gt;
&lt;br /&gt;
Czerwinski, et al. conducted a [http://portal.acm.org/citation.cfm?id=985692.985715&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 diary study] of task switching and interruptions of users in 2004.  This study showed that task complexity, task duration, length of absence, and number of interruptions all affected the users&#039; own perceived difficulty of switching tasks.  [http://delivery.acm.org/10.1145/1250000/1240730/p677-iqbal.pdf?key1=1240730&amp;amp;key2=4525483321&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Iqbal, et al.] studied task disruption and recovery in a field study, and found that users often visited several applications as a result of an alert, such as a new email notification, and that 27% of task suspensions resulted in 2 hours or more of disruption.  Users in the study said that losing context was a significant problem in switching tasks, and led in part to the length of some of these disruptions.  This work hints at the importance of providing cues to users to maintain and regain lost context during task switching.&lt;br /&gt;
&lt;br /&gt;
(Andrew Bragdon - OWNER)&lt;br /&gt;
&lt;br /&gt;
The problem of task switching is exacerbated when some tasks are more routine than others. When a person intends to switch from a routine task to a novel task at some later time, they often forget the context of the original task ([http://web.ebscohost.com/ehost/pdf?vid=1&amp;amp;hid=7&amp;amp;sid=54ec1e22-3df2-462c-b484-7a7c052c2173%40SRCSM1 Aarts et al., 1999]). Also, if both tasks are done in the same context, with the same tools or with the same materials, people have difficulty inhibiting the routine task while doing the novel task (Stroop, 1935). This inhibition also makes switching back to the routine task slower (Allport et al., 1994). All of these problems can be alleviated to some degree by salient cues in the environment. Enacting the intention to switch becomes easier when there is a salient reminder at the appropriate time (McDaniel and Einstein, 1993) and associating different environmental cues with different goals can automatically trigger appropriate behavior ([http://web.ebscohost.com/ehost/pdf?vid=1&amp;amp;hid=4&amp;amp;sid=68f77032-f093-4139-a833-760d2217b513%40sessionmgr9 Aarts and Dijksterhuis, 2003]).&lt;br /&gt;
&lt;br /&gt;
(Adam)&lt;br /&gt;
&lt;br /&gt;
(Edited by Andrew)&lt;br /&gt;
&lt;br /&gt;
====Quantitative Models: Fitts&#039;s Law, Steering Law====&lt;br /&gt;
Fitts&#039;s law and the steering law are examples of quantitative models that predict user performance when using certain types of user interfaces.  In addition to these classic models, [http://delivery.acm.org/10.1145/1250000/1240850/p1495-cao.pdf?key1=1240850&amp;amp;key2=9904483321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Cao and Zhai] developed and validated a quantitative model of human performance of pen stroke gestures in 2007.  [http://tlaloc.sfsu.edu/~lank/research/appearing/FSS604LankE.pdf Lank and Saund] utilized a model which used curvature to predict the speed of a pen as it moved across a surface to help disambiguate target selection intent.&lt;br /&gt;
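&lt;br /&gt;
As a minimal illustration, the two laws can be written as one-line predictors. The coefficients a and b below are hypothetical; in practice they are fit by regression for a particular device and user population.&lt;br /&gt;

```python
import math

def fitts_mt(a, b, distance, width):
    """Fitts's law (Shannon formulation): MT = a + b * log2(D/W + 1)."""
    return a + b * math.log2(distance / width + 1)

def steering_mt(a, b, distance, width):
    """Steering law for a straight tunnel of constant width: T = a + b * (D/W)."""
    return a + b * (distance / width)

# Hypothetical regression coefficients a (intercept, s) and b (slope, s/bit).
# Doubling the target width lowers the index of difficulty, so the
# predicted movement time drops.
slow = fitts_mt(0.1, 0.15, distance=512, width=16)
fast = fitts_mt(0.1, 0.15, distance=512, width=32)
assert fast < slow
```

Either predictor could serve as the physical-motion component of an automated evaluation: sum the predicted times over the pointing and steering actions a task requires.&lt;br /&gt;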
&lt;br /&gt;
In addition, quantitative models are often tested against new interfaces to verify that they hold.  For example, [http://portal.acm.org/citation.cfm?id=1054972.1055012&amp;amp;coll=GUIDE&amp;amp;dl=ACM&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Grossman et al.] verified that their Bubble Cursor approach to enlarging effective pointing target sizes obeyed Fitts&#039;s law for actual distance traveled.&lt;br /&gt;
&lt;br /&gt;
In addition to formal models, machine learning techniques have been applied to modeling user interaction as well.  For example, [http://delivery.acm.org/10.1145/1250000/1240669/p271-hurst.pdf?key1=1240669&amp;amp;key2=6465483321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Hurst, et al.] used a learning classifier, trained on low-level mouse and keyboard usage patterns, to dynamically identify novice and expert users with accuracies as high as 91%.  This classifier was then used to provide different information and feedback to the user as appropriate.&lt;br /&gt;
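&lt;br /&gt;
The specific features and classifier used by Hurst et al. are not reproduced here; the following is a hedged sketch of the general idea using a nearest-centroid rule over hypothetical input-behavior features.&lt;br /&gt;

```python
# Nearest-centroid sketch of novice/expert classification. The feature names
# and values are hypothetical, not the features used by Hurst et al.:
# [mean pause between keystrokes (s), mouse path / straight-line ratio,
#  menu-hover dwell time (s)]
novice_sessions = [[0.9, 1.8, 2.5], [1.1, 1.6, 2.2], [0.8, 1.9, 2.8]]
expert_sessions = [[0.3, 1.1, 0.6], [0.2, 1.2, 0.5], [0.4, 1.1, 0.7]]

def centroid(rows):
    return [sum(vals) / len(rows) for vals in zip(*rows)]

def classify(sample, centroids):
    def sqdist(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v))
    return min(centroids, key=lambda label: sqdist(sample, centroids[label]))

centroids = {"novice": centroid(novice_sessions),
             "expert": centroid(expert_sessions)}
assert classify([1.0, 1.7, 2.4], centroids) == "novice"
assert classify([0.25, 1.15, 0.55], centroids) == "expert"
```

A deployed system would, as in the cited work, retrain on real labeled sessions and feed the predicted expertise level back into the interface.&lt;br /&gt;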
&lt;br /&gt;
(Andrew Bragdon - OWNER)&lt;br /&gt;
&lt;br /&gt;
====Distributed cognition====&lt;br /&gt;
&lt;br /&gt;
Distributed cognition is a theory in which thought takes place both inside and outside the brain. Humans have a great ability to use tools and to incorporate their environments into their sphere of thinking. Clark puts it nicely in [http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature#Cognition Clark-1994-TEM].&lt;br /&gt;
&lt;br /&gt;
Therefore, optimal configurations in HCI design will treat the brain, person, interface, and computer as a single, holistic cognitive system.&lt;br /&gt;
&lt;br /&gt;
In practical terms, the issue at hand for our proposal is how to best maximize utility by distributing the cognitive tasks at hand to different components of the whole system. Simply, which tasks can we off-load to the computer to do for us, faster and more accurately? What tasks should we purposely leave the computer out of?&lt;br /&gt;
&lt;br /&gt;
Typically, the tasks most eligible to be off-loaded are the ones we perform poorly; conveniently, the tasks computers perform poorly are ones at which we excel. Here are a few examples:&lt;br /&gt;
&lt;br /&gt;
*Computers&#039; areas of expertise: number crunching, memory, logical reasoning, precision&lt;br /&gt;
*Humans&#039; areas of expertise: associative thought, real-world knowledge, social behavior, alogical reasoning, imprecision&lt;br /&gt;
&lt;br /&gt;
Using this division of cognitive labor allows us to optimize task workflows. Ignoring it places strain and bottlenecks on the computer, the human, or the interface. The field of HCI is full of examples of failures attributable to not recognizing which tasks should be handled by which subsystem.&lt;br /&gt;
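&lt;br /&gt;
A first cut at such a division of labor can be expressed as a lookup over relative-performance scores; all numbers below are hypothetical placeholders for measured data.&lt;br /&gt;

```python
# Hypothetical relative-performance scores (0-1) for each subsystem on a
# task; a real table would be filled in from empirical measurements.
performance = {
    "3d_rotation":          {"human": 0.40, "computer": 0.95},
    "face_recognition":     {"human": 0.90, "computer": 0.60},
    "arithmetic":           {"human": 0.30, "computer": 0.99},
    "ambiguity_resolution": {"human": 0.85, "computer": 0.40},
}

def allocate(perf):
    """Assign each task to whichever subsystem scores higher on it."""
    return {task: max(scores, key=scores.get) for task, scores in perf.items()}

assignment = allocate(performance)
assert assignment["3d_rotation"] == "computer"
assert assignment["face_recognition"] == "human"
```

A fuller model would also account for hand-off costs between subsystems, which a per-task lookup like this ignores.&lt;br /&gt;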
&lt;br /&gt;
As a heuristic for dividing thinking, one might turn to the dual-process theory literature [http://vrl.cs.brown.edu/wiki/CS295J/Literature Evans-2003-ITM]. What is most often called System 1 covers what humans are good at, while System 2 tasks are what computers do well.&lt;br /&gt;
&lt;br /&gt;
(Owner - [http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon])&lt;br /&gt;
&lt;br /&gt;
====Information Processing Approach to Cognition====&lt;br /&gt;
The dominant approach in Cognitive Science is called information processing. It sees cognition as a system that takes in information from the environment, forms mental representations and manipulates those representations in order to create the information needed to achieve its goals. This approach includes three levels of analysis originally proposed by Marr (will cite):&lt;br /&gt;
# Computational - What are the goals of a process or representation? What are the inputs and desired outputs required of a system which performs a task? Models at this level of analysis are often considered normative models, because any agent wanting to perform the task should conform to them. Rational agent models of decision making, for example, belong at this level of analysis.&lt;br /&gt;
# Process/Algorithmic - What are the processes or algorithms involved in how humans perform the task? This is the most common level of analysis as it focuses on the mental representations, manipulations and computational faculties involved in actual human processing. Algorithmic descriptions of human capabilities and limitations, such as working memory size, belong at this level of analysis.&lt;br /&gt;
# Implementation - How are the processes and algorithms realized in actual biological computation? Dopamine theories of reward learning, for example, belong at this level of analysis.&lt;br /&gt;
The information processing approach is often contrasted with the distributed cognition approach. Its advantage is that it finds general mechanisms that are valid across many different contexts and situations. Its disadvantage is that it can have difficulty explaining the rich interactions between people and their environment.&lt;br /&gt;
&lt;br /&gt;
In considering users as information processors, interfaces should take into account people&#039;s computational limitations on short term memory, learning and vision as well as the algorithms and representations that they use to process information and pursue goals.&lt;br /&gt;
&lt;br /&gt;
(Adam)&lt;br /&gt;
&lt;br /&gt;
===Models of perception===&lt;br /&gt;
&lt;br /&gt;
====Gibsonianism====&lt;br /&gt;
&lt;br /&gt;
Gibsonianism, named after James J. Gibson and more commonly referred to as ecological psychology, is an epistemological direct realist theory of perception and action.  In contrast to information processing and cognitivist approaches which generally assume that perception is a constructive process operating on impoverished sense-data inputs (e.g. photoreceptor activity) to generate representations of the world with added structure and meaning (e.g. a mental or neural &amp;quot;picture&amp;quot; of a chair), ecological psychology treats perception as direct, non-inferential, unmediated (by retinal images or mental representations) epistemic contact with behaviorally-relevant features of the environment (Warren, 2005).  The possibilities for action that the environment offers a given animal are taken to be specified by information available in structured energy distributions (e.g. the optic array of light arriving at the eyes), and these possibilities for action constitute the affordances of the environment with respect to that animal (Gibson, 1986).&lt;br /&gt;
&lt;br /&gt;
Gibson&#039;s notion of affordance has many implications for our enterprise; however, it is worth noting that the original definition of affordance emphasizes possibilities for action and not their relative likelihoods.  For example, for most humans, laptop computer screens afford puncturing with Swiss Army knives; however, it is unlikely that a user will attempt to retrieve an electronic coupon by carving it out of their monitor.  This example illustrates that interfaces often afford a class of actions that are undesirable from the perspective of both the designer and the user.&lt;br /&gt;
&lt;br /&gt;
(Jon)&lt;br /&gt;
&lt;br /&gt;
===Design guidelines===&lt;br /&gt;
&lt;br /&gt;
A multitude of rule sets exist for the design not only of interfaces, but of architecture, city planning, and software development.  They range in scale from one primary rule to as many as Christopher Alexander&#039;s 253 rules for urban environments,&amp;lt;ref&amp;gt;http://hci.rwth-aachen.de/materials/publications/borchers2000a.pdf&amp;lt;/ref&amp;gt; which he introduced along with the concept of design patterns in the 1970s.  Studies have likewise been conducted on the use of these rules:&amp;lt;ref&amp;gt;http://stl.cs.queensu.ca/~graham/cisc836/lectures/readings/tetzlaff-guidelines.pdf&amp;lt;/ref&amp;gt; guidelines are often only partially understood, indistinct to the developer, and &amp;quot;fraught&amp;quot; with potential usability problems in real-world situations.&lt;br /&gt;
&lt;br /&gt;
====Application to AUE====&lt;br /&gt;
&lt;br /&gt;
And yet, the vast majority of guideline sets, including the most popular rulesets, have been arrived at heuristically.  The most successful, such as Raskin&#039;s and Shneiderman&#039;s, have been forged from years of observation rather than empirical study and experimentation.  The problem is similar to the circular logic faced by automated usability evaluations: an automated system is limited to offering suggestions from a set of preprogrammed guidelines which have often not been subjected to rigorous experimentation.&amp;lt;ref&amp;gt;http://www.eecs.berkeley.edu/Pubs/TechRpts/2000/CSD-00-1105.pdf&amp;lt;/ref&amp;gt;  In the vast majority of existing studies, emphasis has traditionally been placed either on the development of guidelines or on the application of existing guidelines to automated evaluation.  A mutually reinforcing development of both simultaneously has not been attempted.&lt;br /&gt;
&lt;br /&gt;
Overlap between rulesets is inevitable.  For our purposes of evaluating existing rulesets efficiently, without extracting and analyzing each rule individually, it may be desirable to identify the overarching &#039;&#039;principles&#039;&#039; or &#039;&#039;philosophy&#039;&#039; (at most 2 or 3) of a given ruleset and to determine their quantitative relevance to problems of cognition.&lt;br /&gt;
&lt;br /&gt;
====Popular and seminal examples====&lt;br /&gt;
Shneiderman&#039;s [http://faculty.washington.edu/jtenenbg/courses/360/f04/sessions/schneidermanGoldenRules.html Eight Golden Rules] date to 1987 and are arguably the most-cited.  They are heuristic, but can be somewhat classified by cognitive objective: at least two rules apply primarily to &#039;&#039;repeated use&#039;&#039; as opposed to &#039;&#039;discoverability&#039;&#039;.  Up to five of Shneiderman&#039;s rules emphasize &#039;&#039;predictability&#039;&#039; in the outcomes of operations and &#039;&#039;increased feedback and control&#039;&#039; in the agency of the user.  His final rule, paradoxically, removes control from the user by calling for a reduced short-term memory load, which we can arguably classify as &#039;&#039;simplicity&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Raskin&#039;s [http://www.mprove.de/script/02/raskin/designrules.html Design Rules] are classified into five principles by the author, augmented by definitions and supporting rules.  While one principle is primarily aesthetic (a design problem arguably out of the bounds of this proposal) and one is a basic endorsement of testing, the remaining three begin to reflect philosophies similar to Shneiderman&#039;s: reliability or &#039;&#039;predictability&#039;&#039;, &#039;&#039;simplicity&#039;&#039; or &#039;&#039;efficiency&#039;&#039; (which we can construe as two sides of the same coin), and finally a concept of &#039;&#039;uninterruptibility&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Maeda&#039;s [http://lawsofsimplicity.com/?cat=5&amp;amp;order=ASC Laws of Simplicity] are fewer, and ostensibly emphasize &#039;&#039;simplicity&#039;&#039; exclusively, although elements of &#039;&#039;use&#039;&#039; as related by Shneiderman&#039;s rules and &#039;&#039;efficiency&#039;&#039; as defined by Raskin may be facets of this simplicity.  Google&#039;s corporate mission statement presents [http://www.google.com/corporate/ux.html Ten Principles], only half of which can be considered true interface guidelines.  &#039;&#039;Efficiency&#039;&#039; and &#039;&#039;simplicity&#039;&#039; are cited explicitly, aesthetics are once again noted as crucial, and working within a user&#039;s trust is another application of &#039;&#039;predictability&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
====Elements and goals of a guideline set====&lt;br /&gt;
&lt;br /&gt;
Myriad rulesets exist, but variation among them is scarce; it indeed seems possible to parse these common rulesets into overarching principles that can be converted to or associated with quantifiable cognitive properties.  For example, it is likely &#039;&#039;simplicity&#039;&#039; has an analogue in the short-term memory retention or visual retention of the user, vis-a-vis the rule of [http://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=j5q0VvOGExYC&amp;amp;oi=fnd&amp;amp;pg=PA357&amp;amp;dq=seven+plus+or+minus+two&amp;amp;ots=prI3PKJBar&amp;amp;sig=vOZnqpnkXKGYWxK6_XlA4I_CRyI Seven, Plus or Minus Two].  &#039;&#039;Predictability&#039;&#039; likewise may have an analogue in Activity Theory, in regards to a user&#039;s perceptual expectations for a given action; &#039;&#039;uninterruptibility&#039;&#039; has implications in cognitive task-switching;&amp;lt;ref&amp;gt;http://portal.acm.org/citation.cfm?id=985692.985715&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774&amp;lt;/ref&amp;gt; and so forth.&lt;br /&gt;
&lt;br /&gt;
Within the scope of this proposal, we aim to reduce and refine these philosophies found in seminal rulesets and identify their logical cognitive analogues.  By assigning a quantifiable taxonomy to these principles, we will be able to rank and weight them with regard to their real-world applicability, developing a set of &amp;quot;meta-guidelines&amp;quot; and rules for applying them to a given interface in an automated manner.  Combined with cognitive models and multi-modal HCI analysis, we seek to develop, in parallel with these guidelines, the interface evaluation system responsible for their application. [[User:E J Kalafarski|E J Kalafarski]] 15:21, 6 February 2009 (UTC)&lt;br /&gt;
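&lt;br /&gt;
To make the idea of automated application concrete, here is a hedged sketch of meta-guidelines as predicates over a toy interface description; the check names, the UI schema, and the predictability rule are illustrative assumptions, with only the 7 plus-or-minus 2 bound coming from the cited literature.&lt;br /&gt;

```python
# Meta-guidelines as predicates over a toy interface description. The check
# names and the UI schema are illustrative assumptions; only the 7 +/- 2
# bound comes from the cited literature.
def check_simplicity(ui):
    # Flag any widget group exceeding the short-term memory span of 7 +/- 2.
    return all(len(group) <= 7 + 2 for group in ui["groups"].values())

def check_predictability(ui):
    # Every exposed action should have a declared, visible outcome.
    return all(action in ui["feedback"] for action in ui["actions"])

META_GUIDELINES = [check_simplicity, check_predictability]

def score(ui):
    """Fraction of meta-guidelines the interface satisfies."""
    return sum(1 for check in META_GUIDELINES if check(ui)) / len(META_GUIDELINES)

toy_ui = {
    "groups": {"toolbar": ["new", "open", "save"], "menu": ["cut", "copy", "paste"]},
    "actions": ["save", "cut"],
    "feedback": {"save": "status-bar message", "cut": "selection removed"},
}
assert score(toy_ui) == 1.0
```

Ranking and weighting the meta-guidelines, as proposed, would replace the unweighted fraction above with empirically derived weights.&lt;br /&gt;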
&lt;br /&gt;
:I. Introduction, applications of guidelines&lt;br /&gt;
::A. Application to automated usability evaluations (AUE)&lt;br /&gt;
:II. Popular and seminal examples&lt;br /&gt;
::A. Shneiderman&lt;br /&gt;
::B. Google&lt;br /&gt;
::C. Maeda&lt;br /&gt;
::D. Existing international standards&lt;br /&gt;
:III. Elements of guideline sets, relationship to design patterns&lt;br /&gt;
:IV. Goals for potentially developing a guideline set within the scope of this proposal&lt;br /&gt;
&lt;br /&gt;
===User interface evaluations===&lt;br /&gt;
&lt;br /&gt;
====Interaction capture====&lt;br /&gt;
&lt;br /&gt;
[http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4376144 Yi et al.] have performed a survey of the visualization literature and categorized the different types of interactions that users were faced with. They are as follows:&lt;br /&gt;
# Select: mark something as interesting &lt;br /&gt;
# Explore: show me something else &lt;br /&gt;
# Reconfigure: show me a different arrangement &lt;br /&gt;
# Encode: show me a different representation&lt;br /&gt;
# Abstract/Elaborate: show me more or less detail &lt;br /&gt;
# Filter: show me something conditionally &lt;br /&gt;
# Connect: show me related items &lt;br /&gt;
&lt;br /&gt;
Different GUI components may be able to perform the same type of interaction. We would like to categorize the GUI components or patterns that are used to bring about these interactions, giving us a library of components that can be used to complete a given task. The goal is to create components for a given interaction that minimize cost to the user. Because the cost of a component likely depends on the other components used, the goal of the designer might be to choose a combination of components that minimizes this cost. To do this, we need a way to measure costs, which is discussed in the next section.&lt;br /&gt;
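&lt;br /&gt;
The selection problem described above can be sketched as an exhaustive search over component combinations; the components, costs, and pairwise penalty below are hypothetical.&lt;br /&gt;

```python
from itertools import product

# Hypothetical per-component user costs for each interaction type a task
# needs; real values would come from the cost measures in the next section.
components = {
    "filter": {"search_box": 2.0, "facet_panel": 3.5},
    "select": {"click": 1.0, "lasso": 2.5},
}

# Toy pairwise penalty: some component combinations clash when used together.
penalty = {("facet_panel", "lasso"): 1.5}

def best_combination(components, penalty):
    """Exhaustively search combinations, minimizing summed plus pairwise cost."""
    kinds = list(components)
    best, best_cost = None, float("inf")
    for combo in product(*(components[k] for k in kinds)):
        cost = sum(components[k][c] for k, c in zip(kinds, combo))
        cost += sum(penalty.get(pair, 0.0) for pair in product(combo, repeat=2))
        if cost < best_cost:
            best, best_cost = combo, cost
    return dict(zip(kinds, best)), best_cost

choice, cost = best_combination(components, penalty)
assert choice == {"filter": "search_box", "select": "click"}
assert cost == 3.0
```

Exhaustive search is only viable for small libraries; a real system would need heuristics once the number of interaction types grows.&lt;br /&gt;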
&lt;br /&gt;
====Cost-based analyses====&lt;br /&gt;
In [http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4658124 A Framework of Interaction Costs in Information Visualization], Lam performs a survey of 32 user studies and classifies several types of costs that can be used for qualitative interface evaluation. The classification scheme is based on Donald Norman&#039;s [http://en.wikipedia.org/wiki/Seven_stages_of_action Seven Stages of Action] from his book, [http://www.amazon.com/Design-Everyday-Things-Donald-Norman/dp/0385267746 The Design of Everyday Things] ([http://www.networksplus.net/tracyj/everydaythings.pdf summary]).&lt;br /&gt;
&lt;br /&gt;
# &#039;&#039;&#039;Decision costs:&#039;&#039;&#039; How does user performance decrease when there is an overwhelming amount of data to observe or too many possible actions to take?&lt;br /&gt;
# &#039;&#039;&#039;System-power costs:&#039;&#039;&#039; How does the user translate a high-level goal into a sequence of allowable actions by the interface?&lt;br /&gt;
# &#039;&#039;&#039;Multiple input mode costs:&#039;&#039;&#039; Cost of providing an action selection system that is not unified, for example, if there is one button that does two different things, depending on context. &lt;br /&gt;
# &#039;&#039;&#039;Physical-motion costs:&#039;&#039;&#039; Physical cost to the user to interact with the interface, for example, measuring mouse movement costs with Fitts&#039; Law.&lt;br /&gt;
# &#039;&#039;&#039;Visual-cluttering costs:&#039;&#039;&#039; Cost due to unwanted visual distractions, such as a mouse hovering pop-up occluding part of the screen.&lt;br /&gt;
# &#039;&#039;&#039;View- and state-change costs:&#039;&#039;&#039; When the user causes the interface to change views, the new view should be consistent with the old one: it should meet the user&#039;s expectations of where things should be in the new view, based on his or her knowledge of the old one.&lt;br /&gt;
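&lt;br /&gt;
One simple way to operationalize Lam&#039;s taxonomy for comparing designs is a weighted sum of per-category cost estimates; the weights and measurements below are hypothetical placeholders.&lt;br /&gt;

```python
# Weighted-sum combination of Lam's six cost categories. The weights and
# the per-design measurements below are hypothetical placeholders.
WEIGHTS = {
    "decision": 0.25, "system_power": 0.20, "multiple_input_mode": 0.10,
    "physical_motion": 0.15, "visual_clutter": 0.15, "view_state_change": 0.15,
}

def total_cost(measured):
    """Combine normalized (0-1) per-category costs into one score."""
    assert measured.keys() == WEIGHTS.keys()
    return sum(WEIGHTS[k] * measured[k] for k in WEIGHTS)

design_a = {"decision": 0.8, "system_power": 0.4, "multiple_input_mode": 0.2,
            "physical_motion": 0.5, "visual_clutter": 0.3, "view_state_change": 0.4}
design_b = {"decision": 0.3, "system_power": 0.3, "multiple_input_mode": 0.2,
            "physical_motion": 0.6, "visual_clutter": 0.2, "view_state_change": 0.3}
# The design with the lower combined cost wins the comparison.
assert total_cost(design_b) < total_cost(design_a)
```

How the weights should be set, and whether a linear combination is even appropriate, are themselves empirical questions for the proposed user studies.&lt;br /&gt;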
&lt;br /&gt;
==== Evaluation in practice ====&lt;br /&gt;
User interfaces are usually evaluated in practice using two methods: &#039;&#039;usability inspection methods&#039;&#039;, where a programmer or one or more experts evaluates the interface through inspection; or &#039;&#039;usability testing&#039;&#039;, where empirical tests are performed with some group of naive human users. Some usability inspection methods include [http://en.wikipedia.org/wiki/Cognitive_walkthrough Cognitive walkthrough], [http://en.wikipedia.org/wiki/Heuristic_evaluation Heuristic evaluation], and [http://en.wikipedia.org/wiki/Pluralistic_walkthrough Pluralistic walkthrough]. While these inspection methods do not use naive human subjects, the details of the methods might be useful in helping to formalize what interactions are made between a user and an interface, and what each interaction&#039;s costs are for a given design.&lt;br /&gt;
&lt;br /&gt;
[http://portal.acm.org/citation.cfm?id=108862 Jeffries et al.] provide a real-world comparison between two usability inspection methods (heuristic evaluation and cognitive walkthrough), usability testing, and an approach based on published software guidelines for interface design.&lt;br /&gt;
&lt;br /&gt;
=== Multimodal HCI ===&lt;br /&gt;
&lt;br /&gt;
Continued advancements in several signal processing techniques have given rise to a multitude of mechanisms that allow for rich, multimodal, human-computer interaction.  These include systems for head-tracking, eye- or pupil-tracking, fingertip tracking, recognition of speech, and detection of electrical impulses in the brain, among others [http://vr.kjist.ac.kr/~dhong/website/paperworks/hci2002coursePapers/April24/Sharma98.pdf Sharma-1998-TMH].  With ever-increasing computing power, integrating these systems in real-time applications has become a plausible endeavor. &lt;br /&gt;
&lt;br /&gt;
==== Head-tracking ====&lt;br /&gt;
:In virtual, stereoscopic environments, head-tracking has been exploited with great success to create an immersive effect, allowing a user to move freely and naturally while dynamically updating the user’s viewpoint in a visual environment.  Head-tracking has been employed in non-immersive settings as well, though careful consideration must be paid to account for unintended movements by the user, which may result in distracting visual effects.&lt;br /&gt;
&lt;br /&gt;
==== Pupil-tracking ====&lt;br /&gt;
:Pupil-tracking has been studied a great deal in the field of Cognitive Science ... (need some examples here from CogSci).  In the HCI community, pupil-tracking has traditionally been used for post-hoc analysis of interface designs, and is particularly prevalent in web interface design.  An alternative use of pupil-tracking is to employ it in real time as an actual mode of interaction.  This has been examined in relatively few cases (citations), where typically the eyes are used to control a cursor onscreen.  Like head-tracking, implementations of pupil-tracking must account for unintended eye movements, which are incredibly frequent.&lt;br /&gt;
&lt;br /&gt;
==== Fingertip-tracking, Gestural recognition ====&lt;br /&gt;
:Fingertip tracking and gestural recognition of the hands are the subjects of much research in the HCI community, particularly in the Virtual Environment and Augmented Reality disciplines.  Less implicit than head or pupil-tracking, gestural recognition of the hands may draw upon the wealth of precedents readily observed in natural human interactions.  As sensing technologies become less obtrusive and more robust, this method of interaction has the potential to become quite effective.&lt;br /&gt;
&lt;br /&gt;
==== Speech Recognition ====&lt;br /&gt;
:Speech recognition is becoming much better, though effective implementation of its desired effects is non-trivial in many applications.  (More on this later).&lt;br /&gt;
&lt;br /&gt;
==== Brain Activity Detection ====&lt;br /&gt;
:The use of electroencephalograms (EEGs) in HCI is quite recent, and with limited degrees of freedom, few robust interfaces have been designed around them.  Some recent advances in the pragmatic use of EEGs in HCI research can be seen in [http://portal.acm.org/citation.cfm?id=1357054.1357187&amp;amp;coll=Portal&amp;amp;dl=GUIDE&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Grimes et al.]  The possibility of using brain function to interface with a machine is cause for great excitement in the HCI community, and further advances in non-invasive techniques for accessing brain function may allow teleo-HCI to become a reality.  &lt;br /&gt;
&lt;br /&gt;
In sum, the synchronized use of these modes of interaction makes it possible to architect an HCI system capable of sensing and interpreting many of the mechanisms humans use to transmit information to one another.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
=== Workflow analysis ===&lt;br /&gt;
&lt;br /&gt;
Research in workflow and interaction analysis remains relatively sparse, though its utility would appear to be many-fold.  Tools for such analysis have the potential to facilitate data navigation, provide search mechanisms, and allow for more efficient collaborative discovery.  In addition, awareness and caching of interaction histories readily allows for explanatory presentations of results, and has the potential to provide training data for machine learning mechanisms.&lt;br /&gt;
&lt;br /&gt;
VisTrails is an optimized workflow system developed at the SCI Institute at the University of Utah, and implemented within their VTK visualization package.  The primary purpose of the system is to increase performance when working with multiple visualizations simultaneously.  This is accomplished by storing low-level workflow processes to reduce computational redundancy.  Three papers on VisTrails can be found here: [http://www.cs.brown.edu/people/trevor/Papers/Callahan-2006-MED.pdf Callahan-2006-MED], [http://www.cs.brown.edu/people/trevor/Papers/Callahan-2006-VVM.pdf Callahan-2006-VVM], [http://www.cs.brown.edu/people/trevor/Papers/Bavoil-2005-VEI.pdf Bavoil-2005-VEI]&lt;br /&gt;
&lt;br /&gt;
Jeff Heer of Stanford (formerly Berkeley) has presented work on using Graphical Interaction Histories within the Tableau InfoVis application.  Though geared toward two-dimensional visualizations with clearly defined events, his work offers some very useful design guidelines for working with interaction histories, including some evaluation from the deployment of his techniques within Tableau.  His paper from InfoVis &#039;08 can be seen here: [http://www.cs.brown.edu/people/trevor/Papers/Heer-2008-GraphicalHistories.pdf Heer-2008-GraphicalHistories]&lt;br /&gt;
&lt;br /&gt;
Trevor&#039;s preliminary work on using interaction histories in 3D, time-varying scientific visualizations was presented at Vis &#039;08: [http://www.cs.brown.edu/people/trevor/trevor_iweb/Publications_files/obrien-2008-visDemo.pdf Abstract], [http://www.cs.brown.edu/people/trevor/trevor_iweb/Publications_files/obrien-2008-visPoster.pdf Poster]&lt;br /&gt;
&lt;br /&gt;
Optimizing workflows that have been captured -- Tovi?&lt;br /&gt;
&lt;br /&gt;
Does ethnography fit in here?&lt;br /&gt;
&lt;br /&gt;
==Significance==&lt;br /&gt;
&lt;br /&gt;
==Preliminary results==&lt;br /&gt;
&lt;br /&gt;
=== Gideon ===&lt;br /&gt;
&lt;br /&gt;
====Preliminary Psychological Measures Show the Need for a Standardized Division of Cognitive Labor Across Humans and Computers.====&lt;br /&gt;
&lt;br /&gt;
A meta-analysis of a subset of the cognitive psychology and human-computer interaction literature presents evidence that interactions between humans and computers can be improved by taking into account the cognitive resources required for different types of tasks. It is well known that humans and computers excel at different types of tasks, but the field has not made an explicit effort to standardize a set of guidelines that interface designers may use when developing computer systems. For people and computers to function in an optimized, complementary fashion, we still need a systematic way of distributing tasks amongst them.&lt;br /&gt;
&lt;br /&gt;
It is often the case that what computers excel at, humans have difficulty with (e.g., arithmetic), and the opposite is also true (e.g., pattern recognition). After a preliminary search of the literature, we&#039;ve explored common tasks in today&#039;s software that neglect this performance dichotomy. Designers have not appropriately addressed these gaps, largely due to a lack of multidisciplinary research. Our meta-analysis presents data that supports our view on two different tasks: 3D shape rotation and face recognition.&lt;br /&gt;
&lt;br /&gt;
We have demonstrated in just a 30-hour study that computer-assisted 3D shape rotation is consistently preferred over human-only mental rotation. Complementarily, humans consistently outperform modern computer systems at face recognition. Sometimes these performance gaps are obvious, owing to missing technology or common sense, but that is not always the case. Our preliminary study clearly demonstrates that systems benefit from consistent, rule-based task-distribution guidelines. Furthermore, the literature is rich with other types of tasks that await systematic exploitation. Upon further study, the community will benefit from a tested and systematic approach to designing improved human-computer interfaces.&lt;br /&gt;
&lt;br /&gt;
=== Steven ===&lt;br /&gt;
**C.1.3 Terms&lt;br /&gt;
*C.2 Garage Band study&lt;br /&gt;
:This study presented a participant with the Apple Garage Band program.  A sample song had been synthesized using the program, and parts of it “broken” in various ways (e.g. pitch changed, track segment deleted, etc.).  The user was then tasked with returning the song to its original form.&lt;br /&gt;
**C.2.1 Feasibility of a creative problem-solving task&lt;br /&gt;
::This task required users to approach the song from a fairly mechanistic perspective, as segments which sounded incorrect could only be fixed via the manipulation of track variables.  The search for such broken segments required a qualitative listening trial, however, and the user thus had to correlate the song’s aural product with its technical components.  These individual problems allow each user’s intuitive problem-solving process to be assessed several times within a single GUI without the process becoming repetitive.  If the GUI adheres to proper design rules and maintains task saliency, the user ought to have minimal difficulty in performing the first task, but even if there is an initial learning curve associated with the program, the user ought to see continual improvement in the effort required to solve each task.&lt;br /&gt;
**C.2.2 Unique aspects and pertinence&lt;br /&gt;
::The task revealed a few very interesting flaws in the program’s user interface design, as well as a few intriguing insights into the user’s process for reaching the goal.  The interface was designed to appear elegant and to convey ease of use, but in doing so it seems to have neglected sign salience for the average user.  Although the program has the ability to create very high quality music, it is bundled with Mac OS X and thus ought to be fairly easily comprehended by the average user.  The study showed that the user had great difficulty in finding certain functions, however, and in some cases even tasks as simple as dragging a track segment to extend its length were achieved only after multiple attempts.&lt;br /&gt;
::The user’s behavior was both puzzling and insightful.  He replayed the entire song several times throughout the trial, both before and after each task and often during.  He also would replay the segment of the song upon which he was working several times between steps.  In attempting to fix the segment which had an altered pitch, for example, the user played through the entire song three times, then played only the concerned region repeatedly, then every segment besides the one with an altered pitch, then raised the volume of the altered pitch segment and replayed the region again; all before altering the pitch and repeating this process of multiple plays all over again.  This example reveals that the user had a great desire to prepare himself for achieving the intermediate goal of pitch adjustment not by determining the segment which needed adjustment and then proceeding, but by mentally arranging the entire task before proceeding.  That is to say, the user did not recognize his goal and proceed piecemeal by clicking the concerned segment and using a trial-and-error method to adjust the pitch.  Instead, he replayed the song until he knew the degree and direction with which he would need to change the pitch, as well as how the adjusted track ought to fit with the other tracks, all before proceeding to the editing process.&lt;br /&gt;
*C.3 Photoshop study&lt;br /&gt;
:This study presented a participant with the Adobe Photoshop program.  Five images were pre-loaded, four of various monochromatic photographs from periods before color photography and one of a modern photograph of a man in New York City.  The user was then tasked with altering the modern photograph to appear similar to one or more of the older photographs.  The goal was one final, edited version of the original photograph.&lt;br /&gt;
**C.3.1 Feasibility of a creative sandbox task&lt;br /&gt;
::This task allowed users to approach the image from many viewpoints.  Because there was no inherent goal of converting the modern image to grayscale or of incorporating certain aspects of the older images into it, the user could choose from a myriad of aspects of the older photos to which to conform the modern photo.  As a result, the task requires that the GUI have extremely high task saliency, with readily understood terms and editing processes which match those intuited by the user.  Such a task is thus less a measure of the individual user’s approach to the goal than a measure of the interface’s ability to cope with a myriad of user goals.&lt;br /&gt;
**C.3.2 Unique aspects and pertinence&lt;br /&gt;
*C.4 Lessons Learned&lt;br /&gt;
**C.4.1 Setup of study, metric methods&lt;br /&gt;
**C.4.2 Possibility for future critique-enabled user workflow&lt;br /&gt;
&lt;br /&gt;
===Trevor and Eric===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Interaction Histories&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Project idea:&#039;&#039;&#039;  Generating interaction histories within scientific visualization applications to facilitate individual and collaborative scientific discovery. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Preliminary Work&#039;&#039;&#039;&lt;br /&gt;
# A software infrastructure for caching and filtering interaction events has been developed within an existing scientific application aimed at exploring animal kinematic data captured via high-speed x-ray and CT.&lt;br /&gt;
# Methods for visualizing, editing, and sharing interaction histories have been designed and implemented.&lt;br /&gt;
# Methods for annotating and querying interactions have been implemented.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Planned Work&#039;&#039;&#039;&lt;br /&gt;
# Software model extended to two other interactive visualizations -- a flow visualization application, and a protein visualization application.&lt;br /&gt;
# User study to examine techniques for automatic history generation&lt;br /&gt;
## Automatic creation v. semi-automatic creation v. manual creation&lt;br /&gt;
# Timed-task pilot study performed to validate utility of interaction history techniques&lt;br /&gt;
## Task performance with histories v. without histories&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;General Outline of Tasks&#039;&#039;&#039;&lt;br /&gt;
# Capture user interaction history&lt;br /&gt;
# Predict user interactions, given interaction history&lt;br /&gt;
## Use a relational Markov model?&lt;br /&gt;
# Modify the UI, given predicted user interactions&lt;br /&gt;
# Evaluate this modified UI&lt;br /&gt;
## Compare performance with and without UI modifications&lt;br /&gt;
## Evaluate performance when predicted interactions are incorrect x% of the time&lt;br /&gt;
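The prediction step above could start from something far simpler than a relational Markov model. The sketch below (a hypothetical illustration; the event names are invented) trains a first-order Markov chain over a logged interaction sequence and predicts the most likely next action:&lt;br /&gt;

```python
from collections import Counter, defaultdict

def train_transitions(event_log):
    """Count first-order transitions between consecutive logged events."""
    transitions = defaultdict(Counter)
    for prev, nxt in zip(event_log, event_log[1:]):
        transitions[prev][nxt] += 1
    return transitions

def predict_next(transitions, current_event):
    """Return the most frequently observed next event, or None if unseen."""
    counts = transitions.get(current_event)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

# Invented example log of visualization interactions
log = ["rotate", "zoom", "rotate", "zoom", "annotate", "rotate", "zoom"]
model = train_transitions(log)
```

A prediction of x% accuracy from such a baseline would also give the planned "incorrect x% of the time" evaluation a concrete knob to turn.&lt;br /&gt;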
&lt;br /&gt;
===EJ===&lt;br /&gt;
&#039;&#039;&#039;Problem:&#039;&#039;&#039; There currently exists no metric for evaluating interfaces that attempts to reconcile popular and successful heuristic design guidelines with cognitive theory.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Preliminary Work&#039;&#039;&#039;&lt;br /&gt;
I. A mapping of empirically-effective heuristic design guidelines to fundamental cognitive principles.&lt;br /&gt;
:*A set of &amp;quot;common&amp;quot; design guidelines is arrived at through survey of popular and effective heuristic guidelines in use.&lt;br /&gt;
:*Proposed common design guidelines:&lt;br /&gt;
 Discoverability&lt;br /&gt;
 Flexibility&lt;br /&gt;
 Appropriate visual presentation&lt;br /&gt;
 Predictability&lt;br /&gt;
 Consistency&lt;br /&gt;
 Simplicity&lt;br /&gt;
 Memory load reduction&lt;br /&gt;
 Feedback&lt;br /&gt;
 Task match&lt;br /&gt;
 User control&lt;br /&gt;
 Efficiency&lt;br /&gt;
:*Proposed cognitive principles:&lt;br /&gt;
 Affordance&lt;br /&gt;
 Visual cue&lt;br /&gt;
 Cognitive load&lt;br /&gt;
 Chunking&lt;br /&gt;
 Activity&lt;br /&gt;
 Actability&lt;br /&gt;
II. A weighting or priority for each of these analogues.&lt;br /&gt;
:*These can begin as binary estimations based on empirical evidence.&lt;br /&gt;
:*Through experimentation, these values should converge to discrete priority values for each analogue, allowing a ranking of analogues.&lt;br /&gt;
III. A system for applying these analogues and respective priority to the evaluation of an interface.&lt;br /&gt;
:*This can occur manually or in an automated fashion.&lt;br /&gt;
:*In this step (or possibly in a separate step), analogues should be assigned a &#039;&#039;suggestion&#039;&#039; or potential correction to provide in the event of &amp;quot;failure&amp;quot; of a particular test by a given interface.&lt;br /&gt;
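As a sketch of step III, the weighted evaluation could be automated roughly as below. The analogue names, weights, and suggestions here are hypothetical placeholders, not the surveyed guidelines or experimentally converged priorities:&lt;br /&gt;

```python
# Hypothetical analogues: each pairs a design guideline with a cognitive
# principle, carries a priority weight, and a suggestion to emit on failure.
ANALOGUES = {
    "discoverability/affordance": {
        "weight": 3, "suggestion": "make actionable elements visually distinct"},
    "memory load reduction/chunking": {
        "weight": 2, "suggestion": "group related controls"},
    "feedback/visual cue": {
        "weight": 1, "suggestion": "confirm each action with a visible state change"},
}

def evaluate(results):
    """results maps analogue name -> True (pass) / False (fail).
    Returns a weighted score in [0, 1] and suggestions for each failure."""
    total = sum(a["weight"] for a in ANALOGUES.values())
    earned = sum(a["weight"] for name, a in ANALOGUES.items() if results.get(name))
    suggestions = [a["suggestion"] for name, a in ANALOGUES.items()
                   if not results.get(name)]
    return earned / total, suggestions

score, fixes = evaluate({"discoverability/affordance": True,
                         "memory load reduction/chunking": False,
                         "feedback/visual cue": True})
```

The binary pass/fail results could later be replaced by graded test outcomes without changing the scoring shape.&lt;br /&gt;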
&lt;br /&gt;
===EJ and Jon===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Photoshop Study&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
We demonstrate that the efficiency of performing subtasks in Photoshop can be predicted by a simple model of human perceptual and cognitive abilities.  In particular, we show that several of the tools commonly used to perform basic operations in Photoshop often violate the user&#039;s expectations of how those tools should work or where those tools ought to be located within the interface.  These violations can be categorized as follows: (1) unintuitive relationships between adjustments to common tool parameters and their perceptual results (e.g. adjusting the Magic Wand tool&#039;s &amp;quot;tolerance&amp;quot; setting often leads to unintended selections), (2) inefficient means of adjusting tool parameters (e.g. adjusting the &amp;quot;tolerance&amp;quot; setting by clicking, typing a number, hitting enter, observing the results, and iterating this process until the desired perceptual effect is achieved), (3) mismatches between the user&#039;s expectations for the names and locations of tools (or menu items) and their actual names and locations (e.g. resizing a picture via the &amp;quot;transform&amp;quot; menu item), (4) the cognitive load imposed when a tool is available in multiple locations, since the user must search among them and each context influences the user&#039;s expectations about the effects of using that tool, (5) [what else?], etc.&lt;br /&gt;
&lt;br /&gt;
===Jon===&lt;br /&gt;
&lt;br /&gt;
We have developed a model of human cognitive and perceptual abilities that allows us to predict human performance and thereby converge on ideal interfaces while simultaneously ruling out sub-optimal ones.  The model consists of a set of design principles combined with an extensive catalog of human perceptual and cognitive constraints on interface design.  The effectiveness of this model has been demonstrated by using the combined set of principles to assign scores to a set of GUIs designed to help a user accomplish the same overarching task, and comparing those scores with actual user performance.&lt;br /&gt;
&lt;br /&gt;
===Eric===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Project idea:&#039;&#039;&#039; Introduce a method for collecting data on user performance in cognitive, perceptual, and motor-control tasks that requires less monetary cost, allows for a greater number of samples, and measures user improvements over time.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Preliminary Work&#039;&#039;&#039;&lt;br /&gt;
Create a simple &amp;quot;brain training&amp;quot;-style game in which users must perform a simple cognitive task. As an example, perhaps the task is to manipulate shape1 into shape2, given a simple set of operators. Each of the user&#039;s actions (mouse movements, button clicks, etc.) will be documented along with the state of the game at that time. By varying the interface for different users, we can see how it affects performance in terms of cognitive and low-level tasks.&lt;br /&gt;
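A minimal version of the proposed action logging might look like the sketch below; the class and field names are invented for illustration, assuming a JSON dump for later offline analysis:&lt;br /&gt;

```python
import json
import time

class InteractionLogger:
    """Record each user action with a timestamp and the game state
    at that moment, so performance can be analyzed offline."""

    def __init__(self):
        self.events = []

    def record(self, action, state):
        # One row per action: wall-clock time, the action name,
        # and a snapshot of the game state when it occurred.
        self.events.append({"t": time.time(), "action": action, "state": state})

    def dump(self):
        # Serialize the full event stream for storage or upload.
        return json.dumps(self.events)

logger = InteractionLogger()
logger.record("click_rotate", {"shape": "L-piece", "moves_so_far": 3})
```

In an online deployment the same dump could be posted to a server at the end of each game session.&lt;br /&gt;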
&lt;br /&gt;
Preliminary tests will first measure user performance in a laboratory setting. We will run subjects on two different interfaces and compare the differences in performance. We will then perform this same test in an online setting, and will evaluate how performance differs in this case, which is more subject to user interruptions and noisy data. If performance is similar in all of these tests, we have found a method for measuring low-level tasks that allows for many samples at minimal cost. If performance differs greatly, the noise introduced by users playing in a casual setting may make the project infeasible.&lt;br /&gt;
&lt;br /&gt;
===Andrew Bragdon===&lt;br /&gt;
&lt;br /&gt;
====I.  &#039;&#039;&#039;Goals&#039;&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
A.  Develop a qualitative theory for predicting user performance with and without automatic meta-work tools for saving and resuming context.&lt;br /&gt;
&lt;br /&gt;
B.  Formative study should inspire 30-hour feasibility study&lt;br /&gt;
&lt;br /&gt;
C.  30-hour feasibility study should give a high-level indication into the relative merit of such an approach&lt;br /&gt;
&lt;br /&gt;
====II.  &#039;&#039;&#039;Formative Study&#039;&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
A.  &#039;&#039;&#039;Description of methodology/experimental procedure&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
1.  In a highly-controlled task environment, understand how an integrated model comprising current theories in perception and HCI can predict/explain task performance&lt;br /&gt;
&lt;br /&gt;
2.  Users trained in the task extensively to control for learning (learning aspect of task could be investigated in future study)&lt;br /&gt;
&lt;br /&gt;
B. &#039;&#039;&#039;Results&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
1. Users favored continuous rotation over static view or meditative pauses.&lt;br /&gt;
&lt;br /&gt;
2. Hand gesticulation seemed to be used as a method of validation in stereo views.&lt;br /&gt;
&lt;br /&gt;
3. Several verbal comments regarding occlusion suggest drawbacks to the tube view.&lt;br /&gt;
&lt;br /&gt;
4. Rotating is beneficial and perhaps necessary for this task.&lt;br /&gt;
&lt;br /&gt;
5. No references to external, offscreen information. In fact, very rarely did participants glance away from the screen.&lt;br /&gt;
&lt;br /&gt;
6. Our assessment of a typical session beginning with a new dataset:&lt;br /&gt;
&lt;br /&gt;
   1. Data loads, split second decision to begin rotating.&lt;br /&gt;
   2. Continuous rotation until proper viewpoint determined.&lt;br /&gt;
   3. Rocking back and forth interaction about the optimal viewpoint. (Thinking?)&lt;br /&gt;
   4. [In stereo views, participants were noted tilting their heads fairly consistently.] &lt;br /&gt;
&lt;br /&gt;
C.  &#039;&#039;&#039;Discussion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
1.  Some of the users&#039; strategies can be explained by current theories in perception&lt;br /&gt;
&lt;br /&gt;
D.  &#039;&#039;&#039;Conclusion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
1.  Accurately modeling low-level performance does not seem to account for higher-level workflow processes.&lt;br /&gt;
&lt;br /&gt;
====III.  &#039;&#039;&#039;30-hour study&#039;&#039;&#039;====&lt;br /&gt;
&lt;br /&gt;
A.  Now that we have investigated highly controlled task performance, explore larger workflow context&lt;br /&gt;
&lt;br /&gt;
B.  Perform a study examining how interruptions affect user performance, explore what coping strategies users employ in such an environment, and explore whether a simple tool (working sets) helps improve performance by allowing developers to save, load, and switch between window and window-state configurations&lt;br /&gt;
&lt;br /&gt;
1.  Software developers (a good example of a challenging, and creative type of information work) will receive task requests by email; notifications will appear on their screen in real time&lt;br /&gt;
&lt;br /&gt;
2.  Each email will have a different priority (e.g., low, high, emergency)&lt;br /&gt;
&lt;br /&gt;
3.  Participants will be asked to manage priorities effectively to accomplish the tasks given&lt;br /&gt;
&lt;br /&gt;
4.  Once they begin working, we will &amp;quot;interrupt&amp;quot; them at controlled times with new task requests of different priorities&lt;br /&gt;
&lt;br /&gt;
5.  We will analyze their actions for coping strategies, metawork, and working spheres to try to understand how the larger workflow context is affected by interruptions&lt;br /&gt;
&lt;br /&gt;
6.  Goal will be to run 4-6 people, &#039;&#039;&#039;would like feedback on this&#039;&#039;&#039;.  Can control for experience by recruiting experienced developers and students from the general population of Brown University.&lt;br /&gt;
&lt;br /&gt;
7.  Note on progress so far: I &amp;quot;ran&amp;quot; myself and found that switching tasks incurred a huge cost in returning to what I was doing.  I think that tools for aiding in switching between working sets will significantly benefit developers in particular, and information workers in general.  Some coping strategies I used: writing things on paper, typing notes into Notepad, and keeping previously used tabs open.  Sometimes it would get too chaotic and I would need to close all open windows and reset from my notes.  Overall, I would say that task switching/pausing/resuming in Visual Studio (test application) is not well supported.&lt;br /&gt;
&lt;br /&gt;
===Adam Darlow===&lt;br /&gt;
&lt;br /&gt;
I.  &#039;&#039;&#039;Goals&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A.  Evaluate the interactions between various cognitive principles and design principles. There are three basic relations:&lt;br /&gt;
&lt;br /&gt;
   1. The cognitive principle is the motivation behind the design principle.&lt;br /&gt;
   2. The cognitive principle suggests a method for achieving the design principle.&lt;br /&gt;
   3. The cognitive principle and design principle are unrelated. (Hopefully few)&lt;br /&gt;
&lt;br /&gt;
B.  Make design rules which are suggested by the combination of a cognitive principle and a design principle.&lt;br /&gt;
&lt;br /&gt;
II.  &#039;&#039;&#039;Description of methodology/experimental procedure&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A.  Collecting commonly accepted design principles from the literature on interface design and well established cognitive principles from the cognitive psychology literature and constructing a matrix which crosses them. Most squares in the matrix should suggest specific design rules.&lt;br /&gt;
&lt;br /&gt;
III. &#039;&#039;&#039;Results&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As a preliminary effort, I have chosen the following two cognitive principles and three design principles:&lt;br /&gt;
&lt;br /&gt;
Cognitive principles&lt;br /&gt;
&lt;br /&gt;
#C1. People derive complex associations and causal interpretations from temporal correlations and patterns.&lt;br /&gt;
&lt;br /&gt;
#C2. People have limited working memory (7 ± 2 items), but each slot can hold a chunk of related information.&lt;br /&gt;
&lt;br /&gt;
Design Principles (from Maeda (TBD link))&lt;br /&gt;
&lt;br /&gt;
#D1. Achieve simplicity through thoughtful reduction.&lt;br /&gt;
&lt;br /&gt;
#D2. Organization makes a system of many appear fewer.&lt;br /&gt;
&lt;br /&gt;
#D3. Knowledge makes everything simpler.&lt;br /&gt;
&lt;br /&gt;
The resulting matrix entries are as follows:&lt;br /&gt;
&lt;br /&gt;
#C1 + D1. Remove extraneous correlations. Things shouldn&#039;t consistently and apparently change or happen in conjunction unless they are actually related and their relation is important to the user.&lt;br /&gt;
&lt;br /&gt;
#C1 + D2. Use temporal and spatial contiguity to help users organize and group multiple events meaningfully.&lt;br /&gt;
&lt;br /&gt;
#C1 + D3. Use temporal correlations to effectively teach the important causal relations inherent in the interface.&lt;br /&gt;
&lt;br /&gt;
#C2 + D1. Reduce the interface such that a user has to be aware of no more than 5 items simultaneously.&lt;br /&gt;
&lt;br /&gt;
#C2 + D2. Groups of semantically related items can for many purposes be treated as a single item.&lt;br /&gt;
&lt;br /&gt;
#C2 + D3. Teach users how things are related so that they can be chunked.&lt;br /&gt;
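The matrix lends itself to a simple lookup structure. The sketch below (a hypothetical encoding, with the rules above paraphrased) retrieves design rules along either axis:&lt;br /&gt;

```python
# Keys pair a cognitive principle (C1, C2) with a design principle (D1-D3);
# values paraphrase the matrix entries drafted above.
MATRIX = {
    ("C1", "D1"): "Remove extraneous correlations.",
    ("C1", "D2"): "Use temporal and spatial contiguity to group events.",
    ("C1", "D3"): "Use temporal correlations to teach causal relations.",
    ("C2", "D1"): "Expose no more than 5 items simultaneously.",
    ("C2", "D2"): "Treat groups of semantically related items as one item.",
    ("C2", "D3"): "Teach relations so items can be chunked.",
}

def rules_for(cognitive=None, design=None):
    """Look up design rules by either axis of the matrix (or both)."""
    return [rule for (c, d), rule in MATRIX.items()
            if (cognitive is None or c == cognitive)
            and (design is None or d == design)]
```

Expanding the matrix then means only adding entries, and the same lookup can drive an automated checklist per principle.&lt;br /&gt;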
&lt;br /&gt;
&lt;br /&gt;
IV.  &#039;&#039;&#039;Conclusion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
V.  &#039;&#039;&#039;Future Directions&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To expand the matrix and evaluate the resulting design rules.&lt;br /&gt;
&lt;br /&gt;
==Research plan==&lt;br /&gt;
&lt;br /&gt;
We can speculate here about the details of a longer-term research plan, but it may not be necessary to actually flesh out this part of the &amp;quot;proposal&amp;quot;.  There does need to be enough to define what the overall proposed work is, but that may show up in earlier sections.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Adam Darlow</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature_to_read_for_class_8&amp;diff=2365</id>
		<title>CS295J/Literature to read for class 8</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature_to_read_for_class_8&amp;diff=2365"/>
		<updated>2009-03-06T22:13:19Z</updated>

		<summary type="html">&lt;p&gt;Adam Darlow: Added a survey paper on simulating users&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Ritter, F. E., &amp;amp; Young, R. M. (2001). [http://acs.ist.psu.edu/papers/ijhcs-em-si/ritterY01.pdf Embodied models as simulated users: Introduction to this special issue on using cognitive models to improve interface design]. International Journal of Human-Computer Studies, 55, 1-14.&lt;br /&gt;
&lt;br /&gt;
This will give us an idea of where the field is, or was 8 years ago. (Adam)&lt;/div&gt;</summary>
		<author><name>Adam Darlow</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal&amp;diff=2300</id>
		<title>CS295J/Research proposal</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal&amp;diff=2300"/>
		<updated>2009-03-05T15:34:20Z</updated>

		<summary type="html">&lt;p&gt;Adam Darlow: /* Adam Darlow */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Project summary==&lt;br /&gt;
&lt;br /&gt;
We propose to integrate theories of cognition, models of perception, rules of design, and concepts from the discipline of human-computer interaction to develop a predictive model of user performance in interacting with computer software for visual and analytical work.  Our proposed model comprises a set of computational elements representing components of human cognition, memory, or perception.  The collective abilities and limitations of these elements can be used to provide feedback on the likely efficacy of user interaction techniques.&lt;br /&gt;
&lt;br /&gt;
The choice of human computational elements will be guided by several models or theories of cognition and perception, including Gestalt, Distributed, Gibson, ???(where pathway, when pathway)???, ???working-memory???, ..., and ???.  The list of elements will be extensible.  The framework coupling them will allow for experimental predictions of utility of user interfaces that can be verified against human performance.&lt;br /&gt;
&lt;br /&gt;
Coupling the system with users will involve a data-capture mechanism for collecting the communications between a user interface and a user.  These will be primarily event-based, and will include a new low-cost camera-based eye-tracking system.&lt;br /&gt;
&lt;br /&gt;
During early development, existing interfaces will be evaluated manually to characterize their &lt;br /&gt;
&lt;br /&gt;
(we need some way to specify interaction techniques...)&lt;br /&gt;
&lt;br /&gt;
==Specific Contributions==&lt;br /&gt;
# A model of human cognitive and perceptual abilities when using computers&lt;br /&gt;
## Impact: Such a model would allow us to predict human performance with interfaces.  Validation of the model would allow us to more rapidly converge on ideal interfaces while simultaneously ruling out sub-optimal ones.&lt;br /&gt;
## 3-Week Feasibility Study: (1) Distill (perhaps from review articles) the major findings in all of the relevant subfields into a set of principles that will be grouped to form appropriate model components.  Some of the most relevant subfields include: memory, attention, visual perception, psychoacoustics, task switching, categorization, event perception, haptics (ANY OTHERS?).  (2)  Use these to devise a simple predictive model of human interaction.  This model could simply consist of the set of design principles but should allow some form of quantitative scoring/evaluation of interfaces.  (3) Develop a small set (5-10) of candidate GUIs that are designed to help the user accomplish the same overarching task(s) (e.g. importing, analyzing, and flexibly graphing data in Matlab).   (4)  Test/validate the model by comparing predicted performance to actual performance with the GUIs.&lt;br /&gt;
## Risks/Costs: There are always potential risks to any human subjects that might participate in testing the model which might necessitate IRB approval.  Costs might include: (1) paying for any necessary hardware and software for developing, displaying, and testing the GUIs, and (2) paying for human subjects.&lt;br /&gt;
# Something about design rules collected and merged&lt;br /&gt;
## Something comparing these collected rules to a baseline (establishing their value)&lt;br /&gt;
## ???&lt;br /&gt;
# A predictive, fully-integrated model of user workflow which encompasses low-level tasks, working spheres, communication chains, interruptions and multi-tasking. (OWNER: Andrew Bragdon)&lt;br /&gt;
##Traditionally, software design and usability testing are focused on low-level task performance.  However, prior work (Gonzales, et al.) provides strong empirical evidence that users also work at a higher, &#039;&#039;working sphere&#039;&#039; level.  Su, et al., develop a predictive model of task switching based on communication chains.  Our model will specifically identify and predict key aspects of higher level information work behaviors, such as task switching.  We will conduct initial exploratory studies to test specific instances of this high-level hypothesis.  We will then use the refined model to identify specific predictions for the outcome of a formal, ecologically valid study involving a complex, non-trivial application.&lt;br /&gt;
## Impact: To truly design computing systems which are designed around the way users work, we must understand &#039;&#039;how&#039;&#039; users work.  To do this, we need to establish a predictive model of user workflow that encompasses multiple levels of workflow: individual task items, larger goal-oriented working spheres, multi-tasking behavior, and communication chains.  Current information work systems are almost always designed around the lowest level of workflow, the individual task, and do not take into account the larger workflow context.  Fundamentally, a predictive model would allow us to design computing systems which significantly increase worker productivity in the United States and around the world, by designing these systems around the way people work.&lt;br /&gt;
## Risk and Costs: Risk will play an important factor in this research, and thus a core goal of our research agenda will be to manage this risk.  The most effective way to do this will be to compartmentalize the risk by conducting empirical investigations - which will form the basis for the model - into the separate areas: low-level tasks, working spheres, communication chains, interruptions and multi-tasking in parallel.  While one experiment may become bogged down in details, the others will be able to advance sufficiently to contribute to a strong core model, even if one or two facets encounter setbacks during the course of the research agenda.  The primary cost drivers will be the preliminary empirical evaluations, the final system implementation, and the final experiments which will be designed to support the original hypothesis.  The cost will span student support, both Ph.D. and Master&#039;s students, as well as full-time research staff.  Projected cost: $1.5 million over three years.&lt;br /&gt;
## 3-week Feasibility Study:  To ascertain the feasibility of this project we will conduct an initial pilot test to investigate the core idea: a predictive model of user workflow.  We will spend 1 week studying the real workflow of several people through job shadowing.  We will then create two systems designed to help a user accomplish some simple information work task.  One system will be designed to take larger workflow into account (experimental group), while one will not (control group).  In a synthetic environment, participants will perform a controlled series of tasks while receiving interruptions at controlled times.  If the two groups perform roughly the same, then we will need to reassess this avenue of research.  However, if the two groups perform differently then our pilot test will have lent support to our approach and core hypothesis.&lt;br /&gt;
# A low-overhead mechanism for capturing event-based interactions between a user and a computer, including web-cam based eye tracking.  (should we buy or find out about borrowing use of pupil tracker?)  &#039;&#039;Should we include other methods of interaction here?  Audio recognition seems to be the lowest cost.  It would seem that a system that took into account head-tracking, audio, and fingertip or some other high-DOF input would provide a very strong foundation for a multi-modal HCI system.  It may be more interesting to create a software toolkit that allows for synchronized usage of those inputs than a low-cost hardware setup for pupil-tracking.  I agree pupil-tracking is useful, but developing something in-house may not be the strongest contribution we can make with our time.&#039;&#039; (Trevor)&lt;br /&gt;
## Accuracy study of eye tracking (2 cameras?  double as an input device?)&lt;br /&gt;
## ???&lt;br /&gt;
# A classification of standard design guidelines into overarching principles with quantifiable cognitive analogues &#039;&#039;&#039;OWNER&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 16:16, 6 February 2009 (UTC)&lt;br /&gt;
## A ranking/weighting for these principles based on their cognitive relevance&lt;br /&gt;
## An interface evaluation mechanism for the application of these principles and their associated guidelines.&lt;br /&gt;
# A systematic method to determine task distribution based on psychological principles. (Owner - [http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon])&lt;br /&gt;
## The theory of distributed cognition (Clark-1998-TEM) is a well-suited basis for constructing a human-computer interaction framework (Hollan-2000-DCF). However, a systematic method of determining which tasks should be distributed to which agent in a distributed environment has yet to be clearly defined. I propose that the cognitive psychology literature is rich with empirical evidence on task performance variables. Dual-process theories (Evans-2003-ITM) generally agree on the nature of the high-level cognitive operations used in human reasoning. It is also argued that low-level processing, which is based on perceptual similarity, contiguity, and association, is carried out by a set of autonomous subsystems. In general, humans excel at these low-level functions relative to traditional von Neumann architectures (and even current neural networks); meanwhile, cognitive science has recently focused less on high-level reasoning in humans, as the evidence shows that we rarely engage in such demanding cognitive operations. Therefore, we believe that an optimal configuration for distributed cognition amongst people and computers will take advantage of these specialties and deficiencies, and distribute tasks accordingly. Our research project will contribute a set of guidelines, or heuristics, to allow engineers to effectively determine which tasks should be assigned to which entities.&lt;br /&gt;
## The impact of this contribution will be a systematic approach to interface design. Engineers will finally have a well-documented standard for determining which operations humans should be responsible for and which should be off-loaded to the computer.&lt;br /&gt;
## The risk for this contribution is minimal. The associated cost will be an extensive search of the cognitive psychology literature from roughly the past 25 years. Research on memory, reasoning, perception, and more will be required to conduct a complete and accurate assessment.&lt;br /&gt;
## Within roughly 30 hours, we can perform a hand analysis of empirical results in psychology in order to develop a set of approximately 10 guidelines or rules.&lt;br /&gt;
# A method for collecting data on user performance in cognitive, perceptual, and motor-control tasks that requires less monetary cost, allows for a greater number of samples, and measures user improvements over time. (Owner - Eric)&lt;br /&gt;
##In order to reach a model of human cognitive and perceptual abilities when using computers, experimental analysis of human performance on these tasks will likely be necessary. User studies can aid in this analysis, but they are expensive, time-consuming, and subject to user fatigue. Alternately, we propose a web-based method for evaluating user performance in perceptual, motor-control, and cognitive tasks. The idea is to take a task that would normally be measured through user studies in a laboratory and map it into a simple online game. Somewhat similar work has been done by Popovic et al. at the University of Washington, who mapped the task of folding proteins into an online game ( http://www.economist.com/displaystory.cfm?story_id=11326188 ) with much success. We will analyze the value of this method by comparing it to similar tasks performed in laboratory experiments, both in terms of user performance and deployment costs.&lt;br /&gt;
##By converting the task into a simple game, we hope to reduce the problem of user fatigue. Additionally, if the game is played on a social networking site, we are able to track basic information of users who perform the tasks and, more importantly, can identify returning users. Thus, we can track not only a user&#039;s performance, but also how they improve at a given task over time.&lt;br /&gt;
##There are no clear risks involved with this study. Potential costs would be those required for development of each experiment and for web hosting.&lt;br /&gt;
##As a prototype, we can select one particular task to map into a simple online game. To check for feasibility, we need to ensure that the results from our proposed method are similar to those found in laboratory settings. There are two possible effects we must test for: bias in results due to the mapping into a game, and bias in results due to the sample of subjects or any change caused by the web-based component. A simple test would be to first test users in a lab using traditional methods as a baseline, and then compare performance on the game-based mapping in the same laboratory setting. This will determine whether the game appropriately measures the given task. Next, an online version of the game can be introduced, and performance can be compared with the laboratory settings. If performance is similar across all of these tests, we have found a method for measuring low-level tasks that allows for many samples at minimal cost.&lt;br /&gt;
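The lab-versus-game comparison described above could be run as a simple two-sample test; a minimal sketch in Python, with invented completion times and a hand-rolled Welch t statistic (all names and numbers are illustrative, not part of the proposal):&lt;br /&gt;

```python
# Hedged sketch: comparing lab-based and game-based measurements of the
# same low-level task. The data below are invented for illustration.
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch t statistic for two independent samples (unequal variances)."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)
    return (mean(sample_a) - mean(sample_b)) / ((va / na + vb / nb) ** 0.5)

# Illustrative completion times (seconds) for the same pointing task.
lab_times = [1.10, 1.25, 0.98, 1.30, 1.15, 1.22]
game_times = [1.12, 1.28, 1.05, 1.27, 1.18, 1.20]

t = welch_t(lab_times, game_times)
# A |t| near zero suggests the game-based mapping measures the task
# comparably to the laboratory baseline; a large |t| signals bias.
print(round(t, 3))
```

A full analysis would of course use a proper significance test and equivalence bounds; this only illustrates the shape of the comparison.&lt;br /&gt;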
&lt;br /&gt;
==Specific Aims==&lt;br /&gt;
&lt;br /&gt;
# build X&lt;br /&gt;
# build Y&lt;br /&gt;
# run experiment Z&lt;br /&gt;
# compare X with existing approach Q&lt;br /&gt;
# Develop a scoring system for interfaces to evaluate the degree to which all changes and causal relations are tracked by motion cues that are contiguous in time and/or space.&lt;br /&gt;
# Accurately assess computational and psychological costs for tasks and subtasks.  To do this, we will develop two non-trivial prototype systems: a conventional control system and a novel system based on our model of task switching.  We will use our model to make specific predictions about relative task performance and user affect responses, and then test these predictions empirically in a formal study.&lt;br /&gt;
# Develop a model that accounts for qualitatively different psychological tasks&lt;br /&gt;
# Test the model on real-world data&lt;br /&gt;
# Build classification for design guidelines based on cognitive analogues [[User:E J Kalafarski|E J Kalafarski]] 16:18, 6 February 2009 (UTC)&lt;br /&gt;
# Parse and classify existing guidelines by this new metric [[User:E J Kalafarski|E J Kalafarski]] 16:18, 6 February 2009 (UTC)&lt;br /&gt;
# Build an automated evaluation rubric for applying these classifications to interfaces in development [[User:E J Kalafarski|E J Kalafarski]] 16:18, 6 February 2009 (UTC)&lt;br /&gt;
# Long-term goal: build a mixed-initiative interface generation system that takes some basic GUI requirements from the designer as input and attempts to maximize its score on the &amp;quot;evaluation rubric&amp;quot; within these constraints. (Eric)&lt;br /&gt;
# Develop model component to predict performance following interruptions or changes between work spheres.&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
===Models of cognition===&lt;br /&gt;
There are several models of cognition, ranging from fundamental aspects of neurological processing to extremely high-level psychological analysis.  Three main theories seem to have become recognized as the most helpful in conceptualizing the actual process of HCI.  These models all agree that one cannot accurately analyze HCI by viewing the user without context, but the extent and nature of this context varies greatly.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[http://en.wikipedia.org/wiki/Activity_Theory Activity Theory]&#039;&#039;&#039;, developed in the early 20th century by Russian psychologists S.L. Rubinstein and A.N. Leontiev, posits the existence of four discrete aspects of human-computer interaction.  The &amp;quot;Subject&amp;quot; is the human interacting with the item, who possesses an &amp;quot;Object&amp;quot; (e.g. a goal) which they hope to accomplish by using a tool.  The Subject conceptualizes the realization of the Object via an &amp;quot;Action&amp;quot;, which may be as simple or complex as is necessary.  The Action is made up of one or more &amp;quot;Operations&amp;quot;, the most fundamental level of interaction including typing, clicking, etc.&lt;br /&gt;
&lt;br /&gt;
A key concept in Activity Theory is that of the artifact, which mediates all interaction.  The computer itself need not be the only artifact in HCI - others include all sorts of signs, algorithmic methods, instruments, etc.&lt;br /&gt;
&lt;br /&gt;
A longer synopsis of Activity Theory may be found at [http://mcs.open.ac.uk/yr258/act_theory/ this website].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The [http://en.wikipedia.org/wiki/Situated_cognition Situated Action Model]&#039;&#039;&#039; focuses on emergent behavior, emphasizing the subjective aspect of human-computer interaction and the therefore-necessary allowance for a wide variety of users.  This model proposes the least amount of contextual interaction, and seems to maintain that the interactive experience is determined entirely by the user&#039;s ability to use the system in question.  While limiting, this concept of usability can be very informative when designing for less tech-savvy users.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[http://en.wikipedia.org/wiki/Distributed_cognition Distributed Cognition]&#039;&#039;&#039; proposes that the computer (or, as in Activity Theory, any other artifact) can be used and ought to be thought of as an extension of the mental processing of the human.  This is not to say that the two are of equal or even comparable cognitive abilities, but that each has unique strengths and that recognition of and planning around these relative advantages can lead to increased efficiency and effectiveness.  The rotation of blocks in Tetris serves as a perfect example of this sort of cognitive symbiosis.&lt;br /&gt;
&lt;br /&gt;
(Steven)&lt;br /&gt;
&lt;br /&gt;
====Workflow Context====&lt;br /&gt;
There are at least two levels at which users work ([http://portal.acm.org/citation.cfm?id=985692.985707&amp;amp;coll=portal&amp;amp;dl=ACM&amp;amp;CFID=20781736&amp;amp;CFTOKEN=83176621 Gonzales, et al., 2004]).  Users accomplish individual low-level tasks which are part of larger &#039;&#039;working spheres&#039;&#039;; for example, an office worker might send several emails, create several Post-It (TM) note reminders, and then edit a Word document, each of these smaller tasks being part of a single larger working sphere of &amp;quot;adding a new section to the website.&amp;quot;  Thus, it is important to understand this larger workflow context, which often involves extensive multi-tasking as well as switching between a variety of computing devices and traditional tools, such as notebooks.  This study found that the information workers surveyed typically switch individual tasks every 2 minutes and maintain many simultaneous working spheres, switching between them on average every 12 minutes.  This frenzied pace of switching tasks and working spheres suggests that users will not use a single application or device for a long period of time, and that affordances to support this characteristic pattern of information work are important.&lt;br /&gt;
&lt;br /&gt;
Czerwinski, et al. conducted a [http://portal.acm.org/citation.cfm?id=985692.985715&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 diary study] of task switching and interruptions of users in 2004.  This study showed that task complexity, task duration, length of absence, and number of interruptions all affected the users&#039; own perceived difficulty of switching tasks.  [http://delivery.acm.org/10.1145/1250000/1240730/p677-iqbal.pdf?key1=1240730&amp;amp;key2=4525483321&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Iqbal, et al.] studied task disruption and recovery in a field study, and found that users often visited several applications as a result of an alert, such as a new email notification, and that 27% of task suspensions resulted in 2 hours or more of disruption.  Users in the study said that losing context was a significant problem in switching tasks, and led in part to the length of some of these disruptions.  This work hints at the importance of providing cues to users to maintain and regain lost context during task switching.&lt;br /&gt;
&lt;br /&gt;
(Andrew Bragdon - OWNER)&lt;br /&gt;
&lt;br /&gt;
The problem of task switching is exacerbated when some tasks are more routine than others. When a person intends to switch from a routine task to a novel task at some later time, they often forget the context of the original task ([http://web.ebscohost.com/ehost/pdf?vid=1&amp;amp;hid=7&amp;amp;sid=54ec1e22-3df2-462c-b484-7a7c052c2173%40SRCSM1 Aarts et al., 1999]). Also, if both tasks are done in the same context, with the same tools or with the same materials, people have difficulty inhibiting the routine task while doing the novel task (Stroop, 1935). This inhibition also makes switching back to the routine task slower (Allport et al., 1994). All of these problems can be alleviated to some degree by salient cues in the environment. The intention to switch is easily retrieved when there is a salient reminder at the appropriate time (McDaniel and Einstein, 1993), and associating different environmental cues with different goals can automatically trigger appropriate behavior ([http://web.ebscohost.com/ehost/pdf?vid=1&amp;amp;hid=4&amp;amp;sid=68f77032-f093-4139-a833-760d2217b513%40sessionmgr9 Aarts and Dijksterhuis, 2003]).&lt;br /&gt;
&lt;br /&gt;
(Adam)&lt;br /&gt;
&lt;br /&gt;
(Edited by Andrew)&lt;br /&gt;
&lt;br /&gt;
====Quantitative Models: Fitts&#039;s Law, Steering Law====&lt;br /&gt;
Fitts&#039;s law and the steering law are examples of quantitative models that predict user performance when using certain types of user interfaces.  In addition to these classic models, [http://delivery.acm.org/10.1145/1250000/1240850/p1495-cao.pdf?key1=1240850&amp;amp;key2=9904483321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Cao and Zhai] developed and validated a quantitative model of human performance of pen stroke gestures in 2007.  [http://tlaloc.sfsu.edu/~lank/research/appearing/FSS604LankE.pdf Lank and Saund] used a curvature-based model to predict the speed of a pen as it moved across a surface, to help disambiguate target selection intent.&lt;br /&gt;
&lt;br /&gt;
In addition, quantitative models are often tested against new interfaces to verify that they hold.  For example, [http://portal.acm.org/citation.cfm?id=1054972.1055012&amp;amp;coll=GUIDE&amp;amp;dl=ACM&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Grossman et al.] verified that their Bubble Cursor approach to enlarging effective pointing target sizes obeyed Fitts&#039;s law for actual distance traveled.&lt;br /&gt;
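For concreteness, a minimal sketch of the kind of prediction Fitts&#039;s law makes (the constants a and b are illustrative; in practice they are fit per device and user from calibration data):&lt;br /&gt;

```python
from math import log2

def fitts_mt(distance, width, a=0.1, b=0.15):
    """Predicted movement time (s) under the Shannon formulation of
    Fitts law: MT = a + b * log2(D/W + 1). The constants a and b are
    device/user parameters; the defaults here are illustrative only."""
    return a + b * log2(distance / width + 1)

# Doubling target width lowers the index of difficulty and the predicted time.
print(fitts_mt(512, 16))   # small, distant target
print(fitts_mt(512, 32))   # same distance, twice as wide
```

This is the kind of closed-form prediction that approaches such as the Bubble Cursor evaluation check new interfaces against.&lt;br /&gt;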
&lt;br /&gt;
In addition to formal models, machine learning techniques have been applied to modeling user interaction as well.  For example, [http://delivery.acm.org/10.1145/1250000/1240669/p271-hurst.pdf?key1=1240669&amp;amp;key2=6465483321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Hurst et al.] used a learning classifier, trained on low-level mouse and keyboard usage patterns, to identify novice and expert users dynamically with accuracies as high as 91%.  This classifier was then used to provide different information and feedback to the user as appropriate.&lt;br /&gt;
&lt;br /&gt;
(Andrew Bragdon - OWNER)&lt;br /&gt;
&lt;br /&gt;
====Distributed cognition====&lt;br /&gt;
&lt;br /&gt;
Distributed cognition is a theory in which thought takes place both inside and outside the brain. Humans have a great ability to use tools and to incorporate their environments into their sphere of thinking. Clark puts it nicely in [http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature#Cognition Clark-1994-TEM].&lt;br /&gt;
&lt;br /&gt;
Therefore, optimal HCI designs will treat the brain, person, interface, and computer as a single holistic cognitive system.&lt;br /&gt;
&lt;br /&gt;
In practical terms, the issue for our proposal is how best to maximize utility by distributing the cognitive tasks at hand to different components of the whole system. Simply put: which tasks can we off-load to the computer to do for us, faster and more accurately? Which tasks should we purposely leave the computer out of?&lt;br /&gt;
&lt;br /&gt;
Typically, the tasks most eligible to be off-loaded are those we perform poorly on. Conveniently, the tasks computers perform poorly on are ones we excel at. Here are a few examples:&lt;br /&gt;
&lt;br /&gt;
*Computers&#039; areas of expertise: number crunching, memory, logical reasoning, precision&lt;br /&gt;
*Humans&#039; areas of expertise: associative thought, real-world knowledge, social behavior, alogical reasoning, tolerance for imprecision&lt;br /&gt;
&lt;br /&gt;
Using this division of cognitive labor allows us to optimize task work flows. Ignoring it puts strain and bottlenecks at either the computer, the human, or the interface. The field of HCI is full of examples of failures which can be attributed to not recognizing which tasks should be handled by which sub-system.&lt;br /&gt;
&lt;br /&gt;
As a heuristic to divide thinking, one might turn to the dual-process theory literature [http://vrl.cs.brown.edu/wiki/CS295J/Literature Evans-2003-ITM]. What is most often called System 1 covers what humans are good at, while System 2 tasks are what computers do well.&lt;br /&gt;
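The System 1 / System 2 division of labor sketched above could be prototyped as a trivial allocation rule; the task names and labels below are invented for illustration:&lt;br /&gt;

```python
# Hedged sketch: allocate tasks between human and computer using the
# dual-process heuristic described above. All task labels are invented.
TASK_TYPES = {
    "recognize a face in a photo": "system1",     # associative, perceptual
    "sum a column of 10,000 numbers": "system2",  # serial, rule-based
    "judge the tone of an email": "system1",
    "check a proof step by step": "system2",
}

def allocate(task):
    """Assign System 1 (fast, associative) tasks to the human and
    System 2 (slow, deliberate) tasks to the computer."""
    return "human" if TASK_TYPES[task] == "system1" else "computer"

for task in TASK_TYPES:
    print(f"{task!r} -> {allocate(task)}")
```

The real contribution would replace this hand-labeled table with guidelines grounded in the empirical literature.&lt;br /&gt;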
&lt;br /&gt;
(Owner - [http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon])&lt;br /&gt;
&lt;br /&gt;
====Information Processing Approach to Cognition====&lt;br /&gt;
The dominant approach in Cognitive Science is called information processing. It sees cognition as a system that takes in information from the environment, forms mental representations and manipulates those representations in order to create the information needed to achieve its goals. This approach includes three levels of analysis originally proposed by Marr (will cite):&lt;br /&gt;
# Computational - What are the goals of a process or representation? What are the inputs and desired outputs required of a system which performs a task? Models at this level of analysis are often considered normative models, because any agent wanting to perform the task should conform to them. Rational agent models of decision making, for example, belong at this level of analysis.&lt;br /&gt;
# Process/Algorithmic - What are the processes or algorithms involved in how humans perform the task? This is the most common level of analysis as it focuses on the mental representations, manipulations and computational faculties involved in actual human processing. Algorithmic descriptions of human capabilities and limitations, such as working memory size, belong at this level of analysis.&lt;br /&gt;
# Implementation - How are the processes and algorithms realized in actual biological computation? Dopamine theories of reward learning, for example, belong at this level of analysis.&lt;br /&gt;
The information processing approach is often contrasted with the distributed cognition approach. Its advantage is that it finds general mechanisms that are valid across many different contexts and situations. Its disadvantage is that it can have difficulty explaining the rich interactions between people and their environment.&lt;br /&gt;
&lt;br /&gt;
In considering users as information processors, interfaces should take into account people&#039;s computational limitations on short term memory, learning and vision as well as the algorithms and representations that they use to process information and pursue goals.&lt;br /&gt;
&lt;br /&gt;
(Adam)&lt;br /&gt;
&lt;br /&gt;
===Models of perception===&lt;br /&gt;
&lt;br /&gt;
====Gibsonianism====&lt;br /&gt;
(relevant stuff about Gibson&#039;s theory)&lt;br /&gt;
We will build on top of this theory/model by ... .&lt;br /&gt;
&lt;br /&gt;
Gibsonianism, named after James J. Gibson and more commonly referred to as ecological psychology, is an epistemological direct realist theory of perception and action.  In contrast to information processing and cognitivist approaches, which generally assume that perception is a constructive process operating on impoverished sense-data inputs (e.g. photoreceptor activity) to generate representations of the world with added structure and meaning (e.g. a mental or neural &amp;quot;picture&amp;quot; of a chair), ecological psychology treats perception as direct, non-inferential, unmediated (by retinal images or mental representations) epistemic contact with behaviorally-relevant features of the environment (Warren, 2005).  The possibilities for action that the environment offers a given animal are taken to be specified by information available in structured energy distributions (e.g. the optic array of light arriving at the eyes), and these possibilities for action constitute the affordances of the environment with respect to that animal (Gibson, 1986).&lt;br /&gt;
&lt;br /&gt;
Gibson&#039;s notion of affordance has many implications for our enterprise; however, it is worth noting that the original definition of affordance emphasizes possibilities for action and not their relative likelihoods.  For example, for most humans, laptop computer screens afford puncturing with Swiss Army knives; however, it is unlikely that a user will attempt to retrieve an electronic coupon by carving it out of their monitor.  This example illustrates that interfaces often afford a class of actions that are undesirable from the perspective of both the designer and the user.&lt;br /&gt;
&lt;br /&gt;
(Jon)&lt;br /&gt;
&lt;br /&gt;
===Design guidelines===&lt;br /&gt;
&lt;br /&gt;
A multitude of rule sets exist for the design of not only interfaces, but also architecture, city planning, and software development.  They range in scale from one primary rule to as many as Christopher Alexander&#039;s 253 rules for urban environments,&amp;lt;ref&amp;gt;http://hci.rwth-aachen.de/materials/publications/borchers2000a.pdf&amp;lt;/ref&amp;gt; which he introduced with the concept of design patterns in the 1970s.  Study has likewise been conducted on the use of these rules:&amp;lt;ref&amp;gt;http://stl.cs.queensu.ca/~graham/cisc836/lectures/readings/tetzlaff-guidelines.pdf&amp;lt;/ref&amp;gt; guidelines are often only partially understood, indistinct to the developer, and &amp;quot;fraught&amp;quot; with potential usability problems in real-world situations.&lt;br /&gt;
&lt;br /&gt;
====Application to AUE====&lt;br /&gt;
&lt;br /&gt;
And yet, the vast majority of guideline sets, including the most popular rulesets, have been arrived at heuristically.  The most successful, such as Raskin&#039;s and Shneiderman&#039;s, have been forged from years of observation rather than empirical study and experimentation.  The problem is similar to the problem of circular logic faced by automated usability evaluations: an automated system is limited in the suggestions it can offer to a set of preprogrammed guidelines which have often not been subjected to rigorous experimentation.&amp;lt;ref&amp;gt;http://www.eecs.berkeley.edu/Pubs/TechRpts/2000/CSD-00-1105.pdf&amp;lt;/ref&amp;gt;  In the vast majority of existing studies, emphasis has traditionally been placed on either the development of guidelines or the application of existing guidelines to automated evaluation.  A mutually-reinforcing development of both simultaneously has not been attempted.&lt;br /&gt;
&lt;br /&gt;
Overlap between rulesets is inevitable and unavoidable.  For our purposes of evaluating existing rulesets efficiently, without extracting and analyzing each rule individually, it may be desirable to identify the overarching &#039;&#039;principles&#039;&#039; or &#039;&#039;philosophy&#039;&#039; (max. 2 or 3) of a given ruleset and determine their quantitative relevance to problems of cognition.&lt;br /&gt;
&lt;br /&gt;
====Popular and seminal examples====&lt;br /&gt;
Shneiderman&#039;s [http://faculty.washington.edu/jtenenbg/courses/360/f04/sessions/schneidermanGoldenRules.html Eight Golden Rules] date to 1987 and are arguably the most-cited.  They are heuristic, but can be somewhat classified by cognitive objective: at least two rules apply primarily to &#039;&#039;repeated use&#039;&#039;, versus &#039;&#039;discoverability&#039;&#039;.  Up to five of Shneiderman&#039;s rules emphasize &#039;&#039;predictability&#039;&#039; in the outcomes of operations and &#039;&#039;increased feedback and control&#039;&#039; in the agency of the user.  His final rule, paradoxically, removes control from the user by suggesting a reduced short-term memory load, which we can arguably classify as &#039;&#039;simplicity&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Raskin&#039;s [http://www.mprove.de/script/02/raskin/designrules.html Design Rules] are classified into five principles by the author, augmented by definitions and supporting rules.  While one principle is primarily aesthetic (a design problem arguably out of the bounds of this proposal) and one is a basic endorsement of testing, the remaining three begin to reflect philosophies similar to Shneiderman&#039;s: reliability or &#039;&#039;predictability&#039;&#039;, &#039;&#039;simplicity&#039;&#039; or &#039;&#039;efficiency&#039;&#039; (which we can construe as two sides of the same coin), and finally a concept of &#039;&#039;uninterruptibility&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Maeda&#039;s [http://lawsofsimplicity.com/?cat=5&amp;amp;order=ASC Laws of Simplicity] are fewer, and ostensibly emphasize &#039;&#039;simplicity&#039;&#039; exclusively, although elements of &#039;&#039;use&#039;&#039; as related by Shneiderman&#039;s rules and &#039;&#039;efficiency&#039;&#039; as defined by Raskin may be facets of this simplicity.  Google&#039;s corporate mission statement presents [http://www.google.com/corporate/ux.html Ten Principles], only half of which can be considered true interface guidelines.  &#039;&#039;Efficiency&#039;&#039; and &#039;&#039;simplicity&#039;&#039; are cited explicitly, aesthetics are once again noted as crucial, and working within a user&#039;s trust is another application of &#039;&#039;predictability&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
====Elements and goals of a guideline set====&lt;br /&gt;
&lt;br /&gt;
Myriad rulesets exist, but variation among them is scarce; it indeed seems possible to parse these common rulesets into overarching principles that can be converted to or associated with quantifiable cognitive properties.  For example, it is likely &#039;&#039;simplicity&#039;&#039; has an analogue in the short-term memory retention or visual retention of the user, vis-&agrave;-vis the rule of [http://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=j5q0VvOGExYC&amp;amp;oi=fnd&amp;amp;pg=PA357&amp;amp;dq=seven+plus+or+minus+two&amp;amp;ots=prI3PKJBar&amp;amp;sig=vOZnqpnkXKGYWxK6_XlA4I_CRyI Seven, Plus or Minus Two].  &#039;&#039;Predictability&#039;&#039; likewise may have an analogue in Activity Theory, in regards to a user&#039;s perceptual expectations for a given action; &#039;&#039;uninterruptibility&#039;&#039; has implications in cognitive task-switching;&amp;lt;ref&amp;gt;http://portal.acm.org/citation.cfm?id=985692.985715&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774&amp;lt;/ref&amp;gt; and so forth.&lt;br /&gt;
&lt;br /&gt;
Within the scope of this proposal, we aim to reduce and refine these philosophies found in seminal rulesets and identify their logical cognitive analogues.  By assigning a quantifiable taxonomy to these principles, we will be able to rank and weight them with regard to their real-world applicability, developing a set of &amp;quot;meta-guidelines&amp;quot; and rules for applying them to a given interface in an automated manner.  Combined with cognitive models and multi-modal HCI analysis, we seek to develop, in parallel with these guidelines, the interface evaluation system responsible for their application. [[User:E J Kalafarski|E J Kalafarski]] 15:21, 6 February 2009 (UTC)&lt;br /&gt;
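One way the proposed ranking and weighting might be operationalized is as a weighted rubric; the principles are drawn from the rulesets discussed above, but the weights and per-interface scores below are placeholders, not results:&lt;br /&gt;

```python
# Hedged sketch of a weighted scoring rubric over candidate
# meta-guidelines. All weights and scores are placeholders.
WEIGHTS = {  # relative cognitive relevance (sums to 1.0)
    "predictability": 0.35,
    "simplicity": 0.30,
    "efficiency": 0.20,
    "uninterruptibility": 0.15,
}

def rubric_score(principle_scores):
    """Weighted sum of per-principle scores (each in [0, 1])."""
    return sum(WEIGHTS[p] * s for p, s in principle_scores.items())

# Hypothetical per-principle ratings for one interface under evaluation.
interface_a = {"predictability": 0.9, "simplicity": 0.6,
               "efficiency": 0.7, "uninterruptibility": 0.5}
print(round(rubric_score(interface_a), 3))
```

In the actual project the weights would come from the cognitive ranking described above, and the per-principle scores from the automated evaluation mechanism.&lt;br /&gt;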
&lt;br /&gt;
:I. Introduction, applications of guidelines&lt;br /&gt;
::A. Application to automated usability evaluations (AUE)&lt;br /&gt;
:II. Popular and seminal examples&lt;br /&gt;
::A. Shneiderman&lt;br /&gt;
::B. Google&lt;br /&gt;
::C. Maeda&lt;br /&gt;
::D. Existing international standards&lt;br /&gt;
:III. Elements of guideline sets, relationship to design patterns&lt;br /&gt;
:IV. Goals for potentially developing a guideline set within the scope of this proposal&lt;br /&gt;
&lt;br /&gt;
===User interface evaluations===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Interaction capture====&lt;br /&gt;
&lt;br /&gt;
[http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4376144 Yi et al.] surveyed the visualization literature and categorized the different types of interactions that users are faced with. They are as follows:&lt;br /&gt;
# Select: mark something as interesting &lt;br /&gt;
# Explore: show me something else &lt;br /&gt;
# Reconfigure: show me a different arrangement &lt;br /&gt;
# Encode: show me a different representation&lt;br /&gt;
# Abstract/Elaborate: show me more or less detail &lt;br /&gt;
# Filter: show me something conditionally &lt;br /&gt;
# Connect: show me related items &lt;br /&gt;
&lt;br /&gt;
Different GUI components may be able to perform the same type of interaction. We would like to categorize the GUI components or patterns that are used to bring about these interactions. We would then have a library of components to use to complete a given task. The goal is to create components for a given interaction that minimize cost to the user. Because the cost of a component is likely dependent on the other components used, the goal of the designer might be to choose a combination of components that minimizes this cost. To do this, we need a way to measure costs, which is discussed in the next section.&lt;br /&gt;
&lt;br /&gt;
====Cost-based analyses====&lt;br /&gt;
In [http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4658124 A Framework of Interaction Costs in Information Visualization], Lam performs a survey of 32 user studies and classifies several types of costs that can be used for qualitative interface evaluation. The classification scheme is based on Donald Norman&#039;s [http://en.wikipedia.org/wiki/Seven_stages_of_action Seven Stages of Action] from his book, [http://www.amazon.com/Design-Everyday-Things-Donald-Norman/dp/0385267746 The Design of Everyday Things] ([http://www.networksplus.net/tracyj/everydaythings.pdf summary]).&lt;br /&gt;
&lt;br /&gt;
# &#039;&#039;&#039;Decision costs:&#039;&#039;&#039; How does user performance decrease when there is an overwhelming amount of data to observe or too many possible actions to take?&lt;br /&gt;
# &#039;&#039;&#039;System-power costs:&#039;&#039;&#039; How does the user translate a high-level goal into a sequence of actions allowed by the interface?&lt;br /&gt;
# &#039;&#039;&#039;Multiple input mode costs:&#039;&#039;&#039; Cost of providing an action selection system that is not unified, for example, a single button that does two different things depending on context.&lt;br /&gt;
# &#039;&#039;&#039;Physical-motion costs:&#039;&#039;&#039; Physical cost to the user of interacting with the interface, for example, measuring mouse movement costs with Fitts&#039;s law.&lt;br /&gt;
# &#039;&#039;&#039;Visual-cluttering costs:&#039;&#039;&#039; Cost due to unwanted visual distractions, such as a mouse-hover pop-up occluding part of the screen.&lt;br /&gt;
# &#039;&#039;&#039;View- and State-change costs:&#039;&#039;&#039; When the user causes the interface to change views, the new view should be consistent with the old one: it should meet the user&#039;s expectations of where things will be in the new view, based on their knowledge of the old one.&lt;br /&gt;
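These cost categories could feed a simple aggregate comparison between candidate component combinations; a sketch, with all cost values invented for illustration:&lt;br /&gt;

```python
# Hedged sketch: pick the candidate component combination with the
# lowest total cost across a subset of Lam's categories.
# The combinations and all numbers are invented for illustration.
COSTS = {
    "dropdown + search box": {"decision": 0.2, "physical-motion": 0.4,
                              "visual-clutter": 0.1},
    "hover pop-up menu":     {"decision": 0.3, "physical-motion": 0.2,
                              "visual-clutter": 0.5},
}

def total_cost(costs):
    """Unweighted sum over cost categories; a real framework would weight them."""
    return sum(costs.values())

best = min(COSTS, key=lambda combo: total_cost(COSTS[combo]))
print(best)
```

Estimating the per-category values empirically is exactly the measurement problem the surrounding sections address.&lt;br /&gt;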
&lt;br /&gt;
==== Evaluation in practice ====&lt;br /&gt;
User interfaces are usually evaluated in practice using two kinds of methods: &#039;&#039;usability inspection methods&#039;&#039;, in which a programmer or one or more experts evaluate the interface through inspection, and &#039;&#039;usability testing&#039;&#039;, in which empirical tests are performed with a group of naive human users. Some usability inspection methods include [http://en.wikipedia.org/wiki/Cognitive_walkthrough Cognitive walkthrough], [http://en.wikipedia.org/wiki/Heuristic_evaluation Heuristic evaluation], and [http://en.wikipedia.org/wiki/Pluralistic_walkthrough Pluralistic walkthrough]. While these inspection methods do not use naive human subjects, their details may be useful in helping to formalize which interactions take place between a user and an interface, and what each interaction&#039;s costs are for a given design.&lt;br /&gt;
&lt;br /&gt;
[http://portal.acm.org/citation.cfm?id=108862 Jeffries et al.] provide a real-world comparison between two usability inspection methods (heuristic evaluation and cognitive walkthrough), usability testing, and the use of published software guidelines for interface design.&lt;br /&gt;
&lt;br /&gt;
=== Multimodal HCI ===&lt;br /&gt;
&lt;br /&gt;
Continued advancements in several signal processing techniques have given rise to a multitude of mechanisms that allow for rich, multimodal, human-computer interaction.  These include systems for head-tracking, eye- or pupil-tracking, fingertip tracking, recognition of speech, and detection of electrical impulses in the brain, among others [http://vr.kjist.ac.kr/~dhong/website/paperworks/hci2002coursePapers/April24/Sharma98.pdf Sharma-1998-TMH].  With ever-increasing computing power, integrating these systems in real-time applications has become a plausible endeavor. &lt;br /&gt;
&lt;br /&gt;
==== Head-tracking ====&lt;br /&gt;
:In virtual, stereoscopic environments, head-tracking has been exploited with great success to create an immersive effect, allowing a user to move freely and naturally while the user&#039;s viewpoint in the visual environment is updated dynamically.  Head-tracking has been employed in non-immersive settings as well, though careful attention must be paid to unintended movements by the user, which may result in distracting visual effects.&lt;br /&gt;
&lt;br /&gt;
==== Pupil-tracking ====&lt;br /&gt;
:Pupil-tracking has been studied a great deal in the field of cognitive science ... (need some examples here from CogSci).  In the HCI community, pupil-tracking has traditionally been used for post-hoc analysis of interface designs, and is particularly prevalent in web interface design.  An alternative use of pupil-tracking is to employ it in real time as an actual mode of interaction.  This has been examined in relatively few cases (citations), where typically the eyes are used to control an onscreen cursor.  As with head-tracking, implementations of pupil-tracking must account for unintended eye movements, which are extremely frequent.&lt;br /&gt;
&lt;br /&gt;
==== Fingertip-tracking, Gestural recognition ====&lt;br /&gt;
:Fingertip tracking and gestural recognition of the hands are the subjects of much research in the HCI community, particularly in the Virtual Environment and Augmented Reality disciplines.  Less implicit than head or pupil-tracking, gestural recognition of the hands may draw upon the wealth of precedents readily observed in natural human interactions.  As sensing technologies become less obtrusive and more robust, this method of interaction has the potential to become quite effective.&lt;br /&gt;
&lt;br /&gt;
==== Speech Recognition ====&lt;br /&gt;
:Speech recognition is improving steadily, though implementing it effectively remains non-trivial in many applications.  (More on this later).&lt;br /&gt;
&lt;br /&gt;
==== Brain Activity Detection ====&lt;br /&gt;
:The use of electroencephalograms (EEGs) in HCI is quite recent, and with limited degrees of freedom, few robust interfaces have been designed around it.  Some recent advances in the pragmatic use of EEGs in HCI research can be seen in [http://portal.acm.org/citation.cfm?id=1357054.1357187&amp;amp;coll=Portal&amp;amp;dl=GUIDE&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Grimes et al.]  The possibility of using brain function to interface with a machine is cause for great excitement in the HCI community, and further advances in non-invasive techniques for accessing brain function may allow teleo-HCI to become a reality.&lt;br /&gt;
&lt;br /&gt;
In sum, the synchronized use of these modes of interaction makes it possible to architect an HCI system capable of sensing and interpreting many of the mechanisms humans use to transmit information to one another.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
=== Workflow analysis ===&lt;br /&gt;
&lt;br /&gt;
Research in workflow and interaction analysis remains relatively sparse, though its utility appears to be manifold.  Tools for such analysis have the potential to facilitate data navigation, provide search mechanisms, and allow for more efficient collaborative discovery.  In addition, awareness and caching of interaction histories readily allows for explanatory presentations of results, and has the potential to provide training data for machine-learning mechanisms.&lt;br /&gt;
&lt;br /&gt;
VisTrails is a workflow system developed at the SCI Institute at the University of Utah and implemented on top of the VTK visualization package.  The primary purpose of the system is to increase performance when working with multiple visualizations simultaneously.  This is accomplished by storing low-level workflow processes to reduce computational redundancy.  Three papers on VisTrails can be found here: [http://www.cs.brown.edu/people/trevor/Papers/Callahan-2006-MED.pdf Callahan-2006-MED], [http://www.cs.brown.edu/people/trevor/Papers/Callahan-2006-VVM.pdf Callahan-2006-VVM], [http://www.cs.brown.edu/people/trevor/Papers/Bavoil-2005-VEI.pdf Bavoil-2005-VEI]&lt;br /&gt;
&lt;br /&gt;
Jeff Heer of Stanford (formerly Berkeley) has presented work on using graphical interaction histories within the Tableau InfoVis application.  Though geared toward two-dimensional visualizations with clearly defined events, his work offers some very useful design guidelines for working with interaction histories, including some evaluation from the deployment of his techniques within Tableau.  His paper from InfoVis &#039;08 can be seen here: [http://www.cs.brown.edu/people/trevor/Papers/Heer-2008-GraphicalHistories.pdf Heer-2008-GraphicalHistories]&lt;br /&gt;
&lt;br /&gt;
If you want to check out some of Trevor&#039;s work having to do with using interaction histories in 3D, time-varying scientific visualizations, his preliminary work that was presented at Vis &#039;08 can be seen here: [http://www.cs.brown.edu/people/trevor/trevor_iweb/Publications_files/obrien-2008-visDemo.pdf Abstract], [http://www.cs.brown.edu/people/trevor/trevor_iweb/Publications_files/obrien-2008-visPoster.pdf Poster]&lt;br /&gt;
&lt;br /&gt;
Optimizing workflows that have been captured -- Tovi?&lt;br /&gt;
&lt;br /&gt;
Does ethnography fit in here?&lt;br /&gt;
&lt;br /&gt;
==Significance==&lt;br /&gt;
&lt;br /&gt;
==Preliminary results==&lt;br /&gt;
&lt;br /&gt;
* Data obtained from user interactions show significant discrepancies between individuals&#039; performance. Much current software does not gracefully handle this fact.&lt;br /&gt;
* Video recordings give us an interesting measure of frustration.&lt;br /&gt;
* Video recordings show general eye-gaze.&lt;br /&gt;
* Keyboard shortcut usage is highly variable.&lt;br /&gt;
(Gideon)&lt;br /&gt;
&lt;br /&gt;
C. Preliminary Studies&lt;br /&gt;
*C.1 Overview of analysis method&lt;br /&gt;
**C.1.1 Justification for qualitative analysis&lt;br /&gt;
**C.1.2 Restrictions on qualitative analysis&lt;br /&gt;
**C.1.3 Terms&lt;br /&gt;
*C.2 Garage Band study&lt;br /&gt;
**C.2.1 Feasibility of a creative problem-solving task&lt;br /&gt;
**C.2.2 Unique aspects and pertinence&lt;br /&gt;
*C.3 Photoshop study&lt;br /&gt;
**C.3.1 Feasibility of a creative sandbox task&lt;br /&gt;
**C.3.2 Unique aspects and pertinence&lt;br /&gt;
*C.4 Lessons Learned&lt;br /&gt;
**C.4.1 Setup of study, metric methods&lt;br /&gt;
**C.4.2 Possibility for future critique-enabled user workflow&lt;br /&gt;
(Steven)&lt;br /&gt;
&lt;br /&gt;
===Trevor===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Interaction Histories&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Project idea:&#039;&#039;&#039;  Generating interaction histories within scientific visualization applications to facilitate individual and collaborative scientific discovery. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Preliminary Work&#039;&#039;&#039;&lt;br /&gt;
# A software infrastructure for caching and filtering interaction events has been developed within an existing scientific application aimed at exploring animal kinematic data captured via high-speed x-ray and CT.&lt;br /&gt;
# Methods for visualizing, editing, and sharing interaction histories have been designed and implemented.&lt;br /&gt;
# Methods for annotating and querying interactions have been implemented.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Planned Work&#039;&#039;&#039;&lt;br /&gt;
# Software model extended to two other interactive visualizations -- a flow visualization application, and a protein visualization application.&lt;br /&gt;
# User study to examine techniques for automatic history generation&lt;br /&gt;
## Automatic creation v. semi-automatic creation v. manual creation&lt;br /&gt;
# Timed-task pilot study performed to validate utility of interaction history techniques&lt;br /&gt;
## Task performance with histories v. without histories&lt;br /&gt;
&lt;br /&gt;
===EJ===&lt;br /&gt;
&#039;&#039;&#039;Problem:&#039;&#039;&#039; There currently exists no interface-evaluation metric that reconciles popular, empirically successful heuristic design guidelines with cognitive theory.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Preliminary Work&#039;&#039;&#039;&lt;br /&gt;
I. A mapping of empirically-effective heuristic design guidelines to fundamental cognitive principles.&lt;br /&gt;
:*A set of &amp;quot;common&amp;quot; design guidelines is arrived at through survey of popular and effective heuristic guidelines in use.&lt;br /&gt;
:*Proposed common design guidelines:&lt;br /&gt;
 Discoverability&lt;br /&gt;
 Appropriate visual presentation&lt;br /&gt;
 Predictability&lt;br /&gt;
 Consistency&lt;br /&gt;
 Simplicity&lt;br /&gt;
 Memory load reduction&lt;br /&gt;
 Feedback&lt;br /&gt;
 Task match&lt;br /&gt;
 User control&lt;br /&gt;
 Efficiency&lt;br /&gt;
:*Proposed cognitive principles:&lt;br /&gt;
 Affordance&lt;br /&gt;
 Visual cue&lt;br /&gt;
 Cognitive load&lt;br /&gt;
 Chunking&lt;br /&gt;
 Activity&lt;br /&gt;
 Actability&lt;br /&gt;
II. A weighting or priority for each of these analogues.&lt;br /&gt;
:*These can begin as binary estimations based on empirical evidence.&lt;br /&gt;
:*Through experimentation, these values should converge to discrete priority values for each analogue, allowing a ranking of analogues.&lt;br /&gt;
III. A system for applying these analogues and respective priority to the evaluation of an interface.&lt;br /&gt;
:*This can occur manually or in an automated fashion.&lt;br /&gt;
:*In this step (or possibly in a separate step), analogues should be assigned a &#039;&#039;suggestion&#039;&#039; or potential correction to provide in the event of &amp;quot;failure&amp;quot; of a particular test by a given interface.&lt;br /&gt;
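The weighting and evaluation scheme in II and III could begin as binary pass/fail checks scaled by per-analogue priorities.  A minimal sketch (the analogue names and weights below are hypothetical placeholders, to be replaced by the experimentally fitted values described above):&lt;br /&gt;

```python
# Hypothetical priorities for guideline/cognitive-principle analogues.
# Weights are placeholders; the proposal calls for converging on them
# through experimentation.
priorities = {
    "discoverability/affordance": 3,
    "consistency/cognitive load": 2,
    "feedback/visual cue": 2,
    "simplicity/chunking": 1,
}

# Binary pass/fail results from inspecting one interface.
inspection = {
    "discoverability/affordance": True,
    "consistency/cognitive load": False,
    "feedback/visual cue": True,
    "simplicity/chunking": True,
}

def interface_score(results, weights):
    # Weighted fraction of analogues the interface satisfies;
    # each failed check is also the hook for an automated suggestion.
    earned = sum(w for name, w in weights.items() if results[name])
    possible = sum(weights.values())
    return earned / possible

score = interface_score(inspection, priorities)
print(f"score = {score:.2f}")
```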
&lt;br /&gt;
===EJ and Jon===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Photoshop Study&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
We demonstrate that the efficiency of performing subtasks in Photoshop can be predicted by a simple model of human perceptual and cognitive abilities.  In particular, we show that several of the tools commonly used to perform basic operations in Photoshop often violate the user&#039;s expectations of how those tools should work or where those tools ought to be located within the interface.  These violations can be categorized as follows: (1) unintuitive relationships between adjustments to common tool parameters and their perceptual results (e.g. adjusting the Magic Wand tool&#039;s &amp;quot;tolerance&amp;quot; setting often leads to unintended selections); (2) inefficient means of adjusting tool parameters (e.g. adjusting the &amp;quot;tolerance&amp;quot; setting by clicking, typing a number, hitting enter, observing the results, and iterating until the desired perceptual effect is achieved); (3) mismatches between the user&#039;s expectations for the names and locations of tools (or menu items) and their actual names and locations (e.g. resizing a picture via the &amp;quot;transform&amp;quot; menu item); (4) the availability of a tool in multiple locations, which imposes a cognitive load on the user when searching for that tool, while the various contexts influence the user&#039;s expectations about the tool&#039;s effects; (5) [what else?], etc.&lt;br /&gt;
&lt;br /&gt;
===Andrew Bragdon===&lt;br /&gt;
&lt;br /&gt;
I.  &#039;&#039;&#039;Goals&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A.  Evaluate the feasibility of an integrated model of current theories in perception and HCI for predicting task performance; also evaluate the feasibility of this experimental methodology&lt;br /&gt;
&lt;br /&gt;
B.  Formative study should inspire future directions&lt;br /&gt;
&lt;br /&gt;
II.  &#039;&#039;&#039;Description of methodology/experimental procedure&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A.  In a highly-controlled task environment, understand how an integrated model comprising current theories in perception and HCI can predict/explain task performance&lt;br /&gt;
&lt;br /&gt;
B.  Users trained in the task extensively to control for learning (learning aspect of task could be investigated in future study)&lt;br /&gt;
&lt;br /&gt;
III. &#039;&#039;&#039;Results&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A. Users favored continuous rotation over static view or meditative pauses.&lt;br /&gt;
&lt;br /&gt;
B. Hand gesticulation seemed to be used as a method of validation in stereo views.&lt;br /&gt;
&lt;br /&gt;
C. Several verbal comments regarding occlusion suggest drawbacks to the tube view.&lt;br /&gt;
&lt;br /&gt;
D. Rotating is beneficial and perhaps necessary for this task.&lt;br /&gt;
&lt;br /&gt;
E. No references to external, offscreen information. In fact, very rarely did participants glance away from the screen.&lt;br /&gt;
&lt;br /&gt;
F. Our assessment of a typical session beginning with a new dataset:&lt;br /&gt;
&lt;br /&gt;
   1. Data loads, split second decision to begin rotating.&lt;br /&gt;
   2. Continuous rotation until proper viewpoint determined.&lt;br /&gt;
   3. Rocking back and forth interaction about the optimal viewpoint. (Thinking?)&lt;br /&gt;
   4. [In stereo views, participants were noted tilting their heads fairly consistently.] &lt;br /&gt;
&lt;br /&gt;
IV.  &#039;&#039;&#039;Discussion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A.  Some of the users&#039; strategies can be explained by current theories in perception&lt;br /&gt;
&lt;br /&gt;
V.  &#039;&#039;&#039;Conclusion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
VI.  &#039;&#039;&#039;Future Directions (we will be running this study this week!)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A.  Now that we have investigated highly controlled task performance, explore larger workflow context&lt;br /&gt;
&lt;br /&gt;
B.  Perform a study examining how interruptions affect user performance, and explore what coping strategies users employ in such an environment&lt;br /&gt;
&lt;br /&gt;
1.  Software developers (a good example of a challenging, and creative type of information work) will receive task requests by email; notifications will appear on their screen in real time&lt;br /&gt;
&lt;br /&gt;
2.  Each email will have a different priority (e.g., low, high, emergency)&lt;br /&gt;
&lt;br /&gt;
3.  Participants will be asked to manage priorities effectively to accomplish the tasks given&lt;br /&gt;
&lt;br /&gt;
4.  Once they begin working, we will &amp;quot;interrupt&amp;quot; them at controlled times with new task requests of different priorities&lt;br /&gt;
&lt;br /&gt;
5.  We will analyze their actions for coping strategies, metawork, and working spheres to try to understand how the larger workflow context is affected by interruptions&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Adam Darlow===&lt;br /&gt;
&lt;br /&gt;
I.  &#039;&#039;&#039;Goals&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A.  Evaluate the interactions between various cognitive principles and design principles. There are three basic relations:&lt;br /&gt;
&lt;br /&gt;
   1. The cognitive principle is the motivation behind the design principle.&lt;br /&gt;
   2. The cognitive principle suggests a method for achieving the design principle.&lt;br /&gt;
   3. The cognitive principle and design principle are unrelated. (Hopefully few)&lt;br /&gt;
&lt;br /&gt;
B.  Make design rules which are suggested by the combination of a cognitive principle and a design principle.&lt;br /&gt;
&lt;br /&gt;
II.  &#039;&#039;&#039;Description of methodology/experimental procedure&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A.  Collecting commonly accepted design principles from the literature on interface design and well established cognitive principles from the cognitive psychology literature and constructing a matrix which crosses them. Most squares in the matrix should suggest specific design rules.&lt;br /&gt;
&lt;br /&gt;
III. &#039;&#039;&#039;Results&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As a preliminary effort, I have chosen the following two cognitive principles and three design principles:&lt;br /&gt;
&lt;br /&gt;
Cognitive principles&lt;br /&gt;
&lt;br /&gt;
#C1. People derive complex associations and causal interpretations from temporal correlations and patterns.&lt;br /&gt;
&lt;br /&gt;
#C2. People have limited working memory (7 ± 2 items), but each slot can hold a chunk of related information.&lt;br /&gt;
&lt;br /&gt;
Design Principles (from Maeda (TBD link))&lt;br /&gt;
&lt;br /&gt;
#D1. Achieve simplicity through thoughtful reduction.&lt;br /&gt;
&lt;br /&gt;
#D2. Organization makes a system of many appear fewer.&lt;br /&gt;
&lt;br /&gt;
#D3. Knowledge makes everything simpler.&lt;br /&gt;
&lt;br /&gt;
The resulting matrix entries are as follows:&lt;br /&gt;
&lt;br /&gt;
#C1 + D1. Remove extraneous correlations. Things shouldn&#039;t consistently and apparently change or happen in conjunction unless they are actually related and their relation is important to the user.&lt;br /&gt;
&lt;br /&gt;
#C1 + D2. Use temporal and spatial contiguity to help users organize and group  multiple events meaningfully. &lt;br /&gt;
&lt;br /&gt;
#C1 + D3. Use temporal correlations to effectively teach the important  causal relations inherent in the interface.&lt;br /&gt;
&lt;br /&gt;
#C2 + D1. Reduce the interface such that a user has to be aware of no more than 5 items simultaneously.&lt;br /&gt;
&lt;br /&gt;
#C2 + D2. Groups of semantically related items can for many purposes be treated as a single item.&lt;br /&gt;
&lt;br /&gt;
#C2 + D3. Teach users how things are related so that they can be chunked.&lt;br /&gt;
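The matrix construction in II.A amounts to enumerating every (cognitive principle, design principle) pair and filling each cell with the suggested design rule.  A minimal bookkeeping sketch (rule text abridged from the entries above; empty cells mark pairs still to be analyzed):&lt;br /&gt;

```python
from itertools import product

cognitive = ["C1", "C2"]
design = ["D1", "D2", "D3"]

# Each matrix cell holds the design rule suggested by the pair;
# an empty string marks a pair not yet analyzed (or unrelated).
matrix = {pair: "" for pair in product(cognitive, design)}
matrix[("C1", "D1")] = "Remove extraneous correlations."
matrix[("C2", "D2")] = "Treat groups of related items as one chunk."

filled = [pair for pair, rule in matrix.items() if rule]
print(len(matrix), filled)
```

Expanding the matrix, as proposed under Future Directions, is then just adding principles to the two lists and filling in the new cells.&lt;br /&gt;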
&lt;br /&gt;
&lt;br /&gt;
V.  &#039;&#039;&#039;Conclusion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
VI.  &#039;&#039;&#039;Future Directions&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To expand the matrix and evaluate the resulting design rules.&lt;br /&gt;
&lt;br /&gt;
==Research plan==&lt;br /&gt;
&lt;br /&gt;
We can speculate here about the details of a longer-term research plan, but it may not be necessary to actually flesh out this part of the &amp;quot;proposal&amp;quot;.  There does need to be enough to define what the overall proposed work is, but that may show up in earlier sections.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Adam Darlow</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal&amp;diff=2275</id>
		<title>CS295J/Research proposal</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal&amp;diff=2275"/>
		<updated>2009-03-04T20:37:56Z</updated>

		<summary type="html">&lt;p&gt;Adam Darlow: /* Adam Darlow */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Project summary==&lt;br /&gt;
&lt;br /&gt;
We propose to integrate theories of cognition, models of perception, rules of design, and concepts from the discipline of human-computer interaction to develop a predictive model of user performance in interacting with computer software for visual and analytical work.  Our proposed model comprises a set of computational elements representing components of human cognition, memory, or perception.  The collective abilities and limitations of these elements can be used to provide feedback on the likely efficacy of user interaction techniques.&lt;br /&gt;
&lt;br /&gt;
The choice of human computational elements will be guided by several models or theories of cognition and perception, including Gestalt, Distributed, Gibson, ???(where pathway, when pathway)???, ???working-memory???, ..., and ???.  The list of elements will be extensible.  The framework coupling them will allow for experimental predictions of utility of user interfaces that can be verified against human performance.&lt;br /&gt;
&lt;br /&gt;
Coupling the system with users will involve a data-capture mechanism for collecting the communications between a user interface and a user.  These will be primarily event-based, and will include a new low-cost, camera-based eye-tracking system.&lt;br /&gt;
&lt;br /&gt;
During early development, existing interfaces will be evaluated manually to characterize their &lt;br /&gt;
&lt;br /&gt;
(we need some way to specify interaction techniques...)&lt;br /&gt;
&lt;br /&gt;
==Specific Contributions==&lt;br /&gt;
# A model of human cognitive and perceptual abilities when using computers&lt;br /&gt;
## Impact: Such a model would allow us to predict human performance with interfaces.  Validation of the model would allow us to more rapidly converge on ideal interfaces while simultaneously ruling out sub-optimal ones.&lt;br /&gt;
## 3-Week Feasibility Study: (1) Distill (perhaps from review articles) the major findings in all of the relevant subfields into a set of principles that will be grouped to form appropriate model components.  Some of the most relevant subfields include: memory, attention, visual perception, psychoacoustics, task switching, categorization, event perception, haptics (ANY OTHERS?).  (2)  Use these to devise a simple predictive model of human interaction.  This model could simply consist of the set of design principles but should allow some form of quantitative scoring/evaluation of interfaces.  (3) Develop a small set (5-10) of candidate GUIs that are designed to help the user accomplish the same overarching task(s) (e.g. importing, analyzing, and flexibly graphing data in Matlab).   (4)  Test/validate the model by comparing predicted performance to actual performance with the GUIs.&lt;br /&gt;
## Risks/Costs: There are always potential risks to the human subjects who participate in testing the model, which may necessitate IRB approval.  Costs might include: (1) paying for any necessary hardware and software for developing, displaying, and testing the GUIs, and (2) paying for human subjects.&lt;br /&gt;
# Something about design rules collected and merged&lt;br /&gt;
## Something comparing these collected rules to a baseline (establishing their value)&lt;br /&gt;
## ???&lt;br /&gt;
# A predictive, fully-integrated model of user workflow which encompasses low-level tasks, working spheres, communication chains, interruptions and multi-tasking. (OWNER: Andrew Bragdon)&lt;br /&gt;
##Traditionally, software design and usability testing are focused on low-level task performance.  However, prior work (Gonzales et al.) provides strong empirical evidence that users also work at a higher, &#039;&#039;working sphere&#039;&#039; level.  Su et al. develop a predictive model of task switching based on communication chains.  Our model will specifically identify and predict key aspects of higher-level information work behaviors, such as task switching.  We will conduct initial exploratory studies to test specific instances of this high-level hypothesis.  We will then use the refined model to identify specific predictions for the outcome of a formal, ecologically valid study involving a complex, non-trivial application.&lt;br /&gt;
## Impact: To design computing systems that are truly organized around the way users work, we must understand &#039;&#039;how&#039;&#039; users work.  To do this, we need to establish a predictive model of user workflow that encompasses multiple levels of workflow: individual task items, larger goal-oriented working spheres, multi-tasking behavior, and communication chains.  Current information work systems are almost always designed around the lowest level of workflow, the individual task, and do not take into account the larger workflow context.  Fundamentally, a predictive model would allow us to design computing systems which significantly increase worker productivity in the United States and around the world, by designing these systems around the way people work.&lt;br /&gt;
## Risk and Costs: Risk will be an important factor in this research, and thus a core goal of our research agenda will be to manage this risk.  The most effective way to do this will be to compartmentalize the risk by conducting empirical investigations - which will form the basis for the model - into the separate areas: low-level tasks, working spheres, communication chains, interruptions and multi-tasking in parallel.  While one experiment may become bogged down in details, the others will be able to advance sufficiently to contribute to a strong core model, even if one or two facets encounter setbacks during the course of the research agenda.  The primary cost drivers will be the preliminary empirical evaluations, the final system implementation, and the final experiments which will be designed to support the original hypothesis.  The cost will span student support, both Ph.D. and Master&#039;s students, as well as full-time research staff.  Projected cost: $1.5 million over three years.&lt;br /&gt;
## 3-week Feasibility Study:  To ascertain the feasibility of this project we will conduct an initial pilot test to investigate the core idea: a predictive model of user workflow.  We will spend 1 week studying the real workflow of several people through job shadowing.  We will then create two systems designed to help a user accomplish some simple information work task.  One system will be designed to take larger workflow into account (experimental group), while one will not (control group).  In a synthetic environment, participants will perform a controlled series of tasks while receiving interruptions at controlled times.  If the two groups perform roughly the same, then we will need to reassess this avenue of research.  However, if the two groups perform differently then our pilot test will have lent support to our approach and core hypothesis.&lt;br /&gt;
# A low-overhead mechanism for capturing event-based interactions between a user and a computer, including web-cam based eye tracking.  (should we buy or find out about borrowing use of pupil tracker?)  &#039;&#039;Should we include other methods of interaction here?  Audio recognition seems to be the lowest cost.  It would seem that a system that took into account head-tracking, audio, and fingertip or some other high-DOF input would provide a very strong foundation for a multi-modal HCI system.  It may be more interesting to create a software toolkit that allows for synchronized usage of those inputs than a low-cost hardware setup for pupil-tracking.  I agree pupil-tracking is useful, but developing something in-house may not be the strongest contribution we can make with our time.&#039;&#039; (Trevor)&lt;br /&gt;
## Accuracy study of eye tracking (2 cameras?  double as an input device?)&lt;br /&gt;
## ???&lt;br /&gt;
# A classification of standard design guidelines into overarching principles with quantifiable cognitive analogues &#039;&#039;&#039;OWNER&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 16:16, 6 February 2009 (UTC)&lt;br /&gt;
## A ranking/weighting for these principles based on their cognitive relevance&lt;br /&gt;
## An interface evaluation mechanism for the application of these principles and their associated guidelines.&lt;br /&gt;
# A systematic method to determine task distribution based on psychological principles. (Owner - [http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon])&lt;br /&gt;
## The theory of distributed cognition (Clark-1998-TEM) is a well-suited basis for constructing a human-computer interaction framework (Hollan-2000-DCF). However, a systematic method of determining which tasks should be distributed to which agent in a distributed environment has yet to be clearly defined. I propose that the cognitive psychology literature is rife with empirical evidence on task performance variables. Dual-process theories (Evans-2003-ITM) usually agree on the nature of high-level cognitive operations which are used in human reasoning. It is also argued that low-level processing, which is based on perceptual similarity, contiguity, and association, is determined by a set of autonomous subsystems. In general, one might say that humans excel at these low-level functions in relation to traditional von Neumann architectures (and even current neural networks), but recently, cognitive science has focused less on high-level reasoning in humans, as the evidence shows that we rarely engage in such demanding cognitive operations. Therefore, we believe that an optimal configuration for distributed cognition amongst people and computers will take advantage of these specialties and deficiencies, and distribute tasks accordingly. Our research project will contribute a set of guidelines, or heuristics, to allow engineers to effectively determine which tasks should be assigned to which entities.&lt;br /&gt;
## The impact of this contribution will be a systematic approach to interface design. Engineers will finally have a well-documented standard about how to determine what operations humans should be responsible for, and which should be off-loaded to the computer.&lt;br /&gt;
## There is no risk for this contribution. Costs associated will be an extensive search of cognitive psychology literature for the past 25 years or so. Research on memory, reasoning, perception and more will be required in order to conduct a complete and accurate assessment.&lt;br /&gt;
## In the course of 30 hours, we may perform a hand-analysis of the empirical results in psychology in order to develop a set of approximately 10 guidelines or rules.&lt;br /&gt;
# A method for collecting data on user performance in cognitive, perceptual, and motor-control tasks that costs less, allows for a greater number of samples, and measures user improvement over time. (Owner - Eric)&lt;br /&gt;
##In order to reach a model of human cognitive and perceptual abilities when using computers, experimental analysis of human performance on these tasks will likely be necessary. User studies can often aid in this analysis, but they are costly and time-consuming, and are subject to user fatigue. Alternately, we propose a web-based method for evaluating user performance in perceptual, motor-control, and cognitive tasks. The idea is to take a task that would normally be measured through user studies in a laboratory and map this task into a simple online game. Somewhat similar work has been done by Popovic et al. at the University of Washington, in that they took the task of folding proteins and mapped it into an online game ( http://www.economist.com/displaystory.cfm?story_id=11326188 ) with much success. We will analyze the value of this method by comparing it to similar tasks performed in laboratory experiments, both in terms of user performance and deployment costs.&lt;br /&gt;
##By converting the task into a simple game, we hope to reduce the problem of user fatigue. Additionally, if the game is played on a social networking site, we can track basic information about the users who perform the tasks and, more importantly, identify returning users. Thus, we can track not only a user&#039;s performance, but also how they improve at a given task over time.&lt;br /&gt;
##There are no clear risks involved with this study. Potential costs would be those required for development of each experiment and for web hosting.&lt;br /&gt;
##As a prototype, we can select one particular task to map into a simple online game. To check feasibility, we need to ensure that the results from our proposed method are similar to those found in laboratory settings. There are two possible effects we must test for: bias in results due to the mapping into a game, and bias due to the sample of subjects or other changes caused by the web-based component. A simple test would be to first test users in a lab using traditional methods as a baseline, and then see how performance on the game-based mapping differs in the same laboratory setting; this determines whether the game appropriately measures the given task. Next, an online version of the game can be introduced and its performance compared with the laboratory results. If performance is similar across all of these tests, we have found a method for measuring low-level tasks that allows for many samples at minimal cost.&lt;br /&gt;
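The comparison above can be sketched numerically. The following is a minimal, hypothetical example; the sample data and the use of Cohen&#039;s d as the comparison metric are purely illustrative assumptions, not part of the proposed protocol:&lt;br /&gt;

```python
from statistics import mean, stdev

def cohens_d(a, b):
    # Pooled-standard-deviation effect size (Cohen d) between two
    # samples of task completion times; values near 0 suggest the
    # game mapping did not bias measured performance.
    na, nb = len(a), len(b)
    pooled = (((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
              / (na + nb - 2)) ** 0.5
    return (mean(a) - mean(b)) / pooled

# Hypothetical completion times (seconds) for the same task in the
# lab baseline and in the game version.
lab_times  = [4.1, 3.8, 4.5, 4.0, 3.9]
game_times = [4.3, 4.0, 4.4, 4.2, 3.7]
effect = cohens_d(lab_times, game_times)
```

A near-zero effect size between the lab baseline and the game version would support the claim that the game mapping measures the task appropriately.&lt;br /&gt;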
&lt;br /&gt;
==Specific Aims==&lt;br /&gt;
&lt;br /&gt;
# Develop a scoring system for interfaces to evaluate the degree to which all changes and causal relations are tracked by motion cues that are contiguous in time and/or space.&lt;br /&gt;
# Accurately assess computational and psychological costs for tasks and subtasks. To do this, we will develop two non-trivial prototype systems: a conventional control system and a novel system based on our model of task switching. We will use our model to make specific predictions about relative task performance and user affect responses, and then test these predictions empirically in a formal study.&lt;br /&gt;
# Develop a model that accounts for qualitatively different psychological tasks&lt;br /&gt;
# Test the model on real-world data&lt;br /&gt;
# Build classification for design guidelines based on cognitive analogues [[User:E J Kalafarski|E J Kalafarski]] 16:18, 6 February 2009 (UTC)&lt;br /&gt;
# Parse and classify existing guidelines by this new metric [[User:E J Kalafarski|E J Kalafarski]] 16:18, 6 February 2009 (UTC)&lt;br /&gt;
# Build an automated evaluation rubric for applying these classifications to interfaces in development [[User:E J Kalafarski|E J Kalafarski]] 16:18, 6 February 2009 (UTC)&lt;br /&gt;
# Long-term goal: build a mixed-initiative interface generation system that takes some basic GUI requirements from the designer as input and attempts to maximize its score on the &amp;quot;evaluation rubric&amp;quot; within these constraints. (Eric)&lt;br /&gt;
# Develop model component to predict performance following interruptions or changes between work spheres.&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
===Models of cognition===&lt;br /&gt;
There are several models of cognition, ranging from fundamental aspects of neurological processing to very high-level psychological analysis. Three main theories have come to be recognized as the most helpful in conceptualizing the actual process of HCI. These models all agree that one cannot accurately analyze HCI by viewing the user without context, but the extent and nature of that context varies greatly among them.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[http://en.wikipedia.org/wiki/Activity_Theory Activity Theory]&#039;&#039;&#039;, developed in the early 20th century by Russian psychologists S.L. Rubinstein and A.N. Leontiev, posits the existence of four discrete aspects of human-computer interaction.  The &amp;quot;Subject&amp;quot; is the human interacting with the item, who possesses an &amp;quot;Object&amp;quot; (e.g. a goal) which they hope to accomplish by using a tool.  The Subject conceptualizes the realization of the Object via an &amp;quot;Action&amp;quot;, which may be as simple or complex as is necessary.  The Action is made up of one or more &amp;quot;Operations&amp;quot;, the most fundamental level of interaction including typing, clicking, etc.&lt;br /&gt;
&lt;br /&gt;
A key concept in Activity Theory is that of the artifact, which mediates all interaction. The computer itself need not be the only artifact in HCI: others include all sorts of signs, algorithmic methods, instruments, etc.&lt;br /&gt;
&lt;br /&gt;
A longer synopsis of Activity Theory may be found at [http://mcs.open.ac.uk/yr258/act_theory/ this website].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The [http://en.wikipedia.org/wiki/Situated_cognition Situated Action Model]&#039;&#039;&#039; focuses on emergent behavior, emphasizing the subjective aspect of human-computer interaction and the consequent need to accommodate a wide variety of users. This model proposes the least contextual interaction, and seems to maintain that the interactive experience is determined entirely by the user&#039;s ability to use the system in question. While limiting, this concept of usability can be very informative when designing for less tech-savvy users.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[http://en.wikipedia.org/wiki/Distributed_cognition Distributed Cognition]&#039;&#039;&#039; proposes that the computer (or, as in Activity Theory, any other artifact) can be used and ought to be thought of as an extension of the mental processing of the human.  This is not to say that the two are of equal or even comparable cognitive abilities, but that each has unique strengths and that recognition of and planning around these relative advantages can lead to increased efficiency and effectiveness.  The rotation of blocks in Tetris serves as a perfect example of this sort of cognitive symbiosis.&lt;br /&gt;
&lt;br /&gt;
(Steven)&lt;br /&gt;
&lt;br /&gt;
====Workflow Context====&lt;br /&gt;
There are at least two levels at which users work ([http://portal.acm.org/citation.cfm?id=985692.985707&amp;amp;coll=portal&amp;amp;dl=ACM&amp;amp;CFID=20781736&amp;amp;CFTOKEN=83176621 Gonzales, et al., 2004]). Users accomplish individual low-level tasks which are part of larger &#039;&#039;working spheres&#039;&#039;; for example, an office worker might send several emails, create several Post-It (TM) note reminders, and then edit a Word document, each of these smaller tasks being part of a single larger working sphere of &amp;quot;adding a new section to the website.&amp;quot; It is therefore important to understand this larger workflow context, which often involves extensive multi-tasking as well as switching between a variety of computing devices and traditional tools, such as notebooks. The study found that the information workers surveyed typically switch individual tasks every 2 minutes and maintain many simultaneous working spheres, switching between them on average every 12 minutes. This frenzied pace of switching tasks and working spheres suggests that users will not use a single application or device for long periods, and that affordances supporting this characteristic pattern of information work are important.&lt;br /&gt;
&lt;br /&gt;
Czerwinski, et al. conducted a [http://portal.acm.org/citation.cfm?id=985692.985715&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 diary study] of task switching and interruptions of users in 2004. This study showed that task complexity, task duration, length of absence, and number of interruptions all affected the users&#039; own perceived difficulty of switching tasks. [http://delivery.acm.org/10.1145/1250000/1240730/p677-iqbal.pdf?key1=1240730&amp;amp;key2=4525483321&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Iqbal, et al.] studied task disruption and recovery in a field study, and found that users often visited several applications as a result of an alert, such as a new email notification, and that 27% of task suspensions resulted in 2 hours or more of disruption. Users in the study said that losing context was a significant problem in switching tasks, and contributed to the length of some of these disruptions. This work hints at the importance of providing cues that help users maintain and regain lost context during task switching.&lt;br /&gt;
&lt;br /&gt;
(Andrew Bragdon - OWNER)&lt;br /&gt;
&lt;br /&gt;
The problem of task switching is exacerbated when some tasks are more routine than others. When a person intends to switch from a routine task to a novel task at some later time, they often forget the context of the original task ([http://web.ebscohost.com/ehost/pdf?vid=1&amp;amp;hid=7&amp;amp;sid=54ec1e22-3df2-462c-b484-7a7c052c2173%40SRCSM1 Aarts et al., 1999]). Also, if both tasks are done in the same context, with the same tools or with the same materials, people have difficulty inhibiting the routine task while doing the novel task (Stroop, 1935). This inhibition also makes switching back to the routine task slower (Allport et al., 1994). All of these problems can be alleviated to some degree by salient cues in the environment. The intention to switch becomes easier to act on when there is a salient reminder at the appropriate time (McDaniel and Einstein, 1993), and associating different environmental cues with different goals can automatically trigger appropriate behavior ([http://web.ebscohost.com/ehost/pdf?vid=1&amp;amp;hid=4&amp;amp;sid=68f77032-f093-4139-a833-760d2217b513%40sessionmgr9 Aarts and Dijksterhuis, 2003]).&lt;br /&gt;
&lt;br /&gt;
(Adam)&lt;br /&gt;
&lt;br /&gt;
(Edited by Andrew)&lt;br /&gt;
&lt;br /&gt;
====Quantitative Models: Fitts&#039;s law, Steering Law====&lt;br /&gt;
Fitts&#039;s law and the steering law are examples of quantitative models that predict user performance with certain types of user interfaces. In addition to these classic models, [http://delivery.acm.org/10.1145/1250000/1240850/p1495-cao.pdf?key1=1240850&amp;amp;key2=9904483321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Cao and Zhai] developed and validated a quantitative model of human performance of pen stroke gestures in 2007. [http://tlaloc.sfsu.edu/~lank/research/appearing/FSS604LankE.pdf Lank and Saund] utilized a model that used curvature to predict the speed of a pen as it moved across a surface, helping to disambiguate target selection intent.&lt;br /&gt;
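For concreteness, Fitts&#039;s law in its common Shannon formulation, MT = a + b * log2(D/W + 1), can be sketched as follows; the constants a and b below are purely illustrative stand-ins for values that would be fitted empirically per device and user population:&lt;br /&gt;

```python
from math import log2

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    # Shannon formulation of Fitts law: MT = a + b * log2(D/W + 1).
    # a (intercept, seconds) and b (slope, seconds/bit) are fitted
    # constants; the defaults here are purely illustrative.
    return a + b * log2(distance / width + 1)

# A farther target of the same width takes longer to acquire.
near = fitts_movement_time(distance=100, width=50)
far  = fitts_movement_time(distance=400, width=50)
```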
&lt;br /&gt;
In addition, quantitative models are often tested against new interfaces to verify that they hold.  For example, [http://portal.acm.org/citation.cfm?id=1054972.1055012&amp;amp;coll=GUIDE&amp;amp;dl=ACM&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Grossman et al.] verified that their Bubble Cursor approach to enlarging effective pointing target sizes obeyed Fitts&#039;s law for actual distance traveled.&lt;br /&gt;
&lt;br /&gt;
In addition to formal models, machine learning techniques have also been applied to modeling user interaction. For example, [http://delivery.acm.org/10.1145/1250000/1240669/p271-hurst.pdf?key1=1240669&amp;amp;key2=6465483321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Hurst et al.] used a learning classifier, trained on low-level mouse and keyboard usage patterns, to identify novice and expert users dynamically with accuracies as high as 91%. This classifier was then used to provide different information and feedback to the user as appropriate.&lt;br /&gt;
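As a hedged illustration of this kind of classifier (a toy stand-in, not the actual method used by Hurst et al.), a nearest-centroid rule over hypothetical usage features such as mean mouse speed and pause rate might look like:&lt;br /&gt;

```python
def centroid(rows):
    # Mean feature vector of a list of equal-length feature vectors.
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def classify_user(sample, novice_rows, expert_rows):
    # Toy nearest-centroid classifier over low-level usage features;
    # assigns the label of the closer class centroid.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    d_novice = dist(sample, centroid(novice_rows))
    d_expert = dist(sample, centroid(expert_rows))
    return "novice" if d_novice < d_expert else "expert"

# Hypothetical feature rows: [mean mouse speed (px/s), pauses per minute].
novices = [[120.0, 9.0], [100.0, 11.0]]
experts = [[300.0, 2.0], [280.0, 3.0]]
label = classify_user([110.0, 10.0], novices, experts)
```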
&lt;br /&gt;
(Andrew Bragdon - OWNER)&lt;br /&gt;
&lt;br /&gt;
====Distributed cognition====&lt;br /&gt;
&lt;br /&gt;
Distributed cognition is a theory in which cognitive processes take place both inside and outside the brain. Humans have a great ability to use tools and to incorporate their environments into their sphere of thinking. Clark puts it nicely in [[http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature#Cognition Clark-1994-TEM]].&lt;br /&gt;
&lt;br /&gt;
Optimal HCI designs will therefore treat the brain, person, interface, and computer as a single, holistic cognitive system.&lt;br /&gt;
&lt;br /&gt;
In practical terms, the issue for our proposal is how best to maximize utility by distributing the cognitive tasks at hand among the components of the whole system. Simply put: which tasks can we off-load to the computer to do for us, faster and more accurately, and which tasks should we purposely leave the computer out of?&lt;br /&gt;
&lt;br /&gt;
Typically, the tasks most eligible for off-loading are the ones we perform poorly. Conveniently, the tasks computers perform poorly are often the ones we excel at. A few examples:&lt;br /&gt;
&lt;br /&gt;
*Computers&#039; areas of expertise: number crunching, reliable memory, logical reasoning, precision&lt;br /&gt;
*Humans&#039; areas of expertise: associative thought, real-world knowledge, social behavior, alogical reasoning, working with imprecise information&lt;br /&gt;
&lt;br /&gt;
Using this division of cognitive labor allows us to optimize task workflows. Ignoring it creates strain and bottlenecks at the computer, the human, or the interface. The field of HCI is full of failures that can be attributed to not recognizing which tasks should be handled by which sub-system.&lt;br /&gt;
&lt;br /&gt;
As a heuristic for dividing this labor, one might turn to the dual-process theory literature [http://vrl.cs.brown.edu/wiki/CS295J/Literature Evans-2003-ITM]. What is most often called System 1 covers what humans are good at, while System 2 tasks are what computers do well.&lt;br /&gt;
&lt;br /&gt;
(Owner - [http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon])&lt;br /&gt;
&lt;br /&gt;
====Information Processing Approach to Cognition====&lt;br /&gt;
The dominant approach in cognitive science is called information processing. It views cognition as a system that takes in information from the environment, forms mental representations, and manipulates those representations in order to create the information needed to achieve its goals. This approach includes three levels of analysis originally proposed by Marr (1982):&lt;br /&gt;
# Computational - What are the goals of a process or representation? What are the inputs and desired outputs required of a system which performs a task? Models at this level of analysis are often considered normative models, because any agent wanting to perform the task should conform to them. Rational agent models of decision making, for example, belong at this level of analysis.&lt;br /&gt;
# Process/Algorithmic - What are the processes or algorithms involved in how humans perform the task? This is the most common level of analysis as it focuses on the mental representations, manipulations and computational faculties involved in actual human processing. Algorithmic descriptions of human capabilities and limitations, such as working memory size, belong at this level of analysis.&lt;br /&gt;
# Implementation - How are the processes and algorithms realized in actual biological computation? Dopamine theories of reward learning, for example, belong at this level of analysis.&lt;br /&gt;
The information processing approach is often contrasted with the distributed cognition approach. Its advantage is that it finds general mechanisms that are valid across many different contexts and situations. Its disadvantage is that it can have difficulty explaining the rich interactions between people and their environment.&lt;br /&gt;
&lt;br /&gt;
In considering users as information processors, interfaces should take into account people&#039;s computational limitations on short term memory, learning and vision as well as the algorithms and representations that they use to process information and pursue goals.&lt;br /&gt;
&lt;br /&gt;
(Adam)&lt;br /&gt;
&lt;br /&gt;
===Models of perception===&lt;br /&gt;
&lt;br /&gt;
====Gibsonianism====&lt;br /&gt;
&lt;br /&gt;
Gibsonianism, named after James J. Gibson and more commonly referred to as ecological psychology, is an epistemological direct-realist theory of perception and action. In contrast to information processing and cognitivist approaches, which generally assume that perception is a constructive process operating on impoverished sense-data inputs (e.g. photoreceptor activity) to generate representations of the world with added structure and meaning (e.g. a mental or neural &amp;quot;picture&amp;quot; of a chair), ecological psychology treats perception as direct, non-inferential, unmediated (by retinal images or mental representations) epistemic contact with behaviorally relevant features of the environment (Warren, 2005). The possibilities for action that the environment offers a given animal are taken to be specified by information available in structured energy distributions (e.g. the optic array of light arriving at the eyes), and these possibilities for action constitute the affordances of the environment with respect to that animal (Gibson, 1986).&lt;br /&gt;
&lt;br /&gt;
Gibson&#039;s notion of affordance has many implications for our enterprise; however, it is worth noting that the original definition of affordance emphasizes possibilities for action, not their relative likelihoods. For example, for most humans, laptop computer screens afford puncturing with Swiss Army knives, yet it is unlikely that a user will attempt to retrieve an electronic coupon by carving it out of their monitor. This example illustrates that interfaces often afford a class of actions that are undesirable from the perspective of both the designer and the user.&lt;br /&gt;
&lt;br /&gt;
(Jon)&lt;br /&gt;
&lt;br /&gt;
===Design guidelines===&lt;br /&gt;
&lt;br /&gt;
A multitude of rule sets exists for the design not only of interfaces, but of architecture, city planning, and software development. They range in scale from a single primary rule to as many as Christopher Alexander&#039;s 253 rules for urban environments,&amp;lt;ref&amp;gt;http://hci.rwth-aachen.de/materials/publications/borchers2000a.pdf&amp;lt;/ref&amp;gt; which he introduced along with the concept of design patterns in the 1970s. The use of these rules has likewise been studied:&amp;lt;ref&amp;gt;http://stl.cs.queensu.ca/~graham/cisc836/lectures/readings/tetzlaff-guidelines.pdf&amp;lt;/ref&amp;gt; guidelines are often only partially understood, indistinct to the developer, and &amp;quot;fraught&amp;quot; with potential usability problems in real-world situations.&lt;br /&gt;
&lt;br /&gt;
====Application to AUE====&lt;br /&gt;
&lt;br /&gt;
And yet the vast majority of guideline sets, including the most popular rulesets, have been arrived at heuristically. The most successful, such as Raskin&#039;s and Shneiderman&#039;s, have been forged from years of observation rather than empirical study and experimentation. The problem is similar to the circular logic faced by automated usability evaluations: an automated system is limited in the suggestions it can offer to a set of preprogrammed guidelines, which have often not been subjected to rigorous experimentation.&amp;lt;ref&amp;gt;http://www.eecs.berkeley.edu/Pubs/TechRpts/2000/CSD-00-1105.pdf&amp;lt;/ref&amp;gt; In the vast majority of existing studies, emphasis has been placed on either the development of guidelines or the application of existing guidelines to automated evaluation; a mutually reinforcing development of both simultaneously has not been attempted.&lt;br /&gt;
&lt;br /&gt;
Overlap between rulesets is inevitable. For our purposes of evaluating existing rulesets efficiently, without extracting and analyzing each rule individually, it may be desirable to identify the overarching &#039;&#039;principles&#039;&#039; or &#039;&#039;philosophy&#039;&#039; (at most 2 or 3) of a given ruleset and to determine their quantitative relevance to problems of cognition.&lt;br /&gt;
&lt;br /&gt;
====Popular and seminal examples====&lt;br /&gt;
Shneiderman&#039;s [http://faculty.washington.edu/jtenenbg/courses/360/f04/sessions/schneidermanGoldenRules.html Eight Golden Rules] date to 1987 and are arguably the most cited. They are heuristic, but can be loosely classified by cognitive objective: at least two rules apply primarily to &#039;&#039;repeated use&#039;&#039; as opposed to &#039;&#039;discoverability&#039;&#039;. Up to five of Shneiderman&#039;s rules emphasize &#039;&#039;predictability&#039;&#039; in the outcomes of operations and &#039;&#039;increased feedback and control&#039;&#039; in the agency of the user. His final rule, paradoxically, removes some control from the user by suggesting a reduced short-term memory load, which we can arguably classify as &#039;&#039;simplicity&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Raskin&#039;s [http://www.mprove.de/script/02/raskin/designrules.html Design Rules] are classified into five principles by the author, augmented by definitions and supporting rules. One principle is primarily aesthetic (a design problem arguably outside the bounds of this proposal) and one is a basic endorsement of testing; the remaining three reflect philosophies similar to Shneiderman&#039;s: reliability or &#039;&#039;predictability&#039;&#039;; &#039;&#039;simplicity&#039;&#039; or &#039;&#039;efficiency&#039;&#039; (which we can construe as two sides of the same coin); and, finally, a new concept of &#039;&#039;uninterruptibility&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Maeda&#039;s [http://lawsofsimplicity.com/?cat=5&amp;amp;order=ASC Laws of Simplicity] are fewer, and ostensibly emphasize &#039;&#039;simplicity&#039;&#039; exclusively, although elements of &#039;&#039;use&#039;&#039; as related by Shneiderman&#039;s rules and &#039;&#039;efficiency&#039;&#039; as defined by Raskin may be facets of this simplicity. Google&#039;s corporate mission statement presents [http://www.google.com/corporate/ux.html Ten Principles], only half of which can be considered true interface guidelines. &#039;&#039;Efficiency&#039;&#039; and &#039;&#039;simplicity&#039;&#039; are cited explicitly, aesthetics are once again noted as crucial, and working within a user&#039;s trust is another application of &#039;&#039;predictability&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
====Elements and goals of a guideline set====&lt;br /&gt;
&lt;br /&gt;
Myriad rulesets exist, but variation among them is scarce: it indeed seems possible to parse these common rulesets into overarching principles that can be converted to, or associated with, quantifiable cognitive properties. For example, &#039;&#039;simplicity&#039;&#039; likely has an analogue in the short-term memory or visual retention of the user, vis-a-vis the rule of [http://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=j5q0VvOGExYC&amp;amp;oi=fnd&amp;amp;pg=PA357&amp;amp;dq=seven+plus+or+minus+two&amp;amp;ots=prI3PKJBar&amp;amp;sig=vOZnqpnkXKGYWxK6_XlA4I_CRyI Seven, Plus or Minus Two]. &#039;&#039;Predictability&#039;&#039; likewise may have an analogue in Activity Theory, in regard to a user&#039;s perceptual expectations for a given action; &#039;&#039;uninterruptibility&#039;&#039; has implications for cognitive task-switching;&amp;lt;ref&amp;gt;http://portal.acm.org/citation.cfm?id=985692.985715&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774&amp;lt;/ref&amp;gt; and so forth.&lt;br /&gt;
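As one concrete example of operationalizing such an analogue, the &amp;quot;seven plus or minus two&amp;quot; rule could become a crude automated check; the threshold and slack are the classic values, but the mapping from memory span to a count of visible controls is our own assumption:&lt;br /&gt;

```python
def within_memory_budget(n_visible_controls, limit=7, slack=2):
    # Crude automated proxy for the Miller 7 +/- 2 short-term memory
    # span: flag a view whose count of simultaneously visible,
    # distinct controls exceeds the upper bound of the span.
    return n_visible_controls <= limit + slack

# Example checks for views with 5, 9, and 12 visible controls.
flags = {n: within_memory_budget(n) for n in (5, 9, 12)}
```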
&lt;br /&gt;
Within the scope of this proposal, we aim to reduce and refine these philosophies found in seminal rulesets and identify their logical cognitive analogues.  By assigning a quantifiable taxonomy to these principles, we will be able to rank and weight them with regard to their real-world applicability, developing a set of &amp;quot;meta-guidelines&amp;quot; and rules for applying them to a given interface in an automated manner.  Combined with cognitive models and multi-modal HCI analysis, we seek to develop, in parallel with these guidelines, the interface evaluation system responsible for their application. [[User:E J Kalafarski|E J Kalafarski]] 15:21, 6 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
:I. Introduction, applications of guidelines&lt;br /&gt;
::A. Application to automated usability evaluations (AUE)&lt;br /&gt;
:II. Popular and seminal examples&lt;br /&gt;
::A. Shneiderman&lt;br /&gt;
::B. Google&lt;br /&gt;
::C. Maeda&lt;br /&gt;
::D. Existing international standards&lt;br /&gt;
:III. Elements of guideline sets, relationship to design patterns&lt;br /&gt;
:IV. Goals for potentially developing a guideline set within the scope of this proposal&lt;br /&gt;
&lt;br /&gt;
===User interface evaluations===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Interaction capture====&lt;br /&gt;
&lt;br /&gt;
[http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4376144 Yi et al.] surveyed the visualization literature and categorized the types of interactions users face, as follows:&lt;br /&gt;
# Select: mark something as interesting &lt;br /&gt;
# Explore: show me something else &lt;br /&gt;
# Reconfigure: show me a different arrangement &lt;br /&gt;
# Encode: show me a different representation&lt;br /&gt;
# Abstract/Elaborate: show me more or less detail &lt;br /&gt;
# Filter: show me something conditionally &lt;br /&gt;
# Connect: show me related items &lt;br /&gt;
&lt;br /&gt;
Different GUI components may be able to perform the same type of interaction. We would like to categorize the GUI components or patterns that are used to bring about these interactions, giving us a library of components from which to complete a given task. The goal is to create components for a given interaction that minimize cost to the user. Because the cost of a component likely depends on the other components used, the designer&#039;s goal might be to choose the combination of components that minimizes this total cost. To do this, we need a way to measure costs, which is discussed in the next section.&lt;br /&gt;
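A minimal sketch of this combination search over a small, hypothetical component library; the component names and per-component costs below are illustrative placeholders for an empirically grounded cost model:&lt;br /&gt;

```python
from itertools import product

# Hypothetical component options per interaction type, with
# illustrative per-component user costs (lower is better).
LIBRARY = {
    "Filter": {"checkbox panel": 2.0, "query box": 3.5},
    "Encode": {"dropdown": 1.5, "toolbar": 1.0},
}

def cheapest_combination(library):
    # Exhaustive search over all component combinations; tractable
    # for small libraries. Real cost models would also capture
    # interactions between components rather than a plain sum.
    interactions = list(library)
    best = min(
        product(*(library[i].items() for i in interactions)),
        key=lambda combo: sum(cost for _, cost in combo),
    )
    return {i: name for i, (name, _) in zip(interactions, best)}
```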
&lt;br /&gt;
====Cost-based analyses====&lt;br /&gt;
In [http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4658124 A Framework of Interaction Costs in Information Visualization], Lam performs a survey of 32 user studies and classifies several types of costs that can be used for qualitative interface evaluation. The classification scheme is based on Donald Norman&#039;s [http://en.wikipedia.org/wiki/Seven_stages_of_action Seven Stages of Action] from his book, [http://www.amazon.com/Design-Everyday-Things-Donald-Norman/dp/0385267746 The Design of Everyday Things] ([http://www.networksplus.net/tracyj/everydaythings.pdf summary]).&lt;br /&gt;
&lt;br /&gt;
# &#039;&#039;&#039;Decision costs:&#039;&#039;&#039; How does user performance decrease when there is an overwhelming amount of data to observe or too many possible actions to take?&lt;br /&gt;
# &#039;&#039;&#039;System-power costs:&#039;&#039;&#039; How does the user translate a high-level goal into a sequence of allowable actions by the interface?&lt;br /&gt;
# &#039;&#039;&#039;Multiple input mode costs:&#039;&#039;&#039; Cost of providing an action selection system that is not unified, for example, if there is one button that does two different things, depending on context. &lt;br /&gt;
# &#039;&#039;&#039;Physical-motion costs:&#039;&#039;&#039; Physical cost to the user to interact with the interface, for example, measuring mouse movement costs with Fitts&#039; Law.&lt;br /&gt;
# &#039;&#039;&#039;Visual-cluttering costs:&#039;&#039;&#039; Cost due to unwanted visual distractions, such as a mouse hovering pop-up occluding part of the screen.&lt;br /&gt;
# &#039;&#039;&#039;View- and state-change costs:&#039;&#039;&#039; When the user causes the interface to change views, the new view should be consistent with the old one: it should meet the user&#039;s expectations of where things will be in the new view, based on their knowledge of the old one.&lt;br /&gt;
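These categories could feed a simple weighted interface score; the weights and the 0-10 rating scale below are hypothetical placeholders for empirically derived values:&lt;br /&gt;

```python
# Hypothetical weights over the six cost categories (higher means
# the category matters more); real weights would be fitted from
# user-study data rather than chosen by hand.
WEIGHTS = {
    "decision": 1.0,
    "system_power": 1.0,
    "input_mode": 0.5,
    "physical_motion": 0.8,
    "visual_clutter": 0.7,
    "view_change": 0.9,
}

def interface_cost(ratings, weights=WEIGHTS):
    # ratings: category -> analyst rating on a 0-10 scale (lower is
    # better); returns a single score for comparing interfaces.
    return sum(weights[c] * r for c, r in ratings.items())

score = interface_cost({"decision": 2, "visual_clutter": 4})
```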
&lt;br /&gt;
==== Evaluation in practice ====&lt;br /&gt;
User interfaces are usually evaluated in practice using two methods: &#039;&#039;usability inspection methods&#039;&#039;, where a programmer or one or more experts evaluate the interface through inspection; or &#039;&#039;usability testing&#039;&#039;, where empirical tests are performed with a group of naive human users. Usability inspection methods include the [http://en.wikipedia.org/wiki/Cognitive_walkthrough Cognitive walkthrough], [http://en.wikipedia.org/wiki/Heuristic_evaluation Heuristic evaluation], and [http://en.wikipedia.org/wiki/Pluralistic_walkthrough Pluralistic walkthrough]. While these inspection methods do not use naive human subjects, their details may be useful in helping to formalize which interactions occur between a user and an interface, and what each interaction&#039;s costs are for a given design.&lt;br /&gt;
&lt;br /&gt;
[http://portal.acm.org/citation.cfm?id=108862 Jeffries et al.] provide a real-world comparison of two usability inspection methods (heuristic evaluation and cognitive walkthrough), usability testing, and evaluation against published software guidelines for interface design.&lt;br /&gt;
&lt;br /&gt;
=== Multimodal HCI ===&lt;br /&gt;
&lt;br /&gt;
Continued advances in signal processing have given rise to a multitude of mechanisms that allow for rich, multimodal human-computer interaction. These include systems for head tracking, eye or pupil tracking, fingertip tracking, speech recognition, and detection of electrical impulses in the brain, among others [http://vr.kjist.ac.kr/~dhong/website/paperworks/hci2002coursePapers/April24/Sharma98.pdf Sharma-1998-TMH]. With ever-increasing computing power, integrating these systems into real-time applications has become a plausible endeavor.&lt;br /&gt;
&lt;br /&gt;
==== Head-tracking ====&lt;br /&gt;
:In virtual, stereoscopic environments, head-tracking has been exploited with great success to create an immersive effect, allowing a user to move freely and naturally while the user&#039;s viewpoint in the visual environment is dynamically updated. Head-tracking has been employed in non-immersive settings as well, though careful attention must be paid to unintended movements by the user, which may result in distracting visual effects.&lt;br /&gt;
&lt;br /&gt;
==== Pupil-tracking ====&lt;br /&gt;
:Pupil-tracking has been studied a great deal in the field of Cognitive Science ... (need some examples here from CogSci). In the HCI community, pupil-tracking has traditionally been used for post-hoc analysis of interface designs, and is particularly prevalent in web interface design. An alternative use of pupil-tracking is to employ it in real time as an actual mode of interaction. This has been examined in relatively few cases (citations), where typically the eyes are used to control an onscreen cursor. As with head-tracking, implementations of pupil-tracking must account for unintended eye movements, which are extremely frequent.&lt;br /&gt;
&lt;br /&gt;
==== Fingertip-tracking, Gestural recognition ====&lt;br /&gt;
:Fingertip tracking and gestural recognition of the hands are the subjects of much research in the HCI community, particularly in the Virtual Environment and Augmented Reality disciplines.  Less implicit than head or pupil-tracking, gestural recognition of the hands may draw upon the wealth of precedents readily observed in natural human interactions.  As sensing technologies become less obtrusive and more robust, this method of interaction has the potential to become quite effective.&lt;br /&gt;
&lt;br /&gt;
==== Speech Recognition ====&lt;br /&gt;
:Speech recognition has improved considerably, though integrating it effectively remains non-trivial in many applications.  (More on this later).&lt;br /&gt;
&lt;br /&gt;
==== Brain Activity Detection ====&lt;br /&gt;
:The use of electroencephalograms (EEGs) in HCI is quite recent, and because they offer limited degrees of freedom, few robust interfaces have been designed around them.  Some recent advances in the pragmatic use of EEGs in HCI research can be seen in [http://portal.acm.org/citation.cfm?id=1357054.1357187&amp;amp;coll=Portal&amp;amp;dl=GUIDE&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Grimes et al.]  The possibility of using brain function to interface with a machine is cause for great excitement in the HCI community, and further advances in non-invasive techniques for accessing brain function may allow teleo-HCI to become a reality.  &lt;br /&gt;
&lt;br /&gt;
In sum, the synchronized use of these modes of interaction makes it possible to architect an HCI system capable of sensing and interpreting many of the mechanisms humans use to transmit information to one another.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
=== Workflow analysis ===&lt;br /&gt;
&lt;br /&gt;
Research in workflow and interaction analysis remains relatively sparse, though its utility would appear to be manifold.  Tools for such analysis have the potential to facilitate data navigation, provide search mechanisms, and allow for more efficient collaborative discovery.  In addition, awareness and caching of interaction histories readily allows for explanatory presentations of results, and has the potential to provide training data for machine-learning mechanisms.&lt;br /&gt;
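&lt;br /&gt;
As a sketch of the kind of interaction-history capture described above, the following hypothetical recorder stores timestamped events that can later be replayed for explanatory presentations or exported as training data; the class and method names are our own invention, not part of any existing system.&lt;br /&gt;

```python
import json
import time

class InteractionRecorder:
    """Minimal event-based interaction history: each user action is stored
    as a timestamped record, which can later be replayed against an
    application object or exported for a learning component."""

    def __init__(self):
        self.events = []

    def record(self, action, **params):
        # One record per user action, with wall-clock timestamp.
        self.events.append({"t": time.time(), "action": action, "params": params})

    def replay(self, target):
        """Re-apply the captured history to `target`, which must expose a
        method per recorded action name."""
        for ev in self.events:
            getattr(target, ev["action"])(**ev["params"])

    def export(self):
        """Serialize the history, e.g. as training data."""
        return json.dumps(self.events)
```

A real capture layer would hook the toolkit&#039;s event loop rather than require explicit `record` calls; the sketch shows only the data model.&lt;br /&gt;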
&lt;br /&gt;
VisTrails is a workflow system developed at the SCI Institute at the University of Utah and built on the VTK visualization toolkit.  The primary purpose of the system is to increase performance when working with multiple visualizations simultaneously, which is accomplished by caching low-level workflow processes to reduce computational redundancy.  Three papers on VisTrails can be found here: [http://www.cs.brown.edu/people/trevor/Papers/Callahan-2006-MED.pdf Callahan-2006-MED], [http://www.cs.brown.edu/people/trevor/Papers/Callahan-2006-VVM.pdf Callahan-2006-VVM], [http://www.cs.brown.edu/people/trevor/Papers/Bavoil-2005-VEI.pdf Bavoil-2005-VEI]&lt;br /&gt;
&lt;br /&gt;
Jeff Heer of Stanford (formerly Berkeley) has presented work on using graphical interaction histories within the Tableau InfoVis application.  Though geared toward two-dimensional visualizations with clearly defined events, his work offers some very useful design guidelines for working with interaction histories, including evaluation from the deployment of his techniques within Tableau.  His paper from InfoVis &#039;08 can be seen here: [http://www.cs.brown.edu/people/trevor/Papers/Heer-2008-GraphicalHistories.pdf Heer-2008-GraphicalHistories]&lt;br /&gt;
&lt;br /&gt;
If you want to check out some of Trevor&#039;s work having to do with using interaction histories in 3D, time-varying scientific visualizations, his preliminary work that was presented at Vis &#039;08 can be seen here: [http://www.cs.brown.edu/people/trevor/trevor_iweb/Publications_files/obrien-2008-visDemo.pdf Abstract], [http://www.cs.brown.edu/people/trevor/trevor_iweb/Publications_files/obrien-2008-visPoster.pdf Poster]&lt;br /&gt;
&lt;br /&gt;
Optimizing workflows that have been captured -- Tovi?&lt;br /&gt;
&lt;br /&gt;
Does ethnography fit in here?&lt;br /&gt;
&lt;br /&gt;
==Significance==&lt;br /&gt;
&lt;br /&gt;
==Preliminary results==&lt;br /&gt;
&lt;br /&gt;
* Data obtained from user interactions show significant discrepancies between individuals&#039; performance. Much current software does not gracefully handle this fact.&lt;br /&gt;
* Video recordings give us an interesting measure of frustration.&lt;br /&gt;
* Video recordings show general eye-gaze.&lt;br /&gt;
* Keyboard shortcut usage is highly variable.&lt;br /&gt;
(Gideon)&lt;br /&gt;
&lt;br /&gt;
C. Preliminary Studies&lt;br /&gt;
*C.1 Overview of analysis method&lt;br /&gt;
**C.1.1 Justification for qualitative analysis&lt;br /&gt;
**C.1.2 Restrictions on qualitative analysis&lt;br /&gt;
**C.1.3 Terms&lt;br /&gt;
*C.2 Garage Band study&lt;br /&gt;
**C.2.1 Feasibility of a creative problem-solving task&lt;br /&gt;
**C.2.2 Unique aspects and pertinence&lt;br /&gt;
*C.3 Photoshop study&lt;br /&gt;
**C.3.1 Feasibility of a creative sandbox task&lt;br /&gt;
**C.3.2 Unique aspects and pertinence&lt;br /&gt;
*C.4 Lessons Learned&lt;br /&gt;
**C.4.1 Setup of study, metric methods&lt;br /&gt;
**C.4.2 Possibility for future critique-enabled user workflow&lt;br /&gt;
(Steven)&lt;br /&gt;
&lt;br /&gt;
===EJ and Jon===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Photoshop Study&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
We demonstrate that the efficiency of performing subtasks in Photoshop can be predicted by a simple model of human perceptual and cognitive abilities.  In particular, we show that several of the tools commonly used to perform basic operations in Photoshop often violate the user&#039;s expectations of how those tools should work or where they ought to be located within the interface.  These violations can be categorized as follows: (1) unintuitive relationships between adjustments to common tool parameters and their perceptual results (e.g. adjusting the Magic Wand tool&#039;s &amp;quot;tolerance&amp;quot; setting often leads to unintended selections); (2) inefficient means of adjusting tool parameters (e.g. adjusting the &amp;quot;tolerance&amp;quot; setting by clicking, typing a number, hitting enter, observing the results, and iterating until the desired perceptual effect is achieved); (3) mismatches between the user&#039;s expectations for the names and locations of tools or menu items and their actual names and locations (e.g. resizing a picture via the &amp;quot;transform&amp;quot; menu item); (4) availability of a tool in multiple locations, which imposes a cognitive load on the user searching for that tool, while the various contexts influence the user&#039;s expectations about the effects of using it; (5) [what else?]; etc.&lt;br /&gt;
&lt;br /&gt;
===Andrew Bragdon===&lt;br /&gt;
&lt;br /&gt;
I.  &#039;&#039;&#039;Goals&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A.  Evaluate feasibility of an integrated model of current theories in perception and HCI for predicting task performance; also evaluate the feasibility of this experimental methodology&lt;br /&gt;
&lt;br /&gt;
B.  Formative study should inspire future directions&lt;br /&gt;
&lt;br /&gt;
II.  &#039;&#039;&#039;Description of methodology/experimental procedure&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A.  In a highly-controlled task environment, understand how an integrated model comprising current theories in perception and HCI can predict/explain task performance&lt;br /&gt;
&lt;br /&gt;
B.  Users are trained extensively in the task to control for learning (the learning aspect of the task could be investigated in a future study)&lt;br /&gt;
&lt;br /&gt;
III. &#039;&#039;&#039;Results&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A. Users favored continuous rotation over static view or meditative pauses.&lt;br /&gt;
&lt;br /&gt;
B. Hand gesticulation seemed to be used as a method of validation in stereo views.&lt;br /&gt;
&lt;br /&gt;
C. Several verbal comments regarding occlusion suggest drawbacks to the tube view.&lt;br /&gt;
&lt;br /&gt;
D. Rotating is beneficial and perhaps necessary for this task.&lt;br /&gt;
&lt;br /&gt;
E. No references to external, offscreen information. In fact, very rarely did participants glance away from the screen.&lt;br /&gt;
&lt;br /&gt;
F. Our assessment of a typical session beginning with a new dataset:&lt;br /&gt;
&lt;br /&gt;
   1. Data loads, split second decision to begin rotating.&lt;br /&gt;
   2. Continuous rotation until proper viewpoint determined.&lt;br /&gt;
   3. Rocking back and forth interaction about the optimal viewpoint. (Thinking?)&lt;br /&gt;
   4. [In stereo views, participants were noted tilting their heads fairly consistently.] &lt;br /&gt;
&lt;br /&gt;
IV.  &#039;&#039;&#039;Discussion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A.  Some of the users&#039; strategies can be explained by current theories in perception&lt;br /&gt;
&lt;br /&gt;
V.  &#039;&#039;&#039;Conclusion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
VI.  &#039;&#039;&#039;Future Directions (we will be running this study this week!)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A.  Now that we have investigated highly controlled task performance, explore larger workflow context&lt;br /&gt;
&lt;br /&gt;
B.  Perform a study examining how interruptions affect user performance, and explore the coping strategies users employ in such an environment&lt;br /&gt;
&lt;br /&gt;
1.  Software developers (a good example of a challenging, and creative type of information work) will receive task requests by email; notifications will appear on their screen in real time&lt;br /&gt;
&lt;br /&gt;
2.  Each email will have a different priority (e.g., low, high, emergency)&lt;br /&gt;
&lt;br /&gt;
3.  Participants will be asked to manage priorities effectively to accomplish the tasks given&lt;br /&gt;
&lt;br /&gt;
4.  Once they begin working, we will &amp;quot;interrupt&amp;quot; them at controlled times with new task requests of different priorities&lt;br /&gt;
&lt;br /&gt;
5.  We will analyze their actions for coping strategies, metawork, and working spheres to try to understand how the larger workflow context is affected by interruptions&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Adam Darlow===&lt;br /&gt;
&lt;br /&gt;
I.  &#039;&#039;&#039;Goals&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A.  Evaluate the interactions between various cognitive principles and design principles. There are three basic relations:&lt;br /&gt;
&lt;br /&gt;
   1. The cognitive principle is the motivation behind the design principle.&lt;br /&gt;
   2. The cognitive principle suggests a method for achieving the design principle.&lt;br /&gt;
   3. The cognitive principle and design principle are unrelated. (Hopefully few)&lt;br /&gt;
&lt;br /&gt;
B.  Make design rules which are suggested by the combination of a cognitive principle and a design principle.&lt;br /&gt;
&lt;br /&gt;
II.  &#039;&#039;&#039;Description of methodology/experimental procedure&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A.  Collecting commonly accepted design principles from the literature on interface design and well established cognitive principles from the cognitive psychology literature and constructing a matrix which crosses them. Most squares in the matrix should suggest specific design rules.&lt;br /&gt;
&lt;br /&gt;
III. &#039;&#039;&#039;Results&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As a preliminary effort, I have chosen the following two cognitive principles and three design principles:&lt;br /&gt;
&lt;br /&gt;
Cognitive principles&lt;br /&gt;
&lt;br /&gt;
#C1. People derive complex associations and causal interpretations from temporal correlations and patterns.&lt;br /&gt;
&lt;br /&gt;
#C2. People have limited working memory (7 ± 2 items), but each slot can hold a chunk of related information.&lt;br /&gt;
&lt;br /&gt;
Design Principles (from Maeda (TBD link))&lt;br /&gt;
&lt;br /&gt;
#D1. Achieve simplicity through thoughtful reduction.&lt;br /&gt;
&lt;br /&gt;
#D2. Organization makes a system of many appear fewer.&lt;br /&gt;
&lt;br /&gt;
#D3. Knowledge makes everything simpler.&lt;br /&gt;
&lt;br /&gt;
The resulting matrix entries are as follows:&lt;br /&gt;
&lt;br /&gt;
#C1 + D1. Remove extraneous correlations. Things shouldn&#039;t consistently and apparently change or happen in conjunction unless they are actually related and their relation is important to the user.&lt;br /&gt;
&lt;br /&gt;
#C1 + D2. Use temporal and spatial contiguity to help users organize and group  multiple events meaningfully. &lt;br /&gt;
&lt;br /&gt;
#C1 + D3. Use temporal correlations to effectively teach the important  causal relations inherent in the interface.&lt;br /&gt;
&lt;br /&gt;
#C2 + D1. Reduce the interface such that a user has to be aware of no more than 5 items simultaneously.&lt;br /&gt;
&lt;br /&gt;
#C2 + D2. Groups of semantically related items can for many purposes be treated as a single item.&lt;br /&gt;
&lt;br /&gt;
#C2 + D3. Teach users how things are related so that they can be chunked.&lt;br /&gt;
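&lt;br /&gt;
The matrix construction above can also be sketched computationally; the identifiers and abbreviated rule texts below simply mirror the draft entries, and the representation (a dict keyed by principle pairs) is one illustrative choice, not a committed design.&lt;br /&gt;

```python
from itertools import product

# Abbreviated statements of the principles listed above (illustrative only).
cognitive = {"C1": "temporal correlations imply causal interpretations",
             "C2": "limited working memory (7 +/- 2 chunks)"}
design = {"D1": "thoughtful reduction",
          "D2": "organization makes many appear fewer",
          "D3": "knowledge makes everything simpler"}

# Every (cognitive, design) pair is a cell that should suggest a design rule;
# cells left as None flag pairs that may turn out to be unrelated.
matrix = {(c, d): None for c, d in product(cognitive, design)}

matrix[("C1", "D1")] = "Remove extraneous correlations."
matrix[("C2", "D2")] = "Treat groups of related items as single chunks."

def unfilled(m):
    """Cells still lacking a rule, i.e. candidate unrelated pairs."""
    return [cell for cell, rule in m.items() if rule is None]
```

Expanding the matrix then amounts to adding principles to either dict; the cross-product regenerates the cells automatically.&lt;br /&gt;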
&lt;br /&gt;
&lt;br /&gt;
V.  &#039;&#039;&#039;Conclusion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
VI.  &#039;&#039;&#039;Future Directions (we will be running this study this week!)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To expand the matrix and evaluate the resulting design rules.&lt;br /&gt;
&lt;br /&gt;
==Research plan==&lt;br /&gt;
&lt;br /&gt;
We can speculate here about the details of a longer-term research plan, but it may not be necessary to actually flesh out this part of the &amp;quot;proposal&amp;quot;.  There does need to be enough to define what the overall proposed work is, but that may show up in earlier sections.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Adam Darlow</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal&amp;diff=2274</id>
		<title>CS295J/Research proposal</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal&amp;diff=2274"/>
		<updated>2009-03-04T20:36:39Z</updated>

		<summary type="html">&lt;p&gt;Adam Darlow: Added preliminary results - Adam&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Project summary==&lt;br /&gt;
&lt;br /&gt;
We propose to integrate theories of cognition, models of perception, rules of design, and concepts from the discipline of human-computer interaction to develop a predictive model of user performance in interacting with computer software for visual and analytical work.  Our proposed model comprises a set of computational elements representing components of human cognition, memory, or perception.  The collective abilities and limitations of these elements can be used to provide feedback on the likely efficacy of user interaction techniques.&lt;br /&gt;
&lt;br /&gt;
The choice of human computational elements will be guided by several models or theories of cognition and perception, including Gestalt, Distributed, Gibson, ???(where pathway, when pathway)???, ???working-memory???, ..., and ???.  The list of elements will be extensible.  The framework coupling them will allow for experimental predictions of utility of user interfaces that can be verified against human performance.&lt;br /&gt;
&lt;br /&gt;
Coupling the system with users will involve a data capture mechanism for collecting the communications between a user interface and a user.  These will be primarily event-based, and will include a new low-cost camera-based eye-tracking system.&lt;br /&gt;
&lt;br /&gt;
During early development, existing interfaces will be evaluated manually to characterize their &lt;br /&gt;
&lt;br /&gt;
(we need some way to specify interaction techniques...)&lt;br /&gt;
&lt;br /&gt;
==Specific Contributions==&lt;br /&gt;
# A model of human cognitive and perceptual abilities when using computers&lt;br /&gt;
## Impact: Such a model would allow us to predict human performance with interfaces.  Validation of the model would allow us to more rapidly converge on ideal interfaces while simultaneously ruling out sub-optimal ones.&lt;br /&gt;
## 3-Week Feasibility Study: (1) Distill (perhaps from review articles) the major findings in all of the relevant subfields into a set of principles that will be grouped to form appropriate model components.  Some of the most relevant subfields include: memory, attention, visual perception, psychoacoustics, task switching, categorization, event perception, haptics (ANY OTHERS?).  (2)  Use these to devise a simple predictive model of human interaction.  This model could simply consist of the set of design principles but should allow some form of quantitative scoring/evaluation of interfaces.  (3) Develop a small set (5-10) of candidate GUIs that are designed to help the user accomplish the same overarching task(s) (e.g. importing, analyzing, and flexibly graphing data in Matlab).   (4)  Test/validate the model by comparing predicted performance to actual performance with the GUIs.&lt;br /&gt;
## Risks/Costs: There are always potential risks to any human subjects that might participate in testing the model which might necessitate IRB approval.  Costs might include: (1) paying for any necessary hardware and software for developing, displaying, and testing the GUIs, and (2) paying for human subjects.&lt;br /&gt;
# Something about design rules collected and merged&lt;br /&gt;
## Something comparing these collected rules to a baseline (establishing their value)&lt;br /&gt;
## ???&lt;br /&gt;
# A predictive, fully-integrated model of user workflow which encompasses low-level tasks, working spheres, communication chains, interruptions and multi-tasking. (OWNER: Andrew Bragdon)&lt;br /&gt;
##Traditionally, software design and usability testing have focused on low-level task performance.  However, prior work (Gonzales, et al.) provides strong empirical evidence that users also work at a higher, &#039;&#039;working sphere&#039;&#039; level.  Su et al. develop a predictive model of task switching based on communication chains.  Our model will specifically identify and predict key aspects of higher-level information work behaviors, such as task switching.  We will conduct initial exploratory studies to test specific instances of this high-level hypothesis.  We will then use the refined model to identify specific predictions for the outcome of a formal, ecologically valid study involving a complex, non-trivial application.&lt;br /&gt;
## Impact: To truly design computing systems which are designed around the way users work, we must understand &#039;&#039;how&#039;&#039; users work.  To do this, we need to establish a predictive model of user workflow that encompasses multiple levels of workflow: individual task items, larger goal-oriented working spheres, multi-tasking behavior, and communication chains.  Current information work systems are almost always designed around the lowest level of workflow, the individual task, and do not take into account the larger workflow context.  Fundamentally, a predictive model would allow us to design computing systems which significantly increase worker productivity in the United States and around the world, by designing these systems around the way people work.&lt;br /&gt;
## Risk and Costs: Risk will be an important factor in this research, and thus a core goal of our research agenda will be to manage it.  The most effective way to do this will be to compartmentalize the risk by conducting the empirical investigations that will form the basis for the model in parallel across the separate areas: low-level tasks, working spheres, communication chains, interruptions, and multi-tasking.  While one experiment may become bogged down in details, the others will be able to advance sufficiently to contribute to a strong core model, even if one or two facets encounter setbacks during the course of the research agenda.  The primary cost drivers will be the preliminary empirical evaluations, the final system implementation, and the final experiments designed to test the original hypothesis.  The cost will span student support, both Ph.D. and Master&#039;s students, as well as full-time research staff.  Projected cost: $1.5 million over three years.&lt;br /&gt;
## 3-week Feasibility Study:  To ascertain the feasibility of this project we will conduct an initial pilot test to investigate the core idea: a predictive model of user workflow.  We will spend 1 week studying the real workflow of several people through job shadowing.  We will then create two systems designed to help a user accomplish some simple information work task.  One system will be designed to take larger workflow into account (experimental group), while one will not (control group).  In a synthetic environment, participants will perform a controlled series of tasks while receiving interruptions at controlled times.  If the two groups perform roughly the same, then we will need to reassess this avenue of research.  However, if the two groups perform differently, then our pilot test will have lent support to our approach and core hypothesis.&lt;br /&gt;
# A low-overhead mechanism for capturing event-based interactions between a user and a computer, including web-cam based eye tracking.  (should we buy or find out about borrowing use of pupil tracker?)  &#039;&#039;Should we include other methods of interaction here?  Audio recognition seems to be the lowest cost.  It would seem that a system that took into account head-tracking, audio, and fingertip or some other high-DOF input would provide a very strong foundation for a multi-modal HCI system.  It may be more interesting to create a software toolkit that allows for synchronized usage of those inputs than a low-cost hardware setup for pupil-tracking.  I agree pupil-tracking is useful, but developing something in-house may not be the strongest contribution we can make with our time.&#039;&#039; (Trevor)&lt;br /&gt;
## Accuracy study of eye tracking (2 cameras?  double as an input device?)&lt;br /&gt;
## ???&lt;br /&gt;
# A classification of standard design guidelines into overarching principles with quantifiable cognitive analogues &#039;&#039;&#039;OWNER&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 16:16, 6 February 2009 (UTC)&lt;br /&gt;
## A ranking/weighting for these principles based on their cognitive relevance&lt;br /&gt;
## An interface evaluation mechanism for the application of these principles and their associated guidelines.&lt;br /&gt;
# A systematic method to determine task distribution based on psychological principles. (Owner - [http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon])&lt;br /&gt;
## The theory of distributed cognition (Clark-1998-TEM) is a well-suited basis for constructing a human-computer interaction framework (Hollan-2000-DCF). However, a systematic method of determining which tasks should be distributed to which agent in a distributed environment has yet to be clearly defined. I propose that the cognitive psychology literature is rife with empirical evidence on task performance variables. Dual-process theories (Evans-2003-ITM) usually agree on the nature of the high-level cognitive operations used in human reasoning. It is also argued that low-level processing, which is based on perceptual similarity, contiguity, and association, is carried out by a set of autonomous subsystems. In general, humans excel at these low-level functions relative to traditional von Neumann architectures (and even current neural networks); at the same time, cognitive science has recently focused less on high-level reasoning in humans, as the evidence shows that we rarely engage in such demanding cognitive operations. We therefore believe that an optimal configuration for distributed cognition among people and computers will take advantage of these specialties and deficiencies, and distribute tasks accordingly. Our research project will contribute a set of guidelines, or heuristics, to allow engineers to effectively determine which tasks should be assigned to which entities.&lt;br /&gt;
## The impact of this contribution will be a systematic approach to interface design. Engineers will finally have a well-documented standard about how to determine what operations humans should be responsible for, and which should be off-loaded to the computer.&lt;br /&gt;
## There is no risk for this contribution. Costs associated will be an extensive search of cognitive psychology literature for the past 25 years or so. Research on memory, reasoning, perception and more will be required in order to conduct a complete and accurate assessment.&lt;br /&gt;
## In the course of 30 hours, we may perform a hand-analysis of the empirical results in psychology in order to develop a set of approximately 10 guidelines or rules.&lt;br /&gt;
# A method for collecting data on user performance in cognitive, perceptual, and motor-control tasks that requires less monetary cost, allows for a greater number of samples, and measures user improvements over time. (Owner - Eric)&lt;br /&gt;
##In order to reach a model of human cognitive and perceptual abilities when using computers, experimental analysis of human performance on these tasks will likely be necessary. User studies can often aid in this analysis, but they require substantial money and time, and are subject to user fatigue. Alternately, we propose a web-based method for evaluating user performance in perceptual, motor-control, and cognitive tasks. The idea is to take a task that would normally be measured through laboratory user studies and map it into a simple online game. Somewhat similar work has been done by Popovic et al. at the University of Washington, in that they took the task of folding proteins and mapped it into an online game ( http://www.economist.com/displaystory.cfm?story_id=11326188 ) with much success. We will analyze the value of this method by comparing it to similar tasks performed in laboratory experiments, both in terms of user performance and deployment costs.&lt;br /&gt;
##By converting the task into a simple game, we hope to reduce the problem of user fatigue. Additionally, if the game is played on a social networking site, we are able to track basic information of users who perform the tasks and, more importantly, can identify returning users. Thus, we can track not only a user&#039;s performance, but also how they improve at a given task over time.&lt;br /&gt;
##There are no clear risks involved with this study. Potential costs would be those required for development of each experiment and for web hosting.&lt;br /&gt;
##As a prototype, we can select one particular task to map into a simple online game. To check for feasibility, we need to ensure that the results we get from our proposed method are similar to the results found in laboratory settings. There are two possible effects we must test for: bias in results due to the mapping into a game, and bias due to the sample of subjects or other changes caused by the web-based component. A simple test would be to first test users in a lab using traditional methods as a baseline, and then measure performance in the lab using the game-based mapping. This will determine whether the game appropriately measures the given task. Next, an online version of the game can be introduced, and performance can be compared with the laboratory settings. If performance is similar across all of these tests, we will have found a method for measuring low-level tasks that allows for many samples at minimal cost.&lt;br /&gt;
&lt;br /&gt;
==Specific Aims==&lt;br /&gt;
&lt;br /&gt;
# build X&lt;br /&gt;
# build Y&lt;br /&gt;
# run experiment Z&lt;br /&gt;
# compare X with existing approach Q&lt;br /&gt;
# Develop a scoring system for interfaces to evaluate the degree to which all changes and causal relations are tracked by motion cues that are contiguous in time and/or space.&lt;br /&gt;
# Accurately assess computational and psychological costs for tasks and subtasks.  To do this, we will develop two non-trivial prototype systems; a conventional control system and a novel system which is based on our model of task switching.  We will use our model to make specific predictions about relative task performance and user affect responses, and then test these predictions empirically in a formal study.&lt;br /&gt;
# Develop model that accounts for qualitatively different psychological tasks&lt;br /&gt;
# Test model on real-world data&lt;br /&gt;
# Build classification for design guidelines based on cognitive analogues [[User:E J Kalafarski|E J Kalafarski]] 16:18, 6 February 2009 (UTC)&lt;br /&gt;
# Parse and classify existing guidelines by this new metric [[User:E J Kalafarski|E J Kalafarski]] 16:18, 6 February 2009 (UTC)&lt;br /&gt;
# Build an automated evaluation rubric for applying these classifications to interfaces in development [[User:E J Kalafarski|E J Kalafarski]] 16:18, 6 February 2009 (UTC)&lt;br /&gt;
# Long-term goal: build a mixed-initiative interface generation system that takes some basic GUI requirements from the designer as input and attempts to maximize its score on the &amp;quot;evaluation rubric&amp;quot; within these constraints. (Eric)&lt;br /&gt;
# Develop model component to predict performance following interruptions or changes between work spheres.&lt;br /&gt;
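&lt;br /&gt;
To make the motion-cue scoring aim above concrete, here is a hedged sketch: each cause/effect event pair in an interface is scored by how contiguous it is in time and space, with the thresholds chosen arbitrarily for illustration rather than derived from any perceptual data.&lt;br /&gt;

```python
import math

def contiguity_score(pairs, max_dt=0.5, max_dist=100.0):
    """Score causally related (cause, effect) event pairs by temporal and
    spatial contiguity: 1.0 when an effect follows its cause immediately
    and nearby, decaying linearly to 0 beyond the thresholds.
    Each event is a (t_seconds, x, y) tuple; thresholds are illustrative."""
    if not pairs:
        return 0.0
    score = 0.0
    for cause, effect in pairs:
        dt = effect[0] - cause[0]
        dist = math.hypot(effect[1] - cause[1], effect[2] - cause[2])
        temporal = max(0.0, 1.0 - dt / max_dt)
        spatial = max(0.0, 1.0 - dist / max_dist)
        # Both dimensions must be contiguous for the cue to register.
        score += temporal * spatial
    return score / len(pairs)
```

An interface whose visible changes all track their causes closely would score near 1.0; a real scoring system would need perceptually motivated thresholds and a richer event model.&lt;br /&gt;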
&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
===Models of cognition===&lt;br /&gt;
There are several models of cognition, ranging from fundamental aspects of neurological processing to extremely high-level psychological analysis.  Three main theories seem to have become recognized as the most helpful in conceptualizing the actual process of HCI.  These models all agree that one cannot accurately analyze HCI by viewing the user without context, but the extent and nature of this context varies greatly.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[http://en.wikipedia.org/wiki/Activity_Theory Activity Theory]&#039;&#039;&#039;, developed in the early 20th century by Russian psychologists S.L. Rubinstein and A.N. Leontiev, posits the existence of four discrete aspects of human-computer interaction.  The &amp;quot;Subject&amp;quot; is the human interacting with the item, who possesses an &amp;quot;Object&amp;quot; (e.g. a goal) which they hope to accomplish by using a tool.  The Subject conceptualizes the realization of the Object via an &amp;quot;Action&amp;quot;, which may be as simple or complex as is necessary.  The Action is made up of one or more &amp;quot;Operations&amp;quot;, the most fundamental level of interaction including typing, clicking, etc.&lt;br /&gt;
&lt;br /&gt;
A key concept in Activity Theory is that of the artifact, which mediates all interaction.  The computer itself need not be the only artifact in HCI - others include all sorts of signs, algorithmic methods, instruments, etc.&lt;br /&gt;
&lt;br /&gt;
A longer synopsis of Activity Theory may be found at [http://mcs.open.ac.uk/yr258/act_theory/ this website].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The [http://en.wikipedia.org/wiki/Situated_cognition Situated Action Model]&#039;&#039;&#039; focuses on emergent behavior, emphasizing the subjective aspect of human-computer interaction and the consequent need to accommodate a wide variety of users.  This model proposes the least contextual interaction, and seems to maintain that the interactive experience is determined entirely by the user&#039;s ability to use the system in question.  While limiting, this concept of usability can be very informative when designing for less tech-savvy users.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[http://en.wikipedia.org/wiki/Distributed_cognition Distributed Cognition]&#039;&#039;&#039; proposes that the computer (or, as in Activity Theory, any other artifact) can be used and ought to be thought of as an extension of the mental processing of the human.  This is not to say that the two are of equal or even comparable cognitive abilities, but that each has unique strengths and that recognition of and planning around these relative advantages can lead to increased efficiency and effectiveness.  The rotation of blocks in Tetris serves as a perfect example of this sort of cognitive symbiosis.&lt;br /&gt;
&lt;br /&gt;
(Steven)&lt;br /&gt;
&lt;br /&gt;
====Workflow Context====&lt;br /&gt;
There are at least two levels at which users work ([http://portal.acm.org/citation.cfm?id=985692.985707&amp;amp;coll=portal&amp;amp;dl=ACM&amp;amp;CFID=20781736&amp;amp;CFTOKEN=83176621 Gonzales, et al., 2004]).  Users accomplish individual low-level tasks that are part of larger &#039;&#039;working spheres&#039;&#039;; for example, an office worker might send several emails, create several Post-It (TM) note reminders, and then edit a Word document, each of these smaller tasks being part of a single larger working sphere of &amp;quot;adding a new section to the website.&amp;quot;  Thus, it is important to understand this larger workflow context - which often involves extensive multi-tasking, as well as switching between a variety of computing devices and traditional tools, such as notebooks.  In this study, the information workers surveyed typically switched individual tasks every 2 minutes and maintained many simultaneous working spheres, switching between them on average every 12 minutes.  This frenzied pace of switching tasks and working spheres suggests that users will not be using a single application or device for a long period of time, and that affordances to support this characteristic pattern of information work are important.&lt;br /&gt;
&lt;br /&gt;
Czerwinski, et al. conducted a [http://portal.acm.org/citation.cfm?id=985692.985715&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 diary study] of task switching and interruptions of users in 2004.  This study showed that task complexity, task duration, length of absence, and number of interruptions all affected the users&#039; own perceived difficulty of switching tasks.  [http://delivery.acm.org/10.1145/1250000/1240730/p677-iqbal.pdf?key1=1240730&amp;amp;key2=4525483321&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Iqbal, et al.] studied task disruption and recovery in a field study, and found that users often visited several applications as a result of an alert, such as a new email notification, and that 27% of task suspensions resulted in 2 hours or more of disruption.  Users in the study said that losing context was a significant problem in switching tasks, and led in part to the length of some of these disruptions.  This work hints at the importance of providing cues to users to maintain and regain lost context during task switching.&lt;br /&gt;
&lt;br /&gt;
(Andrew Bragdon - OWNER)&lt;br /&gt;
&lt;br /&gt;
The problem of task switching is exacerbated when some tasks are more routine than others. When a person intends to switch from a routine task to a novel task at some later time, they often forget the context of the original task ([http://web.ebscohost.com/ehost/pdf?vid=1&amp;amp;hid=7&amp;amp;sid=54ec1e22-3df2-462c-b484-7a7c052c2173%40SRCSM1 Aarts et al., 1999]). Also, if both tasks are done in the same context, with the same tools or the same materials, people have difficulty inhibiting the routine task while doing the novel task (Stroop, 1935). This inhibition also makes switching back to the routine task slower (Allport et al., 1994). All of these problems can be alleviated to some degree by salient cues in the environment. The intention to switch is more easily acted upon when there is a salient reminder at the appropriate time (McDaniel and Einstein, 1993), and associating different environmental cues with different goals can automatically trigger appropriate behavior ([http://web.ebscohost.com/ehost/pdf?vid=1&amp;amp;hid=4&amp;amp;sid=68f77032-f093-4139-a833-760d2217b513%40sessionmgr9 Aarts and Dijksterhuis, 2003]).&lt;br /&gt;
&lt;br /&gt;
(Adam)&lt;br /&gt;
&lt;br /&gt;
(Edited by Andrew)&lt;br /&gt;
&lt;br /&gt;
====Quantitative Models: Fitts&#039;s Law, Steering Law====&lt;br /&gt;
Fitts&#039;s law and the steering law are examples of quantitative models that predict user performance when using certain types of user interfaces.  In addition to these classic models, [http://delivery.acm.org/10.1145/1250000/1240850/p1495-cao.pdf?key1=1240850&amp;amp;key2=9904483321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Cao and Zhai] developed and validated a quantitative model of human performance of pen-stroke gestures in 2007.  [http://tlaloc.sfsu.edu/~lank/research/appearing/FSS604LankE.pdf Lank and Saund] utilized a model that used curvature to predict the speed of a pen as it moved across a surface, helping to disambiguate target-selection intent.&lt;br /&gt;
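As a concrete illustration of how such a model yields predictions, the following minimal sketch applies the Shannon formulation of Fitts&#039;s law, MT = a + b log2(D/W + 1); the coefficients a and b are illustrative placeholders, not fitted values.&lt;br /&gt;

```python
import math

# Sketch of the Shannon formulation of Fitts's law:
#   MT = a + b * log2(D / W + 1)
# where D is the distance to the target and W is the target width.
# The coefficients a and b below are illustrative placeholders.

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Predicted movement time (seconds) to acquire a target."""
    index_of_difficulty = math.log2(distance / width + 1)
    return a + b * index_of_difficulty

# A distant, small target is predicted to take longer to acquire
# than a near, large one.
slow = fitts_movement_time(distance=800, width=10)
fast = fitts_movement_time(distance=100, width=50)
```

Fitted values of a and b would come from regressing observed movement times against the index of difficulty for a particular device and user population.&lt;br /&gt;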
&lt;br /&gt;
In addition, quantitative models are often tested against new interfaces to verify that they hold.  For example, [http://portal.acm.org/citation.cfm?id=1054972.1055012&amp;amp;coll=GUIDE&amp;amp;dl=ACM&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Grossman et al.] verified that their Bubble Cursor approach to enlarging effective pointing target sizes obeyed Fitts&#039;s law for actual distance traveled.&lt;br /&gt;
&lt;br /&gt;
In addition to formal models, machine learning techniques have been applied to modeling user interaction as well.  For example, [http://delivery.acm.org/10.1145/1250000/1240669/p271-hurst.pdf?key1=1240669&amp;amp;key2=6465483321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Hurst, et al.], used a learning classifier, trained on low-level mouse and keyboard usage patterns, to identify novice and expert users dynamically with accuracies as high as 91%.  This classifier was then used to provide different information and feedback to the user as appropriate.&lt;br /&gt;
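To make the flavor of such a classifier concrete, the sketch below uses a nearest-centroid learner over two invented features (mean pause between keystrokes and menu-selection time); the actual features and learner of Hurst, et al. are not reproduced here.&lt;br /&gt;

```python
# Toy novice/expert classifier in the spirit of Hurst, et al.:
# train on labeled feature vectors, then label new users by the
# nearest class centroid. Features and data are invented.

def centroid(rows):
    n = len(rows)
    return tuple(sum(r[i] for r in rows) / n for i in range(len(rows[0])))

def train(samples):
    """samples: {label: [(feature1, feature2), ...]}"""
    return {label: centroid(rows) for label, rows in samples.items()}

def classify(model, features):
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(model, key=lambda label: dist2(model[label]))

# Illustrative training data: experts pause less between keystrokes
# and select menu items faster (both features in seconds).
model = train({
    "expert": [(0.2, 0.8), (0.3, 1.0)],
    "novice": [(1.1, 2.5), (0.9, 3.0)],
})
label = classify(model, (0.25, 0.9))  # falls near the expert centroid
```

An adaptive interface could then branch on the predicted label, as the surveyed work does, to tailor feedback to the user.&lt;br /&gt;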
&lt;br /&gt;
(Andrew Bragdon - OWNER)&lt;br /&gt;
&lt;br /&gt;
====Distributed cognition====&lt;br /&gt;
&lt;br /&gt;
Distributed cognition is a theory in which thought takes place both inside and outside the brain. Humans have a great ability to use tools and to incorporate their environments into their sphere of thinking. Clark puts it nicely in [http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature#Cognition Clark-1994-TEM].&lt;br /&gt;
&lt;br /&gt;
Therefore, when considering HCI design, optimal configurations will treat the brain, person, interface, and computer as a holistic system across which cognition is distributed.&lt;br /&gt;
&lt;br /&gt;
In practical terms, the issue at hand for our proposal is how to best maximize utility by distributing the cognitive tasks at hand to different components of the whole system. Simply, which tasks can we off-load to the computer to do for us, faster and more accurately? What tasks should we purposely leave the computer out of?&lt;br /&gt;
&lt;br /&gt;
Typically, the tasks most eligible to be off-loaded are those we perform poorly. Conveniently, the tasks computers perform poorly are those at which we excel. Here are a few examples:&lt;br /&gt;
&lt;br /&gt;
*Computers&#039; areas of expertise: number crunching, memory, logical reasoning, precision&lt;br /&gt;
*Humans&#039; areas of expertise: associative thought, real-world knowledge, social behavior, alogical reasoning, tolerance of imprecision&lt;br /&gt;
&lt;br /&gt;
Using this division of cognitive labor allows us to optimize task work flows. Ignoring it puts strain and bottlenecks at either the computer, the human, or the interface. The field of HCI is full of examples of failures which can be attributed to not recognizing which tasks should be handled by which sub-system.&lt;br /&gt;
&lt;br /&gt;
As a heuristic for dividing this cognitive labor, one might turn to the dual-process theory literature [http://vrl.cs.brown.edu/wiki/CS295J/Literature Evans-2003-ITM]. What is most often called System 1 covers what humans are good at, while System 2 tasks are what computers do well.&lt;br /&gt;
&lt;br /&gt;
(Owner - [http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon])&lt;br /&gt;
&lt;br /&gt;
====Information Processing Approach to Cognition====&lt;br /&gt;
The dominant approach in Cognitive Science is called information processing. It sees cognition as a system that takes in information from the environment, forms mental representations and manipulates those representations in order to create the information needed to achieve its goals. This approach includes three levels of analysis originally proposed by Marr (will cite):&lt;br /&gt;
# Computational - What are the goals of a process or representation? What are the inputs and desired outputs required of a system which performs a task? Models at this level of analysis are often considered normative models, because any agent wanting to perform the task should conform to them. Rational agent models of decision making, for example, belong at this level of analysis.&lt;br /&gt;
# Process/Algorithmic - What are the processes or algorithms involved in how humans perform the task? This is the most common level of analysis as it focuses on the mental representations, manipulations and computational faculties involved in actual human processing. Algorithmic descriptions of human capabilities and limitations, such as working memory size, belong at this level of analysis.&lt;br /&gt;
# Implementation - How are the processes and algorithms realized in actual biological computation? Dopamine theories of reward learning, for example, belong at this level of analysis.&lt;br /&gt;
The information processing approach is often contrasted with the distributed cognition approach. Its advantage is that it finds general mechanisms that are valid across many different contexts and situations. Its disadvantage is that it can have difficulty explaining the rich interactions between people and their environment.&lt;br /&gt;
&lt;br /&gt;
In considering users as information processors, interfaces should take into account people&#039;s computational limitations on short term memory, learning and vision as well as the algorithms and representations that they use to process information and pursue goals.&lt;br /&gt;
&lt;br /&gt;
(Adam)&lt;br /&gt;
&lt;br /&gt;
===Models of perception===&lt;br /&gt;
&lt;br /&gt;
====Gibsonianism====&lt;br /&gt;
(relevant stuff about Gibson&#039;s theory)&lt;br /&gt;
We will build on top of this theory/model by ... .&lt;br /&gt;
&lt;br /&gt;
Gibsonianism, named after James J. Gibson and more commonly referred to as ecological psychology, is an epistemological direct-realist theory of perception and action.  In contrast to information-processing and cognitivist approaches, which generally assume that perception is a constructive process operating on impoverished sense-data inputs (e.g. photoreceptor activity) to generate representations of the world with added structure and meaning (e.g. a mental or neural &amp;quot;picture&amp;quot; of a chair), ecological psychology treats perception as direct, non-inferential, unmediated (by retinal images or mental representations) epistemic contact with behaviorally-relevant features of the environment (Warren, 2005).  The possibilities for action that the environment offers a given animal are taken to be specified by information available in structured energy distributions (e.g. the optic array of light arriving at the eyes), and these possibilities for action constitute the affordances of the environment with respect to that animal (Gibson, 1986).&lt;br /&gt;
&lt;br /&gt;
Gibson&#039;s notion of affordance has many implications for our enterprise; however, it is worth noting that the original definition of affordance emphasizes possibilities for action and not their relative likelihoods.  For example, for most humans, laptop computer screens afford puncturing with Swiss Army knives; however, it is unlikely that a user will attempt to retrieve an electronic coupon by carving it out of their monitor.  This example illustrates that interfaces often afford a class of actions that are undesirable from the perspective of both the designer and the user.&lt;br /&gt;
&lt;br /&gt;
(Jon)&lt;br /&gt;
&lt;br /&gt;
===Design guidelines===&lt;br /&gt;
&lt;br /&gt;
A multitude of rule sets exist for the design of not only interfaces, but also architecture, city planning, and software development.  They range in scale from one primary rule to as many as Christopher Alexander&#039;s 253 rules for urban environments,&amp;lt;ref&amp;gt;http://hci.rwth-aachen.de/materials/publications/borchers2000a.pdf&amp;lt;/ref&amp;gt; which he introduced along with the concept of design patterns in the 1970s.  Study has likewise been conducted on the use of these rules:&amp;lt;ref&amp;gt;http://stl.cs.queensu.ca/~graham/cisc836/lectures/readings/tetzlaff-guidelines.pdf&amp;lt;/ref&amp;gt; guidelines are often only partially understood, indistinct to the developer, and &amp;quot;fraught&amp;quot; with potential usability problems in real-world situations.&lt;br /&gt;
&lt;br /&gt;
====Application to AUE====&lt;br /&gt;
&lt;br /&gt;
And yet, the vast majority of guideline sets, including the most popular rulesets, have been arrived at heuristically.  The most successful, such as Raskin&#039;s and Shneiderman&#039;s, have been forged from years of observation rather than empirical study and experimentation.  The problem is similar to the circular-logic problem faced by automated usability evaluations: an automated system is limited in the suggestions it can offer to a set of preprogrammed guidelines, which have often not been subjected to rigorous experimentation.&amp;lt;ref&amp;gt;http://www.eecs.berkeley.edu/Pubs/TechRpts/2000/CSD-00-1105.pdf&amp;lt;/ref&amp;gt;  In the vast majority of existing studies, emphasis has been placed either on the development of guidelines or on the application of existing guidelines to automated evaluation.  A mutually-reinforcing development of both simultaneously has not been attempted.&lt;br /&gt;
&lt;br /&gt;
Overlap between rulesets is unavoidable.  For our purposes of evaluating existing rulesets efficiently, without extracting and analyzing each rule individually, it may be desirable to identify the overarching &#039;&#039;principles&#039;&#039; or &#039;&#039;philosophy&#039;&#039; (at most two or three) of a given ruleset and determine their quantitative relevance to problems of cognition.&lt;br /&gt;
&lt;br /&gt;
====Popular and seminal examples====&lt;br /&gt;
Shneiderman&#039;s [http://faculty.washington.edu/jtenenbg/courses/360/f04/sessions/schneidermanGoldenRules.html Eight Golden Rules] date to 1987 and are arguably the most-cited.  They are heuristic, but can be somewhat classified by cognitive objective: at least two rules apply primarily to &#039;&#039;repeated use&#039;&#039;, versus &#039;&#039;discoverability&#039;&#039;.  Up to five of Shneiderman&#039;s rules emphasize &#039;&#039;predictability&#039;&#039; in the outcomes of operations and &#039;&#039;increased feedback and control&#039;&#039; in the agency of the user.  His final rule, paradoxically, removes control from the user by suggesting a reduced short-term memory load, which we can arguably classify as &#039;&#039;simplicity&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Raskin&#039;s [http://www.mprove.de/script/02/raskin/designrules.html Design Rules] are classified into five principles by the author, augmented by definitions and supporting rules.  While one principle is primarily aesthetic (a design problem arguably out of the bounds of this proposal) and one is a basic endorsement of testing, the remaining three begin to reflect philosophies similar to Shneiderman&#039;s: reliability or &#039;&#039;predictability&#039;&#039;; &#039;&#039;simplicity&#039;&#039; or &#039;&#039;efficiency&#039;&#039; (which we can construe as two sides of the same coin); and a concept he introduces, &#039;&#039;uninterruptibility&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Maeda&#039;s [http://lawsofsimplicity.com/?cat=5&amp;amp;order=ASC Laws of Simplicity] are fewer, and ostensibly emphasize &#039;&#039;simplicity&#039;&#039; exclusively, although elements of &#039;&#039;use&#039;&#039; as related by Shneiderman&#039;s rules and &#039;&#039;efficiency&#039;&#039; as defined by Raskin may be facets of this simplicity.  Google&#039;s corporate site presents [http://www.google.com/corporate/ux.html Ten Principles], only half of which can be considered true interface guidelines.  &#039;&#039;Efficiency&#039;&#039; and &#039;&#039;simplicity&#039;&#039; are cited explicitly, aesthetics are once again noted as crucial, and working within a user&#039;s trust is another application of &#039;&#039;predictability&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
====Elements and goals of a guideline set====&lt;br /&gt;
&lt;br /&gt;
Myriad rulesets exist, but variation among them is scarce; it indeed seems possible to parse these common rulesets into overarching principles that can be converted to, or associated with, quantifiable cognitive properties.  For example, &#039;&#039;simplicity&#039;&#039; likely has an analogue in the short-term memory retention or visual retention of the user, vis-à-vis the rule of [http://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=j5q0VvOGExYC&amp;amp;oi=fnd&amp;amp;pg=PA357&amp;amp;dq=seven+plus+or+minus+two&amp;amp;ots=prI3PKJBar&amp;amp;sig=vOZnqpnkXKGYWxK6_XlA4I_CRyI Seven, Plus or Minus Two].  &#039;&#039;Predictability&#039;&#039; likewise may have an analogue in Activity Theory, with regard to a user&#039;s perceptual expectations for a given action; &#039;&#039;uninterruptibility&#039;&#039; has implications in cognitive task-switching;&amp;lt;ref&amp;gt;http://portal.acm.org/citation.cfm?id=985692.985715&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774&amp;lt;/ref&amp;gt; and so forth.&lt;br /&gt;
&lt;br /&gt;
Within the scope of this proposal, we aim to reduce and refine these philosophies found in seminal rulesets and identify their logical cognitive analogues.  By assigning a quantifiable taxonomy to these principles, we will be able to rank and weight them with regard to their real-world applicability, developing a set of &amp;quot;meta-guidelines&amp;quot; and rules for applying them to a given interface in an automated manner.  Combined with cognitive models and multi-modal HCI analysis, we seek to develop, in parallel with these guidelines, the interface evaluation system responsible for their application. [[User:E J Kalafarski|E J Kalafarski]] 15:21, 6 February 2009 (UTC)&lt;br /&gt;
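A minimal sketch of this meta-guideline weighting idea follows; the principle vocabulary, ruleset weights, and interface scores are all invented placeholders, not measured values.&lt;br /&gt;

```python
# Sketch: reduce each seminal ruleset to weights over a shared
# vocabulary of principles, then rate an interface by combining
# its per-principle scores under those weights. All numbers are
# illustrative placeholders.

principles = ["simplicity", "predictability", "efficiency"]

ruleset_weights = {
    "Shneiderman": {"simplicity": 0.2, "predictability": 0.5, "efficiency": 0.3},
    "Raskin":      {"simplicity": 0.4, "predictability": 0.3, "efficiency": 0.3},
}

def rate(interface_scores, weights):
    """Weighted sum of per-principle scores (each assumed in [0, 1])."""
    return sum(weights[p] * interface_scores.get(p, 0.0) for p in principles)

# Hypothetical automated measurements for one interface.
scores = {"simplicity": 0.9, "predictability": 0.4, "efficiency": 0.6}
rating = rate(scores, ruleset_weights["Shneiderman"])
```

In the proposed system, the weights would be fitted against user-study data rather than assigned by hand, which is precisely the ranking step described above.&lt;br /&gt;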
&lt;br /&gt;
:I. Introduction, applications of guidelines&lt;br /&gt;
::A. Application to automated usability evaluations (AUE)&lt;br /&gt;
:II. Popular and seminal examples&lt;br /&gt;
::A. Shneiderman&lt;br /&gt;
::B. Google&lt;br /&gt;
::C. Maeda&lt;br /&gt;
::D. Existing international standards&lt;br /&gt;
:III. Elements of guideline sets, relationship to design patterns&lt;br /&gt;
:IV. Goals for potentially developing a guideline set within the scope of this proposal&lt;br /&gt;
&lt;br /&gt;
===User interface evaluations===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Interaction capture====&lt;br /&gt;
&lt;br /&gt;
[http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4376144 Yi et al.] have performed a survey of the visualization literature and categorized the types of interactions that users were faced with. They are as follows:&lt;br /&gt;
# Select: mark something as interesting &lt;br /&gt;
# Explore: show me something else &lt;br /&gt;
# Reconfigure: show me a different arrangement &lt;br /&gt;
# Encode: show me a different representation&lt;br /&gt;
# Abstract/Elaborate: show me more or less detail &lt;br /&gt;
# Filter: show me something conditionally &lt;br /&gt;
# Connect: show me related items &lt;br /&gt;
&lt;br /&gt;
Different GUI components may be able to perform the same type of interaction. We would like to categorize the GUI components or patterns used to bring about these interactions. We then have a library of components we can use to complete a given task. The goal is to create components for a given interaction that minimize cost to the user. Because the cost of a component is likely dependent on the other components used, the goal of the designer might be to choose a combination of components that minimizes this cost. To do this, we need a way to measure costs, which is discussed in the next section.&lt;br /&gt;
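The selection problem just described can be sketched as a small exhaustive search; the component names, per-component costs, and the pairwise penalty below are invented placeholders, not measured values.&lt;br /&gt;

```python
from itertools import product

# Sketch: each interaction type has several candidate GUI components,
# each with an (invented) standalone cost, plus a pairwise penalty
# when two chosen components clash. Exhaustively search combinations
# for the cheapest overall choice.

component_costs = {
    "Select": {"checkbox": 1.0, "lasso": 2.5},
    "Filter": {"slider": 1.5, "query_box": 3.0},
}

# Hypothetical interaction penalty: lasso plus query box forces a
# mode switch between pointing and typing.
pair_penalty = {frozenset(["lasso", "query_box"]): 2.0}

def total_cost(choice):
    """Sum of standalone costs plus pairwise clash penalties."""
    cost = sum(component_costs[task][comp] for task, comp in choice.items())
    comps = list(choice.values())
    for i in range(len(comps)):
        for j in range(i + 1, len(comps)):
            cost += pair_penalty.get(frozenset([comps[i], comps[j]]), 0.0)
    return cost

def best_combination():
    """Brute-force search over one component per interaction type."""
    tasks = sorted(component_costs)
    best = None
    for combo in product(*(component_costs[t] for t in tasks)):
        choice = dict(zip(tasks, combo))
        c = total_cost(choice)
        if best is None or c < best[0]:
            best = (c, choice)
    return best
```

Real cost values would come from the measurement framework discussed in the next section; brute force stands in here for whatever optimizer a full system would use.&lt;br /&gt;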
&lt;br /&gt;
====Cost-based analyses====&lt;br /&gt;
In [http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4658124 A Framework of Interaction Costs in Information Visualization], Lam performs a survey of 32 user studies and classifies several types of costs that can be used for qualitative interface evaluation. The classification scheme is based on Donald Norman&#039;s [http://en.wikipedia.org/wiki/Seven_stages_of_action Seven Stages of Action] from his book, [http://www.amazon.com/Design-Everyday-Things-Donald-Norman/dp/0385267746 The Design of Everyday Things] ([http://www.networksplus.net/tracyj/everydaythings.pdf summary]).&lt;br /&gt;
&lt;br /&gt;
# &#039;&#039;&#039;Decision costs:&#039;&#039;&#039; How does user performance decrease when there is an overwhelming amount of data to observe or too many possible actions to take?&lt;br /&gt;
# &#039;&#039;&#039;System-power costs:&#039;&#039;&#039; How does the user translate a high-level goal into a sequence of allowable actions by the interface?&lt;br /&gt;
# &#039;&#039;&#039;Multiple input mode costs:&#039;&#039;&#039; Cost of providing an action selection system that is not unified, for example, if there is one button that does two different things, depending on context. &lt;br /&gt;
# &#039;&#039;&#039;Physical-motion costs:&#039;&#039;&#039; Physical cost to the user to interact with the interface, for example, measuring mouse movement costs with Fitts&#039; Law.&lt;br /&gt;
# &#039;&#039;&#039;Visual-cluttering costs:&#039;&#039;&#039; Cost due to unwanted visual distractions, such as a mouse hovering pop-up occluding part of the screen.&lt;br /&gt;
# &#039;&#039;&#039;View- and State-change costs:&#039;&#039;&#039; When the user causes the interface to change views, the new view should be consistent with the old one: it should meet the user&#039;s expectations of where things will be in the new view, based on their knowledge of the old one.&lt;br /&gt;
&lt;br /&gt;
==== Evaluation in practice ====&lt;br /&gt;
User interfaces are usually evaluated in practice using two methods: &#039;&#039;usability inspection methods&#039;&#039;, in which the programmer or one or more experts evaluate the interface through inspection; and &#039;&#039;usability testing&#039;&#039;, in which empirical tests are performed with a group of naive human users. Usability inspection methods include [http://en.wikipedia.org/wiki/Cognitive_walkthrough Cognitive walkthrough], [http://en.wikipedia.org/wiki/Heuristic_evaluation Heuristic evaluation], and [http://en.wikipedia.org/wiki/Pluralistic_walkthrough Pluralistic walkthrough]. While these inspection methods do not use naive human subjects, the details of the methods might be useful in helping to formalize what interactions occur between a user and an interface, and what each interaction&#039;s costs are for a given design.&lt;br /&gt;
&lt;br /&gt;
[http://portal.acm.org/citation.cfm?id=108862 Jeffries et al.] provide a real-world comparison of two usability inspection methods (heuristic evaluation and cognitive walkthrough), the usability testing method, and the application of published software guidelines for interface design.&lt;br /&gt;
&lt;br /&gt;
=== Multimodal HCI ===&lt;br /&gt;
&lt;br /&gt;
Continued advancements in several signal processing techniques have given rise to a multitude of mechanisms that allow for rich, multimodal, human-computer interaction.  These include systems for head-tracking, eye- or pupil-tracking, fingertip tracking, recognition of speech, and detection of electrical impulses in the brain, among others [http://vr.kjist.ac.kr/~dhong/website/paperworks/hci2002coursePapers/April24/Sharma98.pdf Sharma-1998-TMH].  With ever-increasing computing power, integrating these systems in real-time applications has become a plausible endeavor. &lt;br /&gt;
&lt;br /&gt;
==== Head-tracking ====&lt;br /&gt;
:In virtual, stereoscopic environments, head-tracking has been exploited with great success to create an immersive effect, allowing a user to move freely and naturally while dynamically updating the user’s viewpoint in a visual environment.  Head-tracking has been employed in non-immersive settings as well, though careful consideration must be paid to account for unintended movements by the user, which may result in distracting visual effects.&lt;br /&gt;
&lt;br /&gt;
==== Pupil-tracking ====&lt;br /&gt;
:Pupil-tracking has been studied a great deal in the field of Cognitive Science ... (need some examples here from CogSci).  In the HCI community, pupil-tracking has traditionally been used for post-hoc analysis of interface designs, and is particularly prevalent in web interface design.  An alternative use of pupil-tracking is to employ it in real time as an actual mode of interaction.  This has been examined in relatively few cases (citations), where typically the eyes are used to control an onscreen cursor.  Like head-tracking, implementations of pupil-tracking must account for unintended eye movements, which are incredibly frequent.&lt;br /&gt;
&lt;br /&gt;
==== Fingertip-tracking, Gestural recognition ====&lt;br /&gt;
:Fingertip tracking and gestural recognition of the hands are the subjects of much research in the HCI community, particularly in the Virtual Environment and Augmented Reality disciplines.  Less implicit than head or pupil-tracking, gestural recognition of the hands may draw upon the wealth of precedents readily observed in natural human interactions.  As sensing technologies become less obtrusive and more robust, this method of interaction has the potential to become quite effective.&lt;br /&gt;
&lt;br /&gt;
==== Speech Recognition ====&lt;br /&gt;
:Speech recognition continues to improve, though effective implementation of its desired effects remains non-trivial in many applications.  (More on this later).&lt;br /&gt;
&lt;br /&gt;
==== Brain Activity Detection ====&lt;br /&gt;
:The use of electroencephalograms (EEGs) in HCI is quite recent, and with limited degrees of freedom, few robust interfaces have been designed around them.  Some recent advances in the pragmatic use of EEGs in HCI research can be seen in [http://portal.acm.org/citation.cfm?id=1357054.1357187&amp;amp;coll=Portal&amp;amp;dl=GUIDE&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Grimes et al.]  The possibility of using brain function to interface with a machine is cause for great excitement in the HCI community, and further advances in non-invasive techniques for accessing brain function may allow teleo-HCI to become a reality.  &lt;br /&gt;
&lt;br /&gt;
In sum, the synchronized usage of these modes of interaction makes it possible to architect an HCI system capable of sensing and interpreting many of the mechanisms humans use to transmit information among one another.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
=== Workflow analysis ===&lt;br /&gt;
&lt;br /&gt;
Research in workflow and interaction analysis remains relatively sparse, though its utility would appear to be many-fold.  Tools for such analysis have the potential to facilitate data navigation, provide search mechanisms, and allow for more efficient collaborative discovery.  In addition, awareness and caching of interaction histories readily allows for explanatory presentations of results, and has the potential to provide training data for machine learning mechanisms.&lt;br /&gt;
&lt;br /&gt;
VisTrails is an optimized workflow system developed at the SCI Institute at the University of Utah and implemented on top of the VTK visualization package.  The primary purpose of the system is to increase performance when working with multiple visualizations simultaneously.  This is accomplished by storing low-level workflow processes to reduce computational redundancy.  Three papers on VisTrails can be found here: [http://www.cs.brown.edu/people/trevor/Papers/Callahan-2006-MED.pdf Callahan-2006-MED], [http://www.cs.brown.edu/people/trevor/Papers/Callahan-2006-VVM.pdf Callahan-2006-VVM], [http://www.cs.brown.edu/people/trevor/Papers/Bavoil-2005-VEI.pdf Bavoil-2005-VEI]&lt;br /&gt;
&lt;br /&gt;
Jeff Heer of Stanford (formerly Berkeley) has presented work on using Graphical Interaction Histories within the Tableau InfoVis application.  Though geared toward two-dimensional visualizations with clearly defined events, his work offers some very useful design guidelines for working with interaction histories, including some evaluation from the deployment of his techniques within Tableau.  His paper from Infovis &#039;08 can be seen here: [http://www.cs.brown.edu/people/trevor/Papers/Heer-2008-GraphicalHistories.pdf Heer-2008-GraphicalHistories]&lt;br /&gt;
&lt;br /&gt;
If you want to check out some of Trevor&#039;s work having to do with using interaction histories in 3D, time-varying scientific visualizations, his preliminary work that was presented at Vis &#039;08 can be seen here: [http://www.cs.brown.edu/people/trevor/trevor_iweb/Publications_files/obrien-2008-visDemo.pdf Abstract], [http://www.cs.brown.edu/people/trevor/trevor_iweb/Publications_files/obrien-2008-visPoster.pdf Poster]&lt;br /&gt;
&lt;br /&gt;
Optimizing workflows that have been captured -- Tovi?&lt;br /&gt;
&lt;br /&gt;
Does ethnography fit in here?&lt;br /&gt;
&lt;br /&gt;
==Significance==&lt;br /&gt;
&lt;br /&gt;
==Preliminary results==&lt;br /&gt;
&lt;br /&gt;
* Data obtained from user interactions show significant discrepancies between individuals&#039; performance. Much current software does not gracefully handle this fact.&lt;br /&gt;
* Video recordings give us an interesting measure of frustration.&lt;br /&gt;
* Video recordings show general eye-gaze.&lt;br /&gt;
* Keyboard shortcut usage is highly variable.&lt;br /&gt;
(Gideon)&lt;br /&gt;
&lt;br /&gt;
C. Preliminary Studies&lt;br /&gt;
*C.1 Overview of analysis method&lt;br /&gt;
**C.1.1 Justification for qualitative analysis&lt;br /&gt;
**C.1.2 Restrictions on qualitative analysis&lt;br /&gt;
**C.1.3 Terms&lt;br /&gt;
*C.2 Garage Band study&lt;br /&gt;
**C.2.1 Feasibility of a creative problem-solving task&lt;br /&gt;
**C.2.2 Unique aspects and pertinence&lt;br /&gt;
*C.3 Photoshop study&lt;br /&gt;
**C.3.1 Feasibility of a creative sandbox task&lt;br /&gt;
**C.3.2 Unique aspects and pertinence&lt;br /&gt;
*C.4 Lessons Learned&lt;br /&gt;
**C.4.1 Setup of study, metric methods&lt;br /&gt;
**C.4.2 Possibility for future critique-enabled user workflow&lt;br /&gt;
(Steven)&lt;br /&gt;
&lt;br /&gt;
===EJ and Jon===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Photoshop Study&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
We demonstrate that the efficiency of performing subtasks in Photoshop can be predicted by a simple model of human perceptual and cognitive abilities.  In particular, we show that several of the tools commonly used to perform basic operations in Photoshop often violate the user&#039;s expectations of how those tools should work or where those tools ought to be located within the interface.  These violations can be categorized as follows: (1) unintuitive relationships between adjustments to common tool parameters and their perceptual results (e.g. adjusting the Magic Wand tool&#039;s &amp;quot;tolerance&amp;quot; setting often leads to unintended selections), (2) inefficient means of adjusting tool parameters (e.g. adjusting the &amp;quot;tolerance&amp;quot; setting by clicking, typing a number, hitting enter, observing the results, and iterating this process until the desired perceptual effect is achieved), (3) mismatches between the user&#039;s expectations for the names and locations of tools (or menu items) and their actual names and locations (e.g. resizing a picture via the &amp;quot;transform&amp;quot; menu item), (4) the availability of a tool in multiple locations imposes a cognitive load on the user when searching for that tool and these various contexts influence the user&#039;s expectations about the effects of using that tool, (5) [what else?], etc.&lt;br /&gt;
&lt;br /&gt;
===Andrew Bragdon===&lt;br /&gt;
&lt;br /&gt;
I.  &#039;&#039;&#039;Goals&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A.  Evaluate the feasibility of an integrated model of current theories in perception and HCI for predicting task performance; also evaluate the feasibility of this experimental methodology&lt;br /&gt;
&lt;br /&gt;
B.  Formative study should inspire future directions&lt;br /&gt;
&lt;br /&gt;
II.  &#039;&#039;&#039;Description of methodology/experimental procedure&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A.  In a highly-controlled task environment, understand how an integrated model comprising current theories in perception and HCI can predict/explain task performance&lt;br /&gt;
&lt;br /&gt;
B.  Users are trained extensively in the task to control for learning (the learning aspect of the task could be investigated in a future study)&lt;br /&gt;
&lt;br /&gt;
III. &#039;&#039;&#039;Results&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A. Users favored continuous rotation over static view or meditative pauses.&lt;br /&gt;
&lt;br /&gt;
B. Hand gesticulation seemed to be used as a method of validation in stereo views.&lt;br /&gt;
&lt;br /&gt;
C. Several verbal comments regarding occlusion suggest drawbacks to the tube view.&lt;br /&gt;
&lt;br /&gt;
D. Rotating is beneficial and perhaps necessary for this task.&lt;br /&gt;
&lt;br /&gt;
E. No references to external, offscreen information. In fact, very rarely did participants glance away from the screen.&lt;br /&gt;
&lt;br /&gt;
F. Our assessment of a typical session beginning with a new dataset:&lt;br /&gt;
&lt;br /&gt;
   1. Data loads; split-second decision to begin rotating.&lt;br /&gt;
   2. Continuous rotation until proper viewpoint determined.&lt;br /&gt;
   3. Rocking back and forth interaction about the optimal viewpoint. (Thinking?)&lt;br /&gt;
   4. [In stereo views, participants were noted tilting their heads fairly consistently.] &lt;br /&gt;
&lt;br /&gt;
IV.  &#039;&#039;&#039;Discussion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A.  Some of the users&#039; strategies can be explained by current theories in perception&lt;br /&gt;
&lt;br /&gt;
V.  &#039;&#039;&#039;Conclusion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
VI.  &#039;&#039;&#039;Future Directions (we will be running this study this week!)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A.  Now that we have investigated highly controlled task performance, explore larger workflow context&lt;br /&gt;
&lt;br /&gt;
B.  Perform a study examining how interruptions affect user performance, and explore what coping strategies users employ in such an environment&lt;br /&gt;
&lt;br /&gt;
1.  Software developers (a good example of challenging, creative information work) will receive task requests by email; notifications will appear on their screen in real time&lt;br /&gt;
&lt;br /&gt;
2.  Each email will have a different priority (e.g., low, high, emergency)&lt;br /&gt;
&lt;br /&gt;
3.  Participants will be asked to manage priorities effectively to accomplish the tasks given&lt;br /&gt;
&lt;br /&gt;
4.  Once they begin working, we will &amp;quot;interrupt&amp;quot; them at controlled times with new task requests of different priorities&lt;br /&gt;
&lt;br /&gt;
5.  We will analyze their actions for coping strategies, metawork, and working spheres to try to understand how the larger workflow context is affected by interruptions&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Adam Darlow===&lt;br /&gt;
&lt;br /&gt;
I.  &#039;&#039;&#039;Goals&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A.  Evaluate the interactions between various cognitive principles and design principles. There are three basic relations:&lt;br /&gt;
&lt;br /&gt;
   1. The cognitive principle is the motivation behind the design principle.&lt;br /&gt;
   2. The cognitive principle suggests a method for achieving the design principle.&lt;br /&gt;
   3. The cognitive principle and design principle are unrelated. (Hopefully few)&lt;br /&gt;
&lt;br /&gt;
B.  Make design rules which are suggested by the combination of a cognitive principle and a design principle.&lt;br /&gt;
&lt;br /&gt;
II.  &#039;&#039;&#039;Description of methodology/experimental procedure&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A.  Collecting commonly accepted design principles from the literature on interface design and well established cognitive principles from the cognitive psychology literature and constructing a matrix which crosses them. Most squares in the matrix should suggest specific design rules.&lt;br /&gt;
&lt;br /&gt;
III. &#039;&#039;&#039;Results&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
  As a preliminary effort, I have chosen the following two cognitive principles and three design principles:&lt;br /&gt;
&lt;br /&gt;
Cognitive principles&lt;br /&gt;
&lt;br /&gt;
#C1. People derive complex associations and causal interpretations from temporal correlations and patterns.&lt;br /&gt;
&lt;br /&gt;
#C2. People have limited working memory (7 ± 2 items), but each slot can hold a chunk of related information.&lt;br /&gt;
&lt;br /&gt;
Design Principles (from Maeda (TBD link))&lt;br /&gt;
&lt;br /&gt;
#D1. Achieve simplicity through thoughtful reduction.&lt;br /&gt;
&lt;br /&gt;
#D2. Organization makes a system of many appear fewer.&lt;br /&gt;
&lt;br /&gt;
#D3. Knowledge makes everything simpler.&lt;br /&gt;
&lt;br /&gt;
The resulting matrix entries are as follows:&lt;br /&gt;
&lt;br /&gt;
#C1 + D1. Remove extraneous correlations. Things shouldn&#039;t consistently and apparently change or happen in conjunction unless they are actually related and their relation is important to the user.&lt;br /&gt;
&lt;br /&gt;
#C1 + D2. Use temporal and spatial contiguity to help users organize and group  multiple events meaningfully. &lt;br /&gt;
&lt;br /&gt;
#C1 + D3. Use temporal correlations to effectively teach the important  causal relations inherent in the interface.&lt;br /&gt;
&lt;br /&gt;
#C2 + D1. Reduce the interface such that a user has to be aware of no more than 5 items simultaneously.&lt;br /&gt;
&lt;br /&gt;
#C2 + D2. Groups of semantically related items can for many purposes be treated as a single item.&lt;br /&gt;
&lt;br /&gt;
#C2 + D3. Teach users how things are related so that related items can be chunked.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
V.  &#039;&#039;&#039;Conclusion&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
VI.  &#039;&#039;&#039;Future Directions (we will be running this study this week!)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
  To expand the matrix and evaluate the resulting design rules.&lt;br /&gt;
&lt;br /&gt;
==Research plan==&lt;br /&gt;
&lt;br /&gt;
We can speculate here about the details of a longer-term research plan, but it may not be necessary to actually flesh out this part of the &amp;quot;proposal&amp;quot;.  There does need to be enough to define what the overall proposed work is, but that may show up in earlier sections.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Adam Darlow</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Application_Critiques&amp;diff=2199</id>
		<title>CS295J/Application Critiques</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Application_Critiques&amp;diff=2199"/>
		<updated>2009-02-25T22:29:07Z</updated>

		<summary type="html">&lt;p&gt;Adam Darlow: /* Microsoft OneNote 2007 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Applications ==&lt;br /&gt;
&lt;br /&gt;
A list of analytical applications we will consider critiquing and the pros and cons of proceeding with each.&lt;br /&gt;
&lt;br /&gt;
=== [http://www.craigslist.org Craigslist] ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Description of application:&#039;&#039;&#039; A website that hosts user-submitted classified ads, from job postings to garage sale items.&lt;br /&gt;
* &#039;&#039;&#039;Workflow to critique:&#039;&#039;&#039; The process of purchasing an item from the site (for example, &amp;quot;men&#039;s size 10 shoes&amp;quot;).&lt;br /&gt;
* &#039;&#039;&#039;Pros of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# Most people are familiar with the site, or can readily try using it.&lt;br /&gt;
*# Has a design clearly different from most websites, yet is extremely popular (despite its design?).&lt;br /&gt;
*# May be a source for discussing cognitive limitations such as information overload and problems with categorization.&lt;br /&gt;
* &#039;&#039;&#039;Cons of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# Not used for scientific analysis, so it doesn&#039;t exactly meet the criteria for the type of application we&#039;re looking for.&lt;br /&gt;
&lt;br /&gt;
=== [http://en.wikipedia.org/wiki/Microsoft_Excel Microsoft Excel 2007] ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;OWNER: Andrew Bragdon&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Description of application:&#039;&#039;&#039; A spreadsheet application that helps users perform complex financial and scientific calculations and analysis.  &lt;br /&gt;
* &#039;&#039;&#039;Workflow to critique:&#039;&#039;&#039; A sensemaking task in a large spreadsheet, such as understanding why the project is over budget by more than 20%.&lt;br /&gt;
* &#039;&#039;&#039;Pros of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# Most people are familiar with it or can readily try using it.&lt;br /&gt;
*# Excel is used heavily by its users, frequently for analysis and sensemaking tasks such as the workflow outlined above.&lt;br /&gt;
*# Contains many visualization features including graph generation, equation flow visualization, and conditional formatting.&lt;br /&gt;
*# Is often used to ask questions of the nature &amp;quot;what if...&amp;quot; and even includes basic features to support this workflow.&lt;br /&gt;
*# Is used extensively for scientific analysis and visualization.  Although it does not include some of the features of other scientific analysis packages, it is generally seen as much easier to use and there are third-party add-ins which accommodate many of these features.&lt;br /&gt;
* &#039;&#039;&#039;Cons of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# Spreadsheets may not be seen as fun.&lt;br /&gt;
&lt;br /&gt;
=== [http://en.wikipedia.org/wiki/Microsoft_OneNote Microsoft OneNote 2007] ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;OWNER: Adam Darlow&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Description of application:&#039;&#039;&#039; A free-form note-taking application which allows users to enter, annotate and organize text and other information.  &lt;br /&gt;
* &#039;&#039;&#039;Workflow to critique:&#039;&#039;&#039; Taking notes from a lecture while viewing it, then organizing those notes into a presentable summary.&lt;br /&gt;
* &#039;&#039;&#039;Pros of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# People are unfamiliar with the application, but it has a shallow learning curve.&lt;br /&gt;
*# Can test transfer of knowledge from other Microsoft applications.&lt;br /&gt;
*# Can support a broad range of tasks.&lt;br /&gt;
* &#039;&#039;&#039;Cons of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# Not tailored for any specific tasks.&lt;br /&gt;
*# Not a very visual application.&lt;br /&gt;
&lt;br /&gt;
=== [http://en.wikipedia.org/wiki/Photoshop Photoshop] ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;OWNER:&#039;&#039;&#039;&#039;&#039; [[User:E J Kalafarski|E J Kalafarski]] 15:43, 19 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Description of application:&#039;&#039;&#039; A graphics-editing program for complex image manipulation or analysis.&lt;br /&gt;
* &#039;&#039;&#039;Workflow to critique:&#039;&#039;&#039; A goal-oriented task requiring a series of non-linear operations, such as the application of masks, the organization of layers, the application of filters, and the navigation of paths, color channels, and the 2D space.&lt;br /&gt;
* &#039;&#039;&#039;Pros of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# Has a non-linear workflow and a 2.5D space (layers and masks require semi-three-dimensional thinking), allowing for goal-oriented tasks that each admit various approaches.&lt;br /&gt;
*# Is familiar in goal and purpose to most users, but most users will need to explore the application space and discover functionality. &lt;br /&gt;
*# Applications of design principles such as simplicity (or lack thereof), predictability, feedback and control, and interruptibility.  These have cognitive analogues in affordance (e.g., time-dependent widgets like history and space-dependent widgets such as channels and paths), retention, etc.&lt;br /&gt;
* &#039;&#039;&#039;Cons of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# Image manipulation relates to scientific analysis only in limited fields, such as computer vision, object recognition, and HCI.&lt;br /&gt;
&lt;br /&gt;
=== [http://en.wikipedia.org/wiki/Google_Scholar Google Scholar] ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;OWNER: Gideon Goldin&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Description of application:&#039;&#039;&#039; A web search engine for scholarly articles.  &lt;br /&gt;
* &#039;&#039;&#039;Workflow to critique:&#039;&#039;&#039; The process of finding and selecting an article with incomplete information (e.g., partial title, single author, etc.).&lt;br /&gt;
* &#039;&#039;&#039;Pros of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# Used often by researchers; by some metrics it is considered the largest scholarly search engine.&lt;br /&gt;
*# Very similar to general-purpose web search engines such as Google, with obvious implications for familiarity.&lt;br /&gt;
*# Advanced search allows highly customizable searches.&lt;br /&gt;
*# A simple program; not too in-depth.&lt;br /&gt;
* &#039;&#039;&#039;Cons of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# Not used directly for scientific analysis (except for right now).&lt;br /&gt;
*# 2D interface&lt;br /&gt;
&lt;br /&gt;
=== [http://en.wikipedia.org/wiki/MATLAB MATLAB] ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;OWNER: Jon Ericson&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Description of application:&#039;&#039;&#039; A programming language and computing environment that permits easy manipulation of matrices, flexible data plotting, as well as algorithm and GUI development.&lt;br /&gt;
* &#039;&#039;&#039;Workflow to critique:&#039;&#039;&#039; Flexibly graphing data through a GUI developed within MATLAB (e.g. quickly changing the independent variable or axis scale via a drop-down menu, etc.)&lt;br /&gt;
* &#039;&#039;&#039;Pros of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# MATLAB is widely used in the scientific community for data analysis and visualization.&lt;br /&gt;
*# It is often used in conjunction with Excel (it has Excel import/export functions), and therefore might permit analysis of how two workflows interact or flow into one another in the context of two popular scientific applications.&lt;br /&gt;
* &#039;&#039;&#039;Cons of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# Requires some knowledge of the MATLAB programming language and basic matrix operations.&lt;br /&gt;
*# Because it is such a flexible environment it is generally used in highly idiosyncratic ways; perhaps it would be best to focus on the design of the MATLAB interface itself and one of the workflows it permits (e.g. script-writing, making GUI layouts, or debugging).&lt;br /&gt;
&lt;br /&gt;
=== [http://en.wikipedia.org/wiki/Maya_(software) Autodesk&#039;s Maya] ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;OWNER: Trevor O&#039;Brien&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Description of application:&#039;&#039;&#039; &lt;br /&gt;
Maya is a commercial software package for high quality 3D modeling, animation, visual effects, and rendering. &lt;br /&gt;
* &#039;&#039;&#039;Workflow to critique:&#039;&#039;&#039; Designing, modeling and navigating an expansive 3D scene.&lt;br /&gt;
* &#039;&#039;&#039;Pros of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# Maya is the industry-leading environment for 3D modeling and animation, and is becoming prevalent in medical and scientific visualization.&lt;br /&gt;
*# A good deal of research on the design of Maya tools has been published in the HCI community.  May be interesting to compare design decisions against workflow analysis.&lt;br /&gt;
*# 3D nature allows for interesting perceptual analysis.&lt;br /&gt;
* &#039;&#039;&#039;Cons of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# Potential difficulty finding enough users who are comfortable with Maya.  (Although it may be interesting to analyze novice users, having a group of experts as well would be ideal.)&lt;br /&gt;
&lt;br /&gt;
=== [http://en.wikipedia.org/wiki/SPSS SPSS] ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;OWNER: Steven Ellis&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Description of application:&#039;&#039;&#039; &lt;br /&gt;
SPSS (Statistical Package for the Social Sciences) is, as the name states, a statistical analysis program which facilitates data analysis via various descriptive and test procedures, as well as 2D visualizations. &lt;br /&gt;
* &#039;&#039;&#039;Workflow to critique:&#039;&#039;&#039; Determining the important features of a set of data.&lt;br /&gt;
* &#039;&#039;&#039;Pros of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# SPSS is one of a small group of leading statistical packages and is used by social scientists across the country (including here at Brown).&lt;br /&gt;
*# Statisticians and social scientists are often not particularly tech-savvy and have typically been trained on highly theoretical models.  Ease of use is thus a critical aspect of a program such as SPSS.&lt;br /&gt;
*# Tasks are all highly formal; thus, making them salient to non-technical users will prove an interesting exercise.&lt;br /&gt;
* &#039;&#039;&#039;Cons of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# Not the kind of program one would ever need to pick up on one&#039;s own, so a somewhat steep learning curve is not necessarily a negative.&lt;br /&gt;
*# Small choice-set of visualizations.&lt;br /&gt;
&lt;br /&gt;
=== [http://blast.ncbi.nlm.nih.gov/Blast.cgi NCBI BLAST Searching] ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;OWNER: Ian Spector&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Description of application:&#039;&#039;&#039; &lt;br /&gt;
BLAST (Basic Local Alignment Search Tool) is a tool used in bioinformatics to compare the similarity of genetic or proteomic information that may be discovered by scientists to a known database of genomic information.&lt;br /&gt;
* &#039;&#039;&#039;Workflow to critique:&#039;&#039;&#039; Determining similarity between a discovered genetic sequence and known sequences.&lt;br /&gt;
* &#039;&#039;&#039;Pros of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# BLAST searching through NCBI (National Center for Biotechnology Information) is a heavily used resource for genetic research around the world.&lt;br /&gt;
*# There is a massive amount of information available to be mined in the NCBI databases and any improvement to the existing way the system works would likely be appreciated by all who make use of the service.&lt;br /&gt;
* &#039;&#039;&#039;Cons of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# Outrageously complicated for anyone who doesn&#039;t know anything about genetics, and possibly even for those who are quite familiar with it.&lt;br /&gt;
*# The learning curve is vertical.&lt;br /&gt;
&lt;br /&gt;
=== Tekplot for 3D fluid flow analysis ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;OWNER: David&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Description of application:&#039;&#039;&#039; &lt;br /&gt;
Tekplot provides interactive access to 3D time-varying fluid flow simulations.  It can show both data values in 3D and the meshes on which they were calculated.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Workflow:&#039;&#039;&#039;&lt;br /&gt;
*Specify the format of your data (usually only needed for first use)&lt;br /&gt;
*Import data&lt;br /&gt;
*Get an overview of what&#039;s there&lt;br /&gt;
**Sometimes 3D, sometimes going through 2D slices, often looking at a small subset of time steps.&lt;br /&gt;
**Confirm that the simulation didn&#039;t produce something obviously wrong&lt;br /&gt;
**Identify a region/slice/time-step to study more closely&lt;br /&gt;
*Drill down into region of interest&lt;br /&gt;
**Study in more detail, quantifying values at specific locations, confirming broader trends, comparing with conditions from another simulation.&lt;br /&gt;
**Determine parameters for next simulation or results to publish and iterate&lt;/div&gt;</summary>
		<author><name>Adam Darlow</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Application_Critiques&amp;diff=2198</id>
		<title>CS295J/Application Critiques</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Application_Critiques&amp;diff=2198"/>
		<updated>2009-02-25T19:05:32Z</updated>

		<summary type="html">&lt;p&gt;Adam Darlow: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Applications ==&lt;br /&gt;
&lt;br /&gt;
A list of analytical applications we will consider critiquing and the pros and cons of proceeding with each.&lt;br /&gt;
&lt;br /&gt;
=== [http://www.craigslist.org Craigslist] ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Description of application:&#039;&#039;&#039; A website that hosts user-submitted classified ads, from job postings to garage sale items.&lt;br /&gt;
* &#039;&#039;&#039;Workflow to critique:&#039;&#039;&#039; The process of purchasing an item from the site (for example, &amp;quot;men&#039;s size 10 shoes&amp;quot;).&lt;br /&gt;
* &#039;&#039;&#039;Pros of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# Most people are familiar with the site, or can readily try using it.&lt;br /&gt;
*# Has a design clearly different from most websites, yet is extremely popular (despite its design?).&lt;br /&gt;
*# May be a source for discussing cognitive limitations such as information overload and problems with categorization.&lt;br /&gt;
* &#039;&#039;&#039;Cons of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# Not used for scientific analysis, so it doesn&#039;t exactly meet the criteria for the type of application we&#039;re looking for.&lt;br /&gt;
&lt;br /&gt;
=== [http://en.wikipedia.org/wiki/Microsoft_Excel Microsoft Excel 2007] ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;OWNER: Andrew Bragdon&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Description of application:&#039;&#039;&#039; A spreadsheet application that helps users perform complex financial and scientific calculations and analysis.  &lt;br /&gt;
* &#039;&#039;&#039;Workflow to critique:&#039;&#039;&#039; A sensemaking task in a large spreadsheet, such as understanding why the project is over budget by more than 20%.&lt;br /&gt;
* &#039;&#039;&#039;Pros of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# Most people are familiar with it or can readily try using it.&lt;br /&gt;
*# Excel is used heavily by its users, frequently for analysis and sensemaking tasks such as the workflow outlined above.&lt;br /&gt;
*# Contains many visualization features including graph generation, equation flow visualization, and conditional formatting.&lt;br /&gt;
*# Is often used to ask questions of the nature &amp;quot;what if...&amp;quot; and even includes basic features to support this workflow.&lt;br /&gt;
*# Is used extensively for scientific analysis and visualization.  Although it does not include some of the features of other scientific analysis packages, it is generally seen as much easier to use and there are third-party add-ins which accommodate many of these features.&lt;br /&gt;
* &#039;&#039;&#039;Cons of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# Spreadsheets may not be seen as fun.&lt;br /&gt;
&lt;br /&gt;
=== [http://en.wikipedia.org/wiki/Microsoft_OneNote Microsoft OneNote 2007] ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;OWNER: Adam Darlow&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Description of application:&#039;&#039;&#039; A free-form note-taking application which allows users to enter, annotate and organize text and other information.  &lt;br /&gt;
* &#039;&#039;&#039;Workflow to critique:&#039;&#039;&#039; Taking notes from a lecture while viewing it, then organizing those notes into a presentable summary.&lt;br /&gt;
* &#039;&#039;&#039;Pros of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# People are unfamiliar with the application, but it has a shallow learning curve.&lt;br /&gt;
*# Can test transfer of knowledge from other Microsoft applications.&lt;br /&gt;
*# TBD&lt;br /&gt;
* &#039;&#039;&#039;Cons of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# TBD&lt;br /&gt;
&lt;br /&gt;
=== [http://en.wikipedia.org/wiki/Photoshop Photoshop] ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;OWNER:&#039;&#039;&#039;&#039;&#039; [[User:E J Kalafarski|E J Kalafarski]] 15:43, 19 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Description of application:&#039;&#039;&#039; A graphics-editing program for complex image manipulation or analysis.&lt;br /&gt;
* &#039;&#039;&#039;Workflow to critique:&#039;&#039;&#039; A goal-oriented task requiring a series of non-linear operations, such as the application of masks, the organization of layers, the application of filters, and the navigation of paths, color channels, and the 2D space.&lt;br /&gt;
* &#039;&#039;&#039;Pros of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# Has a non-linear workflow and a 2.5D space (layers and masks require semi-three-dimensional thinking), allowing for goal-oriented tasks that each admit various approaches.&lt;br /&gt;
*# Is familiar in goal and purpose to most users, but most users will need to explore the application space and discover functionality. &lt;br /&gt;
*# Applications of design principles such as simplicity (or lack thereof), predictability, feedback and control, and interruptibility.  These have cognitive analogues in affordance (e.g., time-dependent widgets like history and space-dependent widgets such as channels and paths), retention, etc.&lt;br /&gt;
* &#039;&#039;&#039;Cons of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# Image manipulation relates to scientific analysis only in limited fields, such as computer vision, object recognition, and HCI.&lt;br /&gt;
&lt;br /&gt;
=== [http://en.wikipedia.org/wiki/Google_Scholar Google Scholar] ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;OWNER: Gideon Goldin&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Description of application:&#039;&#039;&#039; A web search engine for scholarly articles.  &lt;br /&gt;
* &#039;&#039;&#039;Workflow to critique:&#039;&#039;&#039; The process of finding and selecting an article with incomplete information (e.g., partial title, single author, etc.).&lt;br /&gt;
* &#039;&#039;&#039;Pros of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# Used often by researchers; by some metrics it is considered the largest scholarly search engine.&lt;br /&gt;
*# Very similar to general-purpose web search engines such as Google, with obvious implications for familiarity.&lt;br /&gt;
*# Advanced search allows highly customizable searches.&lt;br /&gt;
*# A simple program; not too in-depth.&lt;br /&gt;
* &#039;&#039;&#039;Cons of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# Not used directly for scientific analysis (except for right now).&lt;br /&gt;
*# 2D interface&lt;br /&gt;
&lt;br /&gt;
=== [http://en.wikipedia.org/wiki/MATLAB MATLAB] ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;OWNER: Jon Ericson&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Description of application:&#039;&#039;&#039; A programming language and computing environment that permits easy manipulation of matrices, flexible data plotting, as well as algorithm and GUI development.&lt;br /&gt;
* &#039;&#039;&#039;Workflow to critique:&#039;&#039;&#039; Flexibly graphing data through a GUI developed within MATLAB (e.g. quickly changing the independent variable or axis scale via a drop-down menu, etc.)&lt;br /&gt;
* &#039;&#039;&#039;Pros of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# MATLAB is widely used in the scientific community for data analysis and visualization.&lt;br /&gt;
*# It is often used in conjunction with Excel (it has Excel import/export functions), and therefore might permit analysis of how two workflows interact or flow into one another in the context of two popular scientific applications.&lt;br /&gt;
* &#039;&#039;&#039;Cons of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# Requires some knowledge of the MATLAB programming language and basic matrix operations.&lt;br /&gt;
*# Because it is such a flexible environment it is generally used in highly idiosyncratic ways; perhaps it would be best to focus on the design of the MATLAB interface itself and one of the workflows it permits (e.g. script-writing, making GUI layouts, or debugging).&lt;br /&gt;
&lt;br /&gt;
=== [http://en.wikipedia.org/wiki/Maya_(software) Autodesk&#039;s Maya] ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;OWNER: Trevor O&#039;Brien&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Description of application:&#039;&#039;&#039; &lt;br /&gt;
Maya is a commercial software package for high quality 3D modeling, animation, visual effects, and rendering. &lt;br /&gt;
* &#039;&#039;&#039;Workflow to critique:&#039;&#039;&#039; Designing, modeling and navigating an expansive 3D scene.&lt;br /&gt;
* &#039;&#039;&#039;Pros of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# Maya is the industry-leading environment for 3D modeling and animation, and is becoming prevalent in medical and scientific visualization.&lt;br /&gt;
*# A good deal of research on the design of Maya tools has been published in the HCI community.  May be interesting to compare design decisions against workflow analysis.&lt;br /&gt;
*# 3D nature allows for interesting perceptual analysis.&lt;br /&gt;
* &#039;&#039;&#039;Cons of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# Potential difficulty finding enough users who are comfortable with Maya.  (Although it may be interesting to analyze novice users, having a group of experts as well would be ideal.)&lt;br /&gt;
&lt;br /&gt;
=== [http://en.wikipedia.org/wiki/SPSS SPSS] ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;OWNER: Steven Ellis&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Description of application:&#039;&#039;&#039; &lt;br /&gt;
SPSS (Statistical Package for the Social Sciences) is, as the name states, a statistical analysis program which facilitates data analysis via various descriptive and test procedures, as well as 2D visualizations. &lt;br /&gt;
* &#039;&#039;&#039;Workflow to critique:&#039;&#039;&#039; Determining the important features of a set of data.&lt;br /&gt;
* &#039;&#039;&#039;Pros of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# SPSS is one of a small group of leading statistical packages and is used by social scientists across the country (including here at Brown).&lt;br /&gt;
*# Statisticians and social scientists are often not particularly tech-savvy and have typically been trained on highly theoretical models.  Ease of use is thus a critical aspect of a program such as SPSS.&lt;br /&gt;
*# Tasks are all highly formal; thus, making them salient to non-technical users will prove an interesting exercise.&lt;br /&gt;
* &#039;&#039;&#039;Con&#039;s of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# Not the kind of program which one would ever have need to pick up on their own, thus having a somewhat steep learning curve is not necessarily a negative.&lt;br /&gt;
*# Small choice-set of visualizations.&lt;br /&gt;
&lt;br /&gt;
=== [http://blast.ncbi.nlm.nih.gov/Blast.cgi NCBI BLAST Searching] ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;OWNER: Ian Spector&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Description of application:&#039;&#039;&#039; &lt;br /&gt;
BLAST (Basic Local Alignment Search Tool) is a tool used in bioinformatics to compare the similarity of genetic or proteomic information that may be discovered by scientists to a known database of genomic information.&lt;br /&gt;
* &#039;&#039;&#039;Workflow to critique:&#039;&#039;&#039; Determining similarity between a discovered genetic sequence and known sequences.&lt;br /&gt;
* &#039;&#039;&#039;Pros of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# BLAST searching through NCBI (National Center for Biotechnology Information) is a heavily used resource for genetic research around the world.&lt;br /&gt;
*# There is a massive amount of information available to be mined in the NCBI databases, and any improvement to the existing way the system works would likely be appreciated by all who make use of the service.&lt;br /&gt;
* &#039;&#039;&#039;Cons of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# Outrageously complicated for anyone who doesn&#039;t know anything about genetics, and possibly even for those who are quite familiar.&lt;br /&gt;
*# The learning curve is vertical.&lt;br /&gt;
&lt;br /&gt;
=== Tekplot for 3D fluid flow analysis ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;OWNER: David&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Description of application:&#039;&#039;&#039; &lt;br /&gt;
Tekplot provides interactive access to 3D time-varying fluid flow simulations.  It can show both data values in 3D and the meshes on which they were calculated.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Workflow:&#039;&#039;&#039;&lt;br /&gt;
*Specify the format of your data (usually only needed for first use)&lt;br /&gt;
*Import data&lt;br /&gt;
*Get an overview of what&#039;s there&lt;br /&gt;
**Sometimes 3D, sometimes going through 2D slices, often looking at a small subset of time steps.&lt;br /&gt;
**Confirm that the simulation didn&#039;t produce something obviously wrong&lt;br /&gt;
**Identify a region/slice/time-step to study more closely&lt;br /&gt;
*Drill down into region of interest&lt;br /&gt;
**Study in more detail, quantifying values at specific locations, confirming broader trends, comparing with conditions from another simulation.&lt;br /&gt;
**Determine parameters for next simulation or results to publish and iterate&lt;/div&gt;</summary>
		<author><name>Adam Darlow</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Application_Critiques&amp;diff=2197</id>
		<title>CS295J/Application Critiques</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Application_Critiques&amp;diff=2197"/>
		<updated>2009-02-25T19:05:06Z</updated>

		<summary type="html">&lt;p&gt;Adam Darlow: Started adding Microsoft OneNote&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Applications ==&lt;br /&gt;
&lt;br /&gt;
A list of analytical applications we will consider critiquing and the pros and cons of proceeding with each.&lt;br /&gt;
&lt;br /&gt;
=== [http://www.craigslist.org Craigslist] ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Description of application:&#039;&#039;&#039; A website that hosts user-submitted classified ads, from job postings to garage sale items.&lt;br /&gt;
* &#039;&#039;&#039;Workflow to critique:&#039;&#039;&#039; The process of purchasing an item from the site (for example, &amp;quot;men&#039;s size 10 shoes&amp;quot;).&lt;br /&gt;
* &#039;&#039;&#039;Pros of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# Most people are familiar with the site, or can readily try using it.&lt;br /&gt;
*# Has a design clearly different from most websites, yet is extremely popular (despite its design?).&lt;br /&gt;
*# May be a source for discussing cognitive limitations such as information overload and problems with categorization.&lt;br /&gt;
* &#039;&#039;&#039;Cons of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# Not used for scientific analysis, so it doesn&#039;t exactly meet the criteria for the type of application we&#039;re looking for.&lt;br /&gt;
&lt;br /&gt;
=== [http://en.wikipedia.org/wiki/Microsoft_Excel Microsoft Excel 2007] ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;OWNER: Andrew Bragdon&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Description of application:&#039;&#039;&#039; A spreadsheet application that helps users perform complex financial and scientific calculations and analysis.  &lt;br /&gt;
* &#039;&#039;&#039;Workflow to critique:&#039;&#039;&#039; A sensemaking task in a large spreadsheet, such as understanding why the project is over budget by more than 20%.&lt;br /&gt;
* &#039;&#039;&#039;Pros of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# Most people are familiar with it or can readily try using it.&lt;br /&gt;
*# Excel is used heavily by those who adopt it, frequently for analysis and sensemaking tasks such as the workflow outlined above.&lt;br /&gt;
*# Contains many visualization features including graph generation, equation flow visualization, and conditional formatting.&lt;br /&gt;
*# Is often used to ask questions of the nature &amp;quot;what if...&amp;quot; and even includes basic features to support this workflow.&lt;br /&gt;
*# Is used extensively for scientific analysis and visualization.  Although it does not include some of the features of other scientific analysis packages, it is generally seen as much easier to use and there are third-party add-ins which accommodate many of these features.&lt;br /&gt;
* &#039;&#039;&#039;Cons of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# Spreadsheets may not be seen as fun.&lt;br /&gt;
&lt;br /&gt;
=== [http://en.wikipedia.org/wiki/Microsoft_OneNote Microsoft OneNote] ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;OWNER: Adam Darlow&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Description of application:&#039;&#039;&#039; A free-form note-taking application which allows users to enter, annotate and organize text and other information.  &lt;br /&gt;
* &#039;&#039;&#039;Workflow to critique:&#039;&#039;&#039; Taking notes from a lecture while viewing it, then organizing those notes into a presentable summary.&lt;br /&gt;
* &#039;&#039;&#039;Pros of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# Most people are unfamiliar with the application, but it has a shallow learning curve.&lt;br /&gt;
*# Can test transfer of knowledge from other Microsoft applications.&lt;br /&gt;
*# TBD&lt;br /&gt;
* &#039;&#039;&#039;Cons of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# TBD&lt;br /&gt;
&lt;br /&gt;
=== [http://en.wikipedia.org/wiki/Photoshop Photoshop] ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;OWNER:&#039;&#039;&#039;&#039;&#039; [[User:E J Kalafarski|E J Kalafarski]] 15:43, 19 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Description of application:&#039;&#039;&#039; A graphics-editing program for complex image manipulation or analysis.&lt;br /&gt;
* &#039;&#039;&#039;Workflow to critique:&#039;&#039;&#039; A goal-oriented task requiring a series of non-linear operations, such as the application of masks, the organization of layers, the application of filters, and the navigation of paths, color channels, and the 2D space.&lt;br /&gt;
* &#039;&#039;&#039;Pros of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# Has a non-linear workflow and 2.5D space (layers and masks require semi-three-dimensional thinking), allowing for goal-oriented tasks, each with various possible approaches.&lt;br /&gt;
*# Is familiar in goal and purpose to most users, but most users will need to explore the application space and discover functionality. &lt;br /&gt;
*# Applications of design principles such as simplicity (or lack thereof), predictability, feedback and control, and interruptibility.  These have cognitive analogues in affordance (e.g., time-dependent widgets like history and space-dependent widgets such as channels and paths), retention, etc.&lt;br /&gt;
* &#039;&#039;&#039;Cons of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# Image manipulation bears only a limited relation to scientific analysis, in fields such as computer vision, object recognition, HCI, etc.&lt;br /&gt;
&lt;br /&gt;
=== [http://en.wikipedia.org/wiki/Google_Scholar Google Scholar] ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;OWNER: Gideon Goldin&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Description of application:&#039;&#039;&#039; A web search engine for scholarly articles.  &lt;br /&gt;
* &#039;&#039;&#039;Workflow to critique:&#039;&#039;&#039; The process of finding and selecting an article with incomplete information (e.g., partial title, single author, etc.).&lt;br /&gt;
* &#039;&#039;&#039;Pros of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# Used often by those in research. By some metrics it is considered the largest scholarly search engine.&lt;br /&gt;
*# Very similar to general-purpose web search engines such as Google, with obvious implications for familiarity.&lt;br /&gt;
*# Advanced search allows highly customizable searches.&lt;br /&gt;
*# A simple program that is not too in-depth.&lt;br /&gt;
* &#039;&#039;&#039;Cons of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# Not used directly for scientific analysis (except for right now).&lt;br /&gt;
*# 2D interface.&lt;br /&gt;
&lt;br /&gt;
=== [http://en.wikipedia.org/wiki/MATLAB MATLAB] ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;OWNER: Jon Ericson&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Description of application:&#039;&#039;&#039; A programming language and computing environment that permits easy manipulation of matrices, flexible data plotting, as well as algorithm and GUI development.&lt;br /&gt;
* &#039;&#039;&#039;Workflow to critique:&#039;&#039;&#039; Flexibly graphing data through a GUI developed within MATLAB (e.g. quickly changing the independent variable or axis scale via a drop-down menu, etc.)&lt;br /&gt;
* &#039;&#039;&#039;Pros of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# MATLAB is widely used in the scientific community for data analysis and visualization.&lt;br /&gt;
*# It is often used in conjunction with Excel (it has Excel import/export functions), and therefore might permit analysis of how two workflows interact or flow into one another in the context of two popular scientific applications.&lt;br /&gt;
* &#039;&#039;&#039;Cons of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# Requires some knowledge of the MATLAB programming language and basic matrix operations.&lt;br /&gt;
*# Because it is such a flexible environment, it is generally used in highly idiosyncratic ways; perhaps it would be best to focus on the design of the MATLAB interface itself and one of the workflows it permits (e.g. script-writing, making GUI layouts, or debugging).&lt;br /&gt;
&lt;br /&gt;
=== [http://en.wikipedia.org/wiki/Maya_(software) Autodesk&#039;s Maya] ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;OWNER: Trevor O&#039;Brien&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Description of application:&#039;&#039;&#039; &lt;br /&gt;
Maya is a commercial software package for high quality 3D modeling, animation, visual effects, and rendering. &lt;br /&gt;
* &#039;&#039;&#039;Workflow to critique:&#039;&#039;&#039; Designing, modeling and navigating an expansive 3D scene.&lt;br /&gt;
* &#039;&#039;&#039;Pros of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# Maya is the industry-leading environment for 3D modeling and animation, and is becoming prevalent in medical and scientific visualization.&lt;br /&gt;
*# A good deal of research on the design of Maya tools has been published in the HCI community.  It may be interesting to compare design decisions against workflow analysis.&lt;br /&gt;
*# Its 3D nature allows for interesting perceptual analysis.&lt;br /&gt;
* &#039;&#039;&#039;Cons of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# It may be difficult to find enough users who are comfortable with Maya.  (Although it may be interesting to analyze novice users, having a group of experts as well would be ideal.)&lt;br /&gt;
&lt;br /&gt;
=== [http://en.wikipedia.org/wiki/SPSS SPSS] ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;OWNER: Steven Ellis&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Description of application:&#039;&#039;&#039; &lt;br /&gt;
SPSS (Statistical Package for the Social Sciences) is, as the name states, a statistical analysis program which facilitates data analysis via various descriptive and test procedures, as well as 2D visualizations. &lt;br /&gt;
* &#039;&#039;&#039;Workflow to critique:&#039;&#039;&#039; Determining the important features of a set of data.&lt;br /&gt;
* &#039;&#039;&#039;Pros of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# SPSS is one of a small group of leading statistical packages and is used by social scientists across the country (including here at Brown).&lt;br /&gt;
*# Statisticians and social scientists are often not particularly tech-savvy, and have often been trained on highly theoretical models.  Ease of use is thus a critical aspect of a program such as SPSS.&lt;br /&gt;
*# Tasks are all highly formal, so making them salient for non-techies will prove an interesting exercise.&lt;br /&gt;
* &#039;&#039;&#039;Cons of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# Not the kind of program one would ever need to pick up on one&#039;s own, so a somewhat steep learning curve is not necessarily a negative.&lt;br /&gt;
*# Small choice-set of visualizations.&lt;br /&gt;
&lt;br /&gt;
=== [http://blast.ncbi.nlm.nih.gov/Blast.cgi NCBI BLAST Searching] ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;OWNER: Ian Spector&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Description of application:&#039;&#039;&#039; &lt;br /&gt;
BLAST (Basic Local Alignment Search Tool) is a tool used in bioinformatics to compare the similarity of genetic or proteomic information that may be discovered by scientists to a known database of genomic information.&lt;br /&gt;
* &#039;&#039;&#039;Workflow to critique:&#039;&#039;&#039; Determining similarity between a discovered genetic sequence and known sequences.&lt;br /&gt;
* &#039;&#039;&#039;Pros of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# BLAST searching through NCBI (National Center for Biotechnology Information) is a heavily used resource for genetic research around the world.&lt;br /&gt;
*# There is a massive amount of information available to be mined in the NCBI databases, and any improvement to the existing way the system works would likely be appreciated by all who make use of the service.&lt;br /&gt;
* &#039;&#039;&#039;Cons of critiquing&#039;&#039;&#039;&lt;br /&gt;
*# Outrageously complicated for anyone who doesn&#039;t know anything about genetics, and possibly even for those who are quite familiar.&lt;br /&gt;
*# The learning curve is vertical.&lt;br /&gt;
&lt;br /&gt;
=== Tekplot for 3D fluid flow analysis ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;OWNER: David&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Description of application:&#039;&#039;&#039; &lt;br /&gt;
Tekplot provides interactive access to 3D time-varying fluid flow simulations.  It can show both data values in 3D and the meshes on which they were calculated.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Workflow:&#039;&#039;&#039;&lt;br /&gt;
*Specify the format of your data (usually only needed for first use)&lt;br /&gt;
*Import data&lt;br /&gt;
*Get an overview of what&#039;s there&lt;br /&gt;
**Sometimes 3D, sometimes going through 2D slices, often looking at a small subset of time steps.&lt;br /&gt;
**Confirm that the simulation didn&#039;t produce something obviously wrong&lt;br /&gt;
**Identify a region/slice/time-step to study more closely&lt;br /&gt;
*Drill down into region of interest&lt;br /&gt;
**Study in more detail, quantifying values at specific locations, confirming broader trends, comparing with conditions from another simulation.&lt;br /&gt;
**Determine parameters for next simulation or results to publish and iterate&lt;/div&gt;</summary>
		<author><name>Adam Darlow</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal&amp;diff=1922</id>
		<title>CS295J/Research proposal</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal&amp;diff=1922"/>
		<updated>2009-02-06T16:58:31Z</updated>

		<summary type="html">&lt;p&gt;Adam Darlow: /* Information Processing Approach to Cognition */  added a sentence on relevance&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Project summary==&lt;br /&gt;
&lt;br /&gt;
We propose to integrate models of cognition, models of perception, rules of design, and concepts from the discipline of human-computer interaction to develop a predictive model of user performance in interacting with computer software for visual and analytical work.  Our proposed model comprises a set of computational elements representing components of human cognition, memory, or perception.  The collective abilities and limitations of these elements can be used to provide feedback on the likely efficacy of user interaction techniques.&lt;br /&gt;
&lt;br /&gt;
The choice of human computational elements will be guided by several models or theories of cognition and perception, including Gestalt, Distributed, Gibson, ???(where pathway, when pathway)???, ???working-memory???, ..., and ???.  The list of elements will be extensible.  The framework coupling them will allow for experimental predictions of the utility of user interfaces that can be verified against human performance.&lt;br /&gt;
&lt;br /&gt;
Coupling the system with users will involve a data capture mechanism for collecting the communications between a user interface and a user.  These will be primarily event-based, and will include a new low-cost camera-based eye-tracking system.&lt;br /&gt;
&lt;br /&gt;
During early development, existing interfaces will be evaluated manually to characterize their &lt;br /&gt;
&lt;br /&gt;
(we need some way to specify interaction techniques...)&lt;br /&gt;
&lt;br /&gt;
==Specific Contributions==&lt;br /&gt;
# A model of human cognitive and perceptual abilities when using computers&lt;br /&gt;
## Demonstration of the model in predicting human performance with some interfaces.&lt;br /&gt;
## ???&lt;br /&gt;
# Something about design rules collected and merged&lt;br /&gt;
## Something comparing these collected rules to a baseline (establishing their value)&lt;br /&gt;
## ???&lt;br /&gt;
# Traditionally, software design and usability testing are focused on low-level task performance.  However, prior work (Gonzales, et al.) provides strong empirical evidence that users also work at a higher, &#039;&#039;working sphere&#039;&#039; level.  Our model will specifically identify and predict key aspects of higher-level information work behaviors, such as task switching.  We will conduct initial exploratory studies to test specific instances of this high-level hypothesis.  We will then use the refined model to identify specific predictions for the outcome of a formal, ecologically valid study involving a complex, non-trivial application.&lt;br /&gt;
# A low-overhead mechanism for capturing event-based interactions between a user and a computer, including web-cam based eye tracking.  (should we buy or find out about borrowing use of pupil tracker?)  &#039;&#039;Should we include other methods of interaction here?  Audio recognition seems to be the lowest cost.  It would seem that a system that took into account head-tracking, audio, and fingertip or some other high-DOF input would provide a very strong foundation for a multi-modal HCI system.  It may be more interesting to create a software toolkit that allows for synchronized usage of those inputs than a low-cost hardware setup for pupil-tracking.  I agree pupil-tracking is useful, but developing something in-house may not be the strongest contribution we can make with our time.&#039;&#039; (Trevor)&lt;br /&gt;
## Accuracy study of eye tracking (2 cameras?  double as an input device?)&lt;br /&gt;
## ???&lt;br /&gt;
# A classification of standard design guidelines into overarching principles with quantifiable cognitive analogues &#039;&#039;&#039;OWNER&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 16:16, 6 February 2009 (UTC)&lt;br /&gt;
## A ranking/weighting for these principles based on their cognitive relevance&lt;br /&gt;
## An interface evaluation mechanism for the application of these principles and their associated guidelines.&lt;br /&gt;
# A systematic technique to determine task distribution based on psychological principles. (Owner - [http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon])&lt;br /&gt;
## We can build a computational classification algorithm based on the nature of the task properties (e.g., memory-intensive, perceptual, reasoning, etc...)&lt;br /&gt;
## Construct a simple set of guidelines for UI engineers to use&lt;br /&gt;
&lt;br /&gt;
==Specific Aims==&lt;br /&gt;
&lt;br /&gt;
# build X&lt;br /&gt;
# build Y&lt;br /&gt;
# run experiment Z&lt;br /&gt;
# compare X with existing approach Q&lt;br /&gt;
# Develop a scoring system for interfaces to evaluate the degree to which all changes and causal relations are tracked by motion cues that are contiguous in time and/or space.&lt;br /&gt;
# Accurately assess computational and psychological costs for tasks and subtasks.  To do this, we will develop two non-trivial prototype systems; a conventional control system and a novel system which is based on our model of task switching.  We will use our model to make specific predictions about relative task performance and user affect responses, and then test these predictions empirically in a formal study.&lt;br /&gt;
# Develop model that accounts for qualitatively different psychological tasks&lt;br /&gt;
# Test model on real-world data&lt;br /&gt;
# Build classification for design guidelines based on cognitive analogues [[User:E J Kalafarski|E J Kalafarski]] 16:18, 6 February 2009 (UTC)&lt;br /&gt;
# Parse and classify existing guidelines by this new metric [[User:E J Kalafarski|E J Kalafarski]] 16:18, 6 February 2009 (UTC)&lt;br /&gt;
# Build an automated evaluation rubric for applying these classifications to interfaces in development [[User:E J Kalafarski|E J Kalafarski]] 16:18, 6 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
===Models of cognition===&lt;br /&gt;
There are several models of cognition, ranging from fundamental aspects of neurological processing to extremely high-level psychological analysis.  Three main theories seem to have become recognized as the most helpful in conceptualizing the actual process of HCI.  These models all agree that one cannot accurately analyze HCI by viewing the user without context, but the extent and nature of this context varies greatly.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[http://en.wikipedia.org/wiki/Activity_Theory Activity Theory]&#039;&#039;&#039;, developed in the early 20th century by Russian psychologists S.L. Rubinstein and A.N. Leontiev, posits the existence of four discrete aspects of human-computer interaction.  The &amp;quot;Subject&amp;quot; is the human interacting with the item, who possesses an &amp;quot;Object&amp;quot; (e.g. a goal) which they hope to accomplish by using a tool.  The Subject conceptualizes the realization of the Object via an &amp;quot;Action&amp;quot;, which may be as simple or complex as is necessary.  The Action is made up of one or more &amp;quot;Operations&amp;quot;, the most fundamental level of interaction including typing, clicking, etc.&lt;br /&gt;
&lt;br /&gt;
A key concept in Activity Theory is that of the artifact, which mediates all interaction.  The computer itself need not be the only artifact in HCI - others include all sorts of signs, algorithmic methods, instruments, etc.&lt;br /&gt;
&lt;br /&gt;
A longer synopsis of Activity Theory may be found at [http://mcs.open.ac.uk/yr258/act_theory/ this website].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The [http://en.wikipedia.org/wiki/Situated_cognition Situated Action Model]&#039;&#039;&#039; focuses on emergent behavior, emphasizing the subjective aspect of human-computer interaction and the therefore-necessary allowance for a wide variety of users.  This model proposes the least amount of contextual interaction, and seems to maintain that the interactive experience is determined entirely by the user&#039;s ability to use the system in question.  While limiting, this concept of usability can be very informative when designing for less tech-savvy users.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[http://en.wikipedia.org/wiki/Distributed_cognition Distributed Cognition]&#039;&#039;&#039; proposes that the computer (or, as in Activity Theory, any other artifact) can be used and ought to be thought of as an extension of the mental processing of the human.  This is not to say that the two are of equal or even comparable cognitive abilities, but that each has unique strengths and that recognition of and planning around these relative advantages can lead to increased efficiency and effectiveness.  The rotation of blocks in Tetris serves as a perfect example of this sort of cognitive symbiosis.&lt;br /&gt;
&lt;br /&gt;
(Steven)&lt;br /&gt;
&lt;br /&gt;
====Workflow Context====&lt;br /&gt;
There are, at least, two levels at which users work ([http://portal.acm.org/citation.cfm?id=985692.985707&amp;amp;coll=portal&amp;amp;dl=ACM&amp;amp;CFID=20781736&amp;amp;CFTOKEN=83176621 Gonzales, et al., 2004]).  Users accomplish individual low-level tasks which are part of larger &#039;&#039;working spheres&#039;&#039;; for example, an office worker might send several emails, create several Post-It (TM) note reminders, and then edit a Word document, each of these smaller tasks being part of a single larger working sphere of &amp;quot;adding a new section to the website.&amp;quot;  Thus, it is important to understand this larger workflow context - which often involves extensive levels of multi-tasking, as well as switching between a variety of computing devices and traditional tools, such as notebooks.  In this study it was found that the information workers surveyed typically switch individual tasks every 2 minutes and have many simultaneous working spheres which they switch between on average every 12 minutes.  This frenzied pace of switching tasks and switching working spheres suggests that users will not be using a single application or device for a long period of time, and that affordances to support this characteristic pattern of information work are important.&lt;br /&gt;
&lt;br /&gt;
Czerwinski, et al. conducted a [http://portal.acm.org/citation.cfm?id=985692.985715&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 diary study] of task switching and interruptions of users in 2004.  This study showed that task complexity, task duration, length of absence, and number of interruptions all affected the users&#039; own perceived difficulty of switching tasks.  [http://delivery.acm.org/10.1145/1250000/1240730/p677-iqbal.pdf?key1=1240730&amp;amp;key2=4525483321&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Iqbal, et al.] studied task disruption and recovery in a field study, and found that users often visited several applications as a result of an alert, such as a new email notification, and that 27% of task suspensions resulted in 2 hours or more of disruption.  Users in the study said that losing context was a significant problem in switching tasks, and led in part to the length of some of these disruptions.  This work hints at the importance of providing cues to users to maintain and regain lost context during task switching.&lt;br /&gt;
&lt;br /&gt;
(Andrew Bragdon - OWNER)&lt;br /&gt;
&lt;br /&gt;
The problem of task switching is exacerbated when some tasks are more routine than others. When a person intends to switch from a routine task to a novel task at some later time, they often forget the context of the original task ([http://web.ebscohost.com/ehost/pdf?vid=1&amp;amp;hid=7&amp;amp;sid=54ec1e22-3df2-462c-b484-7a7c052c2173%40SRCSM1 Aarts et al., 1999]). Also, if both tasks are done in the same context, with the same tools or with the same materials, people have difficulty inhibiting the routine task while doing the novel task (Stroop, 1935). This inhibition also makes switching back to the routine task slower (Allport et al., 1994). All of these problems can be alleviated to some degree by salient cues in the environment. The intention to switch becomes easy to act on when there is a salient reminder at the appropriate time (McDaniel and Einstein, 1993) and associating different environmental cues with different goals can automatically trigger appropriate behavior ([http://web.ebscohost.com/ehost/pdf?vid=1&amp;amp;hid=4&amp;amp;sid=68f77032-f093-4139-a833-760d2217b513%40sessionmgr9 Aarts and Dijksterhuis, 2003]).&lt;br /&gt;
&lt;br /&gt;
(Adam)&lt;br /&gt;
&lt;br /&gt;
(Edited by Andrew)&lt;br /&gt;
&lt;br /&gt;
====Quantitative Models: Fitts&#039;s law, Steering Law====&lt;br /&gt;
Fitts&#039;s law and the steering law are examples of quantitative models that predict user performance when using certain types of user interfaces.  In addition to these classic models, [http://delivery.acm.org/10.1145/1250000/1240850/p1495-cao.pdf?key1=1240850&amp;amp;key2=9904483321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Cao and Zhai] developed and validated a quantitative model of human performance of pen stroke gestures in 2007.  [http://tlaloc.sfsu.edu/~lank/research/appearing/FSS604LankE.pdf Lank and Saund] utilized a model which used curvature to predict the speed of a pen as it moved across a surface to help disambiguate target selection intent.&lt;br /&gt;
&lt;br /&gt;
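As a minimal sketch of how such a quantitative model yields testable predictions, the Shannon formulation of Fitts&#039;s law can be written in a few lines of Python; the constants a and b below are hypothetical placeholders that would normally be fit to measured pointing-trial data:&lt;br /&gt;

```python
import math

def fitts_mt(distance, width, a=0.1, b=0.15):
    # Shannon formulation of Fitts's law:
    #   MT = a + b * log2(D / W + 1)
    # distance: center-to-center distance to the target (e.g. pixels)
    # width:    target width along the axis of motion (same units)
    # a, b:     empirically fitted constants (hypothetical values here)
    index_of_difficulty = math.log2(distance / width + 1.0)
    return a + b * index_of_difficulty

# A distant, small target is predicted to take longer to acquire
# than a near, large one.
slow = fitts_mt(distance=800, width=16)
fast = fitts_mt(distance=100, width=64)
```

Fitting a and b per device or user and comparing the predicted movement times against logged pointing data is the kind of verification step described above.&lt;br /&gt;
&lt;br /&gt;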
In addition, quantitative models are often tested against new interfaces to verify that they hold.  For example, [http://portal.acm.org/citation.cfm?id=1054972.1055012&amp;amp;coll=GUIDE&amp;amp;dl=ACM&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Grossman et al.] verified that their Bubble Cursor approach to enlarging effective pointing target sizes obeyed Fitts&#039;s law for actual distance traveled.&lt;br /&gt;
&lt;br /&gt;
In addition to formal models, machine learning techniques have been applied to modeling user interaction as well.  For example, [http://delivery.acm.org/10.1145/1250000/1240669/p271-hurst.pdf?key1=1240669&amp;amp;key2=6465483321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Hurst, et al.], used a learning classifier, trained on low-level mouse and keyboard usage patterns, to identify novice and expert users dynamically with accuracies as high as 91%.  This classifier was then used to provide different information and feedback to the user as appropriate.&lt;br /&gt;
&lt;br /&gt;
(Andrew Bragdon - OWNER)&lt;br /&gt;
&lt;br /&gt;
====Distributed cognition====&lt;br /&gt;
&lt;br /&gt;
Distributed cognition is a theory in which thought takes place both inside and outside the brain. Humans have a great ability to use tools and to incorporate their environments into their sphere of thinking. Clark puts it nicely in [http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature#Cognition Clark-1994-TEM].&lt;br /&gt;
&lt;br /&gt;
Therefore, optimal HCI designs will treat the person, interface, and computer as a single holistic system in which cognition occurs.&lt;br /&gt;
&lt;br /&gt;
In practical terms, the issue for our proposal is how best to maximize utility by distributing cognitive tasks among the components of the whole system. Simply put: which tasks can we off-load to the computer to do for us, faster and more accurately? Which tasks should we purposely leave the computer out of?&lt;br /&gt;
&lt;br /&gt;
Typically, the tasks most eligible for off-loading are the ones we perform poorly. Conveniently, the tasks which computers perform poorly are ones we excel at. A few examples:&lt;br /&gt;
&lt;br /&gt;
*Computers&#039; areas of expertise: number crunching, memory, logical reasoning, precision&lt;br /&gt;
*Humans&#039; areas of expertise: associative thought, real-world knowledge, social behavior, alogical reasoning, tolerance of imprecision&lt;br /&gt;
&lt;br /&gt;
Using this division of cognitive labor allows us to optimize task workflows. Ignoring it creates strain and bottlenecks at the computer, the human, or the interface. The field of HCI is full of failures that can be attributed to not recognizing which tasks should be handled by which subsystem.&lt;br /&gt;
&lt;br /&gt;
As a heuristic for dividing up thinking, one might turn to the dual-process theory literature [http://vrl.cs.brown.edu/wiki/CS295J/Literature Evans-2003-ITM]. What is most often called System 1 covers what humans are good at, while System 2 tasks are what computers do well.&lt;br /&gt;
&lt;br /&gt;
(Owner - [http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon])&lt;br /&gt;
&lt;br /&gt;
====Information Processing Approach to Cognition====&lt;br /&gt;
The dominant approach in Cognitive Science is called information processing. It sees cognition as a system that takes in information from the environment, forms mental representations and manipulates those representations in order to create the information needed to achieve its goals. This approach includes three levels of analysis originally proposed by Marr (will cite):&lt;br /&gt;
# Computational - What are the goals of a process or representation? What are the inputs and desired outputs required of a system which performs a task? Models at this level of analysis are often considered normative models, because any agent wanting to perform the task should conform to them. Rational agent models of decision making, for example, belong at this level of analysis.&lt;br /&gt;
# Process/Algorithmic - What are the processes or algorithms involved in how humans perform the task? This is the most common level of analysis as it focuses on the mental representations, manipulations and computational faculties involved in actual human processing. Algorithmic descriptions of human capabilities and limitations, such as working memory size, belong at this level of analysis.&lt;br /&gt;
# Implementation - How are the processes and algorithms realized in actual biological computation? Dopamine theories of reward learning, for example, belong at this level of analysis.&lt;br /&gt;
The information processing approach is often contrasted with the distributed cognition approach. Its advantage is that it finds general mechanisms that are valid across many different contexts and situations. Its disadvantage is that it can have difficulty explaining the rich interactions between people and their environment.&lt;br /&gt;
&lt;br /&gt;
In considering users as information processors, interfaces should take into account people&#039;s computational limitations on short term memory, learning and vision as well as the algorithms and representations that they use to process information and pursue goals.&lt;br /&gt;
&lt;br /&gt;
(Adam)&lt;br /&gt;
&lt;br /&gt;
===Models of perception===&lt;br /&gt;
&lt;br /&gt;
====Gibsonianism====&lt;br /&gt;
(relevant stuff about Gibson&#039;s theory)&lt;br /&gt;
We will build on top of this theory/model by ... .&lt;br /&gt;
&lt;br /&gt;
Gibsonianism, named after James J. Gibson and more commonly referred to as ecological psychology, is an epistemological direct realist theory of perception and action.  In contrast to information processing and cognitivist approaches, which generally assume that perception is a constructive process operating on impoverished sense-data inputs (e.g. photoreceptor activity) to generate representations of the world with added structure and meaning (e.g. a mental or neural &amp;quot;picture&amp;quot; of a chair), ecological psychology treats perception as a relation that places animals in direct and lawful epistemic contact with behaviorally-relevant events and features of their environmental niches (Warren, 2005).  These lawfully-specified and behaviorally-relevant features of the environment constitute the possibilities for action that the environment &amp;quot;affords&amp;quot; the animal (Gibson, 1986).&lt;br /&gt;
&lt;br /&gt;
Gibson&#039;s notion of affordance has many implications for our enterprise; however, it is worth noting that the original definition of affordance emphasizes possibilities for action, not their relative likelihoods.  For example, for most humans, laptop computer screens afford puncturing with Swiss Army knives; however, it is unlikely that a user will attempt to retrieve an electronic coupon by carving it out of their monitor.  This example illustrates that interfaces often afford a class of actions that are undesirable from the perspective of both the designer and the user.&lt;br /&gt;
&lt;br /&gt;
(Jon)&lt;br /&gt;
&lt;br /&gt;
===Design guidelines===&lt;br /&gt;
&lt;br /&gt;
A multitude of rule sets exist for the design of not only interfaces but also architecture, city planning, and software development.  They range in scale from one primary rule to as many as Christopher Alexander&#039;s 253 rules for urban environments,&amp;lt;ref&amp;gt;http://hci.rwth-aachen.de/materials/publications/borchers2000a.pdf&amp;lt;/ref&amp;gt; which he introduced along with the concept of design patterns in the 1970s.  The use of these rules has likewise been studied:&amp;lt;ref&amp;gt;http://stl.cs.queensu.ca/~graham/cisc836/lectures/readings/tetzlaff-guidelines.pdf&amp;lt;/ref&amp;gt; guidelines are often only partially understood, indistinct to the developer, and &amp;quot;fraught&amp;quot; with potential usability problems in real-world situations.&lt;br /&gt;
&lt;br /&gt;
====Application to AUE====&lt;br /&gt;
&lt;br /&gt;
And yet, the vast majority of guideline sets, including the most popular rulesets, have been arrived at heuristically.  The most successful, such as Raskin&#039;s and Shneiderman&#039;s, have been forged from years of observation rather than empirical study and experimentation.  The problem is similar to the circular logic faced by automated usability evaluations: an automated system is limited to offering suggestions from a set of preprogrammed guidelines which have often not been subjected to rigorous experimentation.&amp;lt;ref&amp;gt;http://www.eecs.berkeley.edu/Pubs/TechRpts/2000/CSD-00-1105.pdf&amp;lt;/ref&amp;gt;  In the vast majority of existing studies, emphasis has been placed on either the development of guidelines or the application of existing guidelines to automated evaluation.  A mutually reinforcing development of both simultaneously has not been attempted.&lt;br /&gt;
&lt;br /&gt;
Overlap between rulesets is inevitable.  For our purposes of evaluating existing rulesets efficiently, without extracting and analyzing each rule individually, it may be desirable to identify the overarching &#039;&#039;principles&#039;&#039; or &#039;&#039;philosophy&#039;&#039; (at most two or three) of a given ruleset and determine their quantitative relevance to problems of cognition.&lt;br /&gt;
&lt;br /&gt;
====Popular and seminal examples====&lt;br /&gt;
Shneiderman&#039;s [http://faculty.washington.edu/jtenenbg/courses/360/f04/sessions/schneidermanGoldenRules.html Eight Golden Rules] date to 1987 and are arguably the most-cited.  They are heuristic, but can be somewhat classified by cognitive objective: at least two rules apply primarily to &#039;&#039;repeated use&#039;&#039;, versus &#039;&#039;discoverability&#039;&#039;.  Up to five of Shneiderman&#039;s rules emphasize &#039;&#039;predictability&#039;&#039; in the outcomes of operations and &#039;&#039;increased feedback and control&#039;&#039; in the agency of the user.  His final rule, paradoxically, removes control from the user by suggesting a reduced short-term memory load, which we can arguably classify as &#039;&#039;simplicity&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Raskin&#039;s [http://www.mprove.de/script/02/raskin/designrules.html Design Rules] are classified into five principles by the author, augmented by definitions and supporting rules.  While one principle is primarily aesthetic (a design problem arguably out of the bounds of this proposal) and one is a basic endorsement of testing, the remaining three reflect philosophies similar to Shneiderman&#039;s: reliability or &#039;&#039;predictability&#039;&#039;, &#039;&#039;simplicity&#039;&#039; or &#039;&#039;efficiency&#039;&#039; (which we can construe as two sides of the same coin), and finally a new concept of &#039;&#039;uninterruptibility&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Maeda&#039;s [http://lawsofsimplicity.com/?cat=5&amp;amp;order=ASC Laws of Simplicity] are fewer, and ostensibly emphasize &#039;&#039;simplicity&#039;&#039; exclusively, although elements of &#039;&#039;use&#039;&#039; as related by Shneiderman&#039;s rules and &#039;&#039;efficiency&#039;&#039; as defined by Raskin may be facets of this simplicity.  Google&#039;s corporate mission statement presents [http://www.google.com/corporate/ux.html Ten Principles], only half of which can be considered true interface guidelines.  &#039;&#039;Efficiency&#039;&#039; and &#039;&#039;simplicity&#039;&#039; are cited explicitly, aesthetics are once again noted as crucial, and working within a user&#039;s trust is another application of &#039;&#039;predictability&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
====Elements and goals of a guideline set====&lt;br /&gt;
&lt;br /&gt;
Myriad rulesets exist, but the variation among them is scarce; it indeed seems possible to parse these common rulesets into overarching principles that can be converted to or associated with quantifiable cognitive properties.  For example, &#039;&#039;simplicity&#039;&#039; likely has an analogue in the short-term memory retention or visual retention of the user, vis-à-vis the rule of [http://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=j5q0VvOGExYC&amp;amp;oi=fnd&amp;amp;pg=PA357&amp;amp;dq=seven+plus+or+minus+two&amp;amp;ots=prI3PKJBar&amp;amp;sig=vOZnqpnkXKGYWxK6_XlA4I_CRyI Seven, Plus or Minus Two].  &#039;&#039;Predictability&#039;&#039; likewise may have an analogue in Activity Theory, in regard to a user&#039;s perceptual expectations for a given action; &#039;&#039;uninterruptibility&#039;&#039; has implications in cognitive task-switching;&amp;lt;ref&amp;gt;http://portal.acm.org/citation.cfm?id=985692.985715&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774&amp;lt;/ref&amp;gt; and so forth.&lt;br /&gt;
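&lt;br /&gt;
To make the notion of a quantifiable cognitive analogue concrete, a toy scoring function (entirely our own construction here, not a validated metric) might penalize interfaces whose count of simultaneously visible controls exceeds the seven-plus-or-minus-two band:&lt;br /&gt;

```python
def simplicity_score(n_visible_controls, capacity=7, slack=2):
    """Toy operationalization of the 'simplicity' principle: score 1.0
    while the number of simultaneously visible controls stays within the
    7 +/- 2 short-term-memory band, decaying linearly beyond it."""
    limit = capacity + slack
    if n_visible_controls <= limit:
        return 1.0
    return max(0.0, 1.0 - 0.1 * (n_visible_controls - limit))

simplicity_score(6)   # within the band: 1.0
simplicity_score(14)  # five controls over the band: 0.5
```

An automated evaluator could compute one such score per principle and combine them, which is exactly the kind of quantifiable taxonomy this section proposes.&lt;br /&gt;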
&lt;br /&gt;
Within the scope of this proposal, we aim to reduce and refine these philosophies found in seminal rulesets and identify their logical cognitive analogues.  By assigning a quantifiable taxonomy to these principles, we will be able to rank and weight them with regard to their real-world applicability, developing a set of &amp;quot;meta-guidelines&amp;quot; and rules for applying them to a given interface in an automated manner.  Combined with cognitive models and multi-modal HCI analysis, we seek to develop, in parallel with these guidelines, the interface evaluation system responsible for their application. [[User:E J Kalafarski|E J Kalafarski]] 15:21, 6 February 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
:I. Introduction, applications of guidelines&lt;br /&gt;
::A. Application to automated usability evaluations (AUE)&lt;br /&gt;
:II. Popular and seminal examples&lt;br /&gt;
::A. Shneiderman&lt;br /&gt;
::B. Google&lt;br /&gt;
::C. Maeda&lt;br /&gt;
::D. Existing international standards&lt;br /&gt;
:III. Elements of guideline sets, relationship to design patterns&lt;br /&gt;
:IV. Goals for potentially developing a guideline set within the scope of this proposal&lt;br /&gt;
&lt;br /&gt;
===User interface evaluations===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Interaction capture====&lt;br /&gt;
&lt;br /&gt;
[http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4376144 Yi et al.] surveyed the visualization literature and categorized the types of interactions users are faced with. They are as follows:&lt;br /&gt;
# Select: mark something as interesting &lt;br /&gt;
# Explore: show me something else &lt;br /&gt;
# Reconfigure: show me a different arrangement &lt;br /&gt;
# Encode: show me a different representation&lt;br /&gt;
# Abstract/Elaborate: show me more or less detail &lt;br /&gt;
# Filter: show me something conditionally &lt;br /&gt;
# Connect: show me related items &lt;br /&gt;
&lt;br /&gt;
Different GUI components may be able to perform the same type of interaction. We would like to categorize the GUI components or patterns used to bring about these interactions. We then have a library of components we can use to complete a given task. The goal is to create components for a given interaction that minimize cost to the user. Because the cost of a component is likely dependent on the other components used, the goal of the designer might be to choose a combination of components that minimizes this cost. To do this, we need a way to measure costs, which is discussed in the next section.&lt;br /&gt;
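&lt;br /&gt;
The selection problem described above can be sketched as a small combinatorial search over the component library. The component names, base costs, and clash penalties below are invented purely for illustration; a real library would draw its costs from the measures in the next section.&lt;br /&gt;

```python
from itertools import product

# Hypothetical library: per-interaction alternatives with a base cost,
# plus a penalty when two chosen components clash (e.g. both demand
# prime screen real estate). All names and numbers are illustrative.
library = {
    "filter": {"checkbox_panel": 2.0, "query_box": 3.5},
    "encode": {"dropdown": 1.5, "toolbar": 1.0},
}
clash_penalty = {("checkbox_panel", "toolbar"): 2.5}

def total_cost(choice):
    """Sum of base costs plus penalties for clashing component pairs."""
    cost = sum(library[i][c] for i, c in choice.items())
    picked = set(choice.values())
    for (a, b), p in clash_penalty.items():
        if a in picked and b in picked:
            cost += p
    return cost

def best_combination():
    """Brute-force the component assignment that minimizes total cost."""
    interactions = list(library)
    best = min(
        (dict(zip(interactions, combo))
         for combo in product(*(library[i] for i in interactions))),
        key=total_cost,
    )
    return best, total_cost(best)
```

Because the clash penalty makes component costs interdependent, a greedy per-interaction choice can be suboptimal; the exhaustive search above is feasible only for small libraries, which is all this sketch claims.&lt;br /&gt;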
&lt;br /&gt;
====Cost-based analyses====&lt;br /&gt;
In [http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4658124 A Framework of Interaction Costs in Information Visualization], Lam performs a survey of 32 user studies and classifies several types of costs that can be used for qualitative interface evaluation. The classification scheme is based on Donald Norman&#039;s [http://en.wikipedia.org/wiki/Seven_stages_of_action Seven Stages of Action] from his book, [http://www.amazon.com/Design-Everyday-Things-Donald-Norman/dp/0385267746 The Design of Everyday Things] ([http://www.networksplus.net/tracyj/everydaythings.pdf summary]).&lt;br /&gt;
&lt;br /&gt;
# &#039;&#039;&#039;Decision costs:&#039;&#039;&#039; How does user performance decrease when there is an overwhelming amount of data to observe or too many possible actions to take?&lt;br /&gt;
# &#039;&#039;&#039;System-power costs:&#039;&#039;&#039; How does the user translate a high-level goal into a sequence of allowable actions by the interface?&lt;br /&gt;
# &#039;&#039;&#039;Multiple input mode costs:&#039;&#039;&#039; Cost of providing an action selection system that is not unified, for example, if there is one button that does two different things, depending on context. &lt;br /&gt;
# &#039;&#039;&#039;Physical-motion costs:&#039;&#039;&#039; Physical cost to the user to interact with the interface, for example, measuring mouse movement costs with Fitts&#039; Law.&lt;br /&gt;
# &#039;&#039;&#039;Visual-cluttering costs:&#039;&#039;&#039; Cost due to unwanted visual distractions, such as a mouse hovering pop-up occluding part of the screen.&lt;br /&gt;
# &#039;&#039;&#039;View- and State-change costs:&#039;&#039;&#039; When the user causes the interface to change views, the new view should be consistent with the old one, in that it should meet the user&#039;s expectations of where things will be in the new view, based on their knowledge of the old one.&lt;br /&gt;
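&lt;br /&gt;
One way to use this taxonomy for automated evaluation is to collapse the six categories into a single weighted score per interface. The per-category estimates and weights below are invented inputs that an evaluator (human or automated) would supply; the taxonomy is Lam&#039;s, the numbers are not.&lt;br /&gt;

```python
# Aggregating Lam's six interaction-cost categories into one comparable score.
CATEGORIES = ["decision", "system_power", "multiple_input_mode",
              "physical_motion", "visual_clutter", "view_state_change"]

def interface_cost(estimates, weights=None):
    """Weighted sum of the six cost categories; missing categories
    default to zero, and weights default to 1.0 each."""
    weights = weights or {c: 1.0 for c in CATEGORIES}
    return sum(weights[c] * estimates.get(c, 0.0) for c in CATEGORIES)

# Two hypothetical designs scored on 0-10 scales by an evaluator.
design_a = {"decision": 6, "physical_motion": 2, "visual_clutter": 4}
design_b = {"decision": 3, "physical_motion": 5, "visual_clutter": 1}
# With equal weights, design_a totals 12 and design_b totals 9,
# so design_b would be preferred.
```

The weights are where task context enters: a data-entry tool might up-weight physical-motion costs, while an exploratory visualization might up-weight decision costs.&lt;br /&gt;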
&lt;br /&gt;
==== Evaluation in practice ====&lt;br /&gt;
User interfaces are usually evaluated in practice using two methods: &#039;&#039;usability inspection methods&#039;&#039;, where a programmer or one or more experts evaluate the interface through inspection; or &#039;&#039;usability testing&#039;&#039;, where empirical tests are performed with a group of naive human users. Usability inspection methods include [http://en.wikipedia.org/wiki/Cognitive_walkthrough Cognitive walkthrough], [http://en.wikipedia.org/wiki/Heuristic_evaluation Heuristic evaluation], and [http://en.wikipedia.org/wiki/Pluralistic_walkthrough Pluralistic walkthrough]. While these inspection methods do not use naive human subjects, their details might be useful in helping to formalize what interactions occur between a user and an interface, and what each interaction&#039;s costs are for a given design.&lt;br /&gt;
&lt;br /&gt;
[http://portal.acm.org/citation.cfm?id=108862 Jeffries et al.] provide a real-world comparison of two of the usability inspection methods (heuristic evaluation and cognitive walkthrough), usability testing, and the application of published software guidelines for interface design.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;OWNER:&#039;&#039;&#039; [[User:Eric Sodomka|Eric Sodomka]]&lt;br /&gt;
&lt;br /&gt;
=== Multimodal HCI ===&lt;br /&gt;
&lt;br /&gt;
Continued advancements in several signal processing techniques have given rise to a multitude of mechanisms that allow for rich, multimodal, human-computer interaction.  These include systems for head-tracking, eye- or pupil-tracking, fingertip tracking, recognition of speech, and detection of electrical impulses in the brain, among others [http://vr.kjist.ac.kr/~dhong/website/paperworks/hci2002coursePapers/April24/Sharma98.pdf Sharma-1998-TMH].  With ever-increasing computing power, integrating these systems in real-time applications has become a plausible endeavor. &lt;br /&gt;
&lt;br /&gt;
==== Head-tracking ====&lt;br /&gt;
:In virtual, stereoscopic environments, head-tracking has been exploited with great success to create an immersive effect, allowing a user to move freely and naturally while dynamically updating the user’s viewpoint in a visual environment.  Head-tracking has been employed in non-immersive settings as well, though careful consideration must be paid to account for unintended movements by the user, which may result in distracting visual effects.&lt;br /&gt;
&lt;br /&gt;
==== Pupil-tracking ====&lt;br /&gt;
:Pupil-tracking has been studied a great deal in the field of Cognitive Science ... (need some examples here from CogSci).  In the HCI community, pupil-tracking has traditionally been used for post-hoc analysis of interface designs, and is particularly prevalent in web interface design.  An alternative is to employ pupil-tracking in real time as an actual mode of interaction.  This has been examined in relatively few cases (citations), where typically the eyes are used to control a cursor onscreen.  Like head-tracking, implementations of pupil-tracking must account for unintended eye movements, which are extremely frequent.&lt;br /&gt;
&lt;br /&gt;
==== Fingertip-tracking, Gestural recognition ====&lt;br /&gt;
:Fingertip tracking and gestural recognition of the hands are the subjects of much research in the HCI community, particularly in the Virtual Environment and Augmented Reality disciplines.  Less implicit than head or pupil-tracking, gestural recognition of the hands may draw upon the wealth of precedents readily observed in natural human interactions.  As sensing technologies become less obtrusive and more robust, this method of interaction has the potential to become quite effective.&lt;br /&gt;
&lt;br /&gt;
==== Speech Recognition ====&lt;br /&gt;
:Speech recognition accuracy continues to improve, though integrating it effectively is non-trivial in many applications.  (More on this later.)&lt;br /&gt;
&lt;br /&gt;
==== Brain Activity Detection ====&lt;br /&gt;
:The use of electroencephalograms (EEGs) in HCI is quite recent, and with limited degrees of freedom, few robust interfaces have been designed around it.  Some recent advances in the pragmatic use of EEGs in HCI research can be seen in [http://portal.acm.org/citation.cfm?id=1357054.1357187&amp;amp;coll=Portal&amp;amp;dl=GUIDE&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Grimes et al.]  The possibility of using brain function to interface with a machine is cause for great excitement in the HCI community, and further advances in non-invasive techniques for accessing brain function may allow teleo-HCI to become a reality.&lt;br /&gt;
&lt;br /&gt;
In sum, the synchronized usage of these modes of interaction makes it possible to architect an HCI system capable of sensing and interpreting many of the mechanisms humans use to transmit information to one another.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
==Significance==&lt;br /&gt;
&lt;br /&gt;
==Preliminary results==&lt;br /&gt;
&lt;br /&gt;
==Research plan==&lt;br /&gt;
&lt;br /&gt;
We can speculate here about a longer-term research plan, but it may not be necessary to actually flesh out this part of the &amp;quot;proposal&amp;quot;&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Adam Darlow</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal&amp;diff=1855</id>
		<title>CS295J/Research proposal</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal&amp;diff=1855"/>
		<updated>2009-02-05T15:49:28Z</updated>

		<summary type="html">&lt;p&gt;Adam Darlow: /* Workflow Context */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Project summary==&lt;br /&gt;
&lt;br /&gt;
We propose to integrate theories and models of cognition, models of perception, rules of design, and concepts from the discipline of human-computer interaction to develop a predictive model of user performance in interacting with computer software for visual and analytical work.  Our proposed model comprises a set of computational elements representing components of human cognition, memory, or perception.  The collective abilities and limitations of these elements can be used to provide feedback on the likely efficacy of user interaction techniques.&lt;br /&gt;
&lt;br /&gt;
The choice of human computational elements will be guided by several models or theories of cognition and perception, including Gestalt, Distributed, Gibson, ???(where pathway, when pathway)???, ???working-memory???, ..., and ???.  The list of elements will be extensible.  The framework coupling them will allow for experimental predictions of utility of user interfaces that can be verified against human performance.&lt;br /&gt;
&lt;br /&gt;
Coupling the system with users will involve a data capture mechanism for collecting the communications between a user interface and a user.  These will be primarily event based, and will include a new low-cost camera-based eye-tracking system.&lt;br /&gt;
&lt;br /&gt;
During early development, existing interfaces will be evaluated manually to characterize their &lt;br /&gt;
&lt;br /&gt;
(we need some way to specify interaction techniques...)&lt;br /&gt;
&lt;br /&gt;
==Specific Contributions==&lt;br /&gt;
# A model of human cognitive and perceptual abilities when using computers&lt;br /&gt;
## Demonstration of the model in predicting human performance with some interfaces.&lt;br /&gt;
## ???&lt;br /&gt;
# Something about design rules collected and merged&lt;br /&gt;
## Something comparing these collected rules to a baseline (establishing their value)&lt;br /&gt;
## ???&lt;br /&gt;
# A low-overhead mechanism for capturing event-based interactions between a user and a computer, including web-cam based eye tracking.  (should we buy or find out about borrowing use of pupil tracker?)  &#039;&#039;Should we include other methods of interaction here?  Audio recognition seems to be the lowest cost.  It would seem that a system that took into account head-tracking, audio, and fingertip or some other high-DOF input would provide a very strong foundation for a multi-modal HCI system.  It may be more interesting to create a software toolkit that allows for synchronized usage of those inputs than a low-cost hardware setup for pupil-tracking.  I agree pupil-tracking is useful, but developing something in-house may not be the strongest contribution we can make with our time.&#039;&#039; (Trevor)&lt;br /&gt;
## Accuracy study of eye tracking (2 cameras?  double as an input device?)&lt;br /&gt;
## ???&lt;br /&gt;
# A set of critiques of existing software used for visual and analytical work based on design rules&lt;br /&gt;
## ???&lt;br /&gt;
&lt;br /&gt;
==Specific Aims==&lt;br /&gt;
&lt;br /&gt;
# build X&lt;br /&gt;
# build Y&lt;br /&gt;
# run experiment Z&lt;br /&gt;
# compare X with existing approach Q&lt;br /&gt;
# Develop a scoring system for interfaces to evaluate the degree to which all changes and causal relations are tracked by motion cues that are contiguous in time and/or space.&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
===Models of cognition===&lt;br /&gt;
There are several models of cognition, ranging from fundamental aspects of neurological processing to extremely high-level psychological analysis.  Three main theories seem to have become recognized as the most helpful in conceptualizing the actual process of HCI.  These models all agree that one cannot accurately analyze HCI by viewing the user without context, but the extent and nature of this context varies greatly.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[http://en.wikipedia.org/wiki/Activity_Theory Activity Theory]&#039;&#039;&#039;, developed in the early 20th century by Russian psychologists S.L. Rubinstein and A.N. Leontiev, posits the existence of four discrete aspects of human-computer interaction.  The &amp;quot;Subject&amp;quot; is the human interacting with the item, who possesses an &amp;quot;Object&amp;quot; (e.g. a goal) which they hope to accomplish by using a tool.  The Subject conceptualizes the realization of the Object via an &amp;quot;Action&amp;quot;, which may be as simple or complex as is necessary.  The Action is made up of one or more &amp;quot;Operations&amp;quot;, the most fundamental level of interaction including typing, clicking, etc.&lt;br /&gt;
&lt;br /&gt;
A key concept in Activity Theory is that of the artifact, which mediates all interaction.  The computer itself need not be the only artifact in HCI - others include all sorts of signs, algorithmic methods, instruments, etc.&lt;br /&gt;
&lt;br /&gt;
A longer synopsis of Activity Theory may be found at [http://mcs.open.ac.uk/yr258/act_theory/ this website].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The [http://en.wikipedia.org/wiki/Situated_cognition Situated Action Model]&#039;&#039;&#039; focuses on emergent behavior, emphasizing the subjective aspect of human-computer interaction and the therefore-necessary allowance for a wide variety of users.  This model proposes the least amount of contextual interaction, and seems to maintain that the interactive experience is determined entirely by the user&#039;s ability to use the system in question.  While limiting, this concept of usability can be very informative when designing for less tech-savvy users.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[http://en.wikipedia.org/wiki/Distributed_cognition Distributed Cognition]&#039;&#039;&#039; proposes that the computer (or, as in Activity Theory, any other artifact) can be used and ought to be thought of as an extension of the mental processing of the human.  This is not to say that the two are of equal or even comparable cognitive abilities, but that each has unique strengths and that recognition of and planning around these relative advantages can lead to increased efficiency and effectiveness.  The rotation of blocks in Tetris serves as a perfect example of this sort of cognitive symbiosis.&lt;br /&gt;
&lt;br /&gt;
====Workflow Context====&lt;br /&gt;
There are at least two levels at which users work ([http://portal.acm.org/citation.cfm?id=985692.985707&amp;amp;coll=portal&amp;amp;dl=ACM&amp;amp;CFID=20781736&amp;amp;CFTOKEN=83176621 Gonzales, et al., 2004]).  Users accomplish individual low-level tasks which are part of larger &#039;&#039;working spheres&#039;&#039;; for example, an office worker might send several emails, create several Post-It (TM) note reminders, and then edit a Word document, each of these smaller tasks being part of a single larger working sphere of &amp;quot;adding a new section to the website.&amp;quot;  Thus, it is important to understand this larger workflow context, which often involves extensive multi-tasking as well as switching between a variety of computing devices and traditional tools, such as notebooks.  In this study it was found that the information workers surveyed typically switched individual tasks every 2 minutes and had many simultaneous working spheres, which they switched between on average every 12 minutes.  This frenzied pace of switching tasks and working spheres suggests that users will not be using a single application or device for a long period of time, and that affordances to support this are important.&lt;br /&gt;
&lt;br /&gt;
Czerwinski, et al. conducted a [http://portal.acm.org/citation.cfm?id=985692.985715&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 diary study] of task switching and interruptions of users in 2004.  This study showed that task complexity, task duration, length of absence, and number of interruptions all affected the users&#039; own perceived difficulty of switching tasks.  [http://delivery.acm.org/10.1145/1250000/1240730/p677-iqbal.pdf?key1=1240730&amp;amp;key2=4525483321&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Iqbal, et al.] studied task disruption and recovery in a field study, and found that users often visited several applications as a result of an alert, such as a new email notification, and that 27% of task suspensions resulted in 2 hours or more of disruption.  Users in the study said that losing context was a significant problem in switching tasks, and led in part to the length of some of these disruptions.  This work hints at the importance of providing cues to users to maintain and regain lost context during task switching.&lt;br /&gt;
&lt;br /&gt;
(Andrew Bragdon - OWNER)&lt;br /&gt;
&lt;br /&gt;
The problem of task switching is exacerbated when some tasks are more routine than others. When a person intends to switch from a routine task to a novel task at some later time, they often forget the context of the original task ([http://web.ebscohost.com/ehost/pdf?vid=1&amp;amp;hid=7&amp;amp;sid=54ec1e22-3df2-462c-b484-7a7c052c2173%40SRCSM1 Aarts et al., 1999]). Also, if both tasks are done in the same context, with the same tools or with the same materials, people have difficulty inhibiting the routine task while doing the novel task (Stroop, 1935). This inhibition also makes switching back to the routine task slower (Allport et al., 1994). All of these problems can be alleviated to some degree by salient cues in the environment. The intention to switch is easier to act on when there is a salient reminder at the appropriate time (McDaniel and Einstein, 1993), and associating different environmental cues with different goals can automatically trigger appropriate behavior ([http://web.ebscohost.com/ehost/pdf?vid=1&amp;amp;hid=4&amp;amp;sid=68f77032-f093-4139-a833-760d2217b513%40sessionmgr9 Aarts and Dijksterhuis, 2003]).&lt;br /&gt;
&lt;br /&gt;
(Adam)&lt;br /&gt;
&lt;br /&gt;
====Quantitative Models: Fitts&#039;s Law, Steering Law====&lt;br /&gt;
Fitts&#039;s law and the steering law are examples of quantitative models that predict user performance with certain types of user interfaces.  In addition to these classic models, [http://delivery.acm.org/10.1145/1250000/1240850/p1495-cao.pdf?key1=1240850&amp;amp;key2=9904483321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Cao and Zhai] developed and validated a quantitative model of human performance of pen-stroke gestures in 2007.  [http://tlaloc.sfsu.edu/~lank/research/appearing/FSS604LankE.pdf Lank and Saund] used a model relating stroke curvature to pen speed as the pen moved across a surface, helping to disambiguate target-selection intent.&lt;br /&gt;
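To make these models concrete, here is a minimal sketch of both; the intercept &#039;&#039;a&#039;&#039; and slope &#039;&#039;b&#039;&#039; are free parameters that would be fitted empirically per device and user, and the numeric values below are purely illustrative.&lt;br /&gt;

```python
import math

def fitts_mt(a, b, distance, width):
    # Fitts law, Shannon formulation: MT = a + b * log2(D / W + 1)
    return a + b * math.log2(distance / width + 1)

def steering_mt(a, b, distance, width):
    # Steering law for a straight tunnel of length D and width W: MT = a + b * (D / W)
    return a + b * (distance / width)

# A distant, narrow target is predicted to take longer than a near, wide one.
slow = fitts_mt(0.1, 0.15, 512, 16)
fast = fitts_mt(0.1, 0.15, 128, 64)
```

Such closed-form predictions are what make these models attractive as components of an automated evaluation framework.&lt;br /&gt;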
&lt;br /&gt;
Quantitative models are also often tested against new interfaces to verify that they hold.  For example, [http://portal.acm.org/citation.cfm?id=1054972.1055012&amp;amp;coll=GUIDE&amp;amp;dl=ACM&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Grossman et al.] verified that their Bubble Cursor technique for enlarging effective pointing-target sizes obeyed Fitts&#039;s law for actual distance traveled.&lt;br /&gt;
&lt;br /&gt;
Beyond formal models, machine learning techniques have also been applied to modeling user interaction.  For example, [http://delivery.acm.org/10.1145/1250000/1240669/p271-hurst.pdf?key1=1240669&amp;amp;key2=6465483321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Hurst, et al.] used a learned classifier, trained on low-level mouse and keyboard usage patterns, to identify novice and expert use dynamically with accuracies as high as 91%.  The classifier was then used to provide different information and feedback to the user as appropriate.&lt;br /&gt;
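As a toy illustration of this style of classifier, the sketch below trains a nearest-centroid model on hypothetical low-level usage features (mean pause between input events, menu dwell time); it is only a stand-in for the kind of learned classifier described above, not the actual model or feature set of Hurst, et al.&lt;br /&gt;

```python
# Feature vectors are (mean pause between input events in s, menu dwell time in s).
# Both the features and the training data are hypothetical.

def centroid(rows):
    n = float(len(rows))
    return tuple(sum(col) / n for col in zip(*rows))

def train(samples_by_label):
    # One centroid per class label.
    return {label: centroid(rows) for label, rows in samples_by_label.items()}

def classify(model, features):
    # Pick the label whose centroid is nearest in squared Euclidean distance.
    def dist2(label):
        return sum((f - c) ** 2 for f, c in zip(features, model[label]))
    return min(model, key=dist2)

model = train({
    "novice": [(1.2, 3.0), (1.0, 2.5), (1.4, 3.4)],
    "expert": [(0.3, 0.8), (0.2, 0.6), (0.4, 1.0)],
})
```

An interface could then switch help content or feedback based on the predicted label, as in the study above.&lt;br /&gt;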
&lt;br /&gt;
(Andrew Bragdon - OWNER)&lt;br /&gt;
&lt;br /&gt;
====Distributed cognition====&lt;br /&gt;
&lt;br /&gt;
Distributed cognition is a theory holding that thought takes place both inside and outside the brain. Humans have a remarkable ability to use tools and to incorporate their environments into their sphere of thinking and information processing. Clark puts it nicely in [[http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature#Cognition Clark-1994-TEM]].&lt;br /&gt;
&lt;br /&gt;
Optimal HCI designs will therefore treat the brain, person, interface, and computer as a single, holistic cognitive system.&lt;br /&gt;
&lt;br /&gt;
In practical terms, the issue at hand for our proposal is how best to maximize utility by distributing cognitive tasks across the components of this whole system. Simply put: which tasks can we off-load to the computer to do for us faster and more accurately? OFF-LOADING (REF).&lt;br /&gt;
&lt;br /&gt;
Typically, the tasks most eligible to be off-loaded are the ones we perform poorly on; conveniently, we excel at the tasks computers perform poorly on. Here are a few examples:&lt;br /&gt;
&lt;br /&gt;
*Computers&#039; areas of expertise: number crunching, memory, logical reasoning, precision&lt;br /&gt;
*Humans&#039; areas of expertise: associative thought, real-world knowledge, social behavior, alogical reasoning, tolerance for imprecision&lt;br /&gt;
&lt;br /&gt;
Using this division of cognitive labor allows us to optimize task workflows. Ignoring it creates strain and bottlenecks at the computer, the human, or the interface. The HCI literature is full of failures that can be attributed to not recognizing which tasks should be handled by which subsystem.&lt;br /&gt;
&lt;br /&gt;
As a heuristic for dividing thinking, one might turn to the dual-process theory literature (REF TO EVANS). What is most often called System 1 covers what humans are good at, while System 2 tasks are what computers do well.&lt;br /&gt;
&lt;br /&gt;
([http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon])&lt;br /&gt;
&lt;br /&gt;
====Information Processing Approach to Cognition====&lt;br /&gt;
The dominant approach in Cognitive Science is called information processing. It sees cognition as a system that takes in information from the environment, forms mental representations and manipulates those representations in order to create the information needed to achieve its goals. This approach includes three levels of analysis originally proposed by Marr (will cite):&lt;br /&gt;
# Computational - What are the goals of a process or representation? What are the inputs and desired outputs required of a system which performs a task? Models at this level of analysis are often considered normative models, because any agent wanting to perform the task should conform to them. Rational agent models of decision making, for example, belong at this level of analysis.&lt;br /&gt;
# Process/Algorithmic - What are the processes or algorithms involved in how humans perform the task? This is the most common level of analysis as it focuses on the mental representations, manipulations and computational faculties involved in actual human processing. Algorithmic descriptions of human capabilities and limitations, such as working memory size, belong at this level of analysis.&lt;br /&gt;
# Implementation - How are the processes and algorithms realized in actual biological computation? Dopamine theories of reward learning, for example, belong at this level of analysis.&lt;br /&gt;
The information processing approach is often contrasted with the distributed cognition approach. Its advantage is that it finds general mechanisms that are valid across many different contexts and situations. Its disadvantage is that it can have difficulty explaining the rich interactions between people and their environment.&lt;br /&gt;
(Adam)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Models of perception===&lt;br /&gt;
&lt;br /&gt;
====Gibsonianism====&lt;br /&gt;
(relevant stuff about Gibson&#039;s theory)&lt;br /&gt;
We will build on top of this theory/model by ... .&lt;br /&gt;
&lt;br /&gt;
(Note: Gibsonianism is really a theory of perception and should be moved to that section if someone knows how.)&lt;br /&gt;
&lt;br /&gt;
Gibsonianism, named after James J. Gibson and more commonly referred to as ecological psychology, is an epistemological direct realist theory of perception and action.  In contrast to information processing and cognitivist approaches which generally assume that perception is a constructive process operating on impoverished sense-data inputs (e.g. photoreceptor activity) to generate representations of the world with added structure and meaning (e.g. a mental or neural &amp;quot;picture&amp;quot; of a chair), ecological psychology treats perception as a relation that places animals in direct and lawful epistemic contact with behaviorally-relevant events and features of their environmental niches (Warren, 2005).  These lawfully-specified and behaviorally-relevant features of the environment constitute the possibilities for action that the environment &amp;quot;affords&amp;quot; the animal (Gibson, 1986).&lt;br /&gt;
&lt;br /&gt;
Gibson&#039;s notion of affordance has many implications for our enterprise; however, it is worth noting that the original definition of affordance emphasizes possibilities for action, not their relative likelihoods.  For example, for most humans, laptop computer screens afford puncturing with Swiss Army knives; however, it is unlikely that a user will attempt to retrieve an electronic coupon by carving it out of their monitor.  This example illustrates that interfaces often afford a class of actions that are undesirable from the perspective of both the designer and the user.&lt;br /&gt;
&lt;br /&gt;
(Jon)&lt;br /&gt;
&lt;br /&gt;
===Design guidelines===&lt;br /&gt;
&lt;br /&gt;
I&#039;m a little behind today, but I will have this section drafted by tonight.  In the meantime, here is my outline. &#039;&#039;&#039;OWNER&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 17:20, 3 February 2009 (UTC)&lt;br /&gt;
:I. Introduction, applications of guidelines&lt;br /&gt;
::A. Application to automated usability evaluations (AUE)&lt;br /&gt;
:II. Popular and seminal examples&lt;br /&gt;
::A. Shneiderman&lt;br /&gt;
::B. Google&lt;br /&gt;
::C. Maeda&lt;br /&gt;
::D. Existing international standards&lt;br /&gt;
:III. Elements of guideline sets, relationship to design patterns&lt;br /&gt;
:IV. Goals for potentially developing a guideline set within the scope of this proposal&lt;br /&gt;
&lt;br /&gt;
===User interface evaluations===&lt;br /&gt;
====History/interaction capture====&lt;br /&gt;
====Cost-based analyses====&lt;br /&gt;
Heidi Lam, &amp;quot;A Framework of Interaction Costs in Information Visualization,&amp;quot; IEEE Transactions on Visualization and Computer Graphics, vol. 14, no. 6, pp. 1149-1156, Nov./Dec. 2008, doi:10.1109/TVCG.2008.109&lt;br /&gt;
&lt;br /&gt;
I&#039;m pretty sure that this paper referred to some earlier paper with a very similar title.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Multimodal HCI ===&lt;br /&gt;
&lt;br /&gt;
Continued advancements in several signal processing techniques have given rise to a multitude of mechanisms that allow for rich, multimodal, human-computer interaction.  These include systems for head-tracking, eye- or pupil-tracking, fingertip tracking, recognition of speech, and detection of electrical impulses in the brain, among others [http://vr.kjist.ac.kr/~dhong/website/paperworks/hci2002coursePapers/April24/Sharma98.pdf Sharma-1998-TMH].  With ever-increasing computing power, integrating these systems in real-time applications has become a plausible endeavor. &lt;br /&gt;
&lt;br /&gt;
==== Head-tracking ====&lt;br /&gt;
:In virtual, stereoscopic environments, head-tracking has been exploited with great success to create an immersive effect, allowing a user to move freely and naturally while dynamically updating the user&#039;s viewpoint in a visual environment.  Head-tracking has been employed in non-immersive settings as well, though careful consideration must be paid to account for unintended movements by the user, which may result in distracting visual effects.&lt;br /&gt;
&lt;br /&gt;
==== Pupil-tracking ====&lt;br /&gt;
:Pupil-tracking has been studied a great deal in the field of Cognitive Science ... (need some examples here from CogSci).  In the HCI community, pupil-tracking has traditionally been used for post-hoc analysis of interface designs, and is particularly prevalent in web interface design.  An alternative use of pupil-tracking is to employ it in real time as an actual mode of interaction.  This has been examined in relatively few cases (citations), typically with the eyes controlling an onscreen cursor.  Like head-tracking, implementations of pupil-tracking must account for unintended eye movements, which are extremely frequent.&lt;br /&gt;
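:One common way to separate intended gaze from the incidental eye movements mentioned above is dispersion-threshold fixation identification (I-DT; Salvucci and Goldberg, 2000); the sketch below uses illustrative units and thresholds.&lt;br /&gt;

```python
def dispersion(window):
    # Bounding-box dispersion of a window of (x, y) gaze samples.
    xs = [p[0] for p in window]
    ys = [p[1] for p in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def fixations(points, max_dispersion, min_length):
    # I-DT style: a fixation is a run of at least min_length samples whose
    # dispersion stays under max_dispersion; each fixation is reported as
    # the centroid of its window.
    out = []
    i = 0
    n = len(points)
    while n - i >= min_length:
        j = i + min_length
        if dispersion(points[i:j]) > max_dispersion:
            i += 1  # no fixation starts here; slide the window forward
            continue
        while j != n and not dispersion(points[i:j + 1]) > max_dispersion:
            j += 1  # grow the window while dispersion stays in bounds
        window = points[i:j]
        cx = sum(p[0] for p in window) / len(window)
        cy = sum(p[1] for p in window) / len(window)
        out.append((cx, cy))
        i = j
    return out
```

:Samples between fixations (saccades and jitter) are simply dropped, so only deliberate dwells would drive an onscreen cursor.&lt;br /&gt;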
&lt;br /&gt;
==== Fingertip-tracking, Gestural recognition ====&lt;br /&gt;
:Fingertip tracking and gestural recognition of the hands are the subjects of much research in the HCI community, particularly in the Virtual Environment and Augmented Reality disciplines.  Less implicit than head or pupil-tracking, gestural recognition of the hands may draw upon the wealth of precedents readily observed in natural human interactions.  As sensing technologies become less obtrusive and more robust, this method of interaction has the potential to become quite effective.&lt;br /&gt;
&lt;br /&gt;
==== Speech Recognition ====&lt;br /&gt;
:Speech recognition is becoming much better, though effective implementation of its desired effects is non-trivial in many applications.  (More on this later).&lt;br /&gt;
&lt;br /&gt;
==== Brain Activity Detection ====&lt;br /&gt;
:The use of electroencephalograms (EEGs) in HCI is quite recent, and with limited degrees of freedom, few robust interfaces have been designed around it.  Nevertheless, the possibility of using any brain function at all to interface with a machine is cause for excitement, and further advances in non-invasive techniques for accessing brain function may allow telepathic HCI to become a reality.&lt;br /&gt;
&lt;br /&gt;
In sum, the synchronized use of these modes of interaction makes it possible to architect an HCI system capable of sensing and interpreting many of the mechanisms humans use to transmit information to one another.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
Recent advances in the pragmatic use of EEGs in HCI research have been demonstrated, in particular, by [http://portal.acm.org/citation.cfm?id=1357054.1357187&amp;amp;coll=Portal&amp;amp;dl=GUIDE&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Grimes et al.].&lt;br /&gt;
&lt;br /&gt;
==Significance==&lt;br /&gt;
&lt;br /&gt;
==Preliminary results==&lt;br /&gt;
&lt;br /&gt;
==Research plan==&lt;br /&gt;
&lt;br /&gt;
We can speculate here about a longer-term research plan, but it may not be necessary to actually flesh out this part of the &amp;quot;proposal&amp;quot;&lt;/div&gt;</summary>
		<author><name>Adam Darlow</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal&amp;diff=1854</id>
		<title>CS295J/Research proposal</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal&amp;diff=1854"/>
		<updated>2009-02-05T15:49:01Z</updated>

		<summary type="html">&lt;p&gt;Adam Darlow: added links&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Project summary==&lt;br /&gt;
&lt;br /&gt;
We propose to integrate theories and models of cognition, models of perception, rules of design, and concepts from the discipline of human-computer interaction to develop a predictive model of user performance when interacting with computer software for visual and analytical work.  Our proposed model comprises a set of computational elements representing components of human cognition, memory, or perception.  The collective abilities and limitations of these elements can be used to provide feedback on the likely efficacy of user interaction techniques.&lt;br /&gt;
&lt;br /&gt;
The choice of human computational elements will be guided by several models or theories of cognition and perception, including Gestalt, Distributed, Gibson, ???(where pathway, when pathway)???, ???working-memory???, ..., and ???.  The list of elements will be extensible.  The framework coupling them will allow for experimental predictions of utility of user interfaces that can be verified against human performance.&lt;br /&gt;
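The coupling framework might be organized along the following lines; the element names, the interface description format, and the scoring rule below are hypothetical placeholders rather than committed design decisions.&lt;br /&gt;

```python
# Hypothetical sketch of an extensible registry of cognitive/perceptual
# elements, each scoring an interface description on one dimension.

class Element:
    name = "base"
    def score(self, interface):
        raise NotImplementedError

class WorkingMemoryLoad(Element):
    # Flag interfaces that ask the user to track many items at once
    # (7 plus or minus 2 is the classic, much-debated capacity estimate).
    name = "working-memory"
    def score(self, interface):
        return 1.0 if interface.get("simultaneous_items", 0) > 7 else 0.0

class Framework:
    def __init__(self):
        self.elements = []
    def register(self, element):
        self.elements.append(element)  # the element list stays extensible
    def evaluate(self, interface):
        # 1.0 marks a predicted usability problem on that dimension.
        return {e.name: e.score(interface) for e in self.elements}

fw = Framework()
fw.register(WorkingMemoryLoad())
report = fw.evaluate({"simultaneous_items": 9})
```

Predictions from such elements would then be checked against measured human performance, as proposed above.&lt;br /&gt;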
&lt;br /&gt;
Coupling the system with users will involve a data capture mechanism for collecting the communications between a user interface and a user.  These will be primarily event-based, and will include a new low-cost, camera-based eye-tracking system.&lt;br /&gt;
&lt;br /&gt;
During early development, existing interfaces will be evaluated manually to characterize their &lt;br /&gt;
&lt;br /&gt;
(we need some way to specify interaction techniques...)&lt;br /&gt;
&lt;br /&gt;
==Specific Contributions==&lt;br /&gt;
# A model of human cognitive and perceptual abilities when using computers&lt;br /&gt;
## Demonstration of the model in predicting human performance with some interfaces.&lt;br /&gt;
## ???&lt;br /&gt;
# Something about design rules collected and merged&lt;br /&gt;
## Something comparing these collected rules to a baseline (establishing their value)&lt;br /&gt;
## ???&lt;br /&gt;
# A low-overhead mechanism for capturing event-based interactions between a user and a computer, including web-cam based eye tracking.  (should we buy or find out about borrowing use of pupil tracker?)  &#039;&#039;Should we include other methods of interaction here?  Audio recognition seems to be the lowest cost.  It would seem that a system that took into account head-tracking, audio, and fingertip or some other high-DOF input would provide a very strong foundation for a multi-modal HCI system.  It may be more interesting to create a software toolkit that allows for synchronized usage of those inputs than a low-cost hardware setup for pupil-tracking.  I agree pupil-tracking is useful, but developing something in-house may not be the strongest contribution we can make with our time.&#039;&#039; (Trevor)&lt;br /&gt;
## Accuracy study of eye tracking (2 cameras?  double as an input device?)&lt;br /&gt;
## ???&lt;br /&gt;
# A set of critiques of existing software used for visual and analytical work based on design rules&lt;br /&gt;
## ???&lt;br /&gt;
&lt;br /&gt;
==Specific Aims==&lt;br /&gt;
&lt;br /&gt;
# build X&lt;br /&gt;
# build Y&lt;br /&gt;
# run experiment Z&lt;br /&gt;
# compare X with existing approach Q&lt;br /&gt;
# Develop a scoring system for interfaces to evaluate the degree to which all changes and causal relations are tracked by motion cues that are contiguous in time and/or space.&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
===Models of cognition===&lt;br /&gt;
There are several models of cognition, ranging from fundamental aspects of neurological processing to extremely high-level psychological analysis.  Three main theories seem to have become recognized as the most helpful in conceptualizing the actual process of HCI.  These models all agree that one cannot accurately analyze HCI by viewing the user without context, but the extent and nature of this context varies greatly.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[http://en.wikipedia.org/wiki/Activity_Theory Activity Theory]&#039;&#039;&#039;, developed in the early 20th century by Russian psychologists S.L. Rubinstein and A.N. Leontiev, posits the existence of four discrete aspects of human-computer interaction.  The &amp;quot;Subject&amp;quot; is the human interacting with the item, who possesses an &amp;quot;Object&amp;quot; (e.g. a goal) which they hope to accomplish by using a tool.  The Subject conceptualizes the realization of the Object via an &amp;quot;Action&amp;quot;, which may be as simple or complex as is necessary.  The Action is made up of one or more &amp;quot;Operations&amp;quot;, the most fundamental level of interaction including typing, clicking, etc.&lt;br /&gt;
&lt;br /&gt;
A key concept in Activity Theory is that of the artifact, which mediates all interaction.  The computer itself need not be the only artifact in HCI - others include all sorts of signs, algorithmic methods, instruments, etc.&lt;br /&gt;
&lt;br /&gt;
A longer synopsis of Activity Theory may be found at [http://mcs.open.ac.uk/yr258/act_theory/ this website].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The [http://en.wikipedia.org/wiki/Situated_cognition Situated Action Model]&#039;&#039;&#039; focuses on emergent behavior, emphasizing the subjective aspect of human-computer interaction and the therefore-necessary allowance for a wide variety of users.  This model proposes the least amount of contextual interaction, and seems to maintain that the interactive experience is determined entirely by the user&#039;s ability to use the system in question.  While limiting, this concept of usability can be very informative when designing for less tech-savvy users.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[http://en.wikipedia.org/wiki/Distributed_cognition Distributed Cognition]&#039;&#039;&#039; proposes that the computer (or, as in Activity Theory, any other artifact) can be used and ought to be thought of as an extension of the mental processing of the human.  This is not to say that the two are of equal or even comparable cognitive abilities, but that each has unique strengths and that recognition of and planning around these relative advantages can lead to increased efficiency and effectiveness.  The rotation of blocks in Tetris serves as a perfect example of this sort of cognitive symbiosis.&lt;br /&gt;
&lt;br /&gt;
====Workflow Context====&lt;br /&gt;
There are, at least, two levels at which users work ([http://portal.acm.org/citation.cfm?id=985692.985707&amp;amp;coll=portal&amp;amp;dl=ACM&amp;amp;CFID=20781736&amp;amp;CFTOKEN=83176621 Gonzales, et al., 2004]).  Users accomplish individual low-level tasks which are part of larger &#039;&#039;working spheres&#039;&#039;; for example, an office worker might send several emails, create several Post-It (TM) note reminders, and then edit a Word document, each of these smaller tasks being part of a single larger working sphere of &amp;quot;adding a new section to the website.&amp;quot;  Thus, it is important to understand this larger workflow context - which often involves extensive multi-tasking, as well as switching between a variety of computing devices and traditional tools, such as notebooks.  This study found that the information workers surveyed typically switch individual tasks every 2 minutes and maintain many simultaneous working spheres, which they switch between on average every 12 minutes.  This frenzied pace of switching tasks and working spheres suggests that users will not use a single application or device for long periods, and that affordances to support this are important.&lt;br /&gt;
&lt;br /&gt;
Czerwinski, et al. conducted a [http://portal.acm.org/citation.cfm?id=985692.985715&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 diary study] of task switching and interruptions of users in 2004.  This study showed that task complexity, task duration, length of absence, and number of interruptions all affected the users&#039; own perceived difficulty of switching tasks.  [http://delivery.acm.org/10.1145/1250000/1240730/p677-iqbal.pdf?key1=1240730&amp;amp;key2=4525483321&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Iqbal, et al.] studied task disruption and recovery in a field study, and found that users often visited several applications as a result of an alert, such as a new email notification, and that 27% of task suspensions resulted in 2 hours or more of disruption.  Users in the study said that losing context was a significant problem in switching tasks, and led in part to the length of some of these disruptions.  This work hints at the importance of providing cues to users to maintain and regain lost context during task switching.&lt;br /&gt;
&lt;br /&gt;
(Andrew Bragdon - OWNER)&lt;br /&gt;
&lt;br /&gt;
The problem of task switching is exacerbated when some tasks are more routine than others. When a person intends to switch from a routine task to a novel task at some later time, they often forget ([http://web.ebscohost.com/ehost/pdf?vid=1&amp;amp;hid=7&amp;amp;sid=54ec1e22-3df2-462c-b484-7a7c052c2173%40SRCSM1 Aarts et al., 1999]). Also, if both tasks are done in the same context, with the same tools or with the same materials, people have difficulty inhibiting the routine task while doing the novel task (Stroop, 1935). This inhibition also makes switching back to the routine task slower (Allport et al., 1994). All of these problems can be alleviated to some degree by salient cues in the environment. The intention to switch becomes easy when there is a salient reminder at the appropriate time (McDaniel and Einstein, 1993) and associating different environmental cues with different goals can automatically trigger appropriate behavior ([http://web.ebscohost.com/ehost/pdf?vid=1&amp;amp;hid=4&amp;amp;sid=68f77032-f093-4139-a833-760d2217b513%40sessionmgr9 Aarts and Dijksterhuis, 2003]).&lt;br /&gt;
&lt;br /&gt;
(Adam)&lt;br /&gt;
&lt;br /&gt;
====Quantitative Models: Fitts&#039;s Law, Steering Law====&lt;br /&gt;
Fitts&#039;s law and the steering law are examples of quantitative models that predict user performance with certain types of user interfaces.  In addition to these classic models, [http://delivery.acm.org/10.1145/1250000/1240850/p1495-cao.pdf?key1=1240850&amp;amp;key2=9904483321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Cao and Zhai] developed and validated a quantitative model of human performance of pen-stroke gestures in 2007.  [http://tlaloc.sfsu.edu/~lank/research/appearing/FSS604LankE.pdf Lank and Saund] used a model relating stroke curvature to pen speed as the pen moved across a surface, helping to disambiguate target-selection intent.&lt;br /&gt;
&lt;br /&gt;
Quantitative models are also often tested against new interfaces to verify that they hold.  For example, [http://portal.acm.org/citation.cfm?id=1054972.1055012&amp;amp;coll=GUIDE&amp;amp;dl=ACM&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Grossman et al.] verified that their Bubble Cursor technique for enlarging effective pointing-target sizes obeyed Fitts&#039;s law for actual distance traveled.&lt;br /&gt;
&lt;br /&gt;
Beyond formal models, machine learning techniques have also been applied to modeling user interaction.  For example, [http://delivery.acm.org/10.1145/1250000/1240669/p271-hurst.pdf?key1=1240669&amp;amp;key2=6465483321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Hurst, et al.] used a learned classifier, trained on low-level mouse and keyboard usage patterns, to identify novice and expert use dynamically with accuracies as high as 91%.  The classifier was then used to provide different information and feedback to the user as appropriate.&lt;br /&gt;
&lt;br /&gt;
(Andrew Bragdon - OWNER)&lt;br /&gt;
&lt;br /&gt;
====Distributed cognition====&lt;br /&gt;
&lt;br /&gt;
Distributed cognition is a theory holding that thought takes place both inside and outside the brain. Humans have a remarkable ability to use tools and to incorporate their environments into their sphere of thinking and information processing. Clark puts it nicely in [[http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature#Cognition Clark-1994-TEM]].&lt;br /&gt;
&lt;br /&gt;
Optimal HCI designs will therefore treat the brain, person, interface, and computer as a single, holistic cognitive system.&lt;br /&gt;
&lt;br /&gt;
In practical terms, the issue at hand for our proposal is how best to maximize utility by distributing cognitive tasks across the components of this whole system. Simply put: which tasks can we off-load to the computer to do for us faster and more accurately? OFF-LOADING (REF).&lt;br /&gt;
&lt;br /&gt;
Typically, the tasks most eligible to be off-loaded are the ones we perform poorly on; conveniently, we excel at the tasks computers perform poorly on. Here are a few examples:&lt;br /&gt;
&lt;br /&gt;
*Computers&#039; areas of expertise: number crunching, memory, logical reasoning, precision&lt;br /&gt;
*Humans&#039; areas of expertise: associative thought, real-world knowledge, social behavior, alogical reasoning, tolerance for imprecision&lt;br /&gt;
&lt;br /&gt;
Using this division of cognitive labor allows us to optimize task workflows. Ignoring it creates strain and bottlenecks at the computer, the human, or the interface. The HCI literature is full of failures that can be attributed to not recognizing which tasks should be handled by which subsystem.&lt;br /&gt;
&lt;br /&gt;
As a heuristic for dividing thinking, one might turn to the dual-process theory literature (REF TO EVANS). What is most often called System 1 covers what humans are good at, while System 2 tasks are what computers do well.&lt;br /&gt;
&lt;br /&gt;
([http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon])&lt;br /&gt;
&lt;br /&gt;
====Information Processing Approach to Cognition====&lt;br /&gt;
The dominant approach in Cognitive Science is called information processing. It sees cognition as a system that takes in information from the environment, forms mental representations and manipulates those representations in order to create the information needed to achieve its goals. This approach includes three levels of analysis originally proposed by Marr (will cite):&lt;br /&gt;
# Computational - What are the goals of a process or representation? What are the inputs and desired outputs required of a system which performs a task? Models at this level of analysis are often considered normative models, because any agent wanting to perform the task should conform to them. Rational agent models of decision making, for example, belong at this level of analysis.&lt;br /&gt;
# Process/Algorithmic - What are the processes or algorithms involved in how humans perform the task? This is the most common level of analysis as it focuses on the mental representations, manipulations and computational faculties involved in actual human processing. Algorithmic descriptions of human capabilities and limitations, such as working memory size, belong at this level of analysis.&lt;br /&gt;
# Implementation - How are the processes and algorithms realized in actual biological computation? Dopamine theories of reward learning, for example, belong at this level of analysis.&lt;br /&gt;
The information processing approach is often contrasted with the distributed cognition approach. Its advantage is that it finds general mechanisms that are valid across many different contexts and situations. Its disadvantage is that it can have difficulty explaining the rich interactions between people and their environment.&lt;br /&gt;
(Adam)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Models of perception===&lt;br /&gt;
&lt;br /&gt;
====Gibsonianism====&lt;br /&gt;
(relevant stuff about Gibson&#039;s theory)&lt;br /&gt;
We will build on top of this theory/model by ... .&lt;br /&gt;
&lt;br /&gt;
(Note: Gibsonianism is really a theory of perception and should be moved to that section if someone knows how.)&lt;br /&gt;
&lt;br /&gt;
Gibsonianism, named after James J. Gibson and more commonly referred to as ecological psychology, is an epistemological direct realist theory of perception and action.  In contrast to information processing and cognitivist approaches which generally assume that perception is a constructive process operating on impoverished sense-data inputs (e.g. photoreceptor activity) to generate representations of the world with added structure and meaning (e.g. a mental or neural &amp;quot;picture&amp;quot; of a chair), ecological psychology treats perception as a relation that places animals in direct and lawful epistemic contact with behaviorally-relevant events and features of their environmental niches (Warren, 2005).  These lawfully-specified and behaviorally-relevant features of the environment constitute the possibilities for action that the environment &amp;quot;affords&amp;quot; the animal (Gibson, 1986).&lt;br /&gt;
&lt;br /&gt;
Gibson&#039;s notion of affordance has many implications for our enterprise; however, it is worth noting that the original definition of affordance emphasizes possibilities for action, not their relative likelihoods.  For example, for most humans, laptop computer screens afford puncturing with Swiss Army knives; however, it is unlikely that a user will attempt to retrieve an electronic coupon by carving it out of their monitor.  This example illustrates that interfaces often afford a class of actions that are undesirable from the perspective of both the designer and the user.&lt;br /&gt;
&lt;br /&gt;
(Jon)&lt;br /&gt;
&lt;br /&gt;
===Design guidelines===&lt;br /&gt;
&lt;br /&gt;
I&#039;m a little behind today, but I will have this section drafted by tonight.  In the meantime, here is my outline. &#039;&#039;&#039;OWNER&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 17:20, 3 February 2009 (UTC)&lt;br /&gt;
:I. Introduction, applications of guidelines&lt;br /&gt;
::A. Application to automated usability evaluations (AUE)&lt;br /&gt;
:II. Popular and seminal examples&lt;br /&gt;
::A. Shneiderman&lt;br /&gt;
::B. Google&lt;br /&gt;
::C. Maeda&lt;br /&gt;
::D. Existing international standards&lt;br /&gt;
:III. Elements of guideline sets, relationship to design patterns&lt;br /&gt;
:IV. Goals for potentially developing a guideline set within the scope of this proposal&lt;br /&gt;
&lt;br /&gt;
===User interface evaluations===&lt;br /&gt;
====History/interaction capture====&lt;br /&gt;
====Cost-based analyses====&lt;br /&gt;
Heidi Lam, &amp;quot;A Framework of Interaction Costs in Information Visualization,&amp;quot; IEEE Transactions on Visualization and Computer Graphics, vol. 14, no. 6, pp. 1149-1156, Nov./Dec. 2008, doi:10.1109/TVCG.2008.109&lt;br /&gt;
&lt;br /&gt;
I&#039;m pretty sure that this paper referred to some earlier paper with a very similar title.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Multimodal HCI ===&lt;br /&gt;
&lt;br /&gt;
Continued advancements in several signal processing techniques have given rise to a multitude of mechanisms that allow for rich, multimodal, human-computer interaction.  These include systems for head-tracking, eye- or pupil-tracking, fingertip tracking, recognition of speech, and detection of electrical impulses in the brain, among others [http://vr.kjist.ac.kr/~dhong/website/paperworks/hci2002coursePapers/April24/Sharma98.pdf Sharma-1998-TMH].  With ever-increasing computing power, integrating these systems in real-time applications has become a plausible endeavor. &lt;br /&gt;
&lt;br /&gt;
==== Head-tracking ====&lt;br /&gt;
:In virtual, stereoscopic environments, head-tracking has been exploited with great success to create an immersive effect, allowing a user to move freely and naturally while dynamically updating the user’s viewpoint in a visual environment.  Head-tracking has been employed in non-immersive settings as well, though careful consideration must be paid to account for unintended movements by the user, which may result in distracting visual effects.&lt;br /&gt;
&lt;br /&gt;
==== Pupil-tracking ====&lt;br /&gt;
:Pupil-tracking has been studied a great deal in the field of Cognitive Science ... (need some examples here from CogSci).  In the HCI community, pupil-tracking has traditionally been used for post-hoc analysis of interface designs, and is particularly prevalent in web interface design.  An alternative use of pupil-tracking is to employ it in real time as an actual mode of interaction.  This has been examined in relatively few cases (citations), where typically the eyes are used to control a cursor onscreen.  Like head-tracking, implementations of pupil-tracking must account for unintended eye movements, which are extremely frequent.&lt;br /&gt;
&lt;br /&gt;
==== Fingertip-tracking, Gestural recognition ====&lt;br /&gt;
:Fingertip tracking and gestural recognition of the hands are the subjects of much research in the HCI community, particularly in the Virtual Environment and Augmented Reality disciplines.  Less implicit than head or pupil-tracking, gestural recognition of the hands may draw upon the wealth of precedents readily observed in natural human interactions.  As sensing technologies become less obtrusive and more robust, this method of interaction has the potential to become quite effective.&lt;br /&gt;
&lt;br /&gt;
==== Speech Recognition ====&lt;br /&gt;
:Speech recognition has improved substantially, though implementing it effectively remains non-trivial in many applications.  (More on this later).&lt;br /&gt;
&lt;br /&gt;
==== Brain Activity Detection ====&lt;br /&gt;
:The use of electroencephalograms (EEGs) in HCI is quite recent, and with limited degrees of freedom, few robust interfaces have been designed around them.  Nevertheless, the possibility of using any brain function at all to interface with a machine is cause for excitement, and further advances in non-invasive techniques for accessing brain function may allow telepathic HCI to become a reality.&lt;br /&gt;
&lt;br /&gt;
In sum, the synchronized usage of these modes of interaction makes it possible to architect an HCI system capable of sensing and interpreting many of the mechanisms humans use to transmit information to one another.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
Recent advances in the pragmatic use of EEGs in HCI research have been demonstrated, in particular, by [http://portal.acm.org/citation.cfm?id=1357054.1357187&amp;amp;coll=Portal&amp;amp;dl=GUIDE&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Grimes et al.]&lt;br /&gt;
&lt;br /&gt;
==Significance==&lt;br /&gt;
&lt;br /&gt;
==Preliminary results==&lt;br /&gt;
&lt;br /&gt;
==Research plan==&lt;br /&gt;
&lt;br /&gt;
We can speculate here about a longer-term research plan, but it may not be necessary to actually flesh out this part of the &amp;quot;proposal&amp;quot;&lt;/div&gt;</summary>
		<author><name>Adam Darlow</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal&amp;diff=1853</id>
		<title>CS295J/Research proposal</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal&amp;diff=1853"/>
		<updated>2009-02-05T15:38:32Z</updated>

		<summary type="html">&lt;p&gt;Adam Darlow: added more about task pursuit to /* Workflow Context */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Project summary==&lt;br /&gt;
&lt;br /&gt;
We propose to integrate theories and models of cognition, models of perception, rules of design, and concepts from the discipline of human-computer interaction to develop a predictive model of user performance in interacting with computer software for visual and analytical work.  Our proposed model comprises a set of computational elements representing components of human cognition, memory, or perception.  The collective abilities and limitations of these elements can be used to provide feedback on the likely efficacy of user interaction techniques.&lt;br /&gt;
&lt;br /&gt;
The choice of human computational elements will be guided by several models or theories of cognition and perception, including Gestalt, Distributed, Gibson, ???(where pathway, when pathway)???, ???working-memory???, ..., and ???.  The list of elements will be extensible.  The framework coupling them will allow for experimental predictions of utility of user interfaces that can be verified against human performance.&lt;br /&gt;
&lt;br /&gt;
Coupling the system with users will involve a data capture mechanism for collecting the communications between a user interface and a user.  These will be primarily event based, and will include a new low-cost camera-based eye-tracking system.&lt;br /&gt;
&lt;br /&gt;
During early development, existing interfaces will be evaluated manually to characterize their &lt;br /&gt;
&lt;br /&gt;
(we need some way to specify interaction techniques...)&lt;br /&gt;
&lt;br /&gt;
==Specific Contributions==&lt;br /&gt;
# A model of human cognitive and perceptual abilities when using computers&lt;br /&gt;
## Demonstration of the model in predicting human performance with some interfaces.&lt;br /&gt;
## ???&lt;br /&gt;
# Something about design rules collected and merged&lt;br /&gt;
## Something comparing these collected rules to a baseline (establishing their value)&lt;br /&gt;
## ???&lt;br /&gt;
# A low-overhead mechanism for capturing event-based interactions between a user and a computer, including web-cam based eye tracking.  (should we buy or find out about borrowing use of pupil tracker?)  &#039;&#039;Should we include other methods of interaction here?  Audio recognition seems to be the lowest cost.  It would seem that a system that took into account head-tracking, audio, and fingertip or some other high-DOF input would provide a very strong foundation for a multi-modal HCI system.  It may be more interesting to create a software toolkit that allows for synchronized usage of those inputs than a low-cost hardware setup for pupil-tracking.  I agree pupil-tracking is useful, but developing something in-house may not be the strongest contribution we can make with our time.&#039;&#039; (Trevor)&lt;br /&gt;
## Accuracy study of eye tracking (2 cameras?  double as an input device?)&lt;br /&gt;
## ???&lt;br /&gt;
# A set of critiques of existing software used for visual and analytical work based on design rules&lt;br /&gt;
## ???&lt;br /&gt;
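The low-overhead event-capture mechanism proposed in contribution 3 above could be sketched as follows. This is only a minimal illustration of the idea (timestamped, serializable interaction events); the class name, event names, and fields are hypothetical, not a committed API:

```python
import json
import time

class InteractionLogger:
    """Minimal sketch of a low-overhead event-capture mechanism:
    each UI event is timestamped and appended to an in-memory log
    that can later be serialized for offline analysis."""

    def __init__(self):
        self.events = []

    def record(self, kind, **details):
        # time.monotonic() is unaffected by wall-clock adjustments,
        # so inter-event intervals remain meaningful.
        self.events.append({"t": time.monotonic(), "kind": kind, **details})

    def dump(self):
        # Serialize the full event stream for the analysis pipeline.
        return json.dumps(self.events)

log = InteractionLogger()
log.record("mouse_click", x=120, y=340, button="left")
log.record("key_press", key="s", modifiers=["ctrl"])
```

Eye-tracking fixations, head pose, or audio events could be recorded through the same `record` call, which keeps the capture path uniform across input modalities.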
&lt;br /&gt;
==Specific Aims==&lt;br /&gt;
&lt;br /&gt;
# build X&lt;br /&gt;
# build Y&lt;br /&gt;
# run experiment Z&lt;br /&gt;
# compare X with existing approach Q&lt;br /&gt;
# Develop a scoring system for interfaces to evaluate the degree to which all changes and causal relations are tracked by motion cues that are contiguous in time and/or space.&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
===Models of cognition===&lt;br /&gt;
There are several models of cognition, ranging from fundamental aspects of neurological processing to extremely high-level psychological analysis.  Three main theories seem to have become recognized as the most helpful in conceptualizing the actual process of HCI.  These models all agree that one cannot accurately analyze HCI by viewing the user without context, but the extent and nature of this context varies greatly.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[http://en.wikipedia.org/wiki/Activity_Theory Activity Theory]&#039;&#039;&#039;, developed in the early 20th century by Russian psychologists S.L. Rubinstein and A.N. Leontiev, posits the existence of four discrete aspects of human-computer interaction.  The &amp;quot;Subject&amp;quot; is the human interacting with the item, who possesses an &amp;quot;Object&amp;quot; (e.g. a goal) which they hope to accomplish by using a tool.  The Subject conceptualizes the realization of the Object via an &amp;quot;Action&amp;quot;, which may be as simple or complex as is necessary.  The Action is made up of one or more &amp;quot;Operations&amp;quot;, the most fundamental level of interaction including typing, clicking, etc.&lt;br /&gt;
&lt;br /&gt;
A key concept in Activity Theory is that of the artifact, which mediates all interaction.  The computer itself need not be the only artifact in HCI - others include all sorts of signs, algorithmic methods, instruments, etc.&lt;br /&gt;
&lt;br /&gt;
A longer synopsis of Activity Theory may be found at [http://mcs.open.ac.uk/yr258/act_theory/ this website].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The [http://en.wikipedia.org/wiki/Situated_cognition Situated Action Model]&#039;&#039;&#039; focuses on emergent behavior, emphasizing the subjective aspect of human-computer interaction and the therefore-necessary allowance for a wide variety of users.  This model proposes the least amount of contextual interaction, and seems to maintain that the interactive experience is determined entirely by the user&#039;s ability to use the system in question.  While limiting, this concept of usability can be very informative when designing for less tech-savvy users.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[http://en.wikipedia.org/wiki/Distributed_cognition Distributed Cognition]&#039;&#039;&#039; proposes that the computer (or, as in Activity Theory, any other artifact) can be used and ought to be thought of as an extension of the mental processing of the human.  This is not to say that the two are of equal or even comparable cognitive abilities, but that each has unique strengths and that recognition of and planning around these relative advantages can lead to increased efficiency and effectiveness.  The rotation of blocks in Tetris serves as a perfect example of this sort of cognitive symbiosis.&lt;br /&gt;
&lt;br /&gt;
====Workflow Context====&lt;br /&gt;
There are at least two levels at which users work ([http://portal.acm.org/citation.cfm?id=985692.985707&amp;amp;coll=portal&amp;amp;dl=ACM&amp;amp;CFID=20781736&amp;amp;CFTOKEN=83176621 Gonzales, et al., 2004]).  Users accomplish individual low-level tasks which are part of larger &#039;&#039;working spheres&#039;&#039;; for example, an office worker might send several emails, create several Post-It (TM) note reminders, and then edit a Word document, each of these smaller tasks being part of a single larger working sphere of &amp;quot;adding a new section to the website.&amp;quot;  Thus, it is important to understand this larger workflow context - which often involves extensive levels of multi-tasking, as well as switching between a variety of computing devices and traditional tools, such as notebooks.  In this study it was found that the information workers surveyed typically switched individual tasks every 2 minutes and maintained many simultaneous working spheres, switching between them on average every 12 minutes.  This frenzied pace of switching tasks and working spheres suggests that users will not use a single application or device for long periods, and that affordances to support switching are important.&lt;br /&gt;
&lt;br /&gt;
Czerwinski, et al. conducted a [http://portal.acm.org/citation.cfm?id=985692.985715&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 diary study] of task switching and interruptions of users in 2004.  This study showed that task complexity, task duration, length of absence, and number of interruptions all affected the users&#039; own perceived difficulty of switching tasks.  [http://delivery.acm.org/10.1145/1250000/1240730/p677-iqbal.pdf?key1=1240730&amp;amp;key2=4525483321&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Iqbal, et al.] studied task disruption and recovery in a field study, and found that users often visited several applications as a result of an alert, such as a new email notification, and that 27% of task suspensions resulted in 2 hours or more of disruption.  Users in the study said that losing context was a significant problem in switching tasks, and led in part to the length of some of these disruptions.  This work hints at the importance of providing cues to users to maintain and regain lost context during task switching.&lt;br /&gt;
&lt;br /&gt;
(Andrew Bragdon - OWNER)&lt;br /&gt;
&lt;br /&gt;
The problem of task switching is exacerbated when some tasks are more routine than others. When a person intends to switch from a routine task to a novel task at some later time, they often forget (REF Aarts). Also, if both tasks are done in the same context, with the same tools or with the same materials, people have difficulty inhibiting the routine task while doing the novel task (REF Stroop). This inhibition also makes switching back to the routine task slower (REF Allport). All of these problems can be alleviated to some degree by salient cues in the environment. The intention to switch becomes easy when there is a salient reminder at the appropriate time (REF prospective memory) and associating different environmental cues with different goals can automatically trigger appropriate behavior (REF Aarts).&lt;br /&gt;
&lt;br /&gt;
(Adam)&lt;br /&gt;
&lt;br /&gt;
====Quantitative Models: Fitts&#039;s law, Steering Law====&lt;br /&gt;
Fitts&#039;s law and the steering law are examples of quantitative models that predict user performance when using certain types of user interfaces.  In addition to these classic models, [http://delivery.acm.org/10.1145/1250000/1240850/p1495-cao.pdf?key1=1240850&amp;amp;key2=9904483321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Cao and Zhai] developed and validated a quantitative model of human performance of pen stroke gestures in 2007.  [http://tlaloc.sfsu.edu/~lank/research/appearing/FSS604LankE.pdf Lank and Saund] utilized a model which used curvature to predict the speed of a pen as it moved across a surface to help disambiguate target selection intent.&lt;br /&gt;
&lt;br /&gt;
In addition, quantitative models are often tested against new interfaces to verify that they hold.  For example, [http://portal.acm.org/citation.cfm?id=1054972.1055012&amp;amp;coll=GUIDE&amp;amp;dl=ACM&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Grossman et al.] verified that their Bubble Cursor approach to enlarging effective pointing target sizes obeyed Fitts&#039;s law for actual distance traveled.&lt;br /&gt;
&lt;br /&gt;
In addition to formal models, machine learning techniques have been applied to modeling user interaction as well.  For example, [http://delivery.acm.org/10.1145/1250000/1240669/p271-hurst.pdf?key1=1240669&amp;amp;key2=6465483321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Hurst, et al.], used a learning classifier, trained on low-level mouse and keyboard usage patterns, to identify novice and expert use dynamically with accuracies as high as 91%.  This classifier was then used to provide different information and feedback to the user as appropriate.&lt;br /&gt;
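As a concrete sketch of the kind of quantitative prediction Fitts&#039;s law supports: in the Shannon formulation, movement time is MT = a + b * log2(D/W + 1), where D is the distance to the target and W is its width. The coefficients a and b below are illustrative placeholders, not empirically fitted values:

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Predicted movement time (seconds) under Fitts's law,
    Shannon formulation: MT = a + b * log2(D/W + 1).
    a and b are device- and user-specific regression constants;
    the defaults here are illustrative only."""
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty

# A small, distant target should take longer to acquire
# than a large, nearby one.
slow = fitts_movement_time(distance=800, width=16)
fast = fitts_movement_time(distance=100, width=64)
```

In practice a and b would be fit by regression against measured pointing times for a given device, after which the model can rank candidate interface layouts by predicted acquisition cost.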
&lt;br /&gt;
(Andrew Bragdon - OWNER)&lt;br /&gt;
&lt;br /&gt;
====Distributed cognition====&lt;br /&gt;
&lt;br /&gt;
Distributed cognition is a theory in which thought takes place both inside and outside the brain. Humans have a great ability to use tools and to incorporate their environments into their sphere of thinking and information processing. Clark puts it nicely in [[http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature#Cognition Clark-1994-TEM]].&lt;br /&gt;
&lt;br /&gt;
Therefore, optimal configurations in HCI design will treat the brain, person, interface, and computer as a single holistic cognitive system.&lt;br /&gt;
&lt;br /&gt;
In practical terms, the issue at hand for our proposal is how best to maximize utility by distributing the cognitive tasks at hand to different components of the whole system. Simply put: which tasks can we off-load to the computer to do for us, faster and more accurately? OFF-LOADING (REF).&lt;br /&gt;
&lt;br /&gt;
Typically, the tasks most eligible to be off-loaded are the ones we perform poorly. Conveniently, we excel at the tasks which computers perform poorly. Here are a few examples:&lt;br /&gt;
&lt;br /&gt;
*Computers&#039; areas of expertise: number crunching, memory, logical reasoning, precision&lt;br /&gt;
*Humans&#039; areas of expertise: associative thought, real-world knowledge, social behavior, alogical reasoning, tolerance of imprecision&lt;br /&gt;
&lt;br /&gt;
Using this division of cognitive labor allows us to optimize task workflows. Ignoring it creates strain and bottlenecks at the computer, the human, or the interface. The field of HCI is full of examples of failures which can be attributed to not recognizing which tasks should be handled by which sub-system.&lt;br /&gt;
&lt;br /&gt;
As a heuristic for dividing thinking, one might turn to the dual-process theory literature (REF TO EVANS). What is most often called System 1 is what humans are good at, while System 2 tasks are what computers do well.&lt;br /&gt;
&lt;br /&gt;
([http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon])&lt;br /&gt;
&lt;br /&gt;
====Information Processing Approach to Cognition====&lt;br /&gt;
The dominant approach in Cognitive Science is called information processing. It sees cognition as a system that takes in information from the environment, forms mental representations and manipulates those representations in order to create the information needed to achieve its goals. This approach includes three levels of analysis originally proposed by Marr (will cite):&lt;br /&gt;
# Computational - What are the goals of a process or representation? What are the inputs and desired outputs required of a system which performs a task? Models at this level of analysis are often considered normative models, because any agent wanting to perform the task should conform to them. Rational agent models of decision making, for example, belong at this level of analysis.&lt;br /&gt;
# Process/Algorithmic - What are the processes or algorithms involved in how humans perform the task? This is the most common level of analysis as it focuses on the mental representations, manipulations and computational faculties involved in actual human processing. Algorithmic descriptions of human capabilities and limitations, such as working memory size, belong at this level of analysis.&lt;br /&gt;
# Implementation - How are the processes and algorithms realized in actual biological computation? Dopamine theories of reward learning, for example, belong at this level of analysis.&lt;br /&gt;
The information processing approach is often contrasted with the distributed cognition approach. Its advantage is that it finds general mechanisms that are valid across many different contexts and situations. Its disadvantage is that it can have difficulty explaining the rich interactions between people and their environment.&lt;br /&gt;
(Adam)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Models of perception===&lt;br /&gt;
&lt;br /&gt;
====Gibsonianism====&lt;br /&gt;
(relevant stuff about Gibson&#039;s theory)&lt;br /&gt;
We will build on top of this theory/model by ... .&lt;br /&gt;
&lt;br /&gt;
(Note: Gibsonianism is really a theory of perception and should be moved to that section if someone knows how.)&lt;br /&gt;
&lt;br /&gt;
Gibsonianism, named after James J. Gibson and more commonly referred to as ecological psychology, is an epistemological direct realist theory of perception and action.  In contrast to information processing and cognitivist approaches, which generally assume that perception is a constructive process operating on impoverished sense-data inputs (e.g. photoreceptor activity) to generate representations of the world with added structure and meaning (e.g. a mental or neural &amp;quot;picture&amp;quot; of a chair), ecological psychology treats perception as a relation that places animals in direct and lawful epistemic contact with behaviorally-relevant events and features of their environmental niches (Warren, 2005).  These lawfully-specified and behaviorally-relevant features of the environment constitute the possibilities for action that the environment &amp;quot;affords&amp;quot; the animal (Gibson, 1986).&lt;br /&gt;
&lt;br /&gt;
Gibson&#039;s notion of affordance has many implications for our enterprise; however, it is worth noting that the original definition of affordance emphasizes possibilities for action, not their relative likelihoods.  For example, for most humans, laptop computer screens afford puncturing with Swiss Army knives; however, it is unlikely that a user will attempt to retrieve an electronic coupon by carving it out of their monitor.  This example illustrates that interfaces often afford a class of actions that are undesirable from the perspective of both the designer and the user.&lt;br /&gt;
&lt;br /&gt;
(Jon)&lt;br /&gt;
&lt;br /&gt;
===Design guidelines===&lt;br /&gt;
&lt;br /&gt;
I&#039;m a little behind today, but I will have this section drafted by tonight.  In the meantime, here is my outline. &#039;&#039;&#039;OWNER&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 17:20, 3 February 2009 (UTC)&lt;br /&gt;
:I. Introduction, applications of guidelines&lt;br /&gt;
::A. Application to automated usability evaluations (AUE)&lt;br /&gt;
:II. Popular and seminal examples&lt;br /&gt;
::A. Shneiderman&lt;br /&gt;
::B. Google&lt;br /&gt;
::C. Maeda&lt;br /&gt;
::D. Existing international standards&lt;br /&gt;
:III. Elements of guideline sets, relationship to design patterns&lt;br /&gt;
:IV. Goals for potentially developing a guideline set within the scope of this proposal&lt;br /&gt;
&lt;br /&gt;
===User interface evaluations===&lt;br /&gt;
====History/interaction capture====&lt;br /&gt;
====Cost-based analyses====&lt;br /&gt;
Heidi Lam, &amp;quot;A Framework of Interaction Costs in Information Visualization,&amp;quot; IEEE Transactions on Visualization and Computer Graphics, vol. 14, no. 6, pp. 1149-1156, Nov./Dec. 2008, doi:10.1109/TVCG.2008.109&lt;br /&gt;
&lt;br /&gt;
I&#039;m pretty sure that this paper referred to some earlier paper with a very similar title.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Multimodal HCI ===&lt;br /&gt;
&lt;br /&gt;
Continued advancements in several signal processing techniques have given rise to a multitude of mechanisms that allow for rich, multimodal, human-computer interaction.  These include systems for head-tracking, eye- or pupil-tracking, fingertip tracking, recognition of speech, and detection of electrical impulses in the brain, among others [http://vr.kjist.ac.kr/~dhong/website/paperworks/hci2002coursePapers/April24/Sharma98.pdf Sharma-1998-TMH].  With ever-increasing computing power, integrating these systems in real-time applications has become a plausible endeavor. &lt;br /&gt;
&lt;br /&gt;
==== Head-tracking ====&lt;br /&gt;
:In virtual, stereoscopic environments, head-tracking has been exploited with great success to create an immersive effect, allowing a user to move freely and naturally while dynamically updating the user’s viewpoint in a visual environment.  Head-tracking has been employed in non-immersive settings as well, though careful consideration must be paid to account for unintended movements by the user, which may result in distracting visual effects.&lt;br /&gt;
&lt;br /&gt;
==== Pupil-tracking ====&lt;br /&gt;
:Pupil-tracking has been studied a great deal in the field of Cognitive Science ... (need some examples here from CogSci).  In the HCI community, pupil-tracking has traditionally been used for post-hoc analysis of interface designs, and is particularly prevalent in web interface design.  An alternative use of pupil-tracking is to employ it in real time as an actual mode of interaction.  This has been examined in relatively few cases (citations), where typically the eyes are used to control a cursor onscreen.  Like head-tracking, implementations of pupil-tracking must account for unintended eye movements, which are extremely frequent.&lt;br /&gt;
&lt;br /&gt;
==== Fingertip-tracking, Gestural recognition ====&lt;br /&gt;
:Fingertip tracking and gestural recognition of the hands are the subjects of much research in the HCI community, particularly in the Virtual Environment and Augmented Reality disciplines.  Less implicit than head or pupil-tracking, gestural recognition of the hands may draw upon the wealth of precedents readily observed in natural human interactions.  As sensing technologies become less obtrusive and more robust, this method of interaction has the potential to become quite effective.&lt;br /&gt;
&lt;br /&gt;
==== Speech Recognition ====&lt;br /&gt;
:Speech recognition has improved substantially, though implementing it effectively remains non-trivial in many applications.  (More on this later).&lt;br /&gt;
&lt;br /&gt;
==== Brain Activity Detection ====&lt;br /&gt;
:The use of electroencephalograms (EEGs) in HCI is quite recent, and with limited degrees of freedom, few robust interfaces have been designed around them.  Nevertheless, the possibility of using any brain function at all to interface with a machine is cause for excitement, and further advances in non-invasive techniques for accessing brain function may allow telepathic HCI to become a reality.&lt;br /&gt;
&lt;br /&gt;
In sum, the synchronized usage of these modes of interaction makes it possible to architect an HCI system capable of sensing and interpreting many of the mechanisms humans use to transmit information to one another.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
Recent advances in the pragmatic use of EEGs in HCI research have been demonstrated, in particular, by [http://portal.acm.org/citation.cfm?id=1357054.1357187&amp;amp;coll=Portal&amp;amp;dl=GUIDE&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Grimes et al.]&lt;br /&gt;
&lt;br /&gt;
==Significance==&lt;br /&gt;
&lt;br /&gt;
==Preliminary results==&lt;br /&gt;
&lt;br /&gt;
==Research plan==&lt;br /&gt;
&lt;br /&gt;
We can speculate here about a longer-term research plan, but it may not be necessary to actually flesh out this part of the &amp;quot;proposal&amp;quot;&lt;/div&gt;</summary>
		<author><name>Adam Darlow</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal&amp;diff=1852</id>
		<title>CS295J/Research proposal</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal&amp;diff=1852"/>
		<updated>2009-02-05T15:20:52Z</updated>

		<summary type="html">&lt;p&gt;Adam Darlow: the meaning of affordances is specific and in my opinion inappropriate here&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Project summary==&lt;br /&gt;
&lt;br /&gt;
We propose to integrate theories and models of cognition, models of perception, rules of design, and concepts from the discipline of human-computer interaction to develop a predictive model of user performance in interacting with computer software for visual and analytical work.  Our proposed model comprises a set of computational elements representing components of human cognition, memory, or perception.  The collective abilities and limitations of these elements can be used to provide feedback on the likely efficacy of user interaction techniques.&lt;br /&gt;
&lt;br /&gt;
The choice of human computational elements will be guided by several models or theories of cognition and perception, including Gestalt, Distributed, Gibson, ???(where pathway, when pathway)???, ???working-memory???, ..., and ???.  The list of elements will be extensible.  The framework coupling them will allow for experimental predictions of utility of user interfaces that can be verified against human performance.&lt;br /&gt;
&lt;br /&gt;
Coupling the system with users will involve a data capture mechanism for collecting the communications between a user interface and a user.  These will be primarily event based, and will include a new low-cost camera-based eye-tracking system.&lt;br /&gt;
&lt;br /&gt;
During early development, existing interfaces will be evaluated manually to characterize their &lt;br /&gt;
&lt;br /&gt;
(we need some way to specify interaction techniques...)&lt;br /&gt;
&lt;br /&gt;
==Specific Contributions==&lt;br /&gt;
# A model of human cognitive and perceptual abilities when using computers&lt;br /&gt;
## Demonstration of the model in predicting human performance with some interfaces.&lt;br /&gt;
## ???&lt;br /&gt;
# Something about design rules collected and merged&lt;br /&gt;
## Something comparing these collected rules to a baseline (establishing their value)&lt;br /&gt;
## ???&lt;br /&gt;
# A low-overhead mechanism for capturing event-based interactions between a user and a computer, including web-cam based eye tracking.  (should we buy or find out about borrowing use of pupil tracker?)  &#039;&#039;Should we include other methods of interaction here?  Audio recognition seems to be the lowest cost.  It would seem that a system that took into account head-tracking, audio, and fingertip or some other high-DOF input would provide a very strong foundation for a multi-modal HCI system.  It may be more interesting to create a software toolkit that allows for synchronized usage of those inputs than a low-cost hardware setup for pupil-tracking.  I agree pupil-tracking is useful, but developing something in-house may not be the strongest contribution we can make with our time.&#039;&#039; (Trevor)&lt;br /&gt;
## Accuracy study of eye tracking (2 cameras?  double as an input device?)&lt;br /&gt;
## ???&lt;br /&gt;
# A set of critiques of existing software used for visual and analytical work based on design rules&lt;br /&gt;
## ???&lt;br /&gt;
&lt;br /&gt;
==Specific Aims==&lt;br /&gt;
&lt;br /&gt;
# build X&lt;br /&gt;
# build Y&lt;br /&gt;
# run experiment Z&lt;br /&gt;
# compare X with existing approach Q&lt;br /&gt;
# Develop a scoring system for interfaces to evaluate the degree to which all changes and causal relations are tracked by motion cues that are contiguous in time and/or space.&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
===Models of cognition===&lt;br /&gt;
There are several models of cognition, ranging from fundamental aspects of neurological processing to extremely high-level psychological analysis.  Three main theories seem to have become recognized as the most helpful in conceptualizing the actual process of HCI.  These models all agree that one cannot accurately analyze HCI by viewing the user without context, but the extent and nature of this context varies greatly.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[http://en.wikipedia.org/wiki/Activity_Theory Activity Theory]&#039;&#039;&#039;, developed in the early 20th century by Russian psychologists S.L. Rubinstein and A.N. Leontiev, posits the existence of four discrete aspects of human-computer interaction.  The &amp;quot;Subject&amp;quot; is the human interacting with the item, who possesses an &amp;quot;Object&amp;quot; (e.g. a goal) which they hope to accomplish by using a tool.  The Subject conceptualizes the realization of the Object via an &amp;quot;Action&amp;quot;, which may be as simple or complex as is necessary.  The Action is made up of one or more &amp;quot;Operations&amp;quot;, the most fundamental level of interaction including typing, clicking, etc.&lt;br /&gt;
&lt;br /&gt;
A key concept in Activity Theory is that of the artifact, which mediates all interaction.  The computer itself need not be the only artifact in HCI - others include all sorts of signs, algorithmic methods, instruments, etc.&lt;br /&gt;
&lt;br /&gt;
A longer synopsis of Activity Theory may be found at [http://mcs.open.ac.uk/yr258/act_theory/ this website].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The [http://en.wikipedia.org/wiki/Situated_cognition Situated Action Model]&#039;&#039;&#039; focuses on emergent behavior, emphasizing the subjective aspect of human-computer interaction and the consequent need to accommodate a wide variety of users.  This model proposes the least amount of contextual interaction, and seems to maintain that the interactive experience is determined entirely by the user&#039;s ability to use the system in question.  While limiting, this concept of usability can be very informative when designing for less tech-savvy users.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[http://en.wikipedia.org/wiki/Distributed_cognition Distributed Cognition]&#039;&#039;&#039; proposes that the computer (or, as in Activity Theory, any other artifact) can be used and ought to be thought of as an extension of the mental processing of the human.  This is not to say that the two are of equal or even comparable cognitive abilities, but that each has unique strengths and that recognition of and planning around these relative advantages can lead to increased efficiency and effectiveness.  The rotation of blocks in Tetris serves as a perfect example of this sort of cognitive symbiosis.&lt;br /&gt;
&lt;br /&gt;
====Workflow Context====&lt;br /&gt;
There are, at least, two levels at which users work ([http://portal.acm.org/citation.cfm?id=985692.985707&amp;amp;coll=portal&amp;amp;dl=ACM&amp;amp;CFID=20781736&amp;amp;CFTOKEN=83176621 Gonzales, et al., 2004]).  Users accomplish individual low-level tasks which are part of larger &#039;&#039;working spheres&#039;&#039;; for example, an office worker might send several emails, create several Post-It (TM) note reminders, and then edit a word document, each of these smaller tasks being part of a single larger working sphere of &amp;quot;adding a new section to the website.&amp;quot;  Thus, it is important to understand this larger workflow context - which often involves extensive levels of multi-tasking, as well as switching between a variety of computing devices and traditional tools, such as notebooks.  In this study it was found that the information workers surveyed typically switch individual tasks every 2 minutes and have many simultaneous working spheres which they switch between, on average every 12 minutes.  This frenzied pace of switching tasks and switching working spheres suggests that users will not be using a single application or device for a long period of time, and that affordances to support this are important.&lt;br /&gt;
&lt;br /&gt;
Czerwinski, et al. conducted a [http://portal.acm.org/citation.cfm?id=985692.985715&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 diary study] of task switching and interruptions of users in 2004.  This study showed that task complexity, task duration, length of absence, and number of interruptions all affected the users&#039; own perceived difficulty of switching tasks.  [http://delivery.acm.org/10.1145/1250000/1240730/p677-iqbal.pdf?key1=1240730&amp;amp;key2=4525483321&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Iqbal, et al.] studied task disruption and recovery in a field study, and found that users often visited several applications as a result of an alert, such as a new email notification, and that 27% of task suspensions resulted in 2 hours or more of disruption.  Users in the study said that losing context was a significant problem in switching tasks, and led in part to the length of some of these disruptions.  This work hints at the importance of providing cues to users to maintain and regain lost context during task switching.&lt;br /&gt;
&lt;br /&gt;
(Andrew Bragdon - OWNER)&lt;br /&gt;
&lt;br /&gt;
====Quantitative Models: Fitts&#039;s law, Steering Law====&lt;br /&gt;
Fitts&#039;s law and the steering law are examples of quantitative models that predict user performance when using certain types of user interfaces.  In addition to these classic models, [http://delivery.acm.org/10.1145/1250000/1240850/p1495-cao.pdf?key1=1240850&amp;amp;key2=9904483321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Cao and Zhai] developed and validated a quantitative model of human performance of pen stroke gestures in 2007.  [http://tlaloc.sfsu.edu/~lank/research/appearing/FSS604LankE.pdf Lank and Saund] utilized a model which used curvature to predict the speed of a pen as it moved across a surface to help disambiguate target selection intent.&lt;br /&gt;
&lt;br /&gt;
In addition, quantitative models are often tested against new interfaces to verify that they hold.  For example, [http://portal.acm.org/citation.cfm?id=1054972.1055012&amp;amp;coll=GUIDE&amp;amp;dl=ACM&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Grossman et al.] verified that their Bubble Cursor approach to enlarging effective pointing target sizes obeyed Fitts&#039;s law for actual distance traveled.&lt;br /&gt;
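For concreteness, the Shannon formulation of Fitts's law, MT = a + b * log2(D/W + 1), can be computed directly; the constants a and b below are illustrative placeholders, not fitted values:

```python
import math

def fitts_movement_time(distance: float, width: float,
                        a: float = 0.1, b: float = 0.15) -> float:
    """Predicted movement time (seconds) under the Shannon formulation
    of Fitts's law. a and b are device- and user-specific constants that
    must be fitted empirically; the defaults here are placeholders."""
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty
```

Farther or smaller targets raise the index of difficulty and hence the predicted time, which is the property the Bubble Cursor evaluation checks against actual distance traveled.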
&lt;br /&gt;
In addition to formal models, machine learning techniques have been applied to modeling user interaction as well.  For example, [http://delivery.acm.org/10.1145/1250000/1240669/p271-hurst.pdf?key1=1240669&amp;amp;key2=6465483321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Hurst, et al.], used a learning classifier, trained on low-level mouse and keyboard usage patterns, to identify novice and expert use dynamically with accuracies as high as 91%.  This classifier was then used to provide different information and feedback to the user as appropriate.&lt;br /&gt;
&lt;br /&gt;
(Andrew Bragdon - OWNER)&lt;br /&gt;
&lt;br /&gt;
====Distributed cognition====&lt;br /&gt;
&lt;br /&gt;
Distributed cognition is a theory in which thinking takes place both inside and outside the brain. Humans have a great ability to use tools and to incorporate their environments into their sphere of thinking and information processing. Clark puts it nicely in [http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature#Cognition Clark-1994-TEM].&lt;br /&gt;
&lt;br /&gt;
Therefore, optimal configurations when considering HCI design will treat the brain, person, interface, and computer as a single, holistic cognitive system.&lt;br /&gt;
&lt;br /&gt;
In practical terms, the issue at hand for our proposal is how best to maximize utility by distributing the cognitive tasks at hand to different components of the whole system. Simply put: which tasks can we off-load to the computer to do for us, faster and more accurately? OFF-LOADING (REF).&lt;br /&gt;
&lt;br /&gt;
Typically, those tasks most eligible to be off-loaded are the ones which we perform poorly on. Conveniently, the tasks which computers perform poorly on we excel at. Here are a few examples:&lt;br /&gt;
&lt;br /&gt;
*Computer&#039;s areas of expertise: number crunching, memory, logical reasoning, precision&lt;br /&gt;
*Human&#039;s areas of expertise: associative thought, real-world knowledge, social behavior, alogical reasoning, imprecision&lt;br /&gt;
&lt;br /&gt;
Using this division of cognitive labor allows us to optimize task work flows. Ignoring it puts strain and bottlenecks at either the computer, the human, or the interface. The field of HCI is full of countless examples of failures which can be attributed to not recognizing which tasks should be handled by which sub-system.&lt;br /&gt;
&lt;br /&gt;
As a heuristic for dividing this thinking, one might turn to the dual-process theory literature (REF TO EVANS). What is most often called System 1 is what humans are good at, while System 2 tasks are what computers do well.&lt;br /&gt;
&lt;br /&gt;
([http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon])&lt;br /&gt;
&lt;br /&gt;
====Information Processing Approach to Cognition====&lt;br /&gt;
The dominant approach in Cognitive Science is called information processing. It sees cognition as a system that takes in information from the environment, forms mental representations and manipulates those representations in order to create the information needed to achieve its goals. This approach includes three levels of analysis originally proposed by Marr (will cite):&lt;br /&gt;
# Computational - What are the goals of a process or representation? What are the inputs and desired outputs required of a system which performs a task? Models at this level of analysis are often considered normative models, because any agent wanting to perform the task should conform to them. Rational agent models of decision making, for example, belong at this level of analysis.&lt;br /&gt;
# Process/Algorithmic - What are the processes or algorithms involved in how humans perform the task? This is the most common level of analysis as it focuses on the mental representations, manipulations and computational faculties involved in actual human processing. Algorithmic descriptions of human capabilities and limitations, such as working memory size, belong at this level of analysis.&lt;br /&gt;
# Implementation - How are the processes and algorithms realized in actual biological computation? Dopamine theories of reward learning, for example, belong at this level of analysis.&lt;br /&gt;
The information processing approach is often contrasted with the distributed cognition approach. Its advantage is that it finds general mechanisms that are valid across many different contexts and situations. Its disadvantage is that it can have difficulty explaining the rich interactions between people and their environment.&lt;br /&gt;
(Adam)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Models of perception===&lt;br /&gt;
&lt;br /&gt;
====Gibsonianism====&lt;br /&gt;
(relevant stuff about Gibson&#039;s theory)&lt;br /&gt;
We will build on top of this theory/model by ... .&lt;br /&gt;
&lt;br /&gt;
(Note: Gibsonianism is really a theory of perception and should be moved to that section if someone knows how.)&lt;br /&gt;
&lt;br /&gt;
Gibsonianism, named after James J. Gibson and more commonly referred to as ecological psychology, is an epistemological direct realist theory of perception and action.  In contrast to information processing and cognitivist approaches which generally assume that perception is a constructive process operating on impoverished sense-data inputs (e.g. photoreceptor activity) to generate representations of the world with added structure and meaning (e.g. a mental or neural &amp;quot;picture&amp;quot; of a chair), ecological psychology treats perception as a relation that places animals in direct and lawful epistemic contact with behaviorally-relevant events and features of their environmental niches (Warren, 2005).  These lawfully-specified and behaviorally-relevant features of the environment constitute the possibilities for action that the environment &amp;quot;affords&amp;quot; the animal (Gibson, 1986).&lt;br /&gt;
&lt;br /&gt;
Gibson&#039;s notion of affordance has many implications for our enterprise; however, it is worth noting that the original definition of affordance emphasizes possibilities for action and not their relative likelihoods.  For example, for most humans, laptop computer screens afford puncturing with Swiss Army knives; however, it is unlikely that a user will attempt to retrieve an electronic coupon by carving it out of their monitor.  This example illustrates that interfaces often afford a class of actions that are undesirable from the perspective of both the designer and the user.&lt;br /&gt;
&lt;br /&gt;
(Jon)&lt;br /&gt;
&lt;br /&gt;
===Design guidelines===&lt;br /&gt;
&lt;br /&gt;
I&#039;m a little behind today, but I will have this section drafted by tonight.  In the meantime, here is my outline. &#039;&#039;&#039;OWNER&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 17:20, 3 February 2009 (UTC)&lt;br /&gt;
:I. Introduction, applications of guidelines&lt;br /&gt;
::A. Application to automated usability evaluations (AUE)&lt;br /&gt;
:II. Popular and seminal examples&lt;br /&gt;
::A. Shneiderman&lt;br /&gt;
::B. Google&lt;br /&gt;
::C. Maeda&lt;br /&gt;
::D. Existing international standards&lt;br /&gt;
:III. Elements of guideline sets, relationship to design patterns&lt;br /&gt;
:IV. Goals for potentially developing a guideline set within the scope of this proposal&lt;br /&gt;
&lt;br /&gt;
===User interface evaluations===&lt;br /&gt;
====History/interaction capture====&lt;br /&gt;
====Cost-based analyses====&lt;br /&gt;
Heidi Lam, &amp;quot;A Framework of Interaction Costs in Information Visualization,&amp;quot; IEEE Transactions on Visualization and Computer Graphics, vol. 14, no. 6, pp. 1149-1156, Nov./Dec. 2008, doi:10.1109/TVCG.2008.109&lt;br /&gt;
&lt;br /&gt;
I&#039;m pretty sure that this paper referred to some earlier paper with a very similar title.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Multimodal HCI ===&lt;br /&gt;
&lt;br /&gt;
Continued advancements in several signal processing techniques have given rise to a multitude of mechanisms that allow for rich, multimodal, human-computer interaction.  These include systems for head-tracking, eye- or pupil-tracking, fingertip tracking, recognition of speech, and detection of electrical impulses in the brain, among others [http://vr.kjist.ac.kr/~dhong/website/paperworks/hci2002coursePapers/April24/Sharma98.pdf Sharma-1998-TMH].  With ever-increasing computing power, integrating these systems in real-time applications has become a plausible endeavor. &lt;br /&gt;
&lt;br /&gt;
==== Head-tracking ====&lt;br /&gt;
:In virtual, stereoscopic environments, head-tracking has been exploited with great success to create an immersive effect, allowing a user to move freely and naturally while dynamically updating the user’s viewpoint in a visual environment.  Head-tracking has been employed in non-immersive settings as well, though careful consideration must be paid to account for unintended movements by the user, which may result in distracting visual effects.&lt;br /&gt;
&lt;br /&gt;
==== Pupil-tracking ====&lt;br /&gt;
:Pupil-tracking has been studied a great deal in the field of Cognitive Science ... (need some examples here from CogSci).  In the HCI community, pupil-tracking has traditionally been used for post-hoc analysis of interface designs, and is particularly prevalent in web interface design.  An alternative utility of pupil-tracking is to employ it in real-time as an actual mode of interaction.  This has been examined in relatively few cases (citations), where typically the eyes are used to control a cursor onscreen.  Like head-tracking, implementations of pupil-tracking must be conscious of unintended eye-movements, which are incredibly frequent.&lt;br /&gt;
&lt;br /&gt;
==== Fingertip-tracking, Gestural recognition ====&lt;br /&gt;
:Fingertip tracking and gestural recognition of the hands are the subjects of much research in the HCI community, particularly in the Virtual Environment and Augmented Reality disciplines.  Less implicit than head or pupil-tracking, gestural recognition of the hands may draw upon the wealth of precedents readily observed in natural human interactions.  As sensing technologies become less obtrusive and more robust, this method of interaction has the potential to become quite effective.&lt;br /&gt;
&lt;br /&gt;
==== Speech Recognition ====&lt;br /&gt;
:Speech recognition is becoming much better, though effective implementation of its desired effects is non-trivial in many applications.  (More on this later).&lt;br /&gt;
&lt;br /&gt;
==== Brain Activity Detection ====&lt;br /&gt;
:The use of electroencephalograms (EEGs) in HCI is quite recent, and with limited degrees of freedom, few robust interfaces have been designed around them.  Nevertheless, the possibility of using any brain function at all to interface with a machine is cause for excitement, and further advances in non-invasive techniques for accessing brain function may allow telepathic HCI to become a reality.&lt;br /&gt;
&lt;br /&gt;
In sum, the synchronized usage of these modes of interaction makes it possible to architect an HCI system capable of sensing and interpreting many of the mechanisms humans use to transmit information to one another.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
Recent advances in the pragmatic use of EEGs in HCI research have been demonstrated, in particular, by [http://portal.acm.org/citation.cfm?id=1357054.1357187&amp;amp;coll=Portal&amp;amp;dl=GUIDE&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Grimes et al.].&lt;br /&gt;
&lt;br /&gt;
==Significance==&lt;br /&gt;
&lt;br /&gt;
==Preliminary results==&lt;br /&gt;
&lt;br /&gt;
==Research plan==&lt;br /&gt;
&lt;br /&gt;
We can speculate here about a longer-term research plan, but it may not be necessary to actually flesh out this part of the &amp;quot;proposal&amp;quot;&lt;/div&gt;</summary>
		<author><name>Adam Darlow</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal&amp;diff=1851</id>
		<title>CS295J/Research proposal</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal&amp;diff=1851"/>
		<updated>2009-02-05T14:37:52Z</updated>

		<summary type="html">&lt;p&gt;Adam Darlow: /* Specific Aims */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Project summary==&lt;br /&gt;
&lt;br /&gt;
We propose to integrate theories and models of cognition, models of perception, rules of design, and concepts from the discipline of human-computer interaction to develop a predictive model of user performance in interacting with computer software for visual and analytical work.  Our proposed model comprises a set of computational elements representing components of human cognition, memory, or perception.  The collective abilities and limitations of these elements can be used to provide feedback on the likely efficacy of user interaction techniques.&lt;br /&gt;
&lt;br /&gt;
The choice of human computational elements will be guided by several models or theories of cognition and perception, including Gestalt, Distributed, Gibson, ???(where pathway, when pathway)???, ???working-memory???, ..., and ???.  The list of elements will be extensible.  The framework coupling them will allow for experimental predictions of utility of user interfaces that can be verified against human performance.&lt;br /&gt;
&lt;br /&gt;
Coupling the system with users will involve a data capture mechanism for collecting the communications between a user interface and a user.  These will be primarily event based, and will include a new low-cost camera-based eye-tracking system.&lt;br /&gt;
&lt;br /&gt;
During early development, existing interfaces will be evaluated manually to characterize their &lt;br /&gt;
&lt;br /&gt;
(we need some way to specify interaction techniques...)&lt;br /&gt;
&lt;br /&gt;
==Specific Contributions==&lt;br /&gt;
# A model of human cognitive and perceptual abilities when using computers&lt;br /&gt;
## Demonstration of the model in predicting human performance with some interfaces.&lt;br /&gt;
## ???&lt;br /&gt;
# Something about design rules collected and merged&lt;br /&gt;
## Something comparing these collected rules to a baseline (establishing their value)&lt;br /&gt;
## ???&lt;br /&gt;
# A low-overhead mechanism for capturing event-based interactions between a user and a computer, including web-cam based eye tracking.  (should we buy or find out about borrowing use of pupil tracker?)  &#039;&#039;Should we include other methods of interaction here?  Audio recognition seems to be the lowest cost.  It would seem that a system that took into account head-tracking, audio, and fingertip or some other high-DOF input would provide a very strong foundation for a multi-modal HCI system.  It may be more interesting to create a software toolkit that allows for synchronized usage of those inputs than a low-cost hardware setup for pupil-tracking.  I agree pupil-tracking is useful, but developing something in-house may not be the strongest contribution we can make with our time.&#039;&#039; (Trevor)&lt;br /&gt;
## Accuracy study of eye tracking (2 cameras?  double as an input device?)&lt;br /&gt;
## ???&lt;br /&gt;
# A set of critiques of existing software used for visual and analytical work based on design rules&lt;br /&gt;
## ???&lt;br /&gt;
&lt;br /&gt;
==Specific Aims==&lt;br /&gt;
&lt;br /&gt;
# build X&lt;br /&gt;
# build Y&lt;br /&gt;
# run experiment Z&lt;br /&gt;
# compare X with existing approach Q&lt;br /&gt;
# Develop a scoring system for interfaces to evaluate the degree to which all changes and causal relations are tracked by motion cues that are contiguous in time and/or space.&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
===Models of cognition===&lt;br /&gt;
There are several models of cognition, ranging from fundamental aspects of neurological processing to extremely high-level psychological analysis.  Three main theories seem to have become recognized as the most helpful in conceptualizing the actual process of HCI.  These models all agree that one cannot accurately analyze HCI by viewing the user without context, but the extent and nature of this context varies greatly.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[http://en.wikipedia.org/wiki/Activity_Theory Activity Theory]&#039;&#039;&#039;, developed in the early 20th century by Russian psychologists S.L. Rubinstein and A.N. Leontiev, posits the existence of four discrete aspects of human-computer interaction.  The &amp;quot;Subject&amp;quot; is the human interacting with the item, who possesses an &amp;quot;Object&amp;quot; (e.g. a goal) which they hope to accomplish by using a tool.  The Subject conceptualizes the realization of the Object via an &amp;quot;Action&amp;quot;, which may be as simple or complex as is necessary.  The Action is made up of one or more &amp;quot;Operations&amp;quot;, the most fundamental level of interaction including typing, clicking, etc.&lt;br /&gt;
&lt;br /&gt;
A key concept in Activity Theory is that of the artifact, which mediates all interaction.  The computer itself need not be the only artifact in HCI - others include all sorts of signs, algorithmic methods, instruments, etc.&lt;br /&gt;
&lt;br /&gt;
A longer synopsis of Activity Theory may be found at [http://mcs.open.ac.uk/yr258/act_theory/ this website].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The [http://en.wikipedia.org/wiki/Situated_cognition Situated Action Model]&#039;&#039;&#039; focuses on emergent behavior, emphasizing the subjective aspect of human-computer interaction and the consequent need to accommodate a wide variety of users.  This model proposes the least amount of contextual interaction, and seems to maintain that the interactive experience is determined entirely by the user&#039;s ability to use the system in question.  While limiting, this concept of usability can be very informative when designing for less tech-savvy users.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[http://en.wikipedia.org/wiki/Distributed_cognition Distributed Cognition]&#039;&#039;&#039; proposes that the computer (or, as in Activity Theory, any other artifact) can be used and ought to be thought of as an extension of the mental processing of the human.  This is not to say that the two are of equal or even comparable cognitive abilities, but that each has unique strengths and that recognition of and planning around these relative advantages can lead to increased efficiency and effectiveness.  The rotation of blocks in Tetris serves as a perfect example of this sort of cognitive symbiosis.&lt;br /&gt;
&lt;br /&gt;
====Workflow Context====&lt;br /&gt;
There are, at least, two levels at which users work ([http://portal.acm.org/citation.cfm?id=985692.985707&amp;amp;coll=portal&amp;amp;dl=ACM&amp;amp;CFID=20781736&amp;amp;CFTOKEN=83176621 Gonzales, et al., 2004]).  Users accomplish individual low-level tasks which are part of larger &#039;&#039;working spheres&#039;&#039;; for example, an office worker might send several emails, create several Post-It (TM) note reminders, and then edit a word document, each of these smaller tasks being part of a single larger working sphere of &amp;quot;adding a new section to the website.&amp;quot;  Thus, it is important to understand this larger workflow context - which often involves extensive levels of multi-tasking, as well as switching between a variety of computing devices and traditional tools, such as notebooks.  In this study it was found that the information workers surveyed typically switch individual tasks every 2 minutes and have many simultaneous working spheres which they switch between, on average every 12 minutes.  This frenzied pace of switching tasks and switching working spheres suggests that users will not be using a single application or device for a long period of time, and that affordances to support this are important.&lt;br /&gt;
&lt;br /&gt;
Czerwinski, et al. conducted a [http://portal.acm.org/citation.cfm?id=985692.985715&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 diary study] of task switching and interruptions of users in 2004.  This study showed that task complexity, task duration, length of absence, and number of interruptions all affected the users&#039; own perceived difficulty of switching tasks.  [http://delivery.acm.org/10.1145/1250000/1240730/p677-iqbal.pdf?key1=1240730&amp;amp;key2=4525483321&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Iqbal, et al.] studied task disruption and recovery in a field study, and found that users often visited several applications as a result of an alert, such as a new email notification, and that 27% of task suspensions resulted in 2 hours or more of disruption.  Users in the study said that losing context was a significant problem in switching tasks, and led in part to the length of some of these disruptions.  This work hints at the importance of providing affordances to users to maintain and regain lost context during task switching.&lt;br /&gt;
&lt;br /&gt;
(Andrew Bragdon - OWNER)&lt;br /&gt;
&lt;br /&gt;
====Quantitative Models: Fitts&#039;s law, Steering Law====&lt;br /&gt;
Fitts&#039;s law and the steering law are examples of quantitative models that predict user performance when using certain types of user interfaces.  In addition to these classic models, [http://delivery.acm.org/10.1145/1250000/1240850/p1495-cao.pdf?key1=1240850&amp;amp;key2=9904483321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Cao and Zhai] developed and validated a quantitative model of human performance of pen stroke gestures in 2007.  [http://tlaloc.sfsu.edu/~lank/research/appearing/FSS604LankE.pdf Lank and Saund] utilized a model which used curvature to predict the speed of a pen as it moved across a surface to help disambiguate target selection intent.&lt;br /&gt;
&lt;br /&gt;
In addition, quantitative models are often tested against new interfaces to verify that they hold.  For example, [http://portal.acm.org/citation.cfm?id=1054972.1055012&amp;amp;coll=GUIDE&amp;amp;dl=ACM&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Grossman et al.] verified that their Bubble Cursor approach to enlarging effective pointing target sizes obeyed Fitts&#039;s law for actual distance traveled.&lt;br /&gt;
&lt;br /&gt;
In addition to formal models, machine learning techniques have been applied to modeling user interaction as well.  For example, [http://delivery.acm.org/10.1145/1250000/1240669/p271-hurst.pdf?key1=1240669&amp;amp;key2=6465483321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Hurst, et al.], used a learning classifier, trained on low-level mouse and keyboard usage patterns, to identify novice and expert use dynamically with accuracies as high as 91%.  This classifier was then used to provide different information and feedback to the user as appropriate.&lt;br /&gt;
&lt;br /&gt;
(Andrew Bragdon - OWNER)&lt;br /&gt;
&lt;br /&gt;
====Distributed cognition====&lt;br /&gt;
&lt;br /&gt;
Distributed cognition is a theory in which thinking takes place both inside and outside the brain. Humans have a great ability to use tools and to incorporate their environments into their sphere of thinking and information processing. Clark puts it nicely in [http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature#Cognition Clark-1994-TEM].&lt;br /&gt;
&lt;br /&gt;
Therefore, optimal configurations when considering HCI design will treat the brain, person, interface, and computer as a single, holistic cognitive system.&lt;br /&gt;
&lt;br /&gt;
In practical terms, the issue at hand for our proposal is how best to maximize utility by distributing the cognitive tasks at hand to different components of the whole system. Simply put: which tasks can we off-load to the computer to do for us, faster and more accurately? OFF-LOADING (REF).&lt;br /&gt;
&lt;br /&gt;
Typically, those tasks most eligible to be off-loaded are the ones which we perform poorly on. Conveniently, the tasks which computers perform poorly on we excel at. Here are a few examples:&lt;br /&gt;
&lt;br /&gt;
*Computer&#039;s areas of expertise: number crunching, memory, logical reasoning, precision&lt;br /&gt;
*Human&#039;s areas of expertise: associative thought, real-world knowledge, social behavior, alogical reasoning, imprecision&lt;br /&gt;
&lt;br /&gt;
Using this division of cognitive labor allows us to optimize task work flows. Ignoring it puts strain and bottlenecks at either the computer, the human, or the interface. The field of HCI is full of countless examples of failures which can be attributed to not recognizing which tasks should be handled by which sub-system.&lt;br /&gt;
&lt;br /&gt;
As a heuristic for dividing this thinking, one might turn to the dual-process theory literature (REF TO EVANS). What is most often called System 1 is what humans are good at, while System 2 tasks are what computers do well.&lt;br /&gt;
&lt;br /&gt;
([http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon])&lt;br /&gt;
&lt;br /&gt;
====Information Processing Approach to Cognition====&lt;br /&gt;
The dominant approach in Cognitive Science is called information processing. It sees cognition as a system that takes in information from the environment, forms mental representations and manipulates those representations in order to create the information needed to achieve its goals. This approach includes three levels of analysis originally proposed by Marr (will cite):&lt;br /&gt;
# Computational - What are the goals of a process or representation? What are the inputs and desired outputs required of a system which performs a task? Models at this level of analysis are often considered normative models, because any agent wanting to perform the task should conform to them. Rational agent models of decision making, for example, belong at this level of analysis.&lt;br /&gt;
# Process/Algorithmic - What are the processes or algorithms involved in how humans perform the task? This is the most common level of analysis as it focuses on the mental representations, manipulations and computational faculties involved in actual human processing. Algorithmic descriptions of human capabilities and limitations, such as working memory size, belong at this level of analysis.&lt;br /&gt;
# Implementation - How are the processes and algorithms realized in actual biological computation? Dopamine theories of reward learning, for example, belong at this level of analysis.&lt;br /&gt;
The information processing approach is often contrasted with the distributed cognition approach. Its advantage is that it finds general mechanisms that are valid across many different contexts and situations. Its disadvantage is that it can have difficulty explaining the rich interactions between people and their environment.&lt;br /&gt;
(Adam)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Models of perception===&lt;br /&gt;
&lt;br /&gt;
====Gibsonianism====&lt;br /&gt;
(relevant stuff about Gibson&#039;s theory)&lt;br /&gt;
We will build on top of this theory/model by ... .&lt;br /&gt;
&lt;br /&gt;
(Note: Gibsonianism is really a theory of perception and should be moved to that section if someone knows how.)&lt;br /&gt;
&lt;br /&gt;
Gibsonianism, named after James J. Gibson and more commonly referred to as ecological psychology, is an epistemological direct realist theory of perception and action.  In contrast to information processing and cognitivist approaches, which generally assume that perception is a constructive process operating on impoverished sense-data inputs (e.g. photoreceptor activity) to generate representations of the world with added structure and meaning (e.g. a mental or neural &amp;quot;picture&amp;quot; of a chair), ecological psychology treats perception as a relation that places animals in direct and lawful epistemic contact with behaviorally-relevant events and features of their environmental niches (Warren, 2005).  These lawfully-specified and behaviorally-relevant features of the environment constitute the possibilities for action that the environment &amp;quot;affords&amp;quot; the animal (Gibson, 1986).&lt;br /&gt;
&lt;br /&gt;
Gibson&#039;s notion of affordance has many implications for our enterprise; however, it is worth noting that the original definition of affordance emphasizes possibilities for action, not their relative likelihoods.  For example, for most humans, laptop computer screens afford puncturing with Swiss Army knives; however, it is unlikely that a user will attempt to retrieve an electronic coupon by carving it out of their monitor.  This example illustrates that interfaces often afford a class of actions that are undesirable from the perspective of both the designer and the user.&lt;br /&gt;
&lt;br /&gt;
(Jon)&lt;br /&gt;
&lt;br /&gt;
===Design guidelines===&lt;br /&gt;
&lt;br /&gt;
I&#039;m a little behind today, but I will have this section drafted by tonight.  In the meantime, here is my outline. &#039;&#039;&#039;OWNER&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 17:20, 3 February 2009 (UTC)&lt;br /&gt;
:I. Introduction, applications of guidelines&lt;br /&gt;
::A. Application to automated usability evaluations (AUE)&lt;br /&gt;
:II. Popular and seminal examples&lt;br /&gt;
::A. Shneiderman&lt;br /&gt;
::B. Google&lt;br /&gt;
::C. Maeda&lt;br /&gt;
::D. Existing international standards&lt;br /&gt;
:III. Elements of guideline sets, relationship to design patterns&lt;br /&gt;
:IV. Goals for potentially developing a guideline set within the scope of this proposal&lt;br /&gt;
&lt;br /&gt;
===User interface evaluations===&lt;br /&gt;
====History/interaction capture====&lt;br /&gt;
====Cost-based analyses====&lt;br /&gt;
Heidi Lam, &amp;quot;A Framework of Interaction Costs in Information Visualization,&amp;quot; IEEE Transactions on Visualization and Computer Graphics, vol. 14, no. 6, pp. 1149-1156, Nov./Dec. 2008, doi:10.1109/TVCG.2008.109&lt;br /&gt;
&lt;br /&gt;
I&#039;m pretty sure that this paper referred to some earlier paper with a very similar title.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Multimodal HCI ===&lt;br /&gt;
&lt;br /&gt;
Continued advancements in several signal processing techniques have given rise to a multitude of mechanisms that allow for rich, multimodal, human-computer interaction.  These include systems for head-tracking, eye- or pupil-tracking, fingertip tracking, recognition of speech, and detection of electrical impulses in the brain, among others [http://vr.kjist.ac.kr/~dhong/website/paperworks/hci2002coursePapers/April24/Sharma98.pdf Sharma-1998-TMH].  With ever-increasing computing power, integrating these systems in real-time applications has become a plausible endeavor. &lt;br /&gt;
&lt;br /&gt;
==== Head-tracking ====&lt;br /&gt;
:In virtual, stereoscopic environments, head-tracking has been exploited with great success to create an immersive effect, allowing a user to move freely and naturally while dynamically updating the user’s viewpoint in a visual environment.  Head-tracking has been employed in non-immersive settings as well, though careful consideration must be paid to account for unintended movements by the user, which may result in distracting visual effects.&lt;br /&gt;
&lt;br /&gt;
==== Pupil-tracking ====&lt;br /&gt;
:Pupil-tracking has been studied a great deal in the field of Cognitive Science ... (need some examples here from CogSci).  In the HCI community, pupil-tracking has traditionally been used for post-hoc analysis of interface designs, and is particularly prevalent in web interface design.  An alternative use of pupil-tracking is to employ it in real-time as an actual mode of interaction.  This has been examined in relatively few cases (citations), where typically the eyes are used to control a cursor onscreen.  Like head-tracking, implementations of pupil-tracking must account for unintended eye-movements, which are extremely frequent.&lt;br /&gt;
&lt;br /&gt;
==== Fingertip-tracking, Gestural recognition ====&lt;br /&gt;
:Fingertip tracking and gestural recognition of the hands are the subjects of much research in the HCI community, particularly in the Virtual Environment and Augmented Reality disciplines.  Less implicit than head or pupil-tracking, gestural recognition of the hands may draw upon the wealth of precedents readily observed in natural human interactions.  As sensing technologies become less obtrusive and more robust, this method of interaction has the potential to become quite effective.&lt;br /&gt;
&lt;br /&gt;
==== Speech Recognition ====&lt;br /&gt;
:Speech recognition accuracy continues to improve, though effectively applying it is non-trivial in many applications.  (More on this later).&lt;br /&gt;
&lt;br /&gt;
==== Brain Activity Detection ====&lt;br /&gt;
:The use of electroencephalograms (EEGs) in HCI is quite recent, and with limited degrees of freedom, few robust interfaces have been designed around it.  Nevertheless, the possibility of using any brain function at all to interface with a machine is cause for excitement, and further advances in non-invasive techniques for accessing brain function may allow telepathic HCI to become a reality.&lt;br /&gt;
&lt;br /&gt;
In sum, the synchronized usage of these modes of interaction makes it possible to architect an HCI system capable of sensing and interpreting many of the mechanisms humans use to transmit information among one another.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
Recent advances in the pragmatic use of EEGs in HCI research are demonstrated, in particular, by [http://portal.acm.org/citation.cfm?id=1357054.1357187&amp;amp;coll=Portal&amp;amp;dl=GUIDE&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Grimes et al.]&lt;br /&gt;
&lt;br /&gt;
==Significance==&lt;br /&gt;
&lt;br /&gt;
==Preliminary results==&lt;br /&gt;
&lt;br /&gt;
==Research plan==&lt;br /&gt;
&lt;br /&gt;
We can speculate here about a longer-term research plan, but it may not be necessary to actually flesh out this part of the &amp;quot;proposal&amp;quot;&lt;/div&gt;</summary>
		<author><name>Adam Darlow</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal&amp;diff=1850</id>
		<title>CS295J/Research proposal</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal&amp;diff=1850"/>
		<updated>2009-02-05T14:36:30Z</updated>

		<summary type="html">&lt;p&gt;Adam Darlow: added specific aim related to motion&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Project summary==&lt;br /&gt;
&lt;br /&gt;
We propose to integrate theories and models of cognition, models of perception, rules of design, and concepts from the discipline of human-computer interaction to develop a predictive model of user performance in interacting with computer software for visual and analytical work.  Our proposed model comprises a set of computational elements representing components of human cognition, memory, or perception.  The collective abilities and limitations of these elements can be used to provide feedback on the likely efficacy of user interaction techniques.&lt;br /&gt;
&lt;br /&gt;
The choice of human computational elements will be guided by several models or theories of cognition and perception, including Gestalt, Distributed, Gibson, ???(where pathway, when pathway)???, ???working-memory???, ..., and ???.  The list of elements will be extensible.  The framework coupling them will allow for experimental predictions of utility of user interfaces that can be verified against human performance.&lt;br /&gt;
&lt;br /&gt;
Coupling the system with users will involve a data capture mechanism for collecting the communications between a user interface and a user.  These will be primarily event based, and will include a new low-cost camera-based eye-tracking system.&lt;br /&gt;
&lt;br /&gt;
During early development, existing interfaces will be evaluated manually to characterize their &lt;br /&gt;
&lt;br /&gt;
(we need some way to specify interaction techniques...)&lt;br /&gt;
&lt;br /&gt;
==Specific Contributions==&lt;br /&gt;
# A model of human cognitive and perceptual abilities when using computers&lt;br /&gt;
## Demonstration of the model in predicting human performance with some interfaces.&lt;br /&gt;
## ???&lt;br /&gt;
# Something about design rules collected and merged&lt;br /&gt;
## Something comparing these collected rules to a baseline (establishing their value)&lt;br /&gt;
## ???&lt;br /&gt;
# A low-overhead mechanism for capturing event-based interactions between a user and a computer, including web-cam based eye tracking.  (should we buy or find out about borrowing use of pupil tracker?)  &#039;&#039;Should we include other methods of interaction here?  Audio recognition seems to be the lowest cost.  It would seem that a system that took into account head-tracking, audio, and fingertip or some other high-DOF input would provide a very strong foundation for a multi-modal HCI system.  It may be more interesting to create a software toolkit that allows for synchronized usage of those inputs than a low-cost hardware setup for pupil-tracking.  I agree pupil-tracking is useful, but developing something in-house may not be the strongest contribution we can make with our time.&#039;&#039; (Trevor)&lt;br /&gt;
## Accuracy study of eye tracking (2 cameras?  double as an input device?)&lt;br /&gt;
## ???&lt;br /&gt;
# A set of critiques of existing software used for visual and analytical work based on design rules&lt;br /&gt;
## ???&lt;br /&gt;
&lt;br /&gt;
==Specific Aims==&lt;br /&gt;
&lt;br /&gt;
# build X&lt;br /&gt;
# build Y&lt;br /&gt;
# run experiment Z&lt;br /&gt;
# compare X with existing approach Q&lt;br /&gt;
# Develop a scoring system for interfaces to evaluate the degree to which all changes and causal relations are tracked by motion cues that are contiguous in time and space.&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
===Models of cognition===&lt;br /&gt;
There are several models of cognition, ranging from fundamental aspects of neurological processing to extremely high-level psychological analysis.  Three main theories have become recognized as the most helpful in conceptualizing the actual process of HCI.  These models all agree that one cannot accurately analyze HCI by viewing the user without context, but the extent and nature of this context varies greatly.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[http://en.wikipedia.org/wiki/Activity_Theory Activity Theory]&#039;&#039;&#039;, developed in the early 20th century by Russian psychologists S.L. Rubinstein and A.N. Leontiev, posits the existence of four discrete aspects of human-computer interaction.  The &amp;quot;Subject&amp;quot; is the human interacting with the item, who possesses an &amp;quot;Object&amp;quot; (e.g. a goal) which they hope to accomplish by using a tool.  The Subject conceptualizes the realization of the Object via an &amp;quot;Action&amp;quot;, which may be as simple or complex as is necessary.  The Action is made up of one or more &amp;quot;Operations&amp;quot;, the most fundamental level of interaction including typing, clicking, etc.&lt;br /&gt;
&lt;br /&gt;
A key concept in Activity Theory is that of the artifact, which mediates all interaction.  The computer itself need not be the only artifact in HCI - others include all sorts of signs, algorithmic methods, instruments, etc.&lt;br /&gt;
&lt;br /&gt;
A longer synopsis of Activity Theory may be found at [http://mcs.open.ac.uk/yr258/act_theory/ this website].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The [http://en.wikipedia.org/wiki/Situated_cognition Situated Action Model]&#039;&#039;&#039; focuses on emergent behavior, emphasizing the subjective aspect of human-computer interaction and the therefore-necessary allowance for a wide variety of users.  This model proposes the least amount of contextual interaction, and seems to maintain that the interactive experience is determined entirely by the user&#039;s ability to use the system in question.  While limiting, this concept of usability can be very informative when designing for less tech-savvy users.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[http://en.wikipedia.org/wiki/Distributed_cognition Distributed Cognition]&#039;&#039;&#039; proposes that the computer (or, as in Activity Theory, any other artifact) can be used and ought to be thought of as an extension of the mental processing of the human.  This is not to say that the two are of equal or even comparable cognitive abilities, but that each has unique strengths and that recognition of and planning around these relative advantages can lead to increased efficiency and effectiveness.  The rotation of blocks in Tetris serves as a perfect example of this sort of cognitive symbiosis.&lt;br /&gt;
&lt;br /&gt;
====Workflow Context====&lt;br /&gt;
There are at least two levels at which users work ([http://portal.acm.org/citation.cfm?id=985692.985707&amp;amp;coll=portal&amp;amp;dl=ACM&amp;amp;CFID=20781736&amp;amp;CFTOKEN=83176621 Gonzales, et al., 2004]).  Users accomplish individual low-level tasks which are part of larger &#039;&#039;working spheres&#039;&#039;; for example, an office worker might send several emails, create several Post-It (TM) note reminders, and then edit a Word document, each of these smaller tasks being part of a single larger working sphere of &amp;quot;adding a new section to the website.&amp;quot;  Thus, it is important to understand this larger workflow context, which often involves extensive multi-tasking, as well as switching between a variety of computing devices and traditional tools, such as notebooks.  This study found that the information workers surveyed typically switch individual tasks every 2 minutes and maintain many simultaneous working spheres, between which they switch on average every 12 minutes.  This frenzied pace of switching tasks and working spheres suggests that users will not be using a single application or device for a long period of time, and that affordances to support this are important.&lt;br /&gt;
&lt;br /&gt;
Czerwinski, et al. conducted a [http://portal.acm.org/citation.cfm?id=985692.985715&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 diary study] of task switching and interruptions of users in 2004.  This study showed that task complexity, task duration, length of absence, and number of interruptions all affected the users&#039; own perceived difficulty of switching tasks.  [http://delivery.acm.org/10.1145/1250000/1240730/p677-iqbal.pdf?key1=1240730&amp;amp;key2=4525483321&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Iqbal, et al.] studied task disruption and recovery in a field study, and found that users often visited several applications as a result of an alert, such as a new email notification, and that 27% of task suspensions resulted in 2 hours or more of disruption.  Users in the study said that losing context was a significant problem in switching tasks, and led in part to the length of some of these disruptions.  This work hints at the importance of providing affordances to users to maintain and regain lost context during task switching.&lt;br /&gt;
&lt;br /&gt;
(Andrew Bragdon - OWNER)&lt;br /&gt;
&lt;br /&gt;
====Quantitative Models: Fitts&#039;s law, Steering Law====&lt;br /&gt;
Fitts&#039;s law and the steering law are examples of quantitative models that predict user performance when using certain types of user interfaces.  In addition to these classic models, [http://delivery.acm.org/10.1145/1250000/1240850/p1495-cao.pdf?key1=1240850&amp;amp;key2=9904483321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Cao and Zhai] developed and validated a quantitative model of human performance of pen stroke gestures in 2007.  [http://tlaloc.sfsu.edu/~lank/research/appearing/FSS604LankE.pdf Lank and Saund] utilized a model which used curvature to predict the speed of a pen as it moved across a surface to help disambiguate target selection intent.&lt;br /&gt;
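The two classic models above can be sketched directly from their formulas. This is a minimal illustration only: the constants a and b below are placeholder values that would, in practice, be fit to empirical pointing data for a particular device and user population.

```python
import math

def fitts_time(distance, width, a=0.1, b=0.15):
    """Predicted movement time (seconds) under Fitts's law,
    Shannon formulation: T = a + b * log2(D/W + 1).
    a (intercept) and b (slope) are illustrative placeholders."""
    index_of_difficulty = math.log2(distance / width + 1)
    return a + b * index_of_difficulty

def steering_time(path_length, tunnel_width, a=0.1, b=0.2):
    """Predicted time to steer through a straight tunnel of
    constant width (steering law): T = a + b * (L / W)."""
    return a + b * (path_length / tunnel_width)

# A distant, small target is predicted to take longer than a near, large one.
print(fitts_time(distance=512, width=16))  # high index of difficulty
print(fitts_time(distance=64, width=64))   # low index of difficulty (ID = 1 bit)
print(steering_time(path_length=300, tunnel_width=30))
```

Both functions are monotone in their difficulty term, which is what lets such models rank candidate interface layouts before any user testing.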
&lt;br /&gt;
In addition, quantitative models are often tested against new interfaces to verify that they hold.  For example, [http://portal.acm.org/citation.cfm?id=1054972.1055012&amp;amp;coll=GUIDE&amp;amp;dl=ACM&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Grossman et al.] verified that their Bubble Cursor approach to enlarging effective pointing target sizes obeyed Fitts&#039;s law for actual distance traveled.&lt;br /&gt;
&lt;br /&gt;
In addition to formal models, machine learning techniques have been applied to modeling user interaction as well.  For example, [http://delivery.acm.org/10.1145/1250000/1240669/p271-hurst.pdf?key1=1240669&amp;amp;key2=6465483321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Hurst, et al.] used a learning classifier, trained on low-level mouse and keyboard usage patterns, to identify novice and expert use dynamically with accuracies as high as 91%.  This classifier was then used to provide different information and feedback to the user as appropriate.&lt;br /&gt;
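The general idea of classifying novice vs. expert use from low-level interaction features can be sketched as follows. This is not Hurst et al.'s actual model: the feature names, the made-up training data, and the nearest-centroid approach are all illustrative assumptions.

```python
def centroid(rows):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def classify(sample, centroids):
    """Assign a sample to the label of the nearest centroid
    (squared Euclidean distance)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(sample, centroids[label]))

# Hypothetical per-session features: [mean pointing time (s), selection errors/min]
novice_sessions = [[1.9, 4.0], [2.2, 5.0], [1.7, 3.0]]
expert_sessions = [[0.8, 0.0], [0.6, 1.0], [0.9, 0.0]]

centroids = {
    "novice": centroid(novice_sessions),
    "expert": centroid(expert_sessions),
}

print(classify([0.7, 1.0], centroids))  # fast, few errors -> "expert"
print(classify([2.0, 4.0], centroids))  # slow, many errors -> "novice"
```

In a real system the label returned here would drive the choice of feedback shown to the user, as in the adaptive-feedback use described above.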
&lt;br /&gt;
(Andrew Bragdon - OWNER)&lt;br /&gt;
&lt;br /&gt;
====Distributed cognition====&lt;br /&gt;
&lt;br /&gt;
Distributed cognition is a theory in which thinking takes place both inside and outside of the brain. Humans have a great ability to use tools and to incorporate their environments into their sphere of thinking and information processing. Clark puts it nicely in [[http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature#Cognition Clark-1994-TEM]].&lt;br /&gt;
&lt;br /&gt;
Therefore, optimal HCI designs will treat the brain, person, interface, and computer as a single, holistic cognitive system.&lt;br /&gt;
&lt;br /&gt;
In practical terms, the issue at hand for our proposal is how best to maximize utility by distributing the cognitive tasks at hand to different components of the whole system. Simply put: which tasks can we off-load to the computer to do for us faster and more accurately? OFF-LOADING (REF).&lt;br /&gt;
&lt;br /&gt;
Typically, the tasks most eligible to be off-loaded are those we perform poorly on; conveniently, we excel at the tasks computers perform poorly on. Here are a few examples:&lt;br /&gt;
&lt;br /&gt;
*The computer&#039;s areas of expertise: number crunching, memory, logical reasoning, precision&lt;br /&gt;
*The human&#039;s areas of expertise: associative thought, real-world knowledge, social behavior, alogical reasoning, imprecision&lt;br /&gt;
&lt;br /&gt;
Using this division of cognitive labor allows us to optimize task workflows. Ignoring it creates strain and bottlenecks at the computer, the human, or the interface. The field of HCI is full of failures that can be attributed to not recognizing which tasks should be handled by which sub-system.&lt;br /&gt;
&lt;br /&gt;
As a heuristic for dividing up thinking, one might turn to the dual-process theory literature (REF TO EVANS). What is most often called System 1 covers what humans are good at, while System 2 tasks are what computers do well.&lt;br /&gt;
&lt;br /&gt;
([http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon])&lt;br /&gt;
&lt;br /&gt;
====Information Processing Approach to Cognition====&lt;br /&gt;
The dominant approach in Cognitive Science is called information processing. It sees cognition as a system that takes in information from the environment, forms mental representations and manipulates those representations in order to create the information needed to achieve its goals. This approach includes three levels of analysis originally proposed by Marr (will cite):&lt;br /&gt;
# Computational - What are the goals of a process or representation? What are the inputs and desired outputs required of a system which performs a task? Models at this level of analysis are often considered normative models, because any agent wanting to perform the task should conform to them. Rational agent models of decision making, for example, belong at this level of analysis.&lt;br /&gt;
# Process/Algorithmic - What are the processes or algorithms involved in how humans perform the task? This is the most common level of analysis as it focuses on the mental representations, manipulations and computational faculties involved in actual human processing. Algorithmic descriptions of human capabilities and limitations, such as working memory size, belong at this level of analysis.&lt;br /&gt;
# Implementation - How are the processes and algorithms realized in actual biological computation? Dopamine theories of reward learning, for example, belong at this level of analysis.&lt;br /&gt;
The information processing approach is often contrasted with the distributed cognition approach. Its advantage is that it finds general mechanisms that are valid across many different contexts and situations. Its disadvantage is that it can have difficulty explaining the rich interactions between people and their environment.&lt;br /&gt;
(Adam)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Models of perception===&lt;br /&gt;
&lt;br /&gt;
====Gibsonianism====&lt;br /&gt;
(relevant stuff about Gibson&#039;s theory)&lt;br /&gt;
We will build on top of this theory/model by ... .&lt;br /&gt;
&lt;br /&gt;
(Note: Gibsonianism is really a theory of perception and should be moved to that section if someone knows how.)&lt;br /&gt;
&lt;br /&gt;
Gibsonianism, named after James J. Gibson and more commonly referred to as ecological psychology, is an epistemological direct realist theory of perception and action.  In contrast to information processing and cognitivist approaches, which generally assume that perception is a constructive process operating on impoverished sense-data inputs (e.g. photoreceptor activity) to generate representations of the world with added structure and meaning (e.g. a mental or neural &amp;quot;picture&amp;quot; of a chair), ecological psychology treats perception as a relation that places animals in direct and lawful epistemic contact with behaviorally-relevant events and features of their environmental niches (Warren, 2005).  These lawfully-specified and behaviorally-relevant features of the environment constitute the possibilities for action that the environment &amp;quot;affords&amp;quot; the animal (Gibson, 1986).&lt;br /&gt;
&lt;br /&gt;
Gibson&#039;s notion of affordance has many implications for our enterprise; however, it is worth noting that the original definition of affordance emphasizes possibilities for action, not their relative likelihoods.  For example, for most humans, laptop computer screens afford puncturing with Swiss Army knives; however, it is unlikely that a user will attempt to retrieve an electronic coupon by carving it out of their monitor.  This example illustrates that interfaces often afford a class of actions that are undesirable from the perspective of both the designer and the user.&lt;br /&gt;
&lt;br /&gt;
(Jon)&lt;br /&gt;
&lt;br /&gt;
===Design guidelines===&lt;br /&gt;
&lt;br /&gt;
I&#039;m a little behind today, but I will have this section drafted by tonight.  In the meantime, here is my outline. &#039;&#039;&#039;OWNER&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 17:20, 3 February 2009 (UTC)&lt;br /&gt;
:I. Introduction, applications of guidelines&lt;br /&gt;
::A. Application to automated usability evaluations (AUE)&lt;br /&gt;
:II. Popular and seminal examples&lt;br /&gt;
::A. Shneiderman&lt;br /&gt;
::B. Google&lt;br /&gt;
::C. Maeda&lt;br /&gt;
::D. Existing international standards&lt;br /&gt;
:III. Elements of guideline sets, relationship to design patterns&lt;br /&gt;
:IV. Goals for potentially developing a guideline set within the scope of this proposal&lt;br /&gt;
&lt;br /&gt;
===User interface evaluations===&lt;br /&gt;
====History/interaction capture====&lt;br /&gt;
====Cost-based analyses====&lt;br /&gt;
Heidi Lam, &amp;quot;A Framework of Interaction Costs in Information Visualization,&amp;quot; IEEE Transactions on Visualization and Computer Graphics, vol. 14, no. 6, pp. 1149-1156, Nov./Dec. 2008, doi:10.1109/TVCG.2008.109&lt;br /&gt;
&lt;br /&gt;
I&#039;m pretty sure that this paper referred to some earlier paper with a very similar title.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Multimodal HCI ===&lt;br /&gt;
&lt;br /&gt;
Continued advancements in several signal processing techniques have given rise to a multitude of mechanisms that allow for rich, multimodal, human-computer interaction.  These include systems for head-tracking, eye- or pupil-tracking, fingertip tracking, recognition of speech, and detection of electrical impulses in the brain, among others [http://vr.kjist.ac.kr/~dhong/website/paperworks/hci2002coursePapers/April24/Sharma98.pdf Sharma-1998-TMH].  With ever-increasing computing power, integrating these systems in real-time applications has become a plausible endeavor. &lt;br /&gt;
&lt;br /&gt;
==== Head-tracking ====&lt;br /&gt;
:In virtual, stereoscopic environments, head-tracking has been exploited with great success to create an immersive effect, allowing a user to move freely and naturally while dynamically updating the user’s viewpoint in a visual environment.  Head-tracking has been employed in non-immersive settings as well, though careful consideration must be paid to account for unintended movements by the user, which may result in distracting visual effects.&lt;br /&gt;
&lt;br /&gt;
==== Pupil-tracking ====&lt;br /&gt;
:Pupil-tracking has been studied a great deal in the field of Cognitive Science ... (need some examples here from CogSci).  In the HCI community, pupil-tracking has traditionally been used for post-hoc analysis of interface designs, and is particularly prevalent in web interface design.  An alternative use of pupil-tracking is to employ it in real-time as an actual mode of interaction.  This has been examined in relatively few cases (citations), where typically the eyes are used to control a cursor onscreen.  Like head-tracking, implementations of pupil-tracking must account for unintended eye-movements, which are extremely frequent.&lt;br /&gt;
&lt;br /&gt;
==== Fingertip-tracking, Gestural recognition ====&lt;br /&gt;
:Fingertip tracking and gestural recognition of the hands are the subjects of much research in the HCI community, particularly in the Virtual Environment and Augmented Reality disciplines.  Less implicit than head or pupil-tracking, gestural recognition of the hands may draw upon the wealth of precedents readily observed in natural human interactions.  As sensing technologies become less obtrusive and more robust, this method of interaction has the potential to become quite effective.&lt;br /&gt;
&lt;br /&gt;
==== Speech Recognition ====&lt;br /&gt;
:Speech recognition accuracy continues to improve, though effectively applying it is non-trivial in many applications.  (More on this later).&lt;br /&gt;
&lt;br /&gt;
==== Brain Activity Detection ====&lt;br /&gt;
:The use of electroencephalograms (EEGs) in HCI is quite recent, and with limited degrees of freedom, few robust interfaces have been designed around it.  Nevertheless, the possibility of using any brain function at all to interface with a machine is cause for excitement, and further advances in non-invasive techniques for accessing brain function may allow telepathic HCI to become a reality.&lt;br /&gt;
&lt;br /&gt;
In sum, the synchronized usage of these modes of interaction makes it possible to architect an HCI system capable of sensing and interpreting many of the mechanisms humans use to transmit information among one another.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
Recent advances in the pragmatic use of EEGs in HCI research are demonstrated, in particular, by [http://portal.acm.org/citation.cfm?id=1357054.1357187&amp;amp;coll=Portal&amp;amp;dl=GUIDE&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Grimes et al.]&lt;br /&gt;
&lt;br /&gt;
==Significance==&lt;br /&gt;
&lt;br /&gt;
==Preliminary results==&lt;br /&gt;
&lt;br /&gt;
==Research plan==&lt;br /&gt;
&lt;br /&gt;
We can speculate here about a longer-term research plan, but it may not be necessary to actually flesh out this part of the &amp;quot;proposal&amp;quot;&lt;/div&gt;</summary>
		<author><name>Adam Darlow</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal&amp;diff=1849</id>
		<title>CS295J/Research proposal</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal&amp;diff=1849"/>
		<updated>2009-02-05T14:28:53Z</updated>

		<summary type="html">&lt;p&gt;Adam Darlow: moved Gibsonianism to perception&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Project summary==&lt;br /&gt;
&lt;br /&gt;
We propose to integrate theories and models of cognition, models of perception, rules of design, and concepts from the discipline of human-computer interaction to develop a predictive model of user performance in interacting with computer software for visual and analytical work.  Our proposed model comprises a set of computational elements representing components of human cognition, memory, or perception.  The collective abilities and limitations of these elements can be used to provide feedback on the likely efficacy of user interaction techniques.&lt;br /&gt;
&lt;br /&gt;
The choice of human computational elements will be guided by several models or theories of cognition and perception, including Gestalt, Distributed, Gibson, ???(where pathway, when pathway)???, ???working-memory???, ..., and ???.  The list of elements will be extensible.  The framework coupling them will allow for experimental predictions of utility of user interfaces that can be verified against human performance.&lt;br /&gt;
&lt;br /&gt;
Coupling the system with users will involve a data capture mechanism for collecting the communications between a user interface and a user.  These will be primarily event-based, and will include a new low-cost camera-based eye-tracking system.&lt;br /&gt;
&lt;br /&gt;
During early development, existing interfaces will be evaluated manually to characterize their &lt;br /&gt;
&lt;br /&gt;
(we need some way to specify interaction techniques...)&lt;br /&gt;
&lt;br /&gt;
==Specific Contributions==&lt;br /&gt;
# A model of human cognitive and perceptual abilities when using computers&lt;br /&gt;
## Demonstration of the model in predicting human performance with some interfaces.&lt;br /&gt;
## ???&lt;br /&gt;
# Something about design rules collected and merged&lt;br /&gt;
## Something comparing these collected rules to a baseline (establishing their value)&lt;br /&gt;
## ???&lt;br /&gt;
# A low-overhead mechanism for capturing event-based interactions between a user and a computer, including web-cam based eye tracking.  (should we buy or find out about borrowing use of pupil tracker?)  &#039;&#039;Should we include other methods of interaction here?  Audio recognition seems to be the lowest cost.  It would seem that a system that took into account head-tracking, audio, and fingertip or some other high-DOF input would provide a very strong foundation for a multi-modal HCI system.  It may be more interesting to create a software toolkit that allows for synchronized usage of those inputs than a low-cost hardware setup for pupil-tracking.  I agree pupil-tracking is useful, but developing something in-house may not be the strongest contribution we can make with our time.&#039;&#039; (Trevor)&lt;br /&gt;
## Accuracy study of eye tracking (2 cameras?  double as an input device?)&lt;br /&gt;
## ???&lt;br /&gt;
# A set of critiques of existing software used for visual and analytical work based on design rules&lt;br /&gt;
## ???&lt;br /&gt;
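The low-overhead capture mechanism in contribution 3 could be sketched as an append-only, timestamped event log.  A minimal illustration follows; all names here (&#039;&#039;InteractionEvent&#039;&#039;, &#039;&#039;EventCapture&#039;&#039;) are hypothetical, not part of the proposal:&lt;br /&gt;

```python
import time
from dataclasses import dataclass

@dataclass
class InteractionEvent:
    timestamp: float  # seconds since capture started
    source: str       # e.g. "mouse", "keyboard", "eye_tracker"
    kind: str         # e.g. "click", "keypress", "fixation"
    data: dict        # free-form event payload

class EventCapture:
    """Append-only, timestamped log of user-interface events."""

    def __init__(self):
        self._start = time.monotonic()
        self.events = []

    def record(self, source, kind, **data):
        # Record one event with a monotonic timestamp relative to start.
        self.events.append(
            InteractionEvent(time.monotonic() - self._start, source, kind, data))

    def by_source(self, source):
        # Filter the log by input modality, e.g. only eye-tracker events.
        return [e for e in self.events if e.source == source]
```

Such a log could later be replayed against the cognitive model, or filtered per modality (mouse, keyboard, eye tracker) for analysis.&lt;br /&gt;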
&lt;br /&gt;
==Specific Aims==&lt;br /&gt;
&lt;br /&gt;
# build X&lt;br /&gt;
# build Y&lt;br /&gt;
# run experiment Z&lt;br /&gt;
# compare X with existing approach Q&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
===Models of cognition===&lt;br /&gt;
There are several models of cognition, ranging from fundamental aspects of neurological processing to extremely high-level psychological analysis.  Three main theories have come to be recognized as the most helpful in conceptualizing the actual process of HCI.  These models all agree that one cannot accurately analyze HCI by viewing the user without context, but the extent and nature of this context vary greatly.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[http://en.wikipedia.org/wiki/Activity_Theory Activity Theory]&#039;&#039;&#039;, developed in the early 20th century by Russian psychologists S.L. Rubinstein and A.N. Leontiev, posits the existence of four discrete aspects of human-computer interaction.  The &amp;quot;Subject&amp;quot; is the human interacting with the item, who possesses an &amp;quot;Object&amp;quot; (e.g. a goal) which they hope to accomplish by using a tool.  The Subject conceptualizes the realization of the Object via an &amp;quot;Action&amp;quot;, which may be as simple or complex as is necessary.  The Action is made up of one or more &amp;quot;Operations&amp;quot;, the most fundamental level of interaction including typing, clicking, etc.&lt;br /&gt;
&lt;br /&gt;
A key concept in Activity Theory is that of the artifact, which mediates all interaction.  The computer itself need not be the only artifact in HCI - others include all sorts of signs, algorithmic methods, instruments, etc.&lt;br /&gt;
&lt;br /&gt;
A longer synopsis of Activity Theory may be found at [http://mcs.open.ac.uk/yr258/act_theory/ this website].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The [http://en.wikipedia.org/wiki/Situated_cognition Situated Action Model]&#039;&#039;&#039; focuses on emergent behavior, emphasizing the subjective aspect of human-computer interaction and the therefore-necessary allowance for a wide variety of users.  This model proposes the least amount of contextual interaction, and seems to maintain that the interactive experience is determined entirely by the user&#039;s ability to use the system in question.  While limiting, this concept of usability can be very informative when designing for less tech-savvy users.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[http://en.wikipedia.org/wiki/Distributed_cognition Distributed Cognition]&#039;&#039;&#039; proposes that the computer (or, as in Activity Theory, any other artifact) can be used and ought to be thought of as an extension of the mental processing of the human.  This is not to say that the two are of equal or even comparable cognitive abilities, but that each has unique strengths and that recognition of and planning around these relative advantages can lead to increased efficiency and effectiveness.  The rotation of blocks in Tetris serves as a perfect example of this sort of cognitive symbiosis.&lt;br /&gt;
&lt;br /&gt;
====Workflow Context====&lt;br /&gt;
There are, at least, two levels at which users work ([http://portal.acm.org/citation.cfm?id=985692.985707&amp;amp;coll=portal&amp;amp;dl=ACM&amp;amp;CFID=20781736&amp;amp;CFTOKEN=83176621 Gonzales, et al., 2004]).  Users accomplish individual low-level tasks which are part of larger &#039;&#039;working spheres&#039;&#039;; for example, an office worker might send several emails, create several Post-It (TM) note reminders, and then edit a Word document, each of these smaller tasks being part of a single larger working sphere of &amp;quot;adding a new section to the website.&amp;quot;  Thus, it is important to understand this larger workflow context - which often involves extensive levels of multi-tasking, as well as switching between a variety of computing devices and traditional tools, such as notebooks.  In this study it was found that the information workers surveyed typically switch individual tasks every 2 minutes and have many simultaneous working spheres which they switch between, on average every 12 minutes.  This frenzied pace of switching tasks and switching working spheres suggests that users will not be using a single application or device for a long period of time, and that affordances to support this are important.&lt;br /&gt;
&lt;br /&gt;
Czerwinski, et al. conducted a [http://portal.acm.org/citation.cfm?id=985692.985715&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 diary study] of task switching and interruptions of users in 2004.  This study showed that task complexity, task duration, length of absence, and number of interruptions all affected the users&#039; own perceived difficulty of switching tasks.  [http://delivery.acm.org/10.1145/1250000/1240730/p677-iqbal.pdf?key1=1240730&amp;amp;key2=4525483321&amp;amp;coll=Portal&amp;amp;dl=ACM&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Iqbal, et al.] studied task disruption and recovery in a field study, and found that users often visited several applications as a result of an alert, such as a new email notification, and that 27% of task suspensions resulted in 2 hours or more of disruption.  Users in the study said that losing context was a significant problem in switching tasks, and led in part to the length of some of these disruptions.  This work hints at the importance of providing affordances to users to maintain and regain lost context during task switching.&lt;br /&gt;
&lt;br /&gt;
(Andrew Bragdon - OWNER)&lt;br /&gt;
&lt;br /&gt;
====Quantitative Models: Fitts&#039;s Law, Steering Law====&lt;br /&gt;
Fitts&#039;s law and the steering law are examples of quantitative models that predict user performance when using certain types of user interfaces.  In addition to these classic models, [http://delivery.acm.org/10.1145/1250000/1240850/p1495-cao.pdf?key1=1240850&amp;amp;key2=9904483321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Cao and Zhai] developed and validated a quantitative model of human performance of pen stroke gestures in 2007.  [http://tlaloc.sfsu.edu/~lank/research/appearing/FSS604LankE.pdf Lank and Saund] utilized a model which used curvature to predict the speed of a pen as it moved across a surface to help disambiguate target selection intent.&lt;br /&gt;
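For concreteness, the two classic models can be written down directly.  The coefficients &#039;&#039;a&#039;&#039; and &#039;&#039;b&#039;&#039; below are illustrative placeholders; in practice they are fit empirically per device and user population:&lt;br /&gt;

```python
import math

def fitts_mt(distance, width, a=0.1, b=0.15):
    """Predicted movement time (s) under the Shannon formulation of
    Fitts's law: MT = a + b * log2(D/W + 1)."""
    index_of_difficulty = math.log2(distance / width + 1.0)  # in bits
    return a + b * index_of_difficulty

def steering_mt(path_length, path_width, a=0.1, b=0.15):
    """Predicted time (s) to steer through a straight tunnel under the
    steering law: T = a + b * (L/W)."""
    return a + b * (path_length / path_width)
```

Both models capture the same speed-accuracy intuition: doubling the target width (or tunnel width) lowers the predicted time, while doubling the distance raises it.&lt;br /&gt;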
&lt;br /&gt;
In addition, quantiative models are often tested against new interfaces to verify that they hold.  For example, [http://portal.acm.org/citation.cfm?id=1054972.1055012&amp;amp;coll=GUIDE&amp;amp;dl=ACM&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Grossman et al.] verified that their Bubble Cursor approach to enlarging effective pointing target sizes obeyed Fitts&#039;s law for actual distance traveled.&lt;br /&gt;
&lt;br /&gt;
In addition to formal models, machine learning techniques have been applied to modeling user interaction as well.  For example, [http://delivery.acm.org/10.1145/1250000/1240669/p271-hurst.pdf?key1=1240669&amp;amp;key2=6465483321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=20565232&amp;amp;CFTOKEN=69614228 Hurst, et al.], used a learning classifier, trained on low-level mouse and keyboard usage patterns, to identify novice and expert use dynamically with accuracies as high as 91%.  This classifier was then used to provide different information and feedback to the user as appropriate.&lt;br /&gt;
&lt;br /&gt;
(Andrew Bragdon - OWNER)&lt;br /&gt;
&lt;br /&gt;
====Distributed cognition====&lt;br /&gt;
&lt;br /&gt;
Distributed cognition is a theory in which thinking takes place both inside and outside of the brain. Humans have a great ability to use tools and to incorporate their environments into their sphere of thinking and information processing. Clark puts it nicely in [[http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature#Cognition Clark-1994-TEM]].&lt;br /&gt;
&lt;br /&gt;
Therefore, optimal configurations when considering HCI design will treat the brain, person, interface, and computer as a holistic cognitive system.&lt;br /&gt;
&lt;br /&gt;
In practical terms, the issue for our proposal is how best to maximize utility by distributing cognitive tasks to different components of the whole system. Simply put: which tasks can we off-load to the computer to do for us, faster and more accurately? OFF-LOADING (REF).&lt;br /&gt;
&lt;br /&gt;
Typically, those tasks most eligible to be off-loaded are the ones which we perform poorly on. Conveniently, the tasks which computers perform poorly on we excel at. Here are a few examples:&lt;br /&gt;
&lt;br /&gt;
*Computers&#039; areas of expertise: number crunching, memory, logical reasoning, precision&lt;br /&gt;
*Humans&#039; areas of expertise: associative thought, real-world knowledge, social behavior, alogical reasoning, imprecision&lt;br /&gt;
&lt;br /&gt;
Using this division of cognitive labor allows us to optimize task work flows. Ignoring it puts strain and bottlenecks at either the computer, the human, or the interface. The field of HCI is full of countless examples of failures which can be attributed to not recognizing which tasks should be handled by which sub-system.&lt;br /&gt;
&lt;br /&gt;
As a heuristic to divide thinking, one might turn to the dual-process theory literature (REF TO EVANS). What is most often called System 1 is what humans are good at, while System 2 tasks are what computers do well.&lt;br /&gt;
&lt;br /&gt;
([http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon])&lt;br /&gt;
&lt;br /&gt;
====Information Processing Approach to Cognition====&lt;br /&gt;
The dominant approach in Cognitive Science is called information processing. It sees cognition as a system that takes in information from the environment, forms mental representations and manipulates those representations in order to create the information needed to achieve its goals. This approach includes three levels of analysis originally proposed by Marr (will cite):&lt;br /&gt;
# Computational - What are the goals of a process or representation? What are the inputs and desired outputs required of a system which performs a task? Models at this level of analysis are often considered normative models, because any agent wanting to perform the task should conform to them. Rational agent models of decision making, for example, belong at this level of analysis.&lt;br /&gt;
# Process/Algorithmic - What are the processes or algorithms involved in how humans perform the task? This is the most common level of analysis as it focuses on the mental representations, manipulations and computational faculties involved in actual human processing. Algorithmic descriptions of human capabilities and limitations, such as working memory size, belong at this level of analysis.&lt;br /&gt;
# Implementation - How are the processes and algorithms realized in actual biological computation? Dopamine theories of reward learning, for example, belong at this level of analysis.&lt;br /&gt;
The information processing approach is often contrasted with the distributed cognition approach. Its advantage is that it finds general mechanisms that are valid across many different contexts and situations. Its disadvantage is that it can have difficulty explaining the rich interactions between people and their environment.&lt;br /&gt;
(Adam)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Models of perception===&lt;br /&gt;
&lt;br /&gt;
====Gibsonianism====&lt;br /&gt;
(relevant stuff about Gibson&#039;s theory)&lt;br /&gt;
We will build on top of this theory/model by ... .&lt;br /&gt;
&lt;br /&gt;
(Note: Gibsonianism is really a theory of perception and should be moved to that section if someone knows how.)&lt;br /&gt;
&lt;br /&gt;
Gibsonianism, named after James J. Gibson and more commonly referred to as ecological psychology, is an epistemological direct realist theory of perception and action.  In contrast to information processing and cognitivist approaches which generally assume that perception is a constructive process operating on impoverished sense-data inputs (e.g. photoreceptor activity) to generate representations of the world with added structure and meaning (e.g. a mental or neural &amp;quot;picture&amp;quot; of a chair), ecological psychology treats perception as a relation that places animals in direct and lawful epistemic contact with behaviorally-relevant events and features of their environmental niches (Warren, 2005).  These lawfully-specified and behaviorally-relevant features of the environment constitute the possibilities for action that the environment &amp;quot;affords&amp;quot; the animal (Gibson, 1986).&lt;br /&gt;
&lt;br /&gt;
Gibson&#039;s notion of affordance has many implications for our enterprise; however, it is worth noting that the original definition of affordance emphasizes possibilities for action and not their relative likelihoods.  For example, for most humans, laptop computer screens afford puncturing with Swiss Army knives; however, it is unlikely that a user will attempt to retrieve an electronic coupon by carving it out of their monitor.  This example illustrates that interfaces often afford a class of actions that are undesirable from the perspective of both the designer and the user.&lt;br /&gt;
&lt;br /&gt;
(Jon)&lt;br /&gt;
&lt;br /&gt;
===Design guidelines===&lt;br /&gt;
&lt;br /&gt;
I&#039;m a little behind today, but I will have this section drafted by tonight.  In the meantime, here is my outline. &#039;&#039;&#039;OWNER&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 17:20, 3 February 2009 (UTC)&lt;br /&gt;
:I. Introduction, applications of guidelines&lt;br /&gt;
::A. Application to automated usability evaluations (AUE)&lt;br /&gt;
:II. Popular and seminal examples&lt;br /&gt;
::A. Shneiderman&lt;br /&gt;
::B. Google&lt;br /&gt;
::C. Maeda&lt;br /&gt;
::D. Existing international standards&lt;br /&gt;
:III. Elements of guideline sets, relationship to design patterns&lt;br /&gt;
:IV. Goals for potentially developing a guideline set within the scope of this proposal&lt;br /&gt;
&lt;br /&gt;
===User interface evaluations===&lt;br /&gt;
====History/interaction capture====&lt;br /&gt;
====Cost-based analyses====&lt;br /&gt;
Heidi Lam, &amp;quot;A Framework of Interaction Costs in Information Visualization,&amp;quot; IEEE Transactions on Visualization and Computer Graphics, vol. 14, no. 6, pp. 1149-1156, Nov./Dec. 2008, doi:10.1109/TVCG.2008.109&lt;br /&gt;
&lt;br /&gt;
I&#039;m pretty sure that this paper referred to some earlier paper with a very similar title.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Multimodal HCI ===&lt;br /&gt;
&lt;br /&gt;
Continued advancements in several signal processing techniques have given rise to a multitude of mechanisms that allow for rich, multimodal, human-computer interaction.  These include systems for head-tracking, eye- or pupil-tracking, fingertip tracking, recognition of speech, and detection of electrical impulses in the brain, among others [http://vr.kjist.ac.kr/~dhong/website/paperworks/hci2002coursePapers/April24/Sharma98.pdf Sharma-1998-TMH].  With ever-increasing computing power, integrating these systems in real-time applications has become a plausible endeavor. &lt;br /&gt;
&lt;br /&gt;
==== Head-tracking ====&lt;br /&gt;
:In virtual, stereoscopic environments, head-tracking has been exploited with great success to create an immersive effect, allowing a user to move freely and naturally while dynamically updating the user’s viewpoint in a visual environment.  Head-tracking has been employed in non-immersive settings as well, though careful consideration must be paid to account for unintended movements by the user, which may result in distracting visual effects.&lt;br /&gt;
&lt;br /&gt;
==== Pupil-tracking ====&lt;br /&gt;
:Pupil-tracking has been studied a great deal in the field of Cognitive Science ... (need some examples here from CogSci).  In the HCI community, pupil-tracking has traditionally been used for post-hoc analysis of interface designs, and is particularly prevalent in web interface design.  An alternative use of pupil-tracking is to employ it in real time as an actual mode of interaction.  This has been examined in relatively few cases (citations), where typically the eyes are used to control a cursor onscreen.  Like head-tracking, implementations of pupil-tracking must account for unintended eye movements, which are incredibly frequent.&lt;br /&gt;
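:One common way to filter out unintended eye movements is a simple velocity-threshold (I-VT) classifier that separates fixations from saccades.  The sketch below assumes gaze samples already converted to degrees of visual angle, and the 30 deg/s threshold is illustrative, not a recommendation:&lt;br /&gt;

```python
import math

def classify_fixations(samples, velocity_threshold=30.0):
    """Label each gaze sample 'fixation' or 'saccade' by point-to-point speed.

    samples: list of (t, x, y) tuples, with t in seconds and x, y in degrees
    of visual angle; velocity_threshold is in degrees per second.
    """
    if not samples:
        return []
    labels = ["fixation"]  # the first sample has no preceding velocity
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        # Angular speed between consecutive samples.
        speed = math.hypot(x1 - x0, y1 - y0) / (t1 - t0)
        labels.append("saccade" if speed > velocity_threshold else "fixation")
    return labels
```

:A real-time cursor-control interface would then act only on samples labeled as fixations, ignoring the frequent saccadic jumps.&lt;br /&gt;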
&lt;br /&gt;
==== Fingertip-tracking, Gestural recognition ====&lt;br /&gt;
:Fingertip tracking and gestural recognition of the hands are the subjects of much research in the HCI community, particularly in the Virtual Environment and Augmented Reality disciplines.  Less implicit than head or pupil-tracking, gestural recognition of the hands may draw upon the wealth of precedents readily observed in natural human interactions.  As sensing technologies become less obtrusive and more robust, this method of interaction has the potential to become quite effective.&lt;br /&gt;
&lt;br /&gt;
==== Speech Recognition ====&lt;br /&gt;
:Speech recognition is becoming much better, though effective implementation of its desired effects is non-trivial in many applications.  (More on this later).&lt;br /&gt;
&lt;br /&gt;
==== Brain Activity Detection ====&lt;br /&gt;
:The use of electroencephalograms (EEGs) in HCI is quite recent, and with limited degrees of freedom, few robust interfaces have been designed around it.  Nevertheless, the possibility of using any brain function at all to interface with a machine is cause for excitement, and further advances in non-invasive techniques for accessing brain function may allow telepathic HCI to become a reality.&lt;br /&gt;
&lt;br /&gt;
In sum, the synchronized usage of these modes of interaction makes it possible to architect an HCI system capable of sensing and interpreting many of the mechanisms humans use to transmit information to one another.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
Recent advances have been made in the pragmatic use of EEGs in HCI research, in particular by [http://portal.acm.org/citation.cfm?id=1357054.1357187&amp;amp;coll=Portal&amp;amp;dl=GUIDE&amp;amp;CFID=21136843&amp;amp;CFTOKEN=23841774 Grimes et al.]&lt;br /&gt;
&lt;br /&gt;
==Significance==&lt;br /&gt;
&lt;br /&gt;
==Preliminary results==&lt;br /&gt;
&lt;br /&gt;
==Research plan==&lt;br /&gt;
&lt;br /&gt;
We can speculate here about a longer-term research plan, but it may not be necessary to actually flesh out this part of the &amp;quot;proposal&amp;quot;.&lt;/div&gt;</summary>
		<author><name>Adam Darlow</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal&amp;diff=1813</id>
		<title>CS295J/Research proposal</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Research_proposal&amp;diff=1813"/>
		<updated>2009-02-05T00:13:18Z</updated>

		<summary type="html">&lt;p&gt;Adam Darlow: /* Information Processing Approach to Cognition */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Project summary==&lt;br /&gt;
&lt;br /&gt;
We propose to integrate theories and models of cognition, models of perception, rules of design, and concepts from the discipline of human-computer interaction to develop a predictive model of user performance in interacting with computer software for visual and analytical work.  Our proposed model comprises a set of computational elements representing components of human cognition, memory, or perception.  The collective abilities and limitations of these elements can be used to provide feedback on the likely efficacy of user interaction techniques.&lt;br /&gt;
&lt;br /&gt;
The choice of human computational elements will be guided by several models or theories of cognition and perception, including Gestalt, Distributed, Gibson, ???(where pathway, when pathway)???, ???working-memory???, ..., and ???.  The list of elements will be extensible.  The framework coupling them will allow for experimental predictions of utility of user interfaces that can be verified against human performance.&lt;br /&gt;
&lt;br /&gt;
Coupling the system with users will involve a data capture mechanism for collecting the communications between a user interface and a user.  These will be primarily event-based, and will include a new low-cost camera-based eye-tracking system.&lt;br /&gt;
&lt;br /&gt;
During early development, existing interfaces will be evaluated manually to characterize their &lt;br /&gt;
&lt;br /&gt;
(we need some way to specify interaction techniques...)&lt;br /&gt;
&lt;br /&gt;
==Specific Contributions==&lt;br /&gt;
# A model of human cognitive and perceptual abilities when using computers&lt;br /&gt;
## Demonstration of the model in predicting human performance with some interfaces.&lt;br /&gt;
## ???&lt;br /&gt;
# Something about design rules collected and merged&lt;br /&gt;
## Something comparing these collected rules to a baseline (establishing their value)&lt;br /&gt;
## ???&lt;br /&gt;
# A low-overhead mechanism for capturing event-based interactions between a user and a computer, including web-cam based eye tracking.  (should we buy or find out about borrowing use of pupil tracker?)  &#039;&#039;Should we include other methods of interaction here?  Audio recognition seems to be the lowest cost.  It would seem that a system that took into account head-tracking, audio, and fingertip or some other high-DOF input would provide a very strong foundation for a multi-modal HCI system.  It may be more interesting to create a software toolkit that allows for synchronized usage of those inputs than a low-cost hardware setup for pupil-tracking.  I agree pupil-tracking is useful, but developing something in-house may not be the strongest contribution we can make with our time.&#039;&#039; (Trevor)&lt;br /&gt;
## Accuracy study of eye tracking (2 cameras?  double as an input device?)&lt;br /&gt;
## ???&lt;br /&gt;
# A set of critiques of existing software used for visual and analytical work based on design rules&lt;br /&gt;
## ???&lt;br /&gt;
&lt;br /&gt;
==Specific Aims==&lt;br /&gt;
&lt;br /&gt;
# build X&lt;br /&gt;
# build Y&lt;br /&gt;
# run experiment Z&lt;br /&gt;
# compare X with existing approach Q&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
&lt;br /&gt;
===Models of cognition===&lt;br /&gt;
There are several models of cognition, ranging from fundamental aspects of neurological processing to extremely high-level psychological analysis.  Three main theories have come to be recognized as the most helpful in conceptualizing the actual process of HCI.  These models all agree that one cannot accurately analyze HCI by viewing the user without context, but the extent and nature of this context vary greatly.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[http://en.wikipedia.org/wiki/Activity_Theory Activity Theory]&#039;&#039;&#039;, developed in the early 20th century by Russian psychologists S.L. Rubinstein and A.N. Leontiev, posits the existence of four discrete aspects of human-computer interaction.  The &amp;quot;Subject&amp;quot; is the human interacting with the item, who possesses an &amp;quot;Object&amp;quot; (e.g. a goal) which they hope to accomplish by using a tool.  The Subject conceptualizes the realization of the Object via an &amp;quot;Action&amp;quot;, which may be as simple or complex as is necessary.  The Action is made up of one or more &amp;quot;Operations&amp;quot;, the most fundamental level of interaction including typing, clicking, etc.&lt;br /&gt;
&lt;br /&gt;
A key concept in Activity Theory is that of the artifact, which mediates all interaction.  The computer itself need not be the only artifact in HCI - others include all sorts of signs, algorithmic methods, instruments, etc.&lt;br /&gt;
&lt;br /&gt;
A longer synopsis of Activity Theory may be found at [http://mcs.open.ac.uk/yr258/act_theory/ this website].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The [http://en.wikipedia.org/wiki/Situated_cognition Situated Action Model]&#039;&#039;&#039; focuses on emergent behavior, emphasizing the subjective aspect of human-computer interaction and the therefore-necessary allowance for a wide variety of users.  This model proposes the least amount of contextual interaction, and seems to maintain that the interactive experience is determined entirely by the user&#039;s ability to use the system in question.  While limiting, this concept of usability can be very informative when designing for less tech-savvy users.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[http://en.wikipedia.org/wiki/Distributed_cognition Distributed Cognition]&#039;&#039;&#039; proposes that the computer (or, as in Activity Theory, any other artifact) can be used and ought to be thought of as an extension of the mental processing of the human.  This is not to say that the two are of equal or even comparable cognitive abilities, but that each has unique strengths and that recognition of and planning around these relative advantages can lead to increased efficiency and effectiveness.  The rotation of blocks in Tetris serves as a perfect example of this sort of cognitive symbiosis.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Workflow Context&#039;&#039;&#039; (Andrew Bragdon - OWNER)&lt;br /&gt;
&lt;br /&gt;
There are, at least, two levels at which users work ([http://portal.acm.org/citation.cfm?id=985692.985707&amp;amp;coll=portal&amp;amp;dl=ACM&amp;amp;CFID=20781736&amp;amp;CFTOKEN=83176621 Gonzales, et al., 2004]).  Users accomplish individual low-level tasks which are part of larger &#039;&#039;working spheres&#039;&#039;; for example, an office worker might send several emails, create several Post-It (TM) note reminders, and then edit a Word document, each of these smaller tasks being part of a single larger working sphere of &amp;quot;adding a new section to the website.&amp;quot;  Thus, it is important to understand this larger workflow context - which often involves extensive levels of multi-tasking, as well as switching between a variety of computing devices and traditional tools, such as notebooks.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Gibsonianism====&lt;br /&gt;
(relevant stuff about Gibson&#039;s theory)&lt;br /&gt;
We will build on top of this theory/model by ... .&lt;br /&gt;
&lt;br /&gt;
(Note: Gibsonianism is really a theory of perception and should be moved to that section if someone knows how.)&lt;br /&gt;
&lt;br /&gt;
Gibsonianism, named after James J. Gibson and more commonly referred to as ecological psychology, is an epistemological direct realist theory of perception and action.  In contrast to information processing and cognitivist approaches which generally assume that perception is a constructive process operating on impoverished sense-data inputs (e.g. photoreceptor activity) to generate representations of the world with added structure and meaning (e.g. a mental or neural &amp;quot;picture&amp;quot; of a chair), ecological psychology treats perception as a relation that places animals in direct and lawful epistemic contact with behaviorally-relevant events and features of their environmental niches (Warren, 2005).  These lawfully-specified and behaviorally-relevant features of the environment constitute the possibilities for action that the environment &amp;quot;affords&amp;quot; the animal (Gibson, 1986).&lt;br /&gt;
&lt;br /&gt;
Gibson&#039;s notion of affordance has many implications for our enterprise; however, it is worth noting that the original definition of affordance emphasizes possibilities for action and not their relative likelihoods.  For example, for most humans, laptop computer screens afford puncturing with Swiss Army knives; however, it is unlikely that a user will attempt to retrieve an electronic coupon by carving it out of their monitor.  This example illustrates that interfaces often afford a class of actions that are undesirable from the perspective of both the designer and the user.&lt;br /&gt;
&lt;br /&gt;
(Jon)&lt;br /&gt;
&lt;br /&gt;
====Distributed cognition====&lt;br /&gt;
&lt;br /&gt;
Distributed cognition is a theory in which thinking takes place both inside and outside of the brain. Humans have a great ability to use tools and to incorporate their environments into their sphere of thinking and information processing. Clark puts it nicely in [[http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature#Cognition Clark-1994-TEM]].&lt;br /&gt;
&lt;br /&gt;
Therefore, optimal configurations when considering HCI design will treat the brain, person, interface, and computer as a holistic cognitive system.&lt;br /&gt;
&lt;br /&gt;
In practical terms, the issue for our proposal is how best to maximize utility by distributing cognitive tasks to different components of the whole system. Simply put: which tasks can we off-load to the computer to do for us, faster and more accurately? OFF-LOADING (REF).&lt;br /&gt;
&lt;br /&gt;
Typically, those tasks most eligible to be off-loaded are the ones which we perform poorly on. Conveniently, the tasks which computers perform poorly on we excel at. Here are a few examples:&lt;br /&gt;
&lt;br /&gt;
*Computers&#039; areas of expertise: number crunching, memory, logical reasoning, precision&lt;br /&gt;
*Humans&#039; areas of expertise: associative thought, real-world knowledge, social behavior, alogical reasoning, imprecision&lt;br /&gt;
&lt;br /&gt;
Using this division of cognitive labor allows us to optimize task work flows. Ignoring it puts strain and bottlenecks at either the computer, the human, or the interface. The field of HCI is full of countless examples of failures which can be attributed to not recognizing which tasks should be handled by which sub-system.&lt;br /&gt;
&lt;br /&gt;
As a heuristic to divide thinking, one might turn to the dual-process theory literature (REF TO EVANS). What is most often called System 1 is what humans are good at, while System 2 tasks are what computers do well.&lt;br /&gt;
&lt;br /&gt;
([http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon])&lt;br /&gt;
&lt;br /&gt;
====Information Processing Approach to Cognition====&lt;br /&gt;
The dominant approach in Cognitive Science is called information processing. It sees cognition as a system that takes in information from the environment, forms mental representations and manipulates those representations in order to create the information needed to achieve its goals. This approach includes three levels of analysis originally proposed by Marr (will cite):&lt;br /&gt;
# Computational - What are the goals of a process or representation? What are the inputs and desired outputs required of a system which performs a task? Models at this level of analysis are often considered normative models, because any agent wanting to perform the task should conform to them. Rational agent models of decision making, for example, belong at this level of analysis.&lt;br /&gt;
# Process/Algorithmic - What are the processes or algorithms involved in how humans perform the task? This is the most common level of analysis as it focuses on the mental representations, manipulations and computational faculties involved in actual human processing. Algorithmic descriptions of human capabilities and limitations, such as working memory size, belong at this level of analysis.&lt;br /&gt;
# Implementation - How are the processes and algorithms realized in actual biological computation? Dopamine theories of reward learning, for example, belong at this level of analysis.&lt;br /&gt;
The information processing approach is often contrasted with the distributed cognition approach. Its advantage is that it finds general mechanisms that are valid across many different contexts and situations. Its disadvantage is that it can have difficulty explaining the rich interactions between people and their environment.&lt;br /&gt;
(Adam)&lt;br /&gt;
&lt;br /&gt;
====???====&lt;br /&gt;
&lt;br /&gt;
===Models of perception===&lt;br /&gt;
&lt;br /&gt;
===Design guidelines===&lt;br /&gt;
&lt;br /&gt;
I&#039;m a little behind today, but I will have this section drafted by tonight.  In the meantime, here is my outline. &#039;&#039;&#039;OWNER&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 17:20, 3 February 2009 (UTC)&lt;br /&gt;
:I. Introduction, applications of guidelines&lt;br /&gt;
::A. Application to automated usability evaluations (AUE)&lt;br /&gt;
:II. Popular and seminal examples&lt;br /&gt;
::A. Shneiderman&lt;br /&gt;
::B. Google&lt;br /&gt;
::C. Maeda&lt;br /&gt;
::D. Existing international standards&lt;br /&gt;
:III. Elements of guideline sets, relationship to design patterns&lt;br /&gt;
:IV. Goals for potentially developing a guideline set within the scope of this proposal&lt;br /&gt;
&lt;br /&gt;
===User interface evaluations===&lt;br /&gt;
====History/interaction capture====&lt;br /&gt;
====Cost-based analyses====&lt;br /&gt;
Heidi Lam, &amp;quot;A Framework of Interaction Costs in Information Visualization,&amp;quot; IEEE Transactions on Visualization and Computer Graphics, vol. 14, no. 6, pp. 1149-1156, Nov./Dec. 2008, doi:10.1109/TVCG.2008.109&lt;br /&gt;
&lt;br /&gt;
I&#039;m pretty sure that this paper referred to some earlier paper with a very similar title.&lt;br /&gt;
&lt;br /&gt;
====Fitts&#039; law and the steering law====&lt;br /&gt;
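Both laws can be stated concretely. The following Python sketch is purely illustrative: the constants a and b stand in for device-specific regression parameters and are placeholders, not fitted values.&lt;br /&gt;

```python
import math

# Fitts' law (Shannon formulation): the time to acquire a target grows
# with the index of difficulty ID = log2(D/W + 1), where D is the
# distance to the target and W is its width. The steering law extends
# this to trajectory tasks: for a straight tunnel of length L and
# constant width W, the index of difficulty is simply L/W.
# The constants a and b below are illustrative placeholders.

def fitts_time(distance, width, a=0.1, b=0.15):
    """Predicted movement time (s) to acquire a pointing target."""
    return a + b * math.log2(distance / width + 1)

def steering_time(length, width, a=0.1, b=0.15):
    """Predicted time (s) to steer through a constant-width tunnel."""
    return a + b * (length / width)

# A small, distant target is predicted to take longer than a large,
# nearby one:
print(fitts_time(distance=800, width=10))  # harder: ID is about 6.34 bits
print(fitts_time(distance=100, width=50))  # easier: ID is about 1.58 bits
```

Accot and Zhai (cited under HCI below) derive the constant-width L/W form as a special case of an integral over the path&#039;s varying width.&lt;br /&gt;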
&lt;br /&gt;
&lt;br /&gt;
=== Multimodal HCI ===&lt;br /&gt;
&lt;br /&gt;
Continued advancements in several signal processing techniques have given rise to a multitude of mechanisms that allow for rich, multimodal, human-computer interaction.  These include systems for head-tracking, eye- or pupil-tracking, fingertip tracking, recognition of speech, and detection of electrical impulses in the brain, among others [http://vr.kjist.ac.kr/~dhong/website/paperworks/hci2002coursePapers/April24/Sharma98.pdf Sharma-1998-TMH].  With ever-increasing computing power, integrating these systems in real-time applications has become a plausible endeavor. &lt;br /&gt;
&lt;br /&gt;
==== Head-tracking ====&lt;br /&gt;
:In virtual, stereoscopic environments, head-tracking has been exploited with great success to create an immersive effect, allowing a user to move freely and naturally while dynamically updating the user’s viewpoint in a visual environment.  Head-tracking has been employed in non-immersive settings as well, though careful consideration must be paid to account for unintended movements by the user, which may result in distracting visual effects.&lt;br /&gt;
&lt;br /&gt;
==== Pupil-tracking ====&lt;br /&gt;
:Pupil-tracking has been studied a great deal in the field of Cognitive Science ... (need some examples here from CogSci).  In the HCI community, pupil-tracking has traditionally been used for post-hoc analysis of interface designs, and is particularly prevalent in web interface design.  An alternative use of pupil-tracking is to employ it in real time as an actual mode of interaction.  This has been examined in relatively few cases (citations), where typically the eyes are used to control a cursor onscreen.  Like head-tracking, implementations of pupil-tracking must account for unintended eye movements, which are incredibly frequent.&lt;br /&gt;
&lt;br /&gt;
==== Fingertip-tracking, Gestural recognition ====&lt;br /&gt;
:Fingertip tracking and gestural recognition of the hands are the subjects of much research in the HCI community, particularly in the Virtual Environment and Augmented Reality disciplines.  Less implicit than head or pupil-tracking, gestural recognition of the hands may draw upon the wealth of precedents readily observed in natural human interactions.  As sensing technologies become less obtrusive and more robust, this method of interaction has the potential to become quite effective.&lt;br /&gt;
&lt;br /&gt;
==== Speech Recognition ====&lt;br /&gt;
:Speech recognition has improved considerably, though effectively implementing its desired effects is non-trivial in many applications.  (More on this later).&lt;br /&gt;
&lt;br /&gt;
==== Brain Activity Detection ====&lt;br /&gt;
:The use of EEG in HCI is quite recent, and with limited degrees of freedom, few robust interfaces have been designed around it.  Nevertheless, the possibility of using any brain function at all to interface with a machine is cause for excitement, and further advances in non-invasive techniques for accessing brain function may allow telepathic HCI to become a reality.&lt;br /&gt;
&lt;br /&gt;
In sum, the synchronized use of these modes of interaction makes it possible to architect an HCI system capable of sensing and interpreting many of the mechanisms humans use to transmit information to one another.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
==Significance==&lt;br /&gt;
&lt;br /&gt;
==Preliminary results==&lt;br /&gt;
&lt;br /&gt;
==Research plan==&lt;br /&gt;
&lt;br /&gt;
We can speculate here about a longer-term research plan, but it may not be necessary to actually flesh out this part of the &amp;quot;proposal&amp;quot;&lt;/div&gt;</summary>
		<author><name>Adam Darlow</name></author>
	</entry>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature&amp;diff=1789</id>
		<title>CS295J/Literature</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Literature&amp;diff=1789"/>
		<updated>2009-02-03T20:46:43Z</updated>

		<summary type="html">&lt;p&gt;Adam Darlow: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Perception==&lt;br /&gt;
&lt;br /&gt;
* Colin Ware: Information Visualization: Perception for Design&lt;br /&gt;
:Insight into some of the theory of perception as it pertains to building visual interfaces (David)&lt;br /&gt;
&lt;br /&gt;
* Some unidentified paper(s)/book(s) about Gestalt theories of perception and cognition [http://en.wikipedia.org/wiki/Gestalt_psychology wikipedia page]&lt;br /&gt;
:These theories, from the 1940s, inform visual design and may provide an analogy for integrating theory and practice.  They describe some characteristics of perception that have been used as evaluative rules in UI design. (David)&lt;br /&gt;
&lt;br /&gt;
* [http://vrlab.epfl.ch/~pglardon/VR05/papers/chi2004.pdf Feeling Bumps and Holes without a Haptic Interface: the Perception of Pseudo-Haptic Textures] Lecuyer-2004-FBH&lt;br /&gt;
: A cool technique on &amp;quot;hacking&amp;quot; human perception by modifying the control/display ratio of visible elements to simulate haptic feedback for the user. Strong analysis of which parts of haptic feedback are useful (e.g., vertical elements can be discarded). Pseudo-haptic feedback is implemented by combining the use of visible feedback with the changing sensitivity of a passive input device (e.g., a mouse). [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=267474 Research issues in perception and user interfaces] Encarnacao-1994-RIP&lt;br /&gt;
: &amp;quot;The authors focus at on three things: presentation of information to best match human cognitive and perceptual capabilities, interactive tools and systems to facilitate creation and navigation of visualizations, and software system features to improve visualization tools.&amp;quot;  First and third points sound relevant. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=vnax4nN4Ws4C&amp;amp;oi=fnd&amp;amp;pg=PA703&amp;amp;dq=slanty+design&amp;amp;ots=P7G259hzJa&amp;amp;sig=vbYZmYkquwuA_ollOI6EgciNJjU The Uniqueness of Individual Perception] Whitehouse-1999-ID&lt;br /&gt;
: Focuses on the commonalities of perception.  Rough overview of sensory mechanisms, and strong anecdotal support of not adapting completely to the user, but rather requiring the user to adapt as well.  Identifies some common perceptual problems with particular groups of EUs (e.g., blind people). [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://www.psychonomic.org/search/view.cgi?id=4180 Guided search 2. 0. A revised model of visual search] Wolfe-1994-GS2&lt;br /&gt;
: A theory of visual search that builds on the distinction between visual targets that you need to search for in a field of distractors and those that &amp;quot;pop out&amp;quot; at you. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;font color=green&amp;gt;&amp;lt;b&amp;gt;[http://biologie.kappa.ro/Literature/Misc_cogsci/articole/dvp/scholl00.pdf Perceptual causality and animacy] [[Scholl-2000-PCA]]&lt;br /&gt;
: Discusses some of the automatic interpretation in our perception, focusing on inferring causal relations and animacy. &amp;lt;/b&amp;gt;&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.umd.edu/class/fall2002/cmsc434-0201/p79-gaver.pdf Technology Affordances] Gaver-1991-TAF&lt;br /&gt;
: Affordances are actions that are appropriate for an object and that come to mind when perceiving the object. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/ft_gateway.cfm?id=301168&amp;amp;type=pdf&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=19761061&amp;amp;CFTOKEN=10084975 Affordance, conventions, and design] Norman-1999-ACD&lt;br /&gt;
: How the original concept of affordances differs from how it has been used in HCI. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://books.google.com/books?hl=en&amp;amp;lr=&amp;amp;id=DrhCCWmJpWUC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecological+approach+visual+perception&amp;amp;ots=TeE80z49Fr&amp;amp;sig=c0jHz0ucQUTFNvUM5ObQouQq_Oc The Ecological Approach to Visual Perception] Gibson-1986-EAV&lt;br /&gt;
: Outlines direct perception and the original theory of affordances.  (Jon) 14:07, 3 February 2009.&lt;br /&gt;
&lt;br /&gt;
==Cognition==&lt;br /&gt;
&lt;br /&gt;
* Colin Ware: Visual Thinking: For Design&lt;br /&gt;
:Insight into some of the theory of cognition as it pertains to building visual interfaces (David)&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Seven_plus_or_minus_two Wikipedia&#039;s Seven Plus or Minus Two page]&lt;br /&gt;
:A clear description of one part of human thinking; will probably provide pointers to other things to read (David)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=108844.108865 A cognitive model for the perception and understanding of graphs] Lohse-1991-CMP&lt;br /&gt;
: Describes a computer program that predicts response time to a query from assumptions from eye-tracking, short-term memory capacity, and the amount of information that can be absorbed from the query in each &amp;quot;glance.&amp;quot;  Attempts to lay the foundation for explaining several steps of human cognition, including input, memory, and processing. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/content~content=a784395566~db=all~order=page Cognitive load during problem solving: Effects on learning] John Sweller&lt;br /&gt;
: Older article but referenced in a lot of newer ones; looks at how conventional problem-solving is ineffective as a learning device. (lisajane)&lt;br /&gt;
&lt;br /&gt;
* [http://web.ebscohost.com/ehost/pdf?vid=1&amp;amp;hid=4&amp;amp;sid=13a77fd7-bead-41ce-bfb1-38addb0dfa53%40sessionmgr7 Dimensional overlap: Cognitive basis for stimulus–response compatibility—A model and taxonomy] Kornblum-1990-DOC&lt;br /&gt;
: People are more effective at a task when the stimulus and response representations are compatible and they don&#039;t require &amp;quot;translation&amp;quot;. (Adam) 16:22, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Iverson-TNR-ImPact.pdf Tracking Neuropsychological Recovery Following Concussion in Sport] &lt;br /&gt;
: This paper discusses the neurological basis for the ImPact test given to athletes after they&#039;ve suffered a concussion.  It provides testing and quantitative measures for verbal memory, visual memory, and reaction times.  These simple measures of cognition may be useful to incorporate in an HCI study.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
* A Framework of Interaction Costs in Information Visualization&lt;br /&gt;
:ABSTRACT: Interaction cost is an important but poorly understood factor in visualization design. We propose a framework of interaction costs inspired by Norman’s Seven Stages of Action to facilitate study. From 484 papers, we collected 61 interaction-related usability problems reported in 32 user studies and placed them into our framework of seven costs: (1) Decision costs to form goals; (2) System-power costs to form system operations; (3) Multiple input mode costs to form physical sequences; (4) Physical-motion costs to execute sequences; (5) Visual-cluttering costs to perceive state; (6) View-change costs to interpret perception; (7) State-change costs to evaluate interpretation. We also suggested ways to narrow the gulfs of execution (2–4) and evaluation (5–7) based on collected reports. Our framework suggests a need to consider decision costs (1) as the gulf of goal formation.&lt;br /&gt;
: Includes some ideas for quantitatively evaluating information visualization interfaces (David)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*Distributed Cognition as a Theoretical Framework for HCI (1994) Christine A. Halverson [http://hci.ucsd.edu/cogsci/faculty_pubs/9403.ps]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;font color=&amp;quot;green&amp;quot;&amp;gt;&lt;br /&gt;
*&amp;lt;b&amp;gt; [http://consc.net/papers/extended.html The Extended Mind] Clark-1998-TEM&lt;br /&gt;
: Cognition can be thought to be distributable across mediums (outside of the skull). How might we off-load &amp;quot;cognitive&amp;quot; processes to computer systems? ([http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon] - Owner)&lt;br /&gt;
: I think that Ware gets into this in some of his writing about information visualization (or in his second book, thinking with visualization).  We can build in external &amp;quot;caches&amp;quot; or other constructs to be part of our cognitive model.  It seems like most of an analytical user interface is part of the external cognitive process. (David)&lt;br /&gt;
&amp;lt;/b&amp;gt;&amp;lt;/font&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* [http://courses.csail.mit.edu/6.803/pdf/dualtask.pdf Sources of Flexibility in Human Cognition: Dual-Task Studies of Space and Language] HermerVazquez-1999-SFC&lt;br /&gt;
: Our use of language serves as a higher-order cognitive system which can be utilized as &amp;quot;scaffolding&amp;quot; in human thought, supporting goal-driven tasks. ([http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon])&lt;br /&gt;
&lt;br /&gt;
* [http://www.sjdm.org/mail-archive/jdm-society/attachments/20060401/963b50da/attachment-0003.pdf In two minds: dual-process accounts of reasoning] Evans-2003-ITM&lt;br /&gt;
: It is hypothesized that there are two distinct systems of reasoning in the mind. System 1 is innate and fast, system 2 is controlled and slow. Knowledge of this might help us determine which tasks are candidates for one system or another. ([http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon])&lt;br /&gt;
&lt;br /&gt;
* [http://www.sciencedirect.com/science?_ob=ArticleURL&amp;amp;_udi=B6WJB-467J83F-3&amp;amp;_user=489286&amp;amp;_rdoc=1&amp;amp;_fmt=&amp;amp;_orig=search&amp;amp;_sort=d&amp;amp;view=c&amp;amp;_acct=C000022678&amp;amp;_version=1&amp;amp;_urlVersion=0&amp;amp;_userid=489286&amp;amp;md5=316065013955ca22e9dde19df6a0f9b8 Priming against your will: How accessible alternatives affect goal pursuit] Shah-2002-PAY&lt;br /&gt;
: The authors demonstrate how priming the means to achieving a goal also primes the goal, but inhibits alternative means to achieving the same goal. It means that making the means of achieving a goal salient in an interface will make it more likely that people pursue that goal, and less likely that they will think of other means to pursue it. (Adam)&lt;br /&gt;
&lt;br /&gt;
==HCI==&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=1357054.1357125&amp;amp;coll=portal&amp;amp;dl=ACM&amp;amp;type=series&amp;amp;idx=SERIES260&amp;amp;part=series&amp;amp;WantType=Proceedings&amp;amp;title=CHI&amp;amp;CFID=20781736&amp;amp;CFTOKEN=83176621 A diary study of mobile information needs]&lt;br /&gt;
&lt;br /&gt;
A detailed study into how people use mobile devices.  &#039;&#039;&#039;(Andrew Bragdon - OWNER for Assignment 2)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* John M. Carroll: HCI Models, Theories, and Frameworks: Toward a Multidisciplinary Science&lt;br /&gt;
&lt;br /&gt;
:A gargantuan book with chapters by many folks describing some of the models and theories from HCI that may relate back to cognition; may need to create individual entries  (David)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=1461832&amp;amp;dl=&amp;amp;coll= Project Ernestine: validating a GOMS analysis for predicting and explaining real-world task performance]&lt;br /&gt;
&lt;br /&gt;
: A study in which the [http://en.wikipedia.org/wiki/GOMS#cite_note-CHI92-1 GOMS] method is used to correctly predict the performance of call center operators using a new workstation. Might be interesting because of the methodology used to decompose the task into basic cognitive and perceptual actions, and then measuring these actions to evaluate the new interface. (Eric)&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/content~content=a784766580~db=all The Growth of Cognitive Modeling in Human-Computer Interaction Since GOMS]&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=169059.169426&amp;amp;coll=Portal&amp;amp;dl=GUIDE&amp;amp;CFID=19188713&amp;amp;CFTOKEN=13376420 The limits of expert performance using hierarchic marking menus]&lt;br /&gt;
Marking menus naturally facilitate the transition from novice to expert performance for command invocation, and have been quite influential over the years on research into menu techniques. (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* [http://doi.acm.org/10.1145/302979.303053 Manual and gaze input cascaded (MAGIC) pointing]&lt;br /&gt;
&lt;br /&gt;
This is a system which combines gaze input (coarse-grained) and mouse input (fine-grained) to quickly target items.  This is important because it &amp;quot;kind of&amp;quot; gets around Fitts&#039; law by using gaze input to &amp;quot;warp&amp;quot; the cursor to the general vicinity of what the user wants to work on.  (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;small&amp;gt;[http://doi.acm.org/10.1145/985692.985727  If not now, when?: the effects of interruption at different moments within task execution] Adamczyk-2004-INN&lt;br /&gt;
&lt;br /&gt;
Presents task models of user attention.  (Andrew Bragdon) (Adam - owner; [http://vrl.cs.brown.edu/wiki/CS295J/Class_Members%27_Pages/Gideon Gideon] - discussant) &#039;&#039;&#039;DISCUSSANT&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 22:58, 28 January 2009 (UTC)&lt;br /&gt;
&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* [http://doi.acm.org/10.1145/985692.985707 &amp;quot;Constant, constant, multi-tasking craziness&amp;quot;: managing multiple working spheres.] Gonzalez-2004-CCM&lt;br /&gt;
&lt;br /&gt;
Empirical study of how information workers spend their time.  Puts forward a theory of how users organize small individual tasks into &amp;quot;working spheres.&amp;quot;  (Andrew Bragdon - OWNER; Adam Darlow - Discussant; Steven Ellis - Discussant)&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;250-word summary and relevance statement&#039;&#039;&#039; (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
Any visualization which is used extensively by a user over a period of time will be used in the context of that user&#039;s daily workflow.  It is therefore essential to understand this larger workflow context in order to design the visualization application to fit the needs of real-world users.  This paper studies in detail the daily workflow tasks and patterns of work of analysts, managers and software developers in a medium-sized software company.  It provides strong empirical evidence that, rather than working on discrete and well-defined tasks, users in reality switch tasks on average every two to three minutes and instead work on larger, thematically connected units of work (working spheres).  In addition, the study found that users switched between these larger working spheres on average every 12 minutes.  Thus, the paper strongly indicates that many information workers are in a constant state of rapid-fire multi-tasking.  This suggests that for a visualization to be relevant to any of these information workers, it would need to fit into, and support, this workflow.  This is just a first step towards understanding how users interact with visualizations in particular, however; future work that studies how users interact with visualizations as part of their larger daily work patterns is warranted, and would be an important component of a broad theory of visualization.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* [http://www.eecs.berkeley.edu/Pubs/TechRpts/2000/CSD-00-1105.pdf The state of the art in automating usability evaluation of user interfaces] Ivory-2000-SAA&lt;br /&gt;
: Presents a new taxonomy for automating usability analysis.  The purported advantages of automated evaluation are linked to efficiency: comparing alternate designs, uncovering more errors more consistently, and predicting time/error costs across an entire design.  Breaks the taxonomy down, listing individual benefits and drawbacks of each method, and checks observations against existing guidelines (e.g. Smith and Mosier guidelines, Motif style guidelines, etc.).  Introduces several visual tools.  Looks extremely relevant as a comprehensive survey of existing techniques.  &#039;&#039;&#039;OWNER&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC) &#039;&#039;&#039;Discussant:&#039;&#039;&#039;  --- [[User:Trevor O&amp;amp;#39;Brien|Trevor O&amp;amp;#39;Brien]] 23:22, 28 January 2009 (UTC) Discussant: Steven Ellis&lt;br /&gt;
&lt;br /&gt;
* [http://www.hpl.hp.com/techreports/91/HPL-91-03.pdf User Interface Evaluation in the Real World: A Comparison of Four Techniques] Jeffries-1991-UIE&lt;br /&gt;
: Overview of the four major UI evaluation methods: heuristic evaluation, usability testing, guidelines, and cognitive walkthrough, followed by a comparison in their application to a case study.  [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://ics.colorado.edu/techpubs/pdf/91-01.pdf Cognitive Walkthroughs: A Method for Theory-Based Evaluation of User Interfaces] Polson-xxxx-CWM&lt;br /&gt;
: Presents the concept of performing a hand walkthrough of the cognitive process, based on another theory of &amp;quot;learning by exploration.&amp;quot; Strong results for a limited evaluation timeframe and little or no time for formal instruction of the interface for the user. The reviewer considers each behavior of the interface and its resultant effect on the user, attempting to identify actions that would be difficult for the &amp;quot;average&amp;quot; user. Claims that a given step will &#039;&#039;not&#039;&#039; be difficult must be supported with empirical data or theory.  The application of cognitive theory early in the design process seems useful in avoiding costly redesigns when problems are identified later. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=142834&amp;amp;coll= Finding usability problems through heuristic evaluation] Nielsen-1992-FUP&lt;br /&gt;
: Emphasis on heuristic evaluation. Shockingly, usability experts are found to be better at performing this type of evaluation. Usability problems relating to elements that are completely missing from the interface are difficult to identify with this method when evaluating unimplemented designs. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/130000/128728/p152-jacob.pdf?key1=128728&amp;amp;key2=7032992321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=19735001&amp;amp;CFTOKEN=82907542 The Use of Eye Movements in Human-Computer Interaction Techniques: What You Look at is What you Get] Jacob-1991-UEM&lt;br /&gt;
: One of the first research papers to introduce eye tracking as a viable HCI technique.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=1221372&amp;amp;isnumber=27434 Real-Time Eye Tracking for Human Computer Interfaces] Amarnag-2003-RTE&lt;br /&gt;
: Technical details about the implementation of a recent real-time eye-tracking system.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.ipgems.com/present/swui_chi_20070502.pdf Semantic Web HCI: Discussing Research Implications] Degler-2007-SWH&lt;br /&gt;
: A workshop discussion from CHI 2007 discussing the idea of a &amp;quot;semantic internet&amp;quot; and its relevance to the HCI community. Discusses things like adaptive web interfaces, mashups, dynamic interactions, etc.  (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.springerlink.com/content/u3q14156h6r648h8/fulltext.pdf Implicit Human Computer Interaction Through Context] Schmidtt-2000-IHC&lt;br /&gt;
: A highly cited paper discussing the notion of implicit HCI, including semantic grouping of interactions, and some perceptual rules.  (&#039;&#039;&#039;Trevor - OWNER&#039;&#039;&#039;; Andrew Bragdon - discussant; &#039;&#039;&#039;DISCUSSANT&#039;&#039;&#039;: [[User:E J Kalafarski|E J Kalafarski]] 22:57, 28 January 2009 (UTC))&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/content~db=all?content=10.1207/s15327051hci1903_1  Cognitive Strategies for the Visual Search of Hierarchical Computer Displays] Anthony J. Hornof&lt;br /&gt;
: This article investigates the cognitive strategies that people use to search computer displays. Several different visual layouts are examined. (lisajane)&lt;br /&gt;
&lt;br /&gt;
* [http://www.informaworld.com/smpp/content~db=all?content=10.1207/s15327051hci1904_9 Unseen and Unaware: Implications of Recent Research on Failures of Visual Awareness for Human-Computer Interface Design] D. Alexander Varakin;  Daniel T. Levin; Roger Fidler  &lt;br /&gt;
: This article reviews basic and applied research documenting failures of visual awareness and related metacognitive failures, and then discusses misplaced beliefs that could accentuate both in the context of the human-computer interface. (lisajane)&lt;br /&gt;
&lt;br /&gt;
* Shneiderman, Plaisant: Designing the User Interface&lt;br /&gt;
: My textbook for an HCI class, has many good lists of guidelines. Especially Ch.2 pp 59-102. (lisajane) &lt;br /&gt;
&lt;br /&gt;
* Robert Mack, Jakob Nielsen: Usability Inspection Methods (Ch. 1 Executive Summary)&lt;br /&gt;
: Provides an overview of the main usability inspection methods, a fair introduction to their industrial applications and their costs and benefits, as well as suggestions for further research. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://portal.acm.org/citation.cfm?id=22950 Jock Mackinlay, Automating the design of graphical presentations of relational information], ACM Transactions on Graphics (TOG),&lt;br /&gt;
5(2):110-141, 1986. (Jian)&lt;br /&gt;
: The first paper on how to automatically generate *good* graphs.&lt;br /&gt;
&lt;br /&gt;
* [http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=04376133 Jock Mackinlay, Pat Hanrahan, Chris Stolte, Show Me: Automatic Presentation for Visual Analysis] IEEE TVCG 13(6): 1137-1144, Nov-Dec, 2007 (Jian)&lt;br /&gt;
: Extends their previous work to analytic tasks.&lt;br /&gt;
&lt;br /&gt;
* [http://www.win.tue.nl/~vanwijk/vov.pdf Jarke J. van Wijk, The value of visualization], IEEE Visualization 2005. (Jian)&lt;br /&gt;
: Discusses visualization from a variety of angles (art, science, and technology) and questions and quantifies its utility.&lt;br /&gt;
&lt;br /&gt;
* [http://www.almaden.ibm.com/u/zhai/papers/steering/chi97.pdf Johnny Accot and Shumin Zhai, Beyond Fitts&#039; law: models for trajectory-based HCI tasks], CHI 97. (Jian) &lt;br /&gt;
: Extends Fitts&#039; law to trajectory-based tasks.&lt;br /&gt;
&lt;br /&gt;
*[http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=01703371 Saraiya, P.,   North, C., Lam, V., and Duca, K.A, An Insight-Based Longitudinal Study of Visual Analytics], TVCG 12(6): 1511-1522, 2006.(Jian)&lt;br /&gt;
: The first paper to quantify what insight is, by comparing several infoVis tools for bioinformatics.&lt;br /&gt;
&lt;br /&gt;
*[http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1359732 David H. Laidlaw, Michael Kirby, Cullen Jackson, J. Scott Davidson, Timothy Miller, Marco DaSilva, William Warren, and Michael Tarr. Comparing 2D vector field visualization methods: A user study]. IEEE Transactions on Visualization and Computer Graphics, 11(1):59-70, 2005. (Jian)&lt;br /&gt;
: An application-specific comparison of visualization methods; a cool paper.&lt;br /&gt;
&lt;br /&gt;
* [http://web.mit.edu/rruth/www/Papers/RosenholtzEtAlCHI2005Clutter.pdf Rosenholtz, Li, Mansfield, and Jin, Feature Congestion: A Measure of Display Clutter], CHI 2005. (Jian)&lt;br /&gt;
: Quantifies visual complexity from a statistical point of view.&lt;br /&gt;
&lt;br /&gt;
* [http://delivery.acm.org/10.1145/240000/236054/p320-john.pdf?key1=236054&amp;amp;key2=2285613321&amp;amp;coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;CFID=19490404&amp;amp;CFTOKEN=25744022 The GOMS Family of User Interface Analysis Techniques: Comparison and Contrast] (Trevor)&lt;br /&gt;
: This paper offers an analysis of four types of GOMS (Goals, Operators, Methods, and Selection rules) based interaction techniques.  GOMS is a widely used UI analysis paradigm, made popular by Card et al. in The Psychology of Human-Computer Interaction (1983). &lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Lisetti-2000-AFE.pdf Automatic Facial Expression Interpretation: Where Human-Computer Interaction, Artificial Intelligence and Cognitive Science Intersect] (Trevor)&lt;br /&gt;
: Using advanced computer vision/AI techniques, this work aims to discern and make use of users&#039; emotions in UI design.&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/weld-2003-APU.pdf Automatically Personalizing User Interfaces] (Trevor)&lt;br /&gt;
: Discusses some techniques and design decisions for constructing adaptable and customizable user interfaces.  There are some useful references in the paper on using HMMs and RMMs (Relational Markov Models) for interaction prediction.&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Gajos-2006-.pdf Exploring the Design Space for Adaptive Graphical User Interfaces] (Trevor)&lt;br /&gt;
: This paper presents comparative evaluations of three methods for implementing adaptable user interfaces.  The evaluation methodology gives rise to three key concepts that affect the performance of adaptable UIs: frequency of adaptation, accuracy of adaptation, and the impact of predictability.&lt;br /&gt;
&lt;br /&gt;
* Conceptual Modeling for User Interface Development - David Benyon, Diana Bental, and Thomas Green&lt;br /&gt;
: Proposes a new set of terminology for describing and comparing existing and future cognitive models of HCI. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://homepage.ntlworld.com/greenery/workStuff/Papers/ERMIA-Skull.PDF The Skull beneath the Skin: Entity-Relationship Models of Information Artefacts] T. R. G. Green, D. R. Benyon&lt;br /&gt;
: A prelude to the paper above; gives a good overview of the ERMIA method. (Steven)&lt;br /&gt;
&lt;br /&gt;
===Cognitive Modeling===&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Kaptelinin-1994-ATI.pdf Activity Theory: Implications for Human-Computer Interaction] (Trevor)&lt;br /&gt;
: Discusses the notion of Activity Theory as the basis for HCI research.  The most interesting part of this paper for me was the introduction which expressed the need for a &#039;&#039;Theory of HCI&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Seven_stages_of_action Norman&#039;s Seven stages of action]&lt;br /&gt;
: Presents Norman&#039;s seven stages of action, as well as his model of evaluation. (Steven)&lt;br /&gt;
&lt;br /&gt;
* [http://vis.cs.brown.edu/~trevor/Papers/Wright-2000-AHC.pdf Analyzing Human-Computer Interaction as Distributed Cognition: The Resources Model] (Trevor)&lt;br /&gt;
: Creates a compelling argument for why distributed cognition research fits in with HCI, and what types of impacts it may have on the HCI community.&lt;br /&gt;
&lt;br /&gt;
* [http://sonify.psych.gatech.edu/~walkerb/classes/hci/extrareading/nardi.pdf Studying Context, A Comparison of Activity Theory, Situated Action Models, and Distributed Cognition] Bonnie A. Nardi&lt;br /&gt;
: Defines the task of the HCI specialist as the application of psychological and anthropological principles to specific design problems.  It posits an inherent tension between the accurate study of particular contexts and the necessary, but more general, development of comparative models and results.  Gives a coherent overview of activity theory, situated action models, and distributed cognition; finds that activity theory presents the best overall framework.  There is little reason given for this ranking, however, and the description of activity theory is the most theoretical and least developed of the three.&lt;br /&gt;
: Having spent quite a bit of time studying Soviet psychology (from which came activity theory) last semester, I question the validity of the paper’s claim, as its description of activity theory bears the artifacts of the oppressive regulations which the Soviet government imposed on psychologists.  Although the theory may sound more practical, it seems fairly weak as a basis for empirical design analysis.&lt;br /&gt;
: The paper’s strongest point is the criticisms that follow each description, in which theoretical shortcomings of each perspective are discussed. (&#039;&#039;&#039;Owner:&#039;&#039;&#039; Steven, &#039;&#039;&#039;Discussant:&#039;&#039;&#039;  --- [[User:Trevor O&#039;Brien|Trevor O&#039;Brien]] 23:22, 28 January 2009 (UTC))&lt;br /&gt;
&lt;br /&gt;
* [http://www.billbuxton.com/chunking.html Chunking and Phrasing and the Design of Human-Computer Dialogues]&lt;br /&gt;
: A high-level theory of structuring human-computer dialogues. (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* Polson, P. and Lewis, C. Theory-Based Design for Easily Learned Interfaces. Human-Computer Interaction, 5, 2 (June 1990), 191-220.&lt;br /&gt;
: A cognitive model of how users find and learn commands in an unfamiliar user interface; it could potentially be adapted to form a piece of a theory of visualization. (Andrew Bragdon)&lt;br /&gt;
&lt;br /&gt;
* [http://www.syros.aegean.gr/users/tsp/conf_pub/C12/c12.pdf Activity Theory vs Cognitive Science in the Study of Human-Computer Interaction]&lt;br /&gt;
: Provides a brief history of Cognitive Science and HCI, then compares the effectiveness of the aforementioned theories in aiding design and development. (Owner - week 2 : Steven)&lt;br /&gt;
&lt;br /&gt;
==Design==&lt;br /&gt;
&lt;br /&gt;
* Wikipedia articles on [http://en.wikipedia.org/wiki/A_Pattern_Language &amp;quot;A Pattern Language&amp;quot;] and [http://en.wikipedia.org/wiki/Design_pattern &amp;quot;Design Pattern&amp;quot;]&lt;br /&gt;
: See the summary for Alexander below. (David)&lt;br /&gt;
&lt;br /&gt;
* UI design principles (feedback, etc. -- find ref)&lt;br /&gt;
&lt;br /&gt;
* Alexander: A Pattern Language: Towns, Buildings, Construction&lt;br /&gt;
: The original design-pattern source: what makes a human space work, ineffable best practices, and the claim that ~250 rules are enough to cover communities and house-sized artifacts; could it be a good metaphor for making human virtual spaces work? (David)&lt;br /&gt;
&lt;br /&gt;
* [http://useraware.iict.ch/uploads/media/TangibleBits.pdf Tangible Bits: Towards Seamless Interfaces between People, Bits, and Atoms] Ishii-1997-TBT&lt;br /&gt;
: A specific UI proposal, but it has a nice, relevant discussion of how we perceive &amp;quot;foreground&amp;quot; items and &amp;quot;background&amp;quot; items and their relationship, taking advantage of this &amp;quot;parallel&amp;quot; processing of perception.  Includes the use of visual metaphors, phicons, and a notion they invent called &amp;quot;digital shadows,&amp;quot; in which the shadow projected by an object conveys some information about its contents. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://list.cs.brown.edu/courses/csci1900/2007/documents/restricted/beale.pdf Slanty Design] Beale-2007-SD&lt;br /&gt;
: A design method that emphasizes discouraging undesirable behavior, perhaps by forcing the user to adapt to the interface, and that gives equal weight to user goals, user &amp;quot;non-goals,&amp;quot; and the wider goals of stakeholders besides the immediate user.  The important insight seems to be that these wider goals can enhance the user&#039;s experience with the larger system in the long run, if not in the immediate timeframe.  Five major design steps. [[User:E J Kalafarski|E J Kalafarski]] 16:02, 26 January 2009 (UTC)&lt;br /&gt;
&lt;br /&gt;
* [http://www.amazon.com/Designing-Interactions-Bill-Moggridge/dp/0262134748/ref=pd_bbs_sr_1?ie=UTF8&amp;amp;s=books&amp;amp;qid=1232989194&amp;amp;sr=8-1 Designing Interactions] by Bill Moggridge&lt;br /&gt;
: Really awesome book on the evolution of interactions with technology. (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://www.amazon.com/Sketching-User-Experiences-Interactive-Technologies/dp/0123740371/ref=sr_1_1?ie=UTF8&amp;amp;s=books&amp;amp;qid=1232989269&amp;amp;sr=1-1 Sketching User Experiences] by Bill Buxton&lt;br /&gt;
: Another great book on the practices of interaction design. (Trevor)&lt;br /&gt;
&lt;br /&gt;
* [http://site.ebrary.com/lib/brown/docDetail.action?docID=10173678 The Laws of Simplicity] by John Maeda&lt;br /&gt;
: An interesting work on the efficiency of minimalist design.  Quick read for those interested. (Steven)&lt;br /&gt;
: A set of design guidelines, some of which we may be able to build on in automating interface evaluation; they will certainly apply to manual evaluations. [[User:David Laidlaw|David Laidlaw]]&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.umd.edu/hcil/pubs/presentations/eyeshaveit/index.htm Ben Shneiderman, The Eyes Have It: User Interfaces for Information Visualization], University of Maryland tech report CS-TR-3665. (Jian)&lt;br /&gt;
: Presents the visual information-seeking mantra: overview first, zoom and filter, then details-on-demand.&lt;br /&gt;
&lt;br /&gt;
==Thinking, analysis, decision making==&lt;br /&gt;
&lt;br /&gt;
* Morgan D. Jones: The Thinker&#039;s Toolkit: Fourteen Powerful Techniques for Problem Solving&lt;br /&gt;
: A set of methods for solving problems that might be incorporated into tools for thinking. (David)&lt;br /&gt;
&lt;br /&gt;
* Keim, Shazeer, Littman: Proverb: The Probabilistic Cruciverbalist&lt;br /&gt;
: An automatic crossword-puzzle solver; the software framework for building this program may be a metaphor for thinking groupware with plug-in modules. (David)&lt;br /&gt;
&lt;br /&gt;
* Thomas, Cook: Illuminating the Path&lt;br /&gt;
: A research agenda for tools for intelligence analysts; not sure of its relevance. (David)&lt;br /&gt;
&lt;br /&gt;
* Richard Thaler, Cass Sunstein: Nudge - Improving Decisions About Health, Wealth, and Happiness&lt;br /&gt;
: A great, easy read for someone who isn&#039;t familiar with the psychological perspective.  Focuses mainly on public policy issues, but certain sections (on developing a better social security website, for example) relate specifically to digital design. (Steven)&lt;br /&gt;
&lt;br /&gt;
==Visualization==&lt;br /&gt;
&lt;br /&gt;
* Min Chen, David Ebert, Hans Hagen, Robert S. Laramee, Robert van Liere, Kwan-Liu Ma, William Ribarsky, Gerik Scheuermann, Deborah Silver, &amp;quot;Data, Information, and Knowledge in Visualization,&amp;quot; IEEE Computer Graphics and Applications, vol. 29, no. 1, pp. 12-19, Jan./Feb. 2009, doi:10.1109/MCG.2009.6&lt;/div&gt;</summary>
		<author><name>Adam Darlow</name></author>
	</entry>
</feed>