CS295J/Research proposal (draft 2)
Introduction
- Owners: Adam Darlow, Eric Sodomka, Trevor O'Brien
Proposal: The design, application, and evaluation of a novel, cognition-based computational framework for assessing interface design and providing automated suggestions to optimize usability.
Evaluation Methodology: Our techniques will be evaluated quantitatively through a series of user-study trials, as well as qualitatively by a team of expert interface designers.
Contributions and Significance: We expect this work to make the following contributions:
- design-space analysis and quantitative evaluation of cognition-based techniques for assessing user interfaces.
- design and quantitative evaluation of techniques for suggesting optimized interface-design changes.
- an extensible, multimodal software architecture for capturing user traces integrated with pupil-tracking data, auditory recognition, and muscle-activity monitoring.
- specification (language?) of how to define an interface evaluation module and how to integrate it into a larger system.
- (there may be more here, like testing different cognitive models, generating a markup language to represent interfaces, maybe even a unique metric space for interface usability)
--
We propose a framework for interface evaluation and recommendation that integrates behavioral models and design guidelines from both cognitive science and HCI. Our framework behaves like a committee of specialized experts, where each expert provides its own assessment of the interface, given its particular knowledge of HCI or cognitive science. For example, an expert may provide an evaluation based on the GOMS method, Fitts's law, Maeda's design principles, or cognitive models of learning and memory. An aggregator collects all of these assessments and weights the opinions of each expert, and outputs to the developer a merged evaluation score and a weighted set of recommendations.
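To make the committee metaphor concrete, here is a minimal sketch; the expert, the interface representation, and the weighting scheme are hypothetical placeholders rather than a committed design:

```python
# Sketch of the expert committee: each "expert" scores an interface
# from its own theory (GOMS, Fitts's law, ...) and proposes changes;
# the aggregator merges scores by weight and ranks recommendations.
from typing import Callable

# An expert maps an interface description to (score in [0, 1], recommendations).
Expert = Callable[[dict], tuple[float, list[str]]]

def fitts_expert(interface: dict) -> tuple[float, list[str]]:
    """Toy Fitts's-law expert: penalize undersized click targets."""
    small = [w for w in interface["widgets"] if w["width"] < 20]
    recs = [f"enlarge '{w['name']}' (width {w['width']}px)" for w in small]
    return 1.0 - len(small) / len(interface["widgets"]), recs

def aggregate(experts: list[tuple[Expert, float]], interface: dict):
    """Merge expert scores by weight; recommendations carry their weight."""
    total = sum(weight for _, weight in experts)
    score, recs = 0.0, []
    for expert, weight in experts:
        s, r = expert(interface)
        score += weight * s / total
        recs.extend((weight, rec) for rec in r)
    return score, sorted(recs, reverse=True)  # heaviest opinions first

ui = {"widgets": [{"name": "OK", "width": 15}, {"name": "Cancel", "width": 60}]}
print(aggregate([(fitts_expert, 2.0)], ui))  # (0.5, [(2.0, "enlarge 'OK' ...")])
```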
Systematic methods of estimating human performance with computer interfaces are used only sparingly, despite their obvious benefits, because of the overhead involved in implementing them. To test an interface, both manual coding systems like the GOMS variants and user simulations like those based on ACT-R/PM and EPIC require detailed pseudo-code descriptions of the user's workflow with the application interface. Any change to the interface then requires extensive changes to the pseudo-code, a major problem given the trial-and-error nature of interface design. Updating the models themselves is even more complicated: even an expert in CPM-GOMS, for example, can't necessarily adapt it to take into account results from new cognitive research.
Our proposal makes automatic interface evaluation easier to use in several ways. First, we propose to divide the input to the system into three separate parts: functionality, user traces, and interface. By separating the functionality from the interface, even radical interface changes will require updating only that part of the input. The user traces are also defined over the functionality, so they too translate across different interfaces. Second, the parallel modular architecture lowers the "entry cost" of using the tool. The system includes a broad array of evaluation modules, some very simple and some more complex. The simpler modules use only a subset of the input that a system like GOMS or ACT-R would require. This means that while more input will still lead to better output, interface designers can get minimal evaluations with only minimal information. For example, a visual search module may not require any functionality or user traces in order to determine whether all interface elements are distinct enough to be easy to find. Finally, a parallel modular architecture is much easier to augment with relevant cognitive and design evaluations.
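A sketch of how this separation and the low entry cost might look in code: modules declare which of the three inputs they require, so a designer supplying only an interface description still gets whatever evaluations need nothing more. All names and formats below are illustrative assumptions.

```python
# Sketch of the three-part input. Each evaluation module declares which
# inputs it requires, so designers can run whatever subset of modules
# their available information supports. All names here are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SystemInput:
    interface: Optional[dict] = None      # widget layout, labels, colors
    functionality: Optional[dict] = None  # abstract operations the app exposes
    user_traces: Optional[list] = None    # operation sequences, defined over
                                          # the functionality, not the interface

class VisualSearchModule:
    requires = {"interface"}  # needs neither functionality nor traces

    def evaluate(self, inp: SystemInput) -> str:
        colors = {w["color"] for w in inp.interface["widgets"]}
        return "elements distinct" if len(colors) > 1 else "low visual contrast"

def runnable_modules(modules: list, inp: SystemInput) -> list:
    """Keep only the modules whose required inputs were actually provided."""
    provided = {name for name in ("interface", "functionality", "user_traces")
                if getattr(inp, name) is not None}
    return [m for m in modules if m.requires <= provided]

inp = SystemInput(interface={"widgets": [{"color": "red"}, {"color": "red"}]})
for module in runnable_modules([VisualSearchModule()], inp):
    print(module.evaluate(inp))  # -> "low visual contrast"
```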
Background / Related Work
Each person should add the background related to their specific aims.
- Steven Ellis - Cognitive models of HCI, including GOMS variations and ACT-R
- EJ - Design Guidelines
- Jon - Perception and Action
- Andrew - Multiple task environments
- Gideon - Cognition and dual systems
- Ian - Interface design process
- Trevor - User trace collection methods (especially any eye-tracking, EEG, ... you want to suggest using)
Cognitive Models
I plan to port over most of the background on cognitive models of HCI from the old proposal.
Additions will consist of:
- CPM-GOMS as a bridge from the GOMS architecture to the promising procedural optimization of the Model Human Processor
  - Context of CPM development; discuss its relation to the original GOMS and KLM (a minimal KLM sketch follows this list)
    - Establish the tasks which were relevant for optimization when CPM was developed, and note that its obsolescence may have been unavoidable
  - Focus on CPM as the first step in transitioning from descriptive data, provided by mounting efforts in the cognitive sciences to discover the nature of task processing and accomplishment, to prescriptive algorithms which can predict an interface's efficiency and suggest improvements
  - CPM's purpose as an abstraction of cognitive processing: a symbolic representation designed not for accuracy but for precision
  - CPM's successful trials, e.g. Project Ernestine
    - Implications of this project include CPM's ability to accurately estimate processing at a psychomotor level
    - The project does suggest limitations, however, when one attempts to examine more complex tasks involving deeper and more numerous cognitive processes
- ACT-R as an example of a progressive cognitive modeling tool
  - A tool clearly built by and for cognitive scientists, and as a result one that presents a much more accurate view of human processing, which is helpful for our research
  - Built-in automation, which now seems to be a standard feature of cognitive modeling tools
  - Still an abstraction of cognitive processing, but one that makes adaptation to cutting-edge cognitive research findings an integral aspect of its modular structure
  - Expand on its focus on multi-tasking, taking what was a huge advance between GOMS and its CPM variant and bringing the simulation several steps closer to approximating the nature of cognition in HCI
  - Far more accessible, both for researchers and for the lay user/designer, in its portability to LISP, its pre-constructed modules representing cognitive capacities, and its underlying algorithms modeling paths of cognitive processing
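Since KLM is the simplest member of this family and the basis for the CPM variant, a minimal sketch of a keystroke-level estimate may help make the models concrete. The operator durations are the commonly cited values from Card, Moran, and Newell, and the task encoding is hypothetical; a full KLM analysis also applies heuristic rules for placing the mental (M) operators, which this sketch takes as given.

```python
# Minimal KLM (keystroke-level) estimate: sum standard operator times
# over an encoded task. Durations (seconds) are the commonly cited
# Card, Moran & Newell values; real analyses calibrate them per user.
KLM_OPERATORS = {
    "K": 0.28,  # press a key (average skilled typist)
    "P": 1.10,  # point to a target with the mouse
    "B": 0.10,  # press or release a mouse button
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def klm_estimate(encoding: str) -> float:
    """Predicted execution time for a task written as operator symbols."""
    return sum(KLM_OPERATORS[op] for op in encoding)

# Hypothetical task: save a file via a menu. Think, home to the mouse,
# point to the menu and click, point to the item and click.
print(klm_estimate("MHPBBPBB"))  # -> 4.35 seconds
```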
Design guidelines
A multitude of rule sets exist for the design of not only interfaces but also architecture, city planning, and software development. They range in scale from a single primary rule to as many as Christopher Alexander's 253 rules for urban environments,[1] which he introduced along with the concept of design patterns in the 1970s. Studies have likewise been conducted on the use of these rules:[2] guidelines are often only partially understood, indistinct to the developer, and "fraught" with potential usability problems in real-world situations.
Application to AUE
And yet the vast majority of guideline sets, including the most popular rulesets, have been arrived at heuristically. The most successful, such as Raskin's and Shneiderman's, have been forged from years of observation rather than empirical study and experimentation. The problem is similar to the circular logic faced by automated usability evaluation: an automated system is limited in the suggestions it can offer to a set of preprogrammed guidelines which have often not been subjected to rigorous experimentation.[3] In the vast majority of existing studies, emphasis has traditionally been placed either on the development of guidelines or on the application of existing guidelines to automated evaluation. A mutually reinforcing development of both simultaneously has not been attempted.
Overlap between rulesets is unavoidable. For our purposes of evaluating existing rulesets efficiently, without extracting and analyzing each rule individually, it may be desirable to identify the overarching principles or philosophy (at most two or three) of a given ruleset and to determine their quantitative relevance to problems of cognition.
Popular and seminal examples
Shneiderman's Eight Golden Rules date to 1987 and are arguably the most cited. They are heuristic, but can be somewhat classified by cognitive objective: at least two rules apply primarily to repeated use, versus discoverability. Up to five of Shneiderman's rules emphasize predictability in the outcomes of operations and increased feedback and control in the agency of the user. His final rule, paradoxically, removes control from the user by suggesting a reduced short-term memory load, which we can arguably classify as simplicity.
Raskin's Design Rules are classified by the author into five principles, augmented by definitions and supporting rules. While one principle is primarily aesthetic (a design problem arguably outside the bounds of this proposal) and one is a basic endorsement of testing, the remaining three begin to reflect philosophies similar to Shneiderman's: reliability or predictability, simplicity or efficiency (which we can construe as two sides of the same coin), and, finally, a concept of uninterruptibility.
Maeda's Laws of Simplicity are fewer, and ostensibly emphasize simplicity exclusively, although elements of use as related by Shneiderman's rules and efficiency as defined by Raskin may be facets of this simplicity. Google's corporate mission statement presents Ten Principles, only half of which can be considered true interface guidelines. Efficiency and simplicity are cited explicitly, aesthetics are once again noted as crucial, and working within a user's trust is another application of predictability.
Elements and goals of a guideline set
Myriad rulesets exist, but the variation among them is scarce; it indeed seems possible to parse these common rulesets into overarching principles that can be converted to, or associated with, quantifiable cognitive properties. For example, simplicity likely has an analogue in the user's short-term memory retention or visual retention, vis-à-vis the rule of Seven, Plus or Minus Two. Predictability likewise may have an analogue in Activity Theory, with regard to a user's perceptual expectations for a given action; uninterruptibility has implications in cognitive task-switching;[4] and so forth.
Within the scope of this proposal, we aim to reduce and refine these philosophies found in seminal rulesets and identify their logical cognitive analogues. By assigning a quantifiable taxonomy to these principles, we will be able to rank and weight them with regard to their real-world applicability, developing a set of "meta-guidelines" and rules for applying them to a given interface in an automated manner. Combined with cognitive models and multi-modal HCI analysis, we seek to develop, in parallel with these guidelines, the interface evaluation system responsible for their application.
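As a sketch of what applying one such meta-guideline automatically could look like, the following checks a hypothetical interface description against the seven-plus-or-minus-two bound; the WidgetGroup format is an assumption for illustration, not a committed part of the framework.

```python
# Hypothetical sketch of one "meta-guideline" applied automatically:
# flag any widget group whose item count exceeds short-term memory
# capacity (Miller's seven plus or minus two). The WidgetGroup format
# is an assumption for illustration, not a defined part of the system.
from dataclasses import dataclass

@dataclass
class WidgetGroup:
    name: str
    items: list

def check_simplicity(groups: list, limit: int = 7 + 2) -> list:
    """One warning per group that exceeds the working-memory bound."""
    return [
        f"{g.name}: {len(g.items)} items exceeds {limit}; consider regrouping"
        for g in groups
        if len(g.items) > limit
    ]

ui = [
    WidgetGroup("File menu", ["New", "Open", "Save", "Save As", "Close",
                              "Import", "Export", "Print", "Preview",
                              "Recent", "Exit"]),
    WidgetGroup("Toolbar", ["Cut", "Copy", "Paste"]),
]
for warning in check_simplicity(ui):
    print(warning)  # flags only the 11-item File menu
```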
Perception and Action (in progress)
- Information Processing Approach
  - Advantages
    - Formalism eases translation of theory into scripting language
  - Disadvantages
    - Assumes symbolic representation
- Ecological (Gibsonian) Approach
  - Advantages
    - Emphasis on bodily and environmental constraints
  - Disadvantages
    - Lack of formalism hinders translation of theory into scripting language
The contributions section has been moved to a standalone page.
Preliminary Results
Workflow, Multi-tasking, and Interruption
I. Goals
The goals of the preliminary work are to gain qualitative insight into how information workers practice metawork, and to determine whether people might be better supported by software that facilitates metawork and interruptions. Thus, the preliminary work should investigate and demonstrate the need for, and the impact of, the project's core goals.
II. Methodology
Seven information workers, ages 20-38 (5 male, 2 female), were interviewed to determine which methods they use to "stay organized". An initial list of metawork strategies was established from two pilot interviews, and then a final list was compiled. Participants then responded to a series of 17 questions designed to gain insight into their metawork strategies and process. In addition, verbal interviews were conducted to get additional open-ended feedback.
III. Results
A histogram of methods people use to "stay organized" in terms of tracking things they need to do (TODOs), appointments and meetings, etc. is shown in the figure below.
[Figure: AcbGraph.jpg]
In addition to these methods, participants also used a number of other methods, including:
- iCal
- Notes written in xterms
- "Inbox zero" method of email organization
- iGoogle Notepad (for tasks)
- Tag emails as "TODO", "Important", etc.
- Things (Organizer Software)
- Physical items placed to "remind me of things"
- Sometimes arranging windows on desk
- Keeping browser tabs open
- Bookmarking web pages
- Keeping programs/files open, scrolled to certain locations, sometimes with items selected
In addition, three participants said that when interrupted they "rarely" or "very rarely" were able to resume the task they were working on prior to the interruption. Three of the participants said that they would not actively recommend their metawork strategies for other people, and two said that staying organized was "difficult".
Four participants were neutral to the idea of new tools to help them stay organized and three said that they would like to have such a tool/tools.
IV. Discussion
These results quantitatively support our hypothesis that there is no clearly dominant set of metawork strategies employed by information workers. This highly fragmented landscape is surprising, even though most information workers work in similar environments (at a desk, on the phone, in meetings) and with the same types of tools (computers, pens, paper, etc.). We believe this suggests that there are complex tradeoffs between these methods and that no single method is sufficient. We therefore believe that users would be better supported by a new set of software-based metawork tools.
Causal perception of interfaces
Owner: Adam
I. Goals
The goal of the preliminary work is to demonstrate the importance of the principles of causal perception to how efficiently people learn to use a new interface. Causal reasoning is a fast-growing field in cognitive psychology which has demonstrated that much of how people perceive and understand the world is influenced by the causal relations they perceive. These preliminary results demonstrate that a novel human-computer interface is easier to learn when it can more naturally be understood in terms of causes (control elements) having effects (upon data elements). The demonstration focuses on the principle of causal order: causes always precede their effects. While the issue of order (termed noun-verb or action-object order) has been addressed in the HCI literature (e.g., Shneiderman), it is commonly the opposite order that is championed, because choosing an object before an action limits the number of relevant actions. This project will demonstrate that, all else being equal, the action-object order is easier for users to learn, presumably because it accords with causal order. The ultimate goal is to measure the adherence of interfaces to this and other principles of causal perception and inference.
II. Methodology
For the purpose of this demonstration, I created a game in which most objects are both controls and data. Each object has intrinsic properties which can be transferred to other objects, and these properties can themselves be modified by other objects' intrinsic properties. For example, the blue object can color other objects blue, but if it has been modified by the gradient object, it colors other objects gradient blue. The goal is to create an object with a certain combination of properties. The game can be seen at http://www.cog.brown.edu:16080/~adarlow/HCI/. There were two conditions, one consistent with causal order and one inconsistent. In the consistent condition, the participant had to click object A and then object B in order to apply object A's property to object B. In the inconsistent condition, the order was reversed. If participants in the consistent condition solve the game faster than those in the inconsistent condition, this is taken as evidence that causal interpretation helps users learn novel interfaces.
Nine students participated by playing the game and reporting the time and number of clicks it took them to complete the game. They were assigned randomly to the two conditions. Unfortunately, the random assignment put 6 participants in the inconsistent condition and only 3 in the consistent condition.
III. Results
Despite the small sample size, the differences between the two groups were large enough to be statistically significant. Participants in the consistent condition completed the game in less time (M=2.22 minutes, SD=0.54) and fewer clicks (M=61 clicks, SD=15.4) than participants in the inconsistent condition (M=6.12 minutes, SD=2.28; M=140.7 clicks, SD=65.3).
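As a sanity check, Welch's t-test can be recomputed from these summary statistics and the group sizes reported above (3 consistent, 6 inconsistent); a minimal sketch using scipy:

```python
# Recompute Welch's t-test from the reported summary statistics,
# with group sizes taken from the methodology (3 consistent, 6 inconsistent).
from scipy.stats import ttest_ind_from_stats

# Completion time (minutes)
t, p = ttest_ind_from_stats(
    mean1=2.22, std1=0.54, nobs1=3,   # consistent condition
    mean2=6.12, std2=2.28, nobs2=6,   # inconsistent condition
    equal_var=False,                  # Welch's test: unequal variances
)
print(f"time:   t = {t:.2f}, p = {p:.3f}")   # roughly t = -3.97, p = 0.007

# Number of clicks
t, p = ttest_ind_from_stats(
    mean1=61.0, std1=15.4, nobs1=3,
    mean2=140.7, std2=65.3, nobs2=6,
    equal_var=False,
)
print(f"clicks: t = {t:.2f}, p = {p:.3f}")   # roughly t = -2.84, p = 0.03
```

Both comparisons fall below the conventional 0.05 threshold despite the small sample, consistent with the significance claimed above.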
IV. Discussion
These results support our hypothesis that an interface is easier to learn and use when it satisfies people's expectations of causal systems. Participants who had to use a novel interface took nearly three times as long to complete a task when the interface dynamics defied natural causal order. These results should be expanded to other causal principles and other interfaces.
Worth noting about this interface is the lack of a delineation between commands and data: each object serves as both. This is not typical of current interfaces, but it could become more prevalent in the future. Modern interfaces give users more and more control over the interface, and as these manipulations become easier and more natural they become a larger part of the typical workflow. Thus users will spend more time manipulating the interface, paving the way for meta-interface commands and tools. This progression seems even more natural in real-world-simulation environments like BumpTop, which attempt to capitalize on people's physical-world intuitions. One real-world convention which these interfaces haven't yet adopted is that tools are objects, too: just because we use a hammer to manipulate other objects doesn't mean we can't paint the hammer red. Identifying the causal principles that people use to understand and interact with the world will allow us to abstract away from rigid adherence to real-world physics without losing the richness and intuition it provides.
[Criticisms]
- Owner: Andrew Bragdon
Any criticisms or questions we have regarding the proposal can go here.
References
- ↑ Borchers, Jan O. "A Pattern Approach to Interaction Design." 2000. http://hci.rwth-aachen.de/materials/publications/borchers2000a.pdf
- ↑ http://stl.cs.queensu.ca/~graham/cisc836/lectures/readings/tetzlaff-guidelines.pdf
- ↑ Ivory, M. and Hearst, M. "The State of the Art in Automated Usability Evaluation of User Interfaces." ACM Computing Surveys (CSUR), 2001. http://www.eecs.berkeley.edu/Pubs/TechRpts/2000/CSD-00-1105.pdf
- ↑ Czerwinski, M., Horvitz, E., and Wilhite, S. "A Diary Study of Task Switching and Interruptions." Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2004. http://portal.acm.org/citation.cfm?id=985692.985715