CS295J/Final contributions
Specific Aims and Contributions (to be separated later)
[David: I have removed some unattributed stuff that wasn't in the format described in assignment 13. If that was in error, go back to an earlier version to recover the text, attribute it, and put in the correct format.]
Recording User-Interaction Primitives
- Owner: Trevor O'Brien
Aims
- Develop a system for logging low-level user-interactions within existing applications. By low-level interactions, we refer to those interactions for which a quantitative, predictive, GOMS-like model may be generated. [How do you propose to get around the problem Andrew brought up of accurately linking this low-level data to the interface/program's state at the moment it occurred? - Steven]
- At the same scale of granularity, integrate explicit interface-interactions with multimodal-sensing data, e.g. pupil-tracking, muscle-activity monitoring, auditory recognition, EEG data, and facial and posture-focused video.
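As a purely illustrative sketch (the field names, and the idea of an interface-state snapshot id gesturing at Steven's question above, are assumptions rather than proposed design), a single logged primitive might bundle the interface event with whatever multimodal samples fall in the same time window:

# Hypothetical sketch of a low-level interaction record; field names are
# illustrative only and not part of the proposed framework.
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class InteractionPrimitive:
    timestamp_ms: int            # when the event occurred
    source: str                  # e.g. "mouse", "keyboard", "eye-tracker", "EMG"
    event: str                   # e.g. "click", "keypress", "fixation"
    target_element: str          # interface element the event was bound to, if any
    interface_state_id: str      # placeholder snapshot id linking the event to program state
    sensor_data: Dict[str, Any] = field(default_factory=dict)  # raw multimodal samples

log = []
log.append(InteractionPrimitive(
    timestamp_ms=1024, source="mouse", event="click",
    target_element="submit-button", interface_state_id="state-0042",
    sensor_data={"pupil_diameter_mm": 3.1},
))
print(log[0])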
Contributions
- Extensible, multi-modal, HCI framework for recording rich interaction-history data in existing applications. [Perhaps something should be said about sectioning this data off into modules - i.e. compiling all e-mail writing recordings to analyze separately from e-mail sorting recordings in order to facilitate analysis. - Steven]
Demonstration
- Techniques for pupil-tracking, auditory-recognition, and muscle-activity monitoring will be evaluated with respect to accuracy, sampling rate, and computational cost in a series of quantitative studies. [Is there an explicit, quantifiable demonstration you can propose? Such as some kind of conclusive comparison between automated capture techniques and manual techniques? Or maybe a demonstration of some kind of methodical, but still low-level, procedure for clustering related interaction primitives? E J Kalafarski 15:02, 29 April 2009 (UTC)] [Should this take into account the need to match recordings to interface state? - Steven]
Dependencies
- Hardware for pupil-tracking and muscle-activity monitoring. Commercial software packages for such hardware may also be required.
Background and Related Work
- The utility of interaction histories with respect to assessing interface design has been demonstrated in [1].
- In addition, data management histories have been shown effective in the visualization community in [2] [3] [4], providing visualizations by analogy [5] and offering automated suggestions [6], which we expect to generalize to user interaction history.
review
- This is recording everything possible during an interaction. It is necessary to do for our project. (Gideon)
- Two thumbs up. - Steven
Semantic-level Interaction Chunking
- Owner: Trevor O'Brien
Aims
- Develop techniques for chunking low-level interaction primitives into semantic-level interactions, given an application's functionality and data-context. (And de-chunking? Is an invertible mapping needed?) A minimal chunking sketch appears after this list.
- Perform user-study evaluation to validate chunking methods.
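As promised above, here is a minimal, purely illustrative sketch of one possible chunking rule; the time-gap threshold and the choice of grouping key are assumptions for illustration, not proposed values:

# Hypothetical sketch: group consecutive low-level events into a semantic
# "chunk" whenever they act on the same element within a short time gap.
def chunk_events(events, max_gap_ms=1500):
    chunks, current = [], []
    for ev in events:  # events: list of (timestamp_ms, target_element, action) tuples
        if current and (ev[0] - current[-1][0] > max_gap_ms or ev[1] != current[-1][1]):
            chunks.append(current)
            current = []
        current.append(ev)
    if current:
        chunks.append(current)
    return chunks

# e.g. two keypresses on the same field collapse into one chunk; the later click starts a new one
print(chunk_events([(0, "to-field", "keypress"), (200, "to-field", "keypress"),
                    (5000, "send-button", "click")]))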
Contributions
- Design and user-study evaluation of semantic-chunking techniques for interaction.
Demonstration
- These techniques will be validated quantitatively with respect to efficiency and user error rate on a set of timed-tasks over a series of case studies. In addition, qualitative feedback on user satisfaction will be incorporated in the evaluation of our methods. [As above, how will this be validated quantitatively? Comparing two methodologies for manual chunking? What would these differing methodologies be? E J Kalafarski 15:06, 29 April 2009 (UTC)]
Dependencies
- Software framework for collecting user interactions. [I'd like to see this fleshed out in a theoretical manner. - Steven]
- Formal description of interface functionality.
- Description of data objects that can be manipulated through interaction.
review
- With what level of accuracy can this be performed? I like the idea; it's worthy of a nice project in and of itself. (Gideon)
Reconciling Usability Heuristics with Cognitive Theory
- Owner: E J Kalafarski 14:56, 28 April 2009 (UTC)
Contribution: A weighted framework for the unification of established heuristic usability guidelines and accepted cognitive principles.
[David: I'm having trouble parsing this contribution. How do you weight a framework? Also, I'd like to have a little more sense of how this might fit into the bigger picture. The assignment didn't ask for that, but is there some way to provide some of that context?] [I agree that the wording is a little confusing. The framework is not weighted, but rather it weights the different guidelines/principles. It might also be worth explaining how this is a useful contribution, e.g. does it allow for more accurate interface evaluation? -Eric]
Demonstration: Three groups of experts anecdotally apply cognitive principles, heuristic usability principles, and a combination of the two.
- A "cognition expert," given a constrained, limited-functionality interface, develops an independent evaluative value for each interface element based on accepted cognitive principles.
- A "usability expert" develops an independent evaluative value for each interface element based on accepted heuristic guidelines.
- A third expert applies several unified cognitive analogues from a matrix of weighted cognitive and [...HCI design principles? -Eric]
- User testing demonstrates the assumed efficacy and applicability of the matricized analogues versus independent application of analogued principles.
[There is still the question of how these weights are determined. Is this the job of the third expert? Or is the third expert given these weights, and he just determines how to apply them? -Eric]
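One hypothetical way to represent the weights (this does not answer how they are determined, only what form they might take): a matrix whose rows are heuristic guidelines and whose columns are cognitive principles, with each entry giving the strength of the analogy. All names and numbers below are invented for illustration.

# Hypothetical sketch of a weighting matrix between heuristic guidelines and
# cognitive principles; guidelines, principles, and weights are placeholders.
heuristics = ["minimize memory load", "consistency"]
cognitive = ["working memory limits", "recognition over recall"]

# weights[i][j]: strength of the analogy between heuristic i and principle j
weights = [
    [0.9, 0.7],   # "minimize memory load"
    [0.2, 0.4],   # "consistency"
]

def combined_score(heuristic_scores, cognitive_scores):
    """Combine per-element expert scores using the analogy weights."""
    total, norm = 0.0, 0.0
    for i, hs in enumerate(heuristic_scores):
        for j, cs in enumerate(cognitive_scores):
            total += weights[i][j] * (hs + cs) / 2.0
            norm += weights[i][j]
    return total / norm if norm else 0.0

# e.g. one interface element, scored independently by the two expert groups
print(combined_score(heuristic_scores=[0.8, 0.5], cognitive_scores=[0.6, 0.9]))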
Dependency: A set of established cognitive principles, selected with an eye toward heuristic analogues.
Dependency: A set of established heuristic design guidelines, selected with an eye toward cognitive analogues.
review
- This seems like the 2d matrix? Is this implemented as a module? (Gideon)
- Gideon's comment brings up an important point - in what form will the weights be presented? - Steven
Evaluation for Recommendation and Incremental Improvement
- Owner: E J Kalafarski 14:56, 28 April 2009 (UTC)
Contributions: Using a limited set of interface evaluation modules for analysis, we demonstrate, in a narrow and controlled manner, the proposed efficiency and accuracy of a method of aggregating individual interface suggestions based on accepted CPM principles (e.g. Fitts' Law and Affordance) and applying them to the incremental improvement of the interface.
[David: I'm a little lost. Can you make this long sentence into a couple shorter simpler ones? I think I can imagine what you are getting at, but I'm not sure.]
Demonstration: A narrow but comprehensive study to demonstrate the efficacy and efficiency of automated aggregation of interface evaluations versus independent analysis. The goal is to show not total replacement, but a gain in evaluation speed weighed against the loss of control and features.
- Given a carefully constrained interface, perhaps with as few as two buttons and a minimalist feature set, an expert interface designer, given the individual results of several basic evaluation modules, makes recommendations and suggestions for the design of the interface.
- The aggregation meta-module conducts a similar survey of module outputs, producing recommendations and suggestions for improvement of the given interface.
- A separate, independent body of experts then implements the two sets of suggestions and performs a user study on the resultant interfaces, analyzing usability change and comparing it to the time and resources committed by the evaluation expert and the aggregation module, respectively.
[David: nice demonstration!]
Dependency: Module for the analysis of Fitts' Law as it applies to the individual elements of a given interface.
Dependency: Module for the analysis of Affordance as it applies to the individual elements of a given interface.
[David: I think this also has, as a dependency, some kind of framework for the modules. "narrow but comprehensive" sounds challenging. ]
review
- I see this as being the "black box" for our architecture? If so, good. Wouldn't the dependencies be any/all modules? (Gideon)
- [I think this is the same as my contribution, and that we should merge them together. You've covered some things that I didn't, such as a more detailed demonstration of how our framework is better than a human interface designer's evaluation. You also gave a demonstration of comparing recommendations, while I only covered evaluations. -Eric]
- I think this contribution and mine (the CPM-GOMS improvement) will need to be reconciled into a single unit. - Steven
Evaluation Metrics
Owner: Gideon Goldin
- Architecture Outputs
- Time (time to complete task)
- Performs as well or better than CPM-GOMS, demonstrated with user tasks [More an aspect of demonstration than design. - Steven]
- Dependencies
- CPM-GOMS Module with cognitive load extension
- Cognitive Load
- Predicts cognitive load during tasks, demonstrated with user tasks
- Dependencies
- CPM-GOMS Module with cognitive load extension
- Facial Gesture Recognition Module
- Frustration
- Accurately predicts users' frustration levels, demonstrated with user tasks
- Dependencies
- Facial Gesture Recognition Module
- Galvanic Skin Response Module
- Interface Efficiency Module
- Aesthetic Appeal
- Analyzes whether the interface is aesthetically unpleasing, demonstrated with user tasks
- Dependencies
- Aesthetics Module
- Simplicity
- Analyzes how simple the interface is, demonstrated with user tasks
- Dependencies
- Interface Efficiency Module
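To make the shape of these architecture outputs slightly more concrete, here is a purely illustrative sketch of a per-metric record naming the modules it depends on; the units and values are placeholders, not proposed measurements.

# Hypothetical sketch of the architecture's metric outputs; metric names
# mirror the list above, while values and units are invented placeholders.
metrics = {
    "time": {"value_s": 935.0, "depends_on": ["CPM-GOMS (cognitive load ext.)"]},
    "cognitive_load": {"value": 0.62, "depends_on": ["CPM-GOMS (cognitive load ext.)",
                                                     "Facial Gesture Recognition"]},
    "frustration": {"value": 0.40, "depends_on": ["Facial Gesture Recognition",
                                                  "Galvanic Skin Response",
                                                  "Interface Efficiency"]},
    "aesthetic_appeal": {"value": 0.70, "depends_on": ["Aesthetics"]},
    "simplicity": {"value": 0.55, "depends_on": ["Interface Efficiency"]},
}

for name, record in metrics.items():
    print(name, record)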
[I agree that a metric (or several) is crucial to our framework, but what is your demonstration of the feasibility or usability of these particular values you've chosen? On an unrelated note, we talked a lot about how some of these might be "convertible" into others…do you see these metrics as different sides of the same coin? Can the units of one be "converted" into the units of another, like gallons to liters? E J Kalafarski 15:10, 29 April 2009 (UTC)]
[More detail on each demonstration would be helpful. -Eric]
review
- Words cannot express how <s>amazing</s> antiquated this contribution is. <s>It</s> The writer should be <s>given as much money as requested</s> drawn and quartered without question.
Parallel Framework for Evaluation Modules
- Owner: Adam Darlow, Eric Sodomka
This section will describe in more detail the inputs, outputs and architecture that were presented in the introduction.
Contribution: Create a framework that provides better interface evaluations than currently existing techniques, and a module weighting system that provides better evaluations than any of its modules taken in isolation.
Demonstration: Run a series of user studies and compare users' performance to expected performance, as given by the following interface evaluation methods:
- Traditional, manual interface evaluation
- As a baseline.
- Using our system with a single module
- "Are any of our individual modules better than currently existing methods of interface evaluation?".
- Using our system with multiple modules, but have aggregator give a fixed, equal weighting to each module
- As a baseline for our aggregator: we want to show the value of adding dynamic weighting.
- Using our system with multiple modules, and allow the aggregator to adjust weightings for each module, but have each module get a single weighting for all dimensions (time, fatigue, etc.)
- For validating the use of a dynamic weighting system.
- Using our system with multiple modules, and allow the aggregator to adjust weightings for each module, and allow the module to give different weightings to every dimension of the module (time, fatigue, etc.)
- For validating the use of weighting across multiple utility dimensions.
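A toy sketch of how the last three conditions differ (fixed equal weights, one learned weight per module, and one learned weight per module per dimension); module names, scores, weights, and dimensions are invented for illustration.

# Hypothetical sketch of the three aggregator weighting schemes compared above.
module_scores = {          # per-module score for each utility dimension
    "fitts":      {"time": 0.8, "fatigue": 0.5},
    "affordance": {"time": 0.4, "fatigue": 0.9},
}

def aggregate(scores, weights):
    """weights[module][dimension] -> weighted average per dimension."""
    out = {}
    for dim in next(iter(scores.values())):
        num = sum(scores[m][dim] * weights[m][dim] for m in scores)
        den = sum(weights[m][dim] for m in scores)
        out[dim] = num / den
    return out

equal   = {m: {"time": 1.0, "fatigue": 1.0} for m in module_scores}   # fixed equal weighting
per_mod = {"fitts": {"time": 2.0, "fatigue": 2.0},                    # one weight per module
           "affordance": {"time": 1.0, "fatigue": 1.0}}
per_dim = {"fitts": {"time": 2.0, "fatigue": 0.5},                    # weight per module per dimension
           "affordance": {"time": 1.0, "fatigue": 2.0}}

for name, w in [("equal", equal), ("per-module", per_mod), ("per-dimension", per_dim)]:
    print(name, aggregate(module_scores, w))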
Dependencies: Requires a good set of modules to plug into the framework.
review
- What exactly are the differences between this and EJ's earlier contribution? I think that if they are the same, this one is a bit more clear, IMHO. (Gideon)
[I have a similar question to Gideon's. My hunch is that this defines a sort of "namespace" for the modules themselves, while the other contribution (the one you guys assigned to me) is more explicitly the aggregator of these outputs. Correct me if I'm wrong. But if that's the case, the two contributions might need to be validated similarly/together. E J Kalafarski 15:17, 29 April 2009 (UTC)]
- I feel like this structure's clarity may come at expense of specificity, I'd like to know more than that it's parallel. - Steven
Evaluation and Recommendation via Modules
- Owner: E J Kalafarski
A "meta-module" called the aggregator will be responsible for assembling and formatting the output of all other modules into a structure that is both extensible and immediately usable by either an automated or a human designer.
Requirements
The aggregator's functionality, then, is defined by its inputs (the outputs of the other modules) and by the desired output of the system as a whole, per its position in the architecture. Its purpose is largely formatting and reconciliation of the products of the multitudinous (and extensible) modules. The output of the aggregator must meet two requirements: first, to generate a set of human-readable suggestions for the improvement of the given interface; second, to generate a machine-readable, but also analyzable, evaluation of the various characteristics of the interface and accompanying user traces.
From these specifications, it is logical to assume that a common language or format will be required for the output of individual modules. We propose an XML-based file format, allowing: (1) a section for the standardized identification of problem areas, applicable rules, and proposed improvements, generalized by the individual module and mapped to a single element, or group of elements, in the original interface specification; (2) a section for specification of generalizable "utility" functions, allowing a module to specify how much a measurable quantity of utility is positively or negatively affected by properties of the input interface; (3) new, user-definable sections for evaluations of the given interface not covered by the first two sections. The first two sections are capable of conveying the vast majority of module outputs predicted at this time, but the XML format extensibly allows modules to pass on whatever information may become prominent in the future.
Specification
<module id="Fitts-Law">
  <interface-elements>
    <element>
      <desc>submit button</desc>
      <problem>
        <desc>size</desc>
        <suggestion>width *= 2</suggestion>
        <suggestion>height *= 2</suggestion>
        <human-suggestion>Increase size relative to other elements</human-suggestion>
      </problem>
    </element>
  </interface-elements>
  <utility>
    <dimension>
      <desc>time</desc>
      <value>0:15:35</value>
    </dimension>
    <dimension>
      <desc>frustration</desc>
      <value>pulling hair out</value>
    </dimension>
    <dimension>
      <desc>efficiency</desc>
      <value>13.2s/KPM task</value>
      <value>0.56m/CPM task</value>
    </dimension>
  </utility>
  <tasks>
    <task>
      <desc>complete form</desc>
    </task>
    <task>
      <desc>lookup SSN</desc>
    </task>
    <task>
      <desc>format phone number</desc>
    </task>
  </tasks>
</module>
Logic
This file provided by each module is then the input for the aggregator. The aggregator's most straightforward function is the compilation of the "problem areas": assembling them, noting problem areas and suggestions that are recommended by more than one module, and weighting them accordingly in its final report. These weightings can begin in an equal state, but the aggregator should be capable of learning iteratively which modules' results are most relevant to the user and updating weightings accordingly. This may be accomplished with manual tuning or with a machine-learning algorithm capable of determining which modules most often agree with others.
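A rough sketch of one possible agreement-based update rule (an assumption for illustration, not the committed learning scheme): count how often each module's suggestions are echoed by other modules and nudge its weight toward that agreement rate.

# Hypothetical sketch of agreement-based weight updates for the aggregator;
# the update rule and learning rate are illustrative assumptions.
from collections import Counter

def update_weights(weights, suggestions_by_module, lr=0.1):
    """suggestions_by_module: {module: set of (element, suggestion) pairs}."""
    counts = Counter()
    for suggs in suggestions_by_module.values():
        counts.update(suggs)
    for module, suggs in suggestions_by_module.items():
        # fraction of this module's suggestions echoed by at least one other module
        agree = sum(1 for s in suggs if counts[s] > 1) / max(len(suggs), 1)
        weights[module] = (1 - lr) * weights[module] + lr * agree
    return weights

weights = {"fitts": 0.5, "affordance": 0.5}
print(update_weights(weights, {
    "fitts":      {("submit button", "increase size")},
    "affordance": {("submit button", "increase size"), ("menu", "flatten hierarchy")},
}))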
Secondly, the aggregator compiles the utility functions provided by the module specs. This, again, is a summation of similarly-described values from the various modules.
When confronted with user-defined sections of the XML spec, the aggregator is primarily responsible for compiling them and sending them along to the output of the machine. Even if the aggregator does not recognize a section or property of the evaluative spec, if it sees the property reported by more than one module it should be capable of aggregating these intelligently. In future versions of the spec, it should be possible for a module to provide instructions for the aggregator on how to handle unrecognized sections of the XML.
From these compilations, then, the aggregator should be capable of outputting both aggregated human-readable suggestions on interface improvements for a human designer and a comprehensive evaluation of the interface's effectiveness on the given task traces. Again, this is dependent on the specification of the system as a whole, but is likely to include measures and comparisons, graphs of task versus utility, and quantitative measures of an element's effectiveness.
This section is necessarily defined by the output of the individual modules (which I already expect to be of varied and arbitrary structure) and the desired output of the machine as a whole. It will likely need to be revised heavily after other modules and the "Parallel Framework" section are defined. E J Kalafarski 12:34, 24 April 2009 (UTC)
This section describes the aggregator, which takes the output of multiple independent modules and aggregates the results to provide (1) an evaluation and (2) recommendations for the user interface. We should explain how the aggregator weights the output of different modules (this could be based on historical performance of each module, or perhaps based on E.J.'s cognitive/HCI guidelines).
review
- Cool- I like the xml. (Gideon)
- [I really like the xml, too. I'm guessing you just have placeholder text for some of these values (e.g. "pulling hair out"), but maybe we should think more about what exactly is returned. I was thinking the evaluation values would be some sort of utility value between 0-1, but this makes the "time" dimension tricky (what does a "time" score of 0.75 mean?). -Eric]
Sample Modules
CPM-GOMS
- Owners: Steven Ellis
This module will provide interface evaluations and suggestions based on a CPM-GOMS model of cognition for the given interface. It will provide a quantitative, predictive, cognition-based parameterization of usability. From empirically collected data, user trajectories through the model (critical paths) will be examined, highlighting bottlenecks within the interface, and offering suggested alterations to the interface to induce more optimal user trajectories.
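For concreteness, a minimal sketch of the critical-path computation underlying this kind of analysis: the longest path through a directed acyclic graph of operators. The operator names and durations below are invented, and a real CPM-GOMS model would distinguish perceptual, cognitive, and motor operators.

# Hypothetical sketch: longest (critical) path through a DAG of operators.
def critical_path(durations, edges):
    """durations: {op: ms}; edges: list of (pred, succ) pairs; returns (ms, path)."""
    succs = {op: [] for op in durations}
    indeg = {op: 0 for op in durations}
    for a, b in edges:
        succs[a].append(b)
        indeg[b] += 1
    best = {op: (durations[op], [op]) for op in durations}
    queue = [op for op in durations if indeg[op] == 0]
    while queue:
        op = queue.pop()
        for nxt in succs[op]:
            cand = (best[op][0] + durations[nxt], best[op][1] + [nxt])
            if cand[0] > best[nxt][0]:
                best[nxt] = cand
            indeg[nxt] -= 1
            if indeg[nxt] == 0:
                queue.append(nxt)
    return max(best.values())

durations = {"perceive": 100, "decide": 50, "move-cursor": 300, "click": 100}
edges = [("perceive", "decide"), ("decide", "move-cursor"), ("move-cursor", "click")]
print(critical_path(durations, edges))   # -> (550, ['perceive', 'decide', 'move-cursor', 'click'])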
I’m hoping to have some input on this section, because it seems to be the crux of the “black box” into which we take the inputs of interface description, user traces, etc. and get our outputs (time, recommendations, etc.). I know at least a few people have pretty strong thoughts on the matter and we ought to discuss the final structure.
That said – my proposal for the module:
- In my opinion the concept of the Model Human Processor (at least as applied in CPM) is outdated – it’s too economic/overly parsimonious in its conception of human activity. I think we need to create a structure which accounts for more realistic conditions of HCI including multitasking, aspects of distributed cognition (and other relevant uses of tools – as far as I can tell CPM doesn’t take into account any sort of productivity aids), executive control processes of attention, etc. ACT-R appears to take steps towards this but we would probably need to look at their algorithms to know for sure.
- Critical paths will continue to play an important role – we should in fact emphasize that part of this tool’s purpose will be a description not only of ways in which the interface should be modified to best fit a critical path, but also ways in which users ought to be instructed in their use of the path. This feedback mechanism could be bidirectional – if the model’s predictions of the user’s goals are incorrect, the critical path determined will also be incorrect and the interface inherently suboptimal. The user could be prompted with a tooltip explaining in brief why and how the interface has changed, along with options to revert, select other configurations (euphemized by goals), and to view a short video detailing how to properly use the interface.
- Call me crazy but, if we assume designers will be willing to code a model of their interfaces into our ACT-R-esque language, could we allow that model to be fairly transparent to the user, who could use a gui to input their goals to find an analogue in the program which would subsequently rearrange its interface to fit the user’s needs? Even if not useful to the users, such dynamic modeling could really help designers (IMO)
- I think the model should do its best to accept models written for ACT-R and whatever other cognitive models there are out there – gives us the best chance of early adoption
- I would particularly appreciate input on the number/complexity/type of inputs we’ll be using, as well as the same qualities for the output.
Evaluation method:
- Evaluate an interface via the proposed framework, as well as traditional CPM-GOMS and ACT-R methods. Also have a small team of interface design experts evaluate the interface. Solicit comments on the current interface and suggestions for improvement from each team/method, and compare the accuracy and validity of the results.
HCI Guidelines
- Owner: E J Kalafarski
Shneiderman's Eight Golden Rules and Jakob Nielsen's Ten Heuristics are perhaps the most famous and well-regarded heuristic design guidelines to emerge over the last twenty years. Although the explicit theoretical basis for such heuristics is controversial and not well-explored, the empirical success of these guidelines is established and accepted. This module will parse out up to three or four common (that is, intersecting) principles from these accepted guidelines and apply them to the input interface.
As an example, we identify an analogous principle that appears in Shneiderman ("Reduce short-term memory load")[7] and Nielsen ("Recognition rather than recall/Minimize the user's memory load")[8]. The input interface is then evaluated for its consideration of the principle, based on an explicit formal description of the interface, such as XAML or XUL. The module attempts to determine how effectively the interface demonstrates the principle. When analyzing an interface for several principles that may be conflicting or opposing in a given context, the module makes use of a hard-coded but iterative (and evolving) weighting of these principles, based on (1) how often they appear in the training set of accepted sets of guidelines, (2) how analogous a heuristic principle is to a cognitive principle in a parallel training set, and (3) how effective the principle's associated suggestion is found to be using a feedback mechanism.
Inputs
- A formal description of the interface and its elements (e.g. buttons).
- A formal description of a particular task and the possible paths through a subset of interface elements that permit the user to accomplish that task.
Output
Standard XML-formatted file containing problem areas of the input interface, suggestions for each problem area based on principles that were found to have a strong application to a problem element and the problem itself, and a human-readable generated analysis of the element's affinity for the principle. Quantitative outputs will not be possible based on heuristic guidelines, and the "utility" section of this module's output is likely to be blank.
This section could include an example or two of established design guidelines that could easily be implemented as modules.
Fitts's Law
- Owner: Jon Ericson
Aims
- Provide an estimate of the time required to complete various tasks that have been decomposed into formalized sequences of interactions with interface elements, and provide evaluations and recommendations for optimizing the time required to complete those tasks using the interface.
- Integrate this module into our novel cognitive framework for interface evaluation.
Contributions
- [Not sure here. Is this really novel?] [I like it. I think it's necessary. I think we demonstrated in class that this has not been formalized and standardized already. E J Kalafarski 15:22, 29 April 2009 (UTC)]
Demonstrations
- We will demonstrate the feasibility of this module by generating a formal description of an interface for scientific visualization and a formal description of a task to perform with the interface; we will then correlate the estimated time to complete the task based on Fitts's Law with the actual time required based on several user traces. [Check. E J Kalafarski 15:22, 29 April 2009 (UTC)]
Dependencies
- Requires a formal description of the interface with graph nodes representing clickable interface elements and graph edges representing the physical (on-screen) distance between adjacent nodes. [Is this graph format you're suggesting part of the proposal, or is the "language" itself a dependency from literature? E J Kalafarski 15:22, 29 April 2009 (UTC)]
Module Description
- Inputs
- A formal description of the interface and its elements (e.g. buttons).
- A formal description of a particular task and the possible paths through a subset of interface elements that permit the user to accomplish that task.
- The physical distances between interface elements along those paths.
- The width of those elements along the most likely axes of motion.
- Device (e.g. mouse) characteristics including start/stop time and the inherent speed limitations of the device.
- Output
- The module will then use the Shannon formulation of Fitts's Law to compute the average time needed to complete the task along those paths.
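For reference, the Shannon formulation is MT = a + b * log2(D/W + 1). A minimal sketch of applying it along a path of elements follows; the device constants a and b and the element geometry are placeholder values, not calibrated device characteristics.

# Minimal sketch of the Shannon formulation of Fitts's Law applied along a
# path of interface elements; constants and geometry are placeholders.
import math

def movement_time_ms(distance_px, width_px, a=50.0, b=150.0):
    """MT = a + b * log2(D/W + 1), with a, b from device calibration."""
    return a + b * math.log2(distance_px / width_px + 1)

def path_time_ms(path):
    """path: list of (distance_to_next_element_px, target_width_px) pairs."""
    return sum(movement_time_ms(d, w) for d, w in path)

# e.g. a two-step task: move 400px to a 40px-wide field, then 200px to a 60px-wide button
print(path_time_ms([(400, 40), (200, 60)]))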
Affordances
- Owner: Jon Ericson
Aims
- To provide interface evaluations and recommendations based on a measure of the extent to which the user perceives the relevant affordances of the interface when performing specified tasks.
Contributions
- A quantitative measure of the extent to which an interface suggests to the user the actions that it is capable of performing.
- A quantitative, indirect measure of the extent to which an interface facilitates (or hinders) the use of fast perceptual mechanisms.
[Again, I'm a fan. I don't think this has been formalized already. E J Kalafarski 15:24, 29 April 2009 (UTC)]
Demonstrations
- We will demonstrate the feasibility of this module through the following experiment:
- Specify a task for a user to perform with scientific visualization software.
- There should be several different ways to complete the task (paths through the space of possible interface actions).
- Some of these paths will be more direct than others.
- We will then measure the number of task-relevant affordances that were perceived and acted upon by analyzing the user trace, and the time required to complete the task.
- Use the formula: (affordances perceived) / [(relevant affordances present) * (time to complete task)].
- Correlate the resulting scores with verbal reports on naturalness and ease-of-use for the interface.
Dependencies
- Requires a method for capturing user traces that include formalized records of interface elements used for a particular, pre-specified task or set of tasks.
- Providing suggestions/recommendations will require interaction with other modules that analyze the perceptual salience of interface elements.
[Some kind of formalized list or arbitrary classification of affordances might be necessary to limit the scope of this contribution. E J Kalafarski 15:24, 29 April 2009 (UTC)]
Description of the Module
- Module Inputs
- Formalized descriptions of...
- (1) Interface elements
- (2) Their associated actions
- (3) The functions of those actions
- (4) A particular task
- (5) User traces for that task
- Processing
- Inputs (1-4) are then used to generate a "user-independent" space of possible functions that the interface is capable of performing with respect to a given task -- what the interface "affords" the user. From this set of possible interactions, our model will then determine the subset of optimal paths for performing a particular task. The user trace (5) is then used to determine what functions actually were performed in the course of a given task of interest and this information is then compared to the optimal path data to determine the extent to which affordances of the interface are present but not perceived.
- Output
- The output of this module is a simple ratio of (affordances perceived) / [(relevant affordances present) * (time to complete task)] which provides a quantitative measure of the extent to which the interface is "natural" to use for a particular task.
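A minimal sketch of that ratio; the example counts and task time are invented.

# Minimal sketch of the affordance score defined above.
def affordance_score(perceived, relevant_present, time_to_complete_s):
    return perceived / (relevant_present * time_to_complete_s)

# e.g. 3 of 5 relevant affordances acted upon in a 40-second task
print(affordance_score(perceived=3, relevant_present=5, time_to_complete_s=40.0))  # 0.015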
Workflow, Multi-tasking and Interruptions
- Owner: Andrew Bragdon
[David: I think that the work proposed here is interesting, but I'm not sure that it is organized by "contribution". Try labeling the contributions and the demonstrations explicitly. A "tool" is usually not a contribution -- an evaluation of the tool could be. If this distinction isn't clear, let's talk. For these particular contributions, phrasing and integrating them is a challenge because some of the contributions are demonstrations of other contributions...]
- Scientific Study of Multi-tasking Workflow and the Impact of Interruptions
- We will undertake detailed studies to help understand the following questions:
- How does the size of a user's working set impact interruption resumption time?
- How does the size of a user's working set, when used for rapid multi-tasking, impact performance metrics?
- How does a user interface which supports multiple simultaneous working sets benefit interruption resumption time?
- No Dependencies
- Meta-work Assistance Tool
- We will perform a series of ecologically-valid studies to compare user performance between a state of the art task management system (control group) and our meta-work assistance tool (experimental group)
- Dependent on core study completion, as some of the specific design decisions will be driven by the results of this study. However, it is worth pointing out that this separate contribution can be researched in parallel to the core study.
- Baseline Comparison Between Module-based Model of HCI and Core Multi-tasking Study
- We will compare the results of the above-mentioned study in multi-tasking against results predicted by the module-based model of HCI in this proposal; this will give us an important baseline comparison, particularly given that multi-tasking and interruption involve higher brain functioning and are therefore likely difficult to predict
- Dependent on core study completion, as well as most of the rest of the proposal being completed to the point of being testable
Text for Assignment 12:
Add text here about how this can be used to evaluate automatic framework
There are, at least, two levels at which users work (Gonzales, et al., 2004). Users accomplish individual low-level tasks which are part of larger working spheres; for example, an office worker might send several emails, create several Post-It (TM) note reminders, and then edit a word document, each of these smaller tasks being part of a single larger working sphere of "adding a new section to the website." Thus, it is important to understand this larger workflow context - which often involves extensive levels of multi-tasking, as well as switching between a variety of computing devices and traditional tools, such as notebooks. In this study it was found that the information workers surveyed typically switch individual tasks every 2 minutes and have many simultaneous working spheres which they switch between, on average every 12 minutes. This frenzied pace of switching tasks and switching working spheres suggests that users will not be using a single application or device for a long period of time, and that affordances to support this characteristic pattern of information work are important.
The purpose of this module is to integrate existing work on multi-tasking, interruption and higher-level workflow into a framework which can predict user recovery times from interruptions. Specifically, the goals of this framework will be to:
- Understand the role of the larger workflow context in user interfaces
- Understand the impact of interruptions on user workflow
- Understand how to design software which fits into the larger working spheres in which information work takes place
It is important to point out that because workflow and multi-tasking rely heavily on higher-level brain functioning, it is unrealistic within the scope of this grant to propose a system which can predict user performance given a description of a set of arbitrary software programs. Therefore, we believe this module will function much more in a qualitative role to provide context to the rest of the model. Specifically, our findings related to interruption and multi-tasking will advance the basic research question of "how do users react to interruptions when using working sets of varying sizes?". This core HCI contribution will help to inform the rest of the outputs of the model in a qualitative manner.
Inputs
N/A
Outputs
N/A
Working Memory Load
- Owner: Gideon Goldin
This module measures how much information the user needs to retain in memory while interacting with the interface and makes suggestions for improvements.
Inputs
- Visual Stimuli
- Audio Stimuli
Outputs
- Remembered percepts
- Half-Life of percepts
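The "half-life of percepts" output suggests an exponential retention model; a minimal sketch under that assumption (the half-life value is a placeholder, not an empirical claim):

# Hypothetical sketch: probability that a percept is still retained after a
# delay, assuming exponential decay with a given half-life.
import math

def retention(delay_s, half_life_s=10.0):
    return math.exp(-math.log(2) * delay_s / half_life_s)

print(retention(10.0))   # 0.5 by definition of half-life
print(retention(30.0))   # 0.125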
Automaticity of Interaction
- Owner: Gideon Goldin
Measures how easily the interaction with the interface becomes automatic with experience and makes suggestions for improvements.
Inputs
- Interface
- User goals
Outputs
- Learning curve
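One common formalization of a learning curve is the power law of practice, T(n) = a * n^(-b); a minimal sketch under that assumption (the constants are placeholders that would be fit from logged task times):

# Hypothetical sketch of a power-law-of-practice learning curve,
# T(n) = a * n**(-b); a and b are placeholders to be fit from data.
def predicted_time_s(trial_n, a=20.0, b=0.4):
    return a * trial_n ** (-b)

for n in (1, 5, 25):
    print(n, round(predicted_time_s(n), 2))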
Anti-Pattern Conflict Resolution
- Owner: Ian Spector
Interface design patterns are defined as reusable elements which provide solutions to common problems. For instance, we expect that an arrow in the top left corner of a window which points to the left, when clicked upon, will take the user to a previous screen. Furthermore, we expect that buttons in the top left corner of a window will in some way relate to navigation.
An anti-pattern is a design pattern which breaks standard design convention, creating more problems than it solves. An example of an anti-pattern would be a web browser that replaced the 'back' button with a 'view history' button.
The purpose of this module would be to analyze an interface to see if any anti-patterns exist, identify where they are in the interface, and then suggest alternatives.
Inputs
- Formal interface description
- Tasks which can be performed within the interface
- A library of standard design patterns
- Outputs from the 'Affordances' module
- Uncommon / Custom additional pattern library (optional)
Outputs
- Identification of interface elements whose placement or function is contrary to the pattern library
- Recommendations for alternative functionality or placement of such elements.
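A minimal sketch of the matching step, with an invented pattern-library entry mirroring the 'view history' example above; the library format and interface description are placeholders, not a proposed schema.

# Hypothetical sketch: flag interface elements whose placement or function
# contradicts an expected pattern from the library.
pattern_library = {
    # element role -> expected properties
    "back": {"region": "top-left", "action": "navigate-previous"},
}

interface = [
    {"id": "history-btn", "role": "back", "region": "top-left", "action": "show-history"},
]

def find_anti_patterns(elements, library):
    findings = []
    for el in elements:
        expected = library.get(el["role"])
        if expected:
            for prop, value in expected.items():
                if el.get(prop) != value:
                    findings.append((el["id"], prop, el.get(prop), value))
    return findings

# -> [('history-btn', 'action', 'show-history', 'navigate-previous')]
print(find_anti_patterns(interface, pattern_library))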
Integration into the Design Process
- Owner: Ian Spector
This section outlines the process of designing an interface and at what stages our proposal fits in and how.
The interface design process is critical to the creation of a quality end product. The process of creating an interface can also be used as a model for analyzing a finished one.
There are a number of different philosophies on how best to design software and, in turn, interfaces. Currently, agile development using an incremental process such as Scrum has become a well-known and generally practiced procedure.
The steps to create interfaces vary significantly from text to text, although the Common Front Group at Cornell has succinctly reduced this variety to six simple steps:
- Requirement Sketching
- Conceptual Design
- Logical Design
- Physical Design
- Construction
- Usability Testing
This can be broken down further into just information architecture design followed by physical design and testing.
In the context of interface design, the goal of our proposal is to improve the later end of the middle two portions: logical and physical design. Prior to feeding an interface to the system we are proposing, designers should have already created a baseline model for review that should exhibit the majority, if not all, of the functionality listed in the interface requirements. Once this initial interface has been created, our system will aid in rapidly iterating through the physical design process. The ultimate end products are then subject to human usability testing.
- ↑ Graphical Histories for Visualization: Supporting Analysis, Communication, and Evaluation (InfoVis 2008)
- ↑ Callahan-2006-MED
- ↑ Callahan-2006-VVM
- ↑ Bavoil-2005-VEI
- ↑ Querying and Creating Visualizations by Analogy
- ↑ VisComplete: Automating Suggestions from Visualization Pipelines
- ↑ http://faculty.washington.edu/jtenenbg/courses/360/f04/sessions/schneidermanGoldenRules.html
- ↑ http://www.useit.com/papers/heuristic/heuristic_list.html
- ↑ http://en.wikipedia.org/wiki/Baddeley%27s_model_of_working_memory
- ↑ http://en.wikipedia.org/wiki/The_Magical_Number_Seven,_Plus_or_Minus_Two
- ↑ Cowan, N. (2001). The magical number 4 in short-term memory: A reconsideration of mental storage capacity. Behavioral and Brain Sciences, 24, 87-185.
- ↑ http://en.wikipedia.org/wiki/Chunking_(psychology)
- ↑ http://en.wikipedia.org/wiki/Priming_(psychology)
- ↑ http://en.wikipedia.org/wiki/Subitizing
- ↑ http://en.wikipedia.org/wiki/Learning#Mathematical_models_of_learning
- ↑ http://74.125.95.132/search?q=cache:IZ-Zccsu3SEJ:psych.wisc.edu/ugstudies/psych733/logan_1988.pdf+logan+isntance+teory&cd=1&hl=en&ct=clnk&gl=us&client=firefox-a
- ↑ http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6VH9-4SM7PFK-4&_user=10&_rdoc=1&_fmt=&_orig=search&_sort=d&view=c&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=10cd279fa80958981fcc3c06684c09af
- ↑ http://en.wikipedia.org/wiki/Fluency_heuristic