CS295J/Proposal intros from class 9
Metawork and Task-Interruption Support Interfaces (in progress...)
Owner: Andrew Bragdon
Our proposal centers on two closely related yet distinct phases with corresponding goals. In the first phase, we will develop a qualitative theory for predicting user performance with and without automatic meta-work tools for saving and resuming context. In the second phase, once this theory has been developed and tested, we will design and evaluate tools that support meta-work and task interruption, based on the model developed in the first phase.
Traditionally, software design and usability testing have focused on low-level task performance. However, prior work (González et al.) provides strong empirical evidence that users also work at a higher, working-sphere level. Su et al. developed a predictive model of task switching based on communication chains. Our model will specifically identify and predict key aspects of higher-level information-work behaviors, such as task switching. To design computing systems around the way users work, we must first understand how users work. To do this, we must establish a predictive model of user workflow that encompasses multiple levels: individual task items, larger goal-oriented working spheres, multi-tasking behavior, and communication chains. Current information-work systems are almost always designed around the lowest level of workflow, the individual task, and do not take into account the larger workflow context. Fundamentally, a predictive model would allow us to design computing systems that significantly increase worker productivity, in the United States and around the world, by designing these systems around the way people work.
Finally, once the model has been developed, we will spend the second phase of the project developing systems that support users in multi-tasking, task interruption, and meta-work. We will focus on two domains: window and task management, and software development.
Risk will be an important factor in this research, and thus a core goal of our research agenda will be to manage it. The most effective way to do this is to compartmentalize the risk by conducting, in parallel, the empirical investigations that will form the basis for the model, one in each of the separate areas: low-level tasks, working spheres, communication chains, and interruptions and multi-tasking. While one experiment may become bogged down in details, the others will be able to advance enough to contribute to a strong core model, even if one or two facets encounter setbacks during the course of the research agenda. The primary cost drivers will be the preliminary empirical evaluations, the final system implementation, and the final experiments designed to test the original hypothesis. The cost will span student support, for both Ph.D. and Master's students, as well as full-time research staff. Projected cost: $1.5 million over three years.
Gap Conclusion
For a medium-sized grant, I think only one core goal, or one closely related set of goals, is feasible. However, a large grant funding a center might reasonably propose two separate goals. Therefore, I would recommend merging the metawork proposal with
Collaborative 2
Existing guidelines for designing human-computer interfaces are based on experience, intuition, and introspection. Because there is no common theoretical foundation, many sets of guidelines have emerged, and there is no way to compare or unify them. We propose to develop a theoretical foundation for interface design by drawing on recent advances in cognitive science, the study of how people think, perceive, and interact with the world. We will distill a broad range of principles and computational models of cognition that are relevant to interface design and use them to compare and unify existing guidelines. Where possible, we will use computational models to enable richer automatic interface assessment than is currently available.
A large part of our project will be to broaden the range of cognitive theories used in HCI design. Only a few low-level theories of perception and action, such as Fitts's law, have garnered general acceptance in the HCI community, because they are simple, make quantitative predictions, and apply without modification to a broad range of tasks and interfaces. Our aim is to produce similar predictive models that apply to higher levels of cognition, including higher-level vision, learning, memory, attention, and task management.
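As a concrete illustration of the kind of simple, quantitative model the paragraph above refers to, the following sketch computes a Fitts's-law prediction using the common Shannon formulation, MT = a + b * log2(D/W + 1). The constants a and b are device- and user-specific and are normally fit by regression; the values below are illustrative placeholders, not empirical results.

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Predicted pointing time in seconds under Fitts's law
    (Shannon formulation): MT = a + b * log2(D/W + 1).

    distance: distance to the target center, width: target width,
    in the same units. a and b are placeholder regression constants."""
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty

# The model's qualitative prediction: at a fixed distance, a wider
# target is faster to acquire.
t_small = fitts_movement_time(distance=400, width=20)
t_large = fitts_movement_time(distance=400, width=40)
assert t_large < t_small
```

The appeal for HCI design is exactly what the paragraph notes: once a and b are fit for a given input device, the same two-parameter formula predicts pointing time for any target size and distance, with no per-interface modification.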
Much of our work will focus on how cognitive principles can enable interface design to go beyond the functionality of the individual application. A large body of research has accumulated on how people manage multiple tasks; we will apply it to derive principles for designing an interface not only with its own purpose in mind, but so that it helps maintain focus in a multi-tasking environment and minimizes the cost of switching to other tasks or applications in the same working sphere. The newer approach of distributed cognition also provides a different perspective by examining the human-computer system as a unified cognitive entity. We will extract and test principles from this literature on how to ensure that the human part of the system is responsible only for those parts of the task for which it is more capable than the computer.
Alternative Collaborative (In progress...)
Established guidelines for designing human-computer interfaces are based on experience, intuition, and introspection. Because there is no integrated theoretical foundation, many rule-sets have emerged despite the absence of comparative evaluations. We propose to develop a theoretical foundation for interface design, drawing on recent advances in cognitive science -- the study of how people think, perceive, and interact with the world. We will distill a broad range of principles and computational models of cognition that are relevant to interface design and use them to compare and unify existing guidelines. To validate this foundation, we will use our findings to develop a quantitative mechanism for assessing interface designs, identifying interface elements that are detrimental to user performance, and suggesting effective alternatives. Results from this system will be explored through a set of case studies.
A central focus of our work will be to broaden the range of cognitive theories used in HCI design. Only a few low-level theories of perception and action, such as Fitts's law, have garnered general acceptance in the HCI community, due to their simplicity, quantitative nature, and widespread applicability. Our aim is to produce similar predictive models that apply to lower levels of perception as well as higher levels of cognition, including higher-level vision, learning, memory, attention, and task management.
We will focus on generating extensible, generalizable models of cognition that can be applied to a broad range of interface design challenges. A large body of research has accumulated on how people manage multiple tasks; we will apply it to derive principles for designing an interface not only with its own purpose in mind, but so that it helps maintain focus in a multi-tasking environment and minimizes the cost of switching to other tasks or applications in the same working sphere. The newer approach of distributed cognition also provides a different perspective by examining the human-computer system as a unified cognitive entity.
Collaborators so far
- Adam
- Trevor
Gideon
- Creating a model of HCI based on distributed cognition
- Our model (and its many levels)
- Consideration of cognitive theory
- Contributions
- Framework
- Quantitative Models
- End-Product
EJ (In progress…)
While attempts have been made in the past to apply cognitive theory to the development of human-computer interfaces, much work remains to be done. No standard, widespread model of cognitive interaction with a computer exists. The roles of perception and cognition, while examined and studied independently, are often at odds with empirically successful design guidelines in practice. Methods of study and evaluation, such as eye-tracking and workflow analysis, are still applied primarily at the end of the development process, with no quantitative model capable of improving efficiency and consistency in the field.
We demonstrate in wide-ranging preliminary work that cognitive theory has a tangible and valuable role in all stages of interface design and evaluation: models of distributed cognition can usefully influence the design of interfaces and the guidelines that govern it; algorithmic workflow analysis can lead to new interaction methods, including predictive options; a model of human perception can greatly enhance the usefulness of multimodal user study techniques; and a better understanding of why classical strategies work will bring us closer to the "holy grail" of automated interface evaluation and recommendation. We can carry the field further down many of these only partially explored avenues in the years ahead.
Established guidelines for designing human-computer interfaces are based on experience, intuition, and introspection. Because there is no integrated theoretical foundation, many rulesets have emerged with no intuitive way to compare or unify them. We propose to develop a theoretical foundation for interface design, drawing on recent advances in cognitive science—the study of how people think, perceive, and interact with the world. We will distill a broad range of principles and computational models of cognition that are relevant to interface design and use them to compare and unify existing guidelines. Where possible, we will use computational models to enable richer automatic interface assessment than is currently available.
A large part of our project will be to broaden the range of cognitive theories used in HCI design. Only a few low-level theories of perception and action, such as Fitts's law, have garnered general acceptance in the HCI community, because they are simple, make quantitative predictions, and apply without modification to a broad range of tasks and interfaces. Our aim is to produce similar predictive models that apply to these lower levels of perception as well as higher levels of cognition, including higher-level vision, learning, memory, attention, and task management.
Much of our work will focus on how cognitive principles can enable interface design to go beyond the functionality of the individual application to the broader context in which it is used. A large body of research has accumulated on how people manage multiple tasks; we will apply it to derive principles for designing an interface not only with its own purpose in mind, but so that it helps maintain focus in a multi-tasking environment and minimizes the cost of switching to other tasks or applications in the same working sphere. The newer approach of distributed cognition also provides a different perspective by examining the human-computer system as a unified cognitive entity. We will extract and test principles from this literature on how to ensure that the human part of the system is responsible only for those parts of the task for which it is more capable than the computer.