CS295J/Proposal intros from class 9
Meta-work and Task-Interruption Support Interfaces
Owner: Andrew Bragdon
Our proposal centers on two closely related yet distinct phases with corresponding goals. In the first phase, we will develop a qualitative theory for predicting user performance with and without automatic meta-work tools for saving and resuming context. In the second phase, once this theory has been developed and tested, we will design and evaluate tools that support meta-work and task interruption, based on the model developed in the first phase.
Traditionally, software design and usability testing are focused on low-level task performance. However, prior work (Gonzales et al.) provides strong empirical evidence that users also work at a higher, working-sphere level. Su et al. developed a predictive model of task switching based on communication chains. Our model will specifically identify and predict key aspects of higher-level information work behaviors, such as task switching. To design computing systems that are truly organized around the way users work, we must first understand how users work. To do this, we must establish a predictive model of user workflow that encompasses multiple levels of workflow: individual task items, larger goal-oriented working spheres, multi-tasking behavior, and communication chains. Current information work systems are almost always designed around the lowest level of workflow, the individual task, and do not take into account the larger workflow context. Fundamentally, a predictive model would allow us to design computing systems that significantly increase worker productivity in the United States and around the world by fitting these systems to the way people work.
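To make these levels concrete, the sketch below shows one way observations at each level (low-level task events, working spheres, and switch events triggered by communication chains) might be logged for later model fitting. This is a hypothetical illustration in Python; the class names, fields, and the simple switch-rate statistic are our own assumptions, not a committed data schema or the final model.

```python
# Illustrative sketch (not a committed schema): recording information-work
# events at several levels so a predictive workflow model can be fit later.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TaskEvent:
    timestamp: float
    application: str          # e.g. "email", "ide", "browser"
    action: str               # low-level action, e.g. "open", "edit", "send"

@dataclass
class WorkingSphere:
    goal: str                 # higher-level goal, e.g. "prepare quarterly report"
    events: List[TaskEvent] = field(default_factory=list)

@dataclass
class SwitchEvent:
    timestamp: float
    from_sphere: str
    to_sphere: str
    trigger: Optional[str] = None   # e.g. "email arrival" in a communication chain

def switch_rate(switches: List[SwitchEvent], hours_observed: float) -> float:
    """A simple descriptive statistic: working-sphere switches per hour."""
    return len(switches) / hours_observed
```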
Once the model has been developed, we will spend the second phase of the project developing systems that support users in multi-tasking, task interruption, and meta-work. We will focus on two domains: window and task management, and software development. Entities in the real world can be manipulated with a common set of primitives: one can write on any page or flat surface with the appropriate writing implement to make notes, apply sticky notes to any flat surface, and arrange things in space on a desk as reminders and for easy access later.
In the world of software, however, users are limited by the features of each application: one cannot apply a sticky note to, or write on, an arbitrary document, web page, or application, and then pile these items up in a persistent way. At the same time, software has the potential to create much richer document organizations and groupings, or working sets, which can be saved and recalled across an interruption.
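As an illustration of what a saved and recalled working set might look like, the hypothetical Python sketch below groups open artifacts and virtual sticky notes into a named working set that can be serialized at an interruption and restored afterward. The class and field names are assumptions made for this example, not features of any existing system.

```python
# Minimal sketch (hypothetical design): a "working set" that groups open
# artifacts and annotations so it can be saved at an interruption and
# restored when the user resumes the task.
import json
from dataclasses import dataclass, field, asdict
from typing import List

@dataclass
class Artifact:
    kind: str                 # e.g. "document", "web_page", "app_window"
    location: str             # file path, URL, or window identifier
    scroll_pos: float = 0.0   # where the user left off

@dataclass
class StickyNote:
    text: str
    attached_to: str          # location of the artifact the note is attached to

@dataclass
class WorkingSet:
    name: str
    artifacts: List[Artifact] = field(default_factory=list)
    notes: List[StickyNote] = field(default_factory=list)

    def save(self, path: str) -> None:
        """Serialize the working set when an interruption occurs."""
        with open(path, "w") as f:
            json.dump(asdict(self), f, indent=2)

    @classmethod
    def load(cls, path: str) -> "WorkingSet":
        """Restore a previously saved working set after the interruption."""
        with open(path) as f:
            raw = json.load(f)
        return cls(
            name=raw["name"],
            artifacts=[Artifact(**a) for a in raw["artifacts"]],
            notes=[StickyNote(**n) for n in raw["notes"]],
        )

# Example: snapshot a working sphere before answering an urgent email.
ws = WorkingSet("paper-review")
ws.artifacts.append(Artifact("document", "/papers/draft.pdf", scroll_pos=0.4))
ws.notes.append(StickyNote("check related work section", "/papers/draft.pdf"))
ws.save("paper-review.json")
restored = WorkingSet.load("paper-review.json")
```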
Risk will be an important factor in this research, and a core goal of our research agenda will be to manage it. The most effective way to do so is to compartmentalize risk by conducting the empirical investigations that will form the basis for the model in parallel, across separate areas: low-level tasks, working spheres, communication chains, interruptions, and multi-tasking. If one experiment becomes bogged down in details, the others will still be able to advance sufficiently to contribute to a strong core model, even if one or two facets encounter setbacks over the course of the research agenda. The primary cost drivers will be the preliminary empirical evaluations, the final system implementation, and the final experiments designed to test the original hypothesis. The cost will span student support, both Ph.D. and Master's students, as well as full-time research staff. Projected cost: $1.5 million over three years.
Gap Conclusion
Chosen Gap: Metawork Support Tool proposal; will examine integrating this into the main proposal vs. making a separate proposal.
Analysis

For a medium-sized grant, I think only one core goal, or a closely related set of goals, is feasible. However, for a large grant funding a center, it might be reasonable to propose two separate goals. Therefore, I would recommend merging the meta-work proposal with "Collaborative 2" if our goal is to produce a center grant; if the goal is a medium-sized grant, then we should branch or pick one.
Collaborative 2
Existing guidelines for designing human-computer interfaces are based on experience, intuition, and introspection. Because there is no common theoretical foundation, many sets of guidelines have emerged and there is no way to compare or unify them. We propose to develop a theoretical foundation for interface design by drawing on recent advances in cognitive science, the study of how people think, perceive and interact with the world. We will distill a broad range of principles and computational models of cognition that are relevant to interface design and use them to compare and unify existing guidelines. Where possible, we will use computational models to enable richer automatic interface assessment than is currently available.
A large part of our project will be to broaden the range of cognitive theories that are used in HCI design. Only a few low-level theories of perception and action, such as Fitts's law, have garnered general acceptance in the HCI community, because they are simple, make quantitative predictions, and apply without modification to a broad range of tasks and interfaces. Our aim is to produce similar predictive models that apply to higher levels of cognition, including higher-level vision, learning, memory, attention, reasoning, and task management.
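To illustrate the kind of simple, quantitative prediction we have in mind, the short sketch below computes the movement time predicted by the Shannon formulation of Fitts's law, MT = a + b log2(D/W + 1). The coefficients a and b shown here are placeholder values that would in practice be fit from pointing data.

```python
import math

def fitts_movement_time(distance: float, width: float,
                        a: float = 0.1, b: float = 0.15) -> float:
    """Predicted pointing time (seconds) under the Shannon formulation of
    Fitts's law: MT = a + b * log2(D/W + 1).

    a and b are device- and user-dependent regression coefficients; the
    defaults here are illustrative placeholders, not fitted values.
    """
    index_of_difficulty = math.log2(distance / width + 1)
    return a + b * index_of_difficulty

# Example: a small, distant target is predicted to take longer to acquire
# than a large, nearby one.
print(fitts_movement_time(distance=800, width=20))   # harder target
print(fitts_movement_time(distance=100, width=80))   # easier target
```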
Much of our work will focus on how cognitive principles can enable interface design to go beyond the functionality of the individual application. A great deal of research has accumulated on how people manage multiple tasks, and we will translate it into principles for designing an interface not only with its own purpose in mind, but also so that it helps maintain focus in a multi-tasking environment and minimizes the cost of switching to other tasks or applications in the same working sphere. The newer approach of distributed cognition (DC) also provides a different perspective by examining the human-computer system as a unified cognitive entity. We will extract and test principles from this literature on how to ensure that the human part of the system is responsible only for those parts of the task for which it is more capable than the computer.
Alternative Collaborative (In progress...)
Established guidelines for designing human-computer interfaces are based on experience, intuition, and introspection. Because there is no integrated theoretical foundation, many rule-sets have emerged despite the absence of comparative evaluations. We propose to develop a theoretical foundation for interface design, drawing on recent advances in cognitive science -- the study of how people think, perceive and interact with the world. We will distill a broad range of principles and computational models of cognition that are relevant to interface design and use them to compare and unify existing guidelines. To validate our theoretical foundation, we will use our findings to develop a quantitative mechanism for assessing interface designs, identifying interface elements that are detrimental to user performance, and suggesting effective alternatives. Results from this system will be explored over a set of case studies, and the quantitative assessments output by this system will be compared to actual user performance.
A central focus of our work will be to broaden the range of cognitive theories that are used in HCI design. Only a few low-level theories of perception and action, such as Fitts's law, have garnered general acceptance in the HCI community, owing to their simple, quantitative nature and widespread applicability. Our aim is to produce similar predictive models that apply to lower levels of perception as well as higher levels of cognition, including higher-level vision, learning, memory, attention, reasoning, and task management.
We will focus on generating extensible, generalizable models of cognition that can be applied to a broad range of interface design challenges. Much research has accumulated regarding how people manage multiple tasks, and we will apply it to principles of how an interface should be designed such that it both helps maintain focus in a multi-tasking environment and minimizes the cost of switching to other tasks or applications in the same working sphere. The newer approach of distributed cognition also provides a useful perspective by examining the human-computer system as a unified cognitive entity.
Collaborators so far
- Adam
- Trevor
- Eric
Gideon & Jon
A Theoretical Framework for Human-Computer Interaction
We propose a specific framework for the study of Human-Computer Interaction (HCI) from a cognitive science perspective. The architecture of our framework is designed to support the approach taken in Distributed Cognition (DC). In order to concretize our framework, model, and components, we will review and incorporate 1000 studies from the field of cognitive science. Doing so will allow our project to dichotomize HCI into an applied side and a theoretical side, the latter of which has been mostly ignored in the history of the field. Rather than studying HCI as an applied subfield of computer science, or as a subfield of applied psychology, we show that the proposed theoretical study of HCI is a valid scientific program and would benefit from a systematic research agenda. There have been some attempts to study HCI from a DC perspective, but these studies have left the details and concrete facts too open, discouraging follow-up research. Our framework will provide the first concrete basis from which to approach this type of research.
Components in our framework are based on an exhaustive understanding of the differences in processing capabilities between humans and computers. An extensive search of the literature dispels the "mind as a computer" metaphor. Now that a sufficient body of knowledge has shown that humans and computers most often process information in very different ways, we can view theoretical HCI as the study of how to distribute tasks between human and machine.
This project will require five years and $2.5 million.
Eric
We propose a framework for interface evaluation and recommendation that integrates behavioral models and design guidelines from both cognitive science and HCI. Our framework behaves like a committee of specialized experts, where each expert provides its own assessment of the interface, given its particular knowledge of HCI or cognitive science. For example, an expert may provide an evaluation based on the GOMS method, Fitts's law, Maeda's design principles, or cognitive models of learning and memory. An aggregator collects all of these assessments and weights the opinion of each expert based on its past accuracy, and outputs to the developer a merged evaluation score and a weighted set of recommendations.
Different users have different abilities and interface preferences. For example, a user at NASA probably cares more about interface accuracy than speed. By passing this information to our committee of experts, we can create interfaces that are tuned to maximize the utility of a particular user type.
We will evaluate our framework through a series of user studies. Interfaces passed to our committee of experts receive evaluation scores on a number of different dimensions, such as time, accuracy, and ease of use for novices versus experts. We can compare these predicted scores to the actual scores observed in user studies to evaluate the framework's predictive performance. We can also retroactively weight the experts' opinions to determine which weighting would have given the best predictions of user behavior for the given interface, and observe whether that weighting generalizes to other interface evaluations.
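A minimal sketch of how the aggregation and retroactive weighting steps might look is given below, assuming each expert reduces to a function that maps an interface description to a numeric score. The inverse-error weight update and the least-squares re-weighting are illustrative choices for this sketch, not a commitment to a particular aggregation algorithm, and the toy experts are placeholders.

```python
# Hypothetical sketch: merge expert scores with accuracy-based weights and
# retroactively find the weighting that best matches observed user behavior.
from typing import Callable, Dict
import numpy as np

Expert = Callable[[dict], float]   # interface description -> predicted score

def aggregate(experts: Dict[str, Expert], weights: Dict[str, float],
              interface: dict) -> float:
    """Weighted average of the experts' scores for one interface."""
    total = sum(weights.values())
    return sum(weights[n] * fn(interface) for n, fn in experts.items()) / total

def update_weights(weights: Dict[str, float], predictions: Dict[str, float],
                   observed: float, lr: float = 0.5) -> Dict[str, float]:
    """Down-weight experts whose last prediction missed the observed score."""
    return {n: float(w * np.exp(-lr * abs(predictions[n] - observed)))
            for n, w in weights.items()}

def retroactive_weights(pred_matrix: np.ndarray, observed: np.ndarray) -> np.ndarray:
    """Least-squares weights that would have best predicted the user-study
    scores; rows of pred_matrix are interfaces, columns are experts."""
    w, *_ = np.linalg.lstsq(pred_matrix, observed, rcond=None)
    return w

# Toy usage with two placeholder experts scoring a hypothetical interface.
experts = {"fitts": lambda ui: 1.0 / ui["avg_target_width"],
           "goms":  lambda ui: 0.1 * ui["steps_per_task"]}
weights = {"fitts": 1.0, "goms": 1.0}
ui = {"avg_target_width": 40, "steps_per_task": 7}
merged = aggregate(experts, weights, ui)
preds = {n: fn(ui) for n, fn in experts.items()}
weights = update_weights(weights, preds, observed=0.5)

# Retroactive re-weighting over two hypothetical interfaces.
pred_matrix = np.array([[0.025, 0.7], [0.05, 0.3]])
observed_scores = np.array([0.5, 0.2])
best_w = retroactive_weights(pred_matrix, observed_scores)
```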
EJ
While attempts have been made in the past to apply cognitive theory to the task of developing human-computer interfaces, there remains much work to be done. No standard, widely adopted model of cognitive interaction with a computer exists. The roles of perception and cognition, though studied extensively on their own, are often at odds with empirically successful design guidelines in practice. Methods of study and evaluation, such as eye-tracking and workflow analysis, are still governed primarily by the needs at the end of the development process, with no quantitative model capable of improving efficiency and consistency in the field.
Much of our work will focus on how cognitive principles can enable interface design to go beyond the functionality of the individual application. We propose to develop a theoretical foundation for interface design, drawing on recent advances in cognitive science, the study of how people think, perceive and interact with the world. We demonstrate in wide-ranging preliminary work that cognitive theory has a tangible and valuable role in all stages of interface design and evaluation: models of distributed cognition can exert useful influence on the design of interfaces and the guidelines that govern it; algorithmic workflow analysis can lead to new interaction methods, including predictive options; a model of human perception can greatly enhance the usefulness of multimodal user study techniques; and a better understanding of why classical strategies work will bring us closer to the "holy grail" of automated interface evaluation and recommendation.

Much research has accumulated regarding how people manage multiple tasks, and we will apply it to principles for designing an interface not only with its own purpose in mind, but also so that it helps maintain focus in a multi-tasking environment and minimizes the cost of switching to other tasks or applications in the same working sphere. The newer approach of distributed cognition also provides a different perspective by examining the human-computer system as a unified cognitive entity.
We will extract and test principles from this literature on how to ensure that the human part of the system is responsible only for those parts of the task for which it is more capable than the computer. We will distill a broad range of principles and computational models of cognition that are relevant to interface design and use them to compare and unify existing guidelines. Where possible, we will use computational models to enable richer automatic interface assessment than is currently available.
A Behavioral/Cognitive Model of Human-Computer Interaction
Several attempts have been made to establish a unified theory of human behavior in interacting with machinery. As digital interfaces have come to replace their mechanistic counterparts, the user interface has become the standard unit of interactive design. Existing theories have thus been applied by many to these new design challenges, with mixed results. Such examinations have often sought to provide parsimonious design recommendations, sometimes at the cost of theoretical rigor. We propose to develop a theoretical foundation for interface design, drawing upon both the aforementioned theories and relevant, established theories of human behavior and cognition. We will test, modify, and empirically justify this foundation and subsequently flesh it out into a practical model of mental comprehension and progression in human-computer interaction.
The central focus of our work will be the piecemeal justification of relevant psychological and cognitive findings, as without such empirical rigor our model would be doomed to antiquation. We will harmonize higher-level processes with their lower-level stimuli and outputs, thereby allowing designs to progress not solely at a high or low level but as a unified whole. We will relate these findings to current rules and guidelines of interface design, thereby providing a critical review of current design thought and justifying our model's predictions through their relation to designs widely recognized as successful.
Once this model has been developed and thoroughly justified, we will examine future avenues of research in applying cognitive and psychological science to interface design. These paths will exist not only as suggestions but as theoretical predictions and, time allowing, small-scale studies. The field of human-computer interfaces is itself experiencing a great deal of innovation, and our project will keep pace with avant-garde design concepts as they appear.
(Steven)
Revised: Behavioral/Cognitive Guidelines and Tools for Human-Computer Interface Design
Several attempts have been made to establish a unified theory of human behavior in interacting with machinery. As digital interfaces have come to replace their mechanistic counterparts, the user interface has become the standard unit of interactive design. Existing theories have thus been applied by many to these new design challenges, with mixed results. Such examinations have often sought to provide parsimonious design recommendations, sometimes at the cost of theoretical rigor. We propose to develop a set of evaluation tools and guidelines for interface design, drawing upon both the aforementioned theories and relevant, established, empirically validated components of human behavior and cognition. We will test, modify, and empirically justify these tools by applying them to existing interfaces widely recognized to be well-designed.
The central focus of our work will be the aggressive application of relevant psychological and cognitive findings to the design milieu. We will harmonize higher-level processes with their lower-level stimuli and outputs, thereby allowing designs to progress not solely at a high or low level but as a unified whole. We will relate these findings to current rules and guidelines of interface design, thereby providing a critical review of current design thought and justifying our model's predictions through their relation to designs widely recognized as successful.
After developing our set of guidelines, we will tackle the task of transforming our semantic concepts into algorithmic evaluation tools. We will draw upon recent advances in embodied user models to best approximate realistic user perception and interaction, building upon earlier successes in automated user evaluation tools. The tools will be built in a modular fashion to accommodate future developments in the field of behavioral and cognitive HCI research, allowing our project to keep pace with cutting-edge scientific advances as they appear.
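One hypothetical way to achieve that modularity is sketched below: each behavioral or cognitive guideline is wrapped as a plug-in evaluator behind a common interface, so that new modules can be registered as the science advances. The interface, the example module, and its threshold are assumptions made for illustration, not validated guidelines or an existing tool.

```python
# Hypothetical sketch of a modular evaluation pipeline: each behavioral or
# cognitive guideline is a plug-in implementing a common interface.
from abc import ABC, abstractmethod
from typing import Dict, List

class EvaluationModule(ABC):
    name: str = "unnamed"

    @abstractmethod
    def evaluate(self, interface: dict) -> Dict[str, str]:
        """Return issues found in the interface description."""

class TargetSizeModule(EvaluationModule):
    name = "target-size"

    def evaluate(self, interface: dict) -> Dict[str, str]:
        issues = {}
        for widget, width in interface.get("widget_widths", {}).items():
            if width < 24:   # illustrative threshold, not an established guideline
                issues[widget] = f"target width {width}px may be hard to acquire"
        return issues

class EvaluationPipeline:
    def __init__(self) -> None:
        self.modules: List[EvaluationModule] = []

    def register(self, module: EvaluationModule) -> None:
        """New modules can be added as new cognitive findings become available."""
        self.modules.append(module)

    def run(self, interface: dict) -> Dict[str, Dict[str, str]]:
        return {m.name: m.evaluate(interface) for m in self.modules}

# Example usage on a toy interface description.
pipeline = EvaluationPipeline()
pipeline.register(TargetSizeModule())
report = pipeline.run({"widget_widths": {"save_button": 18, "menu": 120}})
```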